#ubuntu-server 2006-01-02
<FlannelKing> what's special about ubuntu that makes its default apache install not work with mod_dav_svn and mod_rewrite?
<FlannelKing> what's special about ubuntu that makes mod_rewrite not work in .conf files for modules?
<Pygi> hoho etcp :)
<gremid> hi, are there any release notes related to the current ubuntu server release?
<Pygi> gremid: what are you interested in?
<gremid> i want to know the differences between the server edition and the normal breezy release, aka. the current feature set of the server release.
<Pygi> heh, differences
<Pygi> tweaked kernel, and a few other tweaks
<Pygi> that's mostly it :)
<gremid> Pygi, thanks.
<neuralis> gremid: actually, that's wrong. the breezy server kernel is not tweaked in any way. the only difference is that the server edition performs a minimal package install.
<Pygi> well, breezy server kernel is not tweaked but....
<Pygi> dapper one will be...
<neuralis> Pygi: please read the question. he asked about breezy.
<Pygi> neuralis: yes, read the question....
<gremid> neuralis, thx.
<neuralis> gremid: sure.
<segfault> any study on dazuko for 6.04?
<neuralis> segfault: no, we haven't planned on it, but i'll take a look at it.
<Travis> is there a list somewhere that has the features of Ubuntu Server?
<Pygi> hoho chara once again :P
<MarioMeyer> heya :P
<Pygi> what's up?
<MarioMeyer> writing some papers.. u?
<Pygi> writing core for some application
#ubuntu-server 2006-01-03
<FlannelKing> Alright guys, why doesn't my config setup (http://pastebin.ca/35053) allow the rewrites at the bottom of that page?
<fabbione> FlannelKing: in none of your configs you load mod_rewrite.
<FlannelKing> yes, I do, its mods-enabled, standard ubuntu shindig
<FlannelKing> so, it gets loaded when mods-enabled/*.load gets loaded
<FlannelKing> I wish it was that simple
<fabbione> check errors.log?
<fabbione> works here
<fabbione> so the module does work
<fabbione> it has something to do with what you are trying to do
<fabbione> and imho you could just redirect instead of rewrite for what you are doing
<fabbione> also.. why are you using PT?
<fabbione> did you read what PT is for?
<FlannelKing> Pass Through, yeah, but that's not the important bit, since removing it still doesn't work
<FlannelKing> I've tried numerous permutations and locations over the past week, can't get anything to work.
<FlannelKing> and I can't figure out why, and it appears no one really can.
<fabbione> what module do you load first?
<fabbione> mod_rewrite or mod_dav?
<fabbione> and try swapping them
<FlannelKing> um, they load in order, I think, so dav before rewrite
<FlannelKing> I assume they load in alphabetical order?
<fabbione> the load order is important
<fabbione> yes they should
<FlannelKing> I figured it wasn't, because all the loads get loaded before the config files for any
<fabbione> try with something like: 01_mod_rewrite
<fabbione> and 02_mod_dav
<FlannelKing> yeah,
<fabbione> and swap them
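fabbione's renaming trick works because Ubuntu's apache2.conf pulls the modules in with a wildcard Include (`Include /etc/apache2/mods-enabled/*.load`), and the wildcard expands in sorted order. A minimal sketch of the ordering effect, using a scratch directory instead of the real mods-enabled/:

```shell
# a glob expands in sorted (lexicographic) order, which is what decides
# module load order under mods-enabled/; demo in a scratch directory
demo=$(mktemp -d)
cd "$demo"
touch dav.load rewrite.load
echo *.load   # dav.load rewrite.load  -> dav would load first
mv rewrite.load 01_rewrite.load
mv dav.load 02_dav.load
echo *.load   # 01_rewrite.load 02_dav.load -> rewrite now loads first
```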
<FlannelKing> let me try
<fabbione> i need to go for a shower
<fabbione> bbl
<FlannelKing> nope, still doesn't work
<fabbione> works here fine
<fabbione> i did add the same svn lines to my site
<fabbione> www.fabbione.net/svn/
<fabbione> it redirects to /shower/
<fabbione>         RewriteEngine on
<fabbione>         RewriteRule ^/svn$              /shower    [PT] 
<fabbione>         RewriteRule ^/svn/$             /shower    [PT] 
<fabbione>         RewriteRule ^/svn/index.html$   /shower    [PT] 
<fabbione> this is inside the virtualhost directive
<fabbione> so it's either where you stick the config bits
<fabbione> or interaction with mod_dav
<fabbione> for sure mod_rewrite works
<FlannelKing> is that default ubuntu configs?
<FlannelKing> with the SymLinks or whatnot
<FlannelKing> erm, do you have moddav?
<FlannelKing> yeah, well, it doesn't even work with other rewrites
<FlannelKing> if I try and rewrite ^/$
<FlannelKing> and it still doesnt work
<fabbione> it's ubuntu yes
<fabbione> default config
<fabbione> other than the path to the website
<fabbione> no i don't have mod_dav
<fabbione> try one test case at a time
<fabbione> start from the default
<fabbione> and add rewrite
<fabbione> once rewrite works
<fabbione> add mod_dav
<fabbione> and see what happens
<FlannelKing> yeah, just means I have to start over from scratch. bother.
<fabbione> no just move the config file somewhere for later usage
<FlannelKing> oh, I know.
<FlannelKing> just means I have to remember what I did, and undo it.
<FlannelKing> to get back to default
<fabbione> reason why you never edit the default, but create a copy
<FlannelKing> but, thats a great idea, no idea why I didnt think of it sooner
<FlannelKing> you think removing my ssl site has anything to do with it?
<FlannelKing> erm, adding it
<FlannelKing> nevermind
<fabbione> start with the default
<fabbione> really
<fabbione> add the rewrite
<fabbione> check it works
<fabbione> add mod_dav
<fabbione> once you get the basic done, the rest will work almost automagically
<FlannelKing> yeah
<fabbione> adding extra crap on top won't help and for sure you won't find anybody to help you later on
<FlannelKing> so, rewrite firsT? and where would I put the rewrite stuff? which file?
<fabbione> i did add it to my enable-sites/default (or whatever you have there)
<FlannelKing> alright
<fabbione> inside the <virtualhost> directive
<FlannelKing> outside of everything?
<FlannelKing> alright
<FlannelKing> so, back to default, now try rewrite first you say?
<fabbione> yes
<fabbione> create 2 empty dirs
<fabbione> in www area
<fabbione> foo and bar
<fabbione> check that you can rewrite from foo to bar
<fabbione> or something like that
<fabbione> nothing too fancy really
<fabbione> that will ensure your rewrite rules are working
<fabbione> and check all 3 of them
<fabbione> not just one
<FlannelKing> alright, modrewrite works
<FlannelKing> now, add the subversion/webdav stuff?
<FlannelKing> alright, that's when it breaks
<FlannelKing> so, is it because the dav_svn has a Location /svn in it? and it's usurping the url before it gets a chance? or what?
<fabbione> probably
<fabbione> i don't have mod_dav, so it might be that Location and rewrite rules are in conflict
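A sketch of the suspected conflict, assuming a layout like FlannelKing's pastebin (the paths and repository location here are illustrative, not from the log): mod_dav_svn takes ownership of the /svn prefix through a <Location> block, so a rewrite of the same prefix declared elsewhere may never see the request.

```apache
# illustrative: mod_dav_svn claims the whole /svn prefix
<Location /svn>
    DAV svn
    SVNPath /var/lib/svn/repo
    # one commonly suggested workaround is to re-enable the rewrite
    # engine inside the <Location> itself, so rules can apply here too
    RewriteEngine on
</Location>
```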
<FlannelKing> well, you can take a look at that conf file at my pastebin,
<FlannelKing> umm...
<FlannelKing> http://pastebin.ca/35053
<fabbione> i already did.. to test, but i am not going to add mod_dav for testing on my production server
<FlannelKing> oh, right, and I dont expect you to
<FlannelKing> it just makes me wonder why no one else has this problem.
<FlannelKing> I guess I'll have to ask for their conf, or something.
<fabbione> did you try to google?
<fabbione> or to check the docs on httpd.apache.org?
<fabbione> there is a deep tutorial on mod_rewrite
<fabbione> perhaps it's mentioned how to do it properly
<FlannelKing> yeah, Ive been learning a lot, but none of it helps
<fabbione> did you try #apache ?
<FlannelKing> daily
<FlannelKing> and #svn
<FlannelKing> thanks a lot fabbione, you've been much more of a help than countless others, even if it was just knocking some sense into me to stop me acting ignorant
<fabbione> no problem
<fabbione> time to start fixing the tivo :)
<fabbione> later
<FlannelKing> good luck
<fabbione> shawarma: ping?
<fabbione> unping
<shawarma> fabbione: pong?
<shawarma> fabbione: er... unpong..
<Pygi> ho, cara
<Pygi> chara*
<MarioMeyer_> heya
<hazmat> does ubuntu have any scripts to manage the /etc/rc.* services
<shawarma> update-rc.d
<hazmat> shawarma, thanks
<shawarma> No problem.
<hazmat> also found rcconf from a google search
<shawarma> Hmm.. Never heard of it.
<hazmat> apparently just a front end to update-rc.d
<shawarma> hazmat: Right.
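shawarma's answer in one line; typical `update-rc.d` usage looks roughly like this (the service name is illustrative):

```
$ sudo update-rc.d apache2 defaults    # create start/stop symlinks in /etc/rc?.d/
$ sudo update-rc.d -f apache2 remove   # delete them again (-f: force, init script still installed)
```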
<fabbione> hey shawarma
<fabbione> shawarma: you got mails :)
<shawarma> fabbione: Hey there.
<Pygi> fabbione: ho, now your mail box :)
<shawarma> fabbione: Yeah, I saw them. Imported, and uploaded.
<fabbione> Pygi: ?
<shawarma> fabbione: Thanks.
<fabbione> shawarma: no problem
<Pygi> fabbione: ah, nothing, I was referring to the fact that you became a mail box :P
<Pygi> fabbione shawarma: you got mails :)
* fabbione nods
<Pygi> hah, yes, I'll stop talkin' now :P
<hazmat> the ubuntu server mailing list in the topic, isn't publicly advertised in the mailman index of either lists.ubuntu.com or lists.canonical.com... the only way to find out how to subscribe is to know mailman urls or subscription emails.
<fabbione> hazmat: yes it's a known issue
<Pygi> yup, we know...
<fabbione> just pick a list and change the url to ubuntu-server
<fabbione> it will work just fine
#ubuntu-server 2006-01-04
<FlannelKing> Anyone know why (in apache2) a <location> (that activates web_dav) would overrule a mod_rewrite of that directory?
<mxpxpod> has anyone heard of a problem with breezy having expat conflicts with libapache2-mod-python and libapach2-mod-php?
<mxpxpod> because with mod_php and mod_python loaded, I get random segfaults in my python apps
#ubuntu-server 2006-01-05
<tepsipakki> not built yet, i believe
<tepsipakki> duh
<Pygi> huh, wrong channel :)
<tepsipakki> only one window away.. ;)
<Pygi> hehe :)
<tepsipakki> what's on the agenda for the server team?
<MarioMeyer> 1st of all, to celebrate the new year.. :P
<tepsipakki> I'm about to start testing NFSv4 with krb5 hopefully next week. it needs a few packages that are still missing from debian/ubuntu, and a few patched ones.. I'm willing to help on making those
<tepsipakki> MarioMeyer: right on!
<tepsipakki> the missing packages are libgssapi and librpcsecgss. there are packages for sid made by upstream, so it should be easy to do for dapper
<tepsipakki> but the tricky part is getting support for nfs-common and util-linux... ;)
<mxpxpod> has anyone experienced, with server 5.10, crashes when using mod_python and mod_php at the same time?
<FlannelKing> Anyone know how to get mod_rewrite to work with a web_dav Location block in apache2?
<FlannelKing> Anyone know how to get around the problem of having (apache2) a Location block and not being able to mod_rewrite in that location?
<FlannelKing> Anyone know what else needs to be done to get rewrite logs to work?
<FlannelKing> I currently have: RewriteLog "/var/log/apache2/rewrite.log"
<FlannelKing> RewriteLogLevel 9
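FlannelKing's two directives are the right ones for Apache 2.x's mod_rewrite logging; the usual extra requirements are that they live in the server or virtual-host config (RewriteLog is not honoured in .htaccess) and that the rewrite engine is actually on. A sketch, assuming the stock Ubuntu layout:

```apache
# in the <VirtualHost> or server-level config, not in .htaccess
RewriteEngine on
RewriteLog /var/log/apache2/rewrite.log
RewriteLogLevel 9   # very chatty; 2-3 is usually enough for debugging
```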
<spike> hi there
<segfault> hi
<MarioMeyer> :P
#ubuntu-server 2006-01-06
<Pygi> Happy New year to everyone :)
<Valandil> happy new year :)
<spike> to you dude
<Valandil> :)
* Pygi is away: http://fama.sf.net
* Pygi is back (gone 00:00:08)
<theCore> how do I make mod_python work in apache2 ?
<mxpxpod> theCore: are you using mod_php with mod_python on ubuntu server?
<theCore> yes
<mxpxpod> I've found problems doing that
<theCore> should i remove mod_php ?
<mxpxpod> yes
<mxpxpod> if you don't have to use php
<theCore> i don't want to use php anymore, I want to use psp instead
<theCore> (Python Server Page)
<mxpxpod> right
<theCore> spaghetti code isn't for me
<mxpxpod> hehe
<mxpxpod> I've gotta rewrite some of my code from php to python
<theCore> do I need to make some custom config to make it work, after installing it?
<mxpxpod> not sure... I'm new to using python and apache myself
<theCore> does it work for you ?
<mxpxpod> no, because the server I have set up still needs to run php, so my python apps don't work :(
<mxpxpod> so I have to convert it all over and then disable php
<theCore> so, are you able to make work without php ?
<mxpxpod> yeah, but all my configs are at work
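For the record, mod_python only executes .py files once a handler is mapped to them, which is the classic reason a browser offers a .py file for download instead. A minimal sketch using the publisher handler (the directory path is illustrative; the module ships in the libapache2-mod-python package):

```apache
# with libapache2-mod-python installed and the module enabled
<Directory /var/www/py>
    AddHandler mod_python .py
    PythonHandler mod_python.publisher   # maps URLs to functions in the .py file
    PythonDebug On                       # send tracebacks to the browser
</Directory>
```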
<theCore> nope, it still  doesn't work
<theCore> the module loads correctly, but when I go to a .py file it just offers it for download
<theCore> oh!
<theCore> i made it work in lynx
<theCore> so, the problem comes from firefox, not the server
<theCore> hmm, so how can I fix that ....
<theCore> it works with mod_php, too
<theCore> yea! it works in firefox
<theCore> that's because firefox doesn't handle plain text files
<theCore> but when you feed it correct html, it works like a charm
<Valandil> hi Pygi :) and a happy new year
<Pygi> hi Valandil :) Happy New Year, best wishes :)
<Valandil> thx :)
<Valandil> <--- is working on my tax return :(
<Pygi> <--- is workin' on a secret app :)
<Valandil> :))
#ubuntu-server 2006-01-07
<TTT_Travis> Hi, I installed ubuntu server, but all that installed was the base. how do I install all of the programs that come with it?
<TTT_Travis> nevermind
<justin82> hey all
<fabbione> morning guys
<Pygi> ho Valandil
<Valandil> hi Pygi :)
<Pygi> ho Valandil
<Valandil> :)
<Pygi> now, wanna support me becoming a member by voting for me at that council or whatever? :)
<Valandil> hum? could You explain please?
* Valandil is a little slow today
<Pygi> huh, nothin' important :P
<Valandil> no, I mean, I don't understand what You wish... You mean, member of ubuntu?
<Pygi> yesh :P
<Valandil> and how can I support this?
<Valandil> (sorry, I did not have the time yet, to analyze ubuntu-internal procedures. I'm just working on distributing ubuntu and collecting proposals for ubuntu-server)
<Pygi> huh, doesn't matter as I said :P
<Valandil> btw: my first productive ubuntu-server is online since yesterday
<Pygi> hehe, congrats
<Valandil> and if it seems to be stable, the others will follow
<Valandil> by now I have big problems with MaxDB
<Valandil> thx :)
<Pygi> huh, what's wrong with MaxDB?
<Valandil> if it doesn't work on ubuntu, I have to see :(
<fabbione> it's in universe....
<Valandil> it won't start
<fabbione> i doubt somebody even looked at it
<Valandil> wahttpd crashes with segfault
<Valandil> hmmm
<Valandil> I use it productive since 2003 on debian woody
<Pygi> :/
<Valandil> I built adapted packages for it, but in debian no one was interested :( So I was my own maintainer ;)
<Pygi> :/
<Pygi> huh, offer it to the MOTU's?
<Valandil> Pygi: since it is in universe - does it have no maintainer? or where can I find the contact for the maintainer? I looked in the database, but there is nothing written about a maintainer
<Pygi> well, then you can take it over :)
<Valandil> yes, this was my intention :)
<Pygi> if you are a MOTU :)
<Pygi> let's go to MOTU, and I'll back you up :P
<Pygi> #ubuntu-motu
<Valandil> one moment, telephone...
<Pygi> huh, k, tell me once ur back
<Valandil> Pygi: I'm back
<Pygi> k
<Valandil> sorry, was my main customer - and the 'new year briefing' ;)
<Pygi> so come to motu if u want :p
<Pygi> np
<Valandil> I'd like to, but I'm not sure if I'm good enough...
<Valandil> I'm just a server-fanatic student ;)
<Valandil> but I'd do my very best...
<Pygi> :)
<Valandil> went to #ubuntu-motu
<Pygi> k, now say it :)
<Valandil> say what? 'I want to take over a package'? (or samething like that)?
<Valandil> *getting red*
<Pygi> yesh :P
<Valandil> ;)
<Pygi> they are silent :/
<Valandil> then I will wait :)
<Valandil> My work will take several hours, so no Prob ;)
<Pygi> hehe :)
<spike> hi there
<Pygi> hi spike
<Valandil> hi spike :)
<fabbione> hey guys
<Pygi> hey fabbione
<fabbione> who feels to contribute 5 minutes of time?
<spike> lo fabbione
<Valandil> hi fabbione
* spike raises his hand
* Valandil too
<fabbione> perfect
<fabbione> https://wiki.ubuntu.com/ServerCandy <-
<fabbione> Third party software inclusion <-
<Pygi> huh, so you don't need me :P
<fabbione> we need to create the list mentioned there
<fabbione> who wants to be in charge of it?
<fabbione> and start mailing also ubuntu-server
<spike> expected deadline?
<spike> besides the canned "asap" :)
<Pygi> hm, 10 years? :)
<fabbione> i would say to have a list to start from in a week?
<fabbione> max 2 weeks
<fabbione> spike: i am not asking to do the full list
<fabbione> but start it
<fabbione> put it in the wiki
<fabbione> post the page on the mailing list
<fabbione> and let the others add their piece of software
<fabbione> it's virtually impossible to know all of them by one person
<fabbione> Valandil, spike: can i count on you for this work?
<fabbione> i mean..
<fabbione> to start the list of tools
<fabbione> i don't expect you to run and check everything
<Valandil> just a sec, still reading (my english, You remember? ;) )
<spike> fabbione: yeah, already have a few of 'em noted down over time from debian-isp
<spike> fabbione: I'll go and setup myself an account on the wiki, havent got one so far
<fabbione> spike: ok cool
<fabbione> spike: best to do is to create a wiki page, link it from the main servercandy page
<fabbione> and post the url to the mailing list
<spike> fabbione: but I'm definitely in the 2 weeks time slot, not one sorry, have some more travelling scheduled in the next week and stuff to do since I just moved. but I'll definitely do it, I planned to start contributing anyway with the new year
<spike> fabbione: sure, I'll go that way
<fabbione> spike: perfect
<Pygi> fabbione: how is this going along?
<Pygi> https://wiki.ubuntu.com/NetworkWideUpdates
<fabbione> spike:  i suggest to open the page asap, even with one entry only.. and post it
<fabbione> so people will have time to start adding stuff to it
<fabbione> spike: the rest will come on its own (i hope)
<fabbione> Pygi: no idea. you need to ask to who is in charge (see launchpad entry)
<spike> fabbione: I'll take some time to add stuff about automation too, guess I have got something to say about it, and I'd definitely love to see more of that in ubuntu
<spike> which is actually linked to that page Pygi just mentioned
<fabbione> automation?
<fabbione> i only need "Third party software inclusion" for now
<fabbione> from https://wiki.ubuntu.com/ServerCandy
<Pygi> fabbione: that page in launchpad seems to be erased :/
<fabbione> Pygi: iirc it's mvo that is doing that spec
<spike> fabbione: well, ok, saw the page Pygi posted and thought of mentioning it. imho, they're wasting time on that. there are already *good* tools, production quality,that do that
<spike> u don't really need to spend time redesigning and implementing a new one
<fabbione> spike: you want to tell that in the page :)
<fabbione> really
<fabbione> i don't follow each single subproject in ubuntu
<fabbione> i would need 96 hours a day to do so
<spike> ok, I have no idea about how things really work in the ubuntu team, sorry
<fabbione> spike: eheh no problem :)
<spike> started collecting a few info about motu a week ago or so
<Pygi> fabbione:heh, I shall candidate myself for a member on next meeting :P
<fabbione> Pygi: you should first produce high quality contribution.. a mail to -server isn't enough.. really
<Pygi> hehe :)
<fabbione> you need to do a bit more, get known by the community
<fabbione> and so on
<Pygi> huh, I am "known" :)
<fabbione> it's a suggestion to avoid you the pain to go there a few times for nothing
<Pygi> ah, ok, nevermind
<Pygi> k, thanks :P
<Valandil> fabbione: I'll try to. by now I'm not yet sure what exactly You want... but I think I have an idea of what You want ;)
<fabbione> Valandil: i want a list of proprietary tools that it would be nice to have on CD
<fabbione> or extremely easy to install
<fabbione> like for example: tool from company FOO to manage their special RAID controller
<fabbione> but we need a list of these tools
<Valandil> OK, got it
<Valandil> OK, I'll do my part. But I don't use much proprietary stuff, so I'm not really the best person for this ;)
<Pygi> fabbione: please kick me out of server team, thanks
<fabbione> Pygi: why?
<Pygi> fabbione: 'cause I am, and I obviously won't do nothin', so no point in continuing this....
<fabbione> Pygi: dude.. stop and think one minute, ok?
<fabbione> to be an ubuntu member with upload privileges
<fabbione> you need at least to have some experience with packaging
<Pygi> fabbione: huh, yes, I do understand
<fabbione> and i can tell you that people in the different teams that do check on other people
<fabbione> will look at that
<Pygi> fabbione: no problem with that :P
<fabbione> hence my suggestion to do something more
<fabbione> that does not mean you need to vaporize yourself with a garbage disintegrator
<fabbione> it's a suggestion to start to contribute more
<fabbione> and become a member
<fabbione> just do it in the right steps
<fabbione> so that you will SCORE a success on the first attempt
<Pygi> huh, I tried contributing at start with InstantServer, and nothin' happened out of that because none actually wanted to work :/
<Pygi> it actually doesn't matter to me if I am member or not...
<fabbione> yes i remember the Instant server thing
<fabbione> but i don't think it's good to just give up like this
<fabbione> try and see
<fabbione> it's not like i have a fetish need to kick people out of the team
<fabbione> it doesn't give me any personal satisfaction.. you know ;)
<Pygi> huh, yesh, I do understand you don't wanna kick no one, but I was accepted into team just for that instant thingy :/
<Pygi> And as I ain't doing on it anymore, ....
<fabbione> Pygi: try to do something else :)
<fabbione> there are so many things to do
<fabbione> like the list that spike and Valandil will start to work on
<fabbione> it's not a 2 person thing
<fabbione> and it's something you could as well do
<fabbione> browsing sites to find them might even be fun
<Pygi> heh
<fabbione> i have issues going to look at some websites.. i get this irresistible need to buy everything :P
<Pygi> hehe :)
<Valandil> *lol*
<fabbione> and i am sure my wife is not ready yet for a SUN e25k in the house
<fabbione> neither is my bank account, but i can still trade my wife ;)
<Pygi> fabbione: I believe you understand why and what I am doin' .... You are also a developer, and you wouldn't like to do something where development is not involved :P
<Pygi> and lol :)
<fabbione> Pygi: if you want to develop for real.. want to start to look at "Provide a RCS /etc out of the box"
<fabbione> and see if we can get somewhere using shell scripts?
<fabbione> perhaps something modular that we can plug in different RCS?
<fabbione> that will require some code writing
<Pygi> huh, as I said fabbione, please kick me out of team...
<Pygi> thats better for everyone...
<fabbione> ok remind me in a couple of days when i will spend time on launchpad
<Pygi> k
<fabbione> so perhaps you have time to think again about it
<Valandil> Pygi: perhaps You could also find fun in 'baby-sitting' ;-) I could use someone who takes me by the hand... I'd like to work, but I don't know how :(
<Pygi> huh :P
<Valandil> no, really - I'm an admin, but I'm definitely not experienced in developing things for the public
<Pygi> Valandil: well, you see, I am developing all the time something :P
<Valandil> Pygi: ;-) so You'd be a perfect mentor ;)
<Valandil> Pygi: No, to be serious: it was just an idea
<Pygi> heh, what would you wanna know? coding? in what language?
<Valandil> all languages ;)
<spike> included the universal language of love
<Valandil> I love shell-scripts, perl and c, but in c i'm just starting
<Valandil> spike: *lol*
<Pygi> well, in those I could assist you in C, and shell scripts
<Valandil> but I think my main problem is, how to work for ubuntu / public/...
<Pygi> can help you in python as well
<Pygi> Valandil: go fix the keyboard change error
<Valandil> python is very interesting, but never had the time
* Pygi would like to do that, but doesn't know where to get the code and where to send patches :)
<Valandil> I don't even know anything about this error
<Valandil> what do You mean?
<Pygi> well, changing keyboard layout in that gnome utility doesn't work :P
<Pygi> thats the bug :P
<Valandil> *uurgh* gnome :( ... OK, I will take a look on this...
<Pygi> thats easy to fix, and you will learn :)
<Valandil> OK :) I'll do my very best
<Valandil> but why did You say You can't get the code?
<Valandil> isn't it in the sources-package?
<Pygi> I don't know where to get ubuntu code :P
<spike> this is #ubuntu-server, isn't it? why on earth u'd want to spend time on fixing a gnome related issue?
<Pygi> ok, excuse me spike :P
<Pygi> now I'll go not to bother anyone again
* spike shrugs
<Valandil> ooops
<spike> what?
<Valandil> hmmm feel a little uncomfortable right now...
<spike> Valandil: u shouldnt at all
<Valandil> spike: could be... feel like a little boy ;)
<spike> Valandil: if he cant act differently than a 13teen kid living in a cruel world that doesnt accept him as a 1337 member of motu it's not ur fault
<Valandil> could be ;)
<Valandil> but more or less I feel anyway like a little boy who opened the door to the great wide world ;)
<Valandil> but I have planned to grow up :))
<spike> Valandil: the best advice I can give u then is get a plan and stick to that. jumping among things like u were gonna do with this gnome keyboard thing is just a waste of time for everybody
<spike> Valandil: what do u wanna do?
<spike> and dont tell me "programming" :)
<Valandil> spike: I'd like to do some effective work. This has two reasons: I work productive with linux for several years, and I'd like to give something back to community. And second:
<Valandil> I need some things for my servers, where I put work into. Why should I waste my work only for me? I could give it to others.
<Valandil> for this, I need experience in maintaining - and that I don't have
<Valandil> so I need a mentor or someone like that, who tells me: do this or that, and then You can use it like this, and so on...
<spike> Valandil: "effective work" is the same as "programming" :)
<Valandil> spike: not what I meant
<Valandil> I meant, some work that not only I benefit from
<spike> dude, that doesnt mean anything either. what are u talking about? packaging/coding ur own software/tools u use on ur machines?
<Valandil> if You tell me for example that You need some research on some topic to help Your project, I would say: OK, I'll take part
<Valandil> I just don't know _what_ would be useful
<Valandil> so the list for fabbione is something I can do, because he told me what he's looking for
<segfault> calm down.
<Valandil> You know, all I need is orientation. World of linux / ubuntu is far too wide ...
<Valandil> I'm as calm as I could be ;)
<Valandil> segfault: sorry, didn't want to do flaming in here...
<segfault> hehe
<segfault> just kidding
<segfault> :)
<Valandil> OK :-)
<Valandil> brb (hope so) --> reboot
<Valandil> re :)
* spike shrugs
<spike> I cant believe this telco installs routers @ offices with the web interface open by default on the nic facing the net
<spike> default user/pwd of course, but I wasnt even asking for that much...
<Valandil> brb --> reboot (I hate 2.6)
#ubuntu-server 2006-01-08
<spike> hi there
<fabbione> hi spike
#ubuntu-server 2007-01-01
<rverrips> Mornin' all - Happy new year ... I'm trying to use rescue on a software RAID-0 root (/dev/md0) - The Ubuntu 6.10 server CD starts up md fine and I can mount the drive via the console, but the "Start Rescue mode" option only shows the participants in the RAID (i.e. /dev/sda1 and /dev/sda2) rather than the raid drive, i.e. /dev/md0 ... Where can I set this up manually to continue to rescue mode?
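One hedged workaround, not from the log: the rescue environment drops you into a shell, from which the array can be assembled and mounted by hand (device names follow rverrips' description and may differ):

```
$ mdadm --assemble /dev/md0 /dev/sda1 /dev/sda2   # start the array from its member partitions
$ mount /dev/md0 /mnt                             # mount the RAID root
$ chroot /mnt                                     # then work inside the installed system
```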
#ubuntu-server 2007-01-02
<Linuturk> I've got a Dapper server up, and I'm trying to install Cacti via this guide: help.ubuntu.com/community/Cacti
<Linuturk> I've gotten all the packages installed, but when I try to browse to localhost/cacti it prompts me to download a file
<Linuturk> my guess is there is a config off in the apache config or php config file
<Linuturk> but I'm not sure how to fix it
<Linuturk> I've got a Ubuntu Lamp server setup. I've installed the Cacti bandwidth monitor, and it loads fine when I open 127.0.0.1/cacti | when I try to load it via localhost/cacti it asks me to download the actual php file. how do I fix this
<Linuturk> bah
<Linuturk> sorry :(
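A hedged guess at Linuturk's symptom: a browser is offered the raw .php file when no handler claims PHP for that request, and the 127.0.0.1-vs-localhost difference hints at a name-based virtual host answering for localhost without PHP enabled. With Ubuntu's packages the mapping normally comes from the php5 module's conf, roughly:

```apache
# shipped by libapache2-mod-php5 (enable with a2enmod php5); without
# this mapping, .php files are served as plain downloads
<IfModule mod_php5.c>
    AddType application/x-httpd-php .php .phtml .php3
</IfModule>
```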
#ubuntu-server 2007-01-03
<orangefly> can anyone help set up jabber....???....
<somerville32> Jabber server or client?
<orangefly> server....
<orangefly> sorry....i had given up on a reply....
<orangefly> i've heard conflicting opinions....is webmin useful....???....
<somerville32> I find webmin useful
<[miles] > good afternoon #ubuntu-server 
<[miles] > guys, I've installed 6.06... very happy
<[miles] > hitting one problem...
<[miles] > the box is to be a spam checking relay
<[miles] > and thus, I've installed spamassassin on it
<[miles] > however, I keep getting an error with the sa-update command
<[miles] > anyone aware of this problem?
<[miles] > http://pastebin.ca/302712
<[miles] > thats the error
<lionel> [miles] : install libwww-perl package
<[miles] > hi lionel 
<[miles] > ok thanks
<lionel> you're welcome :)
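lionel's fix as a console sketch (sa-update needs Perl's LWP, which Ubuntu ships as libwww-perl):

```
$ sudo apt-get install libwww-perl
$ sa-update   # should now be able to fetch rule updates
```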
<[miles] > lionel, nice name
<[miles] > same as my father
<[miles] > :)
<lionel> :)
<[miles] > lionel, french?
<lionel> yep
<lionel> Is my english so bad ? :)
<[miles] > no
<[miles] > just the name is french
<[miles] > lionel, I was in france to spend new year's eve
<[miles] > then back to Barcelona
<lionel> Where were you ?
<[miles] > Perpinagnan
<[miles] > cant write it
<[miles] > sorry
<lionel> Perpignan is the correct :)
<lionel> not too far from Barcelona
<[miles] > mmm
<[miles] > yeah
<[miles] > a couple of hours or more
#ubuntu-server 2007-01-04
<gaten> is mysql in LAMP compiled with --with-mysqld-ldflags = -all-static on edgy?
#ubuntu-server 2007-01-05
<[miles] > morning #ubuntu-server 
<ivoks> morning
#ubuntu-server 2007-01-07
* dura waves
* joejaxx waves
<dura> can I ask something sort of... dumb?
<dura> :|
<dura> what do I have configured wrong if my browser asks me to save a .cgi instead of opening it up?
<dura> (apache2)
* dura sighs
<dura> isn't there a mod_cgi or something?
<dura> hmm yes there is
<dura> :|
* dura was looking for a module named *cgi not *scgi :|
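dura's symptom has the same shape as the raw-PHP downloads elsewhere in this log: nothing tells Apache to execute .cgi files. A minimal mod_cgi sketch (the directory is illustrative; the scgi module dura found is unrelated, it speaks the SCGI protocol to an external server):

```apache
# with mod_cgi enabled
<Directory /var/www/cgi-bin>
    Options +ExecCGI
    AddHandler cgi-script .cgi
</Directory>
```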
<edgy> Hi, After I installed ubuntu I want to install the lamp server using synaptic or adept or apt-get, is there a virtual package for it that turns my system into an ubuntu-server?
<Rev_Tig> Hi All,  has anyone come across a machine that hangs with the phrase "Starting up ... " with a flashing cursor on the line below, this happens almost immediately after grub,  this is a clean install of 6.10 on an old p200 with 64Meg of ram,  any thoughts?  Install completed fine (albeit slowly :) ) It does not appear to be booting the kernel and I have tried reinstalling grub from the install / recovery option on the cd...  anyone got any ideas?
<Rev_Tig> it booted and loaded the "alternative CD" and that produced an almost working system apart from it had all the desktop apps on there which I did not need and filled the disk :)  I got a bit trigger happy with apt-get remove and decided to wipe the lot and just bung the server version on instead :)
<Rev_Tig> anyhoo,  apart from the machine failing to boot the install went rather well :)  Well done guys I am impressed :)  Even when it dropped into low memory load it was a clean and consistent install :)  The thing that I noticed (not worth a bug report) is that the description of the UTF console encoding goes off the screen without a method for scrolling the screen... and if you have to get that anal about something then there really is nothing to complain about :)
#ubuntu-server 2007-12-31
<nealmcb> J_5: and see also the ltsp and edubuntu approaches for easy admin of apps for lots of people
<J_5> yeah that is what i am reading about now.
<J_5> im a little confused tho. i already have a ubuntu server up and running. do i just install ltsp on that?
<nealmcb> J_5: I'm not up-to-date on ltsp - I think it is bundled and configured with edubuntu well, but can also be installed on an ubuntu/kubuntu system
<J_5> nealmcb: i found some info on installing it, im off to try it out :). thanks!
<david____> Does anyone have a few minutes to help solve a router question... I setup an ubuntu router (shorewall/bind/dhcp3-server). The box has 2 internal interfaces that are bridged. The box works great with clients directly connected. I plugged in my wireless linksys router, and web access from its clients is very very slow. I am looking for some pointers on how to fix this problem. Thx
<kraut> moin
<Vincent_k> hello, I'm having major problems setting up a pxe server on gutsy-server box
<Vincent_k> anyone got it working?
<lifesf> Fresh Ubuntu-Server 7.10 install (the perfect setup) and ispconfig; i cannot configure my ftp through ispconfig... and still do not entirely understand how to configure it through the terminal yet; is there a good alternative other than installing the Ubuntu GUI? which would slow down the server?
<lifesf> or simply find out how to add then change/edit users? : at the moment ftp access to the /var/www directory would be incredibly great
<lifesf> my hdd disappeared after giving up on trying to learn most functions in terminal only for server: i had made my hdd mount automatically; it was working fine; then after apt-get install ubuntu-desktop paff it disappeared; it's not viewable in ["my" computer]
<h00s> lifesf: if you followed the perfect setup and installed ispconfig, ftp is already set up. you access it with the user account created for the domain
<h00s> lifesf: for creating users, when you create a site (new site icon in ispconfig), click on the newly created site and select the user & email tab
<lifesf> when i go to webftp there is nothing
<lifesf> and now because i couldn't manage to get it to work properly... i installed gui and got my ftp working but now my problem is:
<lifesf> how do i get my hdd back?
<lifesf> ok; i umounted it and then did the sudo mount -a and now it is working again on my server but is still not visible in my ubuntu gui
<lifesf> also: adding a site like mentioned earlier in ispconfig doesn't work... i cannot really get past anything
<corporeal> i keep getting "locale: Cannot set LC_CTYPE to default locale: No such file or directory"
<corporeal> how can i fix this
<nealmcb> corporeal: what are you doing?
<corporeal> nealmcb: running apt-get. but i fixed it.
<nealmcb> corporeal: what did you do?  I've gotten that in the past and fixed it but forget how....
<corporeal> nealmcb: apt-get install language-base-en-pack or something like that
 * nealmcb nods
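The package corporeal half-remembers is presumably `language-pack-en-base`; regenerating the missing locale is the other common fix. A sketch, assuming an English-locale Ubuntu system (needs root, not run here):

```shell
# Install the English base language pack (the likely "language-base-en-pack"):
sudo apt-get install language-pack-en-base
# Then generate the locale the LC_CTYPE error complains about:
sudo locale-gen en_US.UTF-8
# Or walk through all configured locales interactively:
sudo dpkg-reconfigure locales
```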
#ubuntu-server 2008-01-01
<donspaulding> I'm looking to build a PXE boot server that can boot .iso's.  Is this possible or will I just be wasting my time?
<kraut> moin
<mikubuntu> guys, is webmin an integral/essential part of the lamp installation?  It failed to load from sourceforge, and now when i look at webmin homepage, it says its a unix program, but under downloads has a debian package.  ubuntu is debian, right?  should i download this one?
<mikubuntu> i was following the instructions on this page: http://www.firehazrd.com/ubuntu/how-to-setup-lamp-on-ubuntu-710 , and as far as i know everything else loaded properly ...
<Gh0sty> webmin is just an easy webinterface to manage your stuff but its certainly not essential
<Gh0sty> i'd even advise against it, since in the past it tended to break systems ... :|
<Gh0sty> and its security has never been very good, but of course you then need a bit of command line knowledge to configure your stuff
<mikubuntu> Gh0sty: what other program would fulfill the mission that you might recommend?  i just managed to download webmin with gdebi, and then i was going to install magento, which is a fairly new ecommerce app that looks pretty good to me, but i'm a newbie, so anything shiny always looks good to me :)
<Gh0sty> have a look at ispconfig (but i have the impression the site is down for the moment, must be a Y2k8 problem ...)
<mikubuntu_> hmmmmm.... i wonder if you would consider helping me download the proper file for magento install? this is the page: http://www.magentocommerce.com/download   and i'm not sure which package to select.
<Gh0sty> tar.bz2 will do
<Gh0sty> probably its only webinterface so does not really matter
<Gh0sty> only different types of compression
<mikubuntu_> so the bz2 is the one at the bottom of the dropdown list; how do i unpack it once downloaded?
<mikubuntu_> wait, there's two bz2 files; you mean the larger one that appears last on the drop down?
<h00s> mikubuntu_: tar xjf filename.tar.bz2 -C /destination-directory
<mikubuntu_> ok, well i started to download to the desktop.  can you spell out the command i have to use?  i assume i have to sudo right?
<h00s> if you will be extracting to /var/www then you must use sudo, yes. if you're extracting to Desktop, your home dir etc. then you don't have to use sudo
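A note on the tar flags in this exchange: `z` selects gzip, while a `.tar.bz2` archive needs `j`, and the destination directory is given with `-C` rather than as a bare argument. A self-contained sketch using throwaway files (names are illustrative stand-ins for the magento tarball):

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"

# Build a stand-in for the downloaded tarball.
mkdir magento
echo "demo" > magento/index.php
tar cjf magento.tar.bz2 magento   # c=create, j=bzip2, f=archive file
rm -r magento

# Extract it: x=extract, j=bzip2; -C names the destination directory.
mkdir extracted
tar xjf magento.tar.bz2 -C extracted
ls extracted/magento
```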
<mikubuntu_> hmmm, so still i don't know what to do with the file
<mikubuntu_> when i right click on the file on the desktop, it asks if i want to open with archive manager or another?
<mikubuntu_> also '[extract here]' appears on right click
<h00s> mikubuntu_: you must extract it and then move the extracted directory to /var/www. the web server files are located in /var/www. since magento is a web application, it must be located in that folder so apache/php can serve it...
<mikubuntu_> h00s: i extracted it on the desktop and now there's a folder icon there below the bz2 file that says 'magento' 10 items... is that the one you want me to move?  can you tell me how?
<h00s> yes, that's the directory you have to move. to move it, type this in terminal: sudo mv ~/Desktop/magento /var/www
<mikubuntu_> what's the BEST way to get the files where they need to go?  should i have not extracted to desktop?  should i trash that, and open the tar with anotther app?
<mikubuntu_> ok, lemme try
<mikubuntu_> should there have been any output in terminal?  do i need to check it?  didn't see any output ...
<mikubuntu_> i forgot how to cli check for file location
<h00s> if there's no output, it's ok. there shouldn't be magento directory on desktop anymore..
<mikubuntu_> ok, you
<mikubuntu_> youre right, it's gone!
<mikubuntu_> does that make me a developer now?
<h00s> :D
<mikubuntu_> so, how do i make the whole thing launchable now?  it doesn't appear in menu yet... do i just have to launch magento, or do i have to start the whole LAMP every time i work on it?
<h00s> all the way :)
<mikubuntu_> i should call the bank and see if any money is pouring in yet
<h00s> it's not an application like other applications on your desktop. it's a web app, so you have to "run" it from the browser (http://localhost/magento). yes, you must have apache+php and mysql running to be able to work on it.
<mikubuntu_> ummmm.... so how to launch all of them if they don't appear on menu?
<h00s> also, you must configure magento (connection to your mysql database etc.). since that is magento specific, i can't help with it
<h00s> look up :) "run" it from browser (http://localhost/magento)
<mikubuntu_> ya, i'm gonna start studying all their forums more, but they didn't have but five ppl in #magento ....
<mikubuntu_> h00s: so when i 'launch' http://localhost/magento will it automagically start the LAMP features it requires, i.e., apache, mysql, php, etc...?
<h00s> mikubuntu_: nope, they start automatically with the server/computer if they are installed
<mikubuntu_> guess not, i tried and it said 'magento not on this server'
<mikubuntu_> so i have to configure magento to those components, but how do i do that if i can't launch it?
<mikubuntu_> you're prolly busy, i guess i should stop torturing you with this drivel ...
<h00s> yeah, sry i respond slowly, working on something here. there are too many questions and it's quite difficult to answer them on irc. i suggest you search http://help.ubuntu.com, advanced topics, installing servers etc.
<mikubuntu_> thanks for all your help.  was just studying the magento wiki, and i guess i'll go back to #magento and see if anybody showed up there yet.
<mikubuntu_> oh, yeah, and i gotta call the bank.
<J_5> how do i tell what version of packages i have installed? Like apache, php, etc?
<nealmcb> J_5: apt-cache policy apache2
<nealmcb> Happy New Year, All!!
<J_5> nealmcb: thanks! and happy new year to you as well
<nealmcb> :-)
<J_5> if i apt-get remove a package, does that remove everything? including log files?
<nealmcb> J_5: you need to also purge it to remove configuration files.  I'm not sure about log files
<J_5> how do i purge it?
<nealmcb> J_5: see "man apt-get"
<J_5> nealmcb: ok thanks
<Ubuntu-fr790> good evening
<Buntix> anyone here? :p
#ubuntu-server 2008-01-02
<osmosis> Can anyone help me with this dpkg  apt-get install error?  http://dpaste.com/29601/
<kraut> moin
<AnRkey> I have a file called -v
<AnRkey> how do i delete it?
<AnRkey> when i use rm -v, the rm program thinks the -v is part of the command
<kraut> rm -rf \-v
<kraut> perhaps this works?
<AnRkey> kraut, rm \ -v did it, thanks
<AnRkey> i had to put a space in though between the \ and the -v
<AnRkey> sheesh, that drove me nuts
<AnRkey> 15min to delete a friggin file
<AnRkey> happy new year
<kraut> AnRkey: anyhow, mc is a good tool to delete files with crappy names
<kraut> --exclude= for example
<soren> AnRkey: If you needed a backslash there, it wasn't called "-v", but " -v"..
<soren> The trouble with a file called "-v" is that it's interpreted as an option to rm. To delete a file called "-v", you can either "rm -- -v" or "rm ./-v".
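soren's two escapes can be tried safely in a scratch directory; the `--` tells rm that option parsing is over:

```shell
cd "$(mktemp -d)"
touch -- -v                  # create a file literally named "-v"
rm -v 2>/dev/null || true    # fails: rm reads "-v" as its own verbose flag
ls                           # the file is still listed
rm -- -v                     # "--" ends option parsing; "rm ./-v" also works
ls                           # directory is now empty
```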
<AnRkey> thanks soren
<AnRkey> kraut, mc is a nice tool
<AnRkey> i love mcedit
<kraut> i hate mcedit ;)
<AnRkey> i love the find/replace in mcedit
<AnRkey> for when i am lazy
<CrummyGummy> Hi all-> I have on occasion found that I've restarted heartbeat and new heartbeat processes have been spawned without killing the old ones. This generally happens when things are going wrong and has caused a whole lot of problems; I have also found the same problem in my custom scripts. Has anyone here seen something similar?
<CrummyGummy> K, I've been looking at the heartbeat script and its got nothing to do with the other scripts that I'm having trouble with.
<sergevn> where is the Samba smbpasswd database stored?
<avatar_> sergevn: afaik somewhere under /var/lib
<sergevn> How is it possible to see the currently active users in that database?
<zul> sergevn: pdbedit
<sergevn> root@manny:/var/lib/samba# pdbedit passdb.tdb -u serge
<sergevn> serge:1000:Serge van Namen,,,
<sergevn> So there is no password set?
<soren> sergevn: You can't say based on that.
<soren> sergevn: Try pdbedit -L -v
<soren> Er...
<soren> No.
<soren> I meant -L -w
<sergevn> Hmm the usernames are there, I have a clean ( i think ) samba configuration, but still those users get permission denied when they try to access the folder
<soren> Do they have access, you think?
<sergevn> There always needs to be a Unix user for each samba user, correct?
<soren> sergevn: The correct answer: no. The answer you're probably looking for: yes.
<soren> sergevn: There are various ways to work around the need, so in fact it's not needed, but you really should either add Unix users to match the Samba ones or teach your Unix system to use Samba as a user db backend.
<pteague_work> hmm... how do i change my server's hostname?
<nealmcb> pteague that is a surprisingly complicated question :-)
<nealmcb> it could mean dns, hostname, /etc/hosts, apache config, etc
<pteague_work> the hostname on the actual computer... but it seems the virtual server isn't liking whatever it has as a network card so guess i'm reinstalling anyways
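For the 2008-era servers under discussion, the "actual computer" part of nealmcb's list comes down to three places. A sketch (`newname`/`oldname` are placeholders; needs root, not run here):

```shell
sudo hostname newname                    # takes effect for the running system
echo newname | sudo tee /etc/hostname    # persists across reboots
# keep the 127.0.1.1 entry in /etc/hosts in step with the new name:
sudo sed -i 's/oldname/newname/g' /etc/hosts
```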
<StuTheBearded> hi :)
<StuTheBearded> i was wondering if anyone could help me with a hard drive problem I can't find an answer too?
<StuTheBearded> the problem is with the detection of /dev/hdX: my hard drives are labelled from hde, not hda, even though the primary disc is on the primary ide controller, and the kernel is reporting it can't find /dev/hde3
<StuTheBearded> /dev/hde3 should be /
<XiXaQ> is there any software package to install if you want to setup dynamic dns for a domain?
<jdstrand> hi sommer
<jdstrand> sommer: I found an inaccuracy in https://help.ubuntu.com/7.10/server/C/httpd.html
<sommer> jdstrand: okay, which part?
<jdstrand> +CompatEnvVars should not be used:
<jdstrand> Syntax error on line 11 of /etc/apache2/sites-enabled/strandboge.com_ssl:
<jdstrand> SSLOptions: Illegal option 'CompatEnvVars'
<jdstrand> heh-- you got a little more than I intended... :)
<sommer> jdstrand: I wondered about those options myself... they are listed in the Apache docs
<jdstrand> sommer: all the others work fine
 * jdstrand wonders why he can't type...
<sommer> jdstrand: do you know if they are really needed?
<sommer> all of them that is
<sommer> from my testing it works fine without that line, but I wasn't 100% sure
<jdstrand> sommer: hold on...
<sommer> jdstrand: standing by... heh
<jdstrand> sommer: ok
<jdstrand> +StrictRequire should definitely be there IMO
<jdstrand> it makes sure that when using SSLRequireSSL that access is forbidden
<jdstrand> (that is the intuitive use of SSLRequireSSL)
<jdstrand> I am not sure FakeBasicAuth is a good default
<jdstrand> it allows using a client cert instead of basic authentication if the client has a valid one
<jdstrand> sommer: that is not an intuitive use IMO and would be site-specific
<sommer> jdstrand: seems like most of those options are more advanced than what we're trying to document
<sommer> jdstrand: I agree, should we just remove the SSLOptions line?
<jdstrand> ExportCertData seems harmless enough-- exports ssl environment variables so cgi can access them
<jdstrand> it too seems site-specific
<jdstrand> so really, all can go with no problems, but I really think +StrictRequire should stay
<sommer> jdstrand: okay... works for me
<sommer> I'll update it, thanks for the input
<jdstrand> sommer: cool, so you'll fix that?
<jdstrand> ah great-- thanks!
<sommer> it'll be for hardy anyway
<jdstrand> sommer: should I come to you on this sort of thing in the future, or is there a package I can assign a bug to in LP?
<sommer> jdstrand: works for me, but if you'd like to file a bug you can just assign it to the doc team
<jdstrand> ok thanks
<sommer> there is also the ubuntu-serverguide package
<sommer> no problem, thank you
<jdstrand> sommer: I'll file the bug to see if we can get the gutsy docs fixed, or at least errata'd
<sommer> cool... I'm not sure what the exact process for that is, but someone on the doc list will know I'm sure
<sommer> it's probably the same as the SRU process... I'm thinking
<mathiaz> sommer: yes. ubuntu-serverguide being a package, the SRU process should apply to updating the package for a stable release.
<pteague_work> anybody know how i could export a vm from my vmware server on my work devel box & import it on my home devel box?
<GSaldana> can anyone help me with apache2 and mod rewrite rules? I have enabled it with a2enmod rewrite, but the .htaccess rules dont work
<GSaldana> aahh nevermind, i had to put AllowOverride All on my sites enabled file
<sommer> mathiaz: cool... and welcome back
<mathiaz> sommer: thanks :)
<joejaxx> mathiaz: thanks :)
<macd> mathiaz, off hand do you know who packages ruby in ubuntu? or is it just a upstream sync type of thing?
<mathiaz> macd: it's a sync from debian IIRC.
<macd> I'm wondering if we can just repackage mongrel to have the mongrel cluster scripts in it already
<macd> rather than having a package with literally 3 files in it
<mathiaz> macd: are you refering to the meta-package with 3 files in it ?
<macd> in your email you noted that a package for mongrel cluster was needed, which would be about 3 files, and anyone wanting to run more than a single mongrel instance would want that, so why not make the base package include the cluster portions to manage the config/start/stop of mongrel
<mathiaz> macd: Makes sense. (I'm not familiar with the mongrel package)
<mathiaz> macd: could the mongrel_cluster init script replace the one shipped by mongrel ?
#ubuntu-server 2008-01-03
<macd> yes
<macd> and start a single mongrel, or a cluster of them (whatever is specified in /etc/mongrel_cluster/)
<mathiaz> macd: or to rephrase: could mongrel_cluster be used to manage only one mongrel instance ?
<mathiaz> macd: that seems reasonable to me.
<macd> yeah, it can be used to manage a single mongrel instance
<mathiaz> macd: so instead of creating a new package, support for multiple mongrel instances should be added to the mongrel package itself.
<macd> thats the road I was thinking the best to take
<mathiaz> macd: yes. Makes sense to me also, if it doesn't involve heavy changes.
 * macd starts on it tonight
<macd> are you able to upload to universe? or do I need to go through revu?
<kraut> moin
<m1r0> hello all
<pteague_work> has anybody else run both an md5sum & the check CD for defects for ubuntu-jeos?  my md5sum matches, but it comes back as the cd being defective... granted, i'm booting a vm off an iso, but ubuntu desktop & ubuntu server don't report errors when checking for defects
<PatrickDE> Heya
<sommer> hello
<PatrickDE> Just a quick question on postfix - does anybody have (a link to) a backport to dapper (amd64) of postfix >=2.3
<lamont> PatrickDE: 2.4.5-3build1~dapper1 is the current dapper-backport version...
<lamont> ah, which is ftbfs on amd64
<PatrickDE> ;) 2.4.5-3 failed for amd64, didn't it? (https://launchpad.net/ubuntu/+source/postfix/2.4.5-3build1~dapper1)
<PatrickDE> btw. hi lamont
<PatrickDE> I was trying to use dovecot sasl with postfix - when I saw that dapper is using 2.2
<lamont> and requires a backport of tinycdb, which won't be happening in the backport archives
<PatrickDE> ic - so no dovecot, but cyrus then?
<lamont> PatrickDE: or a little backporting work
<PatrickDE> I have to admit that I'm pretty new to all this stuff - so I'd go for some kind of existing solution :)
<sommer> PatrickDE: I'd recommend trying the Cyrus SASL option
<PatrickDE> sommer: guess that's best - I take it :)
<PatrickDE> thanks a lot guys!
<sommer> np
<Leovenous> Ah... its saner in here.
<Leovenous> I have an installation (server LTS) that hangs at 83% "Installing the kernel - retrieving and installing linux server"
<Leovenous> P4, 1.7 with a WD 40GB HD
<Leovenous> Kinda old, but still kicking.
<Leovenous> The CD checks out fine.
<Leovenous> Any ideas what my problem is?
<leonel> no ideas ...
<leonel> you can Install Gutsy and then upgrade to HARDY ( LTS ) in April ..
<Leovenous> Perhaps
<Leovenous> I'll give it a shot. Thanks.
<good_dana> right now i have 2 ubuntu virtual machines running 6.06 server lts on microsoft virtual server, has anyone tried to get geOS to run on ms virtual server?
<good_dana> by geOS i mean jeOS
<leonel> not me but should work
<sergevn> you should try JeOS as host operating system ;)
<good_dana> lol
<good_dana> i already broke my brain setting up a xen virtual host on centOS
<good_dana> although that was mostly due to SATAraid
<culix> hello everyone, i have a problem with the postinst script of mysql-server-5.0 on dapper; the script stops after "Stopping MySQL..."
<culix> it is an update from 5.0.22-0ubuntu6.06.5 to 6.06.6
<mathiaz> culix: you should have a look at /var/log/daemon.log
<mathiaz> culix: mysql error messages should be logged there.
<culix> mathiaz: that was the hint, the disk holding the logs was full, thx for that :)
<sergevn> Is there a safe way to flush your /var/log/ files?
#ubuntu-server 2008-01-04
<kgoetz> i have logwatch installed, and it has a file (+x) at /etc/cron.daily/00logwatch, but i dont get emails from it. when i run it manually i get mail from the script
<soren> syslog tells you when cron runs something. Does it mention it?
<kgoetz> let me check
<kgoetz> holy heck, how has syslog hit 32mb :S
<kgoetz> Jan  4 09:17:01 moon CRON[13505]: Authentication service cannot retrieve authentication info
<kgoetz> looks like syslog/cron has been broken since 27th of september
<kgoetz> soren: looks like thats the problem :/ i'll try and work out this cron+syslog issue
<kgoetz> turns out shadow was malformed. fixed that and cron+other stuff started working again.
<XiXaQ> I want to setup a shared folder on my ubuntu server so that windows clients can connect using only a password. I've been reading and reading, but I don't understand how it's possible. I don't want to duplicate all users usernames and passwords between all windows clients and ubuntu.
<XiXaQ> can someone explain this? I've tried setting security = share, but then it isn't password protected at all..
<XiXaQ> I wonder if irc would be as popular if everyone was required to read the irc protocol before they were allowed to run the client.. :)
<XiXaQ> it feels like samba is doing something similar to me.
<PanzerMKZ> well I setup my samba install
<PanzerMKZ> and gave it one user
<PanzerMKZ> give it a valid user = username
<XiXaQ> I don't want to tie it to usernames. Everybody that knows the password should be able to read and write, just like in windows. Isn't that possible?
<PanzerMKZ> I have one username
<PanzerMKZ> panzer
<PanzerMKZ> so that line is set to valid user = panzer
<PanzerMKZ> and there is a password for user panzer
<PanzerMKZ> so I go to any of my windows boxen and pop that share and log in using panzer
<XiXaQ> ok, so if a friend comes over for tea, brings his laptop and wants to open a file, then he just has to create a new user in his system, log out and back in with that user, then connect to the share?
<PanzerMKZ> no
<PanzerMKZ> you come over to my pad
<PanzerMKZ> for tea
<PanzerMKZ> you got a windows box
<PanzerMKZ> you log in as you on your box
<PanzerMKZ> you go to start>run
<PanzerMKZ> type in //companion
<PanzerMKZ> companion in this case being my ubuntu file server running samba
<PanzerMKZ> up pops in a user/password window
<PanzerMKZ> and you put in the user panzer and my super secret password of lace
<PanzerMKZ> and bam you have access to all the file shares
<XiXaQ> so I'll have to setup one user per share?
<PanzerMKZ> is that not what you wanted?
<PanzerMKZ> one user
<XiXaQ> do you know how people share files in windows xp?
<PanzerMKZ> for all your shares?
<PanzerMKZ> there is a ten connection limit for xp file shares
<XiXaQ> I'd like to have a share with a password, another share with a different password. This means I have to create two different users, then share the resources as those users and give those usernames and passwords to the people who're supposed to access the shares?
<PanzerMKZ> but smb users don't have to be system users
<XiXaQ> they don't?
<PanzerMKZ> no
<XiXaQ> oh...
<PanzerMKZ> I could have smb user fred that is no where on my system
<XiXaQ> then how do I add this user?
<PanzerMKZ> first answer is man samba
<PanzerMKZ> which is what I am going to do now
<XiXaQ> you don't think I've been doing that for weeks?
<XiXaQ> everyone has been explaining how to install single-signon setups with directory controllers, and ldap, dhcp and god knows. I only want my damn directory to be available to those who know the password :)
<XiXaQ> the samba configuration guide has 47 chapters.
<PanzerMKZ> nice
<PanzerMKZ> man smbpasswd
<XiXaQ> well, actually, that's just the howto :)
<PanzerMKZ> it will talk about adding users
<XiXaQ> thanks :)
<PanzerMKZ> and changing passwords
<PanzerMKZ> the -a command adds a user
<PanzerMKZ> you need example of my smb.conf file?
<XiXaQ> that would be nice.
<PanzerMKZ> http://pastebin.com/d6421821c
<PanzerMKZ> is for a iso share I have
<XiXaQ> that's the kind of share I want to setup. :)
<XiXaQ> but panzer is a real user, right?
<PanzerMKZ> well on that box yes
<XiXaQ> panzer can login using ssh with that password, for instance?
<PanzerMKZ> but the passwd for panzer on smb is different then the system
<PanzerMKZ> so the answer is no
<XiXaQ> how do I do that?
<PanzerMKZ> setup the share
<PanzerMKZ> and then work your magic with smbpasswd
<XiXaQ> oh.. Ok. Normal user doesn't automatically have access to their own shares? You must use smbpasswd first?
<PanzerMKZ> to add users and passwords
<PanzerMKZ> yea
<XiXaQ> aha..
<XiXaQ> does that make sense in anyway?
<PanzerMKZ> yea
<PanzerMKZ> basically you are asking if you have a samba server up
<XiXaQ> pardon?!
<PanzerMKZ> and you add a new user to the system does the system users home dir get shared out automatically
<XiXaQ> no. If I make a normal user, I then add that user's home directory as a share in /etc/samba/smb.conf. That user's password will be invalid, because I haven't set a samba password for him yet?
<PanzerMKZ> yea
<XiXaQ> ok.. Is that a requirement, or is it possible to use pam instead?
<PanzerMKZ> you can set it up different ways by changing the smb.conf
<PanzerMKZ> there should be things you uncomment
<XiXaQ> ok, that's fine for my setup. However, if I wanted to let other unix users share their folders at will.. I really don't want to make them all admins?
<XiXaQ> it seems strange to me that shares should be specified in the main configuration file?
<PanzerMKZ> there is parts for users to add
<XiXaQ> By default, \\server\username shares can be connected to by anyone with access to the samba server.  Un-comment the following parameter to make sure that only "username" can connect to \\server\username This might need tweaking when using external authentication schemes
<XiXaQ>    ;valid users = %S
<XiXaQ> does this mean that by default, all users can read from and write to other users homes?
<XiXaQ> PanzerMKZ, thanks. I think I got it.
<XiXaQ> it's the documentation that confused me. help.ubuntu.com's samba documentation seems to explain more about how Active Directory works and what LDAP is than how to setup a share.
<XiXaQ> think maybe I'll write a simpler guide when I get this right.
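What XiXaQ ends up with can be condensed to one Samba-only account plus a short share stanza. A sketch under stated assumptions: user/share names and paths are illustrative, samba is installed, and this is the 2008-era init script layout (needs root, not run here):

```shell
# A system account for ownership, with shell login disabled; smbpasswd then
# gives it a Samba password that is independent of any login password.
sudo useradd -M -s /usr/sbin/nologin shareuser
sudo smbpasswd -a shareuser           # prompts for the share's password

# Append a minimal password-protected share to smb.conf:
cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[teatime]
   path = /srv/teatime
   valid users = shareuser
   read only = no
EOF

sudo mkdir -p /srv/teatime && sudo chown shareuser /srv/teatime
sudo /etc/init.d/samba restart        # 2008-era init script
```

Anyone who knows the `shareuser` password can then open `\\server\teatime` from Windows, which is the "just a password" behaviour XiXaQ was after.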
<c1|freaky> does anyone know if theres a tool for windows which can knock on port? because i want to secure ssh using knockd
<c1|freaky> and is there any good firewall solution for ubuntu server?
<c1|freaky> i mean, configuration, standard mechanisms etc.?
<c1|freaky> I also need some intrusion detection software ...
<ScottK> iptables is built into the kernel.  That's your firewall.
<c1|freaky> ok
<ScottK> Is it just you connecting via SSH?
<c1|freaky> no
<ScottK> OK.  I generally just rate limit SSH connections via iptables, but it doesn't scale well for lots of users.
<c1|freaky> i want to secure ssh logins using knockd ... but i dont know if theres any software for windows which can knock on ports
<c1|freaky> like knock does
 * ScottK doesn't use Windows.  Sorry.  Can't help on that.
<c1|freaky> ok thanks
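ScottK's rate-limit approach is commonly done with iptables' `recent` match. A sketch (the thresholds are arbitrary, the rules need root and are not persistent by themselves):

```shell
# Track every new SSH connection per source address...
sudo iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set
# ...and drop a source that opens 5 or more new connections within 60 seconds.
sudo iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
     -m recent --update --seconds 60 --hitcount 5 -j DROP
```

As ScottK notes, this punishes busy legitimate sites too, so the hitcount has to be sized for the expected number of users.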
<c1|freaky> im waiting for my new server :D
<c1|freaky> 6GB DDR2, 2 750GB SATA II HDDs, AMD Athlon 64 X2 DualCore :D
<normanm> hi all
<normanm> We use ubuntu on some servers here. We need to support php4 because of some old CRM. I saw feisty dropped the php4 support. Any idea if there are some sources where we can get the needed debs ?
<ScottK> Run Dapper would be my suggestion.
<normanm> ScottK, dapper is really out of date :-/
<ScottK> Yes, but you want to run PHP4.
<ScottK> I'm typing this on a Dapper desktop because it basically does what I need.
<ScottK> Debian is, I think, also dumping PHP4, so it's a losing battle I think.
<normanm> ScottK, Dapper doesn't work on the x4100 servers with the "supported" kernel
<normanm> ScottK, well I don't want... I need :-/
<ScottK> Ah.
<ScottK> I don't know enough about PHP to have a useful opinion then.
<normanm> The company i'm working for is using a self-developed crm which only supports php4
<normanm> btw.. Is dapper still getting security updates ?
<lamont> hrm... do I want to turn on IDN support in bind9, I wonder?
<lamont> normanm: only until june of 2011 (for ubuntu-standard portions), or June of 2009 (for desktop stuff)
<lamont> :-)
<normanm> lamont, hmm thats not bad..
<normanm> So now i need to think about if i want to use dapper drake or i want to use freebsd
<normanm> on the webservers
<lamont> normanm: or stall until april and put hardy on, which will have server security support until april of 2013 :-)
<normanm> lamont, ;-)
<lamont> or I suppose you could just dist-upgrade from dapper to hardy once it's out. (That'll be tested/supported)
<normanm> lamont, well i don't think hardy will support php4
<lamont> ah, there is that.
<normanm> lamont, :-P
<lamont> you do know that it stands for "Please Hack Promptly", right?
<normanm> I already upgraded all servers except the webservers to gutsy
<normanm> lamont, tell me something new... But what should i do if my boss wants it :-P
<lamont> normanm: short term?  do it.  longer term?  resume.
<normanm> lamont, yes.
<normanm> BTW, do you know if there is something like kernel security levels planned for ubuntu ?
<lamont> ijiot circumstances require change... :-)
<lamont> as in C2 or B1 or such?
<lamont> orange-book levels?
<lamont> no clue on that one.
<lamont> security fixes? already happens
<normanm> Something like in freebsd, which prevents modules from being loaded, doesn't allow the time to be set more than 1 second in the past/future, doesn't allow raw access to block devices etc
<lamont> ah.  one could use selinux to do that, quite possibly could use apparmor (which is there by default in gutsy...)
 * lamont prefers selinux, wasn't consulted wrt what got turned on in gutsy's kernel
<lamont> time to sleep
<kraut> moin
<_ruben> mornin
<ScatterBrain> By default the snmp daemon runs as user "snmp".  How can I give that user permissions to read the log files in /var/log?
<ScatterBrain> I've tried adding it to the "adm" group, but that doesn't seem to work.
<Kamping_Kaiser> why do you want them to?
<ScatterBrain> I need to run a script that greps the logs via snmp'd "pass" functionality so I can keep stats with Cacti.
<Kamping_Kaiser> unsure. night mate
<ScatterBrain> at this point, I'm looking to keep track of Postfix and Amavis.
<lamont> ScatterBrain: group adm should be sufficient.  OTOH, it probably requires restarting the daemon
<ScatterBrain> lamont, yeah did that.
<ScatterBrain> I'm still getting permission denied trying to read /var/log/mail.log
<ScatterBrain> even with 644 permissions.
<ScatterBrain> Go figure.
<lamont> and what are the perms on /var/log :-)
<jetole> hey guys, I need to know how badly I have just fsck'd myself
<ScatterBrain> lamont: by default they are 640, with root/adm user/group ownership.
<jetole> root didn't look twice before hitting enter and did a rm -f /var/log
<ScatterBrain> When I change them to 644, the script works.
 * jetole is currently waiting for the server to reboot
<lamont> ScatterBrain: ls -ld /var/log :)
<lamont> it's not 640 by default
<jetole> hmmm, it looks like most the files recreated themselves upon reboot... I think
<lamont> jetole: mkdir /var/log; chmod 755 /var/log; chown root:root /var/log
<lamont> (or reboot and everything should just do the right thing)
<ScatterBrain> lamont: mine was.
<lamont> except maybe /var/log/ would wind up 555 instead of 755...
<lamont> ScatterBrain: the directory had no execute permissions?
<lamont> that would explain why you couldn't open any file under it...
<lamont> (since exec is needed to open a file...)
<ScatterBrain> oh, the log directory itself.
<lamont> er..
<ScatterBrain> not the file.
<lamont> exec permission on a directory is needed to open files in the directory
<lamont> ScatterBrain: yes.
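lamont's point about the execute (search) bit can be seen in a scratch directory. A minimal sketch, assuming a GNU/Linux userland (`mktemp`, GNU `stat`); note that root bypasses these permission checks entirely, so the "denied" branch only fires for a normal user:

```shell
#!/bin/sh
# Opening a file requires execute (search) permission on the directory,
# not just read permission on the file itself.
d=$(mktemp -d)
echo hello > "$d/mail.log"
chmod 644 "$d"                  # directory readable but NOT searchable
cat "$d/mail.log" 2>/dev/null || echo "denied: directory lacks x"
chmod 755 "$d"                  # restore the execute bit
cat "$d/mail.log"               # now succeeds
rm -r "$d"
```

This is exactly why a mode-644 `/var/log/mail.log` is still unreadable if `/var/log` itself has lost its execute bits.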
<jetole> lamont: rm -f /var/log/
<lamont> jetole: that shouldn't do anything, should it?
<jetole> doesn't delete the directory or any subdirs
<jetole> just all flat files in /var/log
<lamont> mkdir x; rm -f x
<lamont> rm: cannot remove `x': Is a directory
<jetole> right, well I wasn't sure if all logs recreated themselves or not
<ScatterBrain> lamont: blue tmp # ls -ld /var/log
<ScatterBrain> drwxr-xr-x 8 root root 2048 2008-01-04 10:37 /var/log
<lamont> ScatterBrain: which is what it should be.
<jetole> since I know logrotate has options for logs that need to be touched after an old one is moved
<ScatterBrain> so If I change the directory so that root/adm owns it will that be OK?
<lamont> ScatterBrain: anyone in the world is allowed to open files in that directory, depending on the permissions
<lamont> is apparmour bitching about anything?
<jetole> I have an apparmor dir in /var/log so I am not sure why I would change perms on /var/log for a file that has its own /var/log/apparmor
<lamont> jetole: if there are any logs that don't rebuild after a restart of the daemon (so that a reboot of the system is sufficient....), then it's a bug in the daemon
<jetole> lamont: any known bugs in server common packages that I may need to be aware of? ;)
<lamont> apparmor does all kinds of neat funky stuff with permissions totally independent of the filesystem permissions
<lamont> jetole: not personally, no
<jetole> alright, well thanks for the help
<lamont> ScatterBrain: if the file is mode 644 and you can read it but the snmp daemon can't, then it's something outside of FS permissions
 * lamont finally heads off to get to work for the day
<ScatterBrain> lamont: like what?
<lamont> like apparmor
<lamont> or selinux
<ScatterBrain> the script works as root and if I set the perm to 644.
<ScatterBrain> on a dapper box?
<lamont> root skips all kinds of perm checks
<lamont> dapper.
<lamont> if the daemon can't open it, I'm pretty sure it's FS perms somewhere.
<lamont> anyway, gotta run
<ScatterBrain> lamont: OK, I'll keep looking.
<ScatterBrain> thanks.
<sergevn> Does anyone have any experience with denyhosts?
<nealmcb> sergevn: what's your question?
<hangthedj> i just upgraded to hardy, how do i change the shell to say hardy instead of gutsy 7.10 Tribe 3?
<delphiuk> can someone help me with a 6.06 upgrade problem? I have an output if you need to see it?
<Kamping_Kaiser> delphiuk, what is the problem?
<Kamping_Kaiser> hangthedj, not sure, try /etc/issue, but be sure you know what you're doing before messing with stuff like that
<hangthedj> ok thanks
<delphiuk> Kamping_Kaiser: http://paste.ubuntu-nl.org/50774/
<Kamping_Kaiser> delphiuk, hm
<Kamping_Kaiser> are you using offical repositories?
<delphiuk> Kamping_Kaiser: Oh yes, nothing is "non standard"
<Kamping_Kaiser> BTW, try running `export LANG=C` to get rid of those locale/perl errors (makes things easier to read)
<Kamping_Kaiser> delphiuk, try `export LANG=C && apt-get -f install`
<Kamping_Kaiser> tell me what that outputs
<delphiuk> richard@sugar:~$ sudo export LANG=C && apt-get -f install
<delphiuk> sudo: export: command not found
<Kamping_Kaiser> `export LANG=C && sudo apt-get -f install`
<Kamping_Kaiser> export is a shell built in (see help export in the shell if you want to see more)
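The failure above is worth spelling out: `export` is a shell builtin, not a program, so there is no binary for `sudo` to execute. A minimal sketch of the two working alternatives (the `echo` stands in for the real privileged command so it can run anywhere):

```shell
#!/bin/sh
# `sudo export LANG=C` fails because sudo runs programs, and export isn't one.
# Option 1: export in your own shell first, then run sudo afterwards:
export LANG=C
# Option 2: set the variable for a single command only:
LANG=C sh -c 'echo "LANG is $LANG"'
```

Option 2 is handy because it does not disturb the rest of your session's locale.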
<delphiuk> Kamping_Kaiser: http://paste.ubuntu-nl.org/50778/
<Kamping_Kaiser> try `sudo apt-get --purge remove apache2-utils` (it may remove a bunch of stuff, i'm not sure)
#ubuntu-server 2008-01-05
<zul> dendrobates: oh darn ottawa wining 1-0 against buffalo
<mathiaz> zul: is this a big problem ?
<zul> mathiaz: nope just ribbing dendrobates for later :)
<mathiaz> zul: I'm not sure that dendrobates really cares about buffalo
<mathiaz> zul: OTOH if ottawa is playing Atlanta soon...
<zul> mathiaz: i know just reminding him who has the better team :)
<zul> dendrobates is actually going to the all-star game
<mathiaz> zul: is ottawa still the best team these days ?
<zul> in the east
<Yahooadam> W: GPG error: http://security.ubuntu.com feisty-security Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>
<Yahooadam> W: You may want to run apt-get update to correct these problems
<Nathan_P> Hello?
<ScottK> Hello
<youpi> hello
<NineTeen67Comet> Hi all .. little question about squid (I've got a 6 yr old w/his own Linux box) .. I would like to limit my kids access to the www .. Squid seems the best deal, but can I lock down ONLY his login/box?
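NineTeen67Comet's question goes unanswered in the log; in squid, limiting a single box is usually done with a source-address ACL. A sketch only: the IP and the domain-list file are placeholders, and the child's machine must also be forced through the proxy (e.g. by a firewall rule), or it can simply bypass squid:

```
# /etc/squid/squid.conf sketch: restrict one LAN host by its source IP
acl kidbox src 192.168.1.50
acl blocked dstdomain "/etc/squid/blocked-domains.txt"
http_access deny kidbox blocked
http_access allow all
```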
<pete2> hello
<pete2> i have a problem with ubuntu 7.10 scanner support in an MFP (printer/scanner/copier), i bought an HP M1005 MFP, the printer works perfectly, but the scanner just doesn't get detected in the system, any tips how to fix this?
<googlah> is this server-related?
<pete2> printer is attached to server
<pete2> and it doesn't get detected
<pete2> i mean the scanner on printer
<googlah> weird.. i haven't played with printers at all, maybe someone else can help you..
<pete2> printer works perfectly, B/W, but the scanner doesn't get detected at all, tnx googlah
<Nafallo> pete2: http://hplip.sourceforge.net/supported_devices/index.html
<pete2> tnx nafallo, i check
<pete2> sadly but is not there :/
#ubuntu-server 2008-01-06
<c1|freaky> hi all. im having a strange problem: i rented a new server, with larger discs and better hardware. i used partimage to save and restore the old disks on the new server. the old discs were around 150GB each. the new server has 2 750GB HDDs. my problem is this: http://main.freakyy.de/cfdisk.txt http://main.freakyy.de/df.txt as you can see, cfdisk shows me 2 750GB HDDs but df -h shows me around 140GB partitions, which are not
<c1|freaky> shown in cfdisk. what is wrong?
<c1|freaky> anyone got any idea how i can fix that?
<Kamping_Kaiser> do you need to expand the partitions?
<c1|freaky> umm, yea
<c1|freaky> i want to use up all disc space for those partitions
<c1|freaky> just one swap
<c1|freaky> Kamping_Kaiser ^^
<c1|freaky> the thing is, that there are no 150GB partitions. it seems partimage has just ... put the data on there like that. in reality, those partitions are not present the way they are shown in df -h
<Kamping_Kaiser> odd. sorry i cant stick around, heading out for lunch
<c1|freaky> :(( ok
<c1|freaky> when i did a dpkg --get-selections > file and dpkg --set-selections < file how can i get apt to install them? oO
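This question also goes unanswered in the log. The usual missing step after `--set-selections` is `apt-get dselect-upgrade`, which installs/removes packages to match the loaded selections. A sketch; it needs root and a Debian-style system, so it is shown here rather than run:

```shell
# On the old machine: record the installed-package list
dpkg --get-selections > selections.txt
# On the new machine: load the list, then let apt act on it
sudo dpkg --set-selections < selections.txt
sudo apt-get dselect-upgrade
```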
<hames> how can i limit a ssh user to only read his home directory and /usr/bin
<Nafallo> anyone has anything to say about Maxtor Atlas 15K II?
<soren> hames: You can't.
<Nafallo> hames: no /bin/bash? ;-)
<mok0> ho
<\sh> jo
<\sh> mok0: I removed all swap parts from fstab...but still mkswap doesn't work because device or resource busy
<\sh> I think I'll reinstall dapper and try to fix it on edgy
<mok0> \sh: that is the problem.
<mok0> \sh: why dont you go to a more recent
<Nafallo> \sh: lsof?
<mok0> \sh: feisty?
<\sh> Nafallo: nothing...no process is using /dev/sda2 or /dev/sdb2
<\sh> mok0: i'm running on gutsy...
<soren> \sh: It won't show if it's the kernel using it.
<mok0> \sh: it'd be kinda fun to understand what's going on, though
<soren> \sh: What's the problem exactly?
<\sh> soren: ok...I installed dapper on a server...running two md devices and have two separate swap parts...
<\sh> soren: md0 -> raid1 / md1 -> raid1 working perfectly.../dev/sda2 /dev/sdb2 -> swap parts..were running fine on dapper
<\sh> soren: after upgrading from dapper over edgy,feisty to gutsy I can't see any swap devices anymore...swapon -a -e gives me "invalid argument for /dev/sda2 and sdb2" mkswap /dev/sda2 or mkswap /dev/sdb2 gives me "device or resource busy"
<\sh> soren: even when I let fstab forget about having swap parts anyways
<soren> \sh: Noting in /proc/swaps?
<\sh> soren: googling gives me a lot of pointers to the same problem..and https://edge.launchpad.net/ubuntu/+bug/66637 is also about it..but I don't have a laptop but a server
<ubotu> Launchpad bug 66637 in util-linux "After running mkswap, swap space is discarded, system fails to hibernate (invalid swap signature)" [High,Confirmed]
<soren> \sh: Do you have evms installed?
<\sh> soren: nothing in /proc/swaps
<\sh> soren: yepp
<soren> \sh: Do you need it?
<\sh> soren: no it's not started anyways
<\sh> soren: just mdadm
<soren> \sh: If not, remove it, reboot, and you're likely to be much happier.
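soren's fix, spelled out as commands. A sketch under assumptions: package name as discussed above, and the initramfs rebuild is included because (as noted below) the evms udev rules run from the initramfs:

```shell
sudo apt-get remove --purge evms    # drop evms and its udev rules
sudo update-initramfs -u            # rebuild so the evms hooks are gone at boot
sudo reboot                         # devices come back unclaimed; mkswap/swapon work again
```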
<\sh> soren: some more information? what about evms being installed but not started?
<soren> \sh: i forget how it gets started, to be honest. I thought it ran through udev?
<Nafallo> I would think it has rules in udev running under the initramfs already.
<soren> /etc/udev/rules.d/85-evms.rules
<soren> It does use udev.
<soren> \sh: ...so AFAIR, it creates a new block device (devmapper style) for each existing block device.
<Nafallo> it's in universe these days ;-)
<mok0> soren: yikes!
<soren> mok0: It's *very* annoying.
<\sh> fck
<\sh> that's it
<\sh> this is crap
<\sh> really
<soren> It is.
<mok0> \sh: it also explains that the problem appeared after an upgrade
<\sh> evms is being installed by default on dapper server lts right?
<soren> \sh: Yes.
<soren> \sh: but back then, it didn't cause as much trouble as it does now.
<soren> \sh: ...due to all the udev changes we made since then.
<\sh> soren: so we will have problems doing dapper -lts > hardy lts dist-upgrades...
<soren> \sh: I know :)
<\sh> oh fun
<\sh> that cost me now 2 hours of time ...
<\sh> and all google pointers are totally crap too
<mok0> \sh: but we all learned something :-)
<Nafallo> that -server has related support as well, rather than -motu? ;-)
<mok0> \sh: make a note of this problem somewhere, so it gets indexed by google
<\sh> soren: do we have a good fix for it for the dapper -> hardy upgrade path...or just letting all server systems die and put the solution in the documentation? ,-)
<Nafallo> \sh: blog it! ;-)
 * Nafallo would bet some servers are using the crap
<\sh> Nafallo: strato in germany is giving a ubuntu dapper lts automatic installation for their root servers...so people will be using the upgrade path lts -> lts next time
<soren> \sh: I don't remember, to be honest...
 * soren has a fever today, so don't expect much..
<Nafallo> \sh: yea. so let mvo add some weird hook that in weird ways find out if evms is in use and remove it if not :-P
<mok0> Nafallo: mvo?
<Nafallo> mok0: Michael Vogt, the maintainer of the upgrade tool :-)
<mok0> Glad not to be in his shoes :-)
<\sh> well, I'll blog it now ;)
<mok0> \sh: is your server up & running?
<\sh> mok0: yepp
<mok0> \sh: cool
<\sh> soren: thx for the fix :)
<GUARDiAN-> hi, i installed ubuntu-server with english language and now i have only "C", "en_US.utf8" and "POSIX" as locales. how can i install for example the "de_DE.utf8"-locale?
<ivoks> install language-pack-de
<GUARDiAN-> hm, interesting. i tried that with aptitude and it wants ~430mb dependencies. apt-get doesn't want these deps.
<ivoks> last time i checked, packages are installed with apt-get
<ivoks> aptitude does a lot more than apt-get
<soren> aptitude pulls in recommended packages, too.
<ivoks> soren: happy new year! :)
<Nafallo> by default...
<Nafallo> I use aptitude with sane options.
<soren> ivoks: Likewise! :)
<ivoks> what's up with likewise anyway? :)
<soren> Ask dendrobates.
<ivoks> i was in france for a week and i'm outdated a bit :D
<GUARDiAN-> language-pack-de did it. thanks for your help
<ivoks> np
<ScottK2> ivoks: Do you have much experience with using milters?
<ivoks> ScottK2: never used them :/
<ScottK2> ivoks: OK.  Thanks.
<ScottK2> I doubt many of us here have.
<ivoks> :)
<ScottK2> I'm using dkim-filter (dkim-milter) for signing DKIM and that's working well, but that's my only experience.
<ScottK2> BTW, 4 of the (I think) 8 needed MIRs to get amavisd-new into Main are done.
#ubuntu-server 2008-12-29
<Fenix|home> Greetings!
<Fenix|home> What user accounts on a fresh install are required?
<Elite> HI
<Elite> I run ubuntu hardy server and I have a IDE controller card in the server and the OS detects and show the card in lspci but the disk that is attached to to doesn't show in fdisk -l how do I access this disk?
<drdebian_> have you tried using fdisk /dev/sda blindly and replacing sda with sdb, sdc, ... ?
<jmarsden> Maybe you could also try    sudo lshw -short -class disk   # sudo apt-get install lshw first if necessary
<lukehasnoname> Is there a hotkey for switching channels in xchat
<jnet1216> hey guys, wondering if anyone is familiar with ubuntu 8.10 and compiling php, i have been on it for a while and been failing =(
<Gargoyle> jnet1216: Have done it on other systems, but not specifically 8.10
<jnet1216> http://ubuntuforums.org/showthread.php?p=6454685 =)
<Gargoyle> jnet1216: Have you downloaded the PHP source?
<jnet1216> yes
<_ruben> php5 in 8.10 has sockets built in
<Gargoyle> :D That's a quick solution then!
<jnet1216> not with my phpinfo() says =(
<_ruben>  The following extensions are built in: bcmath bz2 calendar ctype date dba
<_ruben>  dom exif filter ftp gettext hash iconv json libxml mbstring mime_magic
<_ruben>  openssl pcre posix Reflection session shmop SimpleXML soap sockets SPL
<_ruben>  standard sysvmsg sysvsem sysvshm tokenizer wddx xml xmlreader xmlwriter zip
<_ruben>  zlib.
<_ruben> thats what apt-cache tells me
<jnet1216> hmmh thats weird
<jnet1216> wait
<jnet1216> yeah now it says enabled
<jnet1216> but
<jnet1216> when i run it command link
<jnet1216> err
<jnet1216> command line*
<jnet1216> i get errors
<Gargoyle> what error?
<jnet1216> one second, im having some issues with samba now =( second day with ubuntu
<jnet1216> sorry Gargoyle
<jnet1216> looks like i do have sockets, im not sure where i got that from
<Gargoyle> :)
<jnet1216> just having socket_bind issues, but i think i got more than one script running
<jnet1216> Gargoyle: would you happen to know why my shared folder only gave me access to a specified folder, and not its subdirectories?
<jnet1216> i installed samba a few hours a go
<jnet1216> and shared /var/www
<jnet1216> i mapped it to my Z: on my laptop (vista)
<jnet1216> i can access and edit all the files in it, but once i go into a sub dir, like includes, i have read only
<Gargoyle> jnet1216: Sounds like your user permissions are wrong.
<jnet1216> hmmh yeah could be, since www was owned by root, but then i chowned it to nobody.nogroup
<jnet1216> and it worked fine with samba
<jnet1216> and it shared, (when it was owned by root i was having issues)
<jnet1216> but then the rest of the files were uploaded prior to samba, with proftpd
<jnet1216> probably under another account
<jnet1216> whats the best way to go about this?
<jnet1216> when www was owned by root, i was having some really weird issues trying to access it
<Gargoyle> jnet1216: IIRC, in order for apache to read the files, they must be owned by someuser:www-data
<Gargoyle> jnet1216: and have the appropriate permissions for group read (and execute for directories).
<jnet1216> thats really odd
<jnet1216> http://96.49.25.158/test.php
<jnet1216> looking at permissions
<jnet1216> its owned by nobody and nogroup
<Gargoyle> jnet1216: Is that because you put the files there with samba?
<Gargoyle> jnet1216: and you have told samba to use that user account?
<jnet1216> the files were already there before i installed samba
<jnet1216> i uploaded them from proftpd
<jnet1216> but after i installed samba
<jnet1216> i ran some chown command on the var/www folder
<Gargoyle> jnet1216: OK, var/www should be owned by root:root
<Gargoyle> jnet1216: then the directories inside should be owned by user:www-data
<Gargoyle> jnet1216: Unless you plan on having a "magic" apache config that will automatically search for directories in /var/www/ based on the url.
<Gargoyle> jnet1216: If you set user's primary group to www-data, then new files that you create will automatically have the correct group.
<Gargoyle> jnet1216: then make sure that samba (and proftp) are using the same user (or one configured the same) for uploading the files.
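Gargoyle's "group read, and execute for directories" rule maps onto chmod's capital `X` flag, which grants execute only on directories (and on files that already have an execute bit). A self-contained demo in a scratch tree; the `chown` to `user:www-data` is skipped since it needs root and the real group:

```shell
#!/bin/sh
d=$(mktemp -d)
mkdir -p "$d/includes"
echo '<?php ?>' > "$d/includes/index.php"
chmod -R go= "$d"            # strip all group/other bits first
chmod -R g+rX "$d"           # group read everywhere; execute on directories only
stat -c '%a %n' "$d/includes" "$d/includes/index.php"
rm -r "$d"
```

The directory ends up group-searchable (750) while the file stays non-executable (640), which is exactly what a web server's group needs.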
<jnet1216> well, when www is owned by root
<jnet1216> im having issues accessing it
<jnet1216> but
<jnet1216> i think i understand now
<Gargoyle> are you putting your website files in /var/www or /var/www/some_website_directory?
<jnet1216> var/www
<jnet1216> i think if i get ftp and samba to use the same user
<jnet1216> and reupload the content
<jnet1216> i should be fine?
<Gargoyle> jnet1216: You could re-upload, or you could just use chown
<jnet1216> yeah i think thats exactly the issue
<jnet1216> things uploaded with ftp
<jnet1216> are not accessible
<jnet1216> via samba
<Gargoyle> jnet1216: Is this just a test server?
<jnet1216> yeah, for development
<jnet1216> till i get some money to get a dedicated online
<jnet1216> at a datacenter
<Gargoyle> jnet1216: My advice would be that when you get one online, use ssh (sftp) for uploading your files, not samba or ftp.
<Gargoyle> jnet1216: If you are using a windows machine, get yourself putty and winscp.
<jnet1216> the only reason im using samba
<jnet1216> is so i can map it as a network drive, things get so much easier =(
<jnet1216> but yeah, someone told me that too
<jnet1216> im using putty right now
<jnet1216> to control the server
<Gargoyle> jnet1216: Yeah, a mapped drive is the next best solution to an actual local dev environment.
<jnet1216> heyy
<jnet1216> i just found something that can let me map a drive letter to sftp
<jnet1216> but then, its not free
<jnet1216> anyways, its 3:24 in the morning
<jnet1216> one last question haha
<jnet1216> how would i get samba to use a certain user while uploading?
<jnet1216> when i upload right now
<jnet1216> its under nobody/nogroup
<_ruben> disable guest logins and use proper accounts (smbpasswd can be used for that)
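_ruben's suggestion in config form. A sketch: the share name and path come from the discussion above, while the `webdev` account is a placeholder username, created with `sudo smbpasswd -a webdev`:

```
# /etc/samba/smb.conf -- a share without guest access, forcing one account
[www]
   path = /var/www
   writable = yes
   ; require a real account (create it with: sudo smbpasswd -a webdev)
   guest ok = no
   valid users = webdev
   ; files created through this share get a predictable owner/group
   force user = webdev
   force group = www-data
```

With `force user`/`force group` set, uploads over samba and over ftp can be made to land with identical ownership, which avoids the read-only subdirectory problem described above.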
<jnet1216> coolio, ill try that out 2moro morning again, even tho i had some major issues logging in from vista with that earlier
<jnet1216> thanks guys, really helpful =)
<_ruben> (s)ftp(s) are far friendlier protocols to use
<uvirtbot> New bug: #312140 in samba (main) "crash while coping data from local machine to remote samba share" [Undecided,New] https://launchpad.net/bugs/312140
<WoLf_Loonie> Hello, and sorry to disturb.. I'm unable to get IPv6 to work correctly on Ubuntu 8.10, I've set up /etc/network/interfaces with what I've found googling around for days, and with a configuration I'm able to resolve the hostnames correctly, but I can't connect nor ping any IPv6 host.. under the same router, I have another computer running Vista, and IPv6 is correctly working there.
<maswan> WoLf_Loonie: does ifconfig show the correct ipv6 adress on your interface?
<WoLf_Loonie> maswan: yes, showing the correct address for global
<WoLf_Loonie> maswan: could it help if I'll pastebin the contents of interfaces and the output of ifconfig?
<WoLf_Loonie> I think I got the "up ip route add default" line wrong.
<rat> hello there
<rat> i have a broblem with mysql ndb cluster...
<rat> problem
<rat> is there any patch or a way to find the configuration from the mysql-server-5 package so i can recompile it my self?
<Faust-C> recompile ?
<Faust-C> rat: how about you give the exact issue that way we can assit you better
<rat> yes from source
<rat> the problem is that my mysql-server doesn't have the ndb engine built in
<rat> so i need to enable it
<Faust-C> rat: apt-get has features to do a source install
<Gargoyle> rat: Also, I think the MySQL docs tell you what options were used to build... or at least they used to.
<rat> Faust-C : i know but i need the ./configure directives
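For recovering a Debian/Ubuntu package's build flags, the usual place to look is the packaging itself. A sketch; the source package name is from the era under discussion and the `debian/rules` layout is an assumption:

```shell
sudo apt-get build-dep mysql-server-5.0    # pull in the build dependencies
apt-get source mysql-server-5.0            # unpacks the source plus the debian/ dir
# The ./configure flags the package was built with live in debian/rules:
grep -n 'configure' mysql-dfsg-5.0-*/debian/rules
```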
<Gargoyle> rat: I have used these for a few years on my local test install - http://pastebin.com/d28f81b84
<rat> ok thanks
<Gargoyle> is ndb something to do with cluster?
<rat> yes
<Gargoyle> There are separate downloads for MySQL Cluster now.
<rat> from the mysql.com?
<Gargoyle> rat: No. Microsoft have started shipping it instead of SQL Server 2008!
<Gargoyle> ;)
<rat> :)
<Gargoyle> http://dev.mysql.com/downloads/cluster/
<rat> Gargoyle : thanks
<Gargoyle> Different version numbers and all sorts... Not kept 100% up on mysql in recent months, but I think a lot of MySQL Cluster specific changes have been made that they are not going to backport into the main branch, so it lives on its own now.
<maswan> WoLf_Loonie: Hm. "ip route list" and "ip -f inet6 route list" should say stuff about what routing things you have online.
<maswan> WoLf_Loonie: sorry for the slow responses, I'm working as well
<isle86> looking for help to set up apache in a very simple configuration, specially with virtual hosts, where I'm lost
<Gargoyle> isle86: What you got so far?
<isle86> Gargoyle: apache is running. i have a dyndns account, and now arvernes.dyndns.org points to my computer. Now I would like to add an ubuntu package repository. The repo is done, but now I'm fighting with apache, which I'm still discovering.
<Gargoyle> isle86: Your going to have to be more specific than "fighting with apache"
<isle86> yes, actually, I try to set up the virtual hosts named "ubuntu" on my machine.
<Faust-C> isle86: http://www.petersblog.org/node/840
<isle86> this is what it looks like : http://rafb.net/p/ecrCbL40.html
<Gargoyle> isle86: that looks ok. Did you edit /etc/apache2/sites-available/default or copy it?
<isle86> I don't know if I have to keep the <Directory /> statement in that file.
<isle86> Gargoyle: that's a copy of the default file
<Gargoyle> ok, you should be able to just run "sudo a2ensite default_copy" (or whatever you called it) and then restart apache.
<Gargoyle> Also, you can get rid of the serverAlias line
<Gargoyle> Also, on your dyndns account you need to add ubuntu.arvernes.dyndns.com if you are expecting outside people to see it
<isle86> the new file is named "ubuntu" so "sudo a2ensite ubuntu" ? do I have to remove the "default" file ?
<mregister> sorry to be a pain i am newbie, can i be seen?
<Gargoyle> isle86: Nope, but you only need the NameVirtualHost line once.
<Gargoyle> mregister: Yep!
<Gargoyle> isle86: So if you have default and ubuntu both enabled, apache will have a little moan, but that is not critical
<isle86> I would like to keep both. Possible ?
<Gargoyle> isle86: yes, that is the idea behind virtual hosts.
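The two sites-available files being discussed boil down to something like this. A sketch, assuming the era's apache2 layout; hostnames are the ones from the conversation and `/var/www/default` is the directory suggested further down:

```
# /etc/apache2/sites-available/default -- catch-all site for unmatched names
NameVirtualHost *:80
<VirtualHost *:80>
    DocumentRoot /var/www/default
</VirtualHost>

# /etc/apache2/sites-available/ubuntu -- the package-repository vhost
<VirtualHost *:80>
    ServerName ubuntu.arvernes.dyndns.org
    DocumentRoot /var/www/ubuntu
</VirtualHost>
```

Enable with `sudo a2ensite ubuntu` and reload apache; `NameVirtualHost` appears once, and the first vhost without a `ServerName` match answers any leftover request.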
<isle86> ok, so let's run the command you said above.
<Gargoyle> isle86: You won't be able to see the second virtual site until you fix or fudge the ubuntu.arvernes.dyndns.com dns entry.
<isle86> done, restart apache, now you said to activate something on the dyndns web site.... I thought as arvernes.dyndns.org was registered, everything above would be visible
<Gargoyle> isle86: Apache is not a DNS server!
<Gargoyle> isle86: You can fudge it for local testing by editing your /etc/hosts file and adding an entry for it.
<isle86> I'm on the dyndns web site to find out where to activate that "subdomain"
<isle86> I guess I activated the right stuff...?
<isle86> should be available at http://ubuntu.arvernes.dyndns.org
<isle86> do not know if it works though ...
<Gargoyle> isle86: If you have then dyndns has not caught up yet!
<Gargoyle> isle86: ping: cannot resolve ubuntu.arvernes.dyndns.com: Unknown host
<isle86> Gargoyle: :-(
<isle86> Do I have to change something in the apache2.conf file ? Could be my mistake ?
<Gargoyle> isle86: Did you edit your hosts file?
<isle86> no
<Gargoyle> isle86: Then no... generally speaking you don't need to go messing in the main config file.
<isle86> Gargoyle: ok
<Gargoyle> isle86: As I said, Apache is a web server not a DNS server. If you have setup dyndns to say that your managing it as a subdomain, then you also need to setup a dns server
<isle86> Gargoyle: on the dyndns web site, I activated the following feature :
<isle86> Hostname : arvernes.dyndns.org
<Gargoyle> isle86: That will only allow you to run 1 website.
<isle86> Wildcard: (x) Create wildcards alias for *.host.domain.tld (so *.arvernes.dyndns.org)
<Gargoyle> isle86: I would imagine that would work, but I have not used dyndns
<isle86> So Gargoyle that should do the trick ??
<isle86> maybe I have to wait for ddclient to reset its semaphore file ?
<Gargoyle> isle86: err, it should be working now. The hostname resolves to the correct address
<Gargoyle> I can see your dists folder ;-)
<isle86> ok, so it works. Now I have to understand some stuff with that vhost file. Thank you Gargoyle I wouldn't have thought to the dyndns stuff without you.
<Gargoyle> isle86: No probs. You can use whatever you like for your servername values and apache will use them. actually getting that name to work for a http request from the browser is DNS's job - not apache's
<Gargoyle> For all my testing I just have single words, like "site1", "site2", "drupal". And then edit my hosts file.
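The hosts-file fudge Gargoyle describes looks like this; the LAN IP is a placeholder for wherever the test server actually lives:

```
# /etc/hosts -- point the test names at the server for local testing
127.0.0.1      localhost
192.168.0.10   arvernes.dyndns.org ubuntu.arvernes.dyndns.org site1 site2 drupal
```

The browser then resolves those names locally, so apache's name-based vhosts can be exercised before any public DNS entry exists.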
<isle86> Gargoyle: are the default configuration files provided with apache secure enough or do I have to change different settings
<Gargoyle> isle86: Generally you won't need to worry about "Apache's" security. You'll only get problems when you start using modules that allow greater access to the filesystem or users to upload, like PHP or WebDAV
<ball> Is Ubuntu Server a sensible choice for cluster nodes?
<isle86> Gargoyle: ok, back in a few minutes, problem with my keyboard
<Gargoyle> ball: Depends on the cluster. I am sure the redhat cluster suite is now part of the main repo and hence will be supported by canonical.
<ball> Gargoyle: thanks.
<isle86> Gargoyle: hmm, something must be wrong, as arvernes.dyndns.org and ubuntu.arvernes.dyndns.org point to the same web page (I should say : directory)
<Gargoyle> pastebin your default and ubuntu files from the apache2/sites-available dir
<isle86> ok
<isle86> Gargoyle: these are they : http://rafb.net/p/uRzMKw97.html
<Gargoyle> you have your ubuntu root filesystem "inside" your default one.
<isle86> is it that part :
<isle86> 	<Directory />
<isle86> 		Options FollowSymLinks
<isle86> 		AllowOverride None
<isle86> 	</Directory>
<Gargoyle> isle86: Do you just have the ubuntu directory in /var/www ?
<isle86> right now yes. but I will have my own web site once this one will work.
<Gargoyle> isle86: OK, just make a new directory called /var/www/default, and then edit any paths in the default config file to update /var/www to /var/www/default
<isle86> done, but still same result
<Gargoyle> isle86: Restarted apache?
<isle86> I did, maybe I have to add the ServerName and ServerAlias statements to the default file.
<Gargoyle> isle86: not really, if you leave them out, then that one should answer any request to your IP with an unmatched name
<Gargoyle> isle86: re-pastebin the files.
<isle86> ok
<Gargoyle> isle86: Are you planning on using cgi-scripts, and do you want access to the apache docs on your local server?
<isle86> why not. I'm copying them. Could it be because of those statements: one allows everything as I have NameVirtualHost * and the second one says NameVirtualHost *:80
<isle86> this is the paste http://rafb.net/p/PJEo2g12.html
<WoLf_Loonie> maswan: sorry, had to go to the doctor. =\
<WoLf_Loonie> maswan: the output of what you asked before: http://pastebin.com/d8538c7e
<ball> Does Ubuntu Server include software RAID?
<Gargoyle> isle86: Those configs look OK. You can try removing NameVirtualHost from the ubuntu one.
<Gargoyle> isle86: However, my browser does not seem to think that arvernes.dyndns.com resolves properly. Perhaps that is the problem.
<Gargoyle> isle86: gtg now, hope you get the last few kinks sorted.
<isle86> ok,, thank you.
<maswan> WoLf_Loonie: that's the HEv6 thingies? it looks to me like multiple entries, but then again I don't have much clues if you are tunnling or something..
<WoLf_Loonie> Tunneling with Hurricane Electric (tunnelbroker.net)
<WoLf_Loonie> so I've named the interface Hev6
<maswan> ah, but perhaps it shouldn't go out through eth0 as well then?
<maswan> note that I'm just guessing here though
<WoLf_Loonie> Well, I only have eth0, and I'm guessing it has to go through something to reach internet =)
<maswan> yes, but if you have a tunnel, the ipv6 packets just go to the tunnel endpoint and then gets sent over v4 to the other endpoint of the tunnel
<WoLf_Loonie> it's v6 over v4, not native v6
<maswan> so only the v4 should go over eth0
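For reference, an ifupdown stanza for a 6in4 tunnel of this kind generally looks like the following. A sketch only: every address is a documentation placeholder (2001:db8::/32, TEST-NET ranges), not WoLf_Loonie's real configuration:

```
# /etc/network/interfaces -- 6in4 tunnel to a broker such as tunnelbroker.net
auto hev6
iface hev6 inet6 v4tunnel
        address  2001:db8:1f0a::2     # your side of the tunnel /64
        netmask  64
        endpoint 203.0.113.1          # the tunnel server's IPv4 address
        local    192.0.2.10           # your own public IPv4 address
        gateway  2001:db8:1f0a::1     # the broker's side; becomes the v6 default route
```

As maswan says, only the encapsulating IPv4 traffic touches eth0; the IPv6 default route should point into the tunnel interface, not out of eth0.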
<Juaco> i'm stuck with uid mappings when mounting w2k server active directory shares from ubuntu, if anyone can help it will be appreciated, thx
<mregister> i am trying to get my LAMP server (Hardy) to email php generated emails. is exim4 the best option for that
<jmarsden|work> mregister: Any reasonable MTA would work, including exim, postfix or (for minimalist approach) ssmtp
<mregister> ok so i need to create an account with my ISP for exim4 to connect to? like i would for any other client? then configure exim4 to connect to my ISP like any other client?
<jmarsden|work> mregister: You should configure exim4 to route all outgoing email to a "smarthost", which would be your ISP's mailserver, yes.
<mregister> so in the intial setup when it ask about server type i specify "internet only" all smtp traffic?
<jmarsden|work> mregister: I'm no exim4 expert... if this is all you need a mail server for, exim4 is total overkill, you might find ssmtp smaller and simpler?
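For the ssmtp route jmarsden|work mentions, the whole configuration is a few lines. A sketch with placeholder hostname and credentials; whether the ISP wants port 587 with STARTTLS is an assumption to verify:

```
# /etc/ssmtp/ssmtp.conf -- minimal relay-everything-to-the-ISP setup
mailhub=smtp.example-isp.net:587
AuthUser=account@example-isp.net
AuthPass=secret
UseSTARTTLS=YES
```

PHP's `mail()` then hands messages to ssmtp's sendmail shim, which forwards everything to the smarthost; there is no local queue or daemon to manage.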
<mregister> you maybe right. i need it to send email to sales@example.com orders@example.com etc..
<jmarsden|work> Yes; any MTA you set up can do that just fine.
<uvirtbot> New bug: #312274 in sysstat (universe) "backport sysstat 8.1.7-1 from jaunty to Intrepid" [Undecided,New] https://launchpad.net/bugs/312274
<pablop> how can I start a server on startup?
<pablop> I installed ejabberd from source and I run it using: ejabberdctl start
<jmarsden|work> ejabberd |    2.0.1-2 | intrepid/universe | source, amd64, i386
<jmarsden|work> ejabberd is packaged already, why not use the package?
<pablop> I've tried the package but there is an error
<pablop> apt-get install ejabberd
<pablop> ejabberdctl start
<pablop> RPC failed on the node ejabberd@...: nodedown
<Nafallo> invoke-rc.d ejabberd start ?
<Nafallo> also, doesn't the daemon start on install?
<pablop> when I install it from source I can start and stop it but don't know how to start it on startup
<pablop> I have a script ejabberdctl that works. it has start, stop, restart, status functions. How do I tell ubuntu to run the start on startup?
<Nafallo> you shouldn't have to.
<pablop> but I built it from source and it doesn't run on startup
<pablop> on the package run on startup (I think)
<Nafallo> ehrm. oki. can't help you then.
<pablop> thanks
<jmarsden|work> pablop: You really should try using the package, but you might be able to do something like   update-rc.d ejabberd defaults
<Nafallo> jmarsden|work: only if the init-script exists :-)
<Nafallo> (in /etc/init.d)
<jmarsden|work> True.  That's why pablop should really be working with the package, not installing from a source tarball, IMO... when a package exists for something you use it, and if it has issues, you fix it... you don't start over from a source tarball, it's just wasteful
<Nafallo> jmarsden|work: +100000
<pablop> don't know how to fix the package :)
<pablop> I thought that init-script is just a script that calls ejabberdctl start
<Nafallo> 'tisn't
<jmarsden|work> pablop: Have you filed a clear and complete bug report?
<Nafallo> starts up erlang and stuff
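Nafallo is right that the packaged init script does more (erlang node setup and so on), but the basic shape `update-rc.d` expects is just a start/stop dispatcher. A minimal sketch; the echoes stand in for real `ejabberdctl` calls so it can run anywhere, and real scripts also need LSB headers:

```shell
#!/bin/sh
# Write and exercise a toy init-style wrapper in /tmp.
cat > /tmp/ejabberd-demo <<'EOF'
#!/bin/sh
case "$1" in
  start) echo "starting ejabberd" ;;          # would run: ejabberdctl start
  stop)  echo "stopping ejabberd" ;;          # would run: ejabberdctl stop
  *)     echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
esac
EOF
chmod +x /tmp/ejabberd-demo
/tmp/ejabberd-demo start
```

Installed as `/etc/init.d/ejabberd`, such a script is what `update-rc.d ejabberd defaults` would wire into the boot sequence.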
<pablop> no I didn't file a report
<jmarsden|work> pablop: If you believe you have found a bug in a package, file a good bug report thoroughly documenting the issue.
<pablop> whrer?
<pablop> where?
<jmarsden|work> https://bugs.launchpad.net/bugs/+filebug
<pablop> thanks. I'm submitting the bug report
<jmarsden|work> pablop: BTW, on my desktop PC I just did sudo apt-get install ejabberd && sudo /etc/init.d/ejabberd start
<jmarsden|work> pablop: And it worked as expected... no bug seen...
<pablop> you are right
<pablop> that's strange. ejabberdctl start doesn't work but /etc/init.d/ejabberd start works
<jmarsden|work> So... probably you should just uninstall your locally compiled stuff, install the package, and use it :)
<pablop> there are other issues. I need some custom modules that I can install only on the build from source version
<jmarsden|work> pablop: apt-get source ejabberd and work on it to add what you need?  That way you can give back your changes to the community.
<pablop> I wish I could give back. It's their module that is not trivial to install
<pablop> I managed to install it only manually after installing erlang and then ejabberd from source
<henkjan> hmm, wanted to spend some birthday money on books, eg Daemon by Daniel Suarez
<henkjan> but i see kirkland is giving them away for free
<jetsaredim> what's the easiest way to revert a config file back to what was initially installed?
<bitsbam_> hey all
<bitsbam_> how would i go about finding the time zone that my server is using ?
<henkjan> cat /etc/timezone
<bitsbam_> thanks henkjan
<bitsbam_> i am doing a fresh install of ubuntu 8.10 and a bit confused on the options for the email server. there are options 'no configuration, internet site, internet site with smarthost, satellite system, and local only'.
<bitsbam_> the internet one is the one that has me confused.
<bitsbam_> says mail is sent and received by smtp. isn't received done by pop?
<jmarsden|work> bitsbam_: If you have a static IP and will be running a full mailserver, use "Internet site"; if you are on a home DSL or similar, and need to relay everything you send via your ISP, use "internet site with smarthost"
<bitsbam_> basically, this computer needs to be our company mail server, it needs to get and send mail right here
<bitsbam_> ok, we are the first one. static ip, we are the full blown mail server
<jmarsden|work> Sounds like it.
<jmarsden|work> BTW for POP3 you would install a POP3 server too, something like dovecot
<bitsbam_> very good. does hostname and domain name need to be the same? host is just the computer, and domain refers to everything behind the LAN.  ?
<bitsbam_> he he, package updating as in just install from source?
<jmarsden|work> bitsbam_: No, ideally you would grab the current source package using apt-get source and then modify it to use the new source tarball instead of whatever it uses now.
<bitsbam_> oh, so it compiles with the same options, uses same /etc/init.d/ scripts, etc.. i get it, cool
<bitsbam_> will do
<jmarsden|work> See https://wiki.ubuntu.com/PackagingGuide/Complete#Recipe:%20Updating%20An%20Ubuntu%20Package
<mregister> i just tried to apt-get install ssmtp and i get error conflicts with mail-transport-agent. do i need to remove exim4 first?
<jmarsden|work> mregister: Probably.  Try it :)
<bitsbam_> jmarsdenwork, got it thanks
<jmarsden|work> No problem
<WoLf_Loonie> Hello, and sorry to disturb yet again. This time I'm having a small issue with setting up Postfix and Dovecot on my server.. I used the Maildirs format, and followed the help pages on the Ubuntu website, but now root mail is not getting forwarded to my user, (I added the alias under /etc/alias) nor kept under root.
<WoLf_Loonie> and I'm getting a " Relay access denied " error back when I try to email the server.
<jmarsden|work> WoLf_Loonie: Relay access denied often means you forgot to tell postfix which domain(s) to accept mail for...
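A sketch of the fix jmarsden|work is pointing at, with placeholder domain names (substitute the real site for example.com):

```
# /etc/postfix/main.cf -- domains this box accepts mail for; "Relay access
# denied" usually means the recipient domain is missing from this list
mydestination = example.com, mail.example.com, localhost.localdomain, localhost
# after editing, reload with: sudo /etc/init.d/postfix reload
```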
<Deeps> attempting an rsync between two cifs mounts, from //nas/av to //tvpc/av via "router", which is a hardy server, i'm seeing a load avg ~2.00 and 50% iowait in top + iostat, despite there being no physical io on the router
<Deeps> ie, on router, i'm doing rsync -aPv /mnt/nas/av /mnt/tvpc/av (which are both cifs mountpoints to their relevant machines)
<Deeps> that doesn't strike me as "right"
<jmarsden|work> Deeps: It may be more appropriate to run rsyncd on a server and use rsync protocols, not SMB, on the wire?
<Deeps> possibly, however 'nas' is a linux-based, ARM-processor barebones unit, and 'tvpc' is a windows box, which may lead to more complications
<Deeps> i was simply noting that this was occurring, rather than it being much of a grievance, 'router's function is usually fairly limited, and despite the load still manages my adsl connections just fine
<Deeps> either way, alternating the process is more trouble than it's really worth, the rsync will be done in another day or two, and i highly doubt anyone will notice any issue
<Deeps> (been at it since wednesday without any real problem)
<jmarsden|work> Well, you could get a Cygwin-based rsync to run on the Windows box if you had to... but I see what you mean.  Does "router" have good NICs in it, or cheap ones that use lots of CPU?  If router uses cheap (RealTek etc.) NICS, that may be part of why this is happening.
<Elite> Hi
#ubuntu-server 2008-12-30
<onetwentyseven> Hello, anyone alive in here?
<jmarsden|work> onetwentyseven: If you have a question to ask, please just go ahead and ask it.  See /topic
<onetwentyseven> Thanks for the heads up, but I figured out my problem. I think I was just looking into the problem too much.
<ball> hello jtaji
<ball> hello sloopy
<sloopy> hello ball
<Alex_21> Hi, I needa web front-end for managing BIND. Does one exist?
<Jeeves_> ika
<Jeeves_> ola
<Jeeves_> is qemu-img the only way to convert vmware images to kvm images?
<nme> TLS under openldap package does not work?
<asterisk_user2> hello
<etronik> Does ubuntu-server apply to jeOS? or vice-versa?  I would think so...but not sure
<serwou> hi
<serwou> i've upgraded my ubuntu server from Hardy to Intrepid and I have a couple of things broken : some perl modules (Can't locate Sys/SigAction.pm) and trac : unable to stat: /usr/share/trac/cgi-bin  Any clue for this please ?
<serwou> sorry, back
<_ruben> etronik: roughly server is a subset of desktop, and jeos a subset of server .. which applies to the standard packages that are installed, all 3 have their specific kernel
<etronik> ah, thinking of jeoS as a subset of server is a good tip ! so kernel issues apart, all/most the general configuration issues apply...
<_ruben> correct
<gate_keeper_> Guys,
<gate_keeper_> what can cause this .. .
<gate_keeper_> kernel: [24925.821917] end_request: I/O error, dev nbd0, sector 225260
<gate_keeper_> kernel: [24909.885538] end_request: I/O error, dev nbd0, sector 225...
<gate_keeper_> kernel: [24909.921436] nbd0: Attempted send on closed socket
<gate_keeper_> :-/
<asterisk_user2> maybe a bad
<asterisk_user2> harddisk
<asterisk_user2> is it ide ?
<gate_keeper_> nbd0 :/
<gate_keeper_> sata
<asterisk_user2> o wait :)
<asterisk_user2> look here:
<asterisk_user2> http://www.linux-archive.org/edubuntu-development/157316-nbd0-attempted-send-closed-socket-trying-shut-off-thin-clients-loggin-screen-fails.html
<gate_keeper_> :)
<gate_keeper_> with no reply
<gate_keeper_> another problem is that the server doesn't let me connect to it...
<uvirtbot`> New bug: #312449 in samba (main) "package samba-common 2:3.2.3-1ubuntu3.3 failed to install/upgrade: subprocess post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/312449
<gate_keeper_> u can get connection refused, or it takes forever to accept the login
<NovusT> I've managed to ftp a file into a ubuntu server directory with a parentheses in the name i.e. fileexample(2).tar.gz.  Now I can't delete it - any ideas?
<_ruben> either escape the parentheses like this: \(2\) .. or use a wildcard like * or ? .. or use tabcompletion ..
<NovusT> That did it - perfect. Thanks Ruben!
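The three options _ruben lists, demonstrated (any one of them suffices; the filename matches NovusT's example):

```shell
# create a throwaway file with parentheses in the name
touch 'fileexample(2).tar.gz'
rm fileexample\(2\).tar.gz          # escape the parentheses
touch 'fileexample(2).tar.gz'
rm 'fileexample(2).tar.gz'          # or quote the whole name
touch 'fileexample(2).tar.gz'
rm fileexample?2?.tar.gz            # or let ? match each paren
```

Note that the `?` wildcard matches any single character, so the last form could also match other similarly named files; quoting is the safest habit.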
<KurtKraut> How can I list iptable rules on Ubuntu Intrepid Ibex ? iptables -L shows me nothing and I have rules set and working.
<Jeeves_> iptables -L -n -v
<KurtKraut> Jeeves_: still, no rule is prompted, but I have working rules.
<Deeps> what type of rules?
<KurtKraut> Jeeves_, Deeps, please take a look at this: http://paste.ubuntu.com/96026/
<Deeps> the key there is '-t nat'
<Deeps> iptables -t nat -L
<KurtKraut> Deeps: oh yeah, you're right. Thanks a lot.
<Deeps> by omitting -t, it defaults to -t filter
<KurtKraut> Deeps: got it. Thanks a lot.
<KurtKraut> I was struggling with this.
<KurtKraut> Does anyone recommend here a HowTo of a Squid Transparent Proxy ? All documentation I found is a bit outdated somehow
<Deeps> btw you can combine args that dont take params, eg, iptables -nvL instead of iptables -n -v -L
<RainCT> Hi
<RainCT> I get "This kernel requires the following features not present on the CPu:   0:6"  booting Ubuntu 8.04 Server Ed. on VirtualBox. Any idea?
<Jeeves_> I'm trying to use iscsi storage pools for libvirt
<Jeeves_> Anyone with experience on that ?
<Nafallo> Jeeves_: tell me if you find out how to use it. I try to use an LVM with it :-)
<etronik> RainCT:
<etronik> RainCT: in VirtualBox, enable the following checkbox with the PAE thingy..
<etronik> RainCT: in the VM configuration I think in the processor tab
<Jeeves_> Nafallo: Thanks for your help mate!
<Nafallo> Jeeves_: hehe. I would help if I knew how :-)
<RainCT> etronik: uhm.. I can't find any processor tab
<etronik> RainCT: just a sec while I call up the GUI
<etronik> sorry, it is in the Advanced tab
<etronik> Enable PAE/NX
<RainCT> ah, thanks
<RainCT> works now :)
<etronik> yep! I had that prob a few days ago myself
<nealmcb> server team meeting this morning?
<yann2> mh?
<yann2> I thought it was now :)
<nealmcb> 2 weeks ago we agreed to meet last week, and last week just a few of us showed up.  probably good to take a rest and come back energized next week.  mathiaz will help - I suspect he'll be back by then :)
<gleesond> is there a way to throttle the bandwidth of the users on my server?
<gleesond> seems that some of my users are using a bunch of bandwidth in ssh via sftp
<genii> trickle isn't bad
<gleesond> oh cool
<gleesond> thanks
<genii> gleesond: np
<tsteele> anyone available to help with domain questions?
<ScottK> Your odds are better if you actually ask the question.
<tsteele> How do I check to see if I actually am connected to my office domain?
<tsteele> I have set my IP, Gateway, and DNS settings, and have given the connection name my domain name
<tsteele> I can ping this server from other machines
<viezerd> if I do a switch from win server 2003 with vmware to ubuntu server with kvm, can I say that my hardware is supported for kvm because vmware is also working ?
<tsteele> I have tried to run domainjoin-cli and was unsuccessful
<viezerd> egrep '(vmx|svm)' --color=always /proc/cpuinfo << that I can check in ubuntu to make sure my hardware is supported but I want to make sure it is before I install ubuntu server
<AshTray-> Ebox supports interpid?
<jmarsden|work> AshTray-: Yes.  rmadison says: ebox | 0.11.99-0ubuntu11 | intrepid/universe | source, all
<AshTray-> http://pastebin.com/m72d1e6da
<Deeps> get rid of the .s before+after ebox
<Deeps> ^ebox-.*
<uvirtbot`> Deeps: Error: "ebox-.*" is not a valid command.
<Deeps> is what you want to be installing, not .^ebox-.*.
<AshTray-> E: Broken packages
<AshTray-> http://pastebin.com/m7f03a63a
<AshTray-> Strange...
<tsteele> New to Linux servers. If I can RDP into a machine on my domain, does that mean that I have connected to the domain?
<Faust-C> tsteele: no
<Faust-C> tsteele: if youre wondering about AD intergration you have 2 options, samba or likewise
<tsteele> So I need to use or install SAMBA
<Faust-C> or likewise
<viezerd> http://h18000.www1.hp.com/products/quickspecs/10902_div/10902_div.HTML << thats the one I have, will it work with kvm virtualization ?
<tsteele> I installed likewise which then I ran the domainjoin
<tsteele> I had an error on the domainjoin-cli, asking about ports to be opened the DC however the firewall is turned off
<nephish> hey all, i was in here yesterday talking about how i can run mysql version 5.1 when the 8.10 Ubuntu is only on version 5.0, It was suggested that i make a package from the Ubuntu source package, and there was a link, i have lost the link, anyone know of a good how to on this?
<jfroebe> https://wiki.ubuntu.com/PackagingGuide/Complete#Recipe:%20Updating%20An%20Ubuntu%20Package
<jfroebe> nephish, is that the link?
<nephish> Jfroebe, that's it, even the same example.
<nephish> thanks, now i know firefox has some way to bookmark......
<nephish> :)
<jfroebe> :)
<Faust-C> hmm cant get this GPT issue to go away
<Deeps> lol umm, i'm attempting to install libpam-umask, and it wants to remove the kernel
<jmedina> Deeps: it tries to remove everything
 * jmedina thinks libpam-umask has its own operating system :D
<Deeps> indeed
<Deeps> seeing a few bugs on launchpad about this
<Deeps> filed months ago
<jmedina> Deeps: so what is libpam-umask used for?
<Deeps> defining the umask systemwide
<jmedina> set umask per user? or something?
<Deeps> or per user, or whatever
<Deeps> rather than having to set in shell rc files as well as login.defs
<jmarsden|work> Deeps: Seems safer to do it the old-fashioned way, rather than removing your kernel ;)
<jmedina> Deeps: and I guess it can set umask for every pamized application not only to interactive shells
<Deeps> aye
<jmedina> sounds good
<Deeps> libpam-umask is deprecated in favour of libpam-modules, however the login.defs file still references it
<Deeps> and the package is borked
<Deeps> this has been a known bug in hardy since march and it's still there :/
<Deeps> ah well, got it working now, i'm happy
<nephish> hey all, anyone suggest a quick way to set up master-master replication? getting a copy of the database is a challenge, because it is live.
<nephish> database is 7 GB
<zxq> is there an easy way to set a program to run on startup?
<zxq> or setup a program as a service?
<nephish> zxq what program?
<zxq> ventrilo server
<zxq> only reason i ask, I want the server to be running during restarts, and i dont really want to see it when I'm ssh'ed into the server.
<nephish> zxq, does it not run as a daemon in /etc/init.d ?
<zxq> negative. its a manual install, in which its really just a tar ball unpacking and thats that.
<zxq> never had to do this manually, so its quite the adventure >.<
<zxq> if theres a way to make it a daemon in /etc/init.d im all for it.
<nephish> you may start in the /etc/init.d and look at some of the files, i think there is a command having to do with rc to make it a boot script
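nephish is pointing at the right place. A minimal init-script sketch for a tarball install like zxq's ventrilo server; the paths and the `ventrilo_srv` binary name are guesses, so adjust them to the real install. Save as /etc/init.d/ventrilo, `chmod +x` it, then register it with `sudo update-rc.d ventrilo defaults`:

```
#!/bin/sh
# Minimal init script sketch for a tarball-installed daemon.
# DAEMON and DIR are assumptions -- point them at the real install.
DAEMON=/opt/ventrilo/ventrilo_srv
DIR=/opt/ventrilo
PIDFILE=/var/run/ventrilo.pid

case "$1" in
  start)
    echo "Starting ventrilo"
    start-stop-daemon --start --background --make-pidfile \
      --pidfile "$PIDFILE" --chdir "$DIR" --exec "$DAEMON"
    ;;
  stop)
    echo "Stopping ventrilo"
    start-stop-daemon --stop --pidfile "$PIDFILE"
    rm -f "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}" >&2
    exit 1
    ;;
esac
```

Running it via start-stop-daemon also detaches the process, which answers the "don't want to see it when ssh'ed in" part.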
<etronik> what package do I need to install to get make ?
<jmedina> etronik: make
<etronik> eheh :)
#ubuntu-server 2008-12-31
<mindframe> theres nothing that should prevent ubuntu from detecting a new memory module is there?
<mindframe> stupid managed hosting provider says they installed the extra module
<nephish> anyone here using mysql replication?
<yann2> mindframe > I once had a hosting provider sending me a mail that they shut down my server because I was sending spam (which was wrong, but the number of the server was correct) - and apologized 3 days later saying they turned my machine back on again. It never went down - some poor guy must have wondered what happened to his server
<yann2> so I guess, you just made someone happy :)
<mindframe> yann2, they definitely rebooted my server several times
<mindframe> theyve rebooted mine on accident before
<jmarsden|work> mindframe: If you boot with a kernel parameter such as   mem=512M   your kernel will only see 512M... so you can artificially restrict the RAM a Linux kernel sees... I can't think of how that would happen "accidentally" though.
<mindframe> yeah i wouldnt have set that
<jmarsden|work> What does     sudo lshw -short -c memory    output?  # sudo apt-get install lshw first if necessary
<mindframe> hahahah
<mindframe> thanks :)
<jmarsden|work> No problem.  Thought that might be the info you needed ;)
<mindframe> theres 0/1c then 0/1c/0 1 and 2 ... /0/1c should list the total memory installed right?  /0/1c/0 should be a dimm slot?
<jmarsden|work> The numbers differ between boards, but yes.  My home desktop shows: /0/19/0                        memory      2GiB DIMM 800 MHz (1.2 ns)
<mindframe> boneheads in the data center
<shingen> having an issue with initramfs not detecting my fakeraid, but while in the initramfs busybox session, I can run dmraid -ay just fine... so I was wondering if anyone knew which script to add dmraid -ay to so that my fakeraid will boot?
<Alex_21> Hi, can anyone assist me in adding DNS a records to a DNS server, specifically Ubuntu Hardy with EBox and BIND9, please
<Alex_21> Where is the BIND9 config file placed by default?
<Alex_21> Please
<ball> no idea.
<persia> Alex_21, Have you tried `dpkg -L bind9` ?
<Alex_21> Ok, I figured it out. It is in /etc/bind/named.conf
<Alex_21> Good night. Bani bash
<uvirtbot`> New bug: #312648 in samba (main) "package samba 2:3.2.3-1ubuntu3.3 failed to install/upgrade: subprocess new post-removal script returned error exit status 132" [Undecided,New] https://launchpad.net/bugs/312648
<lazywalker> Hello, can I subscribe the `what's new` of ubuntu apt source? mail or rss feed.
<persia> lazywalker, e.g. intrepid-changes?  ( https://lists.ubuntu.com/mailman/listinfo/Intrepid-changes )
<lazywalker> persia: does  this mail list contains the changes of apt source?
<persia> lazywalker, Well, that and the related (hardy-changes, jaunty-changes, etc.) receive mail for each upload, which is not quite the same thing, but fairly similar.
<persia> On the other hand, if you just want changes to the apt package itself, then I don't think there is such a subscription available.
<lazywalker> persia: This one you provide just fine for me , Thanks a lot:)
<Alex_21> I added a domain to bind in EBox, but I can't find the file that contains the data for that domain. Where could it be?
<Deeps> /var/lib/bind?
<Deeps> /var/cache/bind?
<erichammond> Alex_21: /etc/bind/ ?
<Alex_21> Not in /etc/bind/
<Alex_21> Not in /var/lib/bind/ either
<Alex_21> Or in /var/cache/bind either
<Deeps> look in /etc/bind at the named.conf[.local]
<Deeps> should tell you in there where the zone files are stored
<Alex_21> Ok, thanks
<Alex_21> Nothing is in there
<Gargoyle> hello
<gcleric> Gargoyle: aloha!
<Achilleus> hello, I have installed squid, and redirected every  http: 80 request to 3128 the port of the proxy but I get the following page, however, if I use proxy in my browser I get pages normally.  http://paste.ubuntu.com/96877/
<gcleric> Achilleus:  are you trying to setup a transparent proxy?
<Achilleus> gcleric, yes exactly
<gcleric> have you see http://kuscsik.blogspot.com/2008/01/transparent-proxy-with-squid-3-on.html
<Achilleus> gcleric, thank you very much
<Achilleus> gcleric, it worked thanks
<hedin> when i try to see the statistics on a device in smokeping (http and dns probes), i get the following error... ERROR: Section 'senta' does not exist (display webpage). when i want to see some details about the "location" senta.... any ideas how to fix it?
<etronik> where is Perl's  CPAN configuration stored ?
<ProfFalken> etronik: AFAIK, it's usually user dependent and stored in ~/.cpan - others will probably correct me on this one!
<etronik> thanks, I looked in there but... a whole ton of files...
<ProfFalken> etronik: ~/.cpan/CPAN/MyConfig.pm
<ProfFalken> do you need to re-run the setup?
 * ProfFalken rephrases the question so it makes more sense...
<etronik> ywah I think so...
<ProfFalken> etronik: what are you trying to achieve? Have you already configured CPAN, or do you need to configure it from scratch?
<etronik> ProfFalken: I want to install bugzilla and I'm in the process of getting the perl modules... right ? but...
<etronik> the /usr/bin/perl install-module.pl --all   fails with make related errors
<etronik> re
 * ProfFalken doesn't use CPAN much (he doesn't code much perl) and has never had to reconfigure it
<ProfFalken> etronik: a quick google has revealed that to reconfigure CPAN, you can type the following:
<bitsbam> anyone here using master-master MySQL data replication?
<etronik> hmm quick google ? hadn't thought of that, what search terms you used ?
<ProfFalken> etronik: perl -e shell -MCPAN
<ProfFalken> etronik: o conf init
<ProfFalken> etronik: "reconfigure CPAN" - came up with the following:
<etronik> should I run that with sudo ?
<ProfFalken> etronik: http://www.linuxquestions.org/questions/linux-newbie-8/how-to-reconfigure-cpan-314795/
<ProfFalken> etronik: ah, do you need to set this system-wide?
<ProfFalken> if so, I'd do it as root, but others would know more about this than me
<etronik> ok
<etronik> thanks for the link - usefull
<ProfFalken> bitsbam: I've used it recently, but not at the moment, what's up?
<ProfFalken> etronik: no probs.
<bitsbam> trying to weigh out the pros, cons
<bitsbam> i have a database that is live, 7 GB and we need to set up a backup computer that is a little hotter than what mysqldump can do for us
<bitsbam> not to mention dumping the bigger tables takes long enough to be a problem
<ProfFalken> bitsbam: master-master would do it, do you need updates to be applied to the backup computer at the same time as the master?
<bitsbam> and the backup server is going to be located in a different Geographical location
<bitsbam> ProfFalken, well preferably
<bitsbam> we need to be able to kick off the backup server incase the main one  dies
<bitsbam> We are in the dead center of Tornado Alley
<ProfFalken> bitsbam: ok, master-master is relatively easy, you setup the 1st server to be the slave of the second and the second to be the slave of the first (if that makes sense?)
<ProfFalken> bitsbam: !! :o)
<bitsbam> makes sense
<bitsbam> so any change to either will propagate to the other?
<bitsbam> Is it pretty risky to do?
<ProfFalken> bitsbam: that's correct.  If you set the correct flags in /etc/mysql/my.cnf then you'll be able to avoid duplicate IDs etc.
<bitsbam> oh, splendid
<ProfFalken> bitsbam: http://capttofu.livejournal.com/1752.html is a good place to start.
<ProfFalken> bitsbam: I've sucessfully used this as the backend to a LAMP stack running on top of Linux-HA servers with two load balancers in front of it and Joomla suddenly became cluster aware! :o)
 * ProfFalken loves linux...
<bitsbam> Great link, ProfFalken, thanks
<ProfFalken> bitsbam: np.
<ProfFalken> bitsbam: and good luck... :o)
<bitsbam> thanks, you may be hearing from me again :)
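The my.cnf flags ProfFalken mentions for avoiding duplicate IDs are the auto-increment pair; a sketch for a two-master setup (the database name is a placeholder, and the second master mirrors this with server-id = 2 and offset = 2):

```
# /etc/mysql/my.cnf on master A
[mysqld]
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
binlog_do_db             = mydb        # placeholder database name
auto_increment_increment = 2           # step = number of masters
auto_increment_offset    = 1           # unique per master; A gets odd IDs, B even
```

With increment 2 and distinct offsets, inserts on the two masters can never allocate the same auto-increment key even during a replication lag.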
<etronik> ProfFalken: the link you sent mentions /usr/lib/perl5/%.x.x/CPAN but I have both /usr/lib/perl and /usr/lib/perl5 and none have a CPAN directory.. weird... any ideas ?
<etronik> I have in my system both a perl command and perl5.8.8 - should I have both? is one of them a link or virtual command pointing to another ?
<ProfFalken> etronik: and that's where my knowledge of CPAN/Perl runs thin on the ground - you'll need to check with someone else on this one, have you #ubuntu-uk?
<etronik> ah ok, thanks anyway, you already helped anyhow :)  re #ubuntu-uk ? you mean  try asking there ?
<ProfFalken> etronik: yup - there's a few perl wizards in there I think, failing that, check the CPAN docs or google for your local perl-mongers group.
<ProfFalken> bbl...
<etronik> will do thanks ProfFalken
<etronik> ProfFalken: for your reference (future help :-) the CPAN config file is in /etc/perl/CPAN  !! :-)
 * Gargoyle waves
<bitsbam> If i set up replication on my two MySQL databases, then i shut one down and move it to another location that will not be on the LAN, will this cause a great problem?
<etronik> besides replication probably stopping working ?
<bitsbam> etronik, is that pretty likely?
<bitsbam> would hate to have to copy the database over the net to the backup server again, i mean, good grief
<etronik> why not maintain that copy/replica on the LAN, and make an off-line copy from that replica ??
<etronik> I mean, having replication into an offline site will be less performant, and less performant means more costly some way or another
<etronik> consider having your production copy and a replica both online on the LAN and with high performance and availability, then making an offline copy from the replica with less stringent timing requirements
<etronik> like a three-way replication
<etronik> guys, how do I check (coomand line) if I have sendmail or postfix installed ?
<random00> How do you change a computers name via the command line?
<bitsbam> etronik, good idea
<bitsbam> etronik, what OS are you using?
<etronik> ermmm
<bitsbam> duh,
<bitsbam> forgot where i am
<etronik> jeOS under virtualbox under a Windows host... presently
<bitsbam> look in /etc/init.d
<ray_tru`> Is there a tool to monitor disk access in ubuntu ?
<yann2> iostat i think?
<ray_tru`> thx
<ProfFalken> etronik: thanks, I'll file that away for future reference... :o)
<etronik> how do I find out the group that Apache process belogs to ?
<etronik> belongs...
<ProfFalken> bitsbam: if you setup replication to work on DNS or similar (i.e. names, not ip addresses) then you shouldn't have an issue as long as you update your DNS entries...
<ProfFalken> bitsbam: I'd test it first though... :o)
<bitsbam> ok cool.
<bitsbam> am going to give it a try.
<peppe__> hi everyone
<peppe__> I have a problem with vlc and streaming over http
<bitsbam> if a slave ( or master if master master rep) is down for a couple of hours, can the master catch it back up? or would that be a hard fail?
<peppe__> when I open vlc and enter the ip (http://127.0.0.1:1234) in the "open network source" section, the stream doesn't start
<ProfFalken> etronik: check out the apache config file, it should be set in there somewhere (!).  If you're running Ubuntu (which I presume you are given the name of this channel!) then its probably www-data
<ProfFalken> bitsbam: Never tried outage of hours in length, but in theory yes, they should still replicate.  I'd check the mysql docs on replication to confirm this though as I've never had to do it.
<bitsbam> ok
<bitsbam> thanks
<etronik> ProfFalken: confirmed, I used top and forced apache to work and appear at the top of the list... convoluted way to find that out huh?
<oneseventeen> I've got a chrooted ssh enviornment set up, but I can see files no matter who I am signed in as
<oneseventeen> I'm assuming I messed up my umask?
<peppe__> italian?
<pteague_work> anybody know what version of debian ubuntu's hardy relates to? i'm trying to utilize dotdeb to update my hardy server's php version to at least 5.2.4 & only 5.2.3 seems to be in the hardy repos... & dotdeb doesn't seem to have ubuntu release names
<oneseventeen> to be clear, a umask of 077 would mean only the creator of a file could see it or mess with it, right?
<ProfFalken> pteague_work: cat /etc/debian_version on your Hardy box. IIRC it spits out testing/VERSION.  If it doesn't, then Etch is coming up to being 18 months old, Sarge is older than that and Lenny is newer.
<pteague_work> oh crap, nm...  seems this is worse than i thought... the server is still running gutsy >.>
<ProfFalken> pteague_work: Etch (installed on a number of servers here) has 5.2.0 as the latest version of PHP5, so I suggest that Lenny/Testing is the repo you are after.
<oneseventeen> ok, weird... I am SSH'd into my server as user2 and looking at a file owned by user1 with permissions 600
<oneseventeen> shouldn't that file be invisible to me, since I don't have read permissions?
<oneseventeen> nevermind, just realized I can't retrieve it, just see the filename
<pteague_work> dotdeb has debs compiled for latest releases from apache, mysql, & php... so they have 5.2.8
 * ProfFalken runs off to look at .deb
<pteague_work> oneseventeen: directory would have to be not readable for you to not see any filenames
<oneseventeen> pteague_work: thanks!  I'll change folder perms real quick and check again
<oneseventeen> pteague_work: thanks, worked like a champ!  much more secure chrooted sftp server now :)
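oneseventeen's reading of umask 077 is right, and pteague_work's point about the directory explains the visible filenames; the umask part is easy to check in any shell:

```shell
# umask bits are *removed* from the default creation mode (666 for files),
# so 077 leaves 600: readable and writable by the owner only
( umask 077; touch private.txt )
stat -c '%a' private.txt     # prints 600
( umask 022; touch shared.txt )
stat -c '%a' shared.txt      # prints 644
```

Listing the filename, as oneseventeen saw, is governed by the read bit on the containing directory, not by the file's own mode.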
<pteague_work> hmm...  ok, went out to check on dotdeb, my sources.d.list file is set up right & `aptitude update` download the info... but it seems to be preferring ubuntu packages over dotdeb so i'm not seeing the newer versions :(
<ProfFalken> pteague_work: you may need to set a priority for dotdeb in /etc/apt/preferences to force dotdeb to be the default.  Can't remember the syntax off-hand, but google will know! :o)
<ProfFalken> pteague_work: it's called package pinning IIRC
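The pinning syntax ProfFalken can't recall off-hand looks roughly like this; the origin hostname is an assumption, so match it to whatever host appears in the dotdeb sources.list entry:

```
# /etc/apt/preferences -- prefer dotdeb packages over the Ubuntu archive
Package: *
Pin: origin packages.dotdeb.org
Pin-Priority: 700
# 700 outranks a normal archive's default priority of 500;
# values of 1000 or more would even force downgrades
```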
<bitsbam> i have a directory that i used scp to get from another computer, it will not let me change its ownership.
<bitsbam> sudo chown mysql testdatabase -R
<bitsbam> gives me a permission denied.
 * ProfFalken is outta here...
<ProfFalken> Good luck to all those who managed to get the night-shift tonight, Happy New Year all, PF.
<ProfFalken> who
<pteague_work> i think what i really need to do is upgrade to hardy & then to intrepid...
<pteague_work> bitsbam: did you remember to use sudo ?
<disposable> i've installed slapd on ubuntu 8.04 and during installation set Administrator password to be 'password'. i then installed phpldapadmin and now, when i try to log in as cn=admin,dc=example,dc=com, i get 'bad username or password' what am i doing wrong?
<bitsbam> pteague_work,   i remembered sudo , i forgot that it was an sshfs mounted folder on another computer.
<bitsbam> :)
<bitsbam> dumb mistake,
<pteague_work> hehe
<eagles0513875> how can i encrypt my shoutcast files
#ubuntu-server 2009-01-01
<pteague> ubuntu no longer lists the md5sums for the iso's?
<pteague> geeze...  yay google for something that should be listed on the download page >.>
<ghost3> I have an idea that id like to play with using the ubuntu server but I had some questions. my idea is to build a couple server boxes for emergency communications using wireless. first question; how hard is it to have the servers cloud their  connection and clients cloud the connection too?
<ghost3> in a way like olpc did, clouding the network
<ghost3> my server would provide all means of communicating i.e. ejabberd, phone server, basic web page chat page, etc etc...is there a project already in the works for this?
<ghost3> well..if anyone has any ideas or can point me in the right direction please e-mail me at michaelhoward78@gmail.com thanks.
<ball> Happy new year.
<pteague> happy new year :)
<pteague> considering all the nifty problems the file server i'm working on was having i thought i'd do a memory test... with all 3 168-pin sdram in i had over 150 errors by the time it finished the 4th test >_<
<pteague> of course that's on top of 2 of 3 drives in a 3 drive raid5 array failing...
<pteague> after testing all 3 of the dimms (512mb & 2x 256mb) only 1 of the 256mb is any good :(  ...  at least server only requires 256mb ram...
<ball> pteague: I'd switch to your backup server until you have that machine thoroughly debugged (or replaced).
<pteague> it's my parents' home file/web server
<ball> Why mix 512M and 256M DIMMs?
<ball> pteague: I hope you've got the data backed up then.
<pteague> it's 1 of my old 1ghz mobo & cpu... it's been re-purposed so many times & i don't think the memory is what was originally in there
<pteague> yes, it's backed up...  the plan was to have 2 raid5 file servers, there's & mine, & use rsync to back them up to the other
<ball> mITX?
<ball> pteague: ah good, it sounds as though you're on top of things then.
<pteague> it's a full size atx in a full size tower case
<pteague> mine is a mini-itx with a single 1gb chip & has a backplane with 4 hard drives in it...
<ball> SCA?
<pteague> i think what killed their hard drives this last summer was the 2x dehumidifiers & the open windows in the basement... & no fan <.<  they were having issues & i came over on a weekend & it was broiling
<pteague> SCA?
<ball> SCSI, 80-pin... often hot-swap
<ball> Single Connector Attachment iirc
<pteague> oh, nope... sata... found a nifty 4 sata connector pci card & bought 1 for both boxes...  considering my file server survived the summer (i think the backplane saved it) i got them the same backplane for theirs for xmas
<pteague> i'm almost wondering if it'd save me a lot of pain & suffering to toss the old board, cpu, ram & case & buy the same hardware i got for mine
<ball> parallel PCI?
<ball> What's their existing array, PATA?
<pteague> it's sata... i got the pci sata card over a year ago
<ball> pteague: is it in a 64 bit slot, or at least a 66 MHz 32-bit one?
<pteague> the atx board is pretty old... it was my desktop eons ago it's got 1 of those... oh, what were they? isa slots? on it...  the rest are pci
<ball> You probably don't want to put SATA into one of those slots
<ball> ...especially for more than two drives
<pteague> it still has the 3.5" drive attached to the case, but that's not been connected to the motherboard in a long time >.>
<pteague> would lshw tell me what kind of pci slots they are?
<ball> I don't know Linux well enough to answer that one.
<pteague> k
<ball> The mainboard manual would, but if it's got ISA slots, they're probably basic 32-bit, 33 MHz ones
<ball> ...all sharing bandwidth
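On the lshw question above: either tool can report slot and bus details, for example (a sketch; lspci/lshw must be installed, and the verbose detail available varies by chipset):

```shell
sudo lshw -class bus -class bridge   # mainboard, chipset and bus details
lspci                                # one line per PCI device
lspci -vv                            # verbose: per-device 66MHz / 64-bit capability flags
```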
<pteague> it's just the 1 isa slot & i think it was on a separate channel or something... it does have an agp slot, 2x i think
<ball> Right.  A board that old is going to be sharing very limited bandwidth amongst its PCI slots
<ball> Hang more than a couple of drives off that, you may notice the bottlenecks
<ball> What's in it?  Pentium III?
<pteague> a 1ghz amd... athlon i think
<ball> I have a couple of Athlons sitting around here. Can't find mainboards for them.
<ball> One is Socket A (Athlon XP) and the other's Socket 939 (Athlon 64)
<ball> I quite fancy a Sempron chip, but I should probably get an Athlon 64 for the virtualisation.
<pteague> hmm...  considering the new intel mini-itx on newegg i may just grab that for my mythtv frontend & move the other boards around to upgrade my parents' file server
<ball> Atom?
<pteague> http://www.newegg.com/Product/Product.aspx?Item=N82E16813121359
<ball> ah, that's the dual-core jobbie
<ball> Nice chip.  It has amd64, but lacks VT afaik
<pteague> i'm using the previous model for my mythtv frontend & it works great... granted, i'm not serving up hd content yet
<ball> The 230?
<pteague> i think it's this 1 - http://www.newegg.com/Product/Product.aspx?Item=N82E16813121342 - in which case yes
<pteague> actually, i still have lshw up... yes, it is the 230
<ball> That was on my Christmas list
<ball> I have a case and PSU sat waiting for it.
<ball> That's a great price considering it comes with the chip.
<pteague> yep & considering the power consumption... :)
 * ball nods
<ball> Pity there's a fan on the chipset, but that can be fixed
<pteague> i'm wondering if that's really on the cpu though...  on mine the cpu is fanless, it's the gpu that has the fan
<ball> Well, it's on the N. bridge, which contains the GPU
<ball> My ideal board wouldn't even have a GPU, but I'm considered unusual in my tastes.
<pteague> for server or console only really don't need a gpu...  for the mythtv frontend i do though
 * ball nods
<techsupport> ubuntu 8.10 server installation asked me if i want to install virtual machine server, which virtual machine server are they talking about ?
<eDeperatus> hello
<magtom2003> morning all
<magtom2003> is there a dummies guide anywhere for setting up a mail server?
<yann2> apparmor broken in hardy > https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/268880 :(
<uvirtbot`> Launchpad bug 268880 in apparmor "aa-logprof : multiple Use of uninitialized value " [Undecided,Confirmed]
<shubuntu> hi, does ubuntu have a web config similar to webmin or ispconfig?
<andol> shubuntu: I believe e-box is the solution best integrated with ubuntu.
<shubuntu> i checked the website, though there wasn't much info, i need to know if i offer ebox, instead of webmin / ispconfig my clients won't curse me
<shubuntu> does it have a similar look and feel?
<andol> shubuntu: No idea, I haven't really looked into either webmin or e-box myself.
<shubuntu> what do you offer to your clients?
<andol> shubuntu: I don't have clients in that regard.
<shubuntu> oh ok
<shubuntu> thanks anyways
<AdamDV> I have SSL setup on my domain (webmin shows that it's using an SSL cert, only it doesn't seem to be mine....) But I can't seem to get the actual site to use the cert, any ideas?
<shubuntu> create your own ssl
<AdamDV> Setup a self-signed SSL certificate?
<AdamDV> shubuntu: ?
<Deeps> !webmin
<ubottu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system. See !ebox instead.
<Deeps> :(
<shubuntu> apt-get install ssh openssh-server
<AdamDV> Ermmm, webmin was just my example
<AdamDV> Webmin: favicon blue
<AdamDV> Site: favicon gray <-- I want this blue, how ?
 * AdamDV installs stuff
<AdamDV> shubuntu: Done.
<AdamDV> What now?
<shubuntu> which distro are you on?
<shubuntu> 8.04 / 8.10?
<shubuntu> http://www.howtoforge.com/perfect-server-ubuntu8.04-lts
<shubuntu> http://www.howtoforge.com/perfect-server-ubuntu-8.10
<shubuntu> just follow that
<shubuntu> will help you learn how to fix it up
<shubuntu> if you only are interested in the mail bit, just follow the mail bits
<shubuntu> !ebox
<ubottu> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/eBox
<AdamDV> 8.04
<shubuntu> then the first link is better suited for you
<AdamDV> ermmm, I need the following still:
<AdamDV> Dovecot / sendmail
<AdamDV> SSL for the main site
<AdamDV> SSL for mail
<AdamDV> WIll that link work?
<Deeps> gnuhost may be of interest to you
<persia> Do you really need sendmail, or just an MTA?
<Deeps> hmm, that not be what i was thinking of
<Deeps> gplhost
<Deeps> that's the one
<Deeps> http://www.gplhost.com/software-dtc.html
<AdamDV> I need an MTA that works with roundcube
<AdamDV> That is EASY to set up.
<AdamDV> and works with IMAP/POP3.
<AdamDV> Possible? Postfix, sendmail, or somehting else?
<persia> sendmail isn't known for it's ease of configuration.
<shubuntu> that tells you how to create an ssl
<shubuntu> you use openssl to create an rsa based ssl
<shubuntu> just follow the link
<Deeps> an ssl certificate*
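The openssl step being pointed to can be sketched as follows; the filenames and CN below are illustrative placeholders, not from the discussion:

```shell
# Generate a self-signed certificate plus RSA key in one step.
# "server.key", "server.crt" and the CN are example names.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt \
    -subj "/CN=www.example.com"

# Inspect the subject to confirm what was generated:
openssl x509 -in server.crt -noout -subject
```

Apache's `SSLCertificateFile`/`SSLCertificateKeyFile` (or Dovecot's `ssl_cert_file`/`ssl_key_file`) would then be pointed at the two files.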
<Deeps> and postfix would be the recommended MTA to use in ubuntu
<Deeps> https://help.ubuntu.com/8.04/serverguide/C/postfix.html
<AdamDV> Ok
<AdamDV> Yesterday I tried that link, and the dovecot one, to no avail....
<AdamDV> I think i should purge dovecot and sendmail and reinstall?
<persia> reinstallation is drastic.  Purging should do all you need.
<AdamDV> Oh, ya I meant reinstall dovecot and sendmail, not the OS :P
<shubuntu> AdamDV: don't purge. remove
<shubuntu> and then reinstall
<AdamDV> I think i fscxked my configs
<AdamDV> Ebox gives me this:
<AdamDV> locale: Cannot set LC_CTYPE to default locale: No such file or directory
<AdamDV> locale: Cannot set LC_MESSAGES to default locale: No such file or directory
<AdamDV> locale: Cannot set LC_ALL to default locale: No such file or directory
<shubuntu> oooh
<AdamDV> And just sits there....
<shubuntu> install locales
<shubuntu> haha
<shubuntu> aptitude install language-pack-en
<shubuntu> then dpkg-reconfigure locales
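The locale repair being suggested amounts to roughly the following (a sketch; requires root, and en_US.UTF-8 is only an example locale):

```shell
sudo apt-get install language-pack-en
sudo locale-gen en_US.UTF-8
sudo dpkg-reconfigure locales
locale    # should no longer warn "Cannot set LC_* to default locale"
```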
<AdamDV> language-pack-en gives me the same error.....
<AdamDV> And ctrl-C fixes it....
<AdamDV> O.o
<shubuntu> aptitude reinstall language-pack-en then
<AdamDV> n_1%3a8.04+20080805_all.deb) ...
<AdamDV> Unpacking replacement language-pack-en ...
<AdamDV> Setting up ebox (0.11.99-0ubuntu11) ...
<AdamDV> Errrr....
<shubuntu> first install and fix your language pack
<AdamDV> I did....
<shubuntu> then reconfigure your locales
<AdamDV> Ok..
<shubuntu> then aptitude install ebox-all
<AdamDV> Ok
<shubuntu> although ispconfig is really good too
<shubuntu> if you wanna build it, you'd be really in for a good treat with that
<AdamDV> What is it?
<AdamDV> root@ubuntu:/etc/apache2/sites-available# dpkg --configure -a
<AdamDV> Setting up ebox (0.11.99-0ubuntu11) ...
<AdamDV> O_O?
<AdamDV> Ok, ebox sucks. Fuck it. Can I see some ISPconfig screenies?
<shubuntu> http://images.google.com/images?hl=en&q=ispconfig&btnG=Search+Images&gbv=2
<AdamDV> Sorry, kernel panic'd
<AdamDV> Is postfix install and go?
<AdamDV> What was that SSL link again?
<AdamDV> Deeps?
<netz>     /SET autocreate_own_query OFF
<LinuxLover4> I have an Ubuntu Server 8.10, How can I control the fan speeds on my server?? I have an Intel SE7501B42 MotherBoard
<nealmcb> kees: have you seen the renewal of the google android root exploit? http://www.gotontheinter.net/content/rc30-downgrade-merry-christmas-everyone
<_CitizenKane_> Is php-gd installed by default with ubuntu server AMP stack?
<yann2> is anyone here using apparmor? it seems totally broken to me...
<yann2> (in hardy)
<kso512> yann2: i haven't cracked that shell yet...
<yann2> kso512 ?
<kso512> yann2: the apparmor "shell" - I haven't tinkered with it yet..
<yann2> I'm speaking of https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/268880 :(
<uvirtbot`> Launchpad bug 268880 in apparmor "aa-logprof : multiple Use of uninitialized value " [Undecided,Confirmed]
<kees> nealmcb: ooh, neato
<kees> yann2: apparmor works great for me in hardy
<yann2> your got a profile for /usr/lib/cg-bin/php5? :)
<kees> yann2: some of the helper log parsing tools could use some polish, but AA itself is great
<kees> yann2: nope, haven't tried doing apache confinement yet
<yann2> well right now I can't use it without that aa log analyzer :(
<fiXXXerMet> I have a freshly installed 8.10 64bit server install and I want to do a full apt-get dist-upgrade.  Is there a way to also save those packages (the ones being upgraded / installed) to a CD, so that if I have to reinstall, apt will get those packages from CD?
<hads> The packages will be cached in /var/cache/apt/archives/
<fiXXXerMet> So I could just copy them to a CD, and then copy them back to /var/cache/apt/archives/ after I reinstall, then apt would pick them up from there?
<ScottK> There's also a package called aptoncd.  Dunno how well it works.
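The cache-copy approach hads describes can be sketched like this (the backup directory and the CD mount point are hypothetical example paths):

```shell
# Before reinstalling: save the fetched packages somewhere burnable.
mkdir -p ~/deb-backup
cp /var/cache/apt/archives/*.deb ~/deb-backup/

# After reinstalling: put them back and apt uses the local copies
# instead of downloading them again.
sudo cp /media/cdrom/*.deb /var/cache/apt/archives/
sudo apt-get dist-upgrade
```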
<fiXXXerMet> brb, dinner
<veloc1ty> hi, i wonder if anyone can help me to figure out some strange things in my apache logfile?
<veloc1ty> get requests which look like a bruteforce attack
<hads> Sounds pretty standard
<veloc1ty> hopefully.. i'm a newbie with server stuff ;)
<veloc1ty> what i get is stuff like this: 87.106.11.67 - - [02/Jan/2009:00:02:59 +0100] "GET http://login1.login.vip.dcn.yahoo.com/config/login?.patner=sbc&login=__36ddd&passwd=penny&.save=1 HTTP/1.0 " 404 36050 "-" "-"
<veloc1ty> it's in the access.log
<veloc1ty> i'm wondering if it's harmless or something which needs to be investigated
#ubuntu-server 2009-01-02
<ball> Anyone seen dendrobates?
<hedin> hi where do i request a version bump of a package in ubuntu server?
<selinuxium> Hi all, I am trying to install ubuntu on a Poweredge 1800 with a adaptec 2610sa Raid controller. The install appears to complete but on the reboot I get a rub error 22... Any help gratefully received :)
<selinuxium> s/rub/grub/
<ropetin> selinuxium: I won't be much help, but how do you have the RAID configured?
<selinuxium> the raid is configured as raid 5. But it is blank... the OS is installed on a separate drive...
<ropetin> Hmmm, so did maybe the boot drive device change between install and reboot?  I've had that a couple of times, especially when using usb CDs or keys to install
<ropetin> Just have to edit the grub entry
<selinuxium> ropetin: OK ,will have a look, I will see if I can work it out! :)
<selinuxium> ropetin: I have booted the server with a live disc and I am looking in the /boot/grub/menu.lst   everything is pointing to hd1,0   the live disc is showing the boot drive to be sdb1... What do I need to put in here for it to read the correct drive?
<ropetin> If it really is sdb1, then that is fine.  For slitz and giggles though, you could try hd0,0?
<selinuxium> ropetin: I'll have a play.  :)
<ObsidianX> is there a better (but still free) web interface than webmin?
<kraut> moin
<selinuxium> Hi all, I am trying to install ubuntu on a Poweredge 1800 with an adaptec 2610sa Raid controller and a separate SATA drive on which the OS is loaded. The install appears to complete but on the reboot I get a grub error 22... Booted with live CD, /boot/grub/menu.lst shows everything pointing at (hd1,0).  ran grub then find /boot/grub/stage1 and it is reporting that (hd1,0) is correct.  Any help gratefully received.
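One way to re-point legacy grub from a live CD, as a sketch (the device names follow the discussion and may differ on the actual box):

```
# Inside the legacy `grub` shell, run from the live CD:
grub> find /boot/grub/stage1     # reports which (hdX,Y) holds /boot
grub> root (hd1,0)               # use whatever `find` reported
grub> setup (hd1)                # reinstall stage1 to that disk's MBR
grub> quit
```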
<Mal3ko> how do you install Pearl?
<Mal3ko> compile from source?
<selinuxium> Mal3ko: PERL... :)   sudo apt-get install perl
<selinuxium> :)
<Mal3ko> which ver?
<Mal3ko> and does that include DBI and DBD::mysql modules?
<selinuxium> Mal3ko:  Perl 5.10.0-11.1ubuntu2
<selinuxium> Mal3ko: sudo apt-get install perl libdbd-mysql-perl libclass-dbi-mysql-perl
<selinuxium> Mal3ko: I believe...
<Mal3ko> selinuxium: i think perl is installed by default in ubuntu 8.10
<Mal3ko> now i just need to check if the modules are already installed
<Mal3ko> DBI and DBD::mysql modules
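A quick way to check whether those modules are present: each one-liner fails with "Can't locate ..." if the module is missing, and prints only if it loads.

```shell
perl -MDBI -e 'print "DBI $DBI::VERSION\n"'
perl -MDBD::mysql -e 'print "DBD::mysql present\n"'
```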
<autoditac> hi. i try to install grub 1.96 (from hardy-backports) on an lvm partition on top of a md raid 10.
<autoditac> when i do "grub-install --modules='biosdisk pc raid lvm xfs ext2' '(md0)'" i get:
<autoditac> grub-probe: error: unknown device
<autoditac> grub-setup: error: Can't embed the core image, but this is required when
<autoditac> the root device is on a RAID array or LVM volume.
<autoditac> what's that supposed to mean?
<techsupport> ubuntu server installation asked me if i want to install a virtual machine server, which virtual machine are they talking about?
<uvirtbot`> New bug: #313275 in logwatch (universe) "logwatch stunnel script doesn't match any stunnel4 log entries" [Undecided,New] https://launchpad.net/bugs/313275
<techsupport> and can i install windows on top of it ?
<Deeps> techsupport: https://help.ubuntu.com/8.10/serverguide/C/virtualization.html
<techsupport> cant find anything that says windows
<uvirtbot`> New bug: #313287 in libnss-ldap (universe) "libnss-ldap uses wrong port for ldaps://" [Undecided,New] https://launchpad.net/bugs/313287
<maw_> techsupport: xen
<maw_> ...I assume
<maw_> and yes xen supports windows guest OS
<jmarsden> maw_: Why assume Xen ?  KVM is more likely.  Could even be virtualbox ?  And all of them support Windows as a guest...
<maw_> I made the assumption as many OS have xen 'ready' kernels. I know SLES, RHEL do at least. But, I don't know so I just threw that out there.
<maw_> I am sure if someone is interested they can do their own research :)
<jmarsden> http://www.ubuntu.com/products/whatisubuntu/serveredition/technologies/virtualization  suggests KVM is the Ubuntu-favoured one.
<maw_> thanks for clarifying
<stumpy_ghost> hello all. i have installed ubuntu-8.04.1-server with the LAMP stack. I have also installed otrs via apt get. the webpage is placed in server.com/otrs/index.pl
<stumpy_ghost> how can i move this to the document root?
<stumpy_ghost> so just server.com will bring up the page?
<ProfFalken> stumpy_ghost: two options, 1 - edit otrs httpd.conf/apache.conf so it uses otrs as the root for a vhost. 2 - use mod_rewrite and redirect all traffic for / to /otrs/index.pl
<ProfFalken> stumpy_ghost: I reckon a quick google for otrs vhost would probably do it...
 * ProfFalken goes back to sleep - see you all tomorrow...
 * stumpy_ghost waves bye to ProfFalken the sleepy helper
<smultron> anyone know of a good way to log system uptime/downtime to calculate percent availability?
<Yann2> haproxy does that for me :)
<Nafallo> nagios?
<maw_> smultron: cacti, nagios...
#ubuntu-server 2009-01-03
<nephish>  hey vim users, is there a way to get the syntax highlighting colors as rich as the same color theme in gvim?
<Kamping_Kaiser> huh?
<mljohns4> I recently performed a fresh Ubuntu LAMP install. Using phpmyadmin to interface with the MySQL server, I noticed that there were a couple of user accounts created by default, such as "ANY" and "debian-sys-maint". Are these necessary for the LAMP server to operate?
<mljohns4> The "ANY" user has no password set, which is of concern
<ball> hello medic33
<techsupport> I installed ubuntu server in vmware, its telling me to install vmware tools by mounting the virtual cd drive, how can i do that ?
<ball> I don't know vmware, but if you've somehow told it to map the physical CD-ROM drive to a device presented to the Ubuntu Server guest, then it's probably somewhat automatic
<hads> "mount /cdrom" ?
<shubuntu> guys has any of you got a csr signed before? can you help me out
<shubuntu> does anyone know how to create a new mail acount in postfix?
<J-_> I need some suggestions. I'm running a LAMP installation on hardy, and I need a program to update with afraid.org. What client should I use that's easy to configure?
 * delcoyote hi
<nyarla> anybody familiar with pure-ftpd? nice piece of software btw. But I can't get my stats analyser to process the transfer.log, because the log file is rw root-only. How can I make it www-data readable by default?
<J-_> http://pastebin.com/d6ebf7121 Does that look correct? it's my network interface.
<shubuntu> J-_: that's not enough
<shubuntu> you need to put in more than that
<shubuntu> there's network, and broadcast
<shubuntu> you need both of those too
<J-_> shubuntu: what will network and broadcast be? I'm on a dynamic IP
<shubuntu> ooh you can't assign a dynamic ip statically
<Deeps> huh?
<Deeps> that is enough
<Deeps> you dont need network and broadcast in there as well
<J-_> =\
<Deeps> if you want to define them, in your case, network would be 192.168.1.0, broadcast 192.168.1.255
<Deeps> your netmask already makes that clear though
<J-_> I still don't know why my domain isn't working then.
<Deeps> what's your domain?
<J-_> http://www.bytebind.com
<Deeps> i see 'test by justin' blog
<J-_> wtf
<J-_> whoops
<J-_> Deeps: could you do a screenshot for me?
<J-_> Does it looks like a drupal installation?
<Deeps> yep
<J-_> ...
<J-_> I wonder why it's not working locally then
<Deeps> if you're within your NAT'd lan, trying to connect to your WAN ip, you'll be hitting your router
<Deeps> and your router wont do the port forwarding for local (lan) clients
<Deeps> alter your local hosts file so www.bytebind.com hits the lan ip of the server (192.168.1.120?) and then try
<J-_> I mean I can use my LAN IP and my domain will work. But, prior to hardy(dapper) I could go to my .com website, and it would show up as well.
<J-_> Deeps: how would that be done?
<Deeps> are you using an ubuntu server as a router?
<shubuntu> don't set your ip manually
<shubuntu> set it through dhcp
<shubuntu> you should be good
<J-_> Deeps: no.
<Deeps> shubuntu: why would you want a server to acquire an ip from dhcp?
<shubuntu> because he doesn't know what he's advertising his ip as
<Deeps> J-_: then your upgrade from dapper to hardy shouldn't have any effect, as it'd be your router thats handling the nat traversal
<shubuntu> and where the main entry for his network is
<J-_> hmm
<Deeps> shubuntu: he has a server on his lan behind his nat router + wan gateway
<J-_> Deeps: Indeed
<Deeps> shubuntu: said server shouldn't have a dynamic address or it can cause problems, especially when port forwarding comes into play (updating all forwards to point to new ip)
<shubuntu> do a netstat -nat
<Deeps> J-_: what OS are you on?
<Deeps> J-_: (as in, your client that you're connecting to your server from)
<J-_> Deeps: Currently Ubuntu(Hardy) on my laptop.
<Deeps> J-_: open /etc/hosts in a text file
<J-_> Done
<Deeps> format is ip hostname
<Deeps> eg
<Deeps> 192.168.1.120 www.bytebind.com
<J-_> Will try it, thanks.
<Deeps> (incase that wasn't already clear)
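Concretely, using the IP and hostname guessed at above (requires root; remove the line again once testing is done):

```shell
echo "192.168.1.120 www.bytebind.com" | sudo tee -a /etc/hosts
getent hosts www.bytebind.com    # should now print the 192.168.1.120 entry
```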
<J-_> I did so, restarted network interfaces and it seems it didn't do anything
<Deeps> you may need to restart your browser, as that caches hostname->ip
<shubuntu> hey edit /etc/hosts
<shubuntu> make sure your ip is properly set for your domain.tld
<shubuntu> then echo subdomain.domain.tld > /etc/hostname
<shubuntu> then /etc/init.d/hostname.sh start
<shubuntu> then if you type in hostname
<shubuntu> you should get your correct domain
<Deeps> would be correct if he had subdomain.domain.tld set to resolve to his lan ip, which would then also be incorrect to publish to the internet
<shubuntu> then type in hostname -f
<shubuntu> no he should set his lan ip as eth1
<shubuntu> his eth0 should be his real ip
<shubuntu> the one outside the router
<Deeps> then what would his router use?
<shubuntu> eth0
<Deeps> but his router isnt his ubuntu server...
<shubuntu> his machine would connect locally using eth1
<shubuntu> and externally using eth0
<Deeps> he's not using an ubuntu server as a router...
<shubuntu> so what
<Deeps> at least, not this server in question, anyway
<J-_> I've always used eth0
<shubuntu> the router does the routing
<Deeps> this server isn't a nat gateway, it's simply another machine behind the nat router.
<J-_> That didn't do it though, I'll google some.
<J-_> Maybe I have to log in and out again.
<shubuntu> try all
<shubuntu> :P
<Deeps> possibly, i'm not very familiar with using linux desktops
<J-_> Is it possible to automount an external drive easily on a LAMP server?
<maxbaldwin> What does the *L* in LAMP server stand for?
<Nafallo> Linux
<evarlast> hence WAMP where Linux becomes Windows, or SAMP where it becomes Solaris, or MAMP where it becomes Mac
<lapo> hi
<lapo> I'm having a strange problem, I have a samba+ldap pdc on hardy (samba 3.0.28a)
<lapo> I can join other samba machine w/o problems
<lapo> on those btw wbinfo -g works while wbinfo -u don't (Error looking up domain users)
<lapo> any idea?
<moldy> hi
<akincer> I'm trying to use some variables as part of a complex pipe in a custom init.d script and it doesn't seem to be working, anybody here proficient in such things?
<akincer> nm, figured it out
<altf2o> quick question: Got Ubuntu Server 8.10 on an old 500Mhz/256MB RAM/13GB HD box. I believe i've found the proper "incremental backup solution" on the help page w/ simple scripts. However i want to create a /complete/ Image so if i ever need to reimage the drive, or a new one i can. Anyone have any tools in mind capable of doing that?
<Deeps> partimage
<hads> altf2o: dd
<Deeps> or dd, if space isn't an issue
<hads> I was trying to remember the one that skipped empty blocks but can't think of it.
<hads> partimage is probably what I was thinking of.
<altf2o> perfect. Now i do have multiple partitions, that wouldn't impact my ability to successfully reimage correct? Or might some "links" or other settings be lost in that process?
<altf2o> I.e. if i go from my current multi-partition setup, to a new install w/ a single partition & swap space only
<Deeps> partimage skips empty blocks, or at least, compresses them down so they're small enough
<Deeps> if you restore from an image, you're restoring the original partitioning layout too
<Deeps> although if you want a full disk image from multiple paritions, you may be better off piping dd through gzip
<Deeps> as partimage makes images per partition
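The dd-through-gzip idea can be sketched as below. The device name is hypothetical; the round trip is demonstrated on an ordinary file, which is a safe way to sanity-check the pipeline before pointing it at a real disk:

```shell
# Whole-disk image, compressed (do NOT run against a mounted disk):
# sudo dd if=/dev/sda bs=1M | gzip -c > sda.img.gz
# Restore:
# gunzip -c sda.img.gz | sudo dd of=/dev/sda bs=1M

# Sanity-check the same pipeline on a throwaway file:
dd if=/dev/urandom of=disk.img bs=1k count=64 2>/dev/null
gzip -c disk.img > disk.img.gz
gunzip -c disk.img.gz > restored.img
cmp disk.img restored.img && echo "round trip OK"
```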
<altf2o> ok, perfect. I'll kind of weigh my options & go from there. I would just hate to be S.O.L. in the event of a hardware crash. I'd hate to setup all the stuff on this box again, lol.
<altf2o> rephrase: The nerd in me would love it, the logical part of me would think, "Idiot should've made an image"
<Aperture> Hi Everyone. I'm running Wordpress, PHP, Apache, and MySQL on an Ubuntu machine (to use as a webserver). It is wirelessly connected to my network. Every 10 minutes or so, I can't connect to the server and if I ping it, I get the message "host is down/unreachable". I've gone through and pulled back all of the energy saving settings to no avail.
<jtaji> running a server over wireless is bound for failure
<jtaji> I wouldn't trust it anyway
<Deeps> wireless is fail
<Deeps> maintain an active ping to the server
<Deeps> or some other kind of active session with traffic going back and forth over the wireless
<Deeps> if it still drops out after a while, then it's not energy saving issues
<Deeps> it's a shoddy wifi card / access point
<WoLf_Loonie> Hello, and sorry to disturb. I'm having an issue with my server, was trying to create a self signed SSL certificate to be used with Dovecot.. but I can't get the hostname right.. should be only "hostname", instead, it states "hostname.domain", and it always returns an error when I check the emails.. anyone could point me where to look to fix this?
<WoLf_Loonie> Nevermind, I've found the error. (was a different default filename used on the docs I was going by. >.<)
#ubuntu-server 2009-01-04
<astrubhar> Hey everyone.
<astrubhar> Does anybody have experience installing suPHP on Ubuntu Hardy?
<yann2> i use mod_fastcgi...
<yann2> suPHP is rather slow if you consider using it in prod
<yann2> fastcgi is a bit buggy but faster, make your choice :)
<yann2> gotta go, good luck
<oracleofmist> hey guys
<oracleofmist> i want to make a lan all be able to connect using copanetwork.com as its domain, how would i set this up
<Kamping_Kaiser> connect to what?
<oracleofmist> eachother
<oracleofmist> would i have to setup my server with bind9?
<Kamping_Kaiser> that doesnt really answer my question
<Kamping_Kaiser> since 'connect' is a very lose word, and could mean a whole heap of things
<oracleofmist> i want all the computer on the network to be able to connect to eachother via ssh, etc using hostnames
<oracleofmist> or if i have a local smtp server resolve smtp.copanetwork.com as the server on the lan
<Kamping_Kaiser> any local dns server from dnsmasq through bind will let you setup the domain lookups. then you'll need to configure services on the systems to listen for the apropriate fqdn or hostname
<oracleofmist> alright, this is a bit of a project for me as this area is new
<oracleofmist> now when i setup my ubuntu server i set it up on copanetwork.com  however my desktop was setup prior and is not setup with a domain
<oracleofmist> how would i configure it to be on the domain?
<Kamping_Kaiser> you can do one of two things, depending on how you want the network to function. either make the dns server do all the 'smarts' (eg, know which host is which), or tell each machine what its fqdn is, then the dns server is there to tell the rest of the network
<hads> dnsmasq is neat
<Kamping_Kaiser> hads, i like it :)
 * Kamping_Kaiser is using it for this exact purpose, on his small network (it would be less neat trying to manage a Uni with it though :DD)
<oracleofmist> so dnsmasq over bind9?
<Kamping_Kaiser> oracleofmist, if you want to really understand dns, bind9 is where its at
<oracleofmist> ok
<Kamping_Kaiser> if you just want things to start working, dnsmasq might be a nicer solution
<oracleofmist> i'm just wondering because i currently have 4 computers on this network and am planning on adding a couple computers each month so i want this to be easy to setup
<oracleofmist> however handle large scale in the long run
<hads> How large do you consider large?
<hads> dnsmasq can do a lot of people's large.
<oracleofmist> well if i keep the existing infrastructure about 40 or so computers by the end of the year, with possible vpn
<oracleofmist> is it easier to manage? i'm just wondering what the advantages are of each
<hads> Well if you want DHCP and DNS it will handle that for you easily.
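A minimal dnsmasq setup for that kind of LAN might look like this; the domain is the one from the discussion, while the address range and lease time are invented:

```
# /etc/dnsmasq.conf (sketch)
domain=copanetwork.com
expand-hosts                                 # append the domain to plain names from /etc/hosts
dhcp-range=192.168.1.50,192.168.1.150,12h    # DHCP leases for the LAN
```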
<oracleofmist> iok
<oracleofmist> already off to a bad start
<oracleofmist> dnsmasq: failed to create listening socket: Address already in use
<oracleofmist> that was just installing it
<Kamping_Kaiser> because bind is running
<oracleofmist> i uninstalled bind9 but you are talking about something different?
<oracleofmist> how do i stop bind from running?
<Kamping_Kaiser> did you remove bind before installing dnsmasq?
<oracleofmist> i did  apt-get purge bind9
<oracleofmist> do i have to remove regular bind as well?
<Kamping_Kaiser> that would have errored. `apt-get --purge remove bind9` would be the proper command
<evarlast> i thought bind conflicted with bind9
<evarlast> oh, nvmnd, there is no bind package anymore
<oracleofmist> well in either case it is gone
<oracleofmist> probably restart then?
<ScottK> The purge should have stopped it as well as removed it.
 * Kamping_Kaiser isnt convinced its gone.
<hads> netstat
<oracleofmist> from netstat i don't see "bind" anywhere
<Kamping_Kaiser> oracleofmist, do you see 'named' or 'dnsmasq'?
<oracleofmist> http://pastebin.com/d276cb278
<oracleofmist> no but i see named running from "top"
<hads> That's not all of your netstat
<oracleofmist> ok killed "named" and dnsmasq starts
<Kamping_Kaiser> so bind is still installed
<oracleofmist> that was everything put out by netstat
<oracleofmist> the only thing i excluded was one line above the top that has the titles for each column
<Kamping_Kaiser> oracleofmist, fwiw, you should check you've removed bind9
<oracleofmist_> however i do see bind9-host and libbind9-40 installed
<smultron> after you `export` a env-variable, should it stick to that on subsequent logins?
<Kamping_Kaiser> no
<smultron> how do you get it to 'stick'?
<Kamping_Kaiser> you could add it to your bashrc if its just for you
<smultron> ah
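i.e. append the export to ~/.bashrc so every new shell re-runs it. Demonstrated here against a throwaway rc file rather than the real one:

```shell
rc=$(mktemp)                             # stand-in for ~/.bashrc
echo 'export MYVAR=sticky' >> "$rc"
bash -c "source '$rc'; echo \$MYVAR"     # a fresh shell now sees it: prints "sticky"
```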
<Doonz> Hey im looking at a way to speed up internet browsing. Currently i have my xp box running to my ubuntu box which is the firewall and the dns server. What can i do on the Ubuntu box to speed things up?
<smultron> Doonz: have you tested your internet speed from the ubuntu box and compared it to that on the xp box?
<Kamping_Kaiser> setup a caching proxy presumably
<Doonz> smultron yeah its almost identical
<Doonz> Like if i download a large file my speed is fine. but when im just browsing the web it seems sluggish
<smultron> perhaps DNS or cache, like Kamping_Kaiser suggested
<Doonz> see i was kinda thinking caching on the ubuntu box. Right now the ubuntu box does all the dns fetching
<smultron> something like squid cache would do
<Doonz> ok ill give that a shot
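Setting up the cache is roughly the following (requires root; the LAN range is an example, and clients or the firewall rules must then be pointed at port 3128):

```shell
sudo apt-get install squid
# key lines in /etc/squid/squid.conf:
#   http_port 3128
#   acl lan src 192.168.1.0/24
#   http_access allow lan
sudo /etc/init.d/squid restart
```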
<uvirtbot`> New bug: #313650 in mysql-dfsg-5.0 (main) "package mysql-server-5.0 5.0.67-0ubuntu6 [modified: /var/lib/dpkg/info/mysql-server-5.0.list] failed to install/upgrade: subprocess pre-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/313650
<J-_> After a break from the server, I turned it back on and I found that sudo /etc/init.d/networking restart doesn't work. So, I cd /etc/init.d/ and networking isn't there. Is there a way to reconfigure the system to bring it back? My server still works both locally(with internal IP), and people say my website is working. Though, I can't access it through my LAN with either WAN IP, or domain name.
<genii> J-_: You have no /etc/init.d/networking ? Is your /etc partition maybe getting corrupt is my wonder
<J-_> genii: How can I see if that's the problem?
<J-_> Or, at least, does anyone have /etc/init.d/networking in 8.04? Would be awesome if I could copy, and re-add it. If I can
<genii> J-_: Boot to recovery and run fsck on it comes firstly to mind
<Kamping_Kaiser> that file is in 'netbase', and i'm not sure how you'd have any networking without that file. i can only assume you've rm'd something you shouldnt, or that genii 's suggestion of a fsck is in order
<shubuntu> hi, i have two tlds for the same domain name. i want them to load the same pages and one to be an alias for the other. I set it in the Vhosts_ispconfig.conf file but it's not working, help please?
<Kamping_Kaiser> what is 'it', what provides that file?
<shubuntu> it's basically a file that is included in the apache2.conf
<shubuntu> it is used by ispconfig (a web control panel)
<shubuntu> the domain.com works when aliased for www.domain.net
<shubuntu> but
<shubuntu> www.domain.com doesn't work
<Kamping_Kaiser> i've just added both into the top of my vhosts, so cant comment on that file setup sorry
<genii> shubuntu: PErhaps ask in #apache
<shubuntu> did that
<shubuntu> no answer
 * Gargoyle stumbles into 2009. Hello! :-)
<uvirtbot`> New bug: #313735 in mysql-dfsg-5.0 (main) "ysql-server-5.0: the pre-removal script subprocess returned an error exit status 1" [Undecided,New] https://launchpad.net/bugs/313735
<ball> How do I configure Ubuntu Server to be a DHCP server on one interface only?
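For ball's unanswered question: on Ubuntu of that era the DHCP server is the dhcp3-server package, and the interfaces it listens on are set in its defaults file (a sketch; eth1 is a placeholder for the LAN-facing NIC):

```shell
# /etc/default/dhcp3-server
# Serve DHCP only on the interface(s) named here:
INTERFACES="eth1"
```

Then restart it with `sudo /etc/init.d/dhcp3-server restart`.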
<ball> umm... how do I check what version of Ubuntu Server I have installed?  uname -a just tells me about the kernel
<jtaji> ball: lsb_release -a
<ball> Ah, I'm all ibexed up then.
<ball> I have to go, time to feed my daughter
<ball> Thanks jtaji
<ball> I think I need more network cards.
<Deeps> can never have too many
<ball> I suppose I could scrap one of my oldest machines and re-deploy the NIC from that.
<Jeeves_> adns: /etc/resolv.conf:2: invalid nameserver address `2001:7b8:3:2c::1:53'
<Jeeves_> That's strange
<ball> Jeeves_: are on an ipv6 network?
<Jeeves_> Jups
<Jeeves_> And resolving works, usually
<ball> Does the :: between 2c and 1 signify 0 ?
<Jeeves_> jups
<Jeeves_> it's the AAAA-record of nscache1.bit.nl
<ball> I have to go.
<backenfutter> Call to undefined function mysql_connect()
<backenfutter> How do I get rid of that?
<ProfFalken> backenfutter: programming language?
<backenfutter> php
<backenfutter> extension=mysql.so is commented in
<backenfutter> still I get the error
<ProfFalken> backenfutter: have you installed php5-mysql?
<ProfFalken> (it's an obvious one, but still catches me out every so often!)
<backenfutter> nope, and it seems that was the missing link ;) thx
<ProfFalken> lol :o)
<ProfFalken> no probs, glad to help!
<finite9> can someone please help me with what is hopefully a simple solution?  I really need a GUI on Ubuntu Server and the wiki says to install xserver-xorg and xserver-xorg-core then either openbox or fluxbox, but how do you start X after installing these?  Starting fluxbox just says cannot connect to X server?
<ProfFalken> finite9: usually just type "startx" at a cmd-prompt, however you may need other packages...
<finite9> startx wasnt installed
<ProfFalken> hmmm, that'd do it, can't remember off the top of my head which package you need for that, but once that's installed you're probably up and running...
<finite9> I tried "sudo Xorg" and that seems to start basic X but then I tried to start fluxbox again from tty2 and it still says cannot coonect to xserver
<ProfFalken> finite9: check out your .xinitrc - should have a line in it that starts fluxbox...
<ProfFalken> finite9: if it's not there, add it! :oP
<finite9> ok, thanks!
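For the record, startx comes from the xinit package, and a minimal ~/.xinitrc that hands the session to fluxbox looks like this (a sketch):

```shell
# ~/.xinitrc  (install the "xinit" package to get startx)
# exec so the X session ends when the window manager exits:
exec fluxbox
```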
<pteague> what's the suggested fs for lvm these days?
<TTT_Travis> Hi, I have an old ubuntu server with a few drives setup as an LVM, I moved those drives to my new server and copied the config file and the mount line from the FSTAB but now when it mounts all of the files are read only and are highlighted green on the server
<pteague> it's been a while since i looked at lvm...  doesn't lvm use more than just fstab to define things?  have you gone through the process of pvcreate, etc ?
<pteague> hmm... maybe you don't need to run pvcreate on the drives... but i'm wondering if you need to set up the volume groups & such
<TTT_Travis> well they were already setup on the old server, i just copied the same lvm config files to the new one
<TTT_Travis> it mounts right, and I can see all of my files, but I just can't change anything
<pteague> ok, so you did copy the lvm configs & not just /etc/fstab ... have you looked at the lvm config files to see if they're using the correct dev ids for the drives?
<TTT_Travis> yes I copied both, but ill check the dev ids
<pteague> i know when dealing with mdadm for software raid i've had issues even in major updates of ubuntu-server with the drives ending up with different ids which screws things up & i have to regenerate the software raid
<pteague> i'm currently going through http://tldp.org/HOWTO/LVM-HOWTO/ , but it's not been updated since 2006... not sure how much lvm or any of the filesystems have changed since then
<hads> Should be still valid
<pteague> i'm just wondering if ext3 works well with lvm or if i should use something else
<pteague> then again i'd also like to mess around with zfs...
<TTT_Travis> hmmm
<TTT_Travis> the physical volumes in the LVM config do each have IDs but what would that have to do with them not being writeable
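For a moved volume group like TTT_Travis describes, the usual sequence is to rescan and reactivate it, then check whether the read-only flag sits on the logical volume or on the mount itself (a sketch; myvg/mylv and /srv/data are placeholders):

```shell
$ sudo pvscan                         # rediscover physical volumes on the moved disks
$ sudo vgscan                         # rebuild the volume group metadata cache
$ sudo vgchange -ay                   # activate all volume groups
$ sudo lvs -o lv_name,lv_attr         # an 'r' in the attr column = read-only LV
$ sudo lvchange -p rw /dev/myvg/mylv  # clear a read-only flag on the LV
$ sudo mount -o remount,rw /srv/data  # or remount if the mount went read-only
```

If dmesg shows filesystem errors, the kernel may have remounted the filesystem read-only itself, in which case fsck it before remounting rw.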
#ubuntu-server 2010-01-04
<bmunat> I use postfix and amavis and have been getting cron emails from my backup machines erroneously labeled as spam. I've added the sender addresses to 51-whitelist with a value of -15.0, but this seems to be ignored. Does anyone know how I get amavis to use the sender whitelist in 51-whitelist?
<ScottK> bmunat: Just out of curiosity, why 51-whitelist instead of 50-user?
<bmunat> hmm, I think something (tutorial, etc.) told me to do it that way
<bmunat> guess I could try putting the map in 50-user
<ScottK> I'm pretty sure it won't matter, just curious.
<ScottK> bmunat: Would you pastebin your 51-whitelist?
<bmunat> argh, it keeps saying it tripped the spam filter... will try pastie
<bmunat> http://pastie.org/765476
<bmunat> i'm not super-great with perl syntax... maybe I'm missing something. though amavis starts up, so I don't think I have a syntax error
<bmunat> do you know if matching a whitelist entry should make the spam headers not be added to the mail?
<bmunat> cuz after moving the contents of 51-whitelist into 50-user, I still see spam headers on a test mail from my backup machine
<ScottK> bmunat: Look at lines 145 - 155 of 20-user.  What you have there is incomplete.
<bmunat> ScottK: 20-debian_defaults you mean?
<ScottK> bmunat: Yes.  Sorry.
<bmunat> ahhh... I need to replace the entire data structure
<bmunat> duh, that makes sense
<ScottK> Don't forget you'll need some more bits at the end too.
<bmunat> yeah, got them... and a new test email from the backup machine still has the spam headers, but the score is much lower (close to -15, which is what I set it to), so it appears to be working
<bmunat> thank you so much for your help
<ScottK> bmunat: You're welcome.
<zzz2009> ?
<jla> test
<jmarsden> jla: This is a channel for Ubuntu server support, not testing IRC... do you have a specific question about Ubuntu server?
<jla> Sorry, I thought my client had died
<jla> at the moment just lurking
<jmarsden> jla: /ping is the way to check if you are still talking to the IRC server...
<jla> thanks
<jmarsden> You're welcome.
<jla> i do have a question: why are so many of the config files for things like amavis split up into so many bits?
<bmunat> so the updates can replace standard files
<bmunat> and leave your customized files
<ScottK> jla: The amavis thing really confused me at first, but now I really like it.
<ScottK> If it was all one big config file, when you changed anything, you'd almost certainly end up having to sort out maintainer changes versus yours on upgrades.  The way they waterfall if you've set something specific you want, maintainer config changes won't affect that.
<jla> I have been trying to set up amavis/clamav/spamassassin and I am finding the chopped-up files annoying, eg I thought i had config'd not to quarantine anything only to find a whole bunch of files in the spamassassin quarantine directory. I didn't even look, just deleted the whole damn thing
<jla> back in 5
<bmunat> oh yeah... my quarantine dir was several gigs before i noticed it was doing that :-(
<bmunat> i just added a weekly cron job to delete everything over a week old
<bmunat> tho I've never actually had to retrieve something from quarantine....
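bmunat's weekly cleanup can be a single find invocation. A runnable sketch against a throwaway directory; the real cron job would point at your amavis quarantine directory, whose location depends on configuration:

```shell
# Demo: delete quarantine files untouched for over a week.
q=$(mktemp -d)
touch "$q/fresh.gz"
touch -d '10 days ago' "$q/stale.gz"

# The actual cleanup one-liner (GNU find):
find "$q" -type f -mtime +7 -delete

remaining=$(ls "$q")
echo "$remaining"    # prints: fresh.gz
rm -rf "$q"
```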
<ScottK> jla: I believe that the setup in the Ubuntu server guide works reasonably well.  Even if it's not ideal for you, it's a good basis to start from.
<bmunat> this may be more of a mysql question, but I'm trying to get rid of the logrotate errors for the mysql logs on an ubuntu system (i.e. http://www.lornajane.net/posts/2008/Logrotate-Error-on-Ubuntu) and even though i've added the user to mysql i still get the logrotate errors...
<jla> not sure I agree, i think they are far too loose on things like spam/virii etc. we work on the basis that if it even smells like spam... discard it. why waste resources both human and electronic
<bmunat> i even did "GRANT ALL ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY PASSWORD 'foo'" and flushed privileges.... and the debian-sys-maint user still can't connect
<jla> When I first started looking for a server distro I liked the sound of ubuntu, but the more I try to admin it the less enamored I am
<jla> The quarantine example: why should bmunat have to run a cron job to clean it up? why not let the admin decide if they want to quarantine stuff or not.
<jla> ScottK: I can see some of the reason for separating some items, but they seem to have added things like whitelist in out-of-the-way places.
<ScottK> jla: Don't worry about it.  Put your own whitelist in 50-user and it will override the maintainer's.
<jla> ScottK: I guess so, but not having all the default config in one place makes trying to determine what needs to be over-ridden a pain
<jtaji> jla: this is how Debian does it
<jtaji> it does make life easier once you understand the system
<jla> jtaji: I'm not sure about that. the example I have been using is amavisd-new's quarantine directory, I don't want it or need it. if it smells like spam/virii we don't accept it, but the amavis setup from debian/ubuntu seems to work on the basis that spam should be delivered, if not to the recipient then somewhere else. why?!
<jla> Spam should be dumped asap, not delivered
<jla> enough of my rantings
<jla> back to lurking
<jtaji> hehe.. tis ok
<jtaji> you certainly can't make everybody happy with defaults, either
<qman__> I really like the way config files are split up, it's one of the more important reasons I use ubuntu
<qman__> rather than one massive unmaintainable file, it's grouped logically
<qman__> and you can still search for a directive with a grep -R if you don't know where it is
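qman__'s grep -R approach, demonstrated against a throwaway conf.d tree (the real amavis files live under /etc/amavis/conf.d/; the directive name here is a real amavis setting, the tree is synthetic):

```shell
# Demo: find every file that sets a directive in a split config tree.
d=$(mktemp -d)
mkdir "$d/conf.d"
echo '$sa_tag_level_deflt = 2.0;'  > "$d/conf.d/20-debian_defaults"
echo '$sa_tag_level_deflt = -999;' > "$d/conf.d/50-user"

# List the files that mention it, wherever the setting was split to:
grep -Rl 'sa_tag_level_deflt' "$d/conf.d" | sort
hits=$(grep -R 'sa_tag_level_deflt' "$d/conf.d" | wc -l)
rm -rf "$d"
```

In amavis's waterfall the later file (50-user) wins, which is why local overrides go there.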
<jla> qman: I might agree if all the config files were in one place, again the example of amavis: most of the config files are in /etc/amavis, however at least two files are in /usr/share/amavis/conf.d.
<jla> qman: if it were only amavis I might just mutter under my breath, however there are many packages that have been messed about. This creates problems; most importantly it is difficult to get support from the developers, and documentation provided by the developers no longer applies
<jtaji> jla: you mean /etc/amavis/conf.d/
<jtaji> oh I see them
<jtaji> http://packages.ubuntu.com/karmic/all/amavisd-new/filelist
<jtaji> I don't use it or have it installed to check.. but I'd assume those two aren't read in, and are meant to be copied into /etc/amavis/conf.d/ if desired, but aren't by default for some reason
<jtaji> I suspect that's the case, or a bug should be filed, because all system config files go in /etc, period
<qman__> yeah
<jla> jtaji: they are read, the second turns off the various checks which are then turned on or not in 15-content_filter_mod. The fact that they are hidden and not in /etc/amavis/conf.d really worries me. What else are they hiding, if they don't respect total transparency we might as well stay with M$ and their tricks.
<jla> jtaji: the stuff in /var.... is not an accident
<jtaji> now you're just being silly, it's clearly transparent as you've found it
<jmarsden> jla: For full transparency, as always, the best documentation is the source code.  If you 100% need to, read it.  Then ask Microsoft to let you read theirs, and see how far you get.
<jtaji> and as for amavis defaults, as far as I know it's more common to not drop email into /dev/null, but rather to quarantine
<jla> I only found it by accident, when something did not make sense, so i went looking.
<jla> jmarsden: reading the code is fine, if you are a programmer, however how many server admins are programmers and would understand the source? as to M$ that was my point.
<twb> jmarsden: if you pay Microsoft enough -- if, for example, you're a multinational or a G8 government, you can sometimes get access to the source.
<jmarsden> twb: True, but neither jla nor I are likely to meet those criteria.
<twb> Just mentioning it for completeness :-)
<jla> jtaji: why expend time and resources on things like spam and virii. we are seriously considering requiring pre-registration in order to send us email.
<jmarsden> jla: At minimum, a good sysadmin understands the software packaging system that his systems use, and so can quickly and easily see where any given package puts its files.  If that's not enough for you, which apparently it isn't or you wouldn't be complaining about amavis, then... time to read the source.
<jtaji> jla: fair enough, I'm just not sure your organization's requirements are in the majority
<jla> jmarsden: my concern is that in several cases I have found that there appears to be an attempt to hide some of the files involved in packages I wish to use. in other cases the packages have been modified in ways that the original developers do not support, which can lead to other problems.
<jmarsden> jla: Hidden files?  That dpkg -L PACKAGENAME does not show?  Really?
<jmarsden> Or that you cannot see from looking at the source package if you care enough about the details to do that?  Are these "hidden" files not mentioned anywhere at all in the package documentation?
 * jmarsden installs amavis in a VM to check on this claim for himself...
<jtaji> jla: bottom line, FHS states that /usr/share/ hierarchy is for architecture independent data, so if those files aren't intended to be modified, then it might be the right spot
<jtaji> but sometimes it's a judgement call... and you might have found that different distros make different decisions sometimes
<jla> jmarsden: I think you will find that at least a couple of files in amavis are not in the documentation and are not with the other config files.
<jmarsden> The two files in /usr/share/amavis/conf.d/ are specifically not to be edited, as comments within them state.  So they are architecture-independent data files, and so are in a perfectly reasonable location per the FHS.  As for being hidden -- they aren't hidden.  Their names do not start with a leading period, their perms allow everyone to read them, and dpkg -L amavisd-new clearly lists them.
<jmarsden> Lastly, they *are* mentioned in the documentation, see /usr/share/amavisd-new/README.Debian
<ruben23> hi
<jmarsden> jla: As I said earlier: "a good sysadmin understands the software packaging system that his systems use, and so can quickly and easily see where any given package puts its files"
<jla> jmarsden: the documentation for amavisd-new is on the amavisd-new web site, NOT on the debian site; after all amavisd-new is not developed by debian, just packaged
<jmarsden> jla: The documentation for the amavisd-new Debian package is on your system when you install that package.
<jmarsden> If you want to install it from a source tarball and use the upstream web site docs, that is your choice.
<jla> as postfix is documented on the postfix web site. If you are not the developer then pissing around with what you don't own is bad manners. If you want a change then you should suggest it to the developers, contribute the code, but to crap on somebody else's rug is not nice!
<jmarsden> jla: If you do not understand the value of a packaging system, use a distribution that does not use one.  If you use Ubuntu, a good sysadmin would understand the Ubuntu packaging system as part of their sysadmin responsibilities.
<jmarsden> BTW, the documentation for the Ubuntu postfix package is installed with it and more comes in the related postfix-doc package.
<jla> jmarsden: we can argue this till the cows come home. I happen to think that the modifications that debian/ubuntu make just make life more difficult, not easier.
<JanC> twb: actually, even students can get access to a lot of Microsoft sourcecode if they need it for something study-related (& after signing a very strict NDA)
<JanC> jla: <offtopic>"virii" is a non-existent word</offtopic>
<jmarsden> jla: Your claim that "the" documentation for a Ubuntu package which you just installed is by definition on the upstream web site is clearly inaccurate, and demonstrates a total misunderstanding of what a software package is, IMO.
<ScottK> jla: I can understand feeling that way.  I thought so initially too.
<ScottK> jmarsden: I think you're being a bit aggressive towards someone that is new here.
<JanC> reading README.Debian is often a good start  ☺
<jmarsden> ScottK: Probably.  He made a claim he can't substantiate regarding this whole issue of "hidden" files, comparison with MS, etc etc... I'm not sure that was the best way to start out as a newcomer, if such he is.
<JanC> README.Debian (or README.Debian.gz) should explain Debian/Ubuntu-specific configuration stuff
<ScottK> jmarsden: Certainly, but you've been around here long enough to know better.
<jmarsden> JanC: In the case being discussed, it does.  But it was claimed that this is not "the documentation".
<jla> I agree, however if I want to setup a package like amavis then I need to understand the full documentation, which is on the upstream site. in the case of postfix it is also available in the /var/doc... but that is not the case in all cases.
<JanC> jmarsden: I can imagine not everybody new to Ubuntu/Debian knows about that convention though
<JanC> jla: if this sort of docs is missing from /usr/share/doc/<packagename>/ then that's a bug you should report
<jmarsden> JanC: Agreed.  There is a difference between coming asking for help, and coming in talking about "... crap on somebody else's rug ..."
<JanC> well, if you have been trying to fix things for hours, politeness sometimes leaves something to be desired  ;)
<jla> ScottK: i am/was migrating my servers from centos/fedora to ubuntu. we currently have a number of services, among them postfix/dovecot with amavisd-new/clamav/spamassassin/postgrey spam control.
<JanC> I hope jla will remember to check those docs in the future  ☺
<ScottK> jla: That's a good combination.
<jla> jmarsden: I may have come on a little strong there.
<jmarsden> jla: Fair enough.
<jla> ScottK: we have managed to get our spam level down to about 5%. we work on the basis that "if it smells like spam we can it": nothing that gets a spamassassin score > 5 is accepted, and no viruses are accepted, even to quarantine; the risks of accidentally triggering are too great, particularly as the majority of downstream are windows.
<ScottK> jla: One thing I think you will like about Ubuntu is that we actively maintain clamav so that the current release (once it's tested and reverse depends are updated if needed) is kept available for all supported releases.
<jla> getting late and my battery is low, so I had better exit stage right.
<kingmanor> ok i switched kernels somehow
<kingmanor> now it says im running 2.6.31-16-generic-pae instead of 2.6.31-16-server
<kingmanor> how do i change it back
<thefish> anyone know of a server info script that can output wiki markup? I used one ages ago that dumps mediawiki formatted server reports, just cant remember the name of it
<twb> You want to put server logs on a wiki?
<thefish> twb: i want to put server hardware and software info on a wiki, like a parsed output of hwinfo
<twb> Ah.
<twb> I would just stick it in a PRE block, because I'm lazy
<J_P> hi all
<J_P> people, I have an interesting problem here. The df command shows that there is space, but when I try to create any dir it says no space left...
<J_P> look http://dpaste.com/140854/
<jpds> J_P: What does 'df -i' show?
<soren> J_P: You're probably out of i-nodes. "df -i", like jpds suggests, will show you how many i-nodes (instead of space) are available.
 * jpds hugs soren.
<ttx> soren: o/
<J_P> jpds: sorry for long time, look there
<J_P> http://dpaste.com/140856/
<jpds> Yeah, you're out of inodes.
<J_P> soren: http://dpaste.com/140856/
<J_P> what is that? or why that?
<J_P> jpds: how do I repair that?
<J_P> jpds: I would like the df -h output to be a correct reference..
<guntbert> J_P: you probably created *many* small files somewhere
<J_P> guntbert: humm.. I think yes. But not I. The ZoneMinder software that run on it..
<jpds> J_P: http://www.linfo.org/inode.html
<J_P> jpds: yes
<J_P> There are two ways in which a filesystem can run out of space: it can consume all the space for adding new data (i.e., to existing files or to new files), or it can use up all the inodes. The latter can bring computer use to a sudden stop just as easily as can the former, because exhaustion of the inodes will prohibit the creation of additional files even if sufficient HDD space exists.
<J_P> well, I think that solution is use less HD space..
<guntbert> J_P: your solution would be to delete *many* files - it's an inode problem, not a space problem
<twb> find /home -xdev -size -2048c or something
<twb> (I haven't been paying much attention.)
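A way to locate the directory that ate the inodes, in the spirit of twb's find one-liner: count entries per directory and sort. Shown against a synthetic tree (the "events"/"logs" names are placeholders); in real use you would point the loop at the full filesystem:

```shell
# Demo: find the directory holding the most files.
root=$(mktemp -d)
mkdir "$root/events" "$root/logs"
for i in $(seq 1 50); do : > "$root/events/frame$i.jpg"; done
: > "$root/logs/app.log"

worst=$(for d in "$root"/*/; do
          printf '%s %s\n' "$(find "$d" -type f | wc -l)" "$d"
        done | sort -rn | head -n 1)
echo "$worst"     # the "events" directory, with 50 files, tops the list
rm -rf "$root"
```

Each file costs one inode regardless of size, which is why ZoneMinder's thousands of tiny frame images can fill the inode table long before df -h shows full.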
<tos_> ok so users are able to cd wherever the hell they want, read /etc/passwd and whatever, how can i keep them to /home only!?
<twb> tos_: why does it matter?
<alex_joni> passwords are not stored in /etc/passwd, so you shouldn't worry about that
<twb> alex_joni: unless you specifically force that ;-)
<twb> Which would be a dumb thing to do
<soren> ttx: Hey, dude.
<ttx> soren: how was your holiday break ?
<soren> ttx: My INBOX says "too long".
<ttx> soren: My burndown chart says "too flat" :)
<soren> ttx: Yeah, that too. I had a /very/ short list of stuff I wanted to get done over the holidays. I haven't done anything.
<ttx> soren: that's good !
<soren> That's a point of view :)
<twb> I wanted to be left alone, and I got that for once
<twb> soren: http://www.structuredprocrastination.com/
<soren> Fascinating
<soren> truly
<twb> soren: it advocates making sure your list of things to do is REALLY long
<soren> check
<twb> If I was one of those GTD wankers, I'd tell you that it changed my life.  But it didn't, because I basically ignore the essay's advice.
 * soren is a GTD wanker
<twb> :-)
<twb> They are never far away
<soren> My favourite part of that essay is the photo with the caption: "Author practices jumping rope with seaweed while work awaits."
<twb> I say this while making lunch at 10:20PM
<ScottK> cemc: I think the issue you saw with spamassassin and amavisd-new was bug 502615.  I'm curious what you think of the proposed solution?
<uvirtbot> Launchpad bug 502615 in spamassassin "/etc/cron.daily/spamassassin should restart amavisd" [Undecided,New] https://launchpad.net/bugs/502615
 * soren lunches
<tos_> ok what im trying to do is stop users from going to other directories
<twb> tos_: why?
<cemc> ScottK: that solution only works if sa-update is used from crontab, not when the spamassassin _package_ is updated. like you said, maybe not everyone is using sa-updates
<tos_> twb... im just paranoid i guess
<ScottK> cemc: OK.  I'm open to ideas then.
<twb> tos_: then unplug your system and turn it off
<tos_> yeah
<tos_> good idea
<twb> tos_: if you're concerned about security, you should start out by working out the attack vectors and then closing them down.
<twb> You don't start by picking a random thing to lock down, and trying to do it -- it's not a productive use of time.
<cemc> ScottK: maybe it should check in postinst if there's amavis with spamcheck running... ?
<ScottK> cemc: I was thinking similarly.  The concern I have is if there is a performance impact on heavily loaded sites.  Not sure and not sure who to check with.
<cemc> ScottK: performance impact in what way? by restarting amavis?
<ScottK> Yes.
<ScottK> If you stop all the running processes on a fast moving site, is that going to be a problem?
<ScottK> I'm guessing not since it's not like you're doing this every 5 minutes.
<ScottK> I think it's more of a potential concern for the sa-update cron job.
<erichammond> http://alestic.com/2010/01/vmbuilder-ebs-boot-ami
<erichammond> feedback welcomed
<twb> erichammond: you don't say what AMI and EBS stand for :P
<twb> ec2 makes me assume they're a eucalyptus thing
<erichammond> twb: Good point.  However, if somebody doesn't know what those mean, they are not the target audience of this article :)
<twb> I figured :-)
<erichammond> EC2 is a service run by Amazon which provides on-demand, self-service, pay-as-you-go computing infrastructure including virtual servers.
<erichammond> Eucalyptus implements something like EC2 but you run it on your own hardware.
<erichammond> I use EC2 extensively.  I've never used Eucalyptus.
<erichammond> When you start a virtual server on Amazon EC2 (or Eucalyptus for that matter, but I'll stop mentioning it) you need to tell it the Amazon machine image (AMI) which you want the server to run.
<erichammond> This determines the Linux distro (or OpenSolaris or Windows version) as well as what software is installed by default and how it is configured.
<erichammond> There are a number of publicly available AMIs which you can choose from, including Ubuntu ones built by the great folks on this channel.
<erichammond> In some situations, though, you may want to build your own custom AMIs which is what this article provides steps for advanced users to take and adapt.
<guntbert> erichammond: thx for the mini lecture - appreciated here :-)
<erichammond> EBS is Amazon's Elastic Block Store which is persistent storage on EC2.  Instances run from normal (S3 based) AMIs lose everything stored on local disk when they are terminated.  Instances run from EBS boot AMIs have their root disk stored persistently and they can stop/start at will just like you would expect from a normal physical computer.
<erichammond> I'm not sure if it's useful without me talking, but here is a presentation I gave about building custom AMIs for EC2 at OSCON 2009: http://oscon2009talk.notlong.com
<erichammond> I will be updating the talk to use vmbuilder instead of ec2ubuntu-build-ami for presenting at the next venue which will have me.
<alonswartz> Hey folks, do you see any issues in deploying UEC on Amazon EC2?
<erichammond> alonswartz: It won't work.
<alonswartz> erichammond, could you explain why?
<erichammond> alonswartz: There are VM experts on this channel.  I am not one of them.  All I know is that nobody has been able to get any second-level VM working on top of EC2's custom Xen framework and lived to tell the tale.
<erichammond> (As it turns out I can be an EC2 expert without knowing much about VMs.  One of the beauties of EC2 is it hides (almost) all that stuff from you.)
<Aison> i'm using a samba 3.4.0 server and 3.4.4 on the linux client
<Aison> when copying big files, there are almost always transmission errors. So when copying from client to server the resulting md5 sums of the copied files are almost always different
<Aison> this happens only with samba. I also used NFS or RSYNC and there were never errors so far
<alonswartz> erichammond: I've been following alestic since its conception, you do great work!
<alonswartz> it seems like an interesting project to get UEC running on top of ec2, i'll give it a stab, but if those who have tried are willing to share their experience, that would be great
<pmatulis> Aison: character set issue?  just an idea
<Aison> pmatulis, don't think so. Does the character set affect the content of the file?
<pmatulis> Aison: well you can configure samba for different character sets, so it must do something at that level
<pmatulis> Aison: check your smb.conf for character sets (man smb.conf)
<Aison> I think that only affects the filenames
<pmatulis> Aison: like i said, just something to investigate (an idea)
<Aison> I just noticed, that with windows clients, there are no errors ;)
<erichammond> alonswartz: The people who have tried to run a VM on top of EC2 and failed over the last few years shared their experience on the EC2 forum: http://ec2forum.notlong.com
<erichammond> It doesn't have the best search engine, unfortunately
<alonswartz> erichammond: thanks, i'll take a look
<zul> morning
<pmatulis> mornin'
<uvirtbot> New bug: #498734 in samba (main) "nmbd stops randomly -> cannot access using hostname from Windows XP" [Low,Incomplete] https://launchpad.net/bugs/498734
<uvirtbot> New bug: #501364 in postfix (main) "Postfix not sending SMFIC_RCPT to milter, libmilter rejecting state transition" [Wishlist,Confirmed] https://launchpad.net/bugs/501364
<kirkland>  morning, server devs :-)
<jiboumans> morning kirkland
<smoser> good morning / happy new year all.
<jibouman`> morning smoser
<ivoks> does someone know whether ppa build daemons can access the net during building?
<jpds> ivoks: They can't.
<ivoks> that's bad... :/
<jpds> No it isn't.
<zul> hey smoser
<ivoks> well, depends on point of view
<zul> ivoks: its just like the regular builds
<zul> ivoks: happy new year btw
<ttx> smoser: happy 2010 to you too !
<ivoks> i have one source that uses network resource for building
<ivoks> zul: happy new year :)
<ivoks> happy new 2010 everybody!
<ivoks> warning: failed to load external entity "http://docbook.sourceforge.net/release/xsl/current/manpages/docbook.xsl"
<ttx> ivoks: then it's not fully reproducible, since it depends on "something else"
<ivoks> i guess i should install docbook-xsl and use it instead of the network resource
<ttx> ivoks: I don't know... but it sounds like a good idea :)
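ivoks' plan in concrete form: installing docbook-xsl registers the stylesheets in the local XML catalog, so the same http:// URL resolves locally, and `--nonet` guarantees no network access during the build (a sketch; mypage.xml is a placeholder, and the catalog behaviour is worth verifying against your build):

```shell
$ sudo apt-get install docbook-xsl xsltproc   # and add docbook-xsl to Build-Depends
# The URL is now resolved through /etc/xml/catalog instead of the network:
$ xsltproc --nonet \
    http://docbook.sourceforge.net/release/xsl/current/manpages/docbook.xsl \
    mypage.xml
```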
<ScottK> ivoks: I'm curious for your opinion on what we should do about bug 502615
<uvirtbot> Launchpad bug 502615 in spamassassin "/etc/cron.daily/spamassassin should restart amavisd" [Undecided,New] https://launchpad.net/bugs/502615
<ivoks> i'm wearing my cluster hat at the moment :)
<ivoks> why should it restart amavis?
<ivoks> huh?
<ivoks> amavis just uses spamassassin daemon
<ivoks> it isn't aware of spamassassin rules
<ScottK> OK, just comment in the bug then when you have a moment please.
<ivoks> it just passes mail to spamassassin
<ivoks> ok
<Jeeves_> ivoks: Are you sure?
<Jeeves_> I'm not :)
<ivoks> i'm not; i didn't check
<ivoks> but if amavis calls spam client to check mail
<ivoks> i don't see why it should be restarted
<ivoks> it's like restarting firefox cause your apache now has PHP module :)
<ivoks> or restarting amavis cause you have new clamav
<ivoks> anyway, i'll take a look as soon as i finish cluster stuff
<baffle> Anyone here familiar with NPIV? I have QLogic QLE2560 adapters and Brocade-switches with NPIV Enabled on the ports to the server. But /sys/class/fc_host/host3/max_npiv_vports is "0", and "echo '2100001b32fff001:2000001b32fff001' > /sys/class/fc_host/host3/vport_create" gives "write error: No space left on device"; I.e. no NPIV ports gets created. Ideas?
<baffle> "port_type" is; NPort (fabric via point-to-point)
<baffle> So that should be correct..
<Zim_> hello
<Zim_> can anyone tell me how to go about having a CGI script run instead of index.html?
<MagicFab> DavidLevin, hi
<MagicFab> ..and Welcome :D
<Zim_> hi
<ivoks> mathiaz: happy new year
<mathiaz> ivoks: happy new year !
<mathiaz> ivoks: how are you doing?
<ivoks> good, you?
<mathiaz> ivoks: \o/
<ivoks> jolly, eh? :D
<ivoks> i have a workaround for NFS'd /home in 9.10
<ivoks> instead of /home, mount it on /media/home and then add "mount -o bind /media/home /home" to rc.local
<ivoks> :D
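Spelled out, ivoks' workaround is two pieces (a sketch; the NFS export path is a placeholder):

```shell
# /etc/fstab -- mount the export somewhere other than /home:
nfsserver:/export/home  /media/home  nfs  defaults  0  0

# /etc/rc.local -- before the final "exit 0", bind it onto /home:
mount -o bind /media/home /home
```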
<Jeeves_> Zim_: Ehm, you'd have to set ExecCGI on that dir
<Zim_> Jeeves: thanks for the pointer
<Jeeves_> np!
<zul> hi mathiaz
<Zim_> Jeeves: Do i place the script in cgi-bin or the /var/www directory
<Jeeves_> Zim_: That depends
<Jeeves_> if you put it in the cgi-bin, your url looks different
<Zim_> Looks like I'll have to rename the index.html to something else, plus change all the urls that link to it, and put the script in the www folder
<Jeeves_> Uh, possibly
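An alternative to renaming index.html: list the script first in DirectoryIndex so Apache runs it by preference (a sketch of an Apache2 fragment; the path and the index.cgi name are placeholders):

```apache
<Directory /var/www>
    Options +ExecCGI
    AddHandler cgi-script .cgi
    # index.cgi is tried before index.html, so the script runs instead:
    DirectoryIndex index.cgi index.html
</Directory>
```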
<mathiaz> zul: hi - what's the story about autofs?
<mathiaz> zul: is autofs 5 stable enough to be moved to main in place of autofs 4?
<zul> mathiaz: i think so both rhel and suse use it
<zul> mathiaz: i'm going to do the MIR request today
<mathiaz> kirkland: ttx: are the current lucid eucalyptus packages functional?
<mathiaz> kirkland: ttx: or should I stick to the alpha1?
<kirkland> mathiaz: they should be functional
<kirkland> mathiaz: i think we'd like to know if they're not :-)
<mathiaz> kirkland: ok - I'll test them
<jibouman`> smoser: eta 3 mins
<ttx> mathiaz: there are a number of issues
<ttx> mathiaz: like bug 499491
<uvirtbot> Launchpad bug 499491 in openvpn "tun module no longer automatically available (was: Euca 1.6.2 fails to boot an instance)" [High,Invalid] https://launchpad.net/bugs/499491
<zul> jibouman` and ttx: when you got a sec can you comment on bug 491510
<uvirtbot> Launchpad bug 491510 in monit "MIR for monit." [Undecided,Incomplete] https://launchpad.net/bugs/491510
<Zim_> folks, am I going nuts? I've just been told that there is not CGI language!?! I thought it was perl
<ScottK> Zim_: I do cgi in Python.
<Zim_> *no not not
<ScottK> cgi and programming language are reasonably orthogonal.
<Zim_> ScottK: am i understanding correctly when Im thinking you write a python script, but save it as .cgi?
<ScottK> It was more complex than that, but yes.
<ScottK> I also first set it up 5 years ago, so I don't recall the details.
<ttx> cgi is an interface, not a language
<ttx> a way to call programs from a web server.
<ogra> thats what the I stands for ;)
<ogra> in CGI :)
<ttx> CGL ? :)
<ogra> hehe
<Zim_> thanks chaps
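As ttx says, CGI is just an interface, so even plain shell works; a minimal sketch (the path, handler and DirectoryIndex settings are illustrative, not Zim_'s actual config):

```shell
#!/bin/sh
# /var/www/index.cgi -- a CGI "page" in plain shell; the web server
# executes it and relays its stdout (headers, blank line, then body).
printf 'Content-Type: text/html\r\n\r\n'
echo "<html><body>Hello from a shell CGI script</body></html>"
```

Mark it executable (chmod +x), and in the vhost add `Options +ExecCGI`, `AddHandler cgi-script .cgi` and `DirectoryIndex index.cgi` so it is served in place of index.html.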
<ttx> zul: about bug 491510 -- you should check with the ubuntuone people
<uvirtbot> Launchpad bug 491510 in monit "MIR for monit." [Undecided,Incomplete] https://launchpad.net/bugs/491510
<ttx> how badly needed it is, and would it be server or clientside.
<zul> *sigh* ok
<Jeeves_> Hmmm
<Jeeves_> Is there a channel about Launchpad/PPA on Freenode?
<Pici> Jeeves_: #launchpad
<Jeeves_> I could've guessed that :)
<jla> ScottK: a restart of amavis may be needed in order to fix any stale sockets. If I remember correctly amavis uses the spamassassin socket and smtp protocol to pass email for checking. If you stop and restart
<baffle> If anyone happens to google this: Brocade switches have to be in "Access Gateway Mode" to enable NPIV to hosts, even if you've set each port to "Enable NPIV". If it is not set up in such a way, max_npiv_vports is reported as 0. Using "Access Gateway Mode" disables all other normal switch features like zoning, so that kinda blows.
<ivoks> jla: right, restarting amavis is pointless
<ScottK> ivoks: I think that's the opposite of what he said?
<jla> If you restart spamassassin i think amavis will be using the "wrong" socket.
<ivoks> that's a bug in amavis
<ScottK> Does it eventually straighten itself out or does it last forever?
<ivoks> i'll do some testing tomorrow
<ScottK> Thanks
<ivoks> and hopefully start working on mail stack
<jla> How is that a bug? amavis picks up spamassassin's socket when it starts and keeps it open; you kill the sa, the socket is now stale, and a reload/restart fixes the problem
<kirkland> ttx: so eucalyptus.conf ....
<kirkland> ttx: to fix Bug #458211, i'd like to change the way its generated
<uvirtbot> Launchpad bug 458211 in eucalyptus "eucalyptus.conf, euca_conf is confusing and underdocumented" [Undecided,Confirmed] https://launchpad.net/bugs/458211
<kirkland> ttx: and fix Bug #487275, too
<uvirtbot> Launchpad bug 487275 in eucalyptus "eucalyptus.conf should not be a conffile" [High,Triaged] https://launchpad.net/bugs/487275
<kirkland> ttx: i'd like to move all of the documentation into a manpage
<kirkland> ttx:  and generate the file such that it's not a conffile
<kirkland> ttx: and only write the pertinent parts to eucalyptus.conf
<ScottK> ivoks: If you start working on it, that's great.  I've had no time.  You might want to start with making the dovecot-postfix patch in dovecot apply so the package will build.
<kirkland> ttx: as the eucalyptus-nc's eucalyptus.conf (for instance) contains a bunch of cruft that's not pertinent to NCs
<ttx> kirkland: you would generate it from user-configurable or state-driven bits ?
<ivoks> ScottK: yep...
<ttx> s/or/and/
<ivoks> take care... got to go now
<ttx> kirkland: I have two issues with it currently
<jla> ScottK: Again from memory the condition lasts until amavis is restarted. Also from memory you may also need a restart because amavis builds its own sa config dynamically
<ttx> kirkland: one is the "non-pertinent" thing
<ttx> kirkland: the other is the "let's make euca_conf rewrite parts of it" approach
<ScottK> jla: Thanks.  I'll be interested to see what ivoks' testing produces.  He's pretty smart about this stuff.
<kirkland> ttx: i don't think i understand your question ...
<kirkland> ttx: user-configurable == debconf ?
<ttx> kirkland: my question doesn't make sense, i'll rephrase
<ttx> kirkland: what do you mean by "generate the file" ?
<kirkland> ttx: well, drop it from the files installed by the package
<kirkland> ttx: and generate it in the postinst, if it doesn't already exist
<kirkland> ttx: that was my initial thought
<kirkland> ttx: debconf'ing each of the items would be nice, i thought
<kirkland> ttx: since we already do some of them
<kirkland> ttx: and would put a prettier front end on it
<kirkland> ttx: make it far more usable
<ttx> kirkland: one issue is that euca_conf rewrites eucalyptus.conf
<kirkland> ttx: though that's slightly more work that I'd want to commit to for a2
<ttx> kirkland: another way to do it would be to make eucalyptus.conf source several files, some of them user-configurable, some of them state-driven (like the NODES= line)
<kirkland> ttx: i think i like that idea ...  create a hierarchy of sourced files
<ttx> and make sure euca_conf only messes with /var/lib/eucalyptus/configured_nodes and friends
<kirkland> ttx: we could install a "base" eucalyptus-base.conf with the common defaults
<ttx> kirkland: ultimately I'd like the component registration process not to require running as root
<kirkland> ttx: then debconf could write eucalyptus-custom.conf
<kirkland> ttx: ah, debconf requires root, right
<ttx> kirkland: at that point autoregistration through euca_conf needs to rewrite eucalyptus.conf
<kirkland> ttx: right
<ttx> kirkland: so all the registration tasks run as root, which is... scary.
<kirkland> ttx: okay, i'll think on this a little more, but I like the idea of a hierarchy of sourced files
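ttx's sourcing idea can be sketched in shell (file names and the NODES variable here are illustrative, not the actual eucalyptus layout):

```shell
# /etc/eucalyptus/eucalyptus.conf (sketch): user-configurable defaults,
# then a state-driven fragment that only euca_conf ever rewrites
HYPERVISOR="kvm"

# the fragment would contain e.g.: NODES="10.0.0.11 10.0.0.12"
if [ -r /var/lib/eucalyptus/configured_nodes ]; then
    . /var/lib/eucalyptus/configured_nodes
fi
```

Since `.` simply inlines the fragment, euca_conf can rewrite the state file without ever touching the file in /etc, which is exactly the conffile-vs-rewritten-file split that bug 487275 complains about.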
 * ScottK looks at Bug #502071 and thinks "Not bad".  Fixes uploaded, tested and released for production in two days on 5 releases.
<uvirtbot> Launchpad bug 502071 in spamassassin "FH_DATE_PAST_20XX scores on all mails dated 2010 or later" [High,Fix released] https://launchpad.net/bugs/502071
<kirkland> ttx: what about this ....
<kirkland> ttx: the package installs a base /etc/eucalyptus/eucalyptus.conf (as it does now)
<ttx> kirkland: if we can change it so that euca_conf can run as "eucalyptus" it's slightly less scary.
<kirkland> ttx: and euca_conf reads that, then sources ~eucalyptus/.eucalyptus.conf
<kirkland> ttx: and always writes to ~eucalyptus/.eucalyptus.conf
<kirkland> ttx: right ... i'm just thinking how to make this look like every other normal program
<ttx> kirkland: that would work... though it might be confusing
<kirkland> ttx: we'll have a root-administered global configuration file
<ttx> the beauty of it is that you can almost keep the current one
<ttx> hm, scratch that. We don't want to keep it :)
<kirkland> ttx: bbiab, on the phone with jibouman`
<ttx> kirkland: yes, I was thinking about sourcing /var/lib/eucalyptus/configured_nodes at the end of /etc/eucalyptus/eucalyptus.conf, and have euca_conf only write to "configured_nodes"... which amounts to the same
<ttx> kirkland: though our upstart scripts, I think, happily source eucalyptus.conf as root, istr
<ttx> so escalation from eucalyptus to root would be pretty trivial.
<ttx> (if not already)
<ttx> kirkland: I think that security pass can be solved post-alpha2, I just mention it so that your design doesn't end up orthogonal to it.
<Mike_lifeguard> Hello. I wanted to check ssh keys with ssh-vulnkey, and I have installed openssh-blacklist and openssh-blacklist-extra. However, I still get one listed as unknown:
<Mike_lifeguard> /home/alphos/.ssh/authorized_keys:2: Unknown (blacklist file not installed): RSA 1023 ad:01:41:d1:9e:0d:fe:c5:5f:13:91:7c:3f:8f:6c:8c /home/alphos/.ssh/authorized_keys
<Mike_lifeguard> I see that the length is 1023 - that's wrong, isn't it? Should be 1024
<bdeb> Hey, I am having MPT Fusion problems, it this the place for questions?
<bdeb> I have an LSI p211-4i SAS controller.  lsiutils says that ' 0 MPT ports found'
<bdeb> there is no ioc0 listed in /proc/mpt.  just a summary file and a verions file
<karmst> hello everyone
<karmst> I'm trying to find what the best way is to make incremental image backups of Ubuntu?
<karmst> can anyone help?
<bdeb> I use Bacula for backups.  www.bacula.org
<karmst> is there anything to do live backups?
<karmst> so you can be using the system and still get a full backup
<karmst> Like a VSS?
<bdeb> I believe you would want to put your volumes on LVM.  Then you can snapshot them, mount the snapshots, and then back up.
<bdeb> That's how I backup Zimbra
<karmst> ah
<karmst> have you had to do a restore before using that method?
<karmst> or a bare metal restore?
<bdeb> no i haven't done a bare metal restore.  but I believe you can.
<karmst> ok
<karmst> thank you
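bdeb's snapshot approach, spelled out as a sketch (VG/LV names, sizes and paths are made up; needs root and free extents in the volume group):

```shell
#!/bin/sh
# live backup via an LVM snapshot: freeze a point-in-time view,
# archive it, then drop the snapshot
set -e
lvcreate --size 1G --snapshot --name home-snap /dev/vg0/home
mkdir -p /mnt/home-snap
mount -o ro /dev/vg0/home-snap /mnt/home-snap
tar czf "/backup/home-$(date +%F).tar.gz" -C /mnt/home-snap .
umount /mnt/home-snap
lvremove -f /dev/vg0/home-snap
```

The snapshot only has to hold blocks that change while the backup runs, so it can be much smaller than the origin volume.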
<smoser> mathiaz, ping
<mathiaz> smoser: o/
<smoser> had you started any of the ec2-config "plugins"
<mathiaz> smoser: zappy vew hii-ear!
<smoser> why thank you sir. the same to you and yours
 * zul thinks mathiaz might still be drunk
<mathiaz> zul: it's been 4 days now
<mathiaz> smoser: well - by plugins, you mean writing upstart jobs?
<zul> well you could have gone on a bender but anyways
<mathiaz> smoser: if so - nope - not yet
<smoser> the closest i got to a mathiaz level bender was watching "hangover" the movie on new years eve :)
<smoser> mathiaz, right.
<zul> smoser: i'll tell you about my greyhound bender eventually
<smoser> ok, i didn't think so. zul is going to take a stab at those.
<mathiaz> smoser: the main issue is reading the yaml config file
<smoser> ?
<mathiaz> smoser: depending on when you run the upstart job, you may not have access to /usr yet
<mathiaz> smoser: which means you may not be able to parse the yaml file
<mathiaz> smoser: you basically need to be able to code: if the option apt-update is in the config file and is set to yes, run this *code*
<smoser> not a worry. i know its a mess, but this is ec2-init. specific purpose, /usr == /
<mathiaz> smoser: doing so in perl, python, ruby is easy
<mathiaz> smoser: oh ok - so if we assume that /usr == /
<smoser> i think to be reasonable at this point we have to assume that.
<mathiaz> smoser: then all upstart jobs can depend on / being mounted
<smoser> i think they can depend on /usr being mounted just as well
<mathiaz> smoser: so writing an upstart job that checks whether apt-update is set to yes is easy
<smoser> do you know in upstart, can you write "i depend on /usr" which will be synonymous with 'i depend on /' if /usr == / ?
<smoser> mathiaz, right, i know they're easy, just need to start knocking them off
<mathiaz> smoser: yop
<mathiaz> smoser: as far as /usr dependency I don't know
<mathiaz> smoser: (my yop was for your call for starting to write them)
<smoser> fwiw, we're no more broken than we were before.
<smoser> ec2-init was set to run long before /usr was guaranteed to be mounted
<smoser> in karmic
<smoser> and previous
<smoser> thats not to say its not broken, but we've been broken in that assumption before.
<mathiaz> smoser: oh - so we can write upstart jobs that start on started mountall = / and started cloud-config?
<smoser> well i think that you dont have to depend on mounted /
<zul> so if i get this straight you have a python-yaml config file and you have a plugin in ec2-init that does stuff based on the config file right?
<smoser> as cloud-config will not be emitted until that is the case
<smoser> as it depends on it
<smoser> zul, the plan is that we add these config parsers as upstart jobs
<mathiaz> smoser: agreed
<mathiaz> zul: https://wiki.ubuntu.com/ServerLucidCloudConfig
<mathiaz> zul: the design section outlines the plan
<smoser> right. mathiaz was helpful and wrote things down :)
<smoser> so zul the idea is that each little config snippet has a corresponding upstart job
<smoser> and it reads an environment variable that is set to say where the config file is. then, it reads its section, and responds accordingly
<zul> so the upstart job tells ec2-init to do whatever?
<mathiaz> zul: not really
<mathiaz> zul: upstart jobs are independent of ec2-init
<zul> ok
<mathiaz> zul: they wait for the cloud-config event to be fired by ec2-init
<mathiaz> zul: and then they read the relevant configuration file
<mathiaz> zul: based on the content of the configuration file, they do whatever they need to
<zul> i think I get it
<smoser> right. its fairly simple design, mathiaz gets credit.
<mathiaz> zul: for example, apt_update upstart job checks if apt_update is set to true, if so it runs apt-get update
<zul> so is cloud-config done?
<smoser> the event, no. nothing is delivering that yet
<smoser> but that will be fired by ec2-init (which may/should be renamed)
<mathiaz> zul: you can start to write the upstart job though
<zul> ok
<smoser> right.
<mathiaz> zul: what matters here is to agree on the configuration syntax
<mathiaz> zul: there is an example on the wiki page
<mathiaz> zul: and I've been discussing some part of it (the default for apt-update) with erichammond on ubuntu-cloud@
<zul> so you need me to write the configuration files?
<zul> yeah im just going to the discussion now
<mathiaz> zul: I'd suggest you to write the upstart jobs
<mathiaz> zul: that will parse the configuration file
<mathiaz> zul: as the syntax and proposed configuration options are already laid out
<zul> mathiaz: k
<mathiaz> zul: there is an example/reference configuration file at https://wiki.ubuntu.com/ServerLucidCloudConfig
<zul> mathiaz: ok ill be bugging you guys alot then
<smoser> thank you mathiaz zul
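A rough sketch of one such job, per the design above (the job name, the CLOUD_CFG variable and the grep stand-in for a real yaml parser are all assumptions; the agreed syntax lives on the wiki page):

```
# /etc/init/cloud-config-apt-update.conf (sketch)
description "run apt-get update when the cloud config asks for it"

# fired by ec2-init once / (and the config file) are available
start on cloud-config

task
script
    # CLOUD_CFG is assumed to be exported by the emitter to point at
    # the config file; the grep stands in for properly parsing yaml
    if grep -q '^apt_update: *true' "$CLOUD_CFG"; then
        apt-get update
    fi
end script
```

Each option gets its own small job like this, so they stay independent of ec2-init and of each other.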
<kinja-sheep> Hello, I'm trying to use dnsmasq for DNS caching. I got that to work nicely. However, I'm struggling with the DHCP server. I have a machine connected to the laptop (via a router in switch mode). What am I doing wrong? It can't obtain an IP address. :(
<kinja-sheep> Any assistance would be nice. I'm still working on this one.
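For a dnsmasq DHCP server, the usual missing piece is a dhcp-range line; a minimal sketch (interface name and address range are assumptions):

```
# /etc/dnsmasq.conf (fragment)
interface=eth0                             # LAN-facing NIC
dhcp-range=192.168.0.50,192.168.0.150,12h  # lease pool and lease time
```

Also worth checking that no other DHCP server (e.g. the router before it went into switch mode) is still answering on the same segment.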
<naito_> Hello .... Anyone with the cloud running ?
<naito_> I just installed Ubuntu Server 9.10 with Eucalyptus, and when i run "euca-describe-availability-zones verbose", it tells me that the max VMs that i can run are 0. Anyone had this problem ?
<kees> soren: oops, I was in the wrong channel.  vmbuilder.  yup, added it everywhere needed.  still explodes.
<naito_> Anyone ?
<soren> kees: traceback?
<kees> soren: one sec
<kees> soren:
<kees>   File "/usr/lib/python2.6/dist-packages/VMBuilder/plugins/ubuntu/distro.py", line 118, in preflight_check
<kees>     mod = __import__(modname, fromlist=[self.vm.suite])
<kees> ImportError: No module named lucid
<soren> kees: /usr/lib/python2.6/dist-packages/VMBuilder/plugins/ubuntu/lucid.py exists?
<kees> soren: ah, craps.  I have it in /usr/share/pyshared/VMBuilder/plugins/ubuntu/lucid.py
<soren> kees: Yeah. Yay, pycentral.
<kees> *facepalm*
<kees> yeah, working now.  thanks.  :)
<kees> soren: will you merge my branch for ext4 support and lucid guest support?  lp:~kees/vmbuilder/use-ext4/
<soren> kees: Not right now, but yes, sure.
<kees> soren: ok
<uvirtbot> New bug: #502855 in bind9 (main) "package gadmin-bind 0.2.3-5 failed to install/upgrade: problemi con le dipendenze - lasciato non configurato (dup-of: 437783)" [Undecided,Confirmed] https://launchpad.net/bugs/502855
<adac> Some time ago I installed gnome desktop on my server and now I want to remove it again. How to do that? I tried sudo apt-get remove ubuntu-desktop but it seems that this package was not the one i installed back then...
<guntbert> adac: that is only a meta package
<adac> guntbert, how can I find out which one is the real one?
<guntbert> adac: look at its dependencies - remove them
<adac> guntbert, how do I find out the dependencies on command line?
<guntbert> adac: apt-cache show <package> (its quite a lot :))
<adac> guntbert, Ok! let's see if I get lucky :D
<mathiaz> kirkland: hi!
<mathiaz> kirkland: I'm trying to install UEC and the step to discover new nodes doesn't seem to work
<mathiaz> kirkland: sudo euca_conf --no-rsync --discover-nodes
<mathiaz> kirkland: http://paste.ubuntu.com/351456/
<kirkland> mathiaz: hmm
<kirkland> mathiaz: are your nodes broadcasting the avahi message?
<mathiaz> kirkland: well - there is an avahi-publish process running on the node
<mathiaz> kirkland: http://paste.ubuntu.com/351457/
<mathiaz> kirkland: hm - I've rebooted the CC
<mathiaz> kirkland: it finds a node now
<kirkland> mathiaz: reboot was required?
<kirkland> mathiaz: euca version?
<mathiaz> kirkland: it was a package installation
<mathiaz> kirkland: using the latest from lucid
<mathiaz> kirkland: now I get this: http://paste.ubuntu.com/351459/
<kirkland> mathiaz: which is what?
<kirkland> mathiaz: we have made a few uploads today
<mathiaz> kirkland: 1.6.2~bzr1120-0ubuntu1
<jibouman`> time to call it a night; see you guys tomorrow
<kirkland> jiboumans: later
<kirkland> mathiaz: okay, thanks
<mathiaz> kirkland: well - it cannot find the node anymore now
<kirkland> mathiaz: ?  it didn't, then it did, now it's not again?
<mathiaz> kirkland: it didn't, it rebooted, it did, it's not again
<mathiaz> kirkland: it == CC
<mathiaz> kirkland: the NC hasn't moved (and I haven't added it to the CC)
<mathiaz> kirkland: well - I've rebooted the CC - and the node can be discovered
<kirkland> mathiaz: but only for a short while
<kirkland> mathiaz: then it can't?
<mathiaz> kirkland: let me wait for the short while
<mathiaz> kirkland: the next issue is that it detected the ipv6 address
<mathiaz> kirkland: and then it fails to login if I try to add the node
<mathiaz> kirkland: http://paste.ubuntu.com/351462/
<mathiaz> kirkland: ok - and now the node cannot be discovered anymore
<kirkland> mathiaz: hrm, that stinks
<mathiaz> kirkland: hm well. Seems like it's working now
<mathiaz> kirkland: I'm confused
<mathiaz> kirkland: but at least it seems that the CC is talking to the NC
<mathiaz> kirkland: now I can't get the credentials - http://paste.ubuntu.com/351469/
<kirkland> mathiaz: is the web frontend running?
<mathiaz> kirkland: which process would it be?
<mathiaz> kirkland: I'm running CC+CLC+Walrus+SC on one machine
<kirkland> mathiaz: apache listening on 8443, i think
<mathiaz> kirkland: yop - eucalyptus-cloud is listening on port 8443
<kirkland> mathiaz: hmm, then the cred download should work
<mathiaz> kirkland: reading through the log - I can see some jdbc connection errors
<mathiaz> kirkland: hm - it seems that eucalyptus-cloud is not answering to request
<mathiaz> kirkland: a wget on https://localhost:8443/register times out
<kirkland> mathiaz: sorry, i'm hacking on the wsdl stubs atm
<kirkland> mathiaz: let me get this handled, and i'll give you my full attention ;-)
#ubuntu-server 2010-01-05
<mathiaz> kirkland: still around?
<kirkland> mathiaz: mostly
<mathiaz> kirkland: do you have some time to help in debugging the stuck eucalyptus-cloud situation I'm running into?
<kirkland> mathiaz: sure, i can try
<mathiaz> kirkland: I can't get the credentials and thus can't access UEC
<mathiaz> kirkland: so it seems that everything blocks when trying to access http://127.xx:8443/register
<kirkland> mathiaz: should be https
<kirkland> mathiaz: can you telnet to 127.0.0.1 8443
<kirkland> mathiaz: is it actually listening?
<mathiaz> kirkland: it can read the ssl certificate
<mathiaz> kirkland: hm - well - not always apparently
<kirkland> mathiaz: hmm, cannot reliably read the ssl cert?
<mathiaz> kirkland: hm - wait - nothing is listening on 8443
<mathiaz> kirkland: there is something on 8773
<kirkland> mathiaz: i think that's the CC
<kirkland> mathiaz: the CLC is on 8443
<mathiaz> kirkland: well eucalyptus-cloud is listening on 8773
<mathiaz> kirkland: let me reboot the CC
<kirkland> mathiaz: hrm, we have a real problem, if all this rebooting is required
<mathiaz> kirkland: yeah - I'd like to get a UEC setup running though
<mathiaz> kirkland: so that I can test my stress scripts (which I'm very proud of :) )
<kirkland> mathiaz: i haven't tested today's ISO yet today
<kirkland> mathiaz: i've been fighting wsdl stubs patch since about 10am
<mathiaz> kirkland: ok - so now eucalyptus-cloud is listening on 8443
<kirkland> mathiaz: i think i'm about done with it
<kirkland> mathiaz: can you point a web browser to it?
<mathiaz> kirkland: http://paste.ubuntu.com/351526/
<kirkland> mathiaz: okay, let's look at the logs
<kirkland> mathiaz: anything interesting in /var/log/eucalyptus?
<mathiaz> kirkland: hm - cloud-error doesn't show anything new since the reboot
<mathiaz> kirkland: it's full of jdbc errors though
<kirkland> mathiaz: can you pastebin it?
<kirkland> mathiaz: those could be the problem
<mathiaz> kirkland: http://people.canonical.com/~mathiaz/cloud-error.log
<kirkland> mathiaz: yeah, something is definitely wrong with the DB connection
<mathiaz> kirkland: ok - so packages reinstalled
<mathiaz> kirkland: I was able to register the NC
<mathiaz> kirkland: euca_conf --get-credentials mycreds.zip failed though
<mathiaz> kirkland: http://paste.ubuntu.com/351533/
<kirkland> mathiaz: same error on the wget?
<mathiaz> kirkland: yes
<mathiaz> kirkland: it times out
<mathiaz> kirkland: https://127.0.0.1:8443/ works well
<mathiaz> kirkland: https://127.0.0.1:8443/register fails
<kirkland> mathiaz: oh?  you can browse to it now?
<mathiaz> kirkland: hm - you're right. I could try to go that way
<mathiaz> kirkland: hm well - I can get to the first page
<mathiaz> kirkland: but then it stays there: "Loading data from server..."
<mathiaz> kirkland: something else is broken
<kirkland> mathiaz: i'm planning on installing a fresh UEC tomorrow
<mathiaz> kirkland: ok - I'm going to file a bug with my current setup
<mathiaz> kirkland: you may run into the same issue tomorrow
<kirkland> mathiaz: please do
<kirkland> mathiaz: i likely will
<kirkland> mathiaz: i need to dig through the logs more thoroughly
<mathiaz> kirkland: https://bugs.launchpad.net/ubuntu/+source/eucalyptus/+bug/503180
<uvirtbot> Launchpad bug 503180 in eucalyptus "eucalyptus-cloud doesn't reply to requests" [High,New]
<kirkland> mathiaz: thx
<sabgenton> !usb 3g modem
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<sabgenton> !3g modem
<sabgenton> !3g
<sabgenton> !wireless internet
<sabgenton> anyone have some good ways to set up huawei 3g modems
<sabgenton> I used to just make  a dialer and dial it
<sabgenton> but maybe theres a nicer way in ubuntu
<sabgenton> ?
<sabgenton> :)
<godsyn> please assist: I'd like to disable all auth for samba. How? I'd like everyone to be a guest.
<bitprophet> Is there an easier way to obtain/rebuild debs from newer releases? right now I wget the dsc/tar.gz/diff.gz from packages.ubuntu.com and then dpkg-source + dpkg-buildpackage
<bitprophet> but it'd be nice if there was something like apt-get source <release_name> <package_name> or something. hmm...maybe if I add the newer distro as a source line in my sources.list?
<bitprophet> excellent, I can just add all the deb-src lines for the distro in question to sources.list and it works just fine. can't believe it took me this long to figure this out.
<bitprophet> sorry for the spam :0
<bitprophet> :)
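bitprophet's recipe, written out (release and package names are examples; twb's `apt-get --build source` below does the fetch and build in one step):

```shell
# add the newer release's source line (example: karmic), then refresh
echo "deb-src http://archive.ubuntu.com/ubuntu karmic main" \
    | sudo tee /etc/apt/sources.list.d/karmic-src.list
sudo apt-get update

# fetch, unpack and rebuild a package from it
apt-get --build source somepackage
```

"somepackage" is a placeholder; only deb-src lines are needed, so the newer release's binaries never become installable by accident.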
<orudie> how can I safely chown and/or chmod directory within /var/www/ so that the content are listed when accessed by the browser. Right now it says 403 Forbidden.
<j416> orudie: the web user needs access to it. Either chown it to the web user, or chmod so that the web user has at least read access.
<orudie> j416, the chown is set to www-data.www-data
<twb> billybigrigger: apt-get --build source
<orudie> j416, not sure what chmod should be set to , safely :)
<twb> billybigrigger: to select a release, you can use -t as normal.
<j416> orudie: if your web user is www-data, then it should be fine
<twb> billybigrigger: bear in mind that backporting packages from a newer release is something for experts to do, and only rarely.
<j416> orudie: the only user that needs access to those files is your web user, supposedly
<orudie> chmod 755 will be ok ?
<twb> Oops, all of that was meant for bitprophet, who left.
<j416> so you should be able to set directories to 700 and files to 600
<j416> orudie: 755 for directories is ok, yes
<j416> you can use 755 / 644
<j416> it all depends on what you want your users to be able to see, of course.
<j416> 700 / 600  ->  only the owner has access
<j416> 755 / 644  ->  everyone can read, but only the owner can both read and write
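j416's modes, demonstrated on a scratch directory (the paths are temporary, not orudie's docroot):

```shell
#!/bin/sh
# show what 755 / 644 mean in practice
set -e
d=$(mktemp -d)
touch "$d/page.html"
chmod 755 "$d"            # rwxr-xr-x: anyone may enter and list the dir
chmod 644 "$d/page.html"  # rw-r--r--: anyone may read, only owner writes
stat -c '%a %n' "$d" "$d/page.html"
rm -rf "$d"
```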
<orudie> just did 755 still forbidden j416
<j416> what file are you trying to access?
<j416> and what permissions does it have?
<twb> j416: the httpd will need read access to it, too.
<j416> twb: tell orudie :)
<j416> if the httpd is run as the web user as it should be, it would have read access to it, though
<twb> Not if the files are owned:grouped by j416:j416.
<twb> Or orudie or whatever
<j416> of course not
<orudie> j416, its set to chown www-data.www-data
<j416> twb: ^
<orudie> :)
<orudie> what should it be set to ?
<orudie> www-data.root ?
<twb> orudie: www-data:www-data is fine.
<j416> orudie: are you sure you managed to chown it properly?
<j416> (looking at your above syntax)
<j416> (what does "ls -l" say?)
<twb> I'm curious as to the details of the 403, too
<orudie> this is from apache2 error.log client denied by server configuration: /var/www/server-status j416 twb
<orudie> oops
<orudie> Directory index forbidden by Options directive: /var/www/directory
<orudie> that
<j416> that's an entirely different error then
<j416> it means you're trying to list the contents of a directory
<j416> but you are not allowed to.
<j416> (because your web server is set up not to allow it)
<zul> whoop...freeradius finally has ssl support in lucid
<orudie> how would I fix it ?
<orudie> :)
<sabgenton> any one know if there is some sort of management for 3g  modems in ubuntu server
<sabgenton> or you just make your own script?
<j416> orudie: normally, you don't want to allow directory listing, as it is a possible security risk
<j416> if you really want to, the setting is in line 10 of /etc/apache2/sites-enabled/000-default
<j416> in a default installation
<j416> (I think)
<j416> I might have modified mine a little, but thereabouts.
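The Options line in question looks like this in a stock Ubuntu vhost (a sketch of the default 000-default; with `Indexes` present listings are allowed, and removing it produces exactly orudie's "forbidden by Options directive" error):

```
# /etc/apache2/sites-enabled/000-default (fragment)
<Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
</Directory>
```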
<twb> sabgenton: IIRC they just act like any other modem.
<twb> sabgenton: it's not really server gear.
<sabgenton> yeah they do you just make a ppp conection
<j416> orudie: if you're looking for more support on apache, perhaps #httpd is a good place.
<sabgenton> ubuntu desktop has this gui thing that asks you what network they are with and sets up the settings for you
<sabgenton> twb: ah found a script for my modem
<sabgenton> https://help.ubuntu.com/community/DialupModemHowto/Huawei/E220
<sabgenton> they could have had a terminal tool that set up a dialer like the desktop gui
<sabgenton> the desktop setup sucks anyway cause it relies on you having X open all the time
<twb> Xvfb!  Har har!  (That's not a serious suggestion.)
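The E220 howto sabgenton found boils down to a wvdial stanza roughly like this (device node, APN and number are carrier-specific placeholders):

```
; /etc/wvdial.conf (sketch)
[Dialer 3g]
Modem = /dev/ttyUSB0
Init2 = ATZ
Init3 = AT+CGDCONT=1,"IP","your.carrier.apn"
Phone = *99#
Username = dummy
Password = dummy
Stupid Mode = 1
```

After that, `wvdial 3g` brings the PPP link up from any console, no X required.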
<darthanubis> is there a doc or faq somewhere that lays out how to setup wifi on my server. I don't want to run a DE on my server just to get it to connect to the WAP.
<darthanubis> https://help.ubuntu.com/9.10/serverguide/C/network-configuration.html
<darthanubis> that page does not cover wifi w/ wpa usage
<manish> Hello
<manish> Is it a good idea to change over to Ubuntu server 9.10 from existing Windows 2000 server based Network?
<manish> Existing Network has PDC, Mail Server, Active Directory, File server, etc
<darthanubis> manish, what do you think we'd say here?
<manish> darthanubis, :) Yes
<manish> But what I want to know is how complicated is setting up this and what are the resources I need to look for before getting hands on ubuntu server 9.10?
<JanC> manish: it all depends on several things, like do you want to replace some servers, all servers, also replace desktops (some? all?), etc.
<manish> to start with I want to replace PDC
<manish> this server keeps failing. so looking for alternatives
<manish> workstations are mix of Win2k and WinXP SP3 on LAN
<genii> manish: I've found the official guide to be pretty useful as a starting point https://help.ubuntu.com/9.10/serverguide/C/index.html
<manish> ok thanks will go through it during the day.
<genii> darthanubis: http://ubuntuforums.org/showthread.php?t=1205009 might be useful to you for the wpa from commandline
<darthanubis> genii, ty
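For WPA on a headless server the usual route is wpa_supplicant hooked into /etc/network/interfaces (a sketch; SSID and passphrase are placeholders, and the wpasupplicant package must be installed):

```
# one-off: generate a config with a hashed PSK
#   wpa_passphrase "MySSID" "secretpass" | sudo tee /etc/wpa_supplicant.conf

# /etc/network/interfaces (fragment)
auto wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant.conf
```

With that in place, `ifup wlan0` associates and DHCPs without any desktop environment running.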
<sabgenton> can i get back to the friendly menu of options when I installed the cd?
<sabgenton> lamp etd
<sabgenton> etc
<jtaji> sabgenton: do 'tasksel --list-tasks'
<jtaji> then you can install one with 'sudo tasksel install taskname'
<sabgenton> awesome
<sabgenton> will go try
<sabgenton> jtaji: i just wanted wvdial but it must be under other
<ScottK> manish: One of the key issues you'll run into in such a transition is that without Samba 4 (which is not yet released) you can't have a Linux AD PDC.  So consider a partial migration for now.
<sabgenton> how do I rig apt, aptitude or tasksel to use the cd and not online sources?
<terinjokes> can i get a head -25 of /etc/init.d/networking? seems I accidentally wrote to that file
<twb> terinjokes: I don't understand the question.
<jtaji> terinjokes: http://paste.ubuntu.com/351600/
<twb> Oh, you want a copy from an non-broken system?  It depends what release you're running.
<terinjokes> 9.10
<jtaji> top 25 lines he meant
<terinjokes> jtaji: that looks right, thanks
<terinjokes> jtaji: that's 93, give or take ;)
<twb> Try aptitude download netbase; mkdir /tmp/x; dpkg -x netbase_*deb /tmp/x; cat /tmp/x/etc/init.d/networking
<jtaji> interesting
<twb> There's also simply "aptitude reinstall netbase", but I don't think it'll fix conffiles.
<twb> (By design.)
<terinjokes> yeah.... i forget when I have to connect directly to the box (it's normally headless) that my keyboard is Programmer Dvorak, but the computer expects English US. i've got to rely on muscle memory (makes password prompts fun)
<terinjokes> now, back to what i was doing, converting the system from router to bridge (due to some nasty problems with multicast, unless you guys know how to forward multicast)
<terinjokes> ok, got a problem, i'm bridging wlan0 --> eth0, if the bridge isn't turned on (comment out br0 in interfaces) i can ping the internet from the box. if the bridge is on (br0 is uncommented, bridge_ports eth0 wlan0) the wlan0 interface gets an IP, but then it loses it
<twb> teddymills: sudo dpkg-reconfigure console-setup
<twb> Gah, bloody /quitters
<terinjokes> i'm running two computers on a subnet with a linux box acting as a router, any ideas on how to forward multicast
<danielck> can anyone help me with a locale problem I'm having on Hardy?
<danielck> the locale command gives me "POSIX" for everything except LANG which is empty
<danielck> even though I've set sudo /usr/sbin/update-locale LANG=en_US.UTF-8
<danielck> (rebooted and all)
<terinjokes> danielck: it's not dead, it's just asleep
<danielck> heh
<jmarsden> danielck: What did you expect or want your locale to be?
<danielck> en_US.UTF-8
<jmarsden> OK, and does the output of   locale -a   include that locale?
<danielck> hmm, now I'm catching on, it has en_US.utf8
<jmarsden> danielck: OK, so try using that and see if it helps.
<danielck> nope
<danielck> after reboot, still POSIX
<danielck> hmm, wait a minute...
<jmarsden> OK...  Is the English language pack installed on your machine?  Does   dpkg -l | grep language-pack-en    tell you anything useful?
<danielck> yes, seems to be installed like it should be, I actually already reinstalled that
<jmarsden> OK.  And are the contents of /etc/default/locale    appropriate?
<danielck> the contents of /etc/default/locale is now simply
<danielck> LANG=en_US.utf8
<jmarsden> Sounds like something is (re)setting your locale somewhere unexpected... did you add a command to .bashrc or /etc/profile or similar related to locale stuff at all?
<danielck> no
<jmarsden> Oh... and there is a dash in utf-8 ... check that LANG=   setting again?
<danielck> well yes, that's what I had in the first place, UTF-8, but locale -a lists it without a dash
<jmarsden> OK.  On my Karmic install here I have it as LANG="en_US.UTF-8" .. will check in a Hardy VM if I still have a hardy VM around to test with...
<danielck> ok thanks
<jmarsden> When you first installed the machine, was it set for some less common (non-English) locale?
<danielck> no, I should specify that it is a VPS
<danielck> and I can't recall if I checked the locale when I installed
<jmarsden> danielck: OK, but you installed it from an official Ubuntu Server ISO, yourself?  (Or did your VPS provider give you some customized thing?)
<danielck> well, I didn't actually install it, since it came from an image, but anyway
<danielck> customized
<danielck> so it could have been posix to start with, but I'm not sure
<jmarsden> ah, so that could possibly be one source of this issue... do you have a way to check (like run a local VM created from that same image)?
<danielck> no, unfortunately my host doesn't provide tools for that
<jmarsden> danielck: No, I meant download the image and then create a VM on a local PC you own, as a test :)  But OK...
<jmarsden> I have a Hardy 8.04.3 Desktop VM  and it has LANG="en_US.UTF-8"  in /etc/default/locale and the locale command output is fine.
<danielck> ok
<danielck> is it possible to make an image internally from within hardy?
<jmarsden> You won't be able to run a VM inside a VM (well, not as far as I know, that's still sort of a research topic, recursive virtual machines :)
<danielck> oh I don't mean running it, just creating the image
<jmarsden> OK, I have a Ubuntu 8.04 Server CD image (.iso) here, so I can create a new Ubuntu Hardy server VM and see about playing with locales in there...
<jmarsden> danielck: That should be doable, but I'm not sure how it would help you
<danielck> well... if that's your idea of a good time :) I'm very grateful though of course for your help!
<jmarsden> danielck: It'll only take about 10 minutes, I have a fast desktop machine here... :)
<jmarsden> And BTW I have had some interest in locale-related bugs, in fact my main Karmic host OS currently has 200+ locales installed (all available language packs!) because of some testing I was doing...
<danielck> jmarsden: I mean, if I could create an image of my current setup, I could play around with a copy on my desktop in a vm, no?
<jmarsden> danielck: Oh, I see.  Yes... for that you could probably use any disk image backup tool (or just dd piped to a compressor and then ssh, even) and then move the backup to your local PC, turn it into a VM disk image there and off you go.  There may be better ways than that, but off the top of my head that's how I'd do it.  I virtualized an ancient unmaintained Fedora Core Linux box that way a few months ago, although I was local so the transfer was over a fast LAN not over the Internet.
<danielck> right
<jmarsden> But if you could download the install image your VPS provider offers, you could start from that, which might be smaller and would let you go over the install process from it and see what happens.
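jmarsden's dd-through-a-compressor idea above can be sketched as a one-liner. The device name, user, and host are made-up placeholders, not details from the conversation; the real dd runs on the VPS side:

```shell
# Hypothetical invocation -- /dev/vda, you@desktop and the filename are
# placeholder examples:
#   dd if=/dev/vda bs=4M | gzip -c | ssh you@desktop 'cat > vps-disk.img.gz'
# The same pipeline demonstrated locally on an ordinary file, so the
# round-trip can be verified without a remote host:
src=$(mktemp)
dd if=/dev/zero of="$src" bs=1K count=64 2>/dev/null
dd if="$src" bs=4K 2>/dev/null | gzip -c | gunzip -c | cmp -s - "$src" \
  && echo "round-trip OK"
rm -f "$src"
```

Restoring on the desktop is the reverse (`gunzip -c vps-disk.img.gz > vps.raw`), and most VM tools can attach a raw image directly or convert it with something like qemu-img.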
<uvirtbot> New bug: #503180 in eucalyptus (main) "eucalyptus-cloud doesn't reply to requests" [High,Confirmed] https://launchpad.net/bugs/503180
<danielck> jmarsden: indeed, that would be good
<jmarsden> That's what I had in mind earlier when I suggested you could "run a local VM created from that same image" -- sorry if I was unclear.
<jmarsden> Hardy server VM install in progress, BTW...
<danielck> sorry, you weren't really unclear, rather my head is a bit fluffy inside
<jmarsden> :) OK.
<danielck> I'm going to reboot this machine, apparently it also needed some locale love, be back in a minute
<jmarsden> OK.
<danielck> hmm, setting the locale on this remote server worked just fine, and it's also a Hardy install
<danielck> but from a different host and it's not a VPS
<jmarsden> OK.  And my new hardy server VM set up its locale just fine "out of the ISO", too.
<jmarsden> So it starts to look as though something in the VPS image or how it was installed could be responsible for the problem you are seeing.
<danielck> yes... when googling about this I'm finding references to debian bugs which were fixed in 2005
<danielck> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=330701
<uvirtbot> Debian bug 330701 in libpam-modules "locales: LANG from /etc/environment is not chosen, always LANG= and POSIX" [Important,Fixed]
<jmarsden> Seems unlikely those would still have been in Hardy in 2008.
<danielck> indeed :)
<jmarsden> As a cheap workaround, if you do   LANG=en_US.UTF-8 locale    # do you get output that looks "normal" (i.e. en_US.UTF-8 for all of the locale variables displayed)?
<danielck> yes
<danielck> except LC_ALL
<jmarsden> OK.  So if you were to add a line     set LANG="en_US.UTF-8" in /etc/profile you might have a "mostly working" quick fix.
<jmarsden> It's ugly and not at all what you are *supposed* to do...
<danielck> right
<jmarsden> Did that work for you?
<jmarsden> Or "right" as in "right, it's ugly" :)
<danielck> "right, it's ugly" :)
<danielck> jmarsden: lets see if it works
<jmarsden> Go for it.
<danielck> now let's be clear, I just add this line to the end of /etc/profile?
<danielck> LANG="en_US.UTF-8"
<jmarsden> Yes.
<danielck> didn't work
<jmarsden> danielck: OK, if you . /etc/profile   # does that change the value of LANG in your shell?
<danielck> no
<danielck> even putting LANG="en_US.UTF-8" in my .bashrc won't work
<jmarsden> Wow.  Then something is setting it back again within each shell?  Seems very weird.   Try    export LANG="en_US.UTF-8"
<jmarsden> Maybe try first just at the shell prompt, then in ~/.bashrc, then in /etc/profile ?
<danielck> within the file
<danielck> ok
<danielck> worked at the shell prompt
<jmarsden> That's a start :)
<danielck> also worked in bashrc after a re-login
<jmarsden> Better still...
<danielck> also worked in /etc/profile (removed it from .bashrc)
<jmarsden> OK... so at least you now seem to have a working workaround :)
<danielck> now I'm not much of a linux guru, so I have to ask, is the end result the same, or might I run into trouble with this approach?
<jmarsden> danielck: There may be some services that won't "see" this approach and so won't have the new locale info.
<jmarsden> And of course setting things up so different users have different locales is now awkward...
<danielck> indeed
<danielck> luckily I don't need multiple locales
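The fix that finally stuck in the exchange above condenses to a couple of lines; a sketch, assuming en_US.UTF-8 is the locale wanted and that the VPS image ignores /etc/default/locale as it did here:

```shell
# A bare LANG=... line only sets a shell-local variable; "export" is what
# makes it visible to child processes, which is why the first attempts did
# nothing. Appended to /etc/profile (or ~/.bashrc) this is the workaround:
export LANG="en_US.UTF-8"
# Verify: the LANG line of `locale` should now echo the exported value.
locale | grep '^LANG='
# -> LANG=en_US.UTF-8
```

As jmarsden notes in the chat, services that never read a login shell's environment won't see this, so it is a workaround rather than a proper per-system locale setting.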
<danielck> my next trouble is getting php (running as cgi) to load its extensions...
<danielck> :(
<jmarsden> Is there a reason you run PHP as a CGI? It tends to have horrid performance that way.
<danielck> I'm running nginx, and using php-fastcgi which I understand should be pretty fast
<jmarsden> Ah, OK.  I've only used Apache (well, and its predecessor NCSA httpd, a long time ago!).
<danielck> got both running (apparently) fine, but php isn't loading any extensions
<danielck> I gave nginx a try since apache was eating up all my memory on my 256 VPS
<twb> I used to like thttpd
<twb> Now I use busybox httpd
<jmarsden> OK.  Is the user that nginx runs as the same one apache2 runs as?  Is there anything about the location or permissions of the php.ini file that changes when you use the CGI version?  I'm just guessing...
<danielck> yes for both
<twb> nginx doesn't Provides: httpd-cgi, so I wouldn't bet on it supporting non-static pages at all.
<danielck> with cgi it uses /etc/php5/cgi/php.ini
<danielck> twb: well it seems to work, i'm getting php working fine
<danielck> the output of phpinfo just doesn't show any extensions loading
<twb> Hmm, the code itself refers to fastcgi, so I guess it's a packaging mistake
<danielck> I have nginx installed from source
<danielck> anyway, I'll be back, need a break from the computer :)
<jmarsden> Why?  It's packaged, isn't it?  OK...
<danielck> jmarsden: huge thanks for your help!
<danielck> jmarsden: the version supplied by hardy is pretty old
<jmarsden> danielck: If you need a newer one, I'd look in hardy-backports or backport the one in karmic, rather than DIY from a source tarball...
<jmarsden> Anyway, take a break first :)
<danielck> pretty much followed this tutorial http://www.mensk.com/webmaster-toolbox/perfect-ubuntu-hardy-nginx-mysql5-php5-wordpress/
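For reference, the usual shape of an nginx-to-FastCGI hookup looks like the fragment below; the address, port, and document root are illustrative assumptions, not danielck's actual config. Missing extensions with packaged php5-cgi are more often a PHP-side issue, so one thing worth checking is whether the php.ini in use scans /etc/php5/cgi/conf.d, where Ubuntu's packaged extensions drop their ini snippets:

```
# Hypothetical nginx server fragment: hand *.php to a FastCGI daemon on
# 127.0.0.1:9000 (address and paths are examples only)
location ~ \.php$ {
    include        fastcgi_params;
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_param  SCRIPT_FILENAME  /var/www$fastcgi_script_name;
}
```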
<danielck> see ya in a bit
<danielck> back
<jmarsden> OK.  I'm not sure how much I can help with nginx and compiling things from source tarballs and then wondering why they don't do what you want or need, though.  And it is approaching 1am here...
<danielck> jmarsden: no worries, you've been a great help!
<jmarsden> You might want to try using the Hardy packaged nginx and see whether php5-cgi then "just works" ?
<danielck> I might do that :)
 * jmarsden attempts to remove himself from chair, and move towards bed... goodnight :)
<danielck> jmarsden: gnight
<johe|work> maybe someone can help me on http://pastebin.com/m2e2e1e1a - it's the strace output of an snmpd service, which dies with a floating point exception
<johe|work> it affects amd64 systems with LTS, but only real ones; we have some amd64 systems as vmware guests which don't die that way
<johe|work> we also have 6 real servers that show the same failure
<jiboumans> good morning
<_ruben> ok .. this is weird .. i have a server with (among other hdds) a 2G flash disk for the OS, 128M for /boot (ext2) and the rest LVM .. cfdisk/fdisk/parted all confirm this layout .. however when i do sudo mount /dev/sda1 /boot, it shows me an empty 2G partition
<_ruben> the data most likely is still there, as the box does boot just fine (even though i dont seem to have a way to access any of the /boot/ files)
<jla> morning all
<Hellsheep> Hey Guys i'm having an issue installing linux-headers, i tried to use apt-get install linux-headers-`uname -r` except it gives me this error message: E: Couldn't find package linux-headers-2.6.18-128.2.1.el5.028stab064.4
<Hellsheep> I tried typing uname -r and i get this: 2.6.18-128.2.1.el5.028stab064.4 - So which linux-headers do i install? >.<
<pmatulis> Hellsheep: cloud stuff?
<_ruben> you're not running a stock ubuntu kernel
<Hellsheep> Hmm
<_ruben> looks redhat'ish
<Hellsheep> That would explain it then :P
<Hellsheep> I was pretty sure i just installed Ubuntu 9.04 though >.<
<Hellsheep> weird
<_ruben> in Xen perhaps?
<_ruben> 2.6.18 kernel is rather old for ubuntu .. think 6.06 or so ? :)
<Hellsheep> Oh i see
<Hellsheep> well thanks, i'll try to suss something out
<Hellsheep> And see if i cant fix this :P
<zul> morning
<ttx> zul: yo
<ttx> zul: you added "Using bzr for packaging (zulcss)" on tomorrow's meeting agenda, could you clarify ?
<ScottK> Dapper was 2.6.15.
<_ruben> close enough :)
<zul> ttx: i want to get people to start using bzr for maintaining our server packages
<ttx> zul: ok
<ttx> zul: could this wait for the next week meeting ? This week's agenda is already overloaded
<zul> ttx: sure
<ttx> zul: ok, please move that line to a "next meeting" header on the same page
<zul> ttx: done
<zul> hey smoser
<smoser> good morning
<ttx> zul: thx
<ttx> smoser: o/
<smoser> good morning ttx, and happy new year
<zul> ttx: stupid wikis
<ttx> smoser: best wishes to you too !
<ScottK> zul: Keep in mind that lamont already maintains postfix, bind, an who knows what else in Git.  We shouldn't mess with that.
<zul> ScottK: yes of course
<jdstrand> coffeedude: hi! so I added a preliminary debdiff to bug #274350. Since I am not a likewise user and don't have a way to test it, it may be too simplistic.... Would you mind reviewing and commenting on it?
<uvirtbot> Launchpad bug 274350 in apparmor "apparmor HOMEDIRS not adjusted for likewise" [High,Fix released] https://launchpad.net/bugs/274350
<coffeedude> jdstrand, Sure.
<coffeedude> jdstrand, we are only looking at fixing this in lucid right?  So only in the new likewise-open packages.  The debdiff is correct for 4.1 but I've already fixed this in the likewise-open PPA.
<smoser> zul, ttx where should issues 1 and 4 at http://groups.google.com/group/ec2ubuntu/browse_thread/thread/30f3af093528aad4 be raised ?
<coffeedude> jdstrand, or is this a proposed change to backport to Hardy and later?
<jdstrand> coffeedude: correct, only for Lucid
<jdstrand> coffeedude: if someone wants to SRU it for hardy, I certainly wouldn't oppose it, but I am also not driving it ;)
<coffeedude> jdstrand, K.  So your diff won't apply to the new packages.  I've already fixed it there like we discussed. pitti is reviewing the debs for me and they are planned for upload for alpha 2.
<ttx> smoser: launchpad ?
<jdstrand> coffeedude: that is great! Do you mind if I adjust the bug accordingly?
<ttx> smoser: otherwise the ubuntu-cloud list sounds like a good place for discussion ?
<coffeedude> jdstrand, please do.
<smoser> neither is at all cloud related
<smoser> at least i wouldn't understand why that would be
<ttx> smoser: you mean every karmic has tons of console-kit-daemon warnings in logs ?
<jdstrand> coffeedude: ok, done. thanks again :)
<zul> smoser: they should open bugs in launchpad
<smoser> fair enough
<jdstrand> coffeedude: I was not aware of likewise-open *and* likewise-open5. Will both be in lucid? have you fixed both? (also, where are the new packages-- I can't seem to find the ppa)
<ttx> jdstrand: both should update to likewise-open in lucid
<jdstrand> ttx: so there will only be likewise-open in lucid, which is version 5?
<ttx> jdstrand: that's the plan.
<coffeedude> likewise-open5 is going away.  The new likewise-open packages replace likewise-open5.
<jdstrand> I see
<jdstrand> gotcha
<smoser> soren, around?
<coffeedude> jdstrand, https://launchpad.net/~likewise-open/+archive/ppa/+packages?field.name_filter=&field.status_filter=published&field.series_filter=lucid
 * jdstrand wonders why he couldn't find that...
<jdstrand> coffeedude: thanks
<Maleko> can anyone give me a better alternatve to the standard ping tool in linux? preferably one that can show jitter and hops in each ping reply
<soren> smoser: Yup.
<smoser> gonna send you an email realy quick. i'd like your comments
<smoser> see inbox
<zul> smoser: the initramfs looks good to me
<BBHoss> Hi, can someone explain the memory usage to me?  I have a server with 4GB ram.  MySQL is taking up 16% of that, but the RSS is around 600MB.  Tomcat is taking up about 300MB.  When I run free -m though, it only shows 105MB free, with a ton of memory in "buffers/cache".  Is this normal?
<BBHoss> Monit keeps sending me warnings that it is over 85% memory usage
<Jeeves_> BBHoss: Yes.
<Jeeves_> The kernel keeps a lot of files in cache
<_ruben> monit should "ignore" the cache/buffers
<Jeeves_> so it doesn't have to read all files from disk (which is slow)
<BBHoss> _ruben: it doesn't appear to be
<BBHoss> unless something  is using up all of the memory at that time
<_ruben> then bug the monit authors to fix the check :)
<BBHoss> _ruben: is it supposed to ignore that, or are you saying that it needs to?
<_ruben> i never used monit, so i dont know its "proper" workings .. a *good* test would be to ignore it though (even though not a lot of apps actually do)
<uvirtbot> New bug: #503402 in samba (main) "winbind crashes on authentication (winbind_pam_auth)" [Undecided,New] https://launchpad.net/bugs/503402
<BBHoss> _ruben: why, if it has plenty of memory in buffers and caches, would it be swapping to disk?
<_ruben> swapping out unused (or hardly used) memory space in favor of more cache can speed up the system
<BBHoss> _ruben: Here is my free -m output. It looks like only ~500MB is being cached/buffered, and around 3500MB is actually used. Is this correct? https://gist.github.com/c8c1bff98627edbe0d20
<_ruben> correct
<zul> ttx: ping
<BBHoss> _ruben: well then where does the other 3500MB come from, the RSS from the two biggest processes sums up to under 1GB
<BBHoss> _ruben: here is the ps aux output: https://gist.github.com/cc4e7a85e544533cdbc6
<ttx> zul: pong
<_ruben> you just reached the edge of my expertise .. the RSS versus VSZ etc stuff is still quite a mystery to me .. your virtual memory usage seems roughly 3.5G though
<BBHoss> _ruben: heh ok
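The distinction _ruben is drawing can be read straight out of /proc/meminfo, which is where free -m gets its numbers; a sketch of the arithmetic only, nothing monit-specific:

```shell
# Buffers and page cache are reclaimable on demand, so the memory that is
# genuinely spoken for is roughly MemTotal - MemFree - Buffers - Cached --
# the "-/+ buffers/cache" line that free prints.
awk '/^MemTotal:/ {t=$2}
     /^MemFree:/  {f=$2}
     /^Buffers:/  {b=$2}
     /^Cached:/   {c=$2}
     END { printf "actually used: %d MiB\n", (t - f - b - c) / 1024 }' /proc/meminfo
```

(As an aside from a later era: kernels since 3.14 also export a MemAvailable field, which is a better single estimate of how much could be allocated without swapping.)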
<zul> ttx: do you want to me merge openvpn for you?
<ttx> zul: you can
<zul> ttx: thanks
<kirkland> ttx: ping
<ttx> kirkland: pong
<kirkland> ttx: hey, i was just syncing today's ISO, hoping you might give me a 5-minute status update on where we are?
<ttx> kirkland: sure
<ttx> kirkland: I first tested the classic CLC+Walrus+CC+SC / NC topology
<ttx> kirkland: ran into bug 503180, or something similar
<uvirtbot> Launchpad bug 503180 in eucalyptus "eucalyptus-cloud doesn't reply to requests" [High,Confirmed] https://launchpad.net/bugs/503180
<ttx> got to stop / kill / start
<ttx> otherwise started up and registered ok
<ttx> then I tested CLC+Walrus / CC+SC / NC
<kirkland> ttx: okay, yeah, mathiaz and I discussed that one yesterday
<kirkland> ttx: I'll get Dan to ssh into my setup once it's up
<ttx> CC+SC fails with several bugs
<ttx> one of them is a bugfix that got lost in a merge
<ttx> fix uploaded as rev825
<ttx> one other is linked to the fact the publication shouldn't wait for the component to be started up
 * soren pauses for dinner
<ttx> Fix will be uploaded in the next few minutes
<ttx> and the last one is due to the preseed file not being fetched from the CLC
<kirkland> ttx: cool, man
<ttx> (bug 503339, Colin will look at it)
<uvirtbot> Launchpad bug 503339 in wget "UEC lucid installer: CC fails to register when separated from CLC" [Undecided,New] https://launchpad.net/bugs/503339
<kirkland> ttx: so i think I've got the wsdl stubs "handled"
<ttx> I haven't looked into the fix, tbh
<kirkland> ttx: i added a hook to the build that makes the build fail if the wsdl files have changed, printing an informative error message
<ttx> was just wondering how you got rid of the patchsystem and kept a patch :)
<kirkland> ttx: well, i wanted to discuss that ...
<kirkland> ttx: we *could* apply that patch directly to the bzr source
<kirkland> ttx: and my script could just update those files and debcommit
<ttx> kirkland: that doesn't sound like a good idea :)
<kirkland> ttx: but i wondered if it made sense to keep that one patch separate
<kirkland> ttx: i also considered keeping a *separate* bzr repository for the generated code
<kirkland> ttx: eucalyptus-generated
<ttx> kirkland: it does make sense. however keeping the patch out of a patchsystem is probably less confusing
<kirkland> ttx: and as part of our merge process, copy that in
<kirkland> ttx: okay, so pull it out of debian/patches?
<kirkland> ttx: and just put it in wsdl/* ?
<ttx> kirkland: I think having a patchsystem around seems to imply that patches should go there. Having it separate and directly applied in debian/rules would mitigate that
<ttx> will make it more obvious to new contributors to the package
<mathiaz> ttx: so what's the deal with https://bugs.launchpad.net/bugs/503180?
<uvirtbot> Launchpad bug 503180 in eucalyptus "eucalyptus-cloud doesn't reply to requests" [High,Confirmed]
<mathiaz> ttx: I've tried to restart eucalyptus-cloud, but it still fails
<ttx> mathiaz: did you check for stale eucalyptus-cloud processes before starting again ?
<ttx> I always had to kill -9 one
<mathiaz> ttx: right - I had to do the same
<ttx> and it still fails ?
<mathiaz> ttx: that being said, I was actually rebooting the system IIRC
<mathiaz> ttx: I'll give it another try
<ttx> might be two separate bugs, with me only hitting the first layer
<ScottK> ttx: I'm reasonably certain that isn't a package you want new contributors coming anywhere near.
<kirkland> ttx: agreed
<kirkland> ttx: okay, i'll affect that change today
<ttx> ScottK: even experienced contributors might get tempted to use the existing patchsystem rather than apply patches directly to source, since that's the common rule
<ScottK> True, but lintian will at least warn you about it.
<mathiaz> ttx: hm well - I can get the credentials now - seems like the night was a good thing for the CLC
<uvirtbot> New bug: #503432 in nmap (main) "nmap scans wrong network when invalid (?) range specified" [Undecided,New] https://launchpad.net/bugs/503432
<ttx> mathiaz: it looks a lot like the old db deadlock thing, though they are no longer using c3p0
<ttx> kirkland: please release with whatever you have and spin a CD before the end of your day, so that I can pick up my changes tomorrow morning
<kirkland> ttx: understood
<ttx> kirkland: thanks !
 * ttx grumbles as his parents crashed their ubuntu box
<kirkland> ttx: did last night's work out well for you?
<kirkland> ttx: i worked until about 8pm, uploaded, and asked slangasek to schedule a build for 4 hours later
<ttx> kirkland: it was perfect.
<kirkland> ttx: figured that should pick up my changes, and complete before your day started
<kirkland> ttx: sweet, then i think we're in sync
<ttx> we could have a regular build at that time
<unit3> hmm... anyone know why, on boot, the device files in my lvm vg get assigned root:disk ownership, but when I create a new LV it gets assigned root:root ownership?
<unit3> (on karmic)
<turkeymonkey> in 8.04, i install xen-compatible kernels for my vps with the linux-image-xen package. i've noticed it doesn't exist in 9.10... what has replaced this package in 9.10 (or how can I install a xen-compatible kernel)?
<unit3> nothing's replaced this. the problem is the dom0 patches don't exist in newer kernels.
<unit3> you can install the old packages from hardy, but then you lose newer kernel functionality.
<unit3> it's sort of a crappy situation, and is why people (like Redhat) are moving to kvm / libvirt instead.
<unit3> but that's not totally production ready either, depending on your setup.
<unit3> so... stick with hardy, I guess, for now.
<turkeymonkey> thanks for the info, that explains alot.
<turkeymonkey> hardy it is, then... still have around 3 years of support for server :)
<unit3> yep. hopefully for 10.04 there'll be some new supported options, but it's hard to say, with xen in the state it is.
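unit3's pointer toward kvm comes with one hardware caveat worth a quick check (a generic sketch, not something from the discussion): KVM only works when the CPU advertises Intel VT-x or AMD-V.

```shell
# KVM requires hardware virtualization support, visible as a CPU flag:
# vmx = Intel VT-x, svm = AMD-V.
if grep -Eq '\b(vmx|svm)\b' /proc/cpuinfo; then
    echo "hardware virtualization available -- kvm can run accelerated guests"
else
    echo "no vmx/svm flag -- kvm will not work on this host"
fi
```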
<Hypnoz> anyone know if I can make an NFS export from an NFS mount? exportfs -r complains when the line is in there
<zul> smoser: ping
<smoser> here
<zul> smoser: something like bzr branch lp:~zulcss/ec2-init/ec2-init-config ?
<smoser> zul, yeah, basically. thats what we're looking for.
<smoser> i wonder, why the differing 'start on'
<zul> smoser: i thought they would be more sensible
<smoser> more sensible?
<zul> that if cloud-config is started then the network should be started and local-filesystem should be started as well
<zul> but ill defer to you though
<smoser> cloud-config will not be emitted until / is mounted and network eth0 is up
<zul> smoser: ah ok so ill change those
<smoser> so i dont know what would be the best practice..
<zul> ditto
<smoser> in some sense it makes sense to have each of those listed, the job is then more stand alone
<smoser> but requires more maintenance
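The two styles smoser and zul are weighing look roughly like this as upstart stanzas (the job name, the exec path, and the exact event names are assumptions sketched from the conversation, not the real ec2-init packaging):

```
# /etc/init/cloud-config-handler.conf -- hypothetical job
description "apply cloud config"
# Style 1: spell out the concrete conditions in every job -- stand-alone,
# but each job must repeat and maintain them:
#   start on (local-filesystems and net-device-up IFACE=eth0)
# Style 2: depend on one event that is emitted only after / is mounted
# and eth0 is up, as smoser describes:
start on cloud-config
task
exec /usr/sbin/handle-cloud-config    # placeholder path
```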
<kirkland> mathiaz: howdy
<kirkland> mathiaz: okay, fresh UEC installed today
<kirkland> mathiaz: i am seeing your node-discovery problem
<mathiaz> kirkland: o/ texas!
<kirkland> mathiaz: first, no nodes are found
<kirkland> mathiaz: eventually, after re-running, it does find the ipv6 address
<kirkland> mathiaz: do you have an open bug for this one?
<mathiaz> kirkland: not yet
<mathiaz> kirkland: did you try to run avahi-browse from the command line?
<kirkland> mathiaz: yup
<mathiaz> kirkland: if you look at euca_conf you'll find the exact command used to find the nodes
<mathiaz> kirkland: and so it's an issue with avahi?
<mathiaz> kirkland: because when I tried to run avahi-browse from the command line I got the expected result
<mathiaz> kirkland: that being said it seems non-deterministic
<kirkland> mathiaz: i think it is probably an avahi issue
<mathiaz> kirkland: since you first don't see the nodes, then the ipv6 address, etc..
<kirkland> kees: howdy, are you around?
<mathiaz> kirkland: I would say so as well
<kirkland> kees: i seem to remember you sending me a note before xmas, about some new cool avahi improvement that you made, and we should take advantage of
<kirkland> kees: i seem to have forgotten what that magic was :-)
<kirkland> kees: jog a memory?
<kirkland> mathiaz: hmm ...
<kirkland> mathiaz: the shell command in euca_conf works correctly
<kirkland> avahi-browse -prt _eucalyptus._tcp | grep '^=.*"type=node"' | cut -d\; -f8 | sort -u
<kirkland> 10.1.1.71
<mathiaz> kirkland: right
<mathiaz> kirkland: how long did you wait?
<mathiaz> kirkland: I'd try to reboot the CC, then run multiple times the avahi-browse command right after boot
<mathiaz> kirkland: and see how the result evolves
<kirkland> mathiaz: hmm, okay, i'll need to install a new node
<kirkland> mathiaz: b/c this one is working perfectly
<kirkland> mathiaz: i was able to grab my creds too
<kirkland> AVAILABILITYZONE        |- m1.small     0002 / 0002   1    128     2
<mathiaz> kirkland: so you didn't run into the stuck eucalyptus-cloud process?
<kirkland> mathiaz: not yet
<mathiaz> kirkland: if you can get your credentials you haven't run into it then
<kirkland> mathiaz: okay, well, lucky me
<kirkland> mathiaz: this is from an ISO install
<kirkland> mathiaz: you do exclusively package installs, right?
<mathiaz> kirkland: yes
<mathiaz> kirkland: ttx ran into the same problem
<mathiaz> kirkland: and torsten as well
<Hypnoz> anyone know if i can NFS export a directory that is an NFS mount?
<guntbert> Hypnoz: why would you want to do that?
<Hypnoz> to get around some subnet shit. A server (ServerA) can't see our nfs server, so i want to mount it on a server (ServerB) that can see both subnets. ServerA can then mount the NFS from ServerB
<guntbert> Hypnoz: aha - understandable - the easiest way would be to try it?
<guntbert> Hypnoz: but I really don't know
<Hypnoz> # exportfs -r
<Hypnoz> exportfs: Warning: /autohome does not support NFS export.
<guntbert> Hypnoz: then it seems impossible - but try to read man exports
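For the record, exporting a local directory is just an /etc/exports line (the path and subnet below are invented examples; see man exports, as guntbert says). The warning above appears because the in-kernel NFS server of this era refuses to re-export a directory that is itself an NFS mount, which is why ServerB can't relay the share this way:

```
# /etc/exports -- example only: a *local* directory exported read-write
# to one subnet. An NFS-mounted path like /autohome is rejected.
/srv/share  192.168.1.0/24(rw,sync,no_subtree_check)
```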
<kees> kirkland: it's for registering a name without registering a reverse.
<kees> i.e. being able to finally do the thing I had suggested as an option.
<kees> http://git.0pointer.de/?p=avahi.git;a=commitdiff;h=63b105af928ab6027ce7c905b8ac051fc23a2880
<kirkland> kees: ah
<kirkland> kees: okay, so we would add -R to the options in the eucalyptus upstart scripts
<kirkland> kees: that do the avahi publish
<mathiaz> kirkland: well - I don't think we need to do that anymore
<mathiaz> kirkland: IIRC the avahi-publish jobs are publishing the IP address to use
<mathiaz> kirkland: the same way as they're publishing the type
<kirkland> mathiaz: right, we encoded that into the publish string
<mathiaz> kirkland: that way, euca_find_cluster doesn't rely on avahi name resolution
<kirkland> mathiaz: i'm trying to figure out if it would make more sense to go back and do it kees' way
<mathiaz> kirkland: we should actually do the same thing for the nodes
<mathiaz> kirkland: well - I don't think so - in the case where the system has multiple IPs
<mathiaz> kirkland: you wanna publish one specific IP to be used
<smoser> general "how does this work" question: given http://packages.qa.debian.org/p/python-boto.html (and presence of 1.9b-1 there) will ubuntu just pick that up ?
<smoser> or does someone need to do something?
<zul> smoser: it would have to be merged because there is ubuntu changes
<kirkland> mathiaz: ah, right
<zul> smoser: i could take care of that though
<smoser> zul, you mind, i'll just put it together and you look at it ?
<smoser> the diff is minimal.
<zul> smoser: sure
<smoser> zul, so should i open a bug to request the merge ?
<zul> smoser: yep
<zul> smoser: bug #?
<smoser> it's coming
<smoser> i was learning requestsync
<smoser> https://wiki.ubuntu.com/SyncRequestProcess
<smoser> zul, https://bugs.edge.launchpad.net/ubuntu/+source/python-boto/+bug/503541 has branch merge request attached.
<uvirtbot> Launchpad bug 503541 in python-boto "Sync python-boto 1.9b-1 (main) from Debian testing (main)" [Wishlist,New]
<zul> smoser: you only changed the debian directory?
<smoser> correct.
<zul> but there...oh duh never mind
<smoser> you can see the diff that i merged with:
<smoser> bzr diff -rtag:1.8d-1..tag:1.8d-1ubuntu2
<zul> smoser: yeah i wasnt paying attention for a sec
<smoser> the 1ubuntu1 were merged upstream so no need
<ewan_> Hey, can anybody point me in the right direction for setting up my ubuntu server as an IRC server?
<unit3> irc's a little tricksy.
<unit3> generally, you want to figure out which ircd you want to run, and then just use the ubuntu package.
<unit3> the config docs will match the ircd you pick.
<unit3> do you have one in mind?
<zul> smoser: done
<zul> and with that i bid you adieu (gotta go shovel)
<ewan_> not really
<ewan_> i just want something reasonably simple to set up
<unit3> well... i can't recommend anything, I've always found irc servers to be sort of bizarre.
<ewan_> fair enough. i've just read about ircd-hybrid so i'm going to give that a go
<unit3> good luck. :)
<joe-mac> is anybody using the HP management agents that say they are for jaunty only on hardy? we only use lts on our servers and i'm wondering if i can throw these debs in our custom repo...
<nvme> does the ubuntu server edition kernel have a different set of modules than the regular one?
<genii> nvme: Yes
<nvme> genii, where can i get a .deb kernel image for use with drbl ?
<genii> nvme: No idea.
<genii> nvme: If the particular module you need exists for the -server kernel, you can use the initramfs tools to make a new vmlinuz which contains it
<genii> Apparently didn't care for that idea.
<jla> off topic, but could some kind person tell me how to reduce the noise on this channel?
<guntbert> jla: you can hide the joins/parts in your client
<jla> guntbert I am using pidgin under windows, and it seems a remarkably sparse piece of software
<guntbert> jla: look for something like "conference mode"
<guntbert> jla: and there is #pidgin
<jla> guntbert: thanks, it appears to be in the plugins
<cemc> jla: I personally recommend some other app for irc (like xchat, mirc). they seem better suited than pidgin (imho)
<jla> cemc: I concur with you.
<ewan_> i'm using irssi. IRC in console is just too cool to resist
<jla> cemc: do you know if xchat, mirc also support MSN messenger? I have to use it occasionally.
<ewan_> I've set up ircd-hybrid and i can connect to the IRC server from the host (/connect 127.0.0.1) but i can't connect from any other machines on the LAN. i just get connection refused. any ideas?
<jla> ewan: firewall?
<cemc> jla: I don't think they do
<Vog> anyone else having problems importing keys from the keyserver.ubuntu.com ?
<jla> cemc: too much to hope for!
<cemc> Vog: problems like how? it seems to work for me
<jla> ScottK: did you get any answer on the amavis restart ?
<guntbert> Vog: you can use most other gpg servers as well
<cemc> jla: you mean https://bugs.launchpad.net/ubuntu/+source/spamassassin/+bug/502615 ?
<uvirtbot> Launchpad bug 502615 in spamassassin "/etc/cron.daily/spamassassin should restart amavisd" [Undecided,New]
<ewan_> jla: could be. i don't know how to configure the ubuntu server's firewall though. or if i've even installed one
<cemc> ewan_: maybe it's listening only on localhost? check with netstat -nlp
<ewan_> cemc: you may be right. i see "127.0.0.1:6667" but no others ending with :6667
<cemc> ewan_: it seems it's configured to listen only on localhost. you may want to check the config file (if it has one)
<jla> ewan: I'm not sure how to configure the firewall under ubuntu - I usually edit /etc/iptables, however I think ubuntu uses the Uncomplicated Firewall (UFW)
<ewan_> cemc, jla: thank you both, i got it working by editing the config file. could have sworn i already did that, but ah well :D
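[editor's note] For reference, the change ewan_ most likely made: ircd-hybrid reads a listen block from ircd.conf, and binding to all interfaces instead of loopback looks roughly like this. Sketched from memory of the ircd-hybrid 7.x reference.conf, so treat the exact syntax as an assumption:

```
listen {
    # bind to all interfaces rather than only 127.0.0.1
    host = "0.0.0.0";
    port = 6667;
};
```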
<jla> cemc: .../502615 - yes, there was a brief conversation about whether amavis needs a restart
<jla> my 2C was yes, to avoid stale sockets. But I am NOT authoritative
<cemc> jla: I was going to look into it, then I got distracted... I'll update the bugreport ASAP.
<cemc> jla: I'm not sure it's the best thing to restart from crontab. not everybody uses sa-update, and even if... not sure if it's ok to restart amavis from cron. but I'll try to look into it. maybe a HUP if it supports it
<cemc> I was going to try that first
<jdstrand> jla: you can create an iptables script of course, but for a host-based firewall it is recommended you use ufw. See https://wiki.ubuntu.com/UncomplicatedFirewall. For a routing firewall, you might also consider something like shorewall
<jdstrand> jla: that page has links to firewall docs as well (ufw et al)
<Vog> cemc: Was trying to add a key for a launchpad ppa repository. It timed out twice and worked on the third try. Only after an abnormally long pause.
<uvirtbot> New bug: #479836 in euca2ools (main) "euca2ools: requires EC2 certificate from ec2-ami-tools in multiverse" [Low,Triaged] https://launchpad.net/bugs/479836
<cemc> Vog: yeah, that happened to me a couple of times a while back. after some insisting it downloaded the key :)
<marks256> Say if i have two NICs in a server, eth0 and eth1. eth0 goes to internet (on the network 192.168.0.0), and eth1 has a DHCP server (network 192.168.1.0). How do i specify where data goes? if i want to contact 192.168.0.50 (eth0) vs contacting 192.168.1.10 (eth1)
<marks256> how does that even work?
<marks256> In other words, is there a way to tell linux that all requests on the 192.168.1.x network get sent to eth1, but all other requests get sent to eth0?
<Vog> iptables and static routes
<marks256> can you point me to some reading?
<Vog> marks256: https://help.ubuntu.com/community/IptablesHowTo
<marks256> Vog, so the static routes are done using iptables?
<Vog> it isn't easy, there are other methods to do what I *think* you are trying to do.
<marks256> please share :)
<Vog> no, the routes are set up separately
<marks256> oh ok
<Vog> http://www.ubuntugeek.com/howto-add-permanent-static-routes-in-ubuntu.html
<Vog> Honestly I'm just googling the terms and posting links
<Vog> Google is your friend.
<marks256> yes, i know. it's not your friend when you don't know what you want though. i've been googling these terms and have been coming up with what seems to be a lot of good information
<marks256> Basically i want to setup a "private network" on a server of mine with glusterfs. That way all the FS traffic is on its own private network (that can eventually be upgraded to a faster connection, without upgrading a whole building)
<Vog> marks256: If you are trying to setup an easy to configure gateway you might want to look into ebox, pfsense or something like that.
<Vog> Either way though you'll have a lot of reading ahead of you.
<marks256> i don't need internet on the downstream
<marks256> i expected lots of reading :)
<Vog> Why not setup openvpn
<Vog> I'm not familiar with glusterfs at all...
<marks256> openvpn?
<Vog> Yeah it's an open virtual private network
<marks256> glusterfs is a distributed file system. It is scalable to large sizes, and seems to work very well.
<marks256> what would i need vpn for?
<Vog> essentially a private tunnel between 2 or more sites over public networks.
<Vog> But as I said again I don't think I understand what you are trying to achieve :)
<cemc> marks256: I'm not sure I understand what you're asking up there ;)
<marks256> It's more from a bandwidth perspective. I don't want all that file system traffic running on an already bogged down network
<Vog> So you want to do a multi wan?
<Vog> some traffic over one interface the rest over another?
<marks256> Vog, not really?
<marks256> yeah sort of
<marks256> but eth1 will be a dhcp server
<stake> Hallo everybody.
<marks256> actually no it wont. the gluster clients will be static ips
<cemc> marks256: eth0 and eth1 will be connected to the same switch (the LAN switch) ?
<benedikt> Can postfix and apache start with certificates with passphrases (without the need to enter them manually)
<marks256> cemc, no. eth0 will be connected to the internet, and eth1 will be connected to another switch which connects the clients
<marks256> i'll see if i can draw a picture of what i want real quick
<cemc> benedikt: if you save the passphrase in some file to load automatically, then the security of the passphrase is gone, don't you think? :)
<stake> short question: I want to start a copy-job on my server to move a lot of data.
<benedikt> cemc: good point. i didn't really think this through. i'll stick to passphrase-less certificates
<Vog> stake: ok
<stake> When I want to use a ssh tunnel, can I start the job (from another pc) and then logoff the server?
<stake> so it can work on its own?
 * stake is sorry for his poor English.
<cemc> stake: what are you using for copying ?
<cemc> like scp ?
<cemc> copying locally or copying to another server?
<Vog> stake: I suggest tar for the copy, much better with lots of files.
<stake> cemc: Actually I wanted to move the data from disk to disk on the server. More something like cp or mv.
<cemc> stake: use screen
<Vog> good call cemc
<cemc> stake: http://www.gnu.org/software/screen/
<stake> Vog: If I tar the files, the problem isn't solved. I still have to copy/move the tar. ;)
<stake> cemc: ...was reading the man pages. :)
<stake> thx
<Vog> stake you can use tar to copy and move the files as well... :) let me get you a link...
<Vog> http://www.cyberciti.biz/faq/howto-use-tar-command-through-network-over-ssh-session/ but this is over a network... not between drives sorry bout that.
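[editor's note] tar does also cover stake's local disk-to-disk case without an intermediate archive file, by piping one tar into another. A runnable sketch using demo paths under /tmp; the real source and target mount points would differ:

```shell
# set up a demo tree standing in for the source disk
mkdir -p /tmp/olddisk/data/sub /tmp/newdisk/data
echo "payload" > /tmp/olddisk/data/sub/file.txt
# one tar archives to stdout, the other unpacks at the destination;
# preserves permissions, timestamps and symlinks in a single pass
tar -C /tmp/olddisk/data -cf - . | tar -C /tmp/newdisk/data -xf -
```

The same pipe works across ssh (as in the link above) by putting the second tar on the remote side.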
<cemc> stake: you start up screen and you start the copy process in one of the virtual terminals of that screen session. then you can detach from the screen session and logout
<marks256> Vog, cemc here is a (crude) picture of what i want to do http://img10.yfrog.com/img10/4944/networkl.png
<cemc> tomorrow you log back in then reattach to that running screen session and continue where you left off
<Vog> marks looks like a basic network gateway.
<stake> ejat:
<cemc> screen is generally a good tool if you have to do lengthy stuff (like copying for hours) and you have a shaky ssh connection to the server
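[editor's note] The screen workflow cemc describes, as a quick sketch (the session name is arbitrary):

```shell
screen -S copyjob       # start a named session on the server
# ...start the long copy inside it, then detach with Ctrl-a d
# log out; the job keeps running. later, after logging back in:
screen -ls              # list running sessions
screen -d -r copyjob    # reattach, detaching any other attachment first
```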
<stake> sorry.
<marks256> Vog, ok. You may be right. i'll look into it. thanks :)
<cemc> marks256: what's your question about this exactly?
<marks256> my question is, how does pc1 tell the difference between the two addresses 192.168.0.5 and 192.168.1.5
<stake> cemc: jap. That's what I'm looking for.
<stake> cemc: But it has to run on the server right?
<cemc> stake: yes
<cemc> marks256: you have one address each from 192.168.0.x and 192.168.1.x configured on eth0 and eth1 respectively. then it knows that those networks are directly connected to it, on those two nics (if the netmasks are right I guess)
<cemc> marks256: it will send packets to 192.168.1.x on eth1 'automagically', no need to worry about that. just have an IP from 192.168.1.x on eth1
<marks256> so if i were to ping 192.168.0.10, it would know to use eth0, and if i were to ping 192.168.1.15, it would know to use eth1
<benedikt> yes
<benedikt> the routing tables take care of that.
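[editor's note] To see benedikt's point concretely: once each NIC has its address, the kernel installs a "connected" route per subnet automatically, no manual routes needed. Interface names and addresses below follow the discussion; the output shown in comments is illustrative, not exact:

```shell
ip route show
# typical output for this setup would look something like:
#   192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.5
#   192.168.1.0/24 dev eth1  proto kernel  scope link  src 192.168.1.5
# packets to 192.168.1.x leave via eth1; everything else matches
# eth0's subnet route or the default route
```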
<marks256> huh. well now if only i can figure out how to set this up :)
<cemc> yes, if everything's configured alright, it should
<marks256> haha. IF everything's configured right ;)
<marks256> so basically i need routing tables?
<cemc> well... :)
<benedikt> marks256: no, it takes care of that itself. you can check then by typing "ip route"
<marks256> so all i have to do is simply set up eth1 to be a host, and voila, everything will do it on its own?
<ScottK> jla: I've been offline most of the day.  I didn't hear back from ivoks yet.
<benedikt> probably. depends on if you want the other pc's to also connect to the internet
<marks256> benedikt, nah. if i need to do that, i've done it before, so it shouldn't be a problem.
<marks256> Sweet. Thanks for the help all!
<benedikt> then it will just be fine if you configure the right interface with the right ip address
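[editor's note] A sketch of the two-NIC setup in /etc/network/interfaces. The addresses follow marks256's diagram but are illustrative, and the gateway address is an assumption; only the internet-facing NIC gets one:

```
auto eth0
iface eth0 inet static
    address 192.168.0.5
    netmask 255.255.255.0
    gateway 192.168.0.1

auto eth1
iface eth1 inet static
    address 192.168.1.5
    netmask 255.255.255.0
```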
<stake> cemc: Ehm... I started screen. Started my job and logged off. Login again and looked in "top" if the job still runs. It does. ... How do I get back to the "screen"? :)
<benedikt> screen -r <pid>
<benedikt> or just screen -r if you only have one
<cemc> stake: I use screen -d -r
<benedikt> they can be named somehow. i forget how.
<stake> cemc: Great. Thanks man!
<cemc> if you have more than one, a list will appear and you can choose
<stake> You made my ... oh, a new day. ;)
<cemc> benedikt: that's -S i think
<stake> Really cool tool.
<stake> Bye then!
<cemc> bye
<benedikt> cemc: ah, right. thanks.
<mathiaz> kirkland: hey
<kirkland> mathiaz: yo
<mathiaz> kirkland: is there any reason why eucalyptus upstart job is shipped in eucalyptus-common?
<mathiaz> kirkland: it relies on eucalyptus-cloud, which is not part of eucalyptus-common?
<kirkland> mathiaz: are you questioning the existence of the job, or the location?
<mathiaz> kirkland: on the NC there is an eucalyptus.conf file
<mathiaz> kirkland: the location
<mathiaz> kirkland: It seems that the upstart job should be part of eucalyptus-cloud
<mathiaz> kirkland: rather than eucalyptus-common
<kirkland> mathiaz: i think that's true ...
<kirkland> mathiaz: i believe it was there so that you could just:
<mathiaz> kirkland: as the job fails to start on NC
<kirkland> sudo stop eucalyptus
<kirkland> sudo start eucalyptus
<kirkland> anywhere you are
<kirkland> mathiaz: and it would automatically start/stop the right components for that system
<mathiaz> kirkland: well - on an NC, sudo start eucalyptus doesn't work
<mathiaz> kirkland: hmmm
<mathiaz> kirkland: let me re-read the job
<kirkland> mathiaz: correct ...
<kirkland> mathiaz: i have a hack that makes that work
<kirkland> mathiaz: but slangasek nacked it (softly)
<mathiaz> kirkland: hm - well - the problem I see is that on an NC, the job doesn't do anything
<mathiaz> kirkland: but it's still marked as respawn
<mathiaz> kirkland: hm well - it's this line: [ -n "$services" ] || { stop; exit 0; }
<mathiaz> kirkland: oh well - that works I suppose
<mathiaz> kirkland: I've found out why I couldn't discover the nodes
<mathiaz> kirkland: bug 446036
<uvirtbot> Launchpad bug 446036 in eucalyptus "The node controller sometimes fails to start if libvirt is not ready" [Medium,Triaged] https://launchpad.net/bugs/446036
<mathiaz> kirkland: if eucalyptus-nc fails to start because libvirt is not ready, then the eucalyptus-nc-publication job is not started either
<mathiaz> kirkland: is automatic registration working?
<mathiaz> kirkland: hm - seems so
<mathiaz> kirkland: there is no need to run sudo euca_conf --no-rsync --discover-nodes anymore?
<mathiaz> kirkland: yop - auto-registration works!
<kirkland> mathiaz: autoreg is working for me too
<mathiaz> kirkland: cool - I didn't know about that
<mathiaz> kirkland: I'll update the test instructions then
<mathiaz> kirkland: as there is no need for the sudo euca_conf --no-rsync --discover-nodes anymore
<pilif12p> Can i dualboot desktop and server ?
<kirkland> mathiaz: okay, so i'm looking at the nc upstart scripts ...
<mathiaz> kirkland: IMO libvirt should be upstartified
<mathiaz> kirkland: so that eucalyptus-nc can start on started libvirt
<mathiaz> kirkland: the problem is that the pre-start script checks if libvirt is running
<mathiaz> kirkland: euca_test_nc $HYPERVISOR > /var/log/eucalyptus/euca_test_nc.log 2>&1 || exit 1
 * kirkland looks at libvirt's init script
<mathiaz> kirkland: ^^ that fails - and then the job is just stopped
<mathiaz> kirkland: even though it's marked respawn
<mathiaz> kirkland: but respawn only applies to the exec part of the job (which is apache2)
<kirkland> mathiaz: hmm, well if respawn isn't working properly we need to poke slangasek and keybuk
<kirkland> mathiaz: hmm, should we move that to the exec?
<kirkland> jdstrand: do you have any plans to upstartify libvirt-bin?
<mathiaz> kirkland: well
<mathiaz> kirkland: I think it makes sense to have apache2 executed here
<mathiaz> kirkland: so that it's directly supervised by init
<kirkland> mathiaz: yeah
<mathiaz> kirkland: and the reason it fails is because of euca_test_nc $HYPERVISOR > /var/log/eucalyptus/euca_test_nc.log 2>&1 || exit 1
<mathiaz> kirkland: ^^ that should be an upstart event *somehow*
<mathiaz> kirkland: ie - don't start eucalyptus-nc if euca_test_nc is not successful
<kirkland> mathiaz: libvirt-bin looks pretty upstartable
<mathiaz> kirkland: I wonder what euca_test_nc does?
<mathiaz> kirkland: hm - seems like it mainly checks whether libvirt is running correctly
#ubuntu-server 2010-01-06
<pilif12p> Which option do i use to dual boot?
<kirkland> mathiaz: yo
<kirkland> mathiaz: wanna test a libvirt-bin upstart script for me in your env where you're having node troubles?
<pilif12p> Hello again. I'm confused as how to dualboot
<tos__> how do i set the mac address to use when adding a network alias such as eth0:0 75.1.75.1
<JohnA> help gui
<jla> jdstrand: just got back from supper. an overly complex method for setting up a fw.
<JohnA> jdstrand: I really need to learn to type.
<jdstrand> kirkland`: re libvirt/upstart-- I do not
 * jdstrand wonders how 'ufw allow OpenSSH ; ufw enable' is so complex. *shrug*
 * jdstrand wanders off
<JohnA> a
<kirkland`> jdstrand: cool, i have one, working well for me
<kirkland`> jdstrand: are you interested in helping test it?  :-) :-) :-)
<erichammond> smoser, et al: Do you remember where the document is describing the release policy/schedule for official Ubuntu AMIs?
<smoser> https://wiki.ubuntu.com/UEC/Images/RefreshPolicy
<smoser> it's on my plate to get that officially ratified and agreed to by QA and the tech board
<smoser> at the moment i'm leaning towards simplifying the conditions for update to be something like:
<smoser> a. security or major issue
<smoser> b. monthly unless not necessary
<erichammond> I'd like to make a request for updates to be released for Karmic (and optionally Hardy).
<smoser> rather than some other more complex to predict update policy
<smoser> you mean *now* ? or in general.
<erichammond> The apt-get upgrade on Karmic has an ugly manual grub config prompt which can't be avoided with DEBIAN_FRONTEND=noninteractive
<smoser> the policy does say it covers karmic and hardy
<smoser> is there a bug open?
<erichammond> smoser: now as in "some day soon"
<erichammond> Would you like me to create a bug in launchpad?
<erichammond> will do
<erichammond> I like your monthly/major issue proposal
<erichammond> Ubuntu is very schedule driven in other ways so that folks can do their planning around it.
<erichammond> Hm, what package should I use as an argument to ubuntu-bug when I'm submitting a report against the AMI building software/process?
<zul> smoser: still around?
<smoser> here
<zul> python-boto got rejected because the changelog still said karmic ill fix it and re-upload it
<smoser> bugger. sorry.
<zul> no problem ill fix it and re-upload it
<erichammond> smoser: bug 503649 for your enjoyment
<uvirtbot> Launchpad bug 503649 in ubuntu "Release updated official Ubuntu EC2 AMIs for Karmic, Hardy" [Undecided,New] https://launchpad.net/bugs/503649
<smoser> oh, erichammond i actually was wondering about the grub update bug :-(
<erichammond> Well, the info there could be turned into such a bug if it really is a bug.
<erichammond> I don't like the behavior, but didn't know if it was intended.
<zul> smoser: fixed
<tos__> hi, i am trying to make use of all my ips... i have ubuntu-server installed with 1 NIC, and i'm trying to get each ip on that NIC, but my router only allows 1 ip per MAC. how can i make virtual NICs with different macs?
<tos__> the eth0:0-1-2-3-4 doesn't work same mac for each
<tos__> aliasing
<tos__> !vzeth
<majuk> Hey guys, I have a group Admins and a group Inst. They both have rw to /srv/instr but only Admin has rw to /srv/admin. The problem I have is users in Admin are dumping folders into /srv/instr and getting their group, thus making instr group unable to see them.
<majuk> I've found the setgid bit for the folder, but that only modifies moved/created files, not directories
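[editor's note] The setgid bit on a directory does in fact propagate to newly *created* subdirectories as well as files; the remaining gap is files *moved* in, which keep their old group. A runnable demo under /tmp; on the real box the path would be /srv/instr and the group 'inst':

```shell
mkdir -p /tmp/srv/instr
chgrp "$(id -gn)" /tmp/srv/instr    # stand-in group for the demo
chmod 2775 /tmp/srv/instr           # leading 2 = setgid bit
mkdir /tmp/srv/instr/newdir         # inherits the group AND the setgid bit
```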
<tos__> ok so i tried to install OpenVZ to assist with creating a virtual ethernet interface. However i'm using Ubuntu 9.10, and have never recompiled a kernel or anything like that... does anyone know of another way to accomplish this? My router only assigns 1 ip per MAC address and I can't change the router, so I need a virtual solution... is there an easy way for ubuntu, with a HOW-TO and not too much config'ing?
<qman__> majuk, I have a similar situation, though I'm unaware of an enforcing solution
<qman__> I use a create mask in samba and a daily cron job to work around the problem
<majuk> qman__, Yeesh
<qman__> yeah
<qman__> I didn't say it was a good solution
<qman__> if there's a better way, I'm certainly interested
<majuk> I think I might have found one.
<majuk> qman__, Guy in #gentoo said it is determined by the program creating the folder. So there is probably a setting in Samba
<majuk> Just a guess. I looked through the man doc, but I'll look again.
<qman__> yeah, with samba you can force group and create mask
<qman__> but it only helps with files created via samba, stuff put in other ways can be a problem
<majuk> I know the create mask, but I only found a force user option
<majuk> Yea, my users don't. Windows kiddies.
<majuk> :D
<qman__> ah
<qman__> then samba should support everything you need
<majuk> Well then I'll take my pick-axe back to the man pages.
<qman__> force group = groupname
<qman__> create mask = 0664
<qman__> directory mask = 0775
<qman__> that's what I use
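[editor's note] Put together as an smb.conf share stanza. The share name and path follow majuk's setup; the group name and exact masks are illustrative:

```
[instr]
    path = /srv/instr
    valid users = @Admins, @Inst
    writable = yes
    force group = Inst
    create mask = 0660
    directory mask = 2770
```

The leading 2 on directory mask keeps the setgid bit on new directories, so non-samba writes inherit the group too.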
<majuk> Yea I have my everyone locked out for these folders. Hence the problem with my 2 groups not being able to see one another's files.
<majuk> 0660
<uvirtbot> New bug: #461301 in eucalyptus "euca-run-instances unnecessarily encodes user data (dup-of: 461156)" [Undecided,Fix committed] https://launchpad.net/bugs/461301
<JohnA> quit
<xiaomai> i've mounted an nfs partition rw, but when i attempt to write to that partition (as root), i get permission denied errors.  how can that be?
<alvin> xiaomai: There are different possibilities here. NFS3 of NFS4?
<xiaomai> alvin: i think it's nfs3.  i've chmod 777 on the root directory and now i get this weird 4294967294 uid/gid on the files i create
<alvin> xiaomai: That's the reason. What line is in /etc/exports on your NFS server?
<xiaomai> alvin: i've got "/mntpoint    myip/range(rw,sync,no_subtree_check)"
<xiaomai> er, i guess mntpoint is a directory (/data/webserver (it's just part of the root filesystem))
<alvin> xiaomai: add 'no_root_squash' to your options
<alvin> then do $ sudo exportfs -av and remount the share
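[editor's note] The resulting export line would look like this; the client range stands in for whatever xiaomai's "myip/range" was. The odd 4294967294 uid/gid seen above is -2 as an unsigned 32-bit integer, the default anonymous id that root gets squashed to without this option:

```
# /etc/exports on the NFS server
/data/webserver  192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing, run `sudo exportfs -av` on the server and remount on the client, as alvin says. Note no_root_squash lets any client root act as root on the export, so only use it for trusted hosts.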
<Jaredster> has anyone here had troubles with CUPS after upgrading to karmic?
<Jaredster> I cant get it to share anymore over the network
<Jaredster> I've tried everything I can think of.
<Jaredster> it worked fine on jaunty
<xiaomai> alvin: beautiful, that did the trick, thanks!
<alvin> xiaomai: You're welcome
<xiaomai> alvin: i saw that option in the manpage, but it didn't seem like i needed it--guess i did
<jiboumans> morning
<Jaredster> ipp doesnt work in 9.10 apparently?
<twb> Jaredster: that's not a question.
<Jaredster> thank you
<_ruben> hmm .. is it "normal" for the `du` command to not work properly on fuse mounts?
<_ruben> it shows 0 bytes for everything
<_ruben> ls does show proper filesizes
<twb> _ruben: --apparent-size ?
<_ruben> twb: thanks .. that seems to do the trick ..
 * _ruben hides in shame for not even looking at the manpage of du
<twb> I am guessing that you're looking at files that don't have any blocks on a physical disk or something
<twb> Or maybe your FUSE implementation just doesn't implement those ioctls
<twb> s/implementation/filesystem/, I mean. e.g. sshfs
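[editor's note] The distinction twb is drawing is easy to demonstrate with a sparse file, which has a large apparent size but few or no allocated blocks:

```shell
# create a 1 MiB file without writing any data blocks
truncate -s 1M /tmp/sparse.img
du -k /tmp/sparse.img                   # allocated blocks (usually 0 here)
du -k --apparent-size /tmp/sparse.img   # 1024, the size the file claims
```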
<_ruben> they're fuse mount of vmfs volumes (vmware luns)
<twb> Yecch
<twb> How does that relate to vmware's hgfs?
<_ruben> not at all
<_ruben> hgfs is a special fs to enable drag'n'drop for virtual machines on hosted products ... vmfs is used to store disk images on
<twb> Oh, so kinda like using LVM and giving a VM direct access to an LV (instead of a file on a filesystem) for its block device.
 * twb reads wikipedia
<Jaredster> I'm trying to configure cups right now to share a printer over the network
<Jaredster> and no other computer on the network can see it
<Jaredster> it worked fine with 9.04, but when I upgraded to 9.10 it broke.
<Jaredster> has anyone else dealt with this?
<twb> I stick to LTS, sorry.
<_ruben> twb: not really, basically vmfs is just another cluster fs like ocfs2 .. though with some features like thick and thin provisioning
<twb> _ruben: okey dokey
<Jaredster> what do you do if you delete an init script and need to restore it?
<Jaredster> nevermind, got it
<twb> Jaredster: sudo dpkg-divert --rename?
<twb> Oh, "if you deleteD"
<Jaredster> yeah
<Jaredster> I purged it and reinstalled
<Jaredster> I was going to do it anyway
<jiboumans> ttx: ping?
<ttx> jiboumans: o/
<jiboumans> ready?
<ttx> jiboumans: fire
<jiboumans> awesome :)
<ivoks> ttx: thank you for reminder :)
 * kirkland going for a run; will try to be back for start of meeting
<zul> morning
<uvirtbot> New bug: #503008 in dovecot (main) "Dovecot-auth doesn't close sockets" [Medium,Incomplete] https://launchpad.net/bugs/503008
<uvirtbot> New bug: #503777 in squid (main) "Loop with moved_temporarily and when using storeurl" [Undecided,New] https://launchpad.net/bugs/503777
<ttx> kirkland: for some mysterious upstart reason, my "start on (started ssh and started avahi-daemon)" publication scripts are broken in the current euca. I'm investigating
<ttx> kirkland: for some reason when ssh/avahi-daemon restart when picking up eth0, the publication task is *not* restarted.
<mathiaz> ttx: are the upstart jobs marked respawn?
<mathiaz> ttx: if the avahi-daemon dies, avahi-publish isn't restarted
<ttx> mathiaz: that's the lead i've been following... but marking them respawn doesn't help. Furthermore, shouldn't it get restarted when ssh and avahi-daemon are "started" again ?
<ttx> it gets properly killed by the "stop on stopping ssh or stopping avahi-daemon"
<mathiaz> ttx: I'm not sure that a new interface coming up generates a started sshd/avahi-daemon upstart event
<ttx> but doesn't get restarted when ssh and avahi-daemon are started again
<ttx> hmm
<mathiaz> ttx: hm... I don't know then
<ttx> so they would stay started
<mathiaz> ttx: I'd ask keybuk about it
<ttx> and avahi-publish would get SIGTERMed when avahi-daemon dies
<ttx> I will
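[editor's note] The job pattern ttx describes looks roughly like the sketch below (upstart 0.6 syntax; the filename, service name and exec line are placeholders). The subtlety being chased: respawn only re-runs the exec'd process if it dies, it does not re-evaluate the start on condition, and the condition only fires on the named events being emitted again:

```
# /etc/init/publish.conf -- illustrative only
start on (started ssh and started avahi-daemon)
stop on (stopping ssh or stopping avahi-daemon)
respawn
exec avahi-publish -s "my-service" _example._tcp 1234
```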
<jdstrand> kirkland: re upstart/libvirt> I can test it, sure
<kirkland> jdstrand: i was having trouble building lucid's libvirt locally
<kirkland> jdstrand: was failing some tests
<jdstrand> kirkland: the test suite is flaky in spots. did you build in a deep directory?
<kirkland> jdstrand: /local/source/libvirt/libvirt-0.7.2$
<kirkland> jdstrand: http://paste.ubuntu.com/352342/
<kirkland> jdstrand: interestingly, it built in my ppa
<kirkland> jdstrand: but not on my local system
<jdstrand> kirkland: are you using pbuilder?
<kirkland> jdstrand: not for this
<kirkland> jdstrand: is that a requirement for the test to work?
<jdstrand> I use sbuild, and it built fine
<jdstrand> kirkland: all the vmx2xml stuff dies cause it can't find something
<jdstrand> kirkland: I'd look at the build log in LP and make sure you have everything installed
<kirkland> jdstrand: okay
<jdstrand> kirkland: beyond that, soren was working on enabling the test suite-- so I'd ping him on problems there
<soren> hm?
<jdstrand> soren: libvirt testsuite not working for kirkland. seems a dep isn't installed, but he'll see
<kirkland> jdstrand: also, there's a regression in debian/control build deps
<soren> kirkland: Which test fails?
<kirkland> jdstrand: seems that open-iscsi snuck back in there (rather than open-iscsi-utils)
<kirkland> soren: http://paste.ubuntu.com/352342/
<jdstrand> kirkland: oops. can you fix?
<kirkland> jdstrand: yeah
<kirkland> jdstrand: i have it fixed with the libvirt stuff i'm working on
<soren> kirkland: *shrug* No clue.
<kirkland> soren: k
<uvirtbot> New bug: #503396 in php5 (main) "canary mismatch on efree() " [Medium,Confirmed] https://launchpad.net/bugs/503396
<uvirtbot> New bug: #503827 in openldap (main) "Always I want to update the system it crashes with that stuff" [Undecided,New] https://launchpad.net/bugs/503827
<garymc> Hi guys anyone know how i can resett my my phpmyadmin user and password, i seem to have forgotten them
<jdstrand> soren: btw, I don't mind doing the libvirt merge before it hits testing (in fact, I already looked at it earlier this week ;) but since it was such a big change, I wanted to see what bugs came out of it. Also, kirkland is working on libvirt/upstart and I have some other things I'm working on atm, so saying 'when it hits testing' more or less gave an accurate timeframe ;)
<kirkland> jdstrand: cool
<kirkland> jdstrand: yeah, i hope to upload the upstart change today
<jdstrand> kirkland: nice
<kirkland> jdstrand: i sent it to my PPA, while i figure out why it's not building for me locally
<kirkland> jdstrand: side benefit of getting others testing it
<smoser> kirkland, ttx you know what networking device the instances in UEC get ?
<Davidf88> anyone know about the enterprise cloud stuff?
<kirkland> smoser: you mean eth*?
<smoser> ie, what -net nic,model=XXXXXX
<uvirtbot> New bug: #503850 in eucalyptus (main) "Upstart publication scripts no longer run" [Undecided,New] https://launchpad.net/bugs/503850
<Davidf88> Guys, hoping someone can help
<Davidf88> I have a 64 bit cluster and 64 bit node
<Davidf88> however having an issue with running instances
<Davidf88> they start, run then terminate straight away
<Davidf88> anyone got any ideas?
<smoser> Davidf88, need more info, this is karmic ? instance and uec?
<smoser> one thing to try is using a larger --instance-type
<Davidf88> smoser: its karmic yes
<Davidf88> yeah when using the larger --instance-type
<Davidf88> it just does the same thing
<Davidf88> but prolongs it
<smoser> kirkland, if you have an instance running, you can just dump its libvirt xml for me
<smoser> or even just ps -ww for the cmdline of kvm
<Davidf88> smoser: I am also getting this
<Davidf88> in MANAGED-NOVLAN mode, priv interface 'eth0' must be a bridge, tunneling disabled
<Davidf88> in cc.log
<smoser> Davidf88, i'm sorry, the instance-type was really the only suggestion i had. i've had ones that were too small mysteriously dying
<Davidf88> ok smoser somehow now have it running :s bit of a cluster-f**k to be honest
<mathiaz> nijaba`: I need to refine the script
<mathiaz> nijaba`: and make sure it gives out the correct packages
<nijaba`> mathiaz: have you looked at the python rewrite that mvo has done
<nijaba`> ?
<mathiaz> nijaba`: which one?
<nijaba`> mathiaz: ubuntu-maintenance-check
<nijaba`> mathiaz: he has rewritten it in python and verified the results
<mathiaz> nijaba`: does this grab the list of packages published by the security team?
<nijaba`> mathiaz: nope, it use seeds/germinate
<nijaba`> mathiaz: https://code.launchpad.net/~mvo/ubuntu-maintenance-check/python-port
<mathiaz> nijaba`: ah ok. I'll have a look at it then
<mathiaz> nijaba`: hm - would the result of the script include things like basic libraries?
<mathiaz> nijaba`: such as libopenssl for example?
<nijaba`> mathiaz: every package that is in main because of a server related seed
<mathiaz> nijaba`: IIUC libopenssl is maintained for 5 years, but not relevant to the server team
<mathiaz> nijaba`: right - getting the list of -server related packages is a bit more difficult
<mathiaz> nijaba`: as we're looking for packages that are *only* pulled in by a server seed
<mathiaz> nijaba`: plus a couple of others
<nijaba`> mathiaz: well, what product do we have that is maintained for 5y which is not server related?
<mathiaz> nijaba`: the perspective here is from the server team - and get the list of packages related to it
<mathiaz> nijaba`: example: libopenssl is maintained for 5 years
<nijaba`> mathiaz: or do you exclude packages maintained by the foundation team, but which are used by the server?
<mathiaz> nijaba`: however I don't think that ubuntu-server should be a bug contact
<mathiaz> nijaba`: seems like a foundation thing to me
<mathiaz> nijaba`: right
<mathiaz> nijaba`: I'd like to exclude packages that are maintained by other team
<nijaba`> mathiaz: ok, so you do not want all server-seeded packages, only the ones assigned to the server team, which is not tracked in seeds
<mathiaz> nijaba`: so we need to have a more fine-grained selection
<nijaba`> mathiaz: yep
<nijaba`> mathiaz: why do you add a ` to my handle?
<mathiaz> nijaba`: the script I wrote selects packages that: 1. are *directly* seeded in a server seed
<nijaba`> mathiaz: ok, makes sense
<mathiaz> 2. are *only* pulled in by a server seed
<mathiaz> nijaba`: because that's what irssi completes to
<mathiaz> nijaba`: at least that's how I see your nickname
<nijaba`> hmm... weird.  should not...
<nijaba> should be fixed...
<rj175> Hello, I now have a instance running how do I ssh to it?!
<nijaba> rj175: ssh -i key ubuntu@ipaddress?
<rj175> nijaba: ssh: connect to host 192.168.1.35 port 22: No route to host
<nijaba> rj175: well, looks like you have a routing problem, then, or you are not specifying the public address
<rj175> nijaba: how can I debug this issue?
<nijaba> rj175: are you on uec or ec2?
<rj175> in using ubuntu 9.10 with eucalyptus
<nijaba> rj175: ok. In that case, what is the public ip address range you specified during install?
<rj175> 192.168.1.35-39
<nijaba> and what is your station ip address?
<rj175> u mean the cluster and node ips?
<rj175> nijaba: the cluster (the web front) is 192.168.1.32 and the node is 192.168.1.33
<nijaba> nope, the machine from which you are trying to ssh to your instance
<rj175> 192.168.1.82
<nijaba> rj175: can you ping .35 from .82?
<sub> Anyone know if HP's support pack available in any of the standard repos?
<rj175> nijaba: no
<nijaba> rj175: then your instance is not fully up, it seems.
<rj175> nijaba: the instance says its running
<nijaba> rj175: what does euca-describe-instance report?
<rj175> INSTANCE        i-38FC0669      emi-E01C1076    192.168.1.35    172.19.1.2      running         mykey   0       m1.large        2010-01-06T16:20:12.528Z        cluster1        eki-F69E10F7    eri-0B151165
<nijaba> rj175: hmm...  then you should have a look at the boot messages from that instance.  sounds like it got stuck somewhere
<rj175> nijaba: where can i find the boot message?
<nijaba> rj175: euca-get-console-output
<ttx> smoser: ping
<smoser> here
<rj175> nijaba: all i get is:
<rj175> i-38FC0669
<rj175> 2010-01-06T16:39:33.132Z
<nijaba> rj175: you should really have something from the console.   Can you see what it says with --debug?
<rj175> it says the exact same
<rj175> let me try stopping it and running it again
<rj175> nijaba: just the same :()
<nijaba> rj175: :/
<rj175> nijaba: the is one thing in the cc.log, in MANAGED-NOVLAN mode, priv interface 'eth0' must be a bridge, tunneling disabled
<Doonz> Ok here's my issue. server 1 can log into server 2 through ssh without password authentication. but server 2 cannot log into server 1 without having to enter a password. Im trying to set up rsync from server 2 to server 1 without being prompted for password
<alex_joni> a server cannot log into another server
<alex_joni> a user logs in to another server
<alex_joni> you can setup users to not require a password using shared keys, examine your ~/.ssh/ folder
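The setup alex_joni is describing can be sketched as follows; usernames, hostnames, and paths are placeholders, and the key is generated into a scratch directory purely for illustration (on the real server you would use the default ~/.ssh/id_rsa):

```shell
# Generate a passphrase-less keypair for user1 (scratch dir for the demo;
# on server2 you would accept the default ~/.ssh/id_rsa path instead).
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa"

# This public half is what must end up in ~user1/.ssh/authorized_keys on
# server1; normally you'd copy it across with:  ssh-copy-id user1@server1
cat "$keydir/id_rsa.pub"

# Gotcha: sshd silently ignores authorized_keys if the home directory,
# ~/.ssh, or the file itself is group/world-writable, so on server1:
#   chmod 755 ~ && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
```

Note that each direction needs its own key: a key generated on server1 lets server1's user1 into server2, not the reverse.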
<jiboumans> zul++ # burner of charts
<uvirtbot> New bug: #503875 in rabbitmq-server (main) "Sync rabbitmq-server 1.7.0-3 (main) from Debian testing (main)" [Wishlist,New] https://launchpad.net/bugs/503875
<zul> not when im eating
<Doonz> alex_joni: yes
<Doonz> user1 exists on both servers
<Doonz> user1 runs the rsync job
<Doonz> user1 on server1 can just type ssh server2 and log into terminal. user1 on server2 runs the rsync command. but when user1 tries to ssh into server1 he is prompted for a password
<mathiaz> kirkland: jdstrand: where is the libvirt upstart job you worked on yesterday?
<jdstrand> mathiaz: I have not worked on it (presumably in kirkland's ppa)
<uvirtbot> New bug: #502518 in mysql-dfsg-5.0 (universe) "Lost connection to MySQL server during query" [Medium,Incomplete] https://launchpad.net/bugs/502518
<kirkland> mathiaz: http://paste.ubuntu.com/352435/
<nod> hi - i've just done a vanilla install of 9.10 cloud with 1 cluster head and one node.  my store is empty of images, yet the docs seem to say i should have some default images there.
<nod> i don't mind bundling my own (and intend to) but I'm just curious if I've not missed something and therefore my installation isn't complete.
<nod> my node is being detected and the zone listed
<mathiaz> kirkland: great. I'll give it a try in daily UEC testing
<TeTeT> nod: there should be images in the image store, but not deployed in the cloud
<pmatulis> anyone have any experience with cyrus2dovecot (migration from cyrus to dovecot)?
<zooko_sg> What's your favorite way to install Ubuntu as a guest in a VirtualBox?
<kirkland> mathiaz: hey
<kirkland> mathiaz: i have my pxe boot/install working
<mathiaz> kirkland: howdy sunland!
<kirkland> mathiaz: can you point me to your preseed files you're using for UEC?
<kirkland> ;-)
<zooko_sg> I guess I'll just dl the .iso and do an install...
<mathiaz> kirkland: sure - let me generate one
<kirkland> mathiaz: gimme a nc to start with
<smoser> kirkland, when you do get an instance up and running, please do a 'ps -aww | grep kvm' for me
<smoser> i just want to see what kvm is invoked as
<kirkland> smoser: will do
<smoser> which, i think, will tell me what network driver is being used.
<mathiaz> kirkland: http://people.canonical.com/~mathiaz/uec-node.preseed
<mathiaz> kirkland: you should have a quick look at it
<mathiaz> kirkland: as you may wanna change some information
<mathiaz> kirkland: the file is documented
<kirkland> mathiaz: of course ;-)
<mathiaz> kirkland: the key part is: d-i anna/choose_modules string eucalyptus-udeb
<kirkland> mathiaz: so where does this go in my tftpboot dir?
<mathiaz> kirkland: that line will kick the UEC installer component
<kirkland> mathiaz: or does it go with my http deb dir?
<mathiaz> kirkland: hm - multiple options
<mathiaz> kirkland: easiest is http deb dir
<mathiaz> kirkland: then you point the install to preseed=http://server/preseed via the command line
<mathiaz> kirkland: the most elegant is to stick into the ramdisk sent to the system
<mathiaz> kirkland: but that means regenerating the ramdisk
<kirkland> mathiaz: hrm
<mathiaz> kirkland: IIRC the ramdisk is fetched via tftp
<kirkland> mathiaz: okay, so this is perhaps where the cgi-bin comes in?
<kirkland> mathiaz: b/c i'd like the install to be automatic
<kirkland> mathiaz: in case i'm not at the console of the installing machine
<mathiaz> kirkland: right - both methods will be automatic
<mathiaz> kirkland: so if you use the http based method, you'll have to bootstrap a couple of debconf question via the kernel command line
<kirkland> mathiaz: oh?  so the name "preseed" is special?  a netboot always looks for that?
<mathiaz> kirkland: oh no. I made that up
<mathiaz> kirkland: on the kernel command line, you'll add: preseed=http://my-server/preseed
<mathiaz> kirkland: that means the installer will download the preseed as soon as it can (ie after networking is up)
<kirkland> mathiaz: okay -- how do i do that automatically?
<mathiaz> kirkland: that's through the dhcp option IIRC
 * mathiaz thinks
<kirkland> mathiaz: something in the install/netboot dir?
<kirkland> mathiaz: i was bind/loop mounting it
<kirkland> mathiaz: but i can create a rw copy
<kirkland> mathiaz: and edit that into there
<Doonz> user1 on server1 can just type ssh server2 and log into terminal. user1 on server2 runs the rsync command. but when user1 tries to ssh into server1 he is prompted for a password
<Doonz> i have added the id_rsa.pub to the servers authorized keys
<mathiaz> kirkland: it's set in the pxelinux.cfg file
<mathiaz> kirkland: http://archive.ubuntu.com/ubuntu/dists/lucid/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/
<kirkland> mathiaz: okay, cool
<mathiaz> kirkland: ^^ this should be the root of your tftpboot directory
<mathiaz> kirkland: the dhcp server points to pxelinux.0
<mathiaz> kirkland: then pxelinux.0 download its configuration from pxelinux.cfg/
<mathiaz> kirkland: it tries a list of well-known filenames
<mathiaz> kirkland: starting with the mac-address, then the IP address in HEX, and finally default
<mathiaz> kirkland: you'd have to look up the configuration format of pxelinux
<mathiaz> kirkland: IIRC you can specify which kernel, initrd should be downloaded
<mathiaz> kirkland: as well as the kernel command line
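The filename search mathiaz describes can be sketched with a tiny helper; the function name is made up, but the encoding, each IPv4 octet as two uppercase hex digits, is what pxelinux actually uses before it falls back through progressively shorter prefixes and finally 'default':

```shell
# pxelinux.cfg/ lookup order: 01-<mac-with-dashes>, then the client IP in
# hex (C0A80121, then C0A8012, C0A801, ... C), and finally 'default'.
ip_to_pxe_hex() {
    # Word-splitting the octets into four printf arguments is intentional.
    printf '%02X%02X%02X%02X' $(echo "$1" | tr '.' ' ')
}

ip_to_pxe_hex 192.168.1.33; echo   # -> C0A80121
```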
<kirkland> mathiaz: awesome
<kirkland> mathiaz: just what i need
<mathiaz> kirkland: the tricky part is to manage the pxelinux.cfg/ directory
<mathiaz> kirkland: what you'd usually do is have the default file make the system boot from the local harddrive
<kirkland> mathiaz: right
<mathiaz> kirkland: if you want to (re)install the system, you just add a specific filename with the correct installation information
<mathiaz> kirkland: and then you need a script that will automatically remove the specific filename when the installation has completed
<mathiaz> kirkland: so that upon reboot the system doesn't reinstall
<mathiaz> kirkland: one way to do that is to write a cgi script that gets the IP address of the installed system
<sub> ^ that's how I've done it
<uvirtbot> sub: Error: "that's" is not a valid command.
<mathiaz> kirkland: and deletes the specific filename (using the HEX version of the IP address)
<sub> python CGI script to remove a symlink for the requesting IP address
<mathiaz> kirkland: and you just need to call the cgi script using a late_command in the preseed file
<mathiaz> sub: right - using symlinks helps as well
<mathiaz> kirkland: ^^ you can setup different installation profiles
<mathiaz> kirkland: and then all you need to do is to create a symlink to the installation profile whenever you want to reinstall a node
<mathiaz> kirkland: and then reboot the node
<mathiaz> kirkland: another way to detect when an installation has been done is to scan syslog
<mathiaz> kirkland: you'll see request for tftp
<mathiaz> kirkland: when the pxelinux configuration file has been requested via tftp by the client, you can also erase it
<mathiaz> kirkland: that way you don't have to call for the cgi script at the end of the installation
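A minimal sketch of the cleanup CGI being described; the tftp directory path and function names are assumptions, and a real deployment would sit behind a web server, with the late_command in the preseed simply fetching the script's URL:

```python
#!/usr/bin/env python3
# Delete the per-host pxelinux.cfg symlink for the client that calls this
# at the end of its install, so its next PXE boot falls through to the
# 'default' entry (boot from local disk) instead of reinstalling.
import os

PXE_CFG_DIR = "/var/lib/tftpboot/pxelinux.cfg"  # assumed tftp layout


def pxe_hex(ip):
    """Return the uppercase hex filename pxelinux derives from an IPv4 address."""
    return "".join("%02X" % int(octet) for octet in ip.split("."))


def handle_request(client_ip, cfg_dir=PXE_CFG_DIR):
    """Remove the client's profile symlink; return True if one was removed."""
    path = os.path.join(cfg_dir, pxe_hex(client_ip))
    if os.path.islink(path):
        os.remove(path)
        return True
    return False


if __name__ == "__main__":
    # Under CGI the web server supplies the caller's address:
    client = os.environ.get("REMOTE_ADDR", "")
    print("Content-Type: text/plain\r\n")
    print("removed" if client and handle_request(client) else "nothing to do")
```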
<kirkland> mathiaz: cool
<kirkland> mathiaz: this is good stuff
<kirkland> mathiaz: its working well so far
 * kirkland is done burning USB sticks
<mathiaz> kirkland: yeah - it's a lot of pieces to be glued so far
<mathiaz> kirkland: and then you need to think about how you can automatically reboot a system
<mathiaz> kirkland: remotely - either via powercycling it
<mathiaz> kirkland: or wake-on-lan
<kirkland> mathiaz: we totally need a eucalyptus-auto-install package
<kirkland> mathiaz: or some such
<kirkland> mathiaz: that does everything i've done so far
<mathiaz> kirkland: well - we need a system-installation package with eucalyptus profiles
<kirkland> tftp, mounts, etc.
<mathiaz> kirkland: make sure the BIOS of the systems is set to always boot first from the network
<mathiaz> kirkland: you wanna control everything from the network
<mathiaz> kirkland: as you can use pxelinux to boot from a local harddrive
<kirkland> mathiaz: right
<mathiaz> kirkland: (chainloading IIRC)
<smoser> kirkland, so i got the answer for mac
<smoser> err, for network adapter type
<smoser> its e1000
<smoser> aubre got http://paste.ubuntu.com/352480/ for me.
<smoser> 2 things interesting
<aubre> :)
<smoser> a.) i think the ideal situation is that it is virtio ... i'm just guessing that that should be the fastest network interface for kvm
<aubre> I like how the mac starts with d0:0d
<mathiaz> smoser: right - virtio would be nice
<smoser> b.) i have the strongest case to get CONFIG_VIRTIO_NET=y and thus get us networking for kvm guests built into kernel rather than a module
<mathiaz> smoser: however it breaks compatibility with existing EC2 images
<smoser> (i dont think we're going to convince anyone to build in e1000 at this point)
<smoser> mathiaz, you're sure of that ?
<smoser> ec2 probably doesn't use an e1000 nic, probably a xen virtual one. i'd have to check though
<mathiaz> smoser: well - existing EC2 images don't necessarily support virtio net devices
<kirkland> smoser: yes, definitely virtio would be nice
<kirkland> smoser: i *think* we're finally to a point where most guests should support virtio
<mathiaz> smoser: I think there is the same problem with virtio block devices
<smoser> remember that the kernel is different for ec2 -> uec, so that's unavoidable.
<smoser> well, i think that virtio block was something else
<mathiaz> smoser: agreed.
<smoser> the last point i wanted to make was
<mathiaz> smoser: right - it was also a naming issue
<smoser> ./tools/gen_kvm_libvirt_xml was modified in November of this year to set e1000 as the device model
<smoser> previously it was not set
<mathiaz> smoser: virtio block devices show up as /dev/vda, whereas EC2 images expect sd*
<kirkland> mathiaz: i think we could reasonably hack that with symlinks
<smoser> http://paste.ubuntu.com/352486/
<smoser> mathiaz, you're right on that.
<mathiaz> kirkland: good point
<kirkland> mathiaz: smoser: i'm all for using virtio disk and network the default in UEC
<smoser> kirkland, yes, it could be "fixed" but you'd have to change the image (udev rules in it to do that)
<kirkland> mathiaz: smoser: our VMs would *smoke*
<smoser> so with changing to virtio for block device you'd have to change the AMI/EMI
<kirkland> mathiaz: okay, one more silly question ... ks=http://path/to/a/preseed file on the kernel command line...
<mathiaz> kirkland: smoser: right - ^^ that was the issue against virtio block devices
<kirkland> mathiaz: that should work right?
<smoser> what does a virtio network device show up as ?
<smoser> mathiaz, right, you are correct.
<kirkland> mathiaz: are preseed/kickstart files interchangeable in ubuntu?
<kirkland> smoser: just eth0
<smoser> yeah, so we might be able to get away with that
<mathiaz> smoser: that's the good point: eth*
<smoser> at very least it would be nice to allow someone to change that
<kirkland> smoser: thankfully, that's no different
<kirkland> 00:03.0 Ethernet controller: Qumranet, Inc. Unknown device 1000
<mathiaz> kirkland: hm - I don't think so
<kirkland> eth1      Link encap:Ethernet  HWaddr 00:16:36:32:4a:25
<kirkland> mathiaz: oh?
<kirkland> mathiaz: what's the kernel cmdline for a preseed file then?
<mathiaz> kirkland: hm - file=http://
<smoser> since it has recently changed (from whatever kvm default is) to e1000
<mathiaz> kirkland: ?
<smoser> we might be able to argue that it should have been changed to virtio
<mathiaz> kirkland: I think it's file=
<mathiaz> kirkland: https://help.ubuntu.com/9.10/installation-guide/i386/appendix-preseed.html
<kirkland> smoser: who changed it?
<smoser> see pastebin for log
<kirkland> mathiaz: cheers, mate
<smoser> http://paste.ubuntu.com/352486/
<smoser> oh, silly  me. root did :)
<mathiaz> kirkland:   preseed/url=http://host/path/to/preseed.cfg
<kirkland> smoser: nice
<kirkland> smoser: is that you, or eucalyptus?
<mathiaz> kirkland: ah - it's url= actually
<smoser> that log was from bazaar.launchpad.net/~eucalyptus-maintainers/eucalyptus/1.6/
<mathiaz> kirkland: file= will point the installer to a local file on the ISO
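Putting the thread together, a pxelinux.cfg entry for a hands-off install might look like this; the kernel/initrd paths and the preseed URL are placeholders, while auto=true and priority=critical are the standard d-i options for suppressing the early installer questions:

```
DEFAULT uec-install
LABEL uec-install
  KERNEL ubuntu-installer/amd64/linux
  APPEND initrd=ubuntu-installer/amd64/initrd.gz auto=true priority=critical preseed/url=http://my-server/uec-node.preseed
```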
<smoser> http://paste.ubuntu.com/352487/
<kirkland> smoser: those guys kill me with that
<smoser> well, you know it was someone who was root
<smoser> so that limits it
<mathiaz> smoser: hm - I can't think of any reason to *not* use virtio network devices by default on UEC
<smoser> unless they used euca-root-wrap :)
<kirkland> mathiaz: clients are the only limitation
<kirkland> mathiaz: the client must support virtio
<kirkland> mathiaz: which is true for Ubuntu >= hardy
<mathiaz> kirkland: right - and that's an issue at the kernel level
<kirkland> mathiaz: i don't know what that threshold is for debian, fedora, centos, etc.
<mathiaz> kirkland: like smoser mentioned the EC2 kernel != UEC kernel
<kirkland> mathiaz: but like i said, i think it's a safe assumption in 2010
<garymc> Hi anyone know how i make my server work through a static ip address?
<garymc> I bought 5 static ip addresses and im assigning an ip to each server
<mathiaz> kirkland: since we provide official UEC kernels, we can say that they will support virtio
<kirkland> mathiaz: sure
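For reference, the change being argued about is a one-attribute edit in the libvirt domain XML that gen_kvm_libvirt_xml emits (the bridge name here is a placeholder):

```
<!-- current: emulated Intel NIC, works in virtually any guest -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='e1000'/>
</interface>

<!-- proposed: paravirtual NIC; the guest kernel must have virtio_net -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```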
<garymc> anyone?
<mathiaz> garymc: "man interfaces" outlines how to configure a static ip address
<garymc> where is that? im a newbie
<mathiaz> garymc: http://doc.ubuntu.com/ubuntu/serverguide/C/network-configuration.html
<garymc> mathiaz i think maybe this is something different
<garymc> My Internet provider has given me 5 ip addresses on the one broadband connection. Now before this I could access the server fine, and i would have to tunnel to the other servers
<garymc> but now I assign a static ip to each server the servers are not connecting to the router. My phone company say i have to get my server to either recognise this or make sure they have dhcp enabled which i think they do?
<mathiaz> garymc: you need to connect all of your servers directly to the router
<mathiaz> garymc: and set them up with their respective ip address
<garymc> they are connected directly to the router
<guntbert> garymc: and the router must be configured appropriately
<garymc> yes it is supposed to be, would i need to set anything else?
<garymc> like in the server?
<mathiaz> garymc: you need to make sure that the server is configured with the static ip address
<mathiaz> garymc: if you run the command: ifconfig
<mathiaz> garymc: it will give you the list of configured network interfaces
<garymc> ok, im working remotely
<mathiaz> garymc: and what IP address they're listening to
<garymc> eth0 and eth1
<garymc> 192.168.1.29
<garymc> the router is handing that out
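For comparison, a static stanza per the serverguide page mathiaz linked would look like this in /etc/network/interfaces; every address below is a placeholder for one of the five statics from the ISP:

```
auto eth0
iface eth0 inet static
    address 203.0.113.10
    netmask 255.255.255.248
    gateway 203.0.113.9
```

After editing, `sudo ifdown eth0 && sudo ifup eth0` applies the change.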
<nod> hi - i asked a question earlier then ran off to lunch about 5 mins later.
<nod> i see TeTeT responded but is gone now
<nod> hi - i've just done a vanilla install of 9.10 cloud with 1 cluster head and one node.  my store is empty of images, yet the docs seem to say i should have some default images there.
<nod> i don't mind bundling my own (and intend to) but I'm just curious if I've not missed something and therefore my installation isn't complete.
<nod> there's currently nothing listed in the image store
<mathiaz> nod: is the cluster head allowed to reach the internet?
<nod> mathiaz: ya
<mathiaz> nod: you should check if the image-store-proxy is running on the system
<mathiaz> nod: and look into its log files: /var/log/image-store-proxy/
<nod> excellent.  will do
<nod> thx for the pointer
<nod> it says it's started up and listening on port 52780
<nod> no errors.  and - it's currently running now
<nod> whoa...  bizarre!  i just restarted it and _now_ there's images in the store :|
<blackxored> hello, anyone knows about a way to reset a user's password in a AD domain through the shell?
<garymc> ok i had to reset the server (or ifdown then ifup) for it to sort its ip address
<garymc> anyone know the command to change my root password?
<neonfreon> passwd
<guntbert> !root | garymc
<ubottu> garymc: Do not try to guess the root password, that is impossible. Instead, realise the truth... there is no root password. Then you will see that it is 'sudo' that grants you access and not the root password. Look at https://help.ubuntu.com/community/RootSudo
<garymc> what?
<garymc> I changed it before so i could give someone access i just want to change it back
<neonfreon> passwd -l
<garymc> l
<garymc> ?
<garymc> ok
<garymc> no still dont get it
<garymc> so dont i do something like, passwd change letmein
<neonfreon> no
<neonfreon> you want no passwd to work
<neonfreon> so that sudo is the only way
<neonfreon> read the man page on passwd
<neonfreon> see what the -l command does
<guntbert> garymc: you *should* disable the root password again (like neonfreon said)
<ChrisRut> I am running Ubuntu Server 8.04 (hardy), and when I login remotely i can't see the colors when using ls, however after I type "bash" into the shell I see colors when using ls, I've already checked my .bashrc and all the colors are enabled by default (colors=auto), and I've checked "echo $SHELL" and "which bash", and they both report /bin/bash
<ChrisRut> however I don't see colors until I enter "bash" into the shell
<ChrisRut> why might this be?
<soren> mathiaz: You know, a team ppa for those auto builds would be good. The first package just failed, and I just realised that as long as I'm the only one getting those e-mails, I'm also the poor sod who has to deal with it :)
<mathiaz> soren: :)
<mathiaz> soren: that being said, having a team doesn't mean that the build failures will automatically be dealt with by *others* ;)
<soren> mathiaz: No, but then we can all join in on procrastinating. It's a joint effort. :)
<Doonz> Hi guys, What im trying to do is set up passwordless transfers using rsync between my two servers. each server has the same user on it with the same password. this will be the user calling for the rsync transfer on server2. The user we will call user1. from server 1 user1 can initiate a ssh session with server2 without being prompted to enter a password. Now from server2 user1 cannot initiate a ssh session with server1 without entering a password
<ChrisRut> Doonz,
<ChrisRut> you need to setup password-less authorization keys
<Doonz> Chris wich i did. i did it from server 1 to server 2 first
<Doonz> then redid the steps from server2 to server1
<ChrisRut> Doonz: are both servers using the same "user"
<Doonz> yes with same passwrd
<Doonz> server 1 to server2 works
<Doonz> server2 to server1 doesnt
<ChrisRut> I think you need to generate a new key for server2 and then copy that over to server1, you can't use the same key on both (could be wrong though)
<Doonz> yeah ChrisRut thats what i did
<Doonz> :/
<ChrisRut> Doonz: hmm, not sure then, what (if any) error messages are you getting? can you try using verbose to get more info?
<Doonz> ~/.ssh$ ssh -o PreferredAuthentications=publickey private.com
<Doonz> Permission denied (publickey,password)
<Doonz> thats server 2 to server 1
<Doonz> when i run that command on server1 to server2 it logs me in
<kisielk> Doonz: check the permissions on your files in .ssh on server1
<Doonz> identical on each server
<ChrisRut> Doonz: can you ssh from 2 to 1 without using a key (just password)?
<kisielk> is the key in your authorized_keys?
<ivoks> nice
<ivoks> i've tested pacemaker with ip and service failover and it works great
<ivoks> easy to set up, easy to configure and runs quite nice
<ivoks> when i think of it, i don't know how to set up the same thing with RHCS :)
<jiboumans> ivoks: that's pretty good news :)
<ivoks> yeah...
<ivoks> i need to do more testing for final conclusion
<ivoks> but, at the moment, it looks awesome
<jiboumans> ivoks: glad to hear it. keep us appraised of progress, we're all quite interested
<ivoks> sure
<dug_> I upgraded my server to karmic, works fine. Removed the 2nd hard drive tho (empty), and now it won't boot, just a flashing cursor. Holding shift doesn't load grub, I checked https://help.ubuntu.com/community/Grub2
<dug_> I ran grub-install from an 8.10 cd, no change.  I'm burning a 9.10 cd now, will that install grub2?
<dug_> oh i see the instructions here now at the bottom of the page https://help.ubuntu.com/community/Grub2#Reinstalling%20from%20LiveCD
<Doonz> fixed it: home dir was chmod'd to 777
 * kirkland hugs mathiaz
<kirkland> my pxe UEC install setup totally kicks ass now
#ubuntu-server 2010-01-07
<dim3000> how do I start openbox from terminal?
<dim3000> or any other de for that matter
<tonyyarusso> Would anyone be able to help me set up ejabberd to have users authenticated against system accounts through PAM?
<nvme> does the server version come with any gui ?
<sub> not by default, but if you were so inclined you could install one
<tonyyarusso> nvme: not by default.  You likely want to look into ebox.
<nvme> how easy is it to put something like gdm on it ?
<tonyyarusso> Very, but you have to ask yourself why you would want to.
<nvme> i dont want to, my boss does so the noobies can do stuff too :P
<tonyyarusso> Installing GDM won't make it any easier for people to administer the server.  It will only make it easier to break it.
<nvme> w.e
<nvme> ill just wing it
<uvirtbot`> New bug: #503768 in clamav (main) "package clamav-freshclam 0.95.3+dfsg-1ubuntu0.09.10 failed to install/upgrade: subprocess installed post-installation script returned error exit status 2" [Undecided,Incomplete] https://launchpad.net/bugs/503768
<kees> soren: how about gcc, glibc for your test-suite tests?
<kees> soren: and then, everything listed in lp:~ubuntu-bugcontrol/qa-regression-testing/master/  build_testing/
<kees> extensive instructions on how to modify various packages to get their internal testsuites to run.
<kaushal> hi
<kaushal> any gui application for viewing postfix maillog file ?
<uvirtbot`> New bug: #504157 in eucalyptus (main) "Lucid UEC installer: CC install is not preseeded when separated from CLC" [High,Confirmed] https://launchpad.net/bugs/504157
<jmarsden> kaushal: Any text editor would work.  Emacs, gedit, whatever you use for editing text files is fine.
 * soren has *again* just had a bird in his office
<soren> An actual bird. The sort that flies.
<Jeeves_> lol
<Jeeves_> soren: Do you have your windows opened?
<soren> Heck no.
<soren> It's freezing outside.
<_ruben> haha
<soren> (That's my current excuse. during the summer, I use other excuses to never open my window)
<soren> It's a hassle. Eventually I have to close the darn thing again.
<Jeeves_> soren: So how did the bird get in? :)
<soren> Jeeves_: Beats me. My house must be leaking.
<soren> My wife does open windows, though. Maybe it flew in that way and somehow thought, "hey, I'll go against all my instincts, and fly down to the darkest corner of the basement and scare the shit out of someone."
<Jeeves_> :D
<soren> Worst part: I have no clue where it is now. It's still in here somewhere, waiting for me to be really concentrated, and then it'll come back out and freak me out.
<ewook> lol
<Jeeves_> Has everybody upgraded powerdns recursor, btw?
<Jeeves_> Hmm, Ubuntu hasn't upgraded pdns-recursor yet
<Jeeves_> Hmm, it has been updated
<Jeeves_> An hour ago :)
<jumbers> What's the best way to diagnose random hard locks? This is occurring after a kernel upgrade to 2.6.31-17
<gzur> GHello		$sql1 = "UPDATE lmi_ornefni_flakar SET nafn = $1, nafnberi = $2, dags_leidr = '" . $data_date_nuna . "', flokkur = $3, notandi = $4, heimildir = $5, tvinefni = $6, eytt = $7, eytt_dags = $8, eytt_notandi = $9, eytt_skyring=$10,tilvist=$11 WHERE nameid = $12";
<gzur> 		$sql2 = "UPDATE lmi_ornefni_linur SET nafn = $1, nafnberi = $2, dags_leidr = '" . $data_date_nuna . "', flokkur = $3, notandi = $4, heimildir = $5, tvinefni = $6, eytt = $7, eytt_dags = $8, eytt_notandi = $9, eytt_skyring=$10,tilvist=$11 WHERE nameid = $12";
<gzur> 		$sql3 = "UPDATE lmi_ornefni_punktar SET nafn = $1, nafnberi = $2, dags_leidr = '" . $data_date_nuna . "', flokkur = $3, notandi = $4, heimildir = $5, tvinefni = $6, eytt = $7, eytt_dags = $8, eytt_notandi = $9, eytt_skyring=$10,tilvist=$11 WHERE nameid = $12";
<gzur> oops
<gzur> sorry about that
<Jeeves_> :)
<gzur> Some prepared statements for your perusal.
<gzur> I just set up ubuntu server and told it to install tomcat6 when I was setting it up.
<gzur> but I can't figure out how to start/stop the tomcat server...
<gzur> it starts up fine on boot
<gzur> but atm if I want to restart the tomcat server, I have to reboot the box - which is somewhat less than optimal.
<gzur> the shell scripts under /bin don't seem to do anything
<Jeeves_> gzur: /etc/init.d/tomcat<tabtab> (stop|restart|start)
<gzur> I'm thinking there's something very obvious that I'm missing here, and was wondering whether anybody could point me in the right direction...
<gzur> thanks
<gzur> init.d
 * gzur goes googling what init.d is about.
<gzur> thanks a boatload
<garymc> hey anyone know how to use ftp. I set it up so i could access my ftp web server from the internet. Now I had 5 static ip addresses assigned and i assigned one to my LTSP server now i cant ftp to it remotely. Do i have to change anything in ftp and how? what files do i alter and where are they. PLEASE HELP!!
<ycy> hi there
<ycy> I have a problem
<ycy> i deleted all the files in /etc/apt/apt.conf.d/
<ycy> how can I regenerate them?
<jiboumans> ycy: assuming you dont have a backup of your /etc or aren't using etckeeper (suggest you do from now on), try using dpkg -S apt.conf.d
<jiboumans> that will show you all packages that have installed a file in there
<ycy> ok
<ycy> then dpkg reinstall all of those pkg?
<jiboumans> *nods*
<alvin> Will that work if his apt.conf.d directory is empty?
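jiboumans' recovery can be sketched as below. alvin's worry is justified: a plain `--reinstall` does not recreate conffiles that were deleted, so you also need dpkg's `--force-confmiss`. The `owners` helper name is made up; only the text mangling is exercised here, and the apt call stays a comment:

```shell
# Turn 'dpkg -S' output lines ("pkgA, pkgB: /path") into unique package names.
owners() {
    cut -d: -f1 | tr ',' '\n' | tr -d ' ' | sort -u
}

# On the broken system you would then run (not executed in this demo):
#   dpkg -S /etc/apt/apt.conf.d | owners | xargs sudo apt-get install \
#       --reinstall -o Dpkg::Options::=--force-confmiss
printf 'apt: /etc/apt/apt.conf.d/01autoremove\napt, aptitude: /etc/apt/apt.conf.d/00sample\n' | owners
# -> apt
#    aptitude
```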
<uvirtbot`> New bug: #504215 in libvirt (main) "virsh fails to start kvm domain with bridged nic" [Undecided,New] https://launchpad.net/bugs/504215
<acalvo> hi
<acalvo> and happy new year
<acalvo> :)
<acalvo> what is the best way to sync between two servers?
<cemc> acalvo: sync what? is there a master/slave relation? rsync?
<acalvo> yes, there is a master/slave relation
<acalvo> I want to set up a backup server, so in case the master goes down, the slave takes control of everything (ldap, samba, and so on)
<cemc> like failover or you start the backup server manually?
<acalvo> whatever is easier
<acalvo> I don't mind starting it manually
<cemc> I would probably do it with rsync, at night when nobody's using it (if that's the case here), and have all the configurations in place in case I have to switch. but that's just me, there are probably much better ways to do it
<cemc> and more complicated ways too :)
<acalvo> that sounds pretty useful
<acalvo> just rsync?
<cemc> the problem with that setup is, after you get the master server back up, you have to sync back the changes made on the backup. it really depends on your situation, maybe you should google it first ;)
<acalvo> cemc: yes, could be really a problem
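cemc's scheduled-rsync setup can be sketched like this; hostnames and paths are placeholders, and the runnable part below just mirrors one scratch directory into another to show the flags' semantics:

```shell
# On the backup server's crontab you might schedule a nightly pull, e.g.:
#   0 3 * * * rsync -az --delete user1@master:/srv/data/ /srv/data/
# -a preserves permissions/ownership/times, -z compresses over the wire,
# --delete propagates removals so the mirror stays exact.

# Local illustration of the same semantics:
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/f1"
rsync -a --delete "$src/" "$dst/"
ls "$dst"   # -> f1
```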
<zul> morning
<sommer> yo
<acalvo> sommer: tu
<acalvo> what is the best way to sync two servers?
<uvirtbot`> New bug: #470179 in php5 (main) "php5 crashed with SIGSEGV in start_thread()" [Medium,New] https://launchpad.net/bugs/470179
<_ruben> acalvo: define "sync"
<acalvo> _ruben: I've a production server, and last month we had a power failure which led to a really big data crash, and I've wasted like 5h trying to get it working again (we had backups, but only 1 a day)
<acalvo> so I'm trying to focus in having two servers: a master/slave relation. So, if one fails, start the other without any data loss
<acalvo> I don't know if I can run a rsync every hour (or so), or if there is any other solution more appropriate for this goal
<_ruben> if you want realtime sync of data use something like drbd
<acalvo> mmm, clustering then
<acalvo> but I've already the server working
<_ruben> drbd can be activated with existing data .. must admit, never done it myself (drbd is still on my todo/wish list)
<acalvo> I'll give it a look, but I don't know if it fits in the current environment
<acalvo> is there any other solution?
<_ruben> its the only realtime sync solution im aware of
<_ruben> closest alternative would be a scheduled rsync as you stated yourself
<soren> glusterfs
<acalvo> thanks you both, I'll give it a try for both solutions
<acalvo> but seems that your point is right: clustering
<acalvo> no matter how, it's the best way to accomplish real-time data sync
<uvirtbot`> New bug: #504262 in eucalyptus (main) "Lucid UEC installer: NC doesn't publish its existence" [High,In progress] https://launchpad.net/bugs/504262
<Ethos> Anyone know a way of installing vmware tools into ubuntu server 8.10?
<Ethos> It seems impossible to find a guide that just works
<ttx> kirkland: stumbled on bug 504262 now ... I reverted the eucalyptus-nc-publication job to start on "started eucalyptus-nc", pending a more elegant solution with upstart. At least it works.
<uvirtbot`> Launchpad bug 504262 in eucalyptus "Lucid UEC installer: NC doesn't publish its existence" [High,In progress] https://launchpad.net/bugs/504262
<kirkland> ttx: did you notice i made libvirt upstart savvy?
<ttx> kirkland: that's good :)
<kirkland> ttx: and set eucalyptus-nc to start on started libvirt?
<kirkland> ttx: i'm hoping that will take care of a few -nc problems
<kirkland> ttx: not sure it'll solve this though, but i'd be interested in testing
<ttx> kirkland: no it won't solve this one
<kirkland> ttx: ah, this is the ssh respawn one
<ttx> well, an avahi-daemon respawn one
<ttx> a less-ugly solution would be to use respawn and some way to make it not respawn very fast
<ttx> since apparently upstart won't really check state without an event to force it to
<ttx> kirkland: you know of a way to make upstart respawn slowly ?
<kirkland> ttx: heh :-)
<kirkland> ttx: well, i'm learning more about upstart each night
<ttx> I hit the respawn limit even on an avahi-daemon restart (which is quite fast)
<kirkland> ttx: that i'm not sure about, but I'll ask slangasek
<ttx> we might have to corner the upstart experts at the sprint
<kirkland> ttx: agreed, would be nice to get a comprehensive review of eucalyptus*upstart and their dependencies
<kirkland> ttx: it would be nice to get that on the calendar
<kirkland> ttx: as a 1.5 hour meeting or something
<ttx> right, let me see if there is a place for that yet
<ttx> not yet
<Ethos> Anyone know how to install vmware-tools into server 8.10?
<Ethos> I think 99% of the guides on the internet are written by people who simply make them up
 * ttx tries a 5-component separated install now
<kirkland> ttx: as for the "solution", starting on started eucalyptus-nc, that certainly seems reasonable to me
<ttx> kirkland: well, nothing ensures that ssh / avahi-daemon are available then
<ttx> kirkland: even if they always are in my experience
<ttx> and as soon as you add an "and" to that, trouble starts
<kirkland> ttx: then start on started ssh, avahi-daemon, and eucalyptus-nc ?
<Ethos> k
<ttx> kirkland: that would still fail if avahi-daemon is restarted
<Ethos> helpful bunch
<ttx> avahi-publish sigterms when avahi-daemon is stopped
<gzur_> I've got an ubuntu-server running X. I can connect to it from my windows machine over SSH using putty fine.
<gzur_> What I'm wondering is whether I can somehow connect to the X server from my windows machine.
<gzur_> Could you guys at least point me in the right direction.
<ttx> kirkland: but I agree it's slightly better
<ttx> and will apply it.
<ttx> done
<ttx> "<gzur_> I've got an ubuntu-server running X" then it's an ubuntu desktop, no ?
<ttx> Ethos: you might have more success in #ubuntu-virt
<jiboumans> gzur_: you'll need x-forwarding enabled and an x capable client. alternately you can use vnc or somesuch. As ttx pointed out, the desktop folks are more likely to know specifics
<gzur_> ttx: well - sure.. it's actually ubuntu-server onto which I installed metacity
<Ethos> ttx: that's great thanks :)
<gzur_> awright
<gzur_> thanks
<ttx> Ethos: not everyone (especially here) is using/needing vmware
<Ethos> Yeah, I thought as much
<jiboumans> gzur_: if you want access to the 'remote desktop', vnc or some such is your best bet and should be easy to set up
<gzur_> ok
<Jeeves_> You cannot file bugs on PPA-packages, it seems?
<ScottK> Jeeves_: No.  They aren't in Ubuntu.  Just mail the person that runs the PPA.
<Jeeves_> ScottK: I'm the person running the PPA!
 * Jeeves_ was looking for cheap karma :)
<ScottK> Ah.
<Jeeves_> Would be a nice feature though
<ScottK> Jeeves_: What I did for the Kubuntu PPA, ~kubuntu-ppa, was make a LP project called kubuntu-ppa and have it have a bug tracker.
<ScottK> So it's two separate entities (since PPAs don't have bug trackers), but happens to have the same name.
<Jeeves_> ScottK: bitcron isn't really worth that much effort :)
<ScottK> OK.
<acalvo> how can I force a system to use an ethernet as default?
<Jeeves_> acalvo: huh?
<Jeeves_> What other options do you have? :)
<acalvo> I've two NICs, one has internet access while the other doesn't
<acalvo> and checking the route table it seems that it tries only the one that does not have internet access
<acalvo> is there any way to add more weight to the one that has internet access?
<Jeeves_> acalvo: it will try to use the default route
<acalvo> 10.0.0.0        *               255.255.255.0   U     0      0        0 eth0
<acalvo> 192.168.0.0     *               255.255.255.0   U     0      0        0 eth1
<acalvo> default         192.168.0.1     0.0.0.0         UG    100    0        0 eth1
<acalvo> default         router.esci.es  0.0.0.0         UG    100    0        0 eth0
<alvin> acalvo: You can change the name of your nics in this file: /etc/udev/rules.d/70-persistent-net.rules
<Jeeves_> alvin: That something else
<Jeeves_> acalvo: It will try both these default routes on a round-robin'ish base
<Jeeves_> If you don't want the default route on one side, you should stop sending it :P)
<Jeeves_> (or setting it)
<acalvo> mm no
<acalvo> it's the backup network
<acalvo> just to communicate between servers
<acalvo> so it is using a dedicated line
<Jeeves_> Is everything reachable via both paths?
<acalvo> yes
<Jeeves_> Ah, then you can set a metric
<acalvo> yes!
<acalvo> good idea
<Jeeves_> You can set that in the interfaces file
<acalvo> mmm how?
<acalvo> ok, sorry
<acalvo> googled it
<Jeeves_> man interfaces
<acalvo> yes, done that
<acalvo> thank you
<acalvo> it worked! thanks
<Jeeves_> yw
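For reference, the metric Jeeves_ mentions goes in /etc/network/interfaces. A minimal sketch, assuming acalvo's layout from the route table above; the host addresses are made up, and whether ifupdown honors `metric` for a given method has varied between releases, so the `up ip route` fallback is included:

```
# /etc/network/interfaces sketch -- host addresses are assumptions.
# A higher metric makes the backup network's default route less preferred.
auto eth0
iface eth0 inet static
    address 10.0.0.10        # assumed
    netmask 255.255.255.0
    gateway 10.0.0.1         # assumed; the internet-facing side
    metric 10

auto eth1
iface eth1 inet static
    address 192.168.0.10     # assumed
    netmask 255.255.255.0
    gateway 192.168.0.1
    metric 100               # backup network: used only as a fallback
    # if the metric option is not honored, set it explicitly:
    # up ip route replace default via 192.168.0.1 dev eth1 metric 100
```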
<ttx> kirkland: I just committed a fix for bug 504309 -- when you'll build the new release, please doublecheck that it still installs :)
<uvirtbot`> Launchpad bug 504309 in eucalyptus "Lucid UEC installer: walrus separate install ends up with /var/lib/eucalyptus/.ssh unreadable for eucalyptus" [High,Fix committed] https://launchpad.net/bugs/504309
<ttx> kirkland: as usual, please do the release dance before you go to bed, even if you didn't commit any fixes
<mathiaz> kirkland: I'm going to close bug 487282 as you mentioned it's working ok now
<uvirtbot`> Launchpad bug 487282 in eucalyptus "create a packaging branch" [Undecided,Confirmed] https://launchpad.net/bugs/487282
<ttx> kirkland: fwiw, I haven't hit the "let's restart euca until it works and kill -9 it sometimes" recently
<ttx> I've only been testing the weird topologies, if that's a factor
<kirkland> ttx: that's good :-)
<kirkland> mathiaz: yeah, agreed
<nvme> im trying to install this off a USB drive (unetbootin), but its bugging me with the no cdrom detected prompt, how do i fix that ?
<hackeron> hey, I see "2010-01-04 06:41:25 GMT FATAL:  terminating connection due to administrator command" - in my postgres log, which kills my apps connected to postgres. Any ideas where that command is coming from? - the server is in a rack 500 miles away and no one was connected to it
<guntbert> hackeron: have a look at the syslog?
<ttx> kirkland: if you have some time (?) it would be good to fix bug 497831 -- it's preventing separated component install of SC from autoregistering
<uvirtbot`> Launchpad bug 497831 in eucalyptus "Eucalyptus-SC doesn't ask the cluster-name question" [High,Triaged] https://launchpad.net/bugs/497831
<kirkland> ttx: working on meeting minutes now
<ttx> kirkland: happy man :)
<kirkland> ttx: 30 minutes into the 2 hour marathon
<hackeron> guntbert: nothing there to do with postgres
<guntbert> hackeron: I'm not familiar with postgresql - but I'd guess every kind of "administrator command" would leave its traces in syslog (around that time...)
<hackeron> guntbert: it doesn't - nothing around that time in syslog
<guntbert> hackeron: sorry then
<manish> Hello there
<Doonz> hey guys is there a setting somewhere in ubuntu to throttle transfers?
<jpds> Doonz: Don't think so, you can use something like trickle though.
<Doonz> jpds: thanx, no it's just a dedicated server i have, i can't get it to go faster than 150kb/s even though it's 100mbit and the remote servers are 20mbit
<bogeyd6>  If you have a mounted windows share and you are writing files to the mounted location and somehow the mount fails, how do you remount and force the system to sync those files? I did a mount -a and it erased /mnt/windowsshare but I would like to know for the future
<bogeyd6> Doonz, iptables using hashlimit to throttle to certain ports
<bogeyd6> doonz you can also throttle everything or whole subnets
<manish> Hello, I have a LAN with Win2K servers as DomainControllers, want to change over to Ubuntu 9.10 can someone help. I will be provided with a machine tomorrow.
<manish> can some one help me here?
<Pici> I don't know what the question is.
<manish> Pici, I have a LAN with 2 Win2K servers which act as a Domain Controllers.
<GNUtoo|oeee> hi, I've not an ubuntu-server but a jaunty-based distro and I'd like to permanently assign an ip to eth0....how do I do that....there is iface eth0 + the ip and netmask inet static in /etc/network/interfaces
<GNUtoo|oeee> oops
<manish> Those keep failing and I need to leave my work to keep booting them as often they crash
<guntbert> GNUtoo|oeee: "jaunty-based" meaning what?
<manish> I am fed up with them now
<GNUtoo|oeee> trisquel
<Ash1> how do i disable the usb single port...
<GNUtoo|oeee> it's exactly like jaunty without the non-free repos
<manish> So I suggested to use the Ubuntu server.
<Ash1> please send me exact configuration
<guntbert> GNUtoo|oeee: look at https://help.ubuntu.com/9.10/serverguide/C/network-configuration.html
<GNUtoo|oeee> thanks
<GNUtoo|oeee> guntbert, doesn't work...that's the static config I have...I'll explain
<GNUtoo|oeee> I've this embedded thing that I'd like to boot with an nfs rootfs
<GNUtoo|oeee> so there is a bootloader that configures the ip
<GNUtoo|oeee> then the kernel reconfigure the interface
<GNUtoo|oeee> with dhcp
<GNUtoo|oeee> and then tries to boot on nfs
<GNUtoo|oeee> but for some reason the server card(not the embedded thing) is ifconfig eth0 downed by something
<GNUtoo|oeee> even if nm-applet is killed etc...
<GNUtoo|oeee> the link is a direct link
<GNUtoo|oeee> maybe I should reboot
<GNUtoo|oeee> I'll reboot the nfs server
<GNUtoo|oeee> so what should I try? a switch?
<GNUtoo|oeee> so there would be carrier sensing
<GNUtoo|oeee> s/sensing//
<bogeyd6> !samba @ manish
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<bogeyd6> !samba | manish
<ubottu> manish: Samba is the way to cooperate with Windows environments. Links with more info: https://wiki.ubuntu.com/MountWindowsSharesPermanently and https://help.ubuntu.com/9.10/serverguide/C/windows-networking.html - Samba can be administered via the web with SWAT.
<bogeyd6> manish, perhaps you could setup a script to just reboot the server each night one after the other
<cemc> I have an Intel Pro 1000 nic, which does wake up the system when I send magic packet with etherwake, but only if it was shut down in Windows. when I shut it down in ubuntu, it doesn't wake up. any ideas?
<GNUtoo|oeee> mmm after rebooting it the ip seem stable
<GNUtoo|oeee> only error -13 for NFS
<GNUtoo|oeee> but that prevents it from booting
<Ash1> how do i disable the usb single port
<GNUtoo> anyone?
<GNUtoo> I bet error -13 is permission denied
<GNUtoo> but it works while booting from sd and mounting it
<bogeyd6> Doonz, iptables using hashlimit to throttle to certain ports
<Guest88015> hey, i keep seeing this eerror in my cc.log for UEC in MANAGED-NOVLAN mode, priv interface 'eth0' must be a bridge, tunneling disabled
<rj175> I also get failed to attach tunnels for vlan 10 during maintainNetworkState()
<Ash1> how do i disable the usb single port
<uvirtbot`> New bug: #504407 in dhcp3 (main) "Option "metric" in /etc/network/interfaces broken" [Undecided,New] https://launchpad.net/bugs/504407
<zul> well that sucked
<occy> Checking `bindshell'...                                     INFECTED (PORTS:  6667)   Should I be worried?  Got this from chkrootkit   we have an internal irc server but it's not accessible from the outside world
<occy> Sadly we've been having problems with conficker on machines.
<occy> Now I'm getting:  Checking `bindshell'...                                     INFECTED (PORTS:  1524 6667 31337)
<KillMeNow> sounds like you gots rooted
<scresawn> Hi. I'm having a problem trying to get DRBD working in a cluster environment.  I've tried the tests here: https://wiki.ubuntu.com/Testing/Cases/UbuntuServer-drbd without much luck.  (corosync won't start.) Can anyone help me or at least point me to the most appropriate resource(s)?
<unit3> Hey, can someone tell me what the separate mysql-server-core-5.1 package is in karmic?
#ubuntu-server 2010-01-08
<harrywood> I have postgres 8.4 running, but I also have 8.3 installed.   How can I shut down 8.4 and start up 8.3 ?
<harrywood> ah ok. got it. sudo service postgresql-8.3 start
<mathiaz> kirkland: howdy! have run into this bug 504530?
<uvirtbot`> Launchpad bug 504530 in euca2ools "euca-register fails to register an image: register_image() takes at least 2 non-keyword arguments (1 given)" [Undecided,New] https://launchpad.net/bugs/504530
<mathiaz> kirkland: this is with the latest version of euca2ools in lucid
<ruben23> hi, is there any chance i can increase my /var directory on my ubuntu server? during installation what i selected on partition is the default..
<ruben23> this is my df -h--->http://pastebin.com/m7c3cf665
<ruben23> on my volume group i have this--->http://pastebin.com/m199f7cc0
<ruben23> and my lv display--->http://pastebin.com/m7c83e8bd
<ruben23> anyone have idea..?
<ivoks> so, no free lvm space?
<ruben23> ivoks: how do i check it..?
<ivoks> you did vg and lvdisplay
<ivoks> how about pvdisplay
<ruben23>  PV Size               297.85 GB / not usable 3.64 MB
<ivoks> why would you increase /var?
<ivoks> it's not separate partition
<ivoks> atm, its size is 293GB :)
<ruben23> coz im having recordings in that directory and it's getting full.
<ivoks> ?
<ruben23> im wrong
<ivoks> your / (which contains /var) is 4% in use
<ruben23> what i mean is, does the /var directory get its size from the main partition..?
<ivoks> yes
<ruben23> so it's getting the size of my 295.28 GB
<ivoks> *nod*
<ruben23> ow ok....i guess i dont need to increase it
<ivoks> you can't
<ivoks> you allocated all disk space to /
<ruben23> ok thanks..
<ivoks>  /var isn't separate partition anyway
<ruben23> its part of the /root directory right..?
<ivoks> nope
<ivoks> its part of /
<ruben23> ow ok, which is 295 GB
<ivoks>  /root is part of /
<ruben23> got it...more clear now
<ivoks> new to ubuntu?
<ivoks> http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/c23.html
<ruben23> yes, but i've been studying it.. it's my server in production
<ivoks> http://www.pathname.com/fhs/pub/fhs-2.3.html
<ivoks> ^^^ good read
<uvirtbot`> ivoks: Error: "^^" is not a valid command.
<ivoks> 'night
<ruben23>  ivoks:one more thing where did you find that /var is only using 4%...?
<ruben23> ow ok goodnight
<ivoks> whole / uses only 4%
<ivoks> ok, i guess you are familiar with windows
<ivoks> imagine C = /
<ivoks> and C:\Windows = /var
<ruben23> ok thanks..yes clear now
<ivoks> notice that, unlike windows, /var (or any other path) can be directory or partition
<ivoks> so / is one partition, but /var can be another - this is not the case with your setup
<ruben23> ok
<kirkland> mathiaz: hey
<kirkland> mathiaz: hrm
<kirkland> mathiaz: let's poke nekro
<airliasdesign> Is anyone here?
<airliasdesign> Is anyone here?
<xperia> hello to all. i have by incident deleted the mail log files in /var/log and now after restarting postfix the files are not created again. do i need to restart syslog to get the files back or do i need to create them itself?
<twb> *by accident
<twb> If the logs are created by syslog, you do not need to create them -- you might need to restart syslog, though.
<twb> If the logs are written directly by postfix, I don't know -- I would expect restarting postfix to be sufficient.
<xperia> twb: thanks for your reply ! have restarted postfix and the files are not recreated. will try now to restart syslog
<xperia> twb: after the restart of syslog the files were recreated. need now to test if the logs are really written to the files. have the strange root:root ownership for these files. normally it should be, if i am not wrong, syslog:adm
<twb> Here, /var/log/mail.{err,info,log} are root:adm
<twb> That's on 8.04 with traditional syslog (not rsyslog)
<xperia> twb: you are a genius. thank you a lot with helping me with this problem. the log files were recreated after the restart of syslog and postfix write to them with no problem :-)
<twb> It probably would have been sufficient to just pkill -HUP syslog
<twb> That is what logrotate will do each day, so this would have fixed itself tomorrow
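twb's point, that logrotate sends syslog a HUP each day so the missing files would have reappeared on their own, corresponds to a typical rotation stanza. A sketch in the style of an /etc/logrotate.d entry; paths, schedule, and the exact reload command are illustrative, not the distro's actual file:

```
# sketch of a logrotate stanza -- paths and schedule are illustrative
/var/log/mail.log /var/log/mail.err /var/log/mail.info {
    daily
    rotate 7
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        # tell syslog to reopen its files, recreating any that were deleted
        pkill -HUP syslogd 2>/dev/null || true
    endscript
}
```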
<xperia> did not think syslog is intelligent enough to do this. love intelligent self-repairing systems :-)
<twb> It's more a happy accident than "self healing"
<xperia> twb: i have a problem with receiving mails on my system. till now i have used in main.cf the aliases file for mapping the incoming mails to my user account; now i have decided to use the recipient_canonical file for this but it does not work.
<twb> xperia: that is beyond my knowledge.
<twb> xperia: try #postfix
<xperia> okay
<xperia> they are just heavily snobbish :-)
<twb> xperia: you mean they don't want to answer your questions?
<xperia> if you ask a too specific question mostly you will read an answer like "noob ubuntu user" :-) being an ubuntu user who wants a good bleeding edge system is not easy today.
<twb> Well, Ubuntu isn't about the bleeding edge.
<twb> If you want the very latest versions, you could try LFS, or SourceMage, or Gentoo.
<xperia> with bleeding edge i mean having a fully configured system that allows you to have a good mailserver for receiving and sending with multiple email addresses, a web and dns server with multiple domains, and other things that are beyond the default setup.
<xperia> if you have questions related to these tasks, getting an answer as an ubuntu user in other channels isn't that easy.
<twb> xperia: have you read the "Smart Questions HOWTO"?
<JanC> twb:  ☺
<twb> JanC: some context, please
<JanC> I also doubt the Postfix people care about what OS you use  ;)
<xperia> twb: the problem is that people aren't interested in helping when somebody asks a specific question. i asked this question 5 min ago in #postfix and till now nobody answered.
<xperia> do i need to have the aliases file in main.cf for mapping incoming mails to my user account or can i use only this line here for this purpose.
<xperia> recipient_canonical_maps = hash:/etc/postfix/recipient_canonical
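For the record, the two mechanisms xperia mentions do different things: aliases handles local delivery, while recipient_canonical rewrites envelope recipient addresses. A sketch of both; the example addresses and local user name are made up:

```
# /etc/postfix/main.cf (sketch)
alias_maps = hash:/etc/aliases
recipient_canonical_maps = hash:/etc/postfix/recipient_canonical

# /etc/postfix/recipient_canonical (sketch -- addresses are assumptions)
#   info@example.com    xperia@localhost

# hash: maps must be rebuilt after every edit, then postfix reloaded:
#   postmap /etc/postfix/recipient_canonical
#   newaliases        # rebuilds the /etc/aliases database
#   postfix reload
```

Forgetting the postmap step is a common reason a freshly edited canonical map "does not work".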
<twb> xperia: don't expect a response within five minutes.
<JanC> eh, 5 min is not exactly a long time
<twb> xperia: if the channel is slow, it might take HOURS to get a reply.
<xperia> yeah, could be. maybe the ubuntu-server irc channel is unique, as nearly all the time people are here with good knowledge and answer questions fast. if all channels were this way a lot of problems could already be solved and time saved
<twb> That's because #ubuntu-server isn't a specialist channel, and isn't full of newbies.
<twb> #emacs is similar.
<xperia> okay, have solved my postfix problems now. the bug was related to the file /etc/mailname
<xperia> if this file contains a different hostname than localhost, received mails will not be resolved on the machine.
<xperia> thanks for the help see you next time
<uvirtbot`> New bug: #504615 in eucalyptus (main) "add redirect to default index.html on CLC" [Wishlist,In progress] https://launchpad.net/bugs/504615
<Gumby> hi all, I am trying to fix a problem. apt seems to be broken and apt-get update keeps telling me that the files I am downloading are not bzip2 files
<Gumby> Get:5 http://security.ubuntu.com karmic-security/multiverse Translation-en_US [210B]
<Gumby> 98% [2 Translation-en_US bzip2 0] [Waiting for headers] [Connecting to bzip2: (stdin) is not a bzip2 file.
<Gumby> Ive tried these exact same repos on another box (ubuntu karmic desktop) and it works fine
<Gumby> anyone have any idea on what might be wrong?
<kirkland> ttx: hey
<ttx> kirkland: hey, still up ?
<kirkland> ttx: yeah, been busy
<kirkland> ttx: i just uploaded eucalyptus
<ttx> ah ok
<kirkland> so it probably didn't make the last cd iso build
<ttx> will ask for a manual trigger
<ttx> 2am at alpha2, 3am at alpha3...
<kirkland> ttx: heh
<ttx> kirkland: did you spend some time testing ?
<ttx> kirkland: aka "did it start an instance" ?
<ttx> kirkland: kernel team fixed the tun thing yesterday
<kirkland> ttx: i did some testing
<kirkland> ttx: i didn't get an instance running, though
<ttx> kirkland: any issue you want to pass on ?
<kirkland> ttx: might not have had the kernel
<ttx> kirkland: ok, then that's the subject of your tomorrow and of my today
<ttx> kirkland: good night :)
<kirkland> ttx: i'm finishing the minutes now
<ttx> kirkland: arh :)
<kirkland> https://wiki.ubuntu.com/MeetingLogs/Server/20100106
<ttx> ok
<kirkland> ttx: oh, one more ...
<kirkland> ttx: https://launchpad.net/bugs/461202
<uvirtbot`> Launchpad bug 461202 in eucalyptus "After purging and removing /var/lib/eucalyptus image store is out of sync" [Low,Fix released]
<kirkland> ttx: i couldn't reproduce that one
<kirkland> ttx: so i marked it fix released
<kirkland> ttx: it would be nice if you could confirm that
<kirkland> ttx: it's pretty easy
<ttx> kirkland: will try
<kirkland> ttx: i did test your changes
<kirkland> ttx: they did install, start up
<kirkland> ttx: but there are still some upstart issues
<ttx> kirkland: ok, will look with todays daily
<kirkland> ttx: now i'm calling it a night
<mario__> Hello!
<mario__> does anyone know what's going on here? http://pastebin.com/d3e621f6b
<sabgenton> I want to show the grub boot menu on startup; what file do i need to edit?
<xperia> hello to all. i have a problem building software with bitbake on my ubuntu server. after some 20 to 30 minutes i am getting this error message here
<xperia> running task 1 of 2542 (id: 15, .../recipes/shasum/shasum-native.bb, do setscene)
<xperia> running task 2 of 2542 (id: 75, .../recipes/coreutils/coreutils-native_7.2.bb,do setscene)
<xperia> Out of Memory: Kill Process 11335 (python)
<xperia> ERROR: Task 15 shasum-native.bb do setscene failed
<xperia> it looks like python uses nearly all the ram and somehow the kernel shuts it down in the middle of the build process
<xperia> people told me this is related to my operating system
<xperia> how can i fix this ?
<cemc> put more RAM in the computer? :-)
<cemc> obviously there's some memleak, or you don't have enough ram for that build process if it's eating up all your memory
<xperia> cemc: thanks for the suggestion. can't believe that ram is the problem, as the machine has 1GB RAM. have tried to build it on another machine with debian/ubuntu and i am getting the same error message
<cemc> 1GB is not a lot nowadays
<cemc> did you try building it on a machine with more RAM ?
<cemc> like 2GB, 4GB ?
<xperia> hmmm, not yet. will ask other people who have successfully built the software how much ram they had.
<cemc> xperia: is that some opensource software, maybe I can download it and try building it if it's not too complicated, have 4gb ram in here
<cemc> also on Ubuntu Karmic
<xperia> cemc yeah it is open source software. it is called open embedded and is used to run linux on pocket pc phones like this here
<xperia> http://www.youtube.com/watch?v=U9vg2TU0wew
<xperia> give me just a moment to post the few instructions
<xperia> cemc: http://wiki.openembedded.net/index.php/Getting_Started
<xperia> okay, have asked the other people now too and they told me that they build this on machines with about 3 to 4 GB RAM
<xperia> so i need a new machine in this case, hmm, bad
<xperia> cemc: you don't need to do this anymore. you would need a lot of time just to clone the repo with git. setting up the environment will take 2 to 3 hours.
<cemc> xperia: is there any readme file with requirements for building?
<cemc> where it says maybe ummm... RAM: 2+GB ? :)
<xperia> yes, i searched for that too but didn't find it yet. it doesn't even check whether the build system has enough resources for doing this. can't answer this question as i am searching for the answer myself :-)
<sabgenton_> I can't find menu.1st, what do i use to edit the grub menu
<sabgenton_> I can't find menu.1st, what do i use to edit the grub menu
<xperia> need some gamer system with quad core cpu
<guntbert> sabgenton_: what ubuntu version?
<sabgenton_> server kamic
<sabgenton_> karmic
<guntbert> !grub2 | sabgenton_
<ubottu> sabgenton_: GRUB2 is the default Ubuntu boot manager in Karmic. For more information and troubleshooting on GRUB2 please refer to https://wiki.ubuntu.com/Grub2
<sabgenton_> oh yeah its version 2 now
<guntbert> indeed :)
<sabgenton_> I did a distro-upgrade and when it got to the  grub-pc package it screwed up cause I had my ide cables reversed
<sabgenton_> stupid thing
<sabgenton_> well it might be my fault I'll tell lmore later
<jiboumans> morning
<sabgenton_> crazy: on install of karmic, /boot/grub/grub.cfg had hard drives based on /dev/sd something
<sabgenton_> i did an update and now it's based on a UUID
<sabgenton_> I guess it's so it knows which hard drive is which even if their /dev/sd* location changes
<guntbert> !uuid | sabgenton_ yes
<ubottu> sabgenton_ yes: To see a list of your devices/partitions and their corresponding UUID's, run this command in a !shell: « sudo blkid » (see https://wiki.ubuntu.com/LibAtaForAtaDisks for the rationale behind the transition to UUID)
<sabgenton_> is this feature  newer than the initial  karmic release
<sabgenton_> ?
<sabgenton_> guntbert: why didn't karmic have this when i first installed?
<sabgenton_> was it not released at that point?
<guntbert> sabgenton_: its definitely not new - why it is implemented in step - no idea
<sabgenton_> guntbert: in step?
<guntbert> sabgenton_: ah "in steps" - sorry
<sabgenton_> ah sorry
<sabgenton_> has anyone ever had a glitch where at the ubuntu login prompt it gets sorta superimposed with a root prompt?
<guntbert> sabgenton_: no - care to !pastebin that?
<sabgenton_> i tried typing something at this root@ubuntu prompt and then it said invalid command
<sabgenton_> and sometimes it said invalid login or whatever
<sabgenton_> like it was switching back and forth
<sabgenton_> guntbert: can't it's gone now
<sabgenton_> but it's kind a worrying :/
<guntbert> sabgenton_: you never should have a prompt root@ubuntu - don't enable the root account
<sabgenton_> its not enabled!
<guntbert> !noroot | sabgenton_
<ubottu> sabgenton_: We do not support having a root password set. See !root and !wfm for more information.
<sabgenton_> and i hadn't logged on and gone sudo -i either
<sabgenton_> guntbert: I remember one time it was a real pain not being able to scp as root
<sabgenton_> but anyway i'm not using the root at all
<guntbert> sabgenton_: ah that was you ...
<sabgenton_> ?
<sabgenton_> lol
<sabgenton_> couple of years ago
<sabgenton_> 5
<sabgenton_> ish
<guntbert> sabgenton_: ok - my error - sorry - we had someone with that last week
<sabgenton_> ah
<sabgenton_> guntbert: did you tell him to passwd root then when done sudo usermod -p '!' root back
<sabgenton_> ?
<sabgenton_> (no harm?)
<guntbert> sabgenton_: no - I only listened to the conversation
<sabgenton_> ah see
<sabgenton_> well did someone else suggest it?
<sabgenton_> or was it considered a no no
<guntbert> sabgenton_: I really don't remember details
<sabgenton_> seems a bit strange not to support it
<sabgenton_> is it something strange - the reason
<sabgenton_> or does ubuntu just not support people accidentally typing things with no protection of "do sudo first"
<cemc> what if I need to rsync some stuff only root has access to to/from an ubuntu box (or between two ubuntu boxes)? rsyncd isn't too recommended either ;)
<sabgenton_> hmm!
<sabgenton_> what's the big deal in not supporting it
<sabgenton_> the only reason i can think of is users stuffing things cause they don't have the little prompt warning "use sudo"
<cemc> probably it opens up a lot more problems in general, if people (who don't have some degree of knowledge) start using root and breaking stuff ;)
<sabgenton_> yeah that I understand as above^
<sabgenton_> but if I need to rsync like you said and then I have a problem with rsync (unrelated to sudo vs root), are they now going to "not support me" because i didn't use sudo (impossible / impractical)
<sabgenton_> ?
<sabgenton_> seems strange
<sabgenton_> to use the linux term
<sabgenton_> FUD
<sabgenton_> I'm not really trying to fight  or anything just want to know if theres something I'm missing in the tecnical
<sabgenton_> outside of lots of noobs getting carried away
<cemc> another way is to go to #linux and treat your problem as a general linux problem, disregarding the distro a bit ;)
<cemc> just be careful not to mess up anything really ubuntu-specific ;)
<cemc> heh
<sabgenton_> hehe
<cemc> it's just linux :)
<cemc> with some flavouring, heh
<sabgenton_> cemc: it's funny, the politics in distro channels :)
<sabgenton_>  / irc in general
<cemc> if you need root, use root, just know about the dangers, and don't recommend to other people :)
<guntbert> sabgenton_: you did read https://help.ubuntu.com/community/RootSudo of course
<acalvo> is there any difference between SATA I, II and III?
<acalvo> are they compatible?
<cemc> is this correct for crontab? 05 0-12,17-23/2 * * * (every other hour except from 12-17) ?
<jiboumans> cemc: http://adminschoice.com/crontab-quick-reference
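cemc's hour field is subtly off: in Vixie cron a /step applies per comma-separated range, so in `0-12,17-23/2` only the second range is stepped and hours 0-12 all fire. A small Python sketch of the hour-field expansion makes the difference visible (this is a simplified parser for illustration, not cron's actual code):

```python
def expand_hours(field):
    """Expand a cron hour field like '0-12/2,17-23/2' into a sorted list of hours."""
    hours = set()
    for part in field.split(","):
        step = 1
        if "/" in part:                      # a step applies only to this part
            part, step_str = part.split("/")
            step = int(step_str)
        if part == "*":
            lo, hi = 0, 23
        elif "-" in part:
            lo, hi = map(int, part.split("-"))
        else:
            lo = hi = int(part)
        hours.update(range(lo, hi + 1, step))
    return sorted(hours)

# The step in '0-12,17-23/2' only applies to the last range:
print(expand_hours("0-12,17-23/2"))    # every hour 0-12, then 17,19,21,23
# Every other hour outside 12-17 needs a step on both ranges:
print(expand_hours("0-12/2,17-23/2"))  # 0,2,...,12 and 17,19,21,23
```

So the crontab entry cemc wants is closer to `05 0-12/2,17-23/2 * * *`.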
<awhanbiz> hi, i need help in setting up thin client
<awhanbiz> i m planning to use linux as thin client
<awhanbiz> is xubuntu good to go?
<uvirtbot`> New bug: #497831 in eucalyptus (main) "Eucalyptus-SC doesn't ask the cluster-name question" [High,In progress] https://launchpad.net/bugs/497831
<Hellsheep> Hey guys, throughout the night someone has been attempting to log in to ssh on my server, what's some kind of good software to automatically block an IP after a certain number of failed logins? I
<Hellsheep> am using Ubuntu 9.04
<RoyK> Hellsheep: fail2ban
<Hellsheep> Thanks :D
<RoyK> works well - interfaces with iptables to block IPs for some time after a certain amount of failed logins
<Hellsheep> Cool cool, sounds like exactly what i want
<RoyK> :)
<Hellsheep> Is it quite easy to configure?
<Hellsheep> Bit of a noob with servers :P
<RoyK> yeah
<RoyK> brb
<Hellsheep> kk
<Pici> Hellsheep: The defaults work fine for most people
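The defaults Pici mentions live in /etc/fail2ban/jail.conf; site overrides belong in jail.local. A minimal sketch of the ssh jail; the values shown are illustrative, not the shipped defaults:

```
# /etc/fail2ban/jail.local (sketch -- values are illustrative)
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log   # must match where sshd actually logs
maxretry = 6                   # failed attempts before a ban
bantime  = 1800                # ban length in seconds (30 minutes)
```

Note the logpath: if it points at a file sshd never writes to, fail2ban silently bans nobody.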
<jpds> Or just use SSH keys for login and block off passwords.
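jpds's suggestion maps to a couple of sshd_config directives; a sketch (keep an existing session open while testing, since a typo here can lock you out):

```
# /etc/ssh/sshd_config (sketch)
PasswordAuthentication no          # key-based logins only
ChallengeResponseAuthentication no # close the keyboard-interactive path too
PermitRootLogin no                 # most brute-force attempts target root
# then reload sshd, e.g.: sudo /etc/init.d/ssh reload
```

With passwords disabled, the brute-force attempts in the log become harmless noise.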
<Hellsheep> Thanks :)
<Hellsheep> I found some docs on it and am reading it now :)
<RoyK> Hellsheep: I'm using that on 20+ servers with linux and solaris - works well :)
<Hellsheep> Awesome :)
<Hellsheep> I had something like
<Hellsheep> 500-700 login attempts
<Hellsheep> I cant count exactly :P
<Hellsheep> I dunno how to
<Hellsheep> xD
<Hellsheep> Is it really worth reporting the IP's at all?
<RoyK> nah
<RoyK> it's just worms
<Hellsheep> ah i see
<RoyK> just block them for 30 minutes or something
<RoyK> blocking them too long will make it hard if you fail yourself :þ
<Hellsheep> :)
<RoyK> there was some thought of adding cumulatively increasing blocks, but I don't think it's been done yet
<RoyK> anyway - should be trivial
<Hellsheep> Is
<Hellsheep> Maxretry the number of login attempts
<Hellsheep> Before getting banned
<RoyK> oh - it's in already?
 * RoyK checks
<Hellsheep> Hmm
<Hellsheep> I have "maxretry"
<Hellsheep> Below the time to ban them
<RoyK> maxretry means how many times they can retry before getting banned
<Hellsheep> Ah cool
<RoyK> what I meant was 'if x.x.x.x is banned and the ban is removed and x.x.x.x tries another n times, the next time it fails, it's banned for x*2^failcount minutes
<Hellsheep> Ohhh yep i see
<RoyK> not a big issue, though
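RoyK's x*2^failcount idea is easy to sketch. A hypothetical escalation schedule in Python; the base ban time and the cap are assumptions, and this is not how fail2ban itself was implemented at the time:

```python
def ban_minutes(base, failcount, cap=24 * 60):
    """Cumulatively increasing ban: base * 2**failcount minutes, capped at a day.

    base: first ban length in minutes (assumed value in the example below).
    failcount: how many previous bans this host has already earned.
    """
    return min(base * 2 ** failcount, cap)

# A host that keeps coming back gets banned for longer each time:
schedule = [ban_minutes(30, n) for n in range(6)]
print(schedule)  # [30, 60, 120, 240, 480, 960]
```

The cap keeps RoyK's own point intact: even a repeat offender (or you, after fat-fingering your password) is never locked out for more than a day.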
<Hellsheep> All configured :D
<Hellsheep> That was easy
<RoyK> Hellsheep: what's the ip?
<Hellsheep> My IP?
<RoyK> yeah - lemme try :D
<Hellsheep> For the server
<Hellsheep> kk
<Hellsheep> 69.162.115.201
<bogeyd6> Anyone know how to resolve the issue where an Outlook user sends email to a blackberry phone and its all gobbedly gook because all of the tags are showing and you are basically reading a bunch of HTML.
<RoyK> Hellsheep: hm. eight attempts and still not blocked
<Hellsheep> O.o
<Hellsheep> Hmm
<RoyK> check logs
<Hellsheep> It was set to 6
<Hellsheep> i see the logs
<RoyK> set it to 3
<Hellsheep> Jan  8 16:34:52 basetek sshd[20080]: Failed password for root from  port 61249 ssh2
<RoyK> no ip?
<Hellsheep> I removed it
<Hellsheep> in case
<Hellsheep> :P
<Hellsheep> Jan  8 16:34:52 basetek sshd[20080]: Failed password for root from 81.191.198.164 port 61249 ssh2
<Hellsheep> Thats it
<RoyK> that's mine
<RoyK> people can find my IP if they want it anyway :)
<RoyK> Hellsheep: is fail2ban running?
<Hellsheep> Yep
<Hellsheep> One question
<Hellsheep> Could it be related to iptables not being configured?
<RoyK> no
<RoyK> it configures iptables
<RoyK> iptables -I INPUT -s x.x.x.x -j DROP
<RoyK> that sort of thing
<Hellsheep> ah okay
<RoyK> try iptables -vnL
<RoyK> see if that shows anything
<Hellsheep> Chain fail2ban-ssh (1 references)
<Hellsheep>  pkts bytes target     prot opt in     out     source               destination
<Hellsheep>     0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
<Hellsheep> Only thing relating to fail2ban in there
<RoyK> hm.. seems fail2ban isn't parsing the logs, then
<Hellsheep> Lemme check i got it right
<Hellsheep> Okay, webmin reports my failed ssh in /var/logauth.log
<Hellsheep> I think i have it set to /var/log/logauth.log
<Hellsheep> Thats why.
<Hellsheep> My bad lol
<RoyK> :)
<RoyK> set max to 3
<RoyK> lemme test again
<Hellsheep> Okay go again
<Hellsheep> 2010-01-08 16:40:48,197 fail2ban.actions: WARNING [ssh] Ban 81.191.198.164
<Hellsheep> :)
<Hellsheep> Chain fail2ban-ssh (1 references)
<Hellsheep>  pkts bytes target     prot opt in     out     source               destination
<Hellsheep>    17   912 DROP       all  --  *      *       81.191.198.164       0.0.0.0/0
<Hellsheep>     0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
<Hellsheep> Much better
<Hellsheep> :D
<ewook> mm, fail2ban is nice.
<Hellsheep> Thanks for that RoyK
<RoyK> Hellsheep: :)
<RoyK> Hellsheep: looks like it parsed auth.log before I tried to login again
<RoyK> it was banned on first attempt
<jiboumans> mathiaz: running a few mins late, sorry
<mathiaz> jiboumans: ok
<RoyK> -24°C
<RoyK> fuck this country
<jiboumans> mathiaz: eta 1 min :)
<jussi01> RoyK: watch the language please
<RoyK> jussi01: do we have another anti-swearing bot?
<RoyK> jussi01: I use the English language. Any other prefferd?
<RoyK> preferred, perhaps
<jussi01> RoyK: the ubuntu channels have some guidelines, please follow them
<jussi01> !guidelines
<ubottu> The guidelines for using the Ubuntu channels can be found here: http://wiki.ubuntu.com/IrcGuidelines
<RoyK> jussi01: such as not using English?
<jussi01> !language | RoyK
<ubottu> RoyK: Please watch your language and topic to help keep this channel family friendly.
<RoyK> jussi01: most families knows English, sir
<RoyK> jussi01: or do you mean "palin friendly"?
<jussi01> RoyK: ok, please dont swear, as expressly stated in the guidelines
<RoyK> jussi01: can you please tell me what is wrong with weighting expressions?
<RoyK> jussi01: explain why the hell you don't want me to use certain amounts of certain languages
<RoyK> there's no kids in here anyway
<RoyK> none that haven't grown accustomed to common language, anyway
<RoyK> jussi01: or are you just a shell script, unable to answer straight questions?
 * RoyK finds it strange that some idiot like jussi01 just complains about the language and doesn't say anything else in here
<RoyK> fucking bot
<soren> RoyK: Calm down.
<jussi01> RoyK: thanks for your patience.... my connection died... :/
<RoyK> I wonde wtf wrote those "guidelines"
<RoyK> wonder
<RoyK> does everything have to be church-friendly?
<soren> RoyK: Dude. Quit being an arse.
<RoyK> soren: I'm not
<soren> There you go again.
<jussi01> RoyK: if you wish to discuss the guidelines, please feel free to join us in #ubuntu-ops
<RoyK> soren: I'm just concerned that some people are more obsessed about certain parts of the English language than they are about the technical issues described in here
<soren> RoyK: You're the one spending most time arguing about it.
<RoyK> soren: well, mr jussi01 here hasn't said a word helping others as far as I can see from my logs, but was very quick to bitch me about language. is that fair?
<soren> RoyK: Give it a rest.
<RoyK> I will
<soren> Thank you.
<Doonz> hey guys i have 2 servers running 9.10 on them. Im trying to perform an rsync transfer without the use of compression. one server is 100Mbit both ways, the other is on a 20/1mbit line. Ive run speed tests from both servers and they do hit their max speeds. but when transferring from the 100mbit box to the 20mbit box i cant go any faster than around 200KB/s any ideas?
<zul> smoser: ping
<Doonz> i should just add scp transfers are the same thing
<RoyK> Doonz: I guess the 1Mbps uplink performs worse than defined
<Doonz> ROy the 1mbit link does not upload
<Doonz> its the 100mbit link that does and it preforms as expected
<RoyK> Doonz: does scp and rsync perform similar?
<cemc> Doonz: where do you see that 200KB/s ?
<Doonz> identical top end speeds
<Doonz> going from 100mb it to 20mbit
<cemc> did you try to test with say iperf between them?
<cemc> could there be some other limitation between them? like say an ISP limiting some traffic ?
<abli> Hi! what inetd is installed by default on hardy? (8.04.3)
<soren> abli: none.
<abli> Ok, thanks.
<_ruben> rsync probably requires a fair amount of uploading on the remote end as well, to verify/check/determine what to transfer
<Doonz> cemc:  whats iperf
<_ruben> network performance tester
<cemc> that won't read from disk or do any compressing/other things, it (should) just tests the bandwidth. install it on both ubuntus and try it
<cemc> Doonz: also make sure that the 20/1 link is not uploading anything else, because if the upload bw is full'ish, the 20mbps download could suffer
<Doonz> yeah yeah yeah this isnt my first time on the inter
<Doonz> web
<cemc> ok, just making sure :)
<kirkland> ttx: hey
<ttx> kirkland: yo
<kirkland> ttx: so are you able to run instances with today's code?
<ttx> couldn't on a CLC / Walrus separated setup
<kirkland> ttx: what about all-in-one?
<ttx> I'm reproducing on a classic CLC+Walrus/CC+SC to see if that's specific to Walrus separate or not
<cemc> Doonz: start a rsync/ssh, then on the 20/1 end put a dstat on the interface and see what's happening (upload doesn't get full). do you rsync through ssh? maybe ssh is limited, try a rsync://
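[The arithmetic behind the complaint is worth making explicit: 200 KB/s is only about 1.6 Mbit/s, far below both the 20 Mbit downlink and the 100 Mbit uplink, which is why a raw-bandwidth test like iperf (no disk, no ssh in the path) is the right next step. A quick sketch of the conversion:]

```python
# Convert the observed rsync rate (kilobytes/sec) into link units (megabits/sec).
observed_kb_per_s = 200
mbit_per_s = observed_kb_per_s * 8 / 1000  # 8 bits per byte, 1000 kbit per Mbit
print(f"{mbit_per_s:.1f} Mbit/s")  # well under the 20 Mbit/s downlink
```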
<ttx> kirkland: test in progress
<kirkland> ttx: k -- i'm about to rebuild my cluster
<ttx> all registration should work with my latest fix
<kirkland> ttx: is there a topology you'd like me to focus on?
<ttx> kirkland: not really, any should work
<kirkland> k
<kirkland> ttx: btw ...
<ttx> note that there is an issue with eucalyptus-cc separate
<ttx> bug 504704
<kirkland> ttx: i setup a pxe/tftp server and i've been doing netinstalls all week
<uvirtbot`> ttx: Error: Could not parse data returned by Launchpad: The read operation timed out
<kirkland> ttx: it's super sweet
<ttx> using anna/choose_modules=eucalyptus-udeb ?
<ttx> bug 504704
<kirkland> ttx: yeah, preseeding that too
<uvirtbot`> Launchpad bug 504704 in eucalyptus "[lucid] On CC-only setups, eucalyptus-cc fails to start at boot" [Medium,Triaged] https://launchpad.net/bugs/504704
<ttx> cool
<kirkland> ttx: it makes me wonder ...
<ttx> I need to bring a disk and sync with your mirror
<kirkland> ttx: if we should create a eucalyptus-netboot package
<ttx> so that I don't download 200 Gb on my small DSL line
<kirkland> ttx: optional, suggested, that can be installed on the CLC
<kirkland> ttx: sure
<kirkland> ttx: i have totally hands-off, no-touch installs working now
<kirkland> ttx: takes minutes
<ttx> kirkland: interesting
<kirkland> ttx: and in fact ...
<ttx> kirkland: what would the package specifically do ?
<kirkland> ttx: install tftp
<kirkland> ttx: set up /var/lib/tftpboot
<ttx> with a few example preseeds ?
<kirkland> ttx: yeah, add some preseeds
<ttx> sure, sounds like a good timesaver
<kirkland> ttx: one for each of our defined, well-supported topologies
<kirkland> ttx: mathiaz mentioned that there's a similar spec, for autoinstalls
<kirkland> ttx: anyway, i think UEC is interesting because you never install just one machine
<kirkland> ttx: once you've installed >= 3 machines, i find it's faster to have setup a network install service
<kirkland> ttx: there is one other, simpler option ....
<kirkland> ttx: we could make the CLC recommend squid
<kirkland> ttx: and make all dependent nodes use it as their caching proxy
<kirkland> ttx: *that* would save you a lot of time, if you had a transparent caching proxy
<kirkland> ttx: we did this at Intel
<kirkland> ttx: it worked *marvelously*
<ttx> sounds like a nice best practice
<kirkland> ttx: i think we should default the CLC to being a squid proxy, IMHO
<kirkland> ttx: since all of your nodes will pull the same packages over and over and over again, on updates, and such
<ttx> about bug 504704, its linked to the "eucalyptus-cc starts on started eucalyptus" issue
<uvirtbot`> Launchpad bug 504704 in eucalyptus "[lucid] On CC-only setups, eucalyptus-cc fails to start at boot" [Medium,Triaged] https://launchpad.net/bugs/504704
<kirkland> ttx: and since the CLC serves a preseed file, we could easily set that in one place
<ttx> on a -cc only install that will fail
<kirkland> ttx: right so i was too tired to explain this to you last night
<ttx> kirkland: so we are back at "should eucalyptus upstart script really be used to manage all eucalyptus in any configuration"
<kirkland> ttx: but i worked for several hours on that one
<ttx> kirkland: for NC, right
<kirkland> ttx: right, and i have a reasonable hack that makes that work, sort of
<ttx> basically, CC has the same issue, only more annoying
<ttx> since CC is regularly installed with other stuff
<kirkland> ttx: yeah
<kirkland> ttx: so i'm pretty sure that something is broken with:
<ttx> so having a single "stop eucayptus" is really nice there
<kirkland> start on (started foo and started bar)
<ttx> yes, that's "The Upstart Bug"
<kirkland> ttx: i'm going to talk with keybuk about that now in -devel
<ttx> just some background before you start
<ttx> https://bugs.launchpad.net/ubuntu/+source/eucalyptus/+bug/503850/comments/1
<uvirtbot`> Launchpad bug 503850 in eucalyptus "Upstart publication scripts no longer run" [High,Fix released]
<ttx> kirkland: I'll follow the discussion
<ttx> kirkland: but apparently it's a known issue
<ttx> kirkland: that is not planned to be fixed in lucid
<ttx> kirkland: that doesn't mean there is no other way to achieve the same result
<ttx> kirkland: so picking Keybuk's brain about it is good.
<kirkland> ttx: okay, well our other option is emitting signals
<kirkland> ttx: initctl emit "eucalyptus-is-running"
<kirkland> ttx: we can start on that
<ttx> kirkland: that's worth a try
<kirkland> ttx: yup, apw says that's "The Upstart Bug"
<ttx> not sure keybuk is around
<apw> it sounds like TUB to me.  not sure if it is a bug, or intended
<apw> but its confusing what it means for sure
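[The event-emission workaround kirkland sketches above would look roughly like this in Upstart job files; the job and event names here are illustrative, not the actual shipped eucalyptus ones.]

```
# In the main eucalyptus job, once it is confirmed up:
#   post-start script
#       initctl emit eucalyptus-is-running
#   end script

# /etc/init/eucalyptus-cc.conf (sketch): start on the emitted event
# instead of the compound "start on (started foo and started bar)":
start on eucalyptus-is-running
stop on stopping eucalyptus
```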
<smoser> zul, i'll be in in ~30 minutes
<zul> smoser: k
<coffeedude> jdstrand, I was going to ask you about BUG 274530 but looks like you are already working on it.
<uvirtbot`> Launchpad bug 274530 in openoffice.org "cell with german umlaut incorrectly displayed" [Undecided,Invalid] https://launchpad.net/bugs/274530
<coffeedude> jdstrand, Ummm...meant bug 274350
<uvirtbot`> Launchpad bug 274350 in likewise-open "apparmor HOMEDIRS not adjusted for likewise" [High,In progress] https://launchpad.net/bugs/274350
<jdstrand> coffeedude: yeah-- it will be a small debdiff. I'll have it soon (it's building locally)
<coffeedude> jdstrand, cool.  I'm assuming it is too late for alpha2.  or not?
<jdstrand> coffeedude: I was thinking if you approved it, I'd upload
<coffeedude> jdstrand, Sure.  should be a trivial diff.  Sounds good.  Just ping me and I'll review it as soon as you are ready.
<jdstrand> coffeedude: thanks!
<jdstrand> coffeedude: btw, as I'm sure you know, the source was accepted, and I just accepted the binaries about 30 minutes ago
<coffeedude> jdstrand, for the new likewise-open_5.4 packages?
<jdstrand> yeah
 * coffeedude cheers
<jdstrand> https://launchpad.net/ubuntu/+source/likewise-open/5.4.0.39949-2
<jdstrand> coffeedude: ^
<coffeedude> jdstrand, cool beans.  Makes me happy.
<jdstrand> :)
<ankit_babbar> some one here to help me with ldap
<ankit_babbar> ??
<ankit_babbar> hello some 1 dr to help
<soren> We can't help if you don't ask a question.
<ttx> kirkland: my testing blocks on bug 504530
<uvirtbot`> Launchpad bug 504530 in euca2ools "euca-register fails to register an image: register_image() takes at least 2 non-keyword arguments (1 given)" [High,Confirmed] https://launchpad.net/bugs/504530
<kirkland> ttx: oh, yeah, damn
<kirkland> ttx: we need neil for that one
<kirkland> ttx: would you send an email to Canonical-Eucalyptus?
<ttx> kirkland: ok
<ankit_babbar> Sorren
<ankit_babbar> hi
<ankit_babbar> I have been stuck on ldap for a month
<ankit_babbar> Can you tell me how to ssh using an ldap account
<ankit_babbar> And while I do machine authentication, is autofs needed to load a particular home directory for a remote user?
<ttx> kirkland: done
<zul> smoser: im testing my ec2-config scripts on ec2 instance ill be updating the debian packaging as well fyi
<kirkland> ttx: cheers
<ttx> kirkland: fwiw, reverting to the previous euca2ools doesn't seem to solve it
<ttx> so it might indeed be a eucalyptus issue instead
<kirkland> ttx: i looked at the source
<kirkland> ttx: it's in a library somewhere
<kirkland> ttx: the bug is
<kirkland> ttx: as that one tool doesn't take more than one argument
<kirkland> ttx: but it sources something that makes it think that it does
<ttx> arh
<smoser> zul, thats cool. try starting with my build in my ppa
<zul> smoser: ill get the things working here and then ill merge your branch and re-test them sounds ok?
<smoser> bzr+ssh://bazaar.launchpad.net/~smoser/ec2-init/ec2-init.devel.pkg is pkg branch at this point. i just pushed it.
<smoser> err, try bzr push lp:~smoser/+junk/ec2-init.devel.pkg instead.
<smoser> zul, if you'd like i can do that work.
<smoser> i can just merge your package into mine
<zul> smoser: sure ill ping you when im ready
<smoser> ok. i'll be back in a bit now
<zul> ok
<Cromulent> I'm trying to install the Java EE SDK on my 9.10 32bit server but it says it requires the DISPLAY environment variable be set - do you know what I should set it to?
<ttx> zul: pastedeploty doesn't seem to have been reviewed yet ?
<ttx> python-pastedeploy
<zul> its been renamed to paste
<ttx> zul: what does that mean ? There is another bug about it ?
<zul> ttx: just a sec
<zul> ttx: crap hold on
<zul> ttx: yeah it hasnt been reviewed yet
<ttx> ok
<zul> sorry i got confused
<zul> it needs to be reviewed
<ttx> same for pastescript, right
<zul> pastescript has
<zul> it was done this morning
<ttx> zul: ah, right
<vish> mathiaz: hi... is the server team using Papercuts project to fix trivial bugs on the server side or using a different project? why I ask is regarding Bug #194472 , mpt mentioned we need to keep the desktop goal of not using the terminal and it needs to be decided by the server team...
<uvirtbot`> Launchpad bug 194472 in hundredpapercuts "Entering password in Terminal gives no visual feedback" [Low,Fix committed] https://launchpad.net/bugs/194472
 * ttx doesn't refresh sufficiently fast
<vish> mathiaz: to my knowledge , there havent been any server "papercuts" ... should i cancel the papercut task?
<ttx> vish: we plan to have server papercuts. I'm not sure that one would be accepted as a "server papercut" though
<vish> ttx: from what i hear it fix is to show the stars only during user entry and it disappears once the users hits "return" ... [personally i would like that , but my main concern is regarding the papercut task] mpt mentioned to check with you guys first
<vish> s/it/the
<ttx> vish: I'd reject it as a onehundredpapercuts bug and nominate it for the server papercuts project, whenever this starts (in a few weeks)
<ttx> i'll send an email about that project soon
<vish> ttx: cool , thanks :)
<ttx> we are still in the process of defining what would make a server papercut :)
<vish> ;)
<vish> ttx: isnt that bug a security risk? should it be marked so?
<ttx> vish: depends on implementation, I suppose
<vish> hmm , ok.. i'll leave it as such
<engine252> i have a question i'm running ubuntu-server on qemu/kvm
<engine252>  i'm connecting to the internet via a wireless network. is there a way to attach the virtual server to the physical network?
<engine252>  i want to access my server from the internet
<Pici> engine252: How is the host computer connected to the network/internet?
<engine252> now it is connected with a NAT configuration
<engine252> oh no , wireless
<engine252> the guest through NAT
<engine252> guest = ubuntu-server
<engine252> Pici:
<Pici> engine252: I'm not sure sorry, I just wanted to make sure that the others here had enough information to answer
<engine252> see i also have vmware workstation installed and there i can configure a bridged network but i can't seem to configure the same for kvm
<dasunsru1e32> Hello, I am having issues with likewise-open5 on Karmic. I can successfully log the machine onto the domain, it registers with Active Directory; however, when I reboot, I can no longer auth to LDAP. I have to log the machine off/on the domain without rebooting to be able to login in. What could be the issue? Thank you for your help.
<coffeedude> dasunsru1e32, do you mean you are using pam/nss_ldap on your system as well as likewise-open?
<coffeedude> dasunsru1e32, or justy that you cannot log in with domain credentials after the reboot?
<dasunsru1e32> I have installed likewise-open5
<dasunsru1e32> and used this guide to setup: https://help.ubuntu.com/9.10/serverguide/C/likewise-open.html
<dasunsru1e32> it works, until I reboot
<dasunsru1e32> It was fine, I am not sure what is happening or why all the sudden it is failing after reboot
<coffeedude> dasunsru1e32, can you verify that the following processes are running after reboot: lsassd, netlogond, npcmuxd
<dasunsru1e32> sure
<coffeedude> hey dendrobates
<dasunsru1e32> root      3250  0.0  0.2 198844  7296 ?        Sl   09:30   0:00 /usr/sbin/lsassd --start-as-daemon
<dasunsru1e32> root      2047  0.0  0.0  82428  2268 ?        Sl   09:24   0:00 /usr/sbin/netlogond --start-as-daemon
<dasunsru1e32> root      2065  0.0  0.0  88312  1064 ?        Sl   09:24   0:00 /usr/sbin/npcmuxd --start-as-daemon
<dasunsru1e32> Now, I have not rebooted yet
<coffeedude> dasunsru1e32, Ahh,...yeah.  Make sure after the reboot then.
<dasunsru1e32> ok, I will be right back, since I am on the machine that is affected
<ttx> jjohansen: is it still the plan to fix bug 494565 today ?
<uvirtbot`> Launchpad bug 494565 in linux "support ramdiskless boot for relavant kvm drive interfaces in -virtual" [Low,Triaged] https://launchpad.net/bugs/494565
<coffeedude> dasunsrule32, wb
<dasunsrule32> thanks
<dasunsrule32> one sec
<dasunsrule32> I am checking the services
<jjohansen> ttx: sorry that won't hit today
<ttx> jjohansen: so that won't hit for alpha2 ?
<jjohansen> right
<dasunsrule32> what were the other two services?
<ttx> jjohansen: ok.
<dasunsrule32> lsassd is running
<coffeedude> dasunsrule32, npcmuxd and netlogond
<coffeedude> dasunsrule32,  those 3 should be the only required ones.
<dasunsrule32> They are all running, I tried to login and I can't
<coffeedude> dasunsrule32, ok,  moving to private channel then so we can debug.
<dasunsrule32> ok
<dasunsrule32> how do I pm in irc?
<dasunsrule32> sorry,
<coffeedude> dasunsrule32: "/query nick".  But I'll open up a private chat so you should see it now.
<guntbert> dasunsrule32: /msg nick here comes your text
<RoyK> dasunsrule32: /msg google irc commands :þ
<ttx> kirkland: neil explanation doesn't sound very good to me, looks like we'll have to revert to a previous boto
 * ttx is not really here anymore, will be back later for catchup
<kirkland> ttx: urgh
<kirkland> ttx: well, i think it's important that we have this working for A2
<kirkland> ttx: i milestoned the bug
<kirkland> ttx: looks like smoser did the merge
<smoser> ?
<kirkland> smoser: so looks like the newer python-boto is breaking euca-tools, according to neil
<smoser> carp
<kirkland> smoser: see the mail on Canonical-Eucalyptus from Neil
<kirkland> smoser: he recommends reverting for Lucid
<kirkland> smoser: i don't know how that sits with you
<zul> smoser: isnt that suppose to be "crap"?
<smoser> well, i wasn't aware of "the boto upstream is going to be releasing another version soon from what I understand."
<smoser> 1.9 makes parts of ec2-init much easier
<smoser> its not something that couldn't be worked around, but basically, I want the boto.utils.get_instance_metadata from boto 1.9
<smoser> we can revert, but then i have to get the functionality it was giving me.
<ttx> kirkland, smoser: hmhm
<smoser> yeah,
<smoser> i will admit that it is probably less work for me to work around than to work around in euca2ools, ie, ec2-init uses less of boto than they do
<ttx> smoser: yes, but you don't need the extra layer of work
<smoser> but debian has 1.9 now
<smoser> well, right. but neither do eucatools folks.
 * ttx is kinda happy to see API issues not biting only java libraries
<smoser> so , here was my thoughts
<smoser> here is why i wanted 1.9
<smoser> - i needed to be able to crawl the metadata service, getting all the data that was there.
<smoser> - boto1.8 has a boto.utils.get_instance_metadata() that does that, but has bugs where it doesn't get it all.
<smoser> - boto1.9 gets it all
<smoser> so i was looking at
<smoser> a.) reimplementing it
<ttx> kirkland: we need to know the extent of the APi breakage in euca2ools, to see how feasible it is to patch on our end
<ttx> kirkland: if it's limited to a few functions, it might be doable for us to patch euca2ools for compatibility
<smoser> b.) copying it (licensing of boto is MIT, ec2init is GPL)
<ttx> kirkland: I understand that upstream doesn't want to support both
<smoser> so i guess i could copy to ec2init
<smoser> i dont know that i have the insight to declare that from a boto perspective 1.9 will be more easily supported for 5 years
<smoser> than 1.8
<ttx> kirkland: we need more detail to come up with the best solution
<ttx> I've got to go -- be back later to discuss it if needed
<smoser> i'm almost certain that it'd be easiest fo rme to copy parts of boto
<ttx> smoser: how about overriding the functions in 1.8 with some imported 1.9 code ?
<ttx> python allows for some nice function = my_function
<smoser> true. yeah, its largely standalone i think. i just have to copy it somewhere.
<ttx> smoser: depends on how much your desired function depends on other parts of the code, obviously
<ttx> I'm gone now, back in 100 minutes or so.
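[ttx's "function = my_function" suggestion is plain attribute rebinding. A minimal illustrative sketch follows; FakeBotoUtils and both metadata functions are stand-ins invented here, NOT the real boto 1.8/1.9 APIs.]

```python
class FakeBotoUtils:
    """Plays the role of the boto.utils module (illustrative stand-in)."""
    @staticmethod
    def get_instance_metadata():
        # boto 1.8-style behaviour: misses some nested metadata keys
        return {"ami-id": "ami-123"}

def get_instance_metadata_19():
    # locally copied boto 1.9-style behaviour: crawls the whole tree
    return {"ami-id": "ami-123",
            "block-device-mapping": {"root": "/dev/sda1"}}

# Python functions are just attributes, so the buggy one can be rebound
# with the imported/copied implementation, as ttx suggests:
FakeBotoUtils.get_instance_metadata = staticmethod(get_instance_metadata_19)

print(sorted(FakeBotoUtils.get_instance_metadata()))
```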
<cemc> 100? :) why not 2 hours? :)
<smoser> metric system
<cemc> lol
<zul> le french est weird
<zul> smoser: if we are going to have runurl suppor in ec2-config then it should really be in the archive :)
<erichammond> zul: I've gotta run, but at some point I'd be interested to hear what is meant by "runurl support in ec2-config".
<smoser> i kind of wonder what that is too.
<smoser> i think the goal was to, from cloud-config support something like:
<smoser> run-url: http://short.url.asdf arg1 arg2
<smoser> zul, if you want me to implement run-url, its fairly easy
<zul> smoser: yes please :)
<erichammond> runurl was intended to be a command line extension where you can pass it parameters and do things with return values and output just like any other shell command.  If ec2-config has a way to run shell script blobs, and runurl is installed, then that gives more power than just letting them specify a URL and arguments.
<zul> smoser: i put it in my yaml file
<erichammond> E.g., we do things like: if runurl SOMEURL SOMEPARMS; then ...; else ...; fi
<erichammond> or: runurl SOMEURL SOMEPARMS > SOMEFILE
<smoser> sure, easy enough to add a command to do that. i'm not really sure what the intent of that item was... lost in UDS brain
<erichammond> smoser: A command to do what?
<smoser> if we had a 'command' named runurl
<smoser> that then the cloud-config would just invoke
<smoser> and could be used elsewhere too
<erichammond> Agreed.  The AMIs I build have a command named runurl.  I publish it in the Alestic PPA.
<erichammond> If you're going to rewrite it, please let me review so that we can make sure it is backwards compatible.  For example, I use wget which enables some cool features like being able to drop the "http://" for easier to read command lines.
<erichammond> https://launchpad.net/~alestic/+archive/ppa
<erichammond> I'll be back on in a few hours.
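[As a rough functional sketch of what "runurl URL ARGS..." does — fetch a script, run it with the given arguments, propagate the exit status — the following is an assumption-laden toy, not the Alestic implementation (which is a small wget-based shell script):]

```python
import subprocess
import tempfile
import urllib.request

def runurl(url, *args):
    """Fetch a script from url and run it with args; return its exit status.

    Sketch of the runurl idea only, not the real tool.
    """
    if "://" not in url:
        # the real tool lets you drop the "http://" for shorter command lines
        url = "http://" + url
    body = urllib.request.urlopen(url).read()
    with tempfile.NamedTemporaryFile(suffix=".sh") as f:
        f.write(body)
        f.flush()
        return subprocess.call(["sh", f.name, *args])
```

[With this shape, erichammond's "if runurl SOMEURL SOMEPARMS; then ...; fi" maps directly onto checking the returned exit status.]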
<zul> yo ttx
<ttx> zul: sssshhh. I'm not here !
<zul> riiiight
<uvirtbot`> New bug: #504897 in nut (main) "megatec_usb problem (did not claim interface)" [Undecided,New] https://launchpad.net/bugs/504897
<phitoo> Hello all! I was wondering if there was a place where I could track development of LXC in Ubuntu?
<jiboumans> phitoo: there's a blueprint for it
<jiboumans> phitoo: https://blueprints.launchpad.net/ubuntu/+spec/server-lucid-contextualization
<phitoo> jiboumans: Thanks. Now looking... :-)
<phitoo> jiboumans: OK! Is there some sort of status report? Some way to report issues? I expect to be testing it in the next Alpha release.
<jiboumans> phitoo: the spec is targeted for lucid but not yet accepted. We're considering if we can add it for the next milestone (alpha3) which will be completed by late february
<jiboumans> ttx++ # sex lead, ha!
<ttx> jiboumans: way more sexy than DX or DUX
<phitoo> jiboumans: Is there anything I can do or say to push for acceptance? I can test but not develop.
<bkonkle> Quick question - we're about to replace a drive in an HP ProLiant server with Raid 1+0.  Is there a way that we can monitor the raid rebuild progress within the Ubuntu command line?
<dasunsru1e32> I thought you could cat /dev/mdm$device_number
<ttx> phitoo: you can propose your testing skills to the spec assignee
<ttx> phitoo: I'm sure he will make good use of you ;)
<bkonkle> Hmm - I just looked under /dev, and we have no mdm devices.  I'm guessing that's for software Raid, and we're using hardware raid built into the Proliant server
<kirkland> cjwatson: hey, idea i wanted to run by you ...
<kirkland> cjwatson: tell me if i'm off the reservation
<kirkland> cjwatson: i find it really useful to install a caching squid proxy on my CLC
<dasunsru1e32> bkonkle: you would need to see if HP has any software that would let you monitor it then I believe
<dasunsru1e32> mdm would be s/w raid
<kirkland> cjwatson: and point all of the UEC components to use the CLC as its proxy server
<bkonkle> Okay, got it.
<bkonkle> Thank you!
<dasunsru1e32> no prob
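[For the record, Linux exposes software-RAID (md) rebuild progress through /proc/mdstat rather than a /dev node; a hardware controller like the ProLiant's is invisible there and needs HP's own tooling (e.g. the hpacucli utility). A small sketch of checking for md status:]

```python
def mdstat():
    """Return the kernel's software-RAID status report, or '' when the
    md driver isn't loaded (e.g. hardware-RAID-only machines)."""
    try:
        with open("/proc/mdstat") as f:
            return f.read()
    except FileNotFoundError:
        return ""

status = mdstat()
print(status or "no software RAID arrays (md driver not loaded)")
```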
<kirkland> cjwatson: since we're necessarily downloading the same stuff over and over and over again for potentially dozens of NCs
<kirkland> cjwatson: what would you think of my making that the default behavior (preseeding that proxy info into the hosted preseed files), but making it debconf (priority low or medium) changeable?
<zul> smoser: lp:~zulcss/ubuntu/lucid/ec2-init/ec2-init-config
<ttx> zul: nice work !
<zul> ttx: its really simple but its enough to get us started working on it
<zul> ergh...i mean improving it
<cjwatson> kirkland: hmm, I don't really have an opinion either way. I imagine some would find it useful but some (many?) would have a corporate mirror already
<kirkland> cjwatson: agreed
<cjwatson> kirkland: if you did it you would also have to cope with the case where the CLC itself needs a proxy to see the outside world
<cjwatson> kirkland: seems like something to look at as low priority
<kirkland> cjwatson: that's already the case, though, no?
<ttx> and for those folks it would result in additional load on CLC
<kirkland> cjwatson: yes, definitely low priority
<ttx> ?
<kirkland> ttx: proxy work is pretty cheap
<kirkland> ttx: i have a powerpc mac mini that's my squid proxy :-)
<ttx> network load? CLC is hit pretty hard by clients, theoretically
<cjwatson> kirkland: proxy's already handled in some ways, but it would require additional handling to also configure squid to use another proxy
<kirkland> cjwatson: gotcha
<kirkland> ttx: sure, some load
<kirkland> ttx: i'm talking mostly about caching debs and package updates/installs
<kirkland> ttx: that doesn't happen often
<ttx> kirkland: hm, right
<kirkland> ttx: -nc's don't really talk to the outside world very often
<kirkland> ttx: but when they do, would be nice to suck those down at gigabit speed
<kirkland> ttx: especially when doing this over and over and over
<kirkland> ttx: which is necessarily the case with a cluster
<sangel> hi everyone
<sangel> !!
<sangel> guys, is anyone there
<sangel> I'm having problems with Bin9
<sangel> Bind9
<sangel> a very simple thing
<Pici> !it
<ubottu> Go to #ubuntu-it if you want to speak Italian; in this channel we use only English. Thanks! (right-click on the channel name to join)
<sangel> sorry :P
<smoser> jjohansen, ping
<jjohansen> pong
<smoser> i think maybe kernel is locking up in ec2 instances
<smoser> at least i can't explain it. it just goes away
<jjohansen> hrmm, it boots and then dies
<smoser> ie, system there, i'm using it, and then ssh console stops
<smoser> reboot brings it back.
<jjohansen> hrmm, I assume you have tried sshing back in
<smoser> actually, so that means the kernel isn't dead. because i do see a
<smoser> [   32.429018] Restarting system.
<smoser> yeah, i've tried again and again.
<jjohansen> what of pinging
<smoser> ping external to ec2 never works.
<smoser> i'll try internal to internal if it goes down again
<cab938> where would a put a shell script so that it loads on first boot and runs as root?
<cab938> I was using the --firstboot flag for ubuntu-vm-builder but it just doesn't work
<cab938> I essentially want to build a vm, mount its drive, inject some files, then start they vm somewhere else
<jjohansen> smoser: it could still be part of the kernel locking, there are some updates I need to finish rolling in
<geezenslaw> Hello, I have a local cloud vs. public cloud question.
<smoser> cab938, easiest thing to do is probably to just add an rc.local like init.d script
<malloc64> how do i kill a process that doesn't die with kill -9?
<guntbert> malloc64: sudo kill -9
<malloc64> okay, how do i kill a process that doesn't die with sudo kill -9?
<guntbert> malloc64: what process? show its line from ps aux please
<malloc64>  /usr/bin/perl -w /usr/share/debconf/frontend /var/lib/dpkg/info/xserver-xorg.postinst configure 1:7.4~5ubuntu18
<malloc64>  root      6998  0.0  2.8  56352 14044 pts/2    D<   10:40   0:00 /usr/bin/perl -w /usr/share/debconf/frontend /var/lib/dpkg/info/xserver-xorg.postinst configure 1:7.4~5ubuntu18
<mathiaz> kirkland: there is a spec about providing an easy to setup apt proxy
<malloc64> sorry, there's the full
<kirkland> mathiaz: yeah, that's one step beyond what i'm suggesting
<kirkland> mathiaz: as that would require a few hundred gigs of disk
<kirkland> mathiaz: just a squid proxy on the CLC, to cache updates and debs, could be done pretty cheap, without requiring a lot of disk
<mathiaz> kirkland: well - IIUC you wanna run a squid proxy on the CLC to cache the packages?
<kirkland> mathiaz: right -- just the ones needed, though
<mathiaz> kirkland: right - IIRC that's what the apt proxy spec is about
<kirkland> mathiaz: hmm, okay
<mathiaz> kirkland: granted - you may wanna pull down a complete archive
<mathiaz> kirkland: but that's just another step IIRC
<mathiaz> kirkland: having an easy way to setup an apt proxy seems to be the first step
<mathiaz> kirkland: and could be usefull in any environement beside CLC
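[The node-side half of this idea is a one-line apt setting; a sketch assuming the CLC ran squid on its default port 3128 at a hypothetical address 192.0.2.1 (file name and address are illustrative):]

```
# /etc/apt/apt.conf.d/01proxy on each NC/CC (sketch):
Acquire::http::Proxy "http://192.0.2.1:3128/";
```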
<malloc64> guntbert: root      6998  0.0  2.8  56352 14044 pts/2    D<   10:40   0:00 /usr/bin/perl -w /usr/share/debconf/frontend /var/lib/dpkg/info/xserver-xorg.postinst configure 1:7.4~5ubuntu18
<kirkland> mathiaz: right
<guntbert> malloc64: see http://linuxgazette.net/issue83/tag/6.html
<kirkland> mathiaz: ttx: i'm interested in your opinion about this ...
<kirkland> http://paste.ubuntu.com/353624/
<malloc64> gunbert: sounds like i need to reboot.  but i was in the middle of a jaunty to karmic upgrade. am i hosed?
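[The "D<" in malloc64's STAT column is the key: the process is in uninterruptible sleep, so no signal — not even SIGKILL — is delivered until the kernel operation (usually I/O) completes. A quick sketch of reading a process's state code from /proc (Linux-only):]

```python
import os

def proc_state(pid):
    """Return the one-letter state code (R, S, D, Z, ...) from /proc/<pid>/stat.

    'D' means uninterruptible sleep: kill -9 has no effect until the
    in-kernel operation finishes.
    """
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # field 2 is "(comm)" and may itself contain spaces and parentheses,
    # so split once from the right of the last ')'
    return data.rsplit(")", 1)[1].split()[0]

print(proc_state(os.getpid()))  # a process reading its own stat is running
```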
<kirkland> mathiaz: ttx: to solve https://launchpad.net/bugs/487275
<uvirtbot`> Launchpad bug 487275 in eucalyptus "eucalyptus.conf should not be a conffile" [High,Triaged]
<kirkland> mathiaz: ttx: I'm creating /etc/eucalyptus/eucalyptus-defaults.conf
<kirkland> mathiaz: ttx: which will be a conffile
<kirkland> mathiaz: ttx: managed by us
<kirkland> mathiaz: ttx: and sourced at the top of /etc/eucalyptus/eucalyptus.conf
<kirkland> mathiaz: ttx: users (or more likely euca_conf) will change values in /etc/eucalyptus/eucalyptus.conf
<kirkland> mathiaz: ttx: which won't be a conf file any more
<kirkland> mathiaz: ttx: i'd like your sanity check on that pastebin, to make sure those values are at least *reasonable* defaults
<ttx> looking
<mathiaz> kirkland: hm - I don't understand the comment: If you want to change these values, you almost certainly
<mathiaz> kirkland: want to either edit /etc/eucalyptus/eucalyptus.conf
<mathiaz> kirkland: if /etc/eucalyptus/eucalyptus-defaults.conf
<mathiaz> kirkland: is supposed to be conffile, it means that this is the file end users are supposed to edit
<ttx> I'm not sure I'm getting it either...
<mathiaz> kirkland: I think it's the other way around
<ttx> doesn't eucalyptus.conf contain all values, so defaults end up being useless?
<mathiaz> kirkland: conffile are only edit by sysadmin and should be source *last*
<mathiaz> kirkland: the package provide a default configuration, which values can be overridden by a local sysadmin (via a conffile)
<mathiaz> kirkland: euca_conf and package maintainer scripts should modify the package file, not the conffile
 * kirkland steps back to think
<ttx> ideally euca_conf would modify only a state file in /var/lib, but maybe that's a lot to ask the euca folks
<ttx> and the conffile in /etc would only be modified by sysadmin
<ttx> getting late for me, especially for a friday
<kirkland> ttx: i'm afraid that "/etc/eucalyptus.conf" is pretty thoroughly hardwired into their code
<mathiaz> kirkland: ^^ right
<ttx> mathiaz knows that stuff better than I do, I leave you in good hands :)
<kirkland> ttx: so we could move that to /var/lib, and symlink it back to /etc/eucalyptus for compatibility
<kirkland> ttx: thanks
<mathiaz> kirkland: you'd probably want three files
<kirkland> mathiaz: have a moment to finish this discussion?
 * kirkland is all ears
<mathiaz> kirkland: sure
<ttx> kirkland: something like that, yes
 * kirkland has been looking at this for too long
 * ttx has a headache
<mathiaz> kirkland: 1. one file that is only modified by sysadmin (a conffile)
 * kirkland too
<mathiaz> kirkland: 2. one file that is modified by euca_conf
<mathiaz> kirkland: 3. a default file that sets all the options
<kirkland> mathiaz: and what order are they sourced in?
<mathiaz> kirkland: 3. sources 2, then 1
<kirkland> mathiaz: is (3) a conffile too?
<mathiaz> kirkland: as 1 is more important than 2, which is more important than 3
<mathiaz> kirkland: nope - only 1 is a conffile
<mathiaz> kirkland: 3. should be in /usr/
<kirkland> mathiaz: okay, let's assume that (2) *must* be /etc/eucalyptus/eucalyptus.conf (which might be a symlink to /var/lib)
<mathiaz> shipped by the package as a regular file - these are the default options from the maintainer's perspective
<kirkland> mathiaz:  3 is something in /usr/share ?
<mathiaz> kirkland: yes
<kirkland> mathiaz: and 1 is actually in /etc
<mathiaz> kirkland: as this is a default file shipped by the package maintainers
<mathiaz> kirkland: yes
<kirkland> mathiaz: (while 2 and 3 are symlinks to /usr and /var)
<kirkland> mathiaz: got it ...
<mathiaz> kirkland: and 2 is in /var/lib/ since euca_conf would modify it
<kirkland> mathiaz: now, names ....
<kirkland> mathiaz: yup, all good
<mathiaz> kirkland: all of these files are shell scripts right?
<kirkland> mathiaz: suggestions on what to call 1 and 3?
<kirkland> mathiaz: yes, shell syntax
<mathiaz> kirkland: and by default eucalyptus looks for /etc/eucalyptus/eucalyptus.conf?
<kirkland> mathiaz: right
<kirkland> mathiaz: i'd be very hesitant to change that too much
<mathiaz> kirkland: traditionally 1. is in /etc/default/
<kirkland> mathiaz: ah, right
<kirkland> mathiaz: 1 == /etc/default/eucalyptus
 * mathiaz nods
 * mathiaz checks the FHS and debian policy
<kirkland> mathiaz: is (3) necessary?
<kirkland> mathiaz: seems like /etc/eucalyptus/eucalyptus.conf -> /var/lib/eucalyptus/eucalyptus.conf would just need to source (1)
<mathiaz> kirkland: hm
<mathiaz> kirkland: the problem is on package upgrades
<kirkland> mathiaz: i'm trying to figure out what the value of (3) is for our situation specifically
<mathiaz> kirkland: how do you handle new options when euca_conf has already modified the file?
 * kirkland thinks
<mathiaz> kirkland: if new options are introduced, you'd have to edit (2) *without* losing the modification from euca_conf
<mathiaz> kirkland: from the postinstall script
<mathiaz> kirkland: if you have (3), you just ship the new options in the new configuration file
<smoser> zul, so you were leaving packaging to me? was that the plan?
<kirkland> mathiaz: and why would you do that in (3) rather than (1) ?
<mathiaz> kirkland: you don't wanna touch (1)
<mathiaz> kirkland: you start with an all commented file for (1)
<mathiaz> kirkland: or actually - you have a comment in (1) that points to (3) for the complete list of options
<mathiaz> kirkland: since (1) is a conffile, you wanna minimize changes made to this file
<kirkland> mathiaz: right, so the documentation i wanted to purge from the configuration files and put into a manpage
<mathiaz> kirkland: that's another good solution
<mathiaz> kirkland: in which case it becomes like (3)
<mathiaz> kirkland: ie the man page is always the reference
<kirkland> mathiaz: yeah, i'm going to do that for sure, a manpage for euca_conf, and perhaps another one for the configuration options
<kirkland> mathiaz: possibly one manpage, with a symlink connecting the two
<kirkland> mathiaz: as they're closely related
<mathiaz> kirkland: except that you need to have *something* similar to the man page for eucalyptus the program (that's (3))
<kirkland> mathiaz: hmm, okay
<mathiaz> kirkland: on upgrades, if there are new options, you add them to the man page (for the user) and to (3) for the program
<mathiaz> kirkland: that way you don't have to update the conffile
<mathiaz> kirkland: and thus diminish the number of prompts
<kirkland> mathiaz: okay ... what needs to be taught to "use" (3)?
<kirkland> mathiaz: b/c everything currently sources/reads/writes (2)
<mathiaz> kirkland: what used to read /etc/eucalyptus/eucalyptus.conf
<mathiaz> kirkland: (3) is the main parent configuration file, which sources (2), then (1) at the end
<mathiaz> kirkland: to have proper overriding capabilities
<mathiaz> kirkland: so what I'd do is:
<mathiaz> kirkland: (1) is /etc/default/eucalyptus
<mathiaz> kirkland: (2) is /var/lib/eucalyptus/eucalyptus.conf
<mathiaz> kirkland: (3) stays in /etc/eucalyptus/eucalyptus.conf
<mathiaz> kirkland: however (3) is no longer a conffile
<mathiaz> kirkland: http://www.debian.org/doc/debian-policy/ch-files.html#s-config-files
<kirkland> mathiaz: alrighty
<mathiaz> kirkland: hm
<mathiaz> kirkland: I think you're right in that we don't need (3)
<mathiaz> kirkland: well - (1) and (3) can be merged in /etc/eucalyptus/eucalyptus.conf
<mathiaz> kirkland: the issue with bug 487275 is that euca_conf modifies a conffile
<uvirtbot`> mathiaz: Error: Could not parse data returned by Launchpad: The read operation timed out
<mathiaz> kirkland: which leads to useless prompts on upgrades
<mathiaz> kirkland: so ship /etc/eucalyptus/eucalyptus.conf as a conffile (and let the sysadmin customize it)
<mathiaz> kirkland: and have another file (2) in /etc/eucalyptus/eucalyptus-euca_conf.conf modified by euca_conf
<mathiaz> kirkland: which is then sourced by /etc/eucalyptus/eucalyptus.conf
<mathiaz> kirkland: and then /etc/eucalyptus/eucalyptus.conf would source at the *beginning* /etc/eucalyptus/eucalyptus-euca_conf.conf
<mathiaz> kirkland: so that the sysadmin could still modify the result of euca_conf
<mathiaz> kirkland: well - it actually depends on what euca_conf modifies
<mathiaz> kirkland: or rather sets as options
<kirkland> mathiaz: okay, that was pretty much what i was going for in my original approach
<kirkland> mathiaz: although my comments in the top of that file were inaccurate
<mathiaz> kirkland: well - not really actually :)
<MagicFab> Hi all
<mathiaz> kirkland: it's just that you wouldn't need such a file anymore
<MagicFab> can sshd be configured to have nic-specific authentication methods ? ie. "PKA-only" for Internet-facing, "PKA/Password" for internal network ?
<mathiaz> MagicFab: well - you could run two instances of sshd
<mathiaz> MagicFab: each binding to an interface with their own configuration
<MagicFab> mathiaz, neat. Apparently LTSP requires password auth in its LAN.
<mathiaz> MagicFab: I don't think you can achieve the same result with only *one* sshd daemon running
<MagicFab> I thought it generated keypairs etc. automatically
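The two-daemon setup mathiaz suggests could look something like this (addresses and file names are purely illustrative; a second instance also needs its own PidFile to coexist with the first):

```
# /etc/ssh/sshd_config.external -- Internet-facing NIC, keys only:
#   ListenAddress 203.0.113.10
#   PasswordAuthentication no
#   PubkeyAuthentication yes
#
# /etc/ssh/sshd_config.internal -- LAN NIC, keys or passwords:
#   ListenAddress 192.168.1.10
#   PasswordAuthentication yes
#
# Then start one instance per config:
#   /usr/sbin/sshd -f /etc/ssh/sshd_config.external
#   /usr/sbin/sshd -f /etc/ssh/sshd_config.internal
```

Each daemon binds only to its own address, so each interface effectively gets its own authentication policy.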
<mathiaz> kirkland: now the downside of having only 2 eucalyptus.conf, is that you need to set *all* options possible in eucalyptus.conf
<mathiaz> kirkland: which makes sysadmin editing a bit difficult
<mathiaz> kirkland: having 3 configuration files would solve the end user issue
<kirkland> mathiaz: that's pretty much where we're at now, no?
<mathiaz> kirkland: right
<mathiaz> kirkland: especially if you move the documentation to a man page
<kirkland> mathiaz: i'm definitely moving to a manpage
<mathiaz> kirkland: then you'd end up with a long list of options in eucalyptus.conf
<mathiaz> kirkland: if the sysadmin opens the file, it can be a little bit difficult to edit
<MagicFab> mathiaz, tx
<mathiaz> kirkland: having /etc/default/eucalyptus, you'd just provide the most used options for editing
<mathiaz> kirkland: or the sysadmin can just add the options he wants to tweak
<mathiaz> kirkland: for example, the samba configuration file has hundreds of options, but you only see a dozen of them in the default configuration file
<kirkland> mathiaz: okay, stepping back from this a bit ...
<kirkland> mathiaz: looking at screen ...
<kirkland> mathiaz: there's an /etc/screenrc
<kirkland> mathiaz: which is the global screen configuration, a conffile, managed by ubuntu, admins can change it, but let's say that most don't
<ivoks> evening
<kirkland> mathiaz: and each user can have a ~/.screenrc
<mathiaz> ivoks: o/
<kirkland> mathiaz: screen knows to source /etc/screenrc first, then source ~/.screenrc for user custom overrides
<mathiaz> kirkland: right
<kirkland> mathiaz: ~/.screenrc is not package managed, and hence users (or programs) can read and write that all they want
<mathiaz> kirkland: there is also a list of all possible options with their default values somewhere in screen (probably compiled into the binary)
<kirkland> mathiaz: and the screen(1) manpage
<uvirtbot`> New bug: #504960 in samba (main) "package samba-common 2:3.4.0-3ubuntu5.3 failed to install/upgrade: Unterprozess installiertes post-installation-Skript gab den Fehlerwert 128 zur?ck" [Undecided,New] https://launchpad.net/bugs/504960
<kirkland> mathiaz: so i was thinking /etc/default/eucalyptus would be like our /etc/screenrc
<kirkland> mathiaz: conffile, pkg managed, admin can change there, but it's recommended that they use euca_conf
<kirkland> mathiaz: euca_conf reads/writes /etc/eucalyptus/eucalyptus.conf -> /var/lib/eucalyptus/eucalyptus.conf
<kirkland> mathiaz: which first sources /etc/default/eucalyptus, and then sets its overrides from the global package defaults
 * mathiaz ponders
<kirkland> mathiaz: basically, we'll ship a set of "ubuntu" defaults
<kirkland> mathiaz: which are "distro" specific
<kirkland> mathiaz: like "use KVM"
<kirkland> mathiaz: then the user has more stuff that they have to configure for UEC to work
<kirkland> mathiaz: "site" specific stuff
<kirkland> mathiaz: like PUBLIC_IPS
<mathiaz> kirkland: hm
<kirkland> mathiaz: so programs will keep reading/writing /etc/eucalyptus/eucalyptus.conf as they always have, for backwards compat
<kirkland> mathiaz: to make more FHS friendly, we move it to /var/lib, and symlink it back
<kirkland> mathiaz: and make it not a conffile
<mathiaz> kirkland: well - it's a configuration file, it should stay in /etc/
<mathiaz> kirkland: this is what http://www.debian.org/doc/debian-policy/ch-files.html#s-config-files says
<kirkland> mathiaz: processed by a program
<kirkland> variable state, i would call it
<mathiaz> kirkland: PUBLIC_IPS is configuration data
<kirkland> mathiaz: okay
<mathiaz> kirkland: I don't think it's an internal state
<mathiaz> kirkland: anyway I don't think it's important right now
<kirkland> mathiaz: alright, i'm not insistent on this
<kirkland> mathiaz: okay, good
<kirkland> mathiaz: so we, the distro, set some defaults in /etc/default
<mathiaz> kirkland: IMO we need to figure out how many configuration files we're going to ship
<dasunsru1e32> as a root user, how do I watch what another user is doing from a terminal?
<uvirtbot`> New bug: #504963 in bind9 (main) "[Karmic] host -4 does IPv6 lookup -- times out" [Undecided,New] https://launchpad.net/bugs/504963
<dasunsru1e32> preferably without their knowledge
<mathiaz> kirkland: to go back to bug 487275
<uvirtbot`> Launchpad bug 487275 in eucalyptus "eucalyptus.conf should not be a conffile" [High,Triaged] https://launchpad.net/bugs/487275
<mathiaz> kirkland: the problem is that euca_conf modifies eucalyptus.conf, and when there is a *new* version of eucalyptus.conf shipped by the package, you're prompted by dpkg
<kirkland> mathiaz: righto
<mathiaz> kirkland: how often does the shipped version of eucalyptus.conf change?
<kirkland> mathiaz: hrm, probably once per release for most users
<kirkland> mathiaz: within a development cycle, perhaps more
<mathiaz> kirkland: and we want to avoid the prompt because most of the users won't have changed eucalyptus.conf
<kirkland> mathiaz: actually ....
<kirkland> mathiaz: let's say i move all the documentation out
<kirkland> mathiaz: then that file is just variables/values read/written by euca_conf
<mathiaz> kirkland: it's just that because euca_conf changed it (without the user's knowledge) the user suddenly gets a prompt for nothing
<kirkland> mathiaz: and i make it not a conf file, but just a template created by the postinst (if not yet existing)
<mathiaz> kirkland: or rather the user gets a prompt for a package he has never configured directly (he doesn't know about eucalyptus.conf)
<kirkland> mathiaz: then we can change the manpage all we want
<kirkland> mathiaz: and the user can change (directly, or via euca_conf) all they want
<kirkland> mathiaz: done?
<mathiaz> kirkland: hm - what happens if new options are introduced?
<kirkland> mathiaz: they're put into the manpage
<mathiaz> kirkland: would you have to modify the file to stick the new options in there?
<kirkland> mathiaz: you mean where default values *must* be defined?
<mathiaz> kirkland: yes - exactly
<mathiaz> kirkland: is eucalyptus.conf the file where default values are defined?
<kirkland> mathiaz: currently, user-chosen, and almost-universally-default values are intermixed
<kirkland> mathiaz: i tried to pull the latter out to that file i pastebined at the top of this conversation
<mathiaz> kirkland: right - and that's the problem with upgrades
<mathiaz> kirkland: you need to be able to set the default values for new options without bothering the user about it
<kirkland> mathiaz: see my "don't touch this file" comment :-)
<kirkland> mathiaz: so let's move that to /usr/share
<mathiaz> kirkland: right - that's why you'd put such a file under /usr/share/package-name/
<mathiaz> kirkland: well - forget my last comment
<mathiaz> kirkland: it's irrelevant for the moment
<kirkland> mathiaz: okay, so we'll ship a sane set of static defaults to a file in /usr/share
<kirkland> mathiaz: source that at the top of /etc/eucalyptus/eucalyptus.conf
<mathiaz> kirkland: yes
<mathiaz> kirkland: have eucalyptus.conf be a conffile
<kirkland> mathiaz: okay
<mathiaz> kirkland: and have eucalyptus-local.conf be modified by euca_conf
<mathiaz> kirkland: and sourced by eucalyptus.conf as well - *after* /usr/share/, but at the beginning of the file
<mathiaz> kirkland: so that the sysadmin has ultimate control over the configuration of eucalyptus via eucalyptus.conf
<kirkland> mathiaz: okay, so a) static defaults in /usr/share
<mathiaz> kirkland: if new options are shipped during an upgrade, they are made available in /usr/share
<kirkland> mathiaz: b) user modified values in eucalyptus.conf (conffile)
<mathiaz> kirkland: and if a new eucalyptus.conf is shipped, the end user is not prompted for a change if he hasn't modified the file
<kirkland> mathiaz: c) machine read/written values in eucalyptus-local.conf
<mathiaz> kirkland: yes - seems like a good option to me
<mathiaz> kirkland: a) /usr/share/eucalyptus/eucalyptus.conf
<mathiaz> kirkland: b) /etc/eucalyptus/eucalyptus-local.conf (modified by euca_conf)
<kirkland> mathiaz: getting euca_conf to read/write eucalyptus-local.conf instead is going to be difficult, and potentially contentious
<mathiaz> c) /etc/eucalyptus/eucalyptus.conf - sources a), then b)
<mathiaz> kirkland: well - if euca_conf writes directly to eucalyptus.conf then users would get a prompt on a package upgrade *only* if the package ships a new eucalyptus.conf file
<mathiaz> kirkland: which would happen less frequently than shipping a new /usr/share/eucalyptus/eucalyptus.conf file (where default values for new options are added)?
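The a/b/c layout settled on above can be sketched in a sandbox (real paths would be /usr/share/eucalyptus/ and /etc/eucalyptus/; VNET_MODE and its values are used purely for illustration):

```shell
#!/bin/sh
# Sandboxed sketch of the three-file scheme, using a temp dir instead of /.
t=$(mktemp -d)
mkdir -p "$t/usr/share/eucalyptus" "$t/etc/eucalyptus"
# a) shipped defaults (not a conffile, replaced freely on upgrade)
echo 'VNET_MODE="SYSTEM"'  > "$t/usr/share/eucalyptus/eucalyptus.conf"
# b) the file euca_conf reads/writes (not a conffile)
echo 'VNET_MODE="MANAGED"' > "$t/etc/eucalyptus/eucalyptus-local.conf"
# c) the conffile: sources a), then b), then the sysadmin's own settings
cat > "$t/etc/eucalyptus/eucalyptus.conf" <<EOF
. $t/usr/share/eucalyptus/eucalyptus.conf
. $t/etc/eucalyptus/eucalyptus-local.conf
VNET_MODE="MANAGED-NOVLAN"   # sysadmin override has the last word
EOF
. "$t/etc/eucalyptus/eucalyptus.conf"
echo "effective VNET_MODE=$VNET_MODE"
rm -rf "$t"
```

The sourcing order gives the precedence mathiaz describes: sysadmin edits in c) beat euca_conf's b), which beats the shipped defaults in a).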
<ruben23> hi guys..are there ways i can make image of my windows xp client pc and restore it anytime form image using my linux server, like multiple client pc deployment...any suggestion..?
<ivoks> that isn't allowed by windows license
<ivoks> one license per machine
<ivoks> :)
<Jamash> if I understand ruben23 correctly, he only wants to backup and restore one machine
<ruben23> Jamash: yes backup and restore per machine only
<ivoks> urgh... drbd 8.3.7rc2 build process is... urgh...
<kirkland> mathiaz: still around?
<mathiaz> kirkland: sure! and you?
<mathiaz> kirkland: not shoveling snow down there?
<kirkland> mathiaz: no, but it's wicked cold here too :-)
<mathiaz> kirkland: I'm not sure we'd agree on what *wicked* cold means... :D
<kirkland> mathiaz: okay ...
<kirkland> mathiaz: http://pastebin.ubuntu.com/353685/
<kirkland> mathiaz: heh :-)
<kirkland> mathiaz: well, it's going to be -10C here tonight
<kirkland> mathiaz: which is a record
<kirkland> mathiaz: and i'm running 20 miles (32km) tomorrow morning in that weather
<mathiaz> kirkland: ohhh - no more coyote then?
<kirkland> mathiaz: heh :-)  i'm sure they're staying warm
<kirkland> mathiaz: okay, so see that pastebin
<mathiaz> kirkland: oh don't worry about that - there are cross country competitions run at lower temperatures than that
<mathiaz> kirkland: you'll survive :)
<mathiaz> kirkland: looks good to me
<kirkland> mathiaz: i think on new installs, /etc/eucalyptus/eucalyptus.conf should look like that
<mathiaz> kirkland: line 6 - maybe expand the command
<kirkland> mathiaz: i need to think about maintainer scripts to handle upgrades correctly
<mathiaz> kirkland: right - on new install it should look like that
<mathiaz> kirkland: how easy will it be to teach euca_conf to modify /etc/eucalyptus/eucalyptus-local.conf ?
<kirkland> mathiaz: http://pastebin.ubuntu.com/353688/
<kirkland> mathiaz: not too hard ...
<mathiaz> kirkland: cool
<kirkland> mathiaz: it has:
<kirkland> FILE="@prefix@/etc/eucalyptus/eucalyptus-local.conf"
<kirkland> mathiaz: however, it also has "/etc/eucalyptus/eucalyptus.conf" hardcoded elsewhere
<mathiaz> kirkland: and add a statement in /etc/eucalyptus/eucalyptus-local.conf pointing to the eucalyptus.conf
<kirkland> mathiaz: i'm going to create one patch that fixes the latter using $FILE appropriately
<mathiaz> kirkland: right - seems like the best option to me
<mathiaz> kirkland: you also need to handle the case where eucalyptus-local.conf doesn't exist
<mathiaz> kirkland: like on first install
<mathiaz> kirkland: would euca_conf support that?
<mathiaz> kirkland: if not you could install a default eucalyptus-local.conf on first install
<mathiaz> kirkland: it would *not* be a conffile
<kirkland> mathiaz: in the postinst, you mean
<kirkland> mathiaz: such that it's not a conffile?
<mathiaz> kirkland: yes
<kirkland> mathiaz: ack, got it
<mathiaz> kirkland: in the postinst
<kirkland> mathiaz: okay, next one:
<kirkland> mathiaz: http://pastebin.ubuntu.com/353689/
<mathiaz> kirkland: and does euca_conf support an empty/reduced eucalyptus-local.conf?
<kirkland> mathiaz: i'm not sure about that
<kirkland> mathiaz: will need to do some testing
<mathiaz> kirkland: well - if it sources eucalyptus.conf, we're good to go
<kirkland> mathiaz: i think i might need to teach euca_conf about eucalyptus.conf AND eucalyptus-local.conf
<kirkland> mathiaz: on read, it reads eucalyptus.conf
<kirkland> mathiaz: but writes to eucalyptus-local.conf
<mathiaz> kirkland: ha - true
<pting> is it just my server... but i can't seem to get 100% utilization on an amazon ec2 small instance... i get at most 40%
<mathiaz> kirkland: line 11: want to either *directly* edit /etc/eucalyptus/eucalyptus.conf
<quietone> what do I do so my machine can be a server for the home calendar when it is booted and I am not logged in? I think I just need to make the wireless connection but I have not found how to do this. And the router will be the firewall.
<kirkland> mathiaz: ack
<mathiaz> kirkland: line 12: or use euca_conf to add your customizations.
<mathiaz> kirkland: don't mention /etc/eucalyptus/eucalyptus-local.conf.
<kirkland> mathiaz: got it.
<mathiaz> kirkland: actually mention it on line 8
<mathiaz> kirkland: /etc/eucalyptus/eucalyptus.conf.
<mathiaz> kirkland: line8: /etc/eucalyptus/eucalyptus.conf and /etc/eucalyptus/eucalyptus-local.conf.
<kirkland> mathiaz: k
<mathiaz> kirkland: http://pastebin.ubuntu.com/353688/ - why not mention euca_conf?
<mathiaz> kirkland: something like: you can also use euca_conf to add your customizations - see the euca_conf man page
<kirkland> mathiaz: it's mentioned on line 6
<mathiaz> kirkland: well - I'd add to the end as well
<mathiaz> kirkland: line 10 outlines one source of documentation to customize UEC
<mathiaz> kirkland: line 11 could mention *another* way (euca_conf) to customize UEC
<erichammond> zul: Where do I find ec2-runurl referenced in http://bazaar.launchpad.net/~zulcss/ec2-init/ec2-init-config/annotate/head%3A/upstart/ec2-runurl.conf
<erichammond> exec /usr/sbin/ec2-runurl
<kirkland> mathiaz: http://pastebin.ubuntu.com/353694/
<kirkland> mathiaz: okay
<kirkland> mathiaz: this is what creates the non-conffile /etc/eucalyptus/eucalyptus-local.conf
<kirkland> mathiaz: this can be pretty bare, but it needs to start out with EUCALYPTUS="not_configured"
<mathiaz> kirkland: you should probably make sure it's only run on package *installation*
<mathiaz> kirkland: if the sysadmin decides to delete /etc/eucalyptus/eucalyptus-local.conf, it would be recreated on next package upgrade
<mathiaz> kirkland: which may break things since the default is EUCALYPTUS="not_configured"
<kirkland> mathiaz: hmm, that seems highly inadvisable, removing eucalyptus-local.conf
<mathiaz> kirkland: agreed. but the sysadmin may wanna do it
<kirkland> mathiaz: besides, i think we need to create that file on upgrades from 9.10
<mathiaz> kirkland: sure - we can add code to do that as well
<kirkland> mathiaz: actually, on upgrades, i think we need to mv eucalyptus.conf to eucalyptus-local.conf
<mathiaz> kirkland: if dpkg --compare-versions ...; then ... create file... ; fi
<kirkland> mathiaz: right, we need to compare versions and mv the current, euca_conf-written eucalyptus.conf to eucalyptus-local
<mathiaz> kirkland: right - so you'd first install the default file
<kirkland> mathiaz: and seed eucalyptus.conf with the 2-source version
<mathiaz> kirkland: and deal with the package upgrade afterwards
<mathiaz> kirkland: note that upgrades will have prompt anyway
<kirkland> oh, right
<mathiaz> kirkland: since the conffile eucalyptus.conf will be modified
<kirkland> mathiaz: okay, so really i need logic to only create -local on install?
<mathiaz> kirkland: I think so.
<mathiaz> kirkland: the logic is actually already in the postinst script
<mathiaz> kirkland: where it deals with installing euca_root-wrap
<mathiaz> kirkland: and does euca_conf -d / /etc/eucalyptus/eucalyptus.conf
<kirkland> mathiaz: oh, crap
<kirkland> yeah
<mathiaz> kirkland: you're probably gonna have to look at that code as well
<kirkland> mathiaz: hrm, should this go in preinst, then?
<mathiaz> kirkland: this =?
<kirkland> mathiaz: the -local creating code
<mathiaz> kirkland: I don't think so
<mathiaz> kirkland: in preinst, /etc/eucalyptus/ won't exist
#ubuntu-server 2010-01-09
<mathiaz> kirkland: as mentioned in http://www.debian.org/doc/debian-policy/ch-files.html#s-config-files
<kirkland> k
<kirkland> mathiaz: what's the logic to tell if this is on install, and not upgrade?  dpkg --compare-versions .... something?
<mathiaz> kirkland: [ -z "$2" ]
<kirkland> mathiaz: or test for $2
<kirkland> ah
<kirkland> okay
<mathiaz> kirkland: on install $2 is empty
<kirkland> mathiaz: cool
<mathiaz> kirkland: $2 holds the latest *installed* version of a package
<mathiaz> kirkland: http://women.debian.org/wiki/English/MaintainerScripts
<mathiaz> kirkland: ^^ has a good explanation on how scripts are called with which argument
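The install-vs-upgrade check mathiaz describes can be sketched like this: maintainer scripts receive the previously configured version as `$2`, and `$2` is empty on first install. (The version string below is a made-up placeholder; a real postinst would pair this with `dpkg --compare-versions` to also catch upgrades from versions that predate eucalyptus-local.conf, as in kirkland's paste.)

```shell
#!/bin/sh
# Sketch of the [ -z "$2" ] test used in postinst scripts.
is_first_install() {
    [ -z "$1" ]    # $1 stands in for the maintainer script's "$2"
}

if is_first_install ""; then
    echo "fresh install: seed eucalyptus-local.conf"
fi
if ! is_first_install "1.6.2-0ubuntu1"; then
    echo "upgrade: leave existing configuration alone"
fi
```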
<kirkland> mathiaz: http://paste.ubuntu.com/353701/
<kirkland> mathiaz: something like that
<kirkland> mathiaz: should create eucalyptus-local.conf if this is either an install, or an upgrade from a version before we started using that file
<mathiaz> kirkland: hm right.
<mathiaz> kirkland: I would put the seed_local_conf call to the pkg install section of the maintainer script (below - with euca_conf -d)
<mathiaz> kirkland: and add some more specific code to handle upgrades
<mathiaz> kirkland: what should be done on upgrades?
<mathiaz> kirkland: because after the upgrade, euca_conf is going to write to eucalyptus-local.conf
<kirkland> mathiaz: by "install section", you mean $1 = "install" ?
<mathiaz> kirkland: shouldn't eucalyptus.conf be copied to eucalyptus-local.conf?
<kirkland> mathiaz: OOOOHHHH
<kirkland> mathiaz: there's already a -z "$2"
<kirkland> mathiaz: sorry
<kirkland> mathiaz: yeah, let me rework this
<mathiaz> kirkland: yes - that's what I was referring to
<kirkland> mathiaz: sorry, sorry, sorry
<kirkland> mathiaz: completely missed that
<kirkland> mathiaz: okay, i'm rolling now :-)
<Bullterd> Hi All.
<Bullterd> This is royally doing my head in.
<Bullterd> My switch 100% supports ad network bonding
<Bullterd> I followed this tut
<Bullterd> http://blog.brightbox.co.uk/posts/howto-do-ethernet-bonding-on-ubuntu-properly
<Italian_Plumber5> how can I turn this off:  "You have new mail in /var/mail/david"  I really don't need to be notified.
<Bullterd> My Router & all linux boxes can ping the box that has the bond setup
<Bullterd> but my workstation and iphone cannot connect nor ping it :(
<Bullterd> I know that the box is communicating with the router because its picking up DHCP leases, and if I change the IP address based on the MAC and then restart the bond, it picks the new IP up fine
<kirkland> mathiaz: okay ... http://paste.ubuntu.com/353712/
<kirkland> mathiaz: but i think i still have one problem
<kirkland> mathiaz: i'm looking at the debian women's chart now
<kirkland> mathiaz: so the good news is that we can get rid of all of those euca_conf calls
<mathiaz> kirkland: I don't think you want to seed_eucalyptus_conf
<mathiaz> kirkland: that should be conffile
<kirkland> mathiaz: right, i wanted to ask you about that ....
<kirkland> mathiaz: so it is a conf file
<kirkland> mathiaz: but it's changing drastically on upgrades now
<kirkland> mathiaz: we need to copy that data over to -local
<kirkland> mathiaz: and *then* replace it with our new conffile
<kirkland> mathiaz: it *is* a conffile
<kirkland> mathiaz: i have that code elsewhere in my diff
<kirkland> mathiaz: which we'll handle for new installs correctly
<kirkland> mathiaz: but for upgrades, we need to gently move that data over
<mathiaz> kirkland: hm... well - dpkg will automatically prompt the user
<mathiaz> kirkland: because /etc/eucalyptus/eucalyptus.conf has changed both *in* the package and on the local system
<kirkland> mathiaz: right, this is messy
<mathiaz> kirkland: right - that will only happen once on upgrade
<mathiaz> kirkland: there are ways to workaround that though
<kirkland> mathiaz: i think this is why i keep going back to making eucalyptus.conf *not* a conffile
<kirkland> mathiaz: let's imagine eucalyptus.conf is more like eucalyptus-local.conf in this new model
<kirkland> mathiaz: it is the thing read/written by euca_conf
<kirkland> mathiaz: we can easily take something that was a conffile, and drop it from dpkg's installed files list
<kirkland> mathiaz: if it's in /etc, it should just remain
<mathiaz> kirkland: yes - but then you have to take care of it in the maintainer scripts
<kirkland> "<mathiaz> kirkland: there are ways to workaround that though"
<kirkland> mathiaz: ^ ?
<mathiaz> kirkland: we could handle the upgrade by moving the old eucalyptus.conf file in the preinst on upgrades
<kirkland> mathiaz: oh, okay
<kirkland> mathiaz: that's legit?
<mathiaz> kirkland: it's *possible* :D
<mathiaz> kirkland: that being said we should make sure it's done after eucalyptus is stopped
<mathiaz> kirkland: on upgrade eucalyptus is stopped, then move eucalyptus.conf to eucalyptus-local.conf
<kirkland> mathiaz: okay; how to verify eucalyptus is stopped?
<kirkland> mathiaz: "stop eucalyptus" in the preinst?
<mathiaz> kirkland: it's done by the upstart debhelper token
<erichammond> pting: the m1.small instance type on ec2 gets 1 "EC2 compute unit" which is generally less than half of the real CPU on which the VM is running.  What you are seeing is normal and you are getting all that you're paying for.
<mathiaz> kirkland: I don't know if the new conffile eucalyptus.conf would actually be installed though
<mathiaz> kirkland: since dpkg would notice that the old eucalyptus.conf conffile is no longer there and it may not reinstall the new one
<mathiaz> kirkland: I think you should ask in #ubuntu-devel - slangasek may be able to help in that
<erichammond> smoser: "ping external to ec2 never works." - This depends on your security group configuration.  If you want to allow ping then allow icmp in the security group.
<mathiaz> kirkland: moving conffile around is tricky - the samba pkg did that at some point IIRC
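The preinst idea above (preserve the old euca_conf-written eucalyptus.conf as eucalyptus-local.conf before dpkg unpacks the new conffile) can be sketched in a sandbox. ROOT stands in for /, the file contents are illustrative, and a real preinst would also gate the move with `dpkg --compare-versions` on `$2` and run only after eucalyptus is stopped:

```shell
#!/bin/sh
# Sandboxed sketch of the preinst upgrade path discussed above.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/eucalyptus"
echo 'EUCALYPTUS="/"' > "$ROOT/etc/eucalyptus/eucalyptus.conf"  # pre-upgrade file

# upgrade path: preserve the old file under its new name
if [ -f "$ROOT/etc/eucalyptus/eucalyptus.conf" ]; then
    mv "$ROOT/etc/eucalyptus/eucalyptus.conf" \
       "$ROOT/etc/eucalyptus/eucalyptus-local.conf"
fi
RESULT=$(ls "$ROOT/etc/eucalyptus")
echo "$RESULT"
rm -rf "$ROOT"
```

Whether dpkg then installs the new eucalyptus.conf conffile cleanly (since the old one has vanished) is exactly the open question mathiaz raises above.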
<kirkland> mathiaz: okay, pinging slangasek
<uvirtbot`> New bug: #460316 in clamav (main) "clamav-base package contains very big clamav data files (main.cvd and daily.cvd) from clamav-data package" [Wishlist,Confirmed] https://launchpad.net/bugs/460316
<uvirtbot`> New bug: #428179 in clamav (main) "package clamav-base 0.94.dfsg.2-1ubuntu0.5 failed to install/upgrade: el subproceso post-installation script devolvi? el c?digo de salida de error 3" [Medium,Confirmed] https://launchpad.net/bugs/428179
<uvirtbot`> New bug: #430418 in clamav "clamav-milter socket permissions wrong" [Medium,Triaged] https://launchpad.net/bugs/430418
<kirkland> mathiaz: cool!
<kirkland> mathiaz: euca2ools/boto happier?
<kirkland> mathiaz: please upload to lucid as soon as you're satisfied
<mathiaz> kirkland: well - I'd like some feedback on the package first
<mathiaz> kirkland: some of my testing is working
<mathiaz> kirkland: but my main stress test script is still failing
<mathiaz> kirkland: and I haven't figured out why yet
<kirkland> mathiaz: hmm, okay
<kirkland> mathiaz: http://paste.ubuntu.com/353732/
<mathiaz> kirkland: and I need to grab something to eat now
<kirkland> mathiaz: i'm testing the euca_conf/eucalyptus.conf changes now
<mathiaz> kirkland: I haven't had lunch yet
<kirkland> mathiaz: heh
<mathiaz> kirkland: line 98: i think you want install
<mathiaz> kirkland: hm well - never mind
<kirkland> mathiaz: hmm, i'm testing for upgrade
<mathiaz> kirkland: oh yes - you actually want install
<mathiaz> kirkland: postinst doesn't receive upgrade as $1
<mathiaz> kirkland: it receives install as $1
<kirkland> mathiaz: that's preinst
<mathiaz> kirkland: ahhhh - you're right
<mathiaz> kirkland: I need to grab something to eat
<kirkland> mathiaz: kirkland - 1 ... mathiaz - 10000000000
<kirkland> :-)
<uvirtbot`> New bug: #502878 in samba (main) "Samba 3.4.0 won't let win98 clients to connect" [Medium,Triaged] https://launchpad.net/bugs/502878
<uvirtbot`> New bug: #487275 in eucalyptus (main) "eucalyptus.conf should not be a conffile" [High,In progress] https://launchpad.net/bugs/487275
<HFSPLUS> !ops
<ubottu> Help! Channel emergency! soren, lamont, mathiaz or tom
<NotTooSmart> copied my smb.conf from a different machine, same username same pass same dirs, can't get it to open?
<Pickley> Hi?
<Pickley> guess not
<FireCrotch> Pickley: Hi!
<FireCrotch> Do you have a problem that we can help you with?
<Pickley> FireCrotch: Hey
<Pickley> Server used to boot straight through Grub
<Pickley> now it just waits for me to have to hit enter
<Pickley> and also it hangs after it fsck's the partitions
<NotTooSmart> cant get sound on ubuntu-server, i have pulse audio installed, vlc shows it playing the file, alsamixer works. but no sound. any ideas
<FireCrotch> Pickley: well lets solve the grub problem first - what does it display when you have to press enter?
<Pickley> FireCrotch: It starts booting, but it used to automatically boot through into the server
<Pickley> FireCrotch: Do I just need to change it back to auto select?
<FireCrotch> Pickley: Yes, if you've changed it from the default
<Pickley> FireCrotch: I haven't - it just happened today, might be because it says there are two kernels now
<FireCrotch> Pickley: Yeah, the grub configuration probably just didnt update properly
<Pickley> FireCrotch: Ok, I can sort that out once I get it booting properly :P
<FireCrotch> Pickley: Are you able to boot into single user mode?
<Pickley> FireCrotch: How do you do that? Pass something in grub?
<FireCrotch> Pickley: Yes. the word "single" as a parameter
<Pickley> Ok, I'll try that now
<NotTooSmart> can anyone help me ;\
<FireCrotch> Pickley: fyi it goes at the end of the kernel line
<Pickley> FireCrotch: Doesn't work, also get a i915_handle_error, but it worked before with that.
<Pickley> Can only hit Ctrl+Alt+Del to reboot really
<FireCrotch> Pickley: so it just completely freezes after it does its fsck during boot? any kind of error output from that?
<Pickley> FireCrotch: It just shows the i915 thing and then nothing else loads
<Pickley> FireCrotch: except I had that error before
<Pickley> Thinking it might just be my hard drive failing lol
<FireCrotch> the i915 error is referring to the intel 915 chipset
<Pickley> Yep
<Pickley> It normally worked with that anyway
<FireCrotch> Hmm
<Pickley> Not sure
<Pickley> and really don't want to lose my data
<Pickley> lol
<Pickley> although I do have the most important stuff backed up luckily
<FireCrotch> probably is a bad hard drive. how old is the drive?
<Pickley> Probably 3-4 years old
<Pickley> probably older
<FireCrotch> wouldnt surprise me then
<Pickley> Got another 8gb drive coming so will swap it out
<Pickley> *80
<Pickley> Thanks anyway :D
<Pickley> FireCrotch: think I should just get rid of it when I get the new one?
<FireCrotch> probably
<Pickley> :D
<Pickley> Not like I'll use more than 80gb
<Pickley> The other issue I was having was the server kept dropping out
<Pickley> But that might be due to the drive as well
<FireCrotch> yeah, hdd problems can show up in various ways
<Pickley> :D
<Pickley> Guess I get to reinstall everything
<FireCrotch> Pickley: well you could always try to get the data off the hard drive onto the new one
<Pickley> Yeah
<Pickley> I don't think it's worth it
<FireCrotch> At least /etc/ :)
<Pickley> lol maybe
<Pickley> Server wasn't set up long ago so doesn't matter too much
<Pickley> Had only really set up SVN, Rails, Ruby, LAMP and Samba
<RoyK> hm. seems tzdata is updated all the time. it's not like timezones are changing, is it?
<cemc> RoyK: check the changelog for what changed I guess ;)
<cemc> anybody using a SheevaPlug computer? http://en.wikipedia.org/wiki/SheevaPlug
 * RoyK wants
<twb> cemc: I am.
<cemc> twb: is it worth buying one?
<cemc> vlan trunking works? openvpn? :-)
<twb> I don't know.  Mine is just a build slave.
<cemc> stock 2.6 kernel ?
<cemc> what distro on it ?
<twb> It's running whatever is in Squeeze at the moment
<cemc> aha
<twb> The onboard mtd ships with Ubuntu 9.10
<cemc> twb: downsides?
<cemc> as far as you can tell :)
<twb> It's bulky, and it's a wallwart if you're using the US connector
<cemc> I have no idea how to compare that 1.2ghz ARM processor to a normal one, you're building stuff on it ?
<twb> d-i doesn't support mtd yet, so you can't actually install Debian (or Ubuntu) onto the onboard storage, and using USB for the root filesystem means that it takes a few reboots before it can actually bootstrap the kernel
<cemc> d-i ?
<twb> debian-installer
<cemc> so how is it ship with ubuntu 9.10 then?
<twb> With dd or similar.
<cemc> err, s/is/does/
<cemc> so you can 'install' ubuntu on it, just not the usual way?
<cemc> I mean you have to hack it a bit :)
<twb> FSVO a bit
<cemc> hm
<twb> You should also see #openplug (iirc)
<cemc> twb: do you know about openwrt and routers that support it? how does the installation part compare to that? as in easier/harder?
<cemc> I'll take a look there too, thx
<twb> cemc: I never installed onto the mtd, I left it alone
<cemc> ah, ok
<twb> I did a normal d-i netboot onto a USB key.
<twb> I was also extremely annoyed that the bootloader's PXE support needed next-server hard-coded, instead of using DHCP.
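The "dd or similar" route can be sketched like this; flash_image is an illustrative wrapper, and the demo writes between throwaway files because on real hardware the target would be the plug's USB key block device (e.g. /dev/sdX), which you should triple-check before touching:

```shell
# Illustrative wrapper around dd for writing a prebuilt image to a device:
flash_image() {
    dd if="$1" of="$2" bs=4M conv=fsync 2>/dev/null
}

# Demo with throwaway files instead of a real block device:
printf 'fake-rootfs' > /tmp/sheeva.img
flash_image /tmp/sheeva.img /tmp/fake-usb-key
cat /tmp/fake-usb-key    # prints: fake-rootfs
```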
<cemc> twb: can you show me a cat /proc/cpuinfo on it phlease?
<twb> cemc: it's offline at the moment, sorry.
<cemc> np
<twb> cemc: http://pastebin.com/f7e6c1ef8
<twb> Curious lack of jazelle for an OMAP3
<orogor> anyone knows how to get a list or remove packages which have an apt source which was disabled on upgrade?
<bogeyd6> orogor, apt-get autoclean
<bogeyd6> orogor, then apt-get autoremove
<twb> orogor: do you mean packages were installed from (say) universe, and now that you've disabled universe, you want to remove those packages?
<orogor> well more stuff like medibuntu which is automatically disabled on upgrade
<orogor> then after that, unless medibuntu is re-enabled, there's no update for these packages
<twb> orogor: in aptitude's GUI, these packages will appear in the "Obsolete" drop-down
<twb> Er, s/drop-down/collapsible list/
<twb> You should be able to just hit purge (_) on that list to purge them all
<orogor> can i get that in cli ?
<twb> From the line interface, it's probably something like ~O or ?obsolete()
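In aptitude's pattern language the short form is actually ~o (lowercase), long form ?obsolete: installed packages with no candidate version in any enabled repository. On a real system that's `aptitude search '?obsolete'` and, after reviewing the list, `sudo aptitude purge '?obsolete'`. A toy illustration of the set that pattern computes, with made-up package names:

```shell
# "Obsolete" = installed, but available from no enabled repository.
# Simulated here with comm(1) on two sorted lists:
printf '%s\n' libfoo1 medibuntu-codecs myapp | sort > /tmp/installed
printf '%s\n' libfoo1 myapp | sort > /tmp/available
comm -23 /tmp/installed /tmp/available    # prints: medibuntu-codecs
```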
<orogor> i try cleaning up old stuff, that computer has been throught a few major upgrades already
<twb> orogor: you may want to investigate deborphan and friends
<twb> debfoster?  It's been a while
<bogeyd6> why spend all the time going through the list when apt will remove them
<orogor> because the list will also include packages downloaded and installed manually and for which no repository was provided
<orogor> and maybe i don't want to remove these
<twb> bogeyd6: because d-i, at the very least, doesn't set markauto flags correctly
<twb> Granted, I tend to markauto EVERYTHING and have a single equivs stub, because I find that easier than debfoster/deborphan
<twb> But I don't feel like teaching people how to do that
<bogeyd6> ok so if someone manually installed something, how does it get disabled on upgrade?
<twb> It's "disabled", in the sense that it matches ?obsolete(), as soon as you install it.
<bogeyd6> so aptitude search ?obsolete shows someone a list of all packages manually installed and no longer has its reqs?
<orogor> bogeyd6, as i said , let s say you install something from ppa or medibuntu, which is a common case
<twb> I'm talking about the set of packages that are installed and are not in apt.
<orogor> ppa and medibuntu will propose update to karmic, but still on dist upgrade, ubuntu will disable these repositories
<twb> orogor: you must have a bloody strange dist-upgrade.
<orogor> thus unless you manually re-enable the repository, all the packages installed from these are left with no upgrade path
<orogor> really thats  what it does here
<orogor> it disables everything except the Canonical-approved/maintained packages (can't remember the name)
<twb> I hate ubuntu
<orogor> i specially hate the network manager, it  s plain bogus and should be removed
<twb> orogor: FWIW, it's not installed by default unless you use a desktop CD
<twb> I especially liked how network manager + nis ==> 30 minutes to boot
<orogor> i specially like how it destroy my network config at each reboot
<orogor> i tried all the official settings to solve this , it seems there s no issue
<jcastro> if you define your network interface in /etc/network/interfaces NM won't touch it
<orogor> it does
<jcastro> then that's a bug
<orogor> thats  why i say it s bogus
<orogor> and it has always been
<jcastro> have you tracked down the cause? reported a bug?
<twb> Unfortunately, ubiquity doesn't define wired interfaces in /etc/network/interfaces by default, so NM gets its fingers into them
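A minimal /etc/network/interfaces stanza of the kind jcastro means (interface name and addresses are examples); with a wired interface declared statically like this, NetworkManager is supposed to leave it alone:

```
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
```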
<orogor> nope, still i bet it's a known one; all the people i know have this issue
<jcastro> which bug?
<orogor> well for my special case, sometime it goes differently
<orogor> a *nix with a static ip, plugged into a network with a dhcp server, uses the dhcp-provided address and not the static ip
<jcastro> why not do a dhcp reservation?
<orogor> it becomes very problematic when the computer with the nic with the static ip is a net gateway
<twb> jcastro: maybe he bought a $20 router and its DHCP server can't do reservations
<orogor> i bet because in the first place it should use the static ip and not the dhcp one
<orogor> still now it s less an issue , before, at every dhcp lease renew it would get a  new ip , now it  s only at boot
<twb> orogor: just purge NM
<orogor> did that
<orogor> but still that shouldn t be
<orogor> it has other issues with wireless, of the same kind
<orogor> basically it keeps overriding manually set settings with some other ones it thinks are best
<twb> orogor: that's its job :-/
<orogor> no, there's auto, dhcp, and whatever parameters; it should do that only if those are set
<orogor> btw, anyone with amd64  is having problem with amarok crashing on startup under karmic?
<uvirtbot`> New bug: #502262 in autofs (main) "autofs4 doesn't timeout cifs mounted shares" [Medium,Incomplete] https://launchpad.net/bugs/502262
<twb> That's a desktop issue
<orogor> yup, but maybe you also have a desktop computer :)
<twb> Nope
<twb> Sorry
<orogor> duh
<uvirtbot`> New bug: #505178 in likewise-open (main) "error while launching in kde" [Undecided,New] https://launchpad.net/bugs/505178
<Italian_Plumber> any way to get this notification turned off?  "You have new mail in /var/mail/david"
<twb> Italian_Plumber: yes
<Italian_Plumber> Thanks for answering my question. :)  I'll ask another.  How do I get this notification turned off? "You have new mail in /var/mail/david"
<twb> I don't remember
<niekniekniek> hello!
<niekniekniek> i was wondering if someone can give some info on how the ubuntu cloud makes sure it doesn't lose any data when some bare metal dies
<uvirtbot`> New bug: #502762 in samba (main) "[karmic] winbind installation causes apt to fail" [Low,Incomplete] https://launchpad.net/bugs/502762
<twb> niekniekniek: presumably, the same way RAID5 does
<twb> i.e. redundancy
<niekniekniek> yeah i know it has walrus and stuff
<niekniekniek> but i would like to know how it works
<niekniekniek> what if 5 pc's all have 80 GB's harddisk space
<niekniekniek> how many can die before one loses data
<niekniekniek> and how does it perform in terms of live migration when such an event occurs
<niekniekniek> stuff like that :)
<niekniekniek> anyone?
<Italian_Plumber> did you try wikipedia?
<Italian_Plumber> Here's what you really need to know about RAID, which will put you above my boss: If you have a RAID array, keep spare HDs around, so that when one fails, you actually have one to hot-swap!
<Italian_Plumber> Oh... and that spare laying around?  It can't be one that was pulled out of that same RAID array when it failed last year.
<niekniekniek> i know what RAID is...
<niekniekniek> when creating a cloud i wouldn't like to be dependent on one machine with RAID
<niekniekniek> Probably a netapp or something..
<niekniekniek> but that netapp would also need a spare
<niekniekniek> i'm not sure if ubuntu cloud uses that way... maybe it uses all the storage in all the vm hosts as storage
<niekniekniek> it seems it uses ata over ethernet
<bogeyd6> niekniekniek, you need to check into cloud data striping and http://nicolas.barcet.com/drupal/files/SkillsMatter-WhatIsUbuntuCloud_0.pdf
<ehazlett> is there support for LVM volume snapshot merging in 9.10?  i saw a launchpad request for inclusion in 8.10 but can't seem to find anything else about it...
<bogeyd6> is there a website that shows what kinds of things you do with cloud?
<jpds> bogeyd6: Most things?
<bogeyd6> well more detailed like uhm
<bogeyd6> is it similar to vmware's vmotion?
<jpds> niekniekniek: I /think/, that if the storage controller dies, then the cloud nodes will run off their local copy of it.
<cemc> twb: I read a little bit about this sheevaplug thingy, and as I understand the last version to run on it is Ubuntu 9.04, and after that that's it ? no more (up-to-date) ubuntu on that?
<AntonyB> I have been searching everywhere but I can't find out if there is a possible solution to replace AD in windows server with openldap and samba in ubuntu, and is it possible to apply GPO
<AntonyB> From a ubuntu server for a windows client?
<AntonyB> Anybody>
<AntonyB> ?
<niekniekniek> bogeyd6, jpds, thanks!
<niekniekniek> jpds: i'll try to find out if it does
<niekniekniek> bogeyd6: i'll give it a read
<niekniekniek> jpds: come to think of it, i guess it is not like that
<niekniekniek> because live migration won't work if there's a local copy
<niekniekniek> so i guess no local copy
<niekniekniek> ?
<niekniekniek> ok i read some more
<niekniekniek> seems to me the ubuntu cloud has a single point of failure
<niekniekniek> http://www.ubuntu.com/system/files/UbuntuEnterpriseCloudWP-Architecture-20090820.pdf
<niekniekniek> the CLC/WS3
<niekniekniek> anyone?
<ruben23> hi.. is there any application with which i can deploy windows xp and linux client desktop images to multiple client pcs on the network? for installation purposes, imaging a desktop os
<niekniekniek> ruben23: that would be clonezilla
<ruben23> niekniekniek: both for windows and linux OS..?
<niekniekniek> Features of Clonezilla:  Therefore you can clone GNU/Linux, MS windows and Intel-based Mac OS, no matter it's 32-bit (x86) or 64-bit (x86-64) OS. For these file systems, only used blocks in partition are saved and restored. For unsupported file system, sector-to-sector copy is done by dd in Clonezilla.
<niekniekniek> sweet feature: Multicast is supported in Clonezilla SE, which is suitable for massively clone. You can also remotely use it to save or restore a bunch of computers if PXE and Wake-on-LAN are supported in your clients.
<ruben23> niekniekniek: ow ok very nice.. you're using it..?
<ruben23> im new to this, do you have a how-to link of some kind..
<niekniekniek> have you tried google? http://www.clonezilla.org/
<niekniekniek> if you have a specific question i can maybe answer it
<niekniekniek> and yes, i've used it in the past.. back then it required some more handwork than i expected, but it worked ok
<niekniekniek> where should i go to learn more of the ubuntu cloud system?
<uvirtbot`> New bug: #459403 in ubuntu-docs (main) "OpenLDAP server instructions out of date: slapd no longer creates initial directory (dup-of: 463684)" [Undecided,Fix released] https://launchpad.net/bugs/459403
<osmosis> I just noticed karmic has python 2.6, and no 2.5. So ubuntu is fully upgraded to the latest python? There are no 2.5 dependencies?? Im surprised. I thought it would be more work to get things working on 2.6 since it doesn't maintain backwards compatibility.
<osmosis> err, that would be python3. nevermind.
<ScottK> Also Karmic has 2.5.  It's Lucid that has only 2.6.
<soren> ScottK: lucid still has python2.5.
<soren> ScottK: and python2.4, apparently.
<uvirtbot`> New bug: #505278 in openssh (main) "ssh-add -D deleting all identities does not work. Also, why are all identities auto-added?" [Undecided,New] https://launchpad.net/bugs/505278
<uvirtbot`> New bug: #505297 in clamav (main) "package clamav-base 0.95.3+dfsg-1ubuntu0.09.04 failed to install/upgrade: id: clamav: No such user" [Undecided,Incomplete] https://launchpad.net/bugs/505297
<metalf8801> I'm trying to use backuppc for the first time and I'm getting this error "tree connect failed: NT_STATUS_BAD_NETWORK_NAME" and I'm wondering what I should do
<metalf8801> does it mean I have the wrong workgroup name?
<metalf8801> is anyone else using backuppc?
<RoyK> metalf8801: hm. looks good - why not combine that with an opensolaris server with dedup+compression :)
<metalf8801> I can't get it to work on my Ubuntu server which is what I want to do first
<metalf8801> RoyK: what do you mean it looks good? its not working
<RoyK> haven't tried it, but it looked good
<RoyK> http://i.imgur.com/UEeKS.jpg lol
#ubuntu-server 2010-01-10
<jmarsden> metalf8801: Saying "its not working" is insufficient information for anyone here to help you.  Be specific: what version of Ubuntu are you running, what exactly did you do, what happened, what did you expect to happen?
<metalf8801> ok thanks
<metalf8801> I'm using Ubuntu server 9.10
<metalf8801> I installed samba a while ago and it seems to be working
<metalf8801> I just installed Backuppc and tried to set it up but I'm getting this "Last error: tree connect failed: NT_STATUS_BAD_NETWORK_NAME" message when I try to run a backup
<metalf8801> jmarsden: is there more I should say?
<uvirtbot`> New bug: #505301 in openssh (main) "openssh server should warn that .ssh/authorized_keys is not accessible (causing ssh pubkey authentication to fail silently)" [Undecided,New] https://launchpad.net/bugs/505301
<Adri2000> I'm setting up a mail server that will host multiple domains. I'll use postfix's virtual mailboxes, and dovecot for pop/imap. as there will be only a few users, I won't use something like ldap or mysql. given that I'd like smtp authentication in postfix and pop/imap authentication in dovecot to use the same user/password backend, what could I use?
<ScottK> soren: Yes, the interpreters are there, but they aren't supported Python versions for module and extension building.
<ScottK> it means they're pretty unlikely to be useful unless you only need core Python stuff.
<LlamaZorz> I have installed ubuntu server and when I first boot I get a grub "error: no such device" error.  Usually in grub 1 id know how to fix this with esc or id be able to go into a livesession and change the files, but I cant mount ext4.  So how can i work on the grub config while at this level
<sabgenton> what is the standerd way to stop start sshd ?
<sabgenton> I can't find it in the usual place
<sabgenton> is there like a tool to go thru?
<stgraber>  /etc/init.d/ssh stop/start
<sabgenton> oh ssh is sshd
<sabgenton> see
<sabgenton> why did they change it thats strange
<sabgenton> its a daemon
<sabgenton> the d makes sense
 * sabgenton slaps self
<sabgenton> stgraber: thanks
<twb> sabgenton: hysterical raisins
<sabgenton> lol
<stgraber> Adri2000: np
<sabgenton> ah all clear now :P
<stgraber> that's sabgenton not Adri2000 ... (yeah, auto-completion + typo ;))
<sabgenton> :)
<LlamaZorz> grub2 is awful
<twb> LlamaZorz: try extlinux
<LlamaZorz> twb: thanks il look into it
<sabgenton> is there any reason the ubuntu forums show examples with wvdial instead of pon?
<sabgenton> wvdial doesn't run as a daemon which is a pain
<bogeyd6> sabgenton, i havent dealt with dialup modems in quite some time, and I imagine most others have not either. Probably why that is done.
<bogeyd6> sabgenton, https://help.ubuntu.com/community/DialupModemHowto/SetUpDialer  is infinitely more informative
<sabgenton> bogeyd6: well its a 3g modem not a 56k dialup modem but fair enough
<sabgenton> bogeyd6: adsl, ISDN etc - is there no way to work them with wvdial?
<bogeyd6> just checking
<bogeyd6> !dial up
<ubottu> You want to connect via dial-up? Read https://help.ubuntu.com/community/DialupModemHowto - Also try disabling/removing KNetworkManager if KDE applications cannot connect using dial-up
<sabgenton> I'f you'll excuse my ignorance
<bogeyd6> !pppoe
<ubottu> Setting up an ADSL/PPPoE connection? Look at https://help.ubuntu.com/community/ADSLPPPoE
<bogeyd6> and then
<bogeyd6> !adsl
<ubottu> Setting up an ADSL/PPPoE connection? Look at https://help.ubuntu.com/community/ADSLPPPoE
<jmarsden> metalf8801: (After a 4 hour delay) I was away earlier, back now.  The error msg you are seeing looks like a samba client configuration issue. However, I'm confused... you said you were backing up a Ubuntu server -- doing that would not normally be using a samba client at all.  I'd use the backuppc rsync client for that.
<metalf8801> I'm backing up a computer running xp to a Ubuntu file server
<jmarsden> Ah.  Then you probably need to tell backuppc/samba the correct workgroup name for that XP machine?
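NT_STATUS_BAD_NETWORK_NAME generally means the requested share name doesn't exist on the target machine, so the share name is worth checking alongside the workgroup. A sketch of the relevant per-host BackupPC settings (the share name and credentials are examples; on Ubuntu the config lives under /etc/backuppc):

```
$Conf{XferMethod}       = 'smb';
$Conf{SmbShareName}     = ['C$'];    # must match a share the XP box actually exports
$Conf{SmbShareUserName} = 'backup';
$Conf{SmbSharePasswd}   = 'secret';
```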
<sabgenton> bogeyd6: all I was saying before is pon can be used for adsl not just dialup
<bogeyd6> sabgenton, is there any reason the ubuntu forums show examples with wvdial instead of pon?
<sabgenton> bogeyd6>	sabgenton, i havent dealt with dialup modems in quite sometimes, as I imagine most others have not either.
<bogeyd6> sabgenton, you lost me
<sabgenton> sorry doesn't matter
<sabgenton> I basically want to use a wvdial script as a pon script
<jmarsden> bogeyd6: The examples are for dialup, for which wvdial is the conventional tool to use.  also wvdial does "intelligent" guesses about how to proceed when it meets something that isn't quite PPP yet at the other end, whereas pon is strictly PPP only, I think.  Like others have said it's been a while since I used either one!
<sabgenton> cause wvdial is a pain
<jmarsden> sabgenton: What specifically is "a pain" about wvdial?  What aspect of its behaviour are you trying to change or avoid?
<sabgenton> having to run it in a terminal and then C-c it when finished
<sabgenton> pon runs like a service
<jmarsden> You are saying that wvdial doesn't return to the shell when it has established the PPP link?  It used to, from memory...
<bogeyd6> jmarsden, he didnt do the wvdial & disown
<qwood> Hello, all. Is anyone here familiar with quotas?
<jmarsden> sabgenton: Would wvdial &    # relieve the "pain" you are experiencing?  Or even nohup wvdial & ?
<jmarsden> qwood: Probably; ask your actual question and see who answers :)
<qwood> Ok.
<sabgenton> jmarsden: true
<sabgenton> I'm so dumb
<sabgenton> well control c is like kill -2 or something, ain't it?
<sabgenton> so I could just send that to the  process after finding it with ps aux
<sabgenton> top etc
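sabgenton has it right: Ctrl-C delivers SIGINT, which is signal 2, so the same effect can be had by sending it to the backgrounded process by PID. A quick sketch with sleep standing in for wvdial:

```shell
# Ctrl-C == SIGINT == kill -2; demo on a backgrounded sleep:
sleep 60 &
pid=$!
kill -INT "$pid"          # same as pressing Ctrl-C in its terminal
wait "$pid" || true       # reap it; exit status reflects the signal
if kill -0 "$pid" 2>/dev/null; then echo "still running"; else echo "terminated"; fi
```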
<qwood> I just wondered if I needed to undo anything if I accidentally ran touch /aquota.user /aquota.group instead of touch /home/aquota.user /home/aquota.group
<qwood> I don't want quotas enabled on /
<sabgenton> hmm still thats a pain pon seems much more uniform
<twb> qwood: the quota files have no significance unless the filesystem is mounted -ousrquota -ogrpquota
<jmarsden> qwood: As long as the filesystem concerned doesn't have the usrquota mount option set it still won't use quotas, so you can just delete the unwanted files, as far as I know.
<jmarsden> :)
<twb> Even if it does -- it'll just make the next boot take aaaages
<twb> And of course disable quotas until that time.
<qwood> This is the only command I think may have done anything, mount -o remount /
<qwood> So what do I need to do if anything? In case you can't tell, I've never used it before.
<jmarsden> qwood: Did the / filesystem have the usrquota or grpquota option(s) in /etc/fstab at the time you ran the command?
<qwood> No it didn't, now I know what you meant. Also, the /home filesystem has the options usrjquota and grpjquota instead of that in the guide I am following. Why is that, out of curiosity?
<sabgenton> jmarsden: I just want to translate a wvdial.conf to a pon ppp file
<sabgenton> eg http://pastebin.ca/1745010
<jmarsden> sabgenton: OK, go for it, if that's what you want to do.
<jmarsden> qwood: Sounds like a typo to me, I see no usrjquota option mentioned in the mount man page.
<qwood> I would be willing to agree. The software that utilizes it has a guide, which is not written by this  guy. It says to use it without the js
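For what it's worth, the "j" variants are real options rather than typos: usrjquota/grpjquota enable journaled quotas (quota updates go through the filesystem journal, so no full quotacheck is needed after a crash), and they must be paired with a jqfmt= option. The two fstab styles side by side (device and mount point are examples):

```
# classic quotas:
/dev/sda3  /home  ext3  defaults,usrquota,grpquota  0  2
# journaled quotas:
/dev/sda3  /home  ext3  defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0  0  2
```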
<sabgenton> jmarsden: ok you would know how?
<sabgenton> wouldn't*
<jmarsden> sabgenton: Not really... about 15 years ago I would have done it pretty fast, I think :)  I suspect you will want to make a ppp chat script that does the various modem commands.
<sabgenton> ok so wvdial does that part for you so to speak?
<jmarsden> sabgenton: Have you considered something like   alias mypon='nohup wvdial &'    and alias mypoff='kill %wvdial'
<jmarsden> Yes.
<jmarsden> Read the respective man pages of the two tools to find out what they do and how they work :)
<qwood> Hopefully I wont have any more surprises. Haha. Thanks for the help, twb and jmarsden
<jmarsden> qwood: You're welcome.
<sabgenton> oh I'll do that im the mean time
<sabgenton> nice thing about having a good pon script is that I can chuck it on any system just about
<sabgenton> not everyone has wvdial compared to pon /ppp
<jmarsden> sabgenton: If you have multiple systems with these 3G modems, then you should take the time to learn both wvdial and pon (and pppd) well, so you can develop good scripts and troubleshoot them.
<twb> jmarsden: busybox start-stop-daemon :-)
<sabgenton> jmarsden: true
<jmarsden> sabgenton: Incidentally, wvdial is in main, and gnome-system-tools depends on it, so it should be available to all Ubuntu systems (at least from from Hardy forward).
<sabgenton> well i have go a pon script that works !
<jmarsden> twb: That would work too :)
<jmarsden> !congratulate sabgenton
<sabgenton> but its not the translation of my wvdial script which was made for my connection
<sabgenton> which i wanted to do
<sabgenton> jmarsden: guess i  will go study and figure the rest out later
<sabgenton> :)
<jmarsden> Sure.  If your pon script works as reliably as wvdial, then whether it is a translation, a direct revelation from God, or your cat walked on the keyboard and typed it that way is irrelevant to your immediate needs -- you have it, and it works :)
 * sabgenton worrys the future seeing thur with jmarsden's  concept
<sabgenton> with the
<twb> Not that it matters, but here's a start-stop-daemon example I prepared earlier: http://twb.ath.cx/Preferences/.bin/twb-agents
<sabgenton> cool
<sabgenton> twb: lol u a gentoo user?
<twb> sabgenton: hardly.
<sabgenton> k
<twb> ubottu -l keychain
<twb> Grmph
<sabgenton> !quit thx guys!
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<wweasel> I'd really appreciate your help. I installed Ubuntu Server, had it automatically configure LVM. It made my root partition large, almost the entire available disk space. I'd like to shrink it, but can't shrink the active root partition.  Can I do so using an environment launched from the Server Disc, or would I need a live environment from the Desktop CD?
<twb> wweasel: it'd be easiest to use a live CD that includes LVM
<twb> e.g. Knoppix or the CentOS 5 live CD.
<twb> Hmm, I wonder if you could do it by breaking in the initramfs...
<wweasel> well, i have a network connection, I can just install the required packages, say, inside the Desktop CD's live environment
<wweasel> you're right, if I had one of those live cds, they would be an easier path. I could dl one, but this may be faster
<wweasel> twb: i wouldn't know what it would mean to break in the initramfs, never mind how to do it :)
<wweasel> Part of my problem is that I'm doing this on an old laptop which is finicky about which CDs it will boot from. I have no rational explanation for why it boots from certain ones but not others... and I don't think I'm being dense.
<wweasel> for instance, it's cool with the 9.10 Server CD, but not the Desktop CD (if it would boot the Desktop CD, I'd be off and running already)
<wweasel> do you happen to know off the top of your head if the CentOS 5 live CD has ext4 support built in as well? (i'd need that too)
<wweasel> twb?
<twb> I don't know.
<twb> Probably not.
<jmarsden> wweasel: grep ext /proc/filesystems in a CentOS 5.4 virtual machine here shows only ext2 and ext3.
<wweasel> bah. i'm baffled as to how i'm going to shrink this bloody partition
<twb> What WAS the rationale for making ext4 the default in 9.10?
<wweasel> twb: extents are pretty? :P
<wweasel> jmarsden: thanks! it's bad news :/ but thanks for the help. it saves me the time and frustration of downloading/burning the disc and discovering it doesn't work
<jmarsden> wweasel: If you can instead deal (somehow) with the "Ubuntu Desktop CD won't boot" issue, you'd be set... have you tried a Lucid Alpha Desktop image??
<wweasel> jmarsden: nope. that's a good idea. i'll set it to dl and give it a shot (it will take a while, i don't have a particularly fast connection)
<twb> Or just roll your own CD...
<wweasel> and thanks, btw, on checking centos. saves me the time and frustration of discovering it won't work
<twb> live-helper isn't that hard to use
<wweasel> and in all these cases hope that my laptop likes the cd i make
<wweasel> i really can't fathom what makes it boot some and not others
<wweasel> and it's consistent too
<twb> wweasel: make a USB boot key, then.  Or just make a live image in a spare LVM volume
<jmarsden> wweasel: OK.  No problem.  Right.  But open source computing is not supposed to be some sort of "magic"; have you tried a USB boot?
<jmarsden> Or do you have an external USB CDROM drive the laptop might be persuaded to boot from, in case you have a failing optical drive in the laptop?
<wweasel> i haven't tried a usb boot. that's a good idea too. though i can't get the laptop to boot from an external CD drive (i thought its drive might be wonky), so I'm not sure if it will boot usb. or whether that's related
<wweasel> :)
<wweasel> twb's idea of making another root partition in a spare LVM volume is a good one.
<wweasel> it avoids the CD drive, will launch from GRUB, and then i can shrink the current one.
<jmarsden> wweasel: Yes, if you have a slow network connection that could be a lot faster than any of the other approaches :)
<wweasel> jmarsden: 45 min remaining in the Lucid ISO dl :)
<wweasel> ~200KB/s
<jmarsden> wweasel: I'm spoiled: 10Mbit/sec FIOS downloads, so CD-sized ISO files download in about 13 minutes here :)
<wweasel> hah! well, consider me jealous.
<wweasel> twb, jmarsden: what do you think is the easiest way to approach this? duplicating my current root partition into a second partition, then configuring grub to boot that one? better idea?
<twb> wweasel: just debootstrap or whatever, then change root= to point to the other LV
<wweasel> twb: good idea. better idea, i'd say.
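The debootstrap-into-a-spare-LV route can be sketched as below (the volume group, LV name and release are illustrative). The function only prints the plan instead of executing it, since the real commands need root and the machine's actual LVM layout:

```shell
# Print the steps for building a rescue system in a spare LV;
# the names passed in are placeholders, nothing here touches the disk.
rescue_plan() {
    vg=$1; lv=$2; release=$3
    cat <<EOF
lvcreate -L 2G -n $lv $vg
mkfs.ext3 /dev/$vg/$lv
mount /dev/$vg/$lv /mnt
debootstrap $release /mnt http://archive.ubuntu.com/ubuntu
# then edit the grub entry: root=/dev/mapper/$vg-$lv
EOF
}
rescue_plan vg0 rescue karmic
```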
<twb> Now that I think about it, a mere LV *resize* ought to be possible from a modern d-i without resorting to the fumbling "rescue" target.
<wweasel> twb: ext3/4 support live expanding. I'm not so sure about live shrinking.
<twb> wweasel: I realize that.
<wweasel> Sorry!
<wweasel> well, if you can think of a way, i'd be relieved. this will almost certainly work, though
<jmarsden> wweasel: twb's new idea is that you could boot from a Debian CD, as though you are doing an install, then in the partitioning step resize (shrink) the existing root partition.  At that point it isn't mounted, so shrinking it should work.
<jmarsden> (At least, that's my interpretation of it)
<twb> Right
<wweasel> Ah!
<twb> modern d-i even has a "shrink" option, sometimes
<twb> Within its GUI, I mean
<wweasel> stupid question:what is "d-i"
<jmarsden> wweasel: debian-installer
<wweasel> ah!
<wweasel> ok
<wweasel> well, i can boot the ubuntu server disc. but I wouldn't be sure how to get to a point where i can control it to shrink the lvm.
<jmarsden> wweasel: Select Install and then click next until you get to the partitioner :)
<twb> Just keep clicking next until it talks about partitioning, then pick "manual"
<wweasel> can I then have it write the new partition table, without proceeding to installing? i think it proceeds immediately
<twb> Yeah, you just turn off the machine after the resize is  done
<wweasel> it could work! but i think i like the rescue partition/debootstrap idea better. it's also a kludge, but less of a kludge :)
<wweasel> i could do with a faster internet connection for debootstrap too :)
<wweasel> twb: are you still around? i have a quick question
<twb> What?
<wweasel> I've made the rescue partition. debootstrap. I need to install the linux-image package so i can boot from it.
<wweasel> the linux-image depends on grub
<twb> No you don't.
<wweasel> no?
<twb> The kernel is already present in the other LV
<twb> Or wherever /boot is
<wweasel> right!
<wweasel> Okay :) thanks * so * much!
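The rescue-root approach worked out above can be sketched roughly as follows (a sketch only — the volume group name `vg0`, LV name `rescue`, and release are placeholders; as twb notes, no kernel install is needed because the kernel already lives in /boot):

```shell
# Create a spare LV and debootstrap a minimal system into it
# (hypothetical names: volume group "vg0", new LV "rescue")
lvcreate -L 2G -n rescue vg0
mkfs.ext3 /dev/vg0/rescue
mkdir -p /mnt/rescue
mount /dev/vg0/rescue /mnt/rescue
debootstrap karmic /mnt/rescue http://archive.ubuntu.com/ubuntu

# Reuse the kernel already in /boot: add a grub stanza whose root=
# points at the new LV, e.g.
#   kernel /vmlinuz-<version> root=/dev/mapper/vg0-rescue ro
```

Booting that entry leaves the original root LV unmounted, so it can then be shrunk safely.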
<LlamaZorz> where is the default location for the website in apache, is it /usr/share/apache2/default-site
<twb> LlamaZorz: by default httpds export /var/www/
<LlamaZorz> thanks sir
<LlamaZorz> and where does mysql store the databases?
<twb> Probably /var/lib
<LlamaZorz> managing this is so so much easier than a freebsd server
<LlamaZorz> ls
<wweasel> twb: I have a potential problem. I made my rescue arch i386, while my server install's arch is ia64 (debootstrap was failing with the setting --arch ia64).
<wweasel> so I will need to get a 32-bit kernel, right?
<twb> That was silly
<wweasel> well, it wasn't working.
<twb> I would've just tried d-i
<wweasel> I know. Well, at this point, I should still be able to do what I want if I install a 32-bit kernel to /boot, right?
<wweasel> (I'm sorry, I know somewhat what I'm doing, but only-half. I'm aware of that).
<twb> I don't know
<wweasel> Okay. Thanks
<sabgenton> I am trying to install linux-restricted-modules-server and madwifi-tools are they still in repositorys?
<sabgenton> (on karmic)
<twb> sabgenton: try packages.ubuntu.com
<sabgenton> twb: if it's not commented somewhere in my sources.list does that mean madwifi is not suported or somthing?
<twb> sabgenton: I don't know what you mean by that.
<jmarsden> sabgenton: http://packages.ubuntu.com/madwifi-tools   # will show you what versions of Ubuntu madwifi-tools exists in... try it.
<sabgenton> jmarsden: awe crap  it says jaunty
<sabgenton> jmarsden: any change of them bring it back on Lucid?
<jmarsden> You'd need to find out why it was removed, and why you think it is still needed, etc etc.
<sabgenton> awe man one me against ...
<sabgenton> alot
<sabgenton> jmarsden: reason being ath5k (which im shure is the reason they don't need it) doesn't do access point mode
<jmarsden> sabgenton: Until you check on it, it might be just that the package got renamed or something that simple... so do your homework and find out why it was removed before you complain all over the channel :)
 * sabgenton worrys
<sabgenton> k
<sabgenton> sorry
<sabgenton> jmarsden: is this likely to mean something?
<sabgenton> https://launchpad.net/ubuntu/karmic/+source/madwifi-tools
<jmarsden> Not really any new info there.  I'd suggest looking at the Debian BTS to see if you can find out something there.
<twb> jmarsden: madwifi was a non-Free driver for some chipsets, and ath5k (which IS free) replaced it.
<twb> jmarsden: I guess Debian/Ubuntu just stopped shipping madwifi entirely.
<twb> sabgenton wants the non-Free version because (apparently) ath5k doesn't support some connection modes.
<jmarsden> Sounds reasonable to me.  I don't see it in Lenny or Squeeze.  So that answers the question.  And means it is very unlikely madwifi-tools would return
<sabgenton> cry
<sabgenton> twb: well as jmarsden said i better not blab to much before I really no
<sabgenton> mabye ath5k now does that mode
<sabgenton> I really have my doubts though
<jmarsden> sabgenton: Why not test it and find out instead of doubting? :)
<sabgenton> they were talking about it if I remeber
<sabgenton> jmarsden: ath5k?
<sabgenton> u mean
<jmarsden> ath5k in access point mode.  Or whatever it was that you need.  Yes.
<jmarsden> Don't have doubts -- test and then you will *know* :)
<sabgenton> u guys told me early how ubuntu did have dependency probs like debian users playing with stable and  unstable and stuff just worked ;P
<sabgenton> didn't
<sabgenton> now I have to 'experiment'
<jmarsden> sabgenton: And?  You are now having dependency issues in Ubuntu?  With what depending on what?    BTW, I recently bought a random cheap PCI wifi card recently and plugged it into my desktop and... it just worked... :)
<jmarsden> You are the one claiming you need madwifi-tools.  Then you say you "doubt" something... so you don;'t really know whether you need them.  So I suggest you test.  This is bad how?
<sabgenton> I wanted to setup my headless server as an access point without having to buy  a router basicly
<sabgenton> I have done it before with madwifi
<sabgenton> now i'm having to play
<jmarsden> sabgenton: OK, so stick with 8.04.3 on your server, if you want.
<sabgenton> well its fun but
<sabgenton> :(
<jmarsden> Or if your time is valuable and you don't like anything that you have "doubts" about, spend a few dollars on a different Wifi card that you know will work in the mode you need.
<sabgenton> we will adjourn :P
<RoyK> what would be the most convenient way to automatically upgrade 10+ servers? cron-apt? these servers run 9.10, so there are updates arriving quite often
<jmarsden> sudo install unattended-upgrades   # and read the docs and configure it
<jmarsden> sudo apt-get install unattended-upgrades  #  I mean :)
 * RoyK tests in a vm
<jmarsden> And if it is 10+ and you are worried about bandwidth usage, create a local apt proxy with apt-cacher-ng or similar and then point them all at that.
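For reference, getting unattended-upgrades going on a fleet like this usually comes down to two steps (a sketch; which origins get upgraded is tuned per release):

```shell
sudo apt-get install unattended-upgrades
# Enable the periodic run; this writes /etc/apt/apt.conf.d/20auto-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# Which origins are auto-upgraded (e.g. the "-security" pocket) is
# controlled in /etc/apt/apt.conf.d/50unattended-upgrades
```

Repeating the same two commands on each of the 10+ servers (or pushing the resulting config files out) is all that's needed.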
<RoyK> we're on a gigabit internet link, so I think it'll suffice
<jmarsden> That should be sufficient, yes :)
<RoyK> :)
<RoyK> well, to be honest, it's just 100Mbps as of now, the new fiber isn't connected :þ
<jmarsden> Unless you have already filled it up with videos/pron/whatever, that will still be OK :)
<RoyK> :)
<sabgenton> jmarsden: twb: http://wireless.kernel.org/en/users/Drivers/
<sabgenton> I was wrong :/
<jmarsden> sabgenton: Now you know why I said you should do the research before griping :)
<sabgenton> my apologys ath5k is now fully awsome!
<sabgenton> I new they were getting there but didn't know they made it
<sabgenton> so looks like ubuntu didn't depreciate mad-wifi till karmic cause ath5k is now good for it
<sabgenton> :D
<sabgenton> anyone have a good guide for this?
<sabgenton> jmarsden: what do I put in hostpad or /modprob.d/options
<sabgenton> if I can ask
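A partial answer to sabgenton's question: with the mac80211 drivers (ath5k included) there is normally nothing to put in /etc/modprobe.d/ for AP mode — hostapd selects the mode via nl80211. A minimal /etc/hostapd/hostapd.conf sketch (interface name and SSID are placeholder values):

```
# /etc/hostapd/hostapd.conf -- minimal open access point (example values)
interface=wlan0
driver=nl80211      # mac80211 drivers such as ath5k use the nl80211 driver
ssid=myserver-ap
hw_mode=g
channel=6
```

The old madwifi-era modprobe options were specific to that driver and don't carry over.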
<underdev> hi!  When i was using the client-oriented ubuntu karmic, i would have to sudo each time i wanted admin "powers". On karmic server, i stay "sudo"ed. How do i relinquish admin permissions? or set a time-out?
<underdev> (this is an account i added to the "admin" group.
<underdev> )
<underdev> i seem to remember reading there was a 5 minute timeout, but that doesn't seem to be the case on Ubuntu 9.10.
<underdev> figured it out
<underdev> ty
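For the record, what underdev was chasing is sudo's credential cache, controlled by `timestamp_timeout` in sudoers (upstream default: 5 minutes). A sketch of the relevant knobs:

```shell
sudo -k          # drop the cached credentials immediately
sudo visudo      # to change the timeout, add a Defaults line, e.g.:
# Defaults timestamp_timeout=5     # re-prompt after 5 minutes
# Defaults timestamp_timeout=0     # prompt for a password every time
```

This is the same mechanism on server and desktop installs; only the configured value can differ.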
<erichammond> kirkland: I tried to test the fix for bug 479823 as requested in the last Ubuntu server team meeting, but ran into bug 505482 which I just submitted.
<uvirtbot`> Launchpad bug 479823 in eucalyptus "euca2ools: euca-bundle-vol strips leading zero (0) from user id" [Undecided,Fix committed] https://launchpad.net/bugs/479823
<uvirtbot`> Launchpad bug 505482 in euca2ools "euca-bundle-vol dies with "Invalid cert"" [Undecided,New] https://launchpad.net/bugs/505482
<LinuxAdmin> hi guys
<LinuxAdmin> is there significant diferences between ubuntu and ubuntu server?
<LinuxAdmin> I mean, by the admin perpective
<uvirtbot`> New bug: #505519 in samba (main) "package winbind 2:3.4.0-3ubuntu5.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 2" [Undecided,New] https://launchpad.net/bugs/505519
<uvirtbot`> New bug: #505522 in tomcat6 (main) "Tomcat 6: FileNotFoundException for MANIFEST.MF when auto-expanding a WAR" [Undecided,New] https://launchpad.net/bugs/505522
<twitchnln> greetings i've got a jaunty-server box that I recently installed a sblive into, I have all modules loaded, but for some reason there is no /dev/dsp* any ideas?
<LinuxAdmin> is there significant diferences between ubuntu and ubuntu server?
<twitchnln> different kernel and doesn't install fluff by default...
<Wallace> Can anybody give me any tips on setting up my printer in server?  My printer is attached to the network (wifi), I can see it, and I can print to it directly from a laptop running desktop ubuntu, and lpinfo -v on the server sees the printer - I am not able to connect to cups on 192.168.0.1:631 (I get a 403 forbidden error).  Anybody know where I might start to fix this?  (What I am ultimately trying to do is to set up server as a print server)
<Wallace> ... 192.168.0.1 being the server, where cups is installed
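The 403 Wallace sees is typically CUPS refusing non-localhost browsers: by default cupsd listens on and allows only localhost. A hedged sketch of the usual /etc/cups/cupsd.conf changes to open the web interface to the LAN:

```
# /etc/cups/cupsd.conf (excerpts) -- allow LAN access to the web interface
Listen 631                 # instead of "Listen localhost:631"
<Location />
  Order allow,deny
  Allow @LOCAL             # permit hosts on the local subnet
</Location>
# then restart: sudo service cups restart
```

The same `Allow` treatment is needed on `<Location /admin>` to administer queues remotely.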
<MTecknology> What's the deal with these files? /home/.ecryptfs/michael/.Private/ECRYPTFS_FNEK_ENCRYPTED.FWZtAVmepc5o1ETurbn2pXrnP3Emk6bGLudWDD-VoqtOf2IvF8f.mrDTRE--/ECRYPTFS_FNEK_ENCRYPTED.FWZtAVmepc5o1ETurbn2pXrnP3Emk6bGLudWljsbayczoLH9Bb.CvqKvpE--/ECRYPTFS_FNEK_ENCRYPTED.FWZtAVmepc5o1ETurbn2pXrnP3Emk6bGLudWhX9blVIv8zrpEsGMUxAzJ---/ECRYPTFS_FNEK_ENCRYPTED.FWZtAVmepc5o1ETurbn2pXrnP3Emk6bGLudWhgJ3q6ADzEHXd7erIU-c.E--
<guntbert> MTecknology: you enabled an encrypted private directory?
<jpds> MTecknology: That's a file in your ~/Private directory / encrypted home.
<MTecknology> jpds: nifty stuff; I always thought the encrypted data was still in ~/
<LyonJT> Hey whats the command to remove a folder and all its contents
<guntbert> LyonJT: !!!careful!!! don't make typing errors     rm -rf <folder>     - NEVER with root permissions!
<LyonJT> why not with root?
<guntbert> LyonJT: because you accidentially might remove your complete system of important parts of it
<guntbert> *or
<LyonJT> oh okay thanks mate !
<guntbert> LyonJT: you're welcome :-) but please do triple check before <enter> :)
<LyonJT> i have :) thanks!
<LyonJT> do you know what Reloading /etc/samba/smb.conf smbd only means?
<martin-> does ufw automatically save rules?
<erichammond> martin-: Based on the description of "enable" and "disable" I assume it does, but it would be easy to test.
<martin-> yep
<martin-> it saves them in /lib/ufw
<martin-> atleast on 9.10 server
<erichammond> cool
<erichammond> I'm upgrading to Karmic from Hardy, so am learning a few of the differences.
<martin-> :)
<martin-> I love the ufw syntax
<martin-> almost the same as pf in openbsd
<erichammond> Amazon EC2 has a firewall that's even simpler to set up, which is what I've been using for the last couple years.  I may layer on with ufw for added protection.
<martin-> nice
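A few representative examples of the ufw syntax being praised above (rules are persisted automatically, under /lib/ufw on 9.10 as martin- found; addresses are examples):

```shell
sudo ufw allow 22/tcp             # open SSH
sudo ufw allow from 10.0.0.0/8    # trust an internal range
sudo ufw deny 3306/tcp            # block MySQL from outside
sudo ufw enable                   # activate; rules survive reboots
sudo ufw status verbose
```

Layering this on top of EC2 security groups, as erichammond suggests, just means both filters must permit the traffic.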
<Aison> evening
<Aison> I created my own software raid
<Aison> but after every reboot
<Aison> /dev/md1 is gone ;)
<Aison> then I have to do mknod /dev/md1 b 9 1
<Aison> mdadm --auto-detect
<Aison> then it's back
<Aison> how do I keep the md1 device persistent
<ElllisD> Insn't there a tool that'll alter the sources.list automatically depending on release edition? I'm upgrading from 6.06 LTS to current.
<ElllisD> Aison: It has something to do with being listed in /etc/fstab-
<Aison> it's added in fstab
<ElllisD> what happens? it doesn't exist when you cd there?
<Aison> well, the device node is not created, so I can't mount it
<Aison> and I can't cd to it of course
<Aison> I guess it's related to a bad /etc/mdadm/mdadm.conf
<ElllisD> I go through a process where I do pvcreate /dev/foo, then lvcreate /dev/foo when I add a drive
<ElllisD> <-- dunno what that is
<ElllisD> Sorry I couldnt be more helpful
<Aison> found it ;)  sudo mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf
<Aison> that's all
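One caution worth adding to Aison's fix: append the scan output only once, and regenerate the initramfs so the array is also assembled before the root filesystem mounts at boot (a sketch):

```shell
# Append ARRAY lines describing existing arrays to mdadm.conf (once)
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Rebuild the initramfs so md1 is assembled early at boot
sudo update-initramfs -u
```

Running the `--examine --scan` redirect repeatedly leaves duplicate ARRAY lines in the file, which causes its own warnings later.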
<ElllisD> nice
<ElllisD> Apparently I can't do do-release-upgrade from Dapper by default, afraid to install it in case it bricks the server
<ElllisD> well, that solves that- it doesnt even exist
#ubuntu-server 2011-01-03
<vorian> what is the easiest way to set up a mail server?
<vorian> nvrmind
<Delerium_> I'm no pro, but it depends on your infra (fix ip? MX record?  send and receive?
<vorian> just something that will send mail when prompted my the right script
<vorian> I think what i'm looking for is exim4
<Delerium_> I **think** postfix might do the job too
<Delerium_> depend on your scripting language?!
<vorian> AH, I'll set it up and see
<Delerium_> If you just want to send mail, just specify your ISP SMTP server and your mail should be routed correctly
<vorian> kk
<vorian> thanks Delerium_
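For the "send mail when prompted by a script" case, either exim4 or postfix in smarthost/satellite mode (relaying through the ISP's SMTP server, as Delerium_ suggests) will do; once installed, scripts can send through the standard mail(1) interface. A sketch with placeholder addresses:

```shell
sudo apt-get install exim4            # or postfix
sudo dpkg-reconfigure exim4-config    # pick "satellite" / smarthost mode
# mail(1) comes from a mailx package (e.g. bsd-mailx or mailutils).
# Then from any script (addresses are examples):
echo "backup finished" | mail -s "nightly report" admin@example.com
```

No inbound mail handling (MX records, fixed IP) is needed for send-only use.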
<Delerium_> mp
<Delerium_> errr.. no
<_Techie_> DHCP operating on a bridge interface seems to take considerably longer to assign addresses than when listening on a standard interface, is there any reason for this?
<stgraber> _Techie_: It's not DHCP itself that takes longer. A bridged interface usually takes around 20s before it can start receiving/transmitting data over the bridge.
<gobbe> yes
<stgraber> _Techie_: if you go with a static IP instead, you'll notice the same delay until you can reach your bridged interface
<gobbe> and bridge is meant to be used in static ip
<stgraber> the bridge interface on the host shouldn't have a DHCP address or if you really need it, you should make sure you join a tap device (or similar) with a MAC address lower than all the other bridge members
<stgraber> that's because a bridge will always take the lowest MAC address as its own MAC, so if you don't control what the lowest address is, your bridge will change MAC address, causing your IP to change when using DHCP
<stgraber> there's nothing wrong with using DHCP on a bridged interface though (like a VM, VZ container, ...)
<gobbe> yes, it's working, but it's designed to work with static ip
<stgraber> a bridge interface is a bridge interface, it's not any more designed to be used with a static or with a DHCP address. It's just simulating an ethernet switch.
<stgraber> I agree it's good practice, as it's on a real physical switch, but there's nothing "designed" specifically for static or dynamic addresss.
<gobbe> well, not designed in that way, but it's working better with static ip
<stgraber> no. Your interface won't come online any faster with static than with DHCP. The only difference is that when using static everything will hang as it believes network is up whereas with dynamic, it'll just wait for the bridge to work and then get an IP and start everything.
<_Techie_> my bridge has a static ip
<stgraber> in that aspect, DHCP actually works better than static on bridged interface. Still, it's better to have static addresses on your network to avoid depending on a dhcp server, but that's just good practice.
<_Techie_> once again, you lot have mis understood what i was saying
<_Techie_> my DHCP server, listening on a bridge interface, is taking longer than expected to assign ip's
<_Techie_> im not talking about right after boot or anything
<stgraber> ah, I never noticed that specific issue (having all my DHCP servers in a bridged VM).
<_Techie_> when i was using a single interface, windows wouldnt complain about not being able to mount share due to not being given an ip
<stgraber> reading that last sentence makes me wonder if you're not speaking of bonding instead of bridging (single vs multiple interfaces)
<gobbe> sorry, i was talking about bonding
<stgraber> ok, now it makes sense :)
<gobbe> too messed up, havent got yet my breakfast coffee :D
<gobbe> sorry, but now i need to go, have to be at office in 45min
<stgraber> what kind of bonding are you using ?
<_Techie_> http://sprunge.us/TaIQ
<stgraber> ouch, yeah, if you do it that way you'll have issues for sure ...
<_Techie_> what the appropriate way of doing it?
<stgraber> you probably want to use bond0 and the bonding module instead of using bridging
<stgraber> http://wiki.debian.org/Bonding
<stgraber> your current config basically means: take eth0 and eth1, put them in the same switch and add an IP to it
<_Techie_> is there anywhere where i can see the differences between bonding and bridging?
<stgraber> which means it creates a loop on your network by linking eth0 and eth1
<_Techie_> and what does bonding do?
<stgraber> bonding takes the two links and will load-balance the trafic between the two. If one goes down, it'll fallback to using just one.
<_Techie_> okay, thats not what i need
<_Techie_> both interfaces are not connected to the same set of devices
<_Techie_> eth0 is a direct link to my computer, and eth1 is the rest of the network
<_Techie_> so by that description, bonding is definitely what im after
<_Techie_> not what im after*
<stgraber> ok, so bridging is indeed what you want in that case. Basically your server is going to act as a switch with your computer plugged in in one port and the rest of your network in another
<stgraber> and has itself an IP in that network
<_Techie_> yep
<_Techie_> as you can see, i had the two interfaces setup on different networks
<_Techie_> which was fine, untill i wanted to use an autodiscovering app across the two
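The setup stgraber describes — the server acting as a switch between eth0 and eth1, with its own address on the bridge — looks roughly like this in /etc/network/interfaces (a sketch; addresses are placeholders, and the bridge-utils package must be installed):

```
# /etc/network/interfaces (excerpt) -- both NICs joined in one bridge
auto br0
iface br0 inet static
    address 192.168.1.2       # example address on the merged network
    netmask 255.255.255.0
    bridge_ports eth0 eth1    # eth0: direct link to PC, eth1: rest of LAN
    bridge_stp off
    bridge_fd 0               # shrink the ~20s forwarding delay noted above
```

`bridge_fd 0` (forwarding delay) is the knob most relevant to clients timing out while waiting for a DHCP lease through the bridge.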
<stgraber> I don't see why DHCP would be slow, other than some weird MAC address mangling done by the server (which shouldn't happen)
<_Techie_> ill have a stab in the configs and come back
<stgraber> I'm about to go to sleep here anyway, getting quite late here.
<_Techie_> okay, have a good one stgraber
<stgraber> thanks
<chrislabeard> Hey guys can anyone help me format a secondary hard drive?
<chrislabeard> I'm having a heck of a time
<chrislabeard> when i do fdisk -l
<chrislabeard> it gives me /sdb1 and /sdb2 partitions when I try to format the sdb2 it says it doesn't exist ?
<chrislabeard> Any ideas
<pting> chrislabeard, so ls -alh /dev/sdb2 returns nothing?
<chrislabeard> ls: cannot access /dev/sdb2: No such file or directory
<chrislabeard> I just wanna wipe the drive clean but it just seems like its impossible.
<pting> chrislabeard, your box doesn't see it... do an ls \/dev/sdb*
<pting> whoops, remove that \
<pting> does it return aything?
<chrislabeard> gives me /dev/sdb and /dev/sdb1
<chrislabeard> So I guess the other partition is gone
<chrislabeard> Why does it list it when I do fdisk -l?
<pting> chrislabeard, so you should be able to wipe /dev/sdb
<chrislabeard> K can you walk me through it.. So I will know the correct way to do it.
<chrislabeard> I've tried so many different ways
<pting> chrislabeard, so you wan to whipe sdb and format with ext4 or something?
<chrislabeard> Yeah, I want to turn this hard drive into a TimeMachine drive so ext4 or 3 whichever
<chrislabeard> I can wipe the whole thing
<pting> chrislabeard, so you should be able to do sudo mkfs.ext4 /dev/sdb i believe
<chrislabeard> pting: k let me try
<pting> chrislabeard,  ... make sure that's the one you want to destroy though
<chrislabeard> yeah my other drive is sda
<chrislabeard> K its running
<pting> chrislabeard, i think timemachine might require specific settings, i dunno if macs can read ext4
<chrislabeard> if not I can just use that command and re format it
<chrislabeard> http://sidikahawa.blogspot.com/2010/03/setting-up-time-machine-server-on-my.html
<chrislabeard> that is the tutorial I was going to follow
<pting> chrislabeard, http://serverfault.com/questions/190840/mount-a-ext4-partition-on-mac-os-x
<chrislabeard> How do I see the mount point of the new drive?
<pting> chrislabeard, you should read up on http://en.wikipedia.org/wiki/Fstab ... it's basically done through /etc/fstab
<chrislabeard> Really I don't see the drive in there
<pting> chrislabeard, but just to get started and to test, you basically make a directory, then type the following sudo mount -t ext4 /dev/sdb /mnt/mydirectory
<chrislabeard> k cool I'll add that to the fstab
<pting> chrislabeard, you dont' add that line into fstab; fstab has a specific format specified in that link i sent you
<chrislabeard> oo
<pting> chrislabeard, that's a command you use if you don't want to include it in fstab
<chrislabeard> oo okay
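The fstab line corresponding to that mount command would look something like this (device and mount point as used above; a sketch — the columns are documented in fstab(5)):

```
# /etc/fstab -- one line per filesystem:
# <device>   <mount point>     <type>  <options>  <dump>  <pass>
/dev/sdb     /mnt/mydirectory  ext4    defaults   0       2
```

With that line in place, `sudo mount -a` (or a reboot) mounts the drive without spelling out the options each time.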
<chrislabeard> pting: So why ext4 and not ext3
<pting> chrislabeard, it's just the newer format; but i don't think macs can natively read ext4, so i would pick another fs for your time machine if it's an issue
<chrislabeard> K
<pting> chrislabeard, i've never tried time machine, so i don't know anything beyond that
<chrislabeard> well thanks for your help up to here
<IrishWristwatch> I thought the latest time machine update broke time machine for ubuntu.
<IrishWristwatch> Did they finally patch it?
<chrislabeard> I don't know this was posted march 2010
<chrislabeard> Is it not working now
<pting> damn you apple
<IrishWristwatch> Not sure, I thought I read somewhere that they slipped an undocumented change in the protocol and it broke time machine for ubuntu somehow.
<IrishWristwatch> It could be patched by now.
<chrislabeard> Yeah lol I hope so
<IrishWristwatch> I really don't know, sorry.
<IrishWristwatch> I guess... try it.  :)
<chrislabeard> Yeah I'm gonna
<chrislabeard> we will see
<pting> so how do you perform an unattended install of a package when it asks whether a package's config should be replaced or not? something to do with debconf-get-selections ?
<IrishWristwatch> does --assume-yes when running aptitude work?
<pting> IrishWristwatch, i want to keep the current config, but i'll check aptitude's params out
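The conffile prompt pting is asking about is a dpkg question rather than a debconf one, so debconf-get-selections won't silence it; the usual knob is a Dpkg option (a sketch, keeping the currently installed config, with a placeholder package name):

```shell
# Non-interactive install/upgrade that keeps existing config files on conflict
sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
    -o Dpkg::Options::="--force-confold" install some-package
# --force-confnew would instead take the maintainer's new version
```

aptitude's `--assume-yes` answers its own prompts but not dpkg's conffile question, which is why the Dpkg option is needed as well.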
<Syria> Hi, why .htaccess is not affecting my site? is it because of the permessions?
<Syria> When I try to change its permessions to 666 I get an error message.
<Syria> -rw-r--r-- 1 root root     205 2011-01-03 08:02 .htaccess
<joschi> Syria: check the value for AllowOverride for the directory
<Syria> joschi How can I check its value please?
<joschi> Syria: check your apache httpd configuration
<joschi> Syria: /etc/apache2/
<Syria> joschi is called apache.conf ?
<joschi> Syria: basically every file in that directory. apache.conf is only one
<Syria> joschi And which one  is respossible for the AllowOverride please?
<joschi> Syria: check it yourself. you big enough ;)
<Syria> joschi Okay thnx. :)
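To spell out joschi's hint: on lucid the directive usually lives in the default vhost, /etc/apache2/sites-available/default, where the stock value disables .htaccess entirely. The relevant stanza (stock paths; a sketch):

```
# /etc/apache2/sites-available/default (excerpt)
<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All    # stock value is "None", which makes apache ignore .htaccess
</Directory>
# then: sudo /etc/init.d/apache2 reload
```

File permissions like 666 are not required — the default root-owned 644 .htaccess is readable by apache; AllowOverride is what decides whether it is consulted at all.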
<Syria> joschi I am viewing the files using nano on my ubuntu lucid server and most of them are blank! is this normal?
<gobbe> what files?
<uvirtbot`> New bug: #696753 in net-snmp (main) "Cannot disable connection logging" [Undecided,New] https://launchpad.net/bugs/696753
<uvirtbot`> New bug: #696769 in clamav (main) "Please remove clamav source and binaries from lucid and maverick backports" [Low,Confirmed] https://launchpad.net/bugs/696769
<uvirtbot`> New bug: #696774 in bacula (main) "package bacula-director-mysql 5.0.1-1ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/696774
<twister004> hi guys... is there a way in cron where we can delete old backups created by a cronjob?
<twister004> im using ubuntu server 8.01LTS
<Jeeves_> * * * * * rm -rf /
<twister004>  8.04LTS
<twister004> haha
<Jeeves_> (that's a joke, obviously!)
<Jeeves_> Before anyone starts using it and blaims me for their stupidity :)
<Jeeves_> twister004: What are you talking about?
<twister004> maybe ill have to do some kinda date comparison.. can anybody guide me?
<Jeeves_> Which backups?
<twister004> i am backing-up sql databases
<_ruben> if it's a cronjob creating the backups, just extend that cronjob to delete old ones
<twister004> i want to delete backups that are older than a week
<twister004> _ruben... I'm new to cron.. can you guide me?
<Jeeves_> twister004: You're misinterpreting the purpose of cron
<Jeeves_> cron is just a scheduler
<Jeeves_> it runs tasks you create at time you tell it to run them
<twister004> yes... i know
<Jeeves_> So, your question has nothing to do with cron
<twister004> but i can use it to periodically check for old backups and delete them.. only.. i dont know how to do it
<Jeeves_> No, you cant
<Jeeves_> You can use it to run a script (you still need to create) to "periodically check for old backups and delete them.."
<twister004> hmm.. any idea how i can proceed?
<twister004> yes
<twister004> sorry.. that's more like it
<Jeeves_> So, shall we forget about cron for a moment?
<Jeeves_> Shall we?
<twister004> yes
<twister004> i think i got it....what i can do is "find" files older than seven days and delete it ( /usr/bin/fin <location> -mtime +7 -delete)
<Jeeves_> ok, so you've got this script that creates your backups.
<twister004> Jeeves.. can you confirm if this'll work"?
<Jeeves_> Yes, you can do that
<twister004> oh gr8!!!
<_ruben> assuming your /usr/bin/fin is something similar to the find command
<twister004> :)
<twister004> yes it is
<Jeeves_> And you try it with -ls first, before you erase whole / :)
<twister004> yes
<gobbe> i would do script which finds and deletes them
<gobbe> and run that from crontab
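Putting gobbe's suggestion together — a small script driven from cron, using find's `-mtime` test (the directory and filename pattern are examples):

```shell
#!/bin/sh
# delete-old-backups.sh -- remove backup files older than 7 days
# Directory is an example; pass a different one as $1.
BACKUP_DIR=${1:-/tmp/sql-backups}
mkdir -p "$BACKUP_DIR"    # create if missing so find doesn't error
# -mtime +7: modified more than 7 days ago.
# Dry-run first by using -print without -delete, as advised above.
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +7 -print -delete
```

Scheduled from `crontab -e` with a line like `0 3 * * * /usr/local/bin/delete-old-backups.sh`, cron then only does what it is designed for: running the script on time.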
<sander^work> How can I turn on the boot menu, so I can choose another kernel than default one?
<gobbe> press shift during boot
<uvirtbot`> New bug: #694538 in bacula (main) "package bacula-common 5.0.1-1ubuntu1 failed to install/upgrade: el paquete bacula-common ya está instalado y configurado" [Undecided,New] https://launchpad.net/bugs/694538
<RoyK> sander^work: or google for it and change /etc/default/grub
<gobbe> yes, depends that if you want to boot only once, it's easier to use shift on boot
<gobbe> if you want to change it for every boot editing ;)
<uvirtbot`> New bug: #695857 in apache2 (main) "ssl protection not default for sensitive packages" [Undecided,New] https://launchpad.net/bugs/695857
<sander^work> RoyK, I've googled it.. but cant find the option to change in /etc/default/grub.
<RoyK> sander^work: this is mine http://pastebin.com/WAFiTCf5
<RoyK> remember to run update-grub after changing that file
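The lines in /etc/default/grub relevant to getting a visible boot menu are roughly these (values are examples):

```
# /etc/default/grub (excerpt) -- show the menu for 10 seconds
GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=0     # comment this out so the menu isn't hidden
GRUB_TIMEOUT=10
# afterwards, apply with: sudo update-grub
```

For a one-off kernel choice, holding Shift during boot (as gobbe said) is quicker than editing anything.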
<pickscrape_> HI, would this be a good place for a couple of questions about using raid/mdadm, or should I go elsewhere?
<gobbe> what kind of question?
<_ruben> it could be
<RoyK> pickscrape_: don't ask to ask, just ask
<RoyK> there's no such thing as stupid answers, only stupid questions
<pickscrape_> Basicall my server has ubuntu server on it which is running on standard partitioning. I've installed a second on new drives in a raid1 (for boot) and raid5 (for everything else). It works perfectly fine.
<_ruben> RoyK: shouldn't that be the other way around?
<RoyK> :)
<sander^work> RoyK, thanks! :-)
<pickscrape_> Oh, and the "everything else" then has lvm on top
<RoyK> pickscrape_: so, something like sda1 and sdb1 in a mirror?
<pickscrape_> My question is, in order to make moving over to the new install easier, I'd like to be able to mount the raid stuff from my old install, so I can continue to use the old install while getting things ready on the new one. But I don't know if it would like that or not.
<RoyK> different drives for raid5 or same ones?
<RoyK> pickscrape_: use nfs
<pickscrape_> RoyK: this is the same server. They won't both be running at the same time.
<RoyK> pickscrape_: or do you mean boot into the old installation? sambe box?
<RoyK> ok
<RoyK> then it should be trivial, as long as the old installation support the filesystem used
<pickscrape_> Boot into old install and mount the new local volumes from the new raid array without screwing up the new install :)
<_ruben> if all drives can be hooked at the same time, it shouldn't be a prob
<pickscrape_> Yes, they're all running now.
<pickscrape_> And I'm pretty sure that 9.10 can handle ext4?
<RoyK> I can't see how you should be ruining your new install by just mounting the filesystem(s)
<RoyK> pickscrape_: cat /proc/filesystems or modinfo ext4
 * RoyK doesn't remember if 9.10 does support it, but would be somewhat surprised if it doesn't
<pickscrape_> Yeah, it's there.
<RoyK> well, just try
<pickscrape_> My worry was that if I tried to fiddle with the raid from the old install, it would play with settings stored on those drives and break it for the new install. I'm not sure how touchy such things are. :)
<RoyK> iirc all of that is autosense these days
<RoyK> anyway, it shouldn't be able to import something it looks on as broken
<RoyK> s/import/mount/
 * RoyK was thinking in ZFS lines
<pickscrape_> There's no /dev/md* devices at the moment so I need to figure out how to get it to detect them
<pmatulis> pickscrape_: you'll need to install mdadm and lvm2
<pickscrape_> pmatulis: I already have those installed
<pmatulis> pickscrape_: then you should be able to assemble the arrays
<RoyK> does mdadm tell you anything worthful?
<RoyK> perhaps the md mod isn't loaded
<uvirtbot> New bug: #696832 in drbd8 (main) "drbd will not start with given default global configuration" [Undecided,New] https://launchpad.net/bugs/696832
<pickscrape_> lsmod shows no md module, but mdadm is started
<RoyK> no raidxx modules either?
<pickscrape_> RoyK: yes, plenty of those (covering what I need too)
<pickscrape_> I think --assemble is what I need to look at
<RoyK> pickscrape_: and the old install sees the partitions, right?
<RoyK> //proc/partitions
<pmatulis> pickscrape_: scan and then assemble
<pmatulis> pickscrape_: did you use mdadm.conf?
<pickscrape_> RoyK: yes, though I have to use parted to do it. fdisk and cfdisk complain about GPT
<pickscrape_> pmatulis: the array was created for me using the ubuntu server 10.10 installer
<pickscrape_> So I'm not sure what it used to store the configuration.
<RoyK> hm... GPT might be the issue
<RoyK> dunno if 9.10 can figure out that automatically
<pickscrape_> RoyK: if I can access it manually then I'll be happy with this. Hopefully this old install won't be sticking around for too much longer. :)
<pickscrape_> If not, then I'll just have to do it the other way round when I'm not needing the server to be working: boot the new one, mount the old and copy over that way.
<pmatulis> pickscrape_: can you pastebin output to 'sudo fdisk -l'?
<_ruben> pickscrape_: doing it the other way around will probably turn out to be much easier in the end, due to the (probably) far less complex disk layout of the old install (easier to mount it in the new install)
<pickscrape_> pmatulis: http://paste.ubuntu.com/549859/
<pmatulis> pickscrape_: hm
<pickscrape_> _ruben: yes, I agree. I wouldn't need any help at all with that. The downside is that it's less convenient, since I'd like to be able to use the server as well. Plus, I like to learn. :)
<_ruben> ah ok
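The scan-then-assemble step pmatulis mentions, spelled out (a sketch; worth verifying each step's output before the next when another install also uses the array):

```shell
sudo modprobe md_mod          # in case the md module isn't loaded
sudo mdadm --examine --scan   # prints ARRAY lines for any members it finds
sudo mdadm --assemble --scan  # assemble everything it recognized
cat /proc/mdstat              # verify the /dev/md* devices came up
```

Assembling and mounting reads the metadata but doesn't rewrite it, which is why (as RoyK says) simply mounting from the old install shouldn't break the new one.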
<stack_>  I upgraded an 8.04 server to 10.04.  It has a software raid.  Every time it boots, it dumps to initramfs shell and I have to run "mdadm" to recognize the array and then exit to continue the boot.  How can I fix this?  One weird thing, my mdadm.conf file looks like the following: http://paste.ubuntu.com/549864/
<RoyK> stack_: no idea - perhaps removing (or renaming) mdadm.conf will force linux to autodetect things?
<stack_> I can wipe those duplicate lines in the config, right?
<RoyK> IIRC removing the file will force linux to read the config from the drives
<RoyK> which it should be stored
<pmatulis> stack_: what kind of errors do you get?  set up a verbose boot if you don't see messages
<pmatulis> stack_: it may be a UUID issue
<RoyK> UUIDs shouldn't change, though, should they?
<stack_> pmatulis: it just is waiting for root and then dumps to initramfs
<RoyK> stack_: are the raid drives independant or do you run the root on the same drives?
<stack_> the raid is the root
<RoyK> ok
<RoyK> sorry, never tried (yet) to upgrade from softmirrored root
<pmatulis> stack_: like i said, boot verbosely (--verbose or --debug kernel boot option)
<stack_> pmatulis: I'll have to try it and get back to you.  It's a production system.  We had a power outage, so I tried to figure it out then but didn't get far
 * SpamapS streeettcchhess
<stack_> thanks for your help, all... I'll dig further
 * SpamapS is trying "terminator" for the first time.. seems to be just the thing for a big monitor
<highvoltage> terminator ftw
<pmatulis> SpamapS: i don't think it does anything that normal screen can't but the mouse support is convenient
<RoyK> SpamapS: terminator?
<SpamapS> pmatulis: even if screen does everything it does.. for some reason this made a lot more sense to me than screen's help pages ever did
<SpamapS> Been using screen for years, never liked it much.
<highvoltage> pmatulis: you can also split a window (or tab) into many horizontal and vertical splits, it's quite useful
<pmatulis> highvoltage: but screen can do this too
<highvoltage> and you can make one of the splits temporarily grow to the full window size. combination of screen/byoby/terminator is really great
<SpamapS> and actually, I'm using byobu inside the terminator screens, which is probably silly but works fantastically
<SpamapS> I just have two vertical splits now, but when I want to do something, I split horizontally, do it, then close the window
<kirkland> SpamapS: what version of byobu are you running?
<SpamapS> kirkland: in my hosted CentOS box.. "old" .. and then whatever maverick has.
<kirkland> SpamapS: ah ... i was gonna show you something cool :-)
<SpamapS> I only use it for ctrl-a N and ctrl-a C
<kirkland> SpamapS: if you have a natty/byobu somewhere ...
<SpamapS> kirkland: upgrade to natty is my 9:00am TODO ;)
<sciguy16> Is it possible to give a normal user permission to reload apache?
<SpamapS> which is in 24 minutes
<RoyK> sciguy16: no
<kirkland> SpamapS: heh, okay, poke me once you do
<SpamapS> kirkland: poke queued
<sciguy16> RoyK: ok, thanks
<RoyK> sciguy16: you can allow it with sudo, though
<RoyK> sciguy16: man sudo is a good start
<SpamapS> ok this is weird..
<SpamapS> ssh agent isn't working in terminator
<highvoltage> pmatulis: indeed. I see them as two different completely different things though, they just happen to offer some of the same functionality
<sciguy16> RoyK: thanks, I'll look into it
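For the record, a minimal sudoers fragment for this; the user name and file name are examples, and sudoers files should always be edited via visudo so a syntax error can't lock you out.

```shell
# /etc/sudoers.d/apache-reload -- create with: visudo -f /etc/sudoers.d/apache-reload
# lets one named user reload apache2 without a password, and nothing else
sciguy16 ALL=(root) NOPASSWD: /usr/sbin/service apache2 reload
```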
<pmatulis> highvoltage: yes, i use terminator + screen
<SpamapS> actually strike that.. ssh agent just isn't working. :(
 * RoyK wonders when gnome apps came into ubuntu-server
<compdoc> I install the gnome desktop on server. is that bad?
<RoyK> it's ok, but if you want a desktop machine, I'd recommend installing ubuntu-desktop
<RoyK> the server stuff is there, or available, after all
<RoyK> and installing the desktop edition will allow for better customization, last I checked
<SpamapS> RoyK: terminals are the window to the server. :)
<compdoc> well, I need a server - its not used as a desktop. I just like the gui
<RoyK> post-installing gnome led to a few headaches last I tried
<RoyK> compdoc: it's the same thing, desktop and server, same distro
<RoyK> it's just that if you want a fancy gui, starting off with -desktop will make things easier for you
<RoyK> the only difference between ubuntu server and desktop are (1) default packages installed and (2) kernel chosen. for most use the desktop kernel will work just as fine as the server kernel
<compdoc> I thought I read that Server has things set up differently
<RoyK> just the server bits, which are minimal
<compdoc> I think to work better under load
<RoyK> that's HZ in kernel, yes
<RoyK> also, desktop comes with a gui-based network setup, so it might not work out of the box with a static ip (it's set after login)
<RoyK> but still, why do you want a desktop on your server?
<compdoc> I have a large rsync transfer going for several hours now, and gnome wont open any programs like a term window
<compdoc> so I have to wait
<RoyK> just kill rsync, restart it in a 'screen'
<RoyK> then you can do whatever you want without disrupting it
<compdoc> a screen?
<compdoc> a terminal window?
<Datz> ooh, 67 days uptime, 68 updates :)
<Datz> more than one update a day!
<Datz> average anyway  .. good work :p
<Datz>  11:22:08 up 67 days, 21:13,  2 users,  load average: 0.83, 0.94, 0.89
<pmatulis> compdoc: 'man screen'
<compdoc> thanks
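The suggested workflow, sketched (paths are placeholders; screen's commands are interactive, so this is illustration rather than something to paste blindly):

```shell
screen -dmS xfer rsync -a /src/ /dst/   # start rsync detached, in a named session
screen -ls                              # list running sessions
screen -r xfer                          # attach to it; detach again with Ctrl-a d
```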
<thesheff17> when I use vmbuilder I'm getting no bootable device.  Did something change recently?  I see the file for the virtual machine.
<thesheff17> nm I found a bug for it 659532.
<yeags> hey all
<yeags> i'm looking at building an Ubuntu private cloud
<yeags> Is it possible to have the nodes power up using WakeOn Lan?
<baggar11> yeags: that's more of a NIC requirement, than ubuntu-server based
<gobbe> wake-on-lan works quite bad in most of the cases
<gobbe> i would prefer iLO or something like that
<yeags> yeah iLO is much nicer, but i'm planning on a very cheap, basic home system
<yeags> I just wanted a single big server as the controller which could run 2-3 servers, and then have 3-4 laptops I have here that I can power up and run services on ondemand
<yeags> I was wondering if the software had any support for sending WOL packets to automatically bring up servers into the cluster to run more servers
<yeags> Currently I just use Xen and manually fire up my servers
<yeags> and manually move images from machine to machine
<yeags> UEC looked like a nice option with a much nicer front end
<yeags> So my plan is a 3TB main server with 8GB ram, quad core CPU
<yeags> run 3-4 servers on that, and then have my spare laptops with 2GB RAM run development servers when I need them
<yeags> I do a lot of technology validation and thought UEC would be a great platform for managing test systems
<yeags> I could easily use a linux WOL client to fire off the packets
<yeags> is my plan a sound one?
<gobbe> yes
<gobbe> it sounds ok :)
<yeags> :) smashing
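For the WOL part, assuming the BIOS and NIC actually support it (the MAC address below is a placeholder), the usual check-and-send sequence looks like:

```shell
ethtool eth0 | grep Wake-on   # on the target: 'g' means magic-packet wake is enabled
ethtool -s eth0 wol g         # enable it if not (root)
# from the controller (package: wakeonlan):
wakeonlan 00:11:22:33:44:55
```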
<yeags> I assume the main controller can also run servers without issue?
<RoyK> yeags: what sort of storage solution?
<yeags> well... I have options. I have a 6TB NAS (QNAP) and also the main server will have a 3TB array in it
<RoyK> for large storage, I'd recommend zfs on some good platform
<yeags> define large storage? :)
<RoyK> 5+TB
<RoyK> perhaps
<yeags> well my data will reside on the QNAP and I will access via iSCSI
<RoyK> thing is, large storage systems need to use checksumming
<RoyK> since silent errors occur quite often with multiterabyte systems
<yeags> I can't find any good architecture diagrams online
<RoyK> and zfs has fairly good checksumming for this
<RoyK> so with a box running multiple VMs, I'd say a separate box for storage would be decent
<yeags> what is performance like of a gigabit ethernet network when the server image is on a remote NAS storage?
<yeags> surely I would be better actually storing the images locally on the server?
<SpamapS> yeags: Not necessarily
<SpamapS> yeags: if the local storage is very busy with a lot of concurrent requests, and your NAS has tons of cache and disks to work with.. it may do a better job
<SpamapS> RoyK: whats the status of ZFS on Linux these days, any idea?
<yeags> http://en.wikipedia.org/wiki/ZFS#Linux
<yeags> not looking good
<uvirtbot> New bug: #696916 in mysql-dfsg-5.0 (universe) "package mysql-server-5.0 5.1.30really5.0.75-0ubuntu10.5 failed to install/upgrade: subprocess post-installation script killed by signal (Interrupt)" [Undecided,New] https://launchpad.net/bugs/696916
<yeags> and Oracle would sure screw it up if they could
 * yeags works for Oracle, unfortunately
<RoyK> SpamapS: works well on fuse
<RoyK> SpamapS: works better with openindiana :)
<SpamapS> RoyK: Is Nexenta's ZFS using FUSE?
<yeags> SpamapS: local storage will be much faster and responsive than the QNAP remote NAS
<yeags> I'm planing on non OS file systems being on the NAS
<yeags> like regular file shares
<gobbe> SpamapS: no it's not, nexenta is solaris-based
<RoyK> SpamapS: no, it uses a solaris kernel
<SpamapS> gobbe: ahh.. right
<SpamapS> I keep hearing that BTRFS has some serious problems with its metadata efficiency
<RoyK> SpamapS: btrfs announced on their ml that raid[56] was on the way some 20 months ago
<RoyK> seems there's a prerelease being announced soon
<SpamapS> thats good to hear
<RoyK> and btrfs isn't even close to zfs security-wise
<SpamapS> baby steps
<RoyK> SpamapS: really? if it took them two years to implement raid[56], how long to stabilise?
<SpamapS> RoyK: who are "they" anyway?
<SpamapS> wasn't Oracle a big component of BTRFS development?
<RoyK> oracle gave the source away some four years back iirc
<SpamapS> I'm more curious about who was doing the actual coding and designing
<RoyK> see the source
 * SpamapS must go afk... bbiab
<RoyK> usually the bits are pretty well credited
<sbeattie> SpamapS: chris mason is the primary developer for btrfs, he was working on it before he joined oracle. Not sure if he's roped other oracle devs (zach brown?) into developing for it.
<yeags> Chris is a development director in Oracle, has 6 developers reporting to him
<RoyK> 6 developers working on btrfs
<RoyK> seems perhaps oracle doesn't want btrfs to succeed
<RoyK> now they have zfs
<yeags> would
<yeags> wouldn't surprise me
<pting> regarding unattended upgrades of packages, is there something i can do to automatically select "keep existing config" when doing package upgrades... is there something i can set in debconf-set-selections?
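One common approach to the question above, for what it's worth: conffile prompts come from dpkg rather than debconf, so the usual trick is to pass --force-confold so locally modified config files are kept automatically. A sketch; test on a scratch system before automating it.

```shell
# non-interactive upgrade that keeps existing conffiles;
# --force-confdef takes the package default where the file was never touched locally
apt-get -o Dpkg::Options::="--force-confdef" \
        -o Dpkg::Options::="--force-confold" \
        -y upgrade
```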
<snipeTR> hi
<consumerism> i'm trying to diagnose a problem, not really ubuntu related but i figured this would be a good place to ask. occasionally, when i am navigating a website served by a ubuntu server in ec2, i get a timeout.  the problem usually persists for a few minutes and then goes away. apache access logs show no requests while the timeout happens, so the requests are never hitting the server (right?).
<consumerism> nothing changes on the client end, the browser loads pages one minute, times out for a few consecutive minutes, then is able to load pages again. load on the server is negligible. the best i can come up with is some network problem outside my control between client and server, is there any other possibility? what could i do about this?
<snipeTR> "Long-Therm" What is the difference?
<pting> consumerism, firewall issues? possibly some rate limiting policies in apache? try spinning up another instance in ec2 and do an httperf
<billybigrigger> where i can i poll imap/smtp info from?
<billybigrigger> kind of like phpinfo for php, is there anywhere i can get info on my mail system?
<snipeTR> hi guys? "Long-Therm" What is the difference?
<Pici> snipeTR: Are you asking about LTS - Long Term Support?
<snipeTR> yes what is this long term?
<Pici> !lts
<ubottu> LTS means Long Term Support. LTS versions of Ubuntu will be supported for 3 years on the desktop, and 5 years on the server. The current LTS version of Ubuntu is !Lucid (Lucid Lynx 10.04)
<snipeTR> ohh ok tnx pici
<snipeTR> you mean only the support time is different, true?
<Pici> snipeTR: Essentially.
<Pici> Also, LTS releases can be upgraded from one to the next without having to go through the intermediary releases.
<Pici> So. 8.04 to 10.04 directly. Instead of having to go to 8.10 -> 9.04 -> 9.10 -> 10.04
<snipeTR> tnx my friend. what's the source for this info? URL?
<snipeTR> Pici soo tnx byes
<pting> has anyone used ubuntu-eucalyptus extensively? i'm thinking of building out a cluster for work
<pting> ... we use ec2 so it seems like a perfect fit. i'm trying to figure out challenges i may encounter.
<SpamapS> pting: #ubuntu-cloud is also a good place to chat about it
<billybigrigger> using the ubuntu server guide...how come i can't send mail to gmail users?
<SpamapS> billybigrigger: what is your evidence that you can't send mail to gmail users?
<pting> SpamapS, thanks
<billybigrigger> the fact that i can send to hotmail works..
<billybigrigger> my test email to my gmail account didn't...this looks a little suspect though....
<billybigrigger> Jan  3 15:12:14 timmy postfix/smtp[7303]: certificate verification failed for gmail-smtp-in.l.google.com[74.125.93.27]:25: untrusted issuer /C=US/O=Equifax/OU=Equifax Secure Certificate Authority
<billybigrigger> i wonder if gmail doesn't like my self created CA
<billybigrigger> i don't know where the Equifax Secure CA is coming from though
<SpamapS> billybigrigger: that may be theirs
<SpamapS> billybigrigger: do you have ca-certificates installed?
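If the CA bundle is missing, a sketch of the usual fix for the Postfix SMTP client (root commands; the main.cf parameter is standard Postfix):

```shell
apt-get install ca-certificates
postconf -e 'smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt'
postfix reload
```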
<billybigrigger> i turned verbose logging of smtpd on, i'll pastebin my attempt to email my gmail account
<billybigrigger> Jan  3 15:20:05 timmy postfix/local[7406]: 5C2AA61CC: to=<www-data@mail.thefrozencanuck.ca>, relay=local, delay=0.02, delays=0.01/0/0/0, dsn=5.2.0, status=bounced (maildir delivery failed: create maildir file /var/www/Maildir/tmp/1294086005.P7406.timmy: Permission denied)
<billybigrigger> i keep getting those stupid errors in my mail.log too...any one have an idea where those are coming from?
<SpamapS> billybigrigger: thats a local delivery failure
<SpamapS> billybigrigger: www-data is getting email
<SpamapS> billybigrigger: You can alias email to www-data to something else.
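A minimal alias for that (the destination is an example; any real mailbox works):

```shell
echo 'www-data: root' >> /etc/aliases   # redirect www-data's local mail
newaliases                              # rebuild the alias database
```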
<billybigrigger> wonder if its my webmail client...
<billybigrigger> effin things up...
<billybigrigger> anyway back to gmail...
<billybigrigger> http://pastebin.com/1ficC5FT
<pmatulis> billybigrigger: i see:
<pmatulis> Jan  3 15:24:14 timmy postfix/smtp[7417]: C578361CC: to=<donzavitz@gmail.com>, relay=gmail-smtp-in.l.google.com[74.125.93.27]:25, delay=0.83, delays=0.36/0.01/0.24/0.22, dsn=2.0.0, status=sent (250 2.0.0 OK 1294086254 g3si36207791qcq.181)
<billybigrigger> it never shows in my inbox though
<billybigrigger> i've sent about 5 test emails today
<billybigrigger> all within an hour
<billybigrigger> haven't received 1 yet...hotmail test mail was instant
<consumerism> pting: if it was firewall, it would not be intermittent...right?
<pting> consumerism, doesn't sound like it, but maybe there's some sort of connection policy that throttles connections, i would first try to make repetative connections from within ec2 then try with a machine outside... or it could be that that particular instance is running on hardware that's having issues
<pting> consumerism, what's the instance size?
<pmatulis> pting: consider #ubuntu-cloud
<pting> pmatulis, consumerism is having the ec2 connectivity instance issues, not me =)
<pmatulis> pting: 'scuse me
<pmatulis> consumerism: see above
<erichammond> Amazon EC2 has rules about how many emails you can send per time period for new accounts.  You can get them lifted by applying for whitelisting.
<SpamapS> erichammond: I seem to recall smoser asking some amazon people and they basically said that SMTP from EC2 was unreliable and unpoliced by them.
<SpamapS> other than responding to spam reports by cancelling accounts I mean
<SpamapS> but maybe I misunderstood
<erichammond> SpamapS: There are real time block lists that ban email from EC2 IP ranges from time to time which I think is the primary "unreliable" factor.
<erichammond> See also: https://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request
<erichammond> I recommend sending email from EC2 servers through a non-EC2 email relay with a clean IP address.
<erichammond> You can either use a third party (gets expensive for high volume) or lease a simple server on any non-"cloud" provider (i.e., one that doesn't let new folks sign up with a credit card and rent computers by the hour).
<DesignsEdge> Afternoon all!  I hope that I can get some help here ---
<IrishWristwatch> Maybe you can, but we don't know what the problem is.
<DesignsEdge> well an end user just called with a problem - will brb -- sorry
<uvirtbot> New bug: #697027 in clamav (main) "clamav-base upgrade failed on Ubuntu 10.4" [Undecided,New] https://launchpad.net/bugs/697027
#ubuntu-server 2011-01-04
<RoyK> this is quite nice
<RoyK> I'm home sick with a leg in a cast, boss sent me an sms "are you logged onto the dhcp server?" replied "huh?"
<RoyK> he called me a minute later, the dhcp servers had been down for half an hour or so, I checked /var/log/daemon.log, complaints about timesync between the two servers, installed and configured ntp, restarted ntpd, dhcpd, done, about a minute's work, and boss was like WTF did you do?
<ScottK> To which you replied "Fixed it - now where's my check?"
<RoyK> I'd called in sick, his only reply was 'well, roy, you've done your job today again :)'
<RoyK> I love debugging unices when I get to the point quickly
<RoyK> those windoze nerds just blather and can't understand how something can be fixed that easily, or quickly
<RoyK> and WITHOUT A REBOOT
<yann2> reminds me when I started virtualising windows... "what a terrible idea! How will users press the reset button now when its being slow?" (true story sadly)
<RoyK> :)
<RoyK> windoze is defective by design
<pmatulis> yann2: did you ever get more info re 'kvm and memory usage'?
<yann2> pmatulis, nope...
<pmatulis> yann2: k
<RoyK> yann2: what was that about?
<yann2> is a vm with postgres, still swapping when having plenty of filecache, still no idea, still thinking I should add more ram
<RoyK> google swappiness
<yann2> thought about that, but didnt know what to set it to
<RoyK> linux will start swapping out things quite early by default
<yann2> and a good value for that is quite disputed
<RoyK> it's a good thing to swap out crap you don't need in ram
<yann2> and actually pmatulis told me not to, I think my original questoin was "what should i set swappiness to" :)
<yann2> RoyK, when 60% of ram is used it starts to get me worried
<RoyK> default value is 60
<RoyK> I usually set it to 100
<RoyK> yann2: the point is that you want as much memory usable as possible, stuff hanging around eating memory is of no use
<yann2> but when its claimed back it might take a lot of i/o, more than some filecache
<yann2> plus on a postgres VM, filecache is probably the db files... and swap the postgres memory
<RoyK> not necessarily, swapped out doesn't mean released from memory
<yann2> I meant used back, or read back :)
<RoyK> swapped out is an old term, these days things get swapped out and not released from memory
<yann2> just with a postgres vm, you end up having postgres trying to read as much as possible into memory, and the filesystem doing the same for the same data
<RoyK> so that the memory contains the data, but can release it quickly, or release it from swap
<RoyK> I don't think postgres has any caching
<RoyK> it just relies on the filesystem buffers
<RoyK> yann2: don't care too much about how much is swapped out - care about i/o load
<yann2> wonder what  it uses 600M ram for then :(
<RoyK> if sar/sysstat/vmstat/something tells you wio is high, well, that's an indicator for need of more memory
<yann2> RoyK, the thing is, it's a very sensitive VM, if it runs out of memory it might kill my data
<RoyK> yann2: don't bother
<yann2> and it would be good
<yann2> I certainly dont want my VM to run out of ram... and monitoring the swap is the best way I know to monitor it
 * RoyK guesses that last one should be 'no good'
<yann2> no good yes :P
<yann2> although its boring finance stuff :)
<RoyK> how is the i/o load of the vm?
<yann2> it has spikes
<RoyK> on average
<RoyK> apt-get install sysstat
<RoyK> enable it in /etc/default/sysstat
<RoyK> start it
<yann2> pasted you a link in private
<RoyK> that will show you io load over time
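The steps above, spelled out for reference (run as root; sar flags are standard sysstat):

```shell
apt-get install sysstat
sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat   # enable collection
service sysstat start
sar -u   # CPU over the day, including %iowait
sar -d   # per-device disk I/O
sar -r   # memory; sar -W shows swapping activity (pswpin/s, pswpout/s)
```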
<yann2> with detailed stats
<RoyK> well, that's ok
<RoyK> some spikes, but overall just fine
<yann2> maybe I should change my swap alerts levels
<RoyK> I really wouldn't worry about that
<RoyK> you're hardly using swap at all
<yann2> got an alert at 40% :)
<RoyK> and the memory stats look good
<RoyK> disk i/o load is close to zero
<RoyK> so don't worry
<yann2> like collectd? :)
<RoyK> well, add a truckload of memory and you might gain 10% performance
<RoyK> your system is not under heavy load
<yann2> I know :) just concerned its sometimes swapping out 400MB when it has 500MB filecache
<RoyK> linux does that
<RoyK> it swaps out stuff not in use
<RoyK> so that the memory can be used for something useful
<yann2> k
<RoyK> at swappiness = 60, being the default, it's quite moderate
<RoyK> at 100 it's a little more triggerhappy, better for most systems, at least those with low memory
<RoyK> low memory being less than ~200% of what the apps need
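Checking and setting swappiness, for reference; the value 100 is the one RoyK suggests above, not a universal recommendation, and the runtime change needs root.

```shell
cat /proc/sys/vm/swappiness                      # current value, default 60
sysctl vm.swappiness=100                         # change at runtime
echo 'vm.swappiness = 100' >> /etc/sysctl.conf   # persist across reboots
```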
<Ph0t0nix> hi all
<Ph0t0nix> could anyone help me recovering grub2 on a software RAID?
<RoyK> after an upgrade from 8.04?
<hallyn> jdstrand: is bug 696318 something you've ran across before?
<uvirtbot> Launchpad bug 696318 in libvirt "Starting VMs on qcow2 format with base images in level three fails" [Medium,Confirmed] https://launchpad.net/bugs/696318
<pmatulis> yann2: what disk format are you using with your plagued guest?
<quentusrex> Anyone familiar with Supermicro ipmi on Ubuntu 10.04 or 10.10?
<yann2> qcow2 pmatulis , on local disk
<yann2> its not THAT plagued
<yann2> sent you the graphs in pv
<yann2> anyway its late here, I think I'm off to bed :) see you tomorrow, and thanks to both of you for your help
<Frenk> Hey, I have a strange issue with hostnames on ubuntu. When I send an e-mail there is a line:  Received from damdamdam.org (ip6-localhost [127.0.0.1]) but my hosts file does not have damdamdam.org ...
<qman__> Frenk, mail is sent "from" the mailname you specified when configuring your mail server
<Frenk> qman__: I mean something else: the from adress is okay. But when I open all headers: Received from damdamdam.org (ip6-localhost [127.0.0.1]) by myrightdomain.org (Postfix) with ESMTP id 223 for <testmail@gmail.com>; Tue, 4 Jan 2011 02:32:41 +0100 (CET)
<Frenk> and I want to get rid of damdamdam.org
<qman__> yes, this is what I'm talking about
<qman__> that name is the system's mailname
<qman__> what that name is has nothing to do with DNS, though you're supposed to choose a DNS accurate name
<Frenk> cat /etc/post*/main.cf | grep my >>> myhostname = myrightdomain.org
<qman__> that is not the mailname
<qman__> cat /etc/mailname
<Frenk> output is: static.MY-IP-ADRESS.clients.HOSTER.com
<qman__> well, if it's coming from somewhere else, grep -R damdamdam.org /etc
<qman__> might be the sender specifying that in headers too
<Frenk> qman__:  I found smth but there are all domains listed that Im using. I just run grep for / ... waiting ..
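To see and fix what Postfix is actually announcing, a sketch using the domain from this discussion (substitute your own; root commands):

```shell
cat /etc/mailname                           # what Debian/Ubuntu packaging uses
postconf myhostname myorigin mydestination  # what postfix is configured with
echo 'myrightdomain.org' > /etc/mailname    # set the desired name
postfix reload
```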
<fubada> anyone use Adaptec 2240900-R?
<fubada> or Rosewill RC-218
<fubada> need more sata ports for my sw raid not sure which is best for ubuntu
<twb> I don't use hardware raid.
<fubada> i dont either
<fubada> i want to use sw raid
<fubada> i just need more ports.
<twb> If you just need SATA sockets for your md RAID, just buy a $10 SATA controller card.
<twb> It shouldn't matter a damn as long as you can say "don't fakeraid", which most can -- and if the one you get can't, just buy another $10 card with a different controller.
<fubada> the rosewill is the only option i found on newegg thats nonraid, 4xsataII,
<fubada> pci-e 4x
<twb> They all CLAIM to be raid, but it's fakeraid
<fubada> yes i know
<twb> don't worry about it
<fubada> i was lookng at a 3ware card but thats way overkill
<fubada> i dont need 300mb/sec
<twb> Just buy three $10 ones of different makes (and chipsets, if you can confirm that)
<twb> One of them will work and support "no raid" as an option
<fubada> heh in my case i can only use 1. its on a atom ion2
<fubada> 1xpci-e
<twb> Yes, but buy three at once so you have a better chance of ONE of them working
<fubada> LOL
<fubada> oh its like that
<fubada> thanks
<qman__> I have a couple promise TX4s
<qman__> the only features you might be interested in are NCQ and friends, for performance
<qman__> everything else is irrelevant as long as it can operate in non-raid mode
<twb> NCQ is the noop call, right?
<twb> Oh, no, it's more like an interrupt protocol.  Never mind
<twb> http://en.wikipedia.org/wiki/NCQ
<fubada> k i do want ncq thanks
<twb> fubada: eh, depends if it's gonna be a database server or just host users' word documents :-P
<qman__> if you have more than four drives, you're probably CPU and network limited anyway
<fubada> twb ill surely have mysql or some db backend for services
<fubada> i have 4 drives for md raid
<fubada> and ssd for os
<fubada> http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=12215494
<qman__> raid 10/0+1?
<fubada> i was thinking raid5
<fubada> 4x2tb's
<fubada> md raid
<qman__> going to be CPU limited
<fubada> really? dual core 1.8
<qman__> reads will be fast, but writes will be CPU limited
<qman__> still pretty fast, but don't expect more than ~60MB/s
<fubada> hm
<qman__> it has to calculate the parity on the fly
<qman__> raid 10 would not have that problem
<fubada> can linux do sw raid10
<qman__> but gigabit ethernet will cap your throughput around 80MB/s
<qman__> so unless that throughput is needed locally, not really worth it
<qman__> yes
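Creating a 4-disk md RAID 10 looks like this; the device names are examples, and this destroys any data on them:

```shell
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
cat /proc/mdstat        # watch the initial resync
mkfs.ext4 /dev/md0      # then make a filesystem as usual
```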
<fubada> so what is the best option? a 3ware card runs 320$
<fubada> but i dont need that kinda read/write
<qman__> not worth it, for that scale
<fubada> raid 1?
<qman__> if you need more throughput, spend more on a faster CPU and better drives, or just go raid 10
<fubada> raid 5 is too much cpu, im not familiar with raid10
<fubada> the drives are wd greens for low power
<fubada> and low heat
<qman__> green drives are the bottom of the barrel
<qman__> they like to advertise low power but they're very slow and won't last under heavy loads
<qman__> but again, it all depends on your needs
<qman__> if it's just storing data backups, no problem
<qman__> if you're hosting a high traffic database, get something else
<fubada> htpc, home nas
<fubada> backups
<fubada> seed box for my torrents
<fubada> 500/sec 24/7
<fubada> 500k/sec
<qman__> you're going to need a faster CPU if you want good throughput with that going
<qman__> encoding/decoding video and running torrents are going to put notable load on it already, plus a raid 5
<fubada> eh i dont think it'll be doing any encoding
<fubada> ion2 is onboard for gfx
<qman__> let me put it this way
<fubada> :)
<qman__> my file server, 8 disk raid 6, runs torrents
<qman__> 2.2GHz single core CPU
<qman__> my writes under normal circumstances cap around 35MB/s
<qman__> when I start a bunch of torrents the server load exceeds 20 for several minutes
<fubada> what.cd?
<qman__> nah, 10-20 torrents
<fubada> damn
<qman__> and normal, seeding load is .10-.15
<qman__> that chip is dual core, so it is faster than mine, and raid 6 has twice the parity to calculate
<qman__> but you're still looking at notable overhead
<qman__> you're going to be waiting on it when you start big torrents, and your writes probably won't cap your gigabit
<Frenk> Mh can anyone explain this to me: https://help.ubuntu.com/10.04/serverguide/C/certificates-and-security.html there is some information: http://paste.ubuntu.com/550071/ What does this paragraph mean? Its my first SSL experience (Official SSL from a CA) - I mean eg. if Apache or Cyrus crashed, does it mean my watchdog cant start them again until I type in my password again for the cert?
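What that paragraph in the server guide is getting at: if the private key is passphrase-protected, Apache or Cyrus will prompt for the passphrase at every start, so a watchdog cannot restart them unattended. The usual workaround is keeping an unencrypted copy of the key with tight permissions. A self-contained demonstration with a throwaway key (file names and the demo passphrase are placeholders):

```shell
# generate a passphrase-protected demo key, standing in for the real one
openssl genrsa -aes128 -passout pass:secret -out server.key 2048
# strip the passphrase into a copy the daemon can read without prompting
openssl rsa -passin pass:secret -in server.key -out server.key.insecure
chmod 600 server.key.insecure   # readable by root only
# point the service config at server.key.insecure so it starts unattended
```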
<fubada> got it
<fubada> the goal was to build a lower power box, so i'll have to sacrifice on services
<fubada> thats fine
<qman__> ok, just so you're aware
<fubada> appreciate the info thank you
<qman__> it'll work fine, just a bit slower
<fubada> i wanted to run xen on this lol
<dschuett> how do you make it so you can access your webpages internally like this: http://internal.ip.of.server/directory_of_webpage
<fubada> qman__, can you think of a better 5400rpm drive
<fubada> than the wd caviar green
<dschuett> i am currently using virtual hosts
<qman__> I don't have suggestions there, I use 7200RPM drives
<qman__> samsung HD103SJs, to be exact
<fubada> nice
<fubada> 72k's run too hot
<fubada> 4 stacked will run really hot
<qman__> in my experience, WD drives run a lot hotter than other brands
<qman__> though it has not affected longevity
<qman__> 10C on average
<qman__> the green drives may be a different story, I don't know because I don't use them, but for everything else that has been my experience
<fubada> there's a samsung green drive
<fubada> 5400
<fubada> its 20bucks cheaperr
<fubada> SAMSUNG Spinpoint F4 HD204UI
<qman__> one suggestion, make sure you configure SMART monitoring
<qman__> I lost my first array when three drives failed in one day, and I already had to RMA one drive from this new array
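A minimal smartd setup for that suggestion; the self-test schedule and mail target are examples, and on 10.04-era Ubuntu the daemon also has to be enabled in /etc/default/smartmontools.

```shell
apt-get install smartmontools
# scan all drives, monitor attributes (-a), enable offline collection (-o) and
# attribute autosave (-S), short self-test nightly at 02:00, mail root on problems
echo 'DEVICESCAN -a -o on -S on -s (S/../.././02) -m root' >> /etc/smartd.conf
sed -i 's/^#start_smartd=yes/start_smartd=yes/' /etc/default/smartmontools
service smartmontools restart
```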
<fubada> 3 drives in one day
<fubada> earthquake?!
<qman__> no, just plain drive failure
<qman__> those were seagate 7200.11 500GB
<qman__> two an hour apart
<fubada> i have 1 drive fault tolerance with raid10?
<qman__> the third a few hours later
<fubada> 4x2tb
<qman__> any one drive, and in some cases two drives
<fubada> cool
<qman__> it depends on which two, though
<fubada> right
<fubada> are you using myth
<fubada> mythtb
<fubada> tv
<qman__> no, my file server isn't also an htpc
<fubada> :)
<qman__> I tried to set up myth on a different machine but it wouldn't work with my graphics
<qman__> had to install windows, unfortunately
<fubada> im so enraged at this cablecard issue
<fubada> encryption of hdmi, lack of cablecard support in linux
<fubada> sdv
<fubada> this makes mythtv not even worth it
<The_Tick> it's not if you have encrypted qam channels
<fubada> i want to fire my cablecompany now
<The_Tick> I just want to buy the channels I use
<The_Tick> and not the other shit
<fubada> exactly
<The_Tick> dream on ;)
<fubada> im going to fire my cabletv
<The_Tick> what do you watch anyhow?
<fubada> a&e
<fubada> nfl
<fubada> history
<fubada> natgeo
<The_Tick> nfl you're set
<fubada> espn
<The_Tick> get an xbox
<The_Tick> and downlod the espn app
<The_Tick> :D
<fubada> i have ps3 :P
<The_Tick> does the ps3 have the espn app? ;)
<fubada> i doubt it
<The_Tick> anything that plays on espn
<The_Tick> you can play on the xbox
<fubada> i think you can stream hd nfl games for 100$/yr
<fubada> in flash
<The_Tick> what on a&e?>
<fubada> horders and intervention
<The_Tick> haha
<fubada> other crap
<The_Tick> ya, same
<fubada> i really liked that show on amc
<fubada> about meth...
<fubada> doh whats the name
<fubada> espn signed a deal with MS
<fubada> no espn for ps3
<The_Tick> right
<fubada> The_Tick, do you own a hauppage hd-pvr
<The_Tick> nope
<The_Tick> fubada: I have an xbox
<The_Tick> and I'm almost off of cable
<fubada> thats one way to have mythtv with cablecard
<The_Tick> really?
<fubada> but can only record/watch one channel
<The_Tick> hrmm
<fubada> yea it just records and encodes 1080i
<The_Tick> so get more than one computer
<fubada> and USB to your box
<fubada> the media/communication companies in this country are filthy pigs.
<tom_> oh dear - I think I've done something stupid. I have 320000 files in one directory (small css files).Am I reaching somehow some limits here on ext3?
<tom_> just thinking of running out of inodes or something.
<shauno> tom_: 'df -i' will show you your filesystem usage by inode count, I'd keep an eye on that as the first sign of things going funky
<tom_> uff - still shows 26% free ... :-) but an ls | wc -l takes like 5 minutes. I guess I'm best off deleting the whole directory, they are just temp files.
<shauno> I think what you'll start to run into first with that kinda scenario is the limits of shell globbing
<twb> tom_: by default GNU ls will sort records
<twb> tom_: there is an option to prevent this, which will make it emit results as they arrive from the directory listing syscalls
<twb> Alternatively, find -ls or just find
<stlsaint> hello folks
<twb> Oh, and 320,000 files in a single dir will not be handled very well by ext[23] or fat -- possibly not by any block-oriented filesystem.  Try an extent-oriented filesystem?
<stlsaint> could someone help me to install grub to from cli?
<twb> stlsaint: "grub-install /dev/sdz", where sdz is your disk.
<tom_> I don't even want them !! I think I must have screwed up in a php script.
<twb> stlsaint: if that doesn't work, you're on your own -- I use extlinux.
<stlsaint> i installed server edition, but grub installed on the usb stick and not my master harddrive
<tom_> but it should be safe to just delete that directory right?
<twb> tom_: find /tmp/foo -name 'php_??????.txt' -delete
<twb> tom_: yes, or just rm -r the parent dir
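A small demonstration of the unsorted listing and the targeted delete, on a scratch directory standing in for the real one:

```shell
d=$(mktemp -d)                             # scratch stand-in for the real directory
touch "$d/php_aaaaaa.txt" "$d/style.css"
ls -f "$d" | wc -l                         # -f skips sorting, so huge dirs list fast
df -i "$d" >/dev/null                      # -i shows inode usage per filesystem
find "$d" -name 'php_??????.txt' -delete   # removes only the matching temp files
ls "$d"                                    # style.css survives
```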
<twb> stlsaint: that is a known bug
<twb> stlsaint: I don't know what the workaround is, other than unplugging the USB key after you've loaded its kernel and ramdisk into memory.
<stlsaint> oh man there has to be a better way than that!
<twb> stlsaint: it'll be fixed in natty (and maverick?)
<tom_> thx for ur help twb shauno, that number just was astronomical
<stlsaint> 10.04
<twb> stlsaint: of course you can also fix things post-facto by booting a live image and using grub-install from there.
<twb> stlsaint: or you could install 8.04 normally, then upgrade it to 10.04
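The live-image route, sketched; the device names are examples and must match the actual layout:

```shell
# from a live CD/USB shell, as root
mount /dev/sda1 /mnt                 # the installed root filesystem
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt grub-install /dev/sda    # install to the hard disk's MBR, not the USB key
chroot /mnt update-grub
```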
<stlsaint> this is a pretty big bug to let go unwarranted
<stlsaint> smh
<twb> Er, it's not "unwarranted".
<twb> It has been fixed.
<twb> Since it does not affect the supported install media, I assume that the release policy didn't require it to be backported to stable releases.
<stlsaint> twb: aye, i see what you mean but is it corrected in 10.04 as i see this being the most important as it is a LTS
<twb> stlsaint: AFAIK you're using an unsupported install medium, so if it breaks, you get to keep both pieces
<stlsaint> so the only supported installs are cd?
<stlsaint> i wouldnt understand how a net install would be supported...
<stlsaint> but a usb is not...
<twb> AFAIK only CD installs are officially supported.
<stlsaint> smh
<twb> FWIW a 10.04 netboot kernel and ramdisk can be bootstrapped from a USB key, then the USB key removed, and then the install completed from the network.  This is how I perform installs when the PXE server is down.
<stlsaint> aye
<stlsaint> that is good to know
<twb> That's these: http://archive.ubuntu.com/ubuntu/dists/{hardy,lucid,maverick}/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/{linux,initrd.gz}
<twb> Within that tree are boot.img and mini.iso that are basically the same content, but pre-rolled with a bootloader.
<stlsaint> twb: i went with the re-install of grub from livecd (usb)
<stlsaint> twb: thanks for your assistance
<twb> stlsaint: OK
<uvirtbot> New bug: #697105 in apache2 (main) "Segfault on POST" [Undecided,New] https://launchpad.net/bugs/697105
<Guest17443> where i can discuss about ubuntu amazon or UCE images - any idea - is it the right room?
<xampart> my log is filling up with collectd rrdtool -errors like these http://pastebin.com/Fm1LDc7V any experiences?
<twb> xampart: install ntp?
<macno> Hi, when I start the cman service in 10.10 I see these messages in daemon.log http://paste.ubuntu.com/550140/
<twb> xampart: the first line looks like it's whinging because an epoch time counter somewhere went backwards
<twb> Or rather, stood still
<twb> xampart: what happened at Mon Jan  3 08:19:39 UTC 2011 ?
<twb> macno: udev is pissed because the rules file you just installed expects (presumably) an older kernel
<twb> macno: try copying it from /lib/udev/rules.d/ to /etc/udev/rules.d/ and removing the "misc/"'s
<macno> I upgraded from 10.4 to 10.10
<macno> let me try
<twb> macno: note that I'm just guessing; I'm not familiar with "cman" nor 10.10
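A sketch of twb's suggested workaround. The rules filename is an assumption (check `ls /lib/udev/rules.d/` for the real cman/dlm file); the runnable part only demonstrates what the `sed` edit does:

```shell
# Sketch (the "51-dlm.rules" filename is a guess -- find the real one first):
#   sudo cp /lib/udev/rules.d/51-dlm.rules /etc/udev/rules.d/
#   sudo sed -i 's|misc/||g' /etc/udev/rules.d/51-dlm.rules
#   sudo udevadm control --reload-rules
# The sed simply strips the "misc/" prefix from the matched device name:
printf 'KERNEL=="misc/dlm-control", MODE="0660"\n' > /tmp/demo.rules
sed -i 's|misc/||g' /tmp/demo.rules
cat /tmp/demo.rules          # KERNEL=="dlm-control", MODE="0660"
rm /tmp/demo.rules
```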
<xampart> twb: nothing special in syslog or messages
<xampart> and ntp is installed
<no--name> is it safe to use ubuntu-server as a desktop os?
<no--name> does it have all the security stuff that ubuntu-desktop has?
<TREllis> no--name: yes, apart from the packages installed by default from the server cd will be different and the kernel installed will be the -server flavor
<no--name> TREllis: is there anything I should install security wise that -server doesn't have installed by default?
<TREllis> no--name: the server install will have a very minimal package set, so by default you shouldn't have any ports open or anything. I'd start off with reading through the server guide, it has a fairly brief overview of things to look at securing, really depends on what services you want to run. https://help.ubuntu.com/10.10/serverguide/C/index.html
<no--name> ok
<no--name> so in the end it will be a lot more work than just going with -desktop?
<TREllis> no--name: "more work" would depend on what you want to use it for
<no--name> ok
<twb> no--name: define "safe"
<no--name> twb: as safe as ubuntu-desktop
<soren> It's safer.
<soren> Way safer.
<soren> It has almost nothing installed by default.
<soren> Less attack vectors.
<twb> no--name: IMO -server is not significantly LESS safe than -desktop; it may be MORE safe.
<twb> no--name: they differ primarily in the set of packages installed by default; they both back onto the same package repository.
<no--name> twb: but I want to use it like I'd use -desktop
<no--name> just because it has less bloat
<jussi> argh, what is the package that provides the command "locate" ?
<twb> no--name: perhaps you should install from the "alternate" CD, choosing the "cli" option and then installing only what desktop packages you think you need.
<twb> jussi: mlocate (or locate or slocate, depending on vintage and preference).
<jussi> twb: ok, what is the one that usually ships with maverick?
<twb> jussi: apt-file can tell you, in future.
<twb> jussi: mlocate, I expect
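The apt-file workflow twb refers to looks roughly like this (session sketch; the output line is illustrative, not captured from a real run):

```shell
$ sudo apt-get install apt-file    # package-contents search tool
$ apt-file update                  # fetch the Contents index (needs network)
$ apt-file search bin/locate       # which packages ship a "locate" binary?
mlocate: /usr/bin/locate
```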
<jussi> they didnt put it onto my server... :/
<no--name> twb: ok. I was just wondering if there were packages I'd need to add for security if I were to use it as a desktop OS
<twb> no--name: conventional wisdom is that installing packages makes a system LESS safe
<no--name> ok
<no--name> what concerned me is that update manager on -desktop comes up with all sorts of "security updates" but on -server it just comes up with kernel stuff. I guess that's because these security updates are for packages that aren't on -server?
<twb> no--name: that is simply because you have fewer packages installed.
<no--name> ok, cool :)
<twb> no--name: security updates apply to both.
<no--name> both what?
<twb> no--name: both -server and -desktop.
<no--name> hmm
<twb> no--name: they back onto the same security repository.
<no--name> why is it that when i ran the update manager on -server i only got kernel stuff?
<twb> 20:54 <twb> no--name: that is simply because you have fewer packages installed.
<no--name> oh, right :/
<twb> Well, it may also be because you screwed up something and updates aren't being received
<no--name> hmm
<no--name> it's too bad I don't have it installed right now
<no--name> I'll install it on vbox and see how it goes.
<Syria> Hi, please tell me if this is correct   cp -r var/www/site1/folder   var/www/site2/folder
<no--name> twb: would recommend "install security updates automatically" or "manage system with landscape"?
<no--name> would "no automatic updates" have update manager manage them instead?
<twb> no--name: unless you have a support contract with canonical, there's no point in choosing landscape
<no--name> ?
<no--name> hmm, oh well. should i choose automatic or no automatic (seeing i will be using update manager)
<twb> No idea.  I do not support desktops.
<no--name> ok
<no--name> well, i'll try no
<no--name> last time i did install them automatically
<Syria> please tell me if this is correct   cp -r var/www/site1/folder   var/www/site2/folder
<no--name> maybe that's why the update manager only came up with the kernel stuff
<soren> Syria: How are we supposed to know?
<Syria> soren am i using the right command? because i am getting a message telling me that folder was not found.
<soren> Syria: Again: How are we supposed to know? That command copies a folder called var/www/site1/folder to var/www/site2/folder. How can we tell if that's what you want?
<no--name> should be a / in front of var
<soren> How can you know that?
<Syria> no--name thnx, I will try it.
<no--name> cp -r /var/www/site1/folder /var/www/site2/folder
<no--name> and only use one space
<Syria> Okay.
<soren> Doesn't matter.
<soren> At all.
<no--name> ok
<no--name> well the / in front matters if you're not in the / directory
<soren> Unless you're in /home/foo/ and the folder you actually want to copy is called /home/foo/var/www/site1/folder, which is perfectly valid.
<no--name> hmm, true
<no--name> didn't think about that
<Syria> no--name I had to add the / It works now.
<no--name> cool
<Syria> Thank you guys.
<no--name> np
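soren's point about relative paths, as a runnable demonstration (the directories are throwaway examples):

```shell
# Without a leading "/" a path is resolved relative to the current directory,
# so the same cp command means different things depending on where you run it.
mkdir -p /tmp/pathdemo/var/www/site1/folder
cd /tmp/pathdemo
cp -r var/www/site1/folder /tmp/pathdemo/relative-copy           # relative: resolves under /tmp/pathdemo
cp -r /tmp/pathdemo/var/www/site1/folder /tmp/pathdemo/absolute-copy  # absolute: unambiguous
ls /tmp/pathdemo             # relative-copy and absolute-copy both exist now
rm -r /tmp/pathdemo
```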
<Syria> Using the terminal is much cooler than gui and comfortable to me, but I am still having a problem with learning the commands.
<Syria> I have a vps with Lucid server installed on it.
<no--name> yeah i like using the terminal
<Syria> Guys do you have an idea why my .htaccess file doesn't work? whatever I type in it? for example "deny from all"
<Syria> 644 permissions.
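A likely cause, though an assumption since Syria's Apache config isn't shown: Ubuntu's default vhost sets `AllowOverride None`, which makes Apache silently ignore `.htaccess` files regardless of their contents or permissions. Sketch of the fix:

```shell
# In /etc/apache2/sites-available/default, inside the <Directory /var/www/>
# block, change:
#   AllowOverride None
# to:
#   AllowOverride All
# then reload Apache:
#   sudo /etc/init.d/apache2 reload
```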
<uvirtbot> New bug: #697181 in php5 "DoS: Infinite loop processing 2.2250738585072011e-308" [Undecided,New] https://launchpad.net/bugs/697181
<Pupeno[work]> Hello.
<Pupeno[work]> I have a server that quite often runs out of ram and stops responding all together and needs to be restarted. Any ideas where to start to debug the problem?
<xperia> hello to all. i am trying to send email from my php scripts but somehow in the mail logs i always get the error message
<xperia> Invalid mail address, must be fully qualified domain
<xperia> looks like a configuration problem with php and postfix because i am able to send mails over telnet
<xperia> anyone know how to fix that
<Syria> xperia install rouncube.
<xperia> Syria: i have roundcube and it works great. I am able to send and receive mails without any problems with roundcube but when i try to send an email for an event from one of my scripts it does not work ! normally this worked until now with the php function "mail"
<nooder> hello. if i have a 2 socket server, is the total ram available divided by 2?
<uvirtbot> New bug: #697197 in libvirt (main) "Empty password allows access to VNC in libvirt" [Undecided,New] https://launchpad.net/bugs/697197
<zoopster> nooder: ??? sockets have nothing to do with the available RAM.
<zoopster> nooder: available cpu is just that...something that an app can utilize...same goes for RAM.
<nooder> zoopster, so if i setup 16 + 16 gb, for 1 db instance (mysql) gonna be 32gb available?
<zoopster> if you have a server with 32gb, then that ram will be allocated to what needs it....if you have a 30gb db and you cache it all...it will use 30g if it's available...maybe I don't understand your question
<nooder> zoopster, there are mbs that have 2 sockets. each socket has 6 ram slots near it. will 1 application have access to both processors and both ram blocks?
<zoopster> I guess it depends on how the mb is architected. It could be two distinct mainboards on one, but is likely to just be organized in a dual-channel configuration
<zoopster> in most inexpensive mainboard configs all cpu and all ram are available as a whole to the os and any apps running on it
<zoopster> in very high end mainboards there are proprietary configurations that allow for the separation, but you would know that if you are working with a proprietary config
<nooder> zoopster, is there a cheap way to get 64gb ram or more? because 16gb modules costs really much :(
<sn0man> @nooder what are you trying to use memory for?
<sn0man> apache? nginx or something else?
<nooder> mysql cache
<nooder> put whole innodb inside ram
<sn0man> why not try some SSD drives?
<zoopster> nooder: as you increase the amount of ram on a module the price increases rapidly
<reisi> nooder: buy an actual battery-backed ram disk
<nooder> there is point to use fusionio
<zoopster> the question really is why do you want to cache the entire db?
<reisi> nooder: though, are you sure you need to have a 30GB database in-memory?
<nooder> zoopster, thats a standard operation. if all db fits in ram there is 3x increase in performance
<sn0man> they have a point, kinda impractical to put entire db in RAM... what happens when your DB exceeds your RAM later?
<zoopster> yes, but if you are looking for performance, there are many other ways to go about it in addition to caching the entire db
<zoopster> it USED to be standard practice...before SSDs, nosql, etc
<sn0man> buy a battery backed raid controller, setup a Raid10 with 4 SSD drives
<nooder> hm, there is a point in that
<nooder> just never dealt with ssd
<sn0man> response time is really what your after
<nooder> so how much is 1 ssd about?
<nooder> 100+gb
<sn0man> 200 bucks maybe?
<nooder> thats much cheaper than ram anyway
<sn0man> but if you go 60gb they're as low as 80 bucks maybe?
<nooder> anybody used intel platforms?
<sn0man> what OS are you running?
<nooder> fedora 14
<nooder> ext4
<sn0man> what kernel
<nooder> just thinking for now, buy or not supermicro
<nooder> .35 i think
<sn0man> 2.6.33+ allows for TRIM support I believe, which you'll want for sure
<nooder> did you hear about issues with ssd?
<nooder> failures or smth
<sn0man> OCZ is actually a good choice, I seem to recall reading somewhere they have some exclusive crap deal with Sandforce (the SSD controller on the devices) that gives them exclusive access to firmware and such
<sn0man> where as other companies have to use reference firmware from sandforce.   Some licensing bullshit
<sn0man> Whatever SSD disk/company you choose, spend an hour and read through their support forums before purchasing
<sn0man> and read up on TRIM and Garbage collection for your OS
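Checking for TRIM and enabling it looks roughly like this (the device name is an example and the hdparm output line is illustrative):

```shell
$ sudo hdparm -I /dev/sdb | grep -i trim     # does the drive advertise TRIM?
           *    Data Set Management TRIM supported
# On a TRIM-capable kernel (2.6.33+), ext4 can issue TRIM on delete via the
# "discard" mount option in /etc/fstab, e.g.:
#   /dev/sdb1  /var/lib/mysql  ext4  noatime,discard  0  2
```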
<nooder> for os i will use regular hdd
<sn0man> TRIM and GC will keep the drives healthy and fast
<nooder> i think no need for os in fast drive
<RoyK> iirc there's no TRIM in Lucid
<RoyK> that is, trim came into 2.6.33, so unless someone has backported it to the current lucid packages, there's none
<sn0man> he said he's got 2.6.35
<sn0man> so he should be ok there
<RoyK> oh, well, then it should be there
<sn0man> but he's on fedora 14, dunno if that matters.  But a kernel is a kernel right?
<RoyK> more or less, yes
<RoyK> albeit asking for fedora help on #ubuntu-server is a little weird
<sn0man> @RoyK: https://sites.google.com/site/lightrush/random-1/howtoinstalllinuxkernel2635onubuntu1004lucidfromubuntu1010mavericktheeasyway
<sn0man> looks like someone did
<RoyK> ah - nice backport
<sn0man> But yea, nooder.  Go get some SSD drives and that shit will be flyin
<ScottK> The powerpc buildds on Launchpad run a maverick kernel with a lucid userspace as it's more stable on that arch.
<RoyK> sn0man: a kernel is a kernel, but all distros have their own patch kits...
<sn0man> true
<nooder> last question is intel or supermicro? intel is very cheap
<sn0man> for what, SSD?
<nooder> platform
<nooder> chasis + mb
<RoyK> nooder: both work, it's more a matter of taste
<sn0man> ahh, just make sure the chipset is fully supported
 * RoyK uses supermicro for storage systems and a few servers
<sn0man> I've worked with SuperMicro a lot and found them pretty decent
<RoyK> but then, I don't use linux for storage...
 * RoyK wants ZFS
<RoyK> s/wants/needs/
<sn0man> ZFS is the sex
<RoyK> with 2 boxes with 100TB each, I somewhat want block-level checksumming :P
<sn0man> hehe, so long as you got a decent L2ARC cache deduplication is a great thing to have
<nooder> and for mysql whats will be better i7-980x 3.33 or xeon e5620 2.6 but with more l1 cache
<nooder> i think that mhz are really important. but don't know what this cache means for mysql :)
<RoyK> sn0man: really? I've done quite some testing on this 12TB test system, and found dedup quite unusable due to very low write speeds
<RoyK> this was with some 160GB of L2ARC
<RoyK> total space used was 2-3TB
<RoyK> so way within the 'specs', but still, not very usable
<sn0man> to be correct you need a proper LOG cache as well too.
<RoyK> SLOG, yes
<RoyK> had that as well
<RoyK> 4GB
<sn0man> so, LOG and L2ARC for read and write
<RoyK> quite sufficient
<RoyK> SLOG for write, L2ARC for read
<RoyK> L2ARC is critical if using dedup
<RoyK> or a truckload of RAM...
<RoyK> or both
<sn0man> did you give it a proper amount of time to build the cache?  With those two it should be flying
<RoyK> but still there are issues with destroying a deduped dataset - it can take hours or even days, hanging the system
<RoyK> not very funny
<RoyK> I guess that's the reason S10u9 doesn't have dedup...
<sn0man> yea, all that stuff is still technically dev builds, but long overdue :P
<sn0man> Can't wait till FreeBSD gets p22
<RoyK> so for these two systems, and a smaller 14TB box for another office, we chose openindiana and sufficient disk space
<sn0man> But by that time, Maybe BTRfs will be out with a stable release
<RoyK> looks like there's something in the works with raid[56] there too
<nooder> so which ssd to choose? x25m? or there are better once?
<RoyK> for what use?
<nooder> db
<RoyK> Crucial's RealSSD C300 is fast and works very well for most use
<RoyK> x25m is quite old and not very fast compared to the newer ones on the market
<RoyK> nooder: how large is the database?
<nooder> 64 gb is enough
<nooder> just needs to be as fast as possible
<RoyK> then I'd recommend the C300
<RoyK> they're quite fast )
<RoyK> :)
<nooder> i read reviews for crucial ram modules and there is quite a big number of returns
<nooder> if i buy one, do i need to upgrade the firmware first?
<skorv> hello
<skorv> i have a server set up with bind9+dhcp3 and its working
<skorv> but it serves 2 subnets
<skorv> on dhcp everything is fine
<skorv> on bind i have zone "domain", the rev.lan1 and rev.lan2
<skorv> when i ping my server from lan1 it gives me the ip it has in lan2 (only in windows)
<skorv> all zones are master
<skorv> i tried to create another zone with a domain for lan2 but bind fails to start
<RoyK> nooder: wouldn't think so - also, about return of crucials, I haven't heard that, got an url on that one?
<sn0man> keep in mind, you always find more complaints than postive reviews
<sn0man> when something works, people don't talk
<sn0man> so more people on a forum saying something doesn't work doesn't really mean much
<RoyK> skorv: I think you need split horizon - see the section about views in the manual
<nooder> sn0man, http://www.amazon.com/Crucial-CT2KIT51264BC1067-204-PIN-PC3-8500-SODIMM/product-reviews/B001MX5YWI/ref=sr_cr_hist_all?ie=UTF8&showViewpoints=1
<RoyK> nooder: I wouldn't worry about that - get a couple of C300, mirror them, and you should be safe
<nooder> i just cant understand what can happen
<RoyK> nooder: also, even if crucial had a bad series of memory chips, that doesn't really matter much for other products
<sn0man> regardless of reviews on amazon, you have no idea what they did to the product in the first place, how they configured it, how they maintained it, treated it.
<nooder> regular hdd i'm monitoring with smart
<sn0man> you can still use SMART
<sn0man> there are also new SMART codes specific to SSD's
<sn0man> so you have to make sure the tool you're using to check the SMART codes supports them
<skorv> put each subnet + reverse in its own view?
<RoyK> SMART might be even SMARTer on the SSDs, since a drive quite often dies without telling SMART about it
<skorv> would that be the idea
<RoyK> skorv: yes
<sn0man> supports SSD, that is
<skorv> humm
<skorv> ty for the help.... i'm new to bind :P
<nooder> thx, that really helped me :)
<uvirtbot> New bug: #697227 in apache2 (main) "Apache processes segfault" [Undecided,New] https://launchpad.net/bugs/697227
<ssureshot> I have my nics bonded but when I restart networking after I've added a new slave in the interfaces file the bond kicks errors and does not work.. how can you restart networking with a bond active?
<SpamapS> ssureshot: how are you setting up the bonded interface?
<ssureshot> SpamapS: wow.. I was just reading the documentation for the first time in a while and although the old way works it seems things have changed so let me try the new way
<ssureshot> I was setting up the auto bond0 with the slave directive for eth0 and1
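For reference, a minimal bonding stanza in the style ssureshot describes (Lucid-era `ifenslave` syntax; the addresses and bonding mode are assumptions, not from the channel):

```shell
# /etc/network/interfaces sketch (requires the ifenslave package):
#   auto bond0
#   iface bond0 inet static
#       address     192.168.1.10
#       netmask     255.255.255.0
#       gateway     192.168.1.1
#       bond-slaves eth0 eth1
#       bond-mode   802.3ad     # LACP trunking; needs switch support
#       bond-miimon 100         # link-check interval in ms
# Bring it up with:
#   sudo ifdown eth0 eth1 ; sudo ifup bond0
```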
<uvirtbot> New bug: #697243 in samba (main) "winbind offline logon doesn't works after winbind restart" [Undecided,New] https://launchpad.net/bugs/697243
<RoyK> does a bonded network allow for full speed on boths NICs for a single TCP/UDP stream?
<SpamapS> RoyK: depends on how you have it bonded
<SpamapS> RoyK: the default method is just for failover
<SpamapS> but some switches support a trunking protocol
<RoyK> it was trunking I was thinking of
<RoyK> just curious how that works when a stream is sent to a MAC address on the network layer
<RoyK> if it can balance a single connection across two NICs
<ssureshot> royk: generally speaking, modes provide either hot standby or load balancing services. <-- taken from a quick google
<RoyK> ssureshot: I'm aware of that
<RoyK> I was just wondering about if those trunks could balance a single connection
<ssureshot> I believe they can balance using the roundrobin style mode, which transmits packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
<RoyK> ssureshot: sure, but what about single connections? I was told from someone I couldn't get 2Gbps on a single connection from two bundled 1Gbps connections
<soren> RoyK: It depends.
<RoyK> depends on what?
<soren> RoyK: All the components in between the two systems exchanging data.
<RoyK> no, you misunderstand
<soren> Easiest way to success is to have a switch that supports bonding.
<soren> Or trunking.
<soren> Or whatever the vendor chooses to call it.
<RoyK> if I open a connection from A to B, single TCP connection
<RoyK> how will bonding help if the data is sent to/from MAC adresses?
<ssureshot> royk: soren is right he beat me to it... youll never get 2gbps on bonded nics unless you have the switches to support it
<soren> if the switch is up for it, the two ports can share a MAC.
<RoyK> ok
<RoyK> but then, the OS needs to support shared MAC as well?
<soren> Sure.
<RoyK> any idea if linux supports that?
<soren> Err. Sure. I thought that's how this discussion started.
 * soren only read the last few lines of context
<ssureshot> royk: the bonding bonds eth0 and eth1 to a single mac address attached to bond0
<soren> RoyK: http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/networking/bonding.txt;h=5dc638791d975116bf1a1e590fdfc44a6ae5c33c;hb=HEAD
<RoyK> thanks
<pmatulis> soren: nice link
<zul> ivoks: im the middle of packaging dovecot2 for universe
<ara> Hello guys!
<compdoc> dont us gals get a hello?
<ara> I am also a lady, and I use guys for both male/female :)
<compdoc> Im not a gal - but I could have been
<zul> hi ara
<AndyGraybeal> anyone know a good hardware mailing list?  i emailed the ubuntu users list about a PDU but haven't gotten a good response yet, plus it's technically off-topic.
<ara> Anyway, during our weekly testing before holidays, we found this bug in the server isos
<ara> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/693042
<uvirtbot> Launchpad bug 693042 in linux "Kernel Panic while booting Natty installer kernel (2.6.37-10-generic) on amd64 ISO" [Undecided,New]
<ara> hey zul :)
<zul> ara: crappers...:(
<ara> zul, have you experienced something similar?
<zul> ara: not recently
<ivoks> zul: eh? universe?
<ara> zul, and before that? the test was done before the holidays
<zul> ivoks: yeah im calling it dovecot2 but it will conflict with dovecot
<zul> ara: no thats totally new to me...ill bring it up in the meeting today
<ivoks> ah, dovecot 2
<ara> zul, I will bring it if you want, I will be there
<ara> zul, you mean the kernel meeting?
<zul> ara: server team meeting
<ara> zul, ah, OK, I won't be at that one
<zul> but if you want you can bring it up there as well ;)
<ara> zul, I will raise it in the kernel meeting, and you can do it in the server meeting :)
<zul> ara: sounds like a plan
<zul> 2 people nagging are always better than 1 ;)
<ara> :)
<zul> thanks!
<AndyGraybeal> i need some help, i'm rsyncing across hosts with the -a switch; which should do -o.  it says "super-user" only; what does this mean?
<AndyGraybeal>         -o, --owner                 preserve owner (super-user only)
<AndyGraybeal> why does it only do 'super-user only'?
<kim0> AndyGraybeal: you need to rsync as root (or sudo) in order for that option to work
<AndyGraybeal> aaah
<kim0> AndyGraybeal: since chown'ing files is a root thing
<AndyGraybeal> thank you kim0
<AndyGraybeal> so, if i'm running the command as root on one computer, and sending it to another computer as a regular user ... does this apply?
<AndyGraybeal> it doesn't apply does it?
<AndyGraybeal> i have to be root on the machine that is receiving for it to be able to preserve the owner, correct?
<zul> uh...is it me running the meeting today?
<AndyGraybeal> hm. so i'm doing it opposite of what i should be.. okay - i'll try something else.. thank you very much kim0  for responding.
<kim0> AndyGraybeal: yeah root on receiver I'd say .. on the reading machine, you'd only need permission to read the files
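kim0's answer as a command sketch; the hostname and paths are placeholders:

```shell
# Ownership can only be set by root on the *receiving* side, so either
# push to the remote root account...
$ rsync -a /srv/data/ root@backup.example.com:/srv/data/
# ...or, if root ssh logins are disabled, elevate only the remote rsync:
$ rsync -a --rsync-path="sudo rsync" /srv/data/ admin@backup.example.com:/srv/data/
```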
<AndyGraybeal> :)
<JamesPage> zul: looks like it - smoser did the last one pre xmas
<zul> frig..
<smoser> i did one prior to the break.
<fubada> hi, I plan on using ubuntu-server w/ Linux md raid for purposes of NAS.  I have 4x2tb drives driven by Intel Atom d525 dual core 1.8
<fubada> which raid do you think is more appropriate, raid5 or 10
<kim0> fubada: for performance raid10, for capacity raid5
<gobbe> well. raid10 is of course better than raid5
<gobbe> if you look performance
<smoser> so, http://www.mojvideo.com/video-it-ain-t-me-babe/36cb8416efd2ca44c4b6
<fubada> what capacity will i get with 4x2tb raid10? 4tb?
<fubada> raid5? 6tb?
<kim0> fubada: yeah
<compdoc> 6
<fubada> hm
<compdoc> raid 10 would be 4
<fubada> is sw raid5 cpu intensive?
<fubada> more than sw 10?
<compdoc> ooops - raid 5 is 6, raid 10 is 4
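The arithmetic behind those numbers, for n identical drives of s TB each:

```shell
# RAID 5  usable = (n - 1) * s   (one drive's worth of capacity goes to parity)
# RAID 10 usable = (n / 2) * s   (every block is mirrored once)
n=4; s=2                                  # four 2 TB drives
echo "raid5:  $(( (n - 1) * s )) TB"      # raid5:  6 TB
echo "raid10: $(( n / 2 * s )) TB"        # raid10: 4 TB
```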
<fubada> dang
<fubada> im not sure what i want
<kim0> fubada: what are you storing there
<fubada> media primarily
<fubada> seed box also
<fubada> p2p
<fubada> backups
<kim0> fubada: is that mostly movies
<fubada> movies and mp3s
<fubada> xbmc type of stuff
<kim0> fubada: media and backups are streaming large files .. I'd go raid5. Hopefully mp3s are not gonna be a problem
<fubada> a problem for the cpu keeping up with raid5?
<kim0> fubada: you'll only know after trying, but I'd be surprised
<fubada> heh
<pmatulis> hm, intel atom processor
<pmatulis> fubada: if it's just dishing up files it shouldn't be a problem
<fubada> pmatulis, the box will also be my xbmc
<fubada> and torrent seedbox
<fubada> thanks kim0
<pmatulis> fubada: does that involve a lot of processing?  i have no idea
<kim0> fubada: you're welcome
<fubada> pmatulis, i dont think so
<fubada> thanks pmatulis
<kim0> fubada: use ext4 since it's a bit faster
<fubada> ext4 raid5 for a total of 6tb's :)
<kim0> fubada: yep :)
<pmatulis> fubada: so just reading from box right?  not writing?
<fubada> well ill be writing to it when its doing backups
<fubada> and when its downloading torrents
<fubada> my torrent speeds can hit 3mb/sec
<fubada> OS is on SSD
<pmatulis> fubada: k, b/c raid5 suffers a bit with writing, due to parity
<HJess> what could cause a very high disk io (99.9%) while trying to copy big files on an ubuntu server? (the io stat is taken from iotop) - it locks everything down so even ssh times out while doing a copy of the files..
<fubada> HJess, raid?
<kim0> HJess: seen that with bad sata cables
<HJess> fubada: yes and no - there is a raid connected, but the copied files are from a USB to a SATA disk inside the server
<HJess> kim0: ok a bit strange.. given that the problem kind of builds up .. upstart works fine and so
<patdk-wk> I don't know where the 99.9% comes from, as iotop doesn't use percents
<HJess> at the moment I have 5 Mb/s transfer from the USB to the sata disk .. normal this would be around ~30 MB/s
<patdk-wk> there are so many reasons for that :)
<patdk-wk> harddrive issues on either one, usb or sata
<patdk-wk> other usb device or that one, causing usb bus speed to be slow
<patdk-wk> does dmesg say anything?
<HJess> no errors at all ..
<patdk-wk> probably read recovery or hardware ecc errors on one of the drives
<patdk-wk> use smartctl (doesn't work over usb)
<HJess> ok .. just plain strange.. a "ls ~" - takes around 2 minutes..
<HJess> when doiing the cp
<patdk-wk> and you find that odd?
<patdk-wk> since you don't have any dmesg stuff, I doubt it's an interface issue, so must be a bad drive
<HJess> normally it would take a split sec.
<patdk-wk> normally != failed drive
<HJess> if it was a failed drive would fsck pick it up?
<patdk-wk> no
<patdk-wk> fsck is for failed filesystem
<HJess> smartctl it is then
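A typical smartctl session for a suspect drive (the device name is an example):

```shell
$ sudo smartctl -a /dev/sda         # full SMART report for the suspect disk
$ sudo smartctl -t short /dev/sda   # queue a short self-test, then re-read with -a
# Attributes worth watching for a dying drive: Reallocated_Sector_Ct,
# Current_Pending_Sector, and (as patdk-wk notes) the raw read-error /
# hardware-ECC counters and the error log.
```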
<kennethbaucum> Can someone give me some tips on getting my Ubuntu LAMP setup to talk to a MSSQL database residing on a different server?
<kennethbaucum> I'm using Ubuntu 10.04.1 LTS Server Edition with LAMP package installed and up-to-date...need to talk to an MSSQL server and I have the correct creds...
<kennethbaucum> Tried following instructions found at http://us2.php.net/manual/en/book.mssql.php but to no avail
<thesheff17> kennethbaucum: by default mysql only listens on localhost...edit /etc/mysql/my.cnf comment out bind-address            = 127.0.0.1 and restart mysql
<kennethbaucum> thesheff17: I can talk to MYSQL fine, I need help talking to MSSQL, Microsoft SQL Server
<thesheff17> kennethbaucum: oh sorry...Unfortunately almost everyone here doesn't use microsoft
<thesheff17> kennethbaucum: http://www.webcheatsheet.com/PHP/connect_mssql_database.php maybe this howto may help.
<kennethbaucum> thesheff17: I understand that, and I don't prefer MS SQL server by any means...but the boss opted to outsource development of an app and that guy chose ASP/MSSQL...I'm still required to talk to the data he has in his app...just need a way to get to it...PHP throws error Fatal error: Call to undefined function mssql_connect()
<kennethbaucum> thesheff17: Thanks, I'll check the link...
<kennethbaucum> thesheff17: Thanks....I have the code, just as listed on the page you suggested, but perhaps there is a MSSQL module missing from PHP5, that I cannot use that built-in function...
<thesheff17> have you installed php5-sybase
<thesheff17> it looks like the PHP to MSSQL module
<kennethbaucum> thesheff17: Oh wait!  I scrolled down and saw how to connect with a DSN name...let me try that...and I'll look at the php5-sybase package...back in a few!
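Putting thesheff17's suggestion together as a sketch; the package names are per 10.04, and the DSN name and host below are placeholders:

```shell
$ sudo apt-get install php5-sybase freetds-common   # provides mssql_connect() via FreeTDS
$ sudo /etc/init.d/apache2 restart
# To connect by DSN name, describe the server in /etc/freetds/freetds.conf:
#   [appserver]
#       host = mssql.example.com
#       port = 1433
#       tds version = 8.0
# then in PHP:  mssql_connect('appserver', $user, $pass);
```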
<RoyK> Sybase -> MSSQL was a nice turn
<RoyK> Microsoft was part of the Sybase project, and one day, they just took the code and ran away to make MSSQL
<RoyK> much  the same way as with OS/2 => Windows NT
<eagles0513875> hehey guys i need some help with roundcube. i have a mail server running imaps and its failing to authenticate against it and log me in for some reason. can someone help me fix this
<blistov> anyone know how to setup nfsv3 on 10.10 ?
<blistov> Seems to have disappeared from all the repos.
<pmatulis> !info nfs-kernel-server
<ubottu> nfs-kernel-server (source: nfs-utils): support for NFS kernel server. In component main, is optional. Version 1:1.2.2-1ubuntu1 (maverick), package size 158 kB, installed size 396 kB
<pmatulis> blistov: ⤴
<blistov> Gah.
<RoyK> nfs3 is still in
<RoyK> mount -o vers=3
<blistov> v4 supports v3, but I'm wondering if there's a way to disable 2 and 4, enable 3, and not use kerberos.
<blistov> RoyK, I don't have the ability to set client options.  The client is an esxi 4.1 server.
<blistov> It defaults to v4 as a client, but doesn't ACTUALLY support v4.
<blistov> so you have to completely disable v4 on the server.  Doing this however, seems to force krb5 enabled on the server.
<RoyK> krb is an nfs4 thing
<blistov> RoyK, hrm... well I set "--no-nfs-version 4 --no-nfs-version 2" on the server, and now its insisting on locating creds for krb5.
<blistov> Any idea why?
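A hedged sketch of a v3-only configuration; the option and variable names come from the 10.10-era `nfs-kernel-server`/`nfs-common` defaults files and should be verified against the installed version:

```shell
# /etc/default/nfs-kernel-server -- serve NFSv3 only:
#   RPCNFSDOPTS="--no-nfs-version 2 --no-nfs-version 4"
# /etc/default/nfs-common -- keep the NFSv4/Kerberos helpers from starting:
#   NEED_IDMAPD=no
#   NEED_GSSD=no
# then restart:
#   sudo service nfs-kernel-server restart
```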
<elijahosborne> I have a question please, after installing lamp, is there a way to not have everything start on reboot, so when I need to use the lamp I type in sudo service apache2 start?
 * RoyK made coasters for some family members out of old harddrive platters this christmas
<pmatulis> elijahosborne: by default ubuntu starts installed services on boot
<RoyK> elijahosborne:  apache starts in /etc/rc2.d
<pmatulis> elijahosborne: it's a bit mysterious how to change that behaviour
<pmatulis> RoyK: not upstart?
<RoyK> elijahosborne: just remove the symlink there and it won't start
<RoyK> iirc apache doesn't use upstart
<hggdh> update-rc.d can be used for that
<hggdh> e.g. sudo update-rc.d apache2 disable
<SpamapS> is it just me, or does natty alpha1 boot into a black screen (have to alt-f1 to get the login) ?
<hggdh> I have seen it, but not always, SpamapS
<SpamapS> I think its an attempt to avoid flicker that is having a negative effect on the server
<SpamapS> will have to check it out on alpha2..
<SpamapS> hggdh: I think we have a fix for the libc6 thing
<hggdh> SpamapS: yes, I saw your entry in the bug, great!
<hggdh> SpamapS: perhaps this time I will be able to finish an install on the test rig ;-)
<hggdh> SpamapS: there is also bug 693042 that *may* be related
<uvirtbot> Launchpad bug 693042 in linux "Kernel Panic while booting Natty installer kernel (2.6.37-10-generic) on amd64 ISO" [High,Incomplete] https://launchpad.net/bugs/693042
<SpamapS> oh thats fun.. virt manager won't give me my mouse back when I hit ctrl-alt
<RoyK> SpamapS: seen that happen, especially from remote
<SpamapS> zul: + [zulcss] get cobbler deploying Ubuntu From Ubuntu: DONE   w0000t
<zul> SpamapS: yep
<zul> SpamapS: i rule
<SpamapS> zul: O'Doyle Rules
<soren> SpamapS, zul: Where can I see this new hotness?
<zul> soren: its in the cobbler git repo
<_Neytiri_> i am having an issue with the networking on Ubuntu server 10.10: i set the /etc/network/interfaces file to use a static ip address and it works, but then later on it changes back to a dhcp-assigned address even though the file still says static
<SpamapS> _Neytiri_: did you do 'ifdown ethX' before making that change?
<SpamapS> _Neytiri_: dhcpcd is probably still running
<_Neytiri_> no
<SpamapS> or dhclient
<_Neytiri_> i have never had to do that before but i will try it
<SpamapS> _Neytiri_: the issue is that dhclient is still running
<SpamapS> _Neytiri_: you can probably just kill dhclient
 * SpamapS thinks ifup should take care of that
<SpamapS> _Neytiri_: you may want to report that as a bug.
<_Neytiri_> i will, cause i know in 10.04 all i had to do was restart networking and it would solve it
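To make the change stick, the sequence SpamapS describes is: take the interface down first (which stops its dhclient), switch the stanza to static, then bring it back up. A sketch, with placeholder interface name and addresses:

```text
# Placeholders throughout -- substitute your own interface and addresses.
#   sudo ifdown eth0        (also stops the dhclient attached to eth0)
#   pgrep -af dhclient      (verify none survived; pkill dhclient if so)
# then edit /etc/network/interfaces to read something like:
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
# and finally:
#   sudo ifup eth0
```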
<cdubya> is there a problem with the current samba package in the repos for 10.04? I was trying to install samba and am getting an error on unmet dependencies. Looks like maybe the samba package is looking for older versions of the dependencies than exist in the repos?
<genii-around> cdubya: Did you do something like: sudo apt-get update                beforehand, to get the latest list of files?
<cdubya> yep
<cdubya> just did it again for grins
<genii-around> Hm
<genii-around> cdubya: Were you perhaps using a ppa or so, which installed later versions of its dependencies for some other program?
<cdubya> It doesn't like the versions of samba-common and libwbclient0 that it's seeing (wants 2:3.4.7~dfsg-1ubuntu3.2 on both, sees 2:3.4.8~dfsg-0ubuntu1 available...?)
<cdubya> genii-around, I had tried to install ebox on this machine. That's the only thing. Backed it off, but perhaps that's the issue.
<cdubya> genii-around, the only ppa in the sources file is the ebox one, but it's commented out.....
<genii-around> cdubya: Well, you can always do the long form for the deps.. like: sudo apt-get install libwbclient0=2:3.4.7~dfsg-1ubuntu3.2                   for instance to install that specific version
<cdubya> ok, I'll try that.
<deadsmith> can someone please direct me to the correct channel for installation on an Apple XServe?
<cdubya> genii-around, that did it. Thanks!
<genii-around> cdubya: Yer welcome
<genii-around> deadsmith: https://help.ubuntu.com/community/Xserve2-1 may be of some kind of help
<deadsmith> genii-around:  Thanks :-)  That's what I was looking at, but the boot options just give me a blank screen after I select one.  I can get to grub, but that's the only thing that works.  The server version is named "-amd64.iso" -- do you know if there is a separate x86-64 kernel for the intel machines?
<genii-around> deadsmith: the amd64 is for all processors that use 64 bit instruction sets (AMD or Intel, doesn't matter)
<RoyK> and most software works well with both 64 and 32bit
<RoyK> some software links to 32bit libs, meaning you need those as well
<deadsmith> genii-around:  thanks.  It's been about 6 years since I've done anything, and the hardware seen has changed a bit...
<RoyK> deadsmith: the game is still the same
<deadsmith> ick... ack... *scene.  :-)
<RoyK> http://karlsbakk.net/xray.png <-- I guess I could have brought home more interesting artifacts from Iceland than these
<Amgine> RoyK: but, perhaps, none you'd be as able to keep track of.
<cdubya> what's the easiest way to allow samba guest access with rw permissions on a share?
<cloakable> Make the folder rw to the samba guest account (default nobody), set the share 'guest ok' and make it writable.
<cloakable> why?
<remix_tj> cloakable: e.g. for an app like mine which eats pdfs from a share and returns postprocessed pdfs? :-)
<cloakable> :)
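Spelled out as config, cloakable's three steps might look like this (the share name and path are invented for the example; Ubuntu's default guest account is nobody):

```text
# /etc/samba/smb.conf -- hypothetical guest-writable share
[dropbox]
   path = /srv/samba/dropbox
   guest ok = yes
   read only = no
# the guest user (nobody by default) must be able to write the path:
#   sudo install -d -m 0777 /srv/samba/dropbox
# then reload samba:
#   sudo service smbd restart
```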
<_Neytiri_> i am having an issue with the networking on Ubuntu server 10.10: i set the /etc/network/interfaces file to use a static ip address and it works, but then later on it changes back to a dhcp-assigned address even though the file still says static. i have shut the interfaces down and then restarted networking, and it's still grabbing a dhcp address about an hour or so after
<AndyGraybeal> i want to rsync some files in a cron job, the problem i'm wondering about is how do i make sure that the rsync job ran and that it did the job i asked it to do?
<pmatulis> AndyGraybeal: cron sends emails to the user of the job
<AndyGraybeal> pmatulis: interesting
 * genii-around ponders --log-file=something
<kirkland> zul: hey
<kirkland> zul: saw your blueprint update
<kirkland> zul: so you got cobbler deploying ubuntu from ubuntu?
<kirkland> cjwatson: howdy
<kirkland> cjwatson: i just uploaded a new package, ssh-import-id to natty
<kirkland> cjwatson: if you could/would give that an Archive Admin's review, and get it into the archive, you can prune that code from openssh
<kirkland> cjwatson: i'll send a merge proposal for that too
<kirkland> cjwatson: lp:~kirkland/openssh/remove-ssh-import-id
<NateW> im trying to get php working on ubuntu-server.. if i have a page in /var/www it works, if i have it in public_html then the browser tries to download it. why does this happen?
<deadsmith> I'm trying to update the 64bit install CD with grub2 so it will work with the EFI on my Xserve2,1 systems.  Can someone explain to me the process on boot?
<The_Tick> grub doesn't have documentation about that?
<thesheff17> NateW: did you install php?  many times that happens when apache hasn't been restarted after php is installed.
<NateW> installed php5 and restarted apache
<NateW> the php mod is enabled as well
<deadsmith> The_Tick:  I wanted to make sure I was doing things in a sanctioned way, hoping that i could upload the ISO to a server to solve the problem for others.  I think I understand the GRUB stuff essentially, but I'm not sure about path names and the EFI setup.
<NateW> thesheff17: would restarting the server help?
<thesheff17> NateW: no...so it is working in /var/www/  you are missing something within the directive.
<NateW> how would i fix this?
<The_Tick> well
<NateW> i tried searching online and havent found a solution yet
<The_Tick> if he modified a config file
<The_Tick> he may need to restart apache to make modphp work
<thesheff17> can you pastebin your /etc/apache2/sites-available/default
<thesheff17> is that where you are adding that public_html dir?
<NateW> i used a2enmod
<NateW> havent edited the /etc/apache2/sites-available/default at all
<thesheff17> apache2 needs to be restarted so the module is picked up
<NateW> i have restarted apache
<NateW> the public_html folder works with html files.. just not php
<thesheff17> I would also check permissions
<thesheff17> chown -R www-data:www-data /var/www/
<thesheff17> here is a real simple PHP script...try creating test.php w/ this content: http://pastebin.com/e4yacRWR
<The_Tick> it won't work thesheff17
<The_Tick> the fact it's downloading it
<The_Tick> and not trying to process
<NateW> thats actually the php file im using
<NateW> setting /var/www to www-data makes it download in the same way
<The_Tick> apache is misconfigured most likely
<thesheff17> hmm...maybe apache2 isn't restarting correctly?
<thesheff17> try stopping it and ps aux | grep apache
<NateW> setting it back to the default username makes it work again
<thesheff17> I'm a little confused?  what do you mean default username?
<NateW> username i use for the server
<thesheff17> this should be done w/ sudo or root
<thesheff17> if that is what you mean?
<NateW> lcs      30906  0.0  0.0   8952   876 pts/2    S+   17:47   0:00 grep --color=auto apache
<NateW> i mean.. running chown www-data:www-data makes the php file not open correctly, running chown lcs:lcs makes it work again
<NateW> of course im using sudo and using /var/www with -R
<NateW> http://paste.ubuntu.com/550419/ is /etc/apache2/sites-available/default
<thesheff17> NateW: yea my default looks the same :-/
<thesheff17> and it works
<thesheff17> what version of ubuntu?
<NateW> 10.10
<thesheff17> odd...I'm running 10.04
<NateW> its strange.. php files in /var/www work, but not in ~/public_html
<thesheff17> so if you put test.php with phpInfo(); in /var/www/ it works fine?
<NateW> yeah
<thesheff17> ah ok...I think something needs to be added to the apache2 config to tell it to process php files under ~/public_html.
<NateW> what should i put?
<thesheff17> https://wiki.ubuntu.com/UserDirectoryPHP
<hggdh> smoser: there? A question on karmic and EC2
<NateW> thesheff17: awesome.. that fixed the problem.. thanks
<thesheff17> NateW: np, didn't read close enough that you were trying to use ~/public_html
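The setting that wiki page points at is a stanza in Apache's PHP module config that deliberately disables the engine in per-user directories. As I recall the file from the 10.10 era (verify against your local copy), commenting it out re-enables PHP under ~/public_html:

```text
# /etc/apache2/mods-available/php5.conf (excerpt, from memory -- verify)
# Comment out this block to allow PHP in per-user public_html dirs,
# then reload apache: sudo service apache2 reload
#<IfModule mod_userdir.c>
#    <Directory /home/*/public_html>
#        php_admin_value engine Off
#    </Directory>
#</IfModule>
```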
<zul> kirkland: yep
<kirkland> zul: you rock, man
#ubuntu-server 2011-01-05
<yann2> hello! one small question: can VM in kvm & libvirt run as a low privileged user?
<PryMaL> does anyone happen to know the earliest version of ubu server kernel that supports IPV6?
<uvirtbot> New bug: #697465 in bind9 (main) "package bind9 1:9.7.0.dfsg.P1-1ubuntu0.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/697465
<deadsmith> Is noefi required to boot on an Apple XServe?
<twb> AFAIK GRUB 2 supports EFI.
<twb> I've never dealt with such a system myself, but I *assumed* it all Just WorkedTM
<deadsmith> The discovery from earlier today was that I had to build from the bzr source for grub to boot the EFI....
<deadsmith> And, it is booting....
<deadsmith> but hangs when loading the kernel.
<deadsmith> I think it's video weirdness, but hard to be sure...
<deadsmith> is the error "finding suitable mode.  Booting however" familiar to anyone?
<twb> Sorry, no.
<triode3> hello all, got a mdadm question. built server, 4 drives, / and swap on raid1 on sda1,sdb1 sda2,sdb2 home on sda3,sdb3. Parts sda4,sdb4,sdc1,sdd1 exist. when machine boots it auto-magically mounts the extras as /dev/md_d0, while root is /dev/md0. Ugh. Can't find solution.
<thesheff17> triode3: I have never seen that happen before...do you see anything in /etc/fstab
<twb> triode3: pastebin the contents of /proc/mdstat
<triode3> http://pastebin.com/SvrSAedY
<triode3> I changed fstab from the default (which listed three devices) to one which still lists three (there should only be three)...
<twb> triode3: I'm not sure what happened there
<pmatulis> ew, partitionable raid devices
<triode3> The issue is that I want to build the final raid array (sda4, sdb4, sdc1, sdd1) but it will not let me as it thinks they are being used (sda4, sdb4).
<twb> Oh
<twb> Just mdadm --stop /dev/md0, then.
<triode3> e.g md_d0 should not exist.
<twb> triode3: md_d0 exists because both two-disk arrays want to be called md0
<triode3> mdadm --stop /dev/md_d0 stops it, but it still will not let me create one with mdadm --create blah blah
<twb> triode3: you need to stop one pair, add each of its partitions to the other array, then fix mdadm.conf, then update your ramdisk
<triode3> twb: thats fine, but /dev/md0 is my root. :O will stopping that kill the server :OO
<twb> triode3: then boot from the other pair
<triode3> hrm I only have the OS on /dev/sda /dev/sdb (raid1). I was going to do raid5 on sda4,sdb4,sdc1,sdd1 for space.
<twb> triode3: oh, you want a *raid5* of the four disks
<triode3> As for fixing mdadm.conf, I _thought_ this was ok: http://pastebin.com/GCu2sdiA
<triode3> twb yes
<twb> triode3: I think easiest is to reinstall from scratch
<triode3> on sda1,sdb1 /home on sda2,sdb2 swap on sda3,sdb3 /space on sda4,sdb4,sdc1,sdd1
<triode3> twb: one more monkey wrench, this is IDE, and I only have two controllers. Starting from scratch means disconnecting one, which is why I am in this pickle... when I first brought up ubuntu server, it automagically created this strange md_d0 array
<triode3> twb: I think the other problem is this: http://pastebin.com/hVYPiysE it is still there after removal. After a reboot too (well, obviously).
<twb> triode3: I'm not really interested in helping you in-place migrate a 2-way RAID1 to a 4-way RAID5.
<triode3> twb: I am not trying to migrate. I am trying to figure out why it is auto-creating my fourth raid (when I didn't ask it to) with the wrong drives (when I did not tell it to). I am assuming it is one of those "mdadm developers think users are dumber than they appear" ideologies... I was just wondering if anyone else had seen that and overcome it. I _should not_ have four raid arrays, and I am not wanting to migrate, just t
<twb> triode3: edit mdadm.conf, then
<twb> triode3: if you used dd, it may be that the UUIDs on the new disks match the old ones, so when it does an mdadm --assemble --scan, it finds two complete copies of md0 and assembles both
<twb> Writing zeroes over the metadata blocks and/or telling mdadm.conf to assemble specific devices should address that -- don't forget to rebuild the ramdisk after editing mdadm.conf
<triode3> twb: hrm, I will try that.
<triode3> twb: thanks.
<twb> It still won't help when you try to turn a RAID1 into a RAID5
<triode3> twb: hrm, but it reports that it thinks it's raid5: ARRAY /dev/md0 level=raid5 num-devices=3 UUID=a6d72d16:694f7d01:3ccdc9df:08582ff7. The problem of course is that it is listing /dev/md0 twice. That's f00ked up.
<twb> The mdstat you sent me said the level was raid1.
<triode3> twb: hrm looking at that again, I think I am screwed. It lists /dev/md0 twice, as a raid1 and a raid5. I think I will just have to somehow get a cdrom/usb in there and rebuild the whole thing
<twb> Just editing mdadm.conf to look for a raid5 will not MAKE it a raid5.
<triode3> twb: yes, but the mdadm --examine --scan reports two: http://pastebin.com/hVYPiysE
<twb> That might just be returning what's in mdadm.conf -- I don't know
<triode3> twb: yes. again, I think this thing is foo-bar. I think I will have to rebuild. mdadm.conf only lists three arrays. :O
<twb> 12:50 <twb> triode3: I think easiest is to reinstall from scratch
<triode3> twb: yes. at this point, I agree... although I will have to finagle some kind of medium.
<triode3> twb: thanks for the input. Sorry for the confusion.
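For readers who land here with the same phantom-array symptom, twb's recipe (stop the stray array, zero the stale metadata, pin the real arrays in mdadm.conf, rebuild the ramdisk) might look like the following. Device names are taken from this discussion; zeroing a superblock is destructive, so triple-check yours first:

```shell
# DESTRUCTIVE sketch -- verify every device name before running.
sudo mdadm --stop /dev/md_d0                  # stop the phantom array
sudo mdadm --zero-superblock /dev/sda4 /dev/sdb4 /dev/sdc1 /dev/sdd1
# list only the arrays you actually want in /etc/mdadm/mdadm.conf
# (here: the three raid1s for /, swap and /home), then:
sudo update-initramfs -u                      # so the ramdisk agrees
# now the raid5 can be created on the freed partitions:
sudo mdadm --create /dev/md3 --level=5 --raid-devices=4 \
     /dev/sda4 /dev/sdb4 /dev/sdc1 /dev/sdd1
```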
<e_t_> I've got a server with three network interfaces, all with static IPs. Whichever connection I set a gateway on WILL respond to pings, but the other two WILL NOT. How can I get ping responses from all three interfaces?
<twb> All three interfaces are on the same network?
<e_t_> No. One is 10.145.144.7, two is 192.168.254.254, three is 172.17.1.1.
<e_t_> But the network (which I don't control) is set up so that all addresses *should* be reachable. Tcpdump shows that pings are received, but they aren't answered, except for the interface with the gateway set.
<Delerium_> ICMP blocked by some kind of network device?!
<e_t_> Nope.
<CheetoBandito> sounds like its received but dumped before apps can see it
<CheetoBandito> I'm having weird issues with networking in 10.04 concerning multicast too
<e_t_> Shouldn't ICMP be answered directly by the kernel's TCP/IP stack?
<twb> e_t_: you probably need a gateway for each interface, with a higher metric for the non-preferred routes
<twb> e_t_: without a full network diagram it's difficult to be sure
<twb> e_t_: if you're pinging from within the same network, then it should work already
<e_t_> I'm pinging from a host on a different subnet, but as I said, the pings are received, but the computer doesn't respond.
<e_t_> Can I set metrics in /etc/network/interfaces?
<twb> post-up ip route add ...
<twb> post-down ip route flush dev ...
<twb> Or some variation thereof
<twb> e_t_: if your three-iface host doesn't have appropriate routing rules to get back to your box (well, at least for the first hop), there's no way the return ping will reach you
<e_t_> OK. I'd read this: http://kindlund.wordpress.com/2007/11/19/configuring-multiple-default-routes-in-linux/ about setting rules, but I wanted to make sure I wasn't missing something simple.
<twb> e_t_: it wouldn't surprise me if your netadm blocks all ICMP traffic
<e_t_> I can set a gateway on any one interface at a time and get ping responses from it. Besides, my network admin (my teacher at college) certified to me that ICMP was not blocked.
<twb> Is the machine in question intended to be a router, or is it simply multihomed?
<CheetoBandito> I'm having an interesting issue in 10.04... If I request packets from a specific group in one program, and a different group in another program... I get packets from both groups in both programs.
<e_t_> twb: It's really just supposed to be an end-point on the 192 and 172 networks. Traffic never needs to cross from one network to another (routing). On the 10 network, it's a host.
<CheetoBandito> If I compile and run the same scenarion on Windows, Solaris, and other distros I get the intended behavior, that the packets each program sees are only the ones they requested.
<twb> e_t_: and it should be able to reach all distal networks via any interface?
<e_t_> twb: that is my understanding.
<twb> My guess is your route table should look like this, then: http://paste.debian.net/103781/
<twb> Er, 10/8 should be the third line, not that it matters
<twb> oh, and I forgot "via 1.2.3.4" fields for the 0/0's
<e_t_> twb: With multiple gateway set in /etc/network/interfaces, this is my routing table: http://paste.debian.net/103782
<twb> e_t_: please pastebin it in "ip r" format.
<e_t_> twb: http://paste.debian.net/103783
<twb> Yeah, OK, though you have no 0/0 (i.e. default) for eth2.
<twb> That ought to be enough for ICMP to work, although for connection-oriented protocols like TCP I think you'll want to use non-equal metrics (so all traffic goes over one default route)
<twb> I'm actually kinda surprised ICMP didn't work across the internal wossname without those routes...
<twb> Do you have a firewall running on this box?
<e_t_> No. I even ran iptables -F to be sure. Security is not a concern for this server.
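A sketch of the source-based routing the linked article describes, applied to one of the three interfaces via /etc/network/interfaces hooks. The table number, gateway and addresses are illustrative; the same pattern would be repeated per interface with its own table:

```text
# /etc/network/interfaces (excerpt) -- reply out the interface a
# packet arrived on, using a per-interface routing table.
iface eth1 inet static
    address 192.168.254.254
    netmask 255.255.255.0
    post-up ip route add 192.168.254.0/24 dev eth1 src 192.168.254.254 table 11
    post-up ip route add default via 192.168.254.1 table 11
    post-up ip rule add from 192.168.254.254 table 11
    post-down ip rule del from 192.168.254.254 table 11
```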
<skorv> my current setup is ubuntu server 10.04 +bind9 +dhcp3-server
<skorv> in your opnion what may crash networking
<e_t_> skorv: What does "crash networking" mean?
<skorv> one minute everythin
<skorv> sorry
<skorv> one minute everything is nice, the next i can't even get an ip
<skorv> i go to the server and do sudo /etc/init.d/networking restart and it goes back to being ok... but its temporary
<e_t_> Which machine can't get an IP?
<skorv> everyone
<skorv> server setup = 3 NIC 1 wan 2 lan
<skorv> it serves 2 lans
<skorv> i tried with static and dynamic ip on interfaces on the wan side. lans have static ip
<e_t_> The server has static IPs and it gives out DHCP addresses to lan clients?
<skorv> correct
<e_t_> But after a minute, the lan clients no longer receive DHCP addresses?
<skorv> it basically has the same effect as ifdown -a
<skorv> yes
<skorv> all freezes
<skorv> i restart network only and it goes back to normal
<skorv> clients can get ip and internet works
<e_t_> Can you run Wireshark on the lan clients, to see what the network traffic looks like?
<skorv> its 5 am and i'm in bed...been working on it sine 10 am yesterday
<skorv> *since
<skorv> i'll do it in a couple of hours
<skorv> can a packet "crash" networking?
<skorv> or trafic?
<e_t_> Misconfiguration might cause any number of network-disabling effects.
<skorv> after a random ime?
<skorv> * time
<skorv> its the 2nd time it happens... i even thought it was a bad install the last time
<skorv> so i redid everything by hand instead of copying the config files i already have
<skorv> i'm portuguese... excuse my english
<skorv> i followed tutorials on ubuntu forums for dhcp and bind
<skorv> i'm a windows server expert... 1st time serious with linux server (my 1st had desktop installes)
<skorv> *installed
<skorv> was looking forward to make a good sbs alternative
<skorv> i have no training... self taught on almost everything
<skorv> this new server was to be my "crown jewel"
<skorv> so far its crap (due to me most likely)
<e_t_> I've always had good luck with dnsmasq, rather than bind.
<skorv> well... fresh install tomorrow if nothing else works
<skorv> i'lll give dnsmasq a chance as long as it supports multiole networks
<skorv> *multiple
<e_t_> It does. It can also do DHCP.
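If skorv does switch, a minimal dnsmasq setup for one box serving two LANs is quite small. This sketch uses invented interface names and address ranges; dnsmasq matches each dhcp-range to the interface whose subnet it falls in:

```text
# /etc/dnsmasq.conf -- hypothetical two-LAN DNS + DHCP config
interface=eth1
interface=eth2
dhcp-range=192.168.1.50,192.168.1.150,12h   # LAN 1 (eth1)
dhcp-range=192.168.2.50,192.168.2.150,12h   # LAN 2 (eth2)
# DNS queries are forwarded to the servers in /etc/resolv.conf
```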
<uvirtbot> New bug: #697545 in nagios3 (main) "Backslashes from check_disk_smb output are removed in web interface" [Undecided,New] https://launchpad.net/bugs/697545
<vraa_> anyone here know details about canonical support?
<uvirtbot> New bug: #697584 in bind9 (main) "inconsistent and incompatible naming of bind files" [Undecided,New] https://launchpad.net/bugs/697584
<uvirtbot> New bug: #697592 in ntp (main) "unable to create socket on eth0 (210) for ::ffff:192.168.1.4#123" [Undecided,New] https://launchpad.net/bugs/697592
<uvirtbot> New bug: #697609 in bind9 (main) "rndc-confgen does nothing" [Undecided,New] https://launchpad.net/bugs/697609
<cjwatson> kirkland_: thanks, reviewing ssh-import-id now
<RoyK> isn't that ssh-copy-id?
<cjwatson> RoyK: no
<cjwatson> RoyK: two different tools
<RoyK> can't find the import one on lucid...
<pmatulis> RoyK: it's part of openssh-server i think
<kiall> Hiya .. I've just come across a strange situation .. i have a hostname in /etc/hosts - root can ping it fine .. but, other users can't resolve the host? It's like /etc/hosts is only being honored for the root user
<pmatulis> kiall: try strace, it's probably a permissions thing
<pmatulis> 'strace -e open ping localhost'
<kiall> pmatulis, it was - I just discovered the issue .. /etc/nsswitch.conf was 664 rather than 644..
<pmatulis> kiall: ok
<kiall> sometimes a "smart" cloud-init script turns out to not be so smart ;)
<webstyler> Can someone recommend CrashPlan online backup?
<kim0> webstyler: I can :)
<webstyler> thx
<kim0> webstyler: been using it for a year+ .. works great! Linux/Mac/Windows
<webstyler> yeah. thats nice
<webstyler> need linux supprt
<webstyler> i am considering backing up my server/nas off-site
<webstyler> and the unlimited plan seems to be a good solution
<webstyler> how is the restore/web interface?
<raphink> there's also the great S4 API: http://www.supersimplestorageservice.com/
<raphink> :-)
<webstyler> ok thx
<raphink> don't try it in production though ;-)
<webstyler> ok good to know
<xampart> http://www.youtube.com/watch?v=yGyRVymDBhw
<xampart> wrongdow
<pmatulis> xampart: thanks for that
<xampart> pmatulis: np =)
<pmatulis> xampart: he lost a testicle in 1999?
<xampart> pmatulis: i suggest you look the other videos from him. on that information i would presume, no.
<pmatulis> xampart: i'm not much into contemporary music.  is this a real music video?
<hggdh> smoser: I am in now
<CERNUNN0S> Greetings, i'm new to the whole Ubuntu server admin thing and have inherited a number of servers; unfortunately it appears one has been compromised, in so far that I am receiving reports of spam and HELO senders. Could anyone point me in the direction of a good set of guides to remedy these teething problems? The server is running Lucid Lynx LTS
<CERNUNN0S> Thank you for your time.
<patdk-wk> CERNUNN0S, running a website?
<patdk-wk> probably one of your cgi's was hacked into
<patdk-wk> hopefully the kernel/libc are updated also
<hggdh> CERNUNN0S: first action, once confirmed it has been compromised, is to take it off the network
<CERNUNN0S> Yes running virtualhosts, and hosting two email accounts and various aliases for email forwarding, how would I go about assessing the compromised systems?
<hggdh> ugh
<gobbe> not quite easy job
<gobbe> you could run checksums of files and compare them with system thats known to be good
<CERNUNN0S> Main problem is the set-up I have been left with is near incomprehensible, very basic, largely incomplete and completely without documentation. Would be great if there were different environments for test and development. But alas no, it's all one monstrous multi-master-fail
<gobbe> i wouldn't even start to find out. i would install everything from scratch
<hggdh> CERNUNN0S: is the system up-to-date with security fixes? kernel, apache, and libc6 come to mind as possible avenues
<kiall> gobbe, yup - you either know what you're doing .. or not.. and really when the server is compromised and offline .. its not the time to learn when you dont even understand the setup :)
<gobbe> yeah
<gobbe> just wasting time
<gobbe> because you will end up installing them from scratch anyway
<hggdh> actually, no, not wasting time. If you do not know how you got compromised, there is no guarantee it will not happen again
<gobbe> gd.....its hard to irc with mobile phone
<hggdh> the safe bet is it *will* happen again
<CERNUNN0S> I don't think my predecessor believed in keeping things up to date, so I would think it would not be running the latest software. So the answer is i'm wasting my time. Well i've already started creating a new server cluster completely separate from these. I was hoping the duct tape would hold on this set up long enough for me to get on with the new setup
<gobbe> hggdh, well....finding it out could be impossible
<hggdh> gobbe: I agree. But it should be tried. Part of the cost ;-)
<gobbe> CERNUNN0S: install proper tools to prevent this in future also
<gobbe> hggdh, yes. if you have at least some kind of knowledge and tools
<gobbe> in this case i would say that finding out could be near impossible
<CERNUNN0S> The new set up includes tools and complete logs unlike current garbage,
<gobbe> good
<Fidelix> Hey guys, i'm having a permission problem in my server. The script runs as www-data but it cant read files uploaded through FTP by the user "rs"
<hggdh> CERNUNN0S: try to figure out who is sending out the HELOs (apache?) -- if cgi, disable the suckers
<CERNUNN0S> Thought as much but thought i'd ask the guys and girls who deal with this on a daily basis
<gobbe> CERNUNN0S: i would bet that apache/php could be good place to start out
<smoser> hggdh, ok. i'm in too.
<smoser> zul, could you maybe do a sponsor for me ? bug 686124
<uvirtbot> Launchpad bug 686124 in util-linux "Add option to sfdisk to use maximum partition size" [Medium,Triaged] https://launchpad.net/bugs/686124
<zul> smoser: sure gimme a sec..
<CERNUNN0S> Thanks all, I shall have a check and see if there is anything I can glean from the apache/php files
<hggdh> smoser: I am thinking of you, pedro, and myself together, OK with that?
<smoser> yeah.
<raphink> Fidelix, what are the rights for the uploaded file?
<Fidelix> raphink, just a sec.
<Error404NotFound> anyone has installed NagiosXI on Ubuntu successfully? I have been doing a lot of google but can only find its manual for centos/rhel
<Fidelix> raphink, -rw------- 1 rs       users       696320 2011-01-05 12:20
<uvirtbot> New bug: #697676 in ntp (main) "ntp kills polar bears" [Undecided,New] https://launchpad.net/bugs/697676
<raphink> Fidelix, and are you surprised that www-data can't read it?
<Fidelix> raphink, not really. But can you give me a light on how to change the default permission of uploaded files in vsftpd ?
<raphink> sure
<raphink> give me a minute
<raphink> Fidelix, you could set local_umask in vsftpd.conf for example
<raphink> there's more fine umask settings, too
<Fidelix> raphink, i'll try to set local_umask to 777
<raphink> hmmm
<Fidelix> raphink, -rw------- 1 rs
<raphink> I would say more like 022
<Fidelix> It did not work
<Fidelix> oh, ok
<genii-around> umask doesn't have same values chmod does, it uses the complements of 777 etc
<raphink> a umask of 022 will give you files that are 644
<raphink> another option would be to use the chown_username setting in vsftpd.conf
<Fidelix> raphink, -rw-r--r-- 1 rs
<raphink> so files that are uploaded are given to www-data
<raphink> Fidelix, yes, that's 644, and it should allow www-data to read it
<genii-around> raphink: I'm pretty sure 022 will give you perms of 755
<Fidelix> raphink, i already tried chown_username
<Fidelix> Files are simply not chowned
<raphink> genii-around, 022 gives 755 to dirs, 644 to files
<raphink> Fidelix, did you set chown_uploads to yes?
<Fidelix> yes i did
<raphink> hmmm
<Fidelix> But my problem is already solved. I thank you for your time!
<raphink> I use chown_username on my server and it works fine
<raphink> no problem Fidelix
<raphink> ;-)
<raphink> genii-around, the complement is 777 for dirs, 666 for files
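genii-around and raphink are describing the same rule from two directions: the umask clears permission bits from a base creation mode, which is 666 for new files and 777 for new directories. Shell arithmetic confirms both results quoted above:

```shell
# The umask masks bits OUT of the base creation mode:
# files start from 666, directories from 777.
printf 'file mode with umask 022: %o\n' $(( 0666 & ~0022 ))   # -> 644
printf 'dir  mode with umask 022: %o\n' $(( 0777 & ~0022 ))   # -> 755
```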
<zul> smoser: done
<smoser> gratzie
<macno> Hi, I have to setup a bridge interface on maverick, and I need both ipv4 and ipv6
<macno> this is my config http://paste.ubuntu.com/550654/ but I can't reach host using the ipv6
<Error404NotFound> macno, not directly related to your question but why are you using IPv6? just playing and see how it works? or you are actually deploying it in a network?
<macno> Error404NotFound, we use it
<macno> Error404NotFound, I'm connected to freenode using ipv6
<Error404NotFound> macno, hmmm, i see... i am just using gogoc to experiment it :)
<Error404NotFound> macno, so am i, but using different nick :P
<gobbe> macno: can you run ping6 for example?
<macno> gobbe, no, but I think I found the problem right now
<macno> 1 sec
<macno> gobbe, yep, the hwaddress row was wrong
<gobbe> ok :)
<gobbe> sometimes when asking help, you realize by yourself what was wrong =)
<macno> it must be under the eth0 section
<gobbe> usually happens to me =)
<macno> not under the br0 one
<gobbe> yes
<zul> hggdh: is the iso still failing?
<hggdh> zul: fix was published today, should be on tomorrow's ISO
<hggdh> zul: so yes, still failing ;-)
<zul> hggdh: meh
<hggdh> heh
<uvirtbot> New bug: #697690 in multipath-tools (main) "no dbg package for multipath-tools" [Undecided,New] https://launchpad.net/bugs/697690
<smoser> Daviey, fyi, i'm using pad.daviey.com more and more
<smoser> very useful until we manage to get a pad.ubuntu.com
<gobbe> uuh
<gobbe> quite nice tool :)
<azizLIGHTS> where is the per-user www root by default?
<azizLIGHTS> for apache2
<RoyK> $HOME/public_html
<RoyK> by default
<RoyK> if you enable usermod (iirc)
<thesheff17> that does look like a sweet tool...does it have a name?
<azizLIGHTS> how do i know if i enabled usermod
<RoyK> mod_userdir
<azizLIGHTS> sorry what?
<RoyK> azizLIGHTS: a2enmod userdir
<RoyK> then apache2ctl graceful
<azizLIGHTS> thanks that worked
<RoyK> azizLIGHTS: see /etc/apache2/mods-enabled/userdir.conf for the drive
<RoyK> s/drive/directory/
<hallyn> huh.  pad.daviey.com.   yeah, nice.  we should find some ppl to test how high it scales :)
<gobbe> hih
<gobbe> i would like to get something like that to my office
<gobbe> would be useful there :D
<hallyn> i see ,it's etherpad.  i thought Daviey had coded it up :)
<azizLIGHTS> is it ok to have joomla, oscommerce, and zen-cart all on same system
<azizLIGHTS> or should i make snapshots for my vm
<azizLIGHTS> 3 snapshots , 1 for each
<gvandeweyer> hi all. could someone give me an opinion on this: Our server is running against its limits and i'm looking at the ubuntu private cloud option.  Will this be the way to go? We now have a single server that runs very specific tasks (home-brewed code), and it should be able to distribute the tasks to spare cpus in some core-duos that are now lying around
<gvandeweyer> however, as far as I get the guides, the cloud starts regular ubuntu systems (virtual) on each node, that you can login to (ssh?)
<gvandeweyer> would it be possible to launch these specific programs on these nodes, and how about storage after logging in? will each user still have their own homedir with their data as they do now?
<gvandeweyer> and as a side note: isn't starting complete virtual OS's a bit of overhead? should i go for some sort of cluster option, with systems running ubuntu, accepting tasks from a network monitor and running them in their native OS?
<compdoc> your users would use it to store files?
<gvandeweyer> they would need their data to process them
<gvandeweyer> and preferably store them somewhere on it too (several 100 gigs, not suitable for desktop usage)
<gvandeweyer> I could use NFS/samba for remote access if using physical machines that get jobs from a central queue
<compdoc> when I think of the could, I think of users connecting to a public or private server to save files, store contacts, and maybe use apps to write docs or spreadsheets. I havent found a use for it myself
<compdoc> *cloud
<vraa> does anyone have any experience with ubuntu canonical server support?
<gvandeweyer> compdoc: i feel the same, but it looks easy to implement (compared to cluster design)
<compdoc> a cluster is a group of servers that prevent interruption of service from the failure of one server. I dont think the cloud is the same kind of thing
<compdoc> clouds can use clusters, certainly
<hallyn> hm, i've been trying the 'test memory' option in all the livecds, but it doesn't seem to ever give me the full memtest86 i expect...  just blanks for 2 seconds and then comes back to the grub menu
<hallyn> what am i doing wrong?
<AndyGraybeal> should I ever be worried about cron not working?  does it ever not work?  I'm depending on it to rsync data.
<kirkland> smoser: around?
<azizLIGHTS> is it ok to install joomla, oscommerce, and zen-cart on same machine? or should i seperate the vm into 3 snapshots
<hallyn> kirkland: can you try booting an ubuntu livecd in kvm and running memtest?  does it actually run memtest?
<patdk-wk> azizLIGHTS, it depends on how paranoid you are about one of them getting hacked, and whether you care about cross-contamination
<kirkland> hallyn: let me try ....
<azizLIGHTS> im just testing right now
<azizLIGHTS> its not live
<kirkland> hallyn: nope, just reboots
<hallyn> kirkland: hrmph
<hallyn> kirkland: a rhel cd did the same thing for me.
<hallyn> kirkland: doh, i'm an idiot
<kirkland> hallyn: qemu problem then?
<hallyn> yeah, the very one that i thought was responsible for jdstrand's bug
<hallyn> (see http://www.spinics.net/lists/kvm/msg47235.html)
<hallyn> I guess I'll file a separate bug, pull the bugfix, and then ask jdstrand if it fixes his problem too :)
<patdk-wk> how do I make lvm forget about vg's that no longer exist?
<hallyn> patdk-wk: not sure, but in the past I've had to dd from if=/dev/zero into the first bit of the old vg bc the tools were finding non-deleted header info, iirc
<patdk-wk> hallyn, no, the drive doesn't exist anymore
<patdk-wk> just lvm keeps attempting to locate it
<hallyn> then i guess you prolly can't write zeros into it :)
<patdk-wk> :)
<azizLIGHTS> o crap
<azizLIGHTS> i was saving vmware snapshot and ubuntu says BUG: soft lockup - CPU#0 stuck for 112s
<hallyn> patdk-wk: so, you had a vg spanning multiple partitions, one ofthe partitions is gone, and lvm keeps trying to find the partition that's gone?
<patdk-wk> nope
<azizLIGHTS> is it ok or what...
<patdk-wk> I had a vg on a single drive, that drive is gone (cause I was recovering it)
<patdk-wk> but vgs throws out a crapload of "read failed after 0 of 4096 at 0: input/output error"
<hallyn> patdk-wk: hm, does 'vgremove' help at all?
<patdk-wk> hallyn, nope, just says, not found
<smoser> kirkland, here now
<kirkland> smoser: okay, so ssh-import-id is in the archive
<kirkland> smoser: wanna propose your merge with your getopt rework?
<kirkland> smoser: and i'll review sponsor upload
<smoser> ok.
<patdk-wk> hallyn, seems to be because I didn't realize it, and never did a vgchange -an on it before I removed it
<hallyn> hm, you'd think that'd be fixable after the fact...
<doko> eucalyptus guys: please change the order of the recommends in eucalyptus-walrus, bittornade should go first
<patdk-wk> hallyn, evil, but deleting the stuff out of /dev and /dev/mapper worked
<hallyn> some say evil, I say handy
<patdk-wk> well, evil, cause it's just not a clean or nice way :)
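For anyone hitting the same stale-VG situation later: deleting nodes out of /dev and /dev/mapper works, but LVM and device-mapper have commands for exactly this. A minimal sketch, assuming a hypothetical VG named `oldvg` with an LV mapping `oldvg-data`; `RUN=echo` turns every command into a harmless dry run:

```shell
# Clean up LVM state for a VG whose backing disk no longer exists.
# "oldvg" and "oldvg-data" are hypothetical names; RUN=echo makes this a dry run.
RUN=echo

# Drop references to physical volumes that are gone:
$RUN vgreduce --removemissing oldvg

# If stale device-mapper entries remain, remove them cleanly
# instead of deleting nodes in /dev/mapper by hand:
$RUN dmsetup remove oldvg-data
```

As patdk-wk notes, running `vgchange -an` on the VG before pulling the disk avoids the problem entirely.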
<smoser> kirkland, i just pushed to lp:ssh-import-id
<macno> I used vmbuilder for the first time this afternoon, it's a great tool!
<kirkland> smoser: sweet, thanks
<smoser> where do you want to go from there ?
<macno> I just have a couple of questions:
<macno> 1) is it possible (or what do I have to look at) to also add an ipv6 address ?
<robbiew> kirkland: zul: Daviey: SpamapS: JamesPage: hallyn: smoser: FYI.. RoAkSoAx will be joining us in Dallas next week ;)
<zul> sweet!
<kirkland> robbiew: \p/
<kirkland> RoAkSoAx: welcome, dude!
<RoAkSoAx> kirkland: thanks man :)
<Daviey> robbiew, \o/
<macno> 2) how can I set the disk type block ?
<uvirtbot> New bug: #697752 in qemu-kvm (main) "memtest won't run" [Medium,Confirmed] https://launchpad.net/bugs/697752
<TREllis> RoAkSoAx: \o/
<kirkland> hallyn: the memtest thing ... has that always been there, or is that a recent regression?
<RoAkSoAx> kirkland: btw... got hold up with PowerNap dev... ended up not having a laptop the last couple weeks... and I guess next week I'll resolve some doubts
<kirkland> RoAkSoAx: okay, I'll bring a wattmeter
<kirkland> RoAkSoAx: and I'll have a few spare laptops
<RoAkSoAx> kirkland: cool!!
<azizLIGHTS> im getting some funny characters on putty via ssh?
<azizLIGHTS>  Ã¢Ã¢cron
<azizLIGHTS> instead of cron
<azizLIGHTS> what to do
<Datz> azizLIGHTS: try: settings->translation->UTF-8
<azizLIGHTS> i dont have a gui
<azizLIGHTS> are you talking about putty?
<Datz> on putty you said
<Datz> right click
<Datz> then you'll need to refresh the screen
<T3CHKOMMIE> hello all, im trying to change the default page loaded by apache2. i have /var/www/index.html and identity.html. i want to keep both files but i want apache to default to identity.html before it looks for index.html. where do i configure this? thanks for the help!
<KurtKraut> T3CHKOMMIE, your question seems a bit odd for me. Could you explain me why you want to do this?
<smoser> kirkland, did you want me to make a branch from lp:ubuntu/ssh-import-id ? and propose merge ? how are you expecting to keep the two related (lp:ssh-import-id and lp:ubuntu/ssh-import-id)
<T3CHKOMMIE> my professor says so. honestly i would overwrite the index.html but they want me to change apache default settings and i cant find it for the life of me.
<kirkland> smoser: nah, i trust you
<kirkland> smoser: you have commit access to that project/branch
<smoser> kirkland, but not to the ubuntu
<kirkland> smoser: i'll just need to sponsor/upload for you
<kirkland> smoser: right
<Datz> T3CHKOMMIE: probably a question for #http. But I think in apache config you can set it to default to something other than index.html
<smoser> i already commited to "upstream"
<kirkland> smoser: just keep the changes upstream
<kirkland> smoser: perfect
<kirkland> smoser: i want to do some testing, and i'll get it uploaded today
<T3CHKOMMIE> Datz, do you know where that conf is? i have an empty httpd.conf file and if i add
<T3CHKOMMIE> "DefaultIndex identity.html"
<T3CHKOMMIE> nothing changes.
<KurtKraut> T3CHKOMMIE, Datz is right. You can even set the priority of default files, where you can set that identity.html should be served first even if there is an index.html, but this is still a quite odd scenario. I can't figure out who would need such a thing and for what purpose.
<KurtKraut> T3CHKOMMIE, if your http.conf is blank Apache was installed incorrectly.
<KurtKraut> T3CHKOMMIE, or you're looking for the wrong file in the wrong folder.
<T3CHKOMMIE> crap....
<T3CHKOMMIE> its the httpd.conf file in  /etc/apache2 right?
<T3CHKOMMIE> or is there another i dont know about.
<KurtKraut> T3CHKOMMIE, in Ubuntu, yes. In other Linux distributions it might be placed in different paths.
<T3CHKOMMIE> dang, is there a way i can put what is supposed to be there?
<Pici> No. /etc/apache2/httpd.conf is blank on normal Ubuntu apache2 installs.
<cjwatson> RoyK: you won't generally find information about current development in lucid.  ssh-import-id was added in maverick
<Pici> T3CHKOMMIE: We use a per-site configuration;. See /etc/apache2/sites-enabled/
<T3CHKOMMIE> Pici, thanks. anything in particular i should be looking for?
<Datz> T3CHKOMMIE: you probably have all that info in apache2.conf.. although I'm not finding what I was looking for either :p
<Pici> T3CHKOMMIE: The sites you have enabled will have config files in that path
<T3CHKOMMIE> ok
<Pici> T3CHKOMMIE: You could even drop a single .htaccess file in /var/www/ if you wanted to just modify the behavior for that one path. Its up to you how you want to do it.
<T3CHKOMMIE> i just want apache to load /var/www/identity.html when i go to localhost rather than /var/www/index.html, stupid i know but its a requirement for class :S
<Datz> T3CHKOMMIE: it's in mods-available
<Datz> not sure where though :P
<Pici> What is in mods-available?
<T3CHKOMMIE> mods available.... ok lemme check that too. thanks Datz
<Pici> mods-available isn't really relevant to this simple change.
<shauno> I'd just rgrep DirectoryIndex in /etc/apache2.  Chances are all will become clear pretty quickly
<Datz> T3CHKOMMIE: I think: dir.conf
<Pici> T3CHKOMMIE: You probably have 000-default in /etc/apache2/sites-enabled/
<Datz> Pici: ok..
<T3CHKOMMIE> ok i changed /etc/apache2/sites-enabled/000-default and added DefaultIndex identity.html index.html under the document root /var/www and that seemed to work.
<Pici> T3CHKOMMIE: Exactly :)
<T3CHKOMMIE> shazam! thanks guys i uber appreciate it! thats taken everyone almost 4 hours in my class to figure out
<T3CHKOMMIE> awesomeness, now off to computer architecture! thanks again!
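For the record, the stock Apache 2 directive for this is `DirectoryIndex` (from mod_dir; the default list lives in dir.conf under mods-enabled), not `DefaultIndex` — if the latter appeared to work, something else likely changed too. A minimal per-site sketch, using the paths from the discussion above:

```apache
# In /etc/apache2/sites-enabled/000-default (or a .htaccess in /var/www):
# serve identity.html first, falling back to index.html
<Directory /var/www>
    DirectoryIndex identity.html index.html
</Directory>
```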
<RoAkSoAx> smoser: were you able to test the fix for bug #680138
<uvirtbot> Launchpad bug 680138 in testdrive "fails with "Invalid cross-device link" if cwd is not same filesystem as $HOME" [Low,Fix committed] https://launchpad.net/bugs/680138
<azizLIGHTS> im installing 10.10 server
<azizLIGHTS> why does vmware say when you are done and the operating system boots up, click i finished installing
<azizLIGHTS> why
<azizLIGHTS> what will it do
<Datz> probably a question for #vmware
<azizLIGHTS> sorry
<Datz> np, (I don't know the answer) :P
<smoser> RoAkSoAx, i suggested an alternative solution.
<RoAkSoAx> smoser: awesome! Thanks!
<hallyn> kirkland: i don't think 0.12.5 has that memtest problem.  do you have a maverick box to check on?
<RoAkSoAx> smoser: Ok, I don't think that will work because that cmd will be split later to be used by a subprocess in the GTK, and as far as I can remember having that type of command made subprocess fail
<RoAkSoAx> i'm testing it now though
<smoser> i don't know what you're doing in the gtk. it works from the command line.
<uvirtbot> New bug: #697802 in samba (main) "cannot log on to shared folders on this or other server" [Undecided,New] https://launchpad.net/bugs/697802
<RoAkSoAx> smoser: ok. It indeed works well with the command line, but fails with the GTK, because of subprocess. Will work on it to provide a better solution
<smoser> well, you have a general issue if you're consuming the response from that function in two different ways
<smoser> RoAkSoAx, see subprocess.Popen(shell=True)
<smoser> http://docs.python.org/library/subprocess.html
<smoser> i'd ditch the 'split' and just use that.
<RoAkSoAx> smoser: well the difference is that in the command line it only runs as "os.system(cmd)", while in the gtk it uses a subprocess to run cmd, especially to obtain the progress and stuff
<RoAkSoAx> smoser: and as far as I can remember, when I first did it, when using shell=True, I couldn't grab the progress. But will give it a try since that part has been changed
<smoser> i just tested this, and it shows me progress in the gui
<smoser> http://paste.ubuntu.com/550775/
<RoAkSoAx> smoser: ok cool then!
<RoAkSoAx> smoser: works great, thank you!!
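The shell=True approach smoser suggests reduces to a small sketch: hand subprocess the unsplit command string and consume stdout line by line for the progress display. The command below is a stand-in, not testdrive's actual invocation:

```python
# Run a shell command via subprocess(shell=True) and stream its output
# incrementally, e.g. to drive a progress bar in a GTK UI.
import subprocess

def run_with_progress(cmd):
    """Run `cmd` through the shell, yielding each output line as it appears."""
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    for line in iter(proc.stdout.readline, b''):
        yield line.decode().rstrip('\n')
    proc.wait()

print(list(run_with_progress('echo step1; echo step2')))  # → ['step1', 'step2']
```

With shell=True there is no need to `.split()` the command first, which sidesteps the quoting problems RoAkSoAx ran into.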
<deadsmith> trying to remake the ISO for amd64-server; can I replace /boot/grub/efi.img with a new grub.efi ?  Or is this not even the boot file for EFI systems?
<deadsmith> shorter question:  does anyone know the mkisofs flags used to make the amd64-server ISO ?
<deadsmith> n/mind... found it.
<CheetoBandito> I'm having an interesting issue in 10.04... If I request packets from a specific group in one program, and a different group in another program... I get packets from both groups in both programs.
<CheetoBandito> If I compile and run the same scenario on Windows, Solaris, and other distros I get the intended behavior, that the packets each program sees are only the ones they requested.
<kirkland> Daviey: around?
<kirkland> Daviey: would you mind dropping me a few instructions as to setting up that video chat flash thing we used yesterday
<jdstrand> hallyn: hey, so today I seem to be hitting bug #694029 pretty regularly, with 0.12.5+noroms-0ubuntu7 on natty
<uvirtbot> Launchpad bug 694029 in qemu-kvm "[natty] kvm guests become unstable after a while" [Medium,Confirmed] https://launchpad.net/bugs/694029
<jdstrand> hallyn: how do I disable ksm?
<jdstrand> kirkland: hey, you may actually know that one since you implemented it in Ubuntu iirc ^
<kirkland> jdstrand: edit /etc/default/qemu-kvm
<jdstrand> easy enough
<jdstrand> kirkland: thanks!
<kirkland> jdstrand: if that fixes your problems, try enabling it, but adjusting the sleep milliseconds
<jdstrand> kirkland: this bug really stinks :(
<kirkland> jdstrand: there have been complaints that the default sleep (20ms) is too aggressive
<jdstrand> kirkland: will do
<kirkland> jdstrand: i don't disagree, but i also haven't seen a solid suggestion as to what a sane default would be
<jdstrand> kirkland: would the aggressive default cause flaky guest behavior?
<kirkland> jdstrand: if someone told me it did, i'd be inclined to believe them
<jdstrand> kirkland: oh, I am not suggesting anything. it has just been very annoying for me. I only just started looking at ksm
<kirkland> jdstrand: but i've not heard that yet
<jdstrand> s/would/could/
<jdstrand> (and therefore have no opinions)
<jdstrand> kirkland: ^
<kirkland> jdstrand: okay
<kirkland> jdstrand: i don't know of any issues with ksm causing instability
<jdstrand> kirkland: I've only started hitting this bug lately, and have never had a problem with the default
<jdstrand> kirkland: well, by 'lately' I mean with the natty kernel
<jdstrand> which has probably been a month and a half (used the maverick kernel on natty for quite a while before that)
<jdstrand> disabling ksm didn't help
<deadsmith> still no successful EFI boot.  Does anyone know where there's a description of the files on the ISO?
<deadsmith> or otherwise have any good ideas about what I should be reading to make a custom EFI install ISO?
<cole> deadsmith: did you see this? http://tinyurl.com/24alt9k
<cole> deadsmith: didn't scroll up, just saw you had trouble booting via efi
<deadsmith> cole: no worries, let me look this over and see if it mentions something I've missed...
<deadsmith> cole:  I have a working grub2 for EFI, I'm just trying to get it on the bleeding CD
<cole> ah!
<deadsmith> cole:  I was able to install the system by bootstrap-installing with rEFIt and an OSX partition (it's an Xserve)....
<deadsmith> cole:  but I'm trying to get a reusable install image..
<cole> deadsmith: i assume you used rsync --one-file-system --exclude /proc ??
<deadsmith> cole:  no... at which step?  making the ISO?  I have the amd64-server loopback mounted on a linux machine and I'm using mkisofs to remaster it...
<jdstrand> hallyn: summary of bug #694029. qemu-kvm 0.12.5+noroms-0ubuntu7, using an ide disk and disabling ksm made no difference
<uvirtbot> Launchpad bug 694029 in qemu-kvm "[natty] kvm guests become unstable after a while" [Medium,Confirmed] https://launchpad.net/bugs/694029
<hallyn> jdstrand: yeah, so basically any qemu on natty kernel is bad?
<jdstrand> hallyn: bottom line, yes
<jdstrand> I'm back to a maverick kernel
<hallyn> I've yet to reproduce it, but will try with your suggestion about maverick update in one, and (whatever) in another kvm
<jdstrand> (I have to actually get work done)
<jdstrand> hallyn: what chipset do you have? I'm using i7 here
<hallyn> yeah, i have core i7 laptop
<jdstrand> hallyn: it is hard to find a simple reproducer.... :\
<jdstrand> hallyn: not sure it would make any difference, but I am using qcow2 files, the one upgrading is a snapshot qcow2
<jdstrand> hallyn: the one installing ubuntu desktop is qcow2, but not snapshotted yet
<jdstrand> I have to believe it is going to be a more widespread problem when people upgrade...
<hallyn> jdstrand: how do you install ubuntu minimal anyway?  i think i'll try apt-get install ubuntu-desktop from ubuntu server?
<jdstrand> hallyn: I use vmbuilder
<hallyn> jdstrand: have all of these been built using vmbuilder?
<jdstrand> hallyn: yes
<hallyn> hm.  potentially relevant
<hallyn> jdstrand: thanks, i'm just gonna play a lot, i'll stop bugging you :)
<jdstrand> hallyn: these are the packages I install after using vmbuilder: screen ubuntu-desktop vim openoffice.org
<jdstrand> hallyn: the precise method I use to generate VMs is in https://wiki.ubuntu.com/SecurityTeam/TestingEnvironment
<hallyn> kthx.  after an attempt with installs from iso, i'll try from vmbuilder
<jdstrand> hallyn: sure! thanks for looking at it
<jdstrand> it has been a real hair-puller for me
<Hilikus> hey guys
<Hilikus> can someone help me configure alsa for 5.1. i try the sound test and i only hear front left and right. i don't use pulse audio or gnome
#ubuntu-server 2011-01-06
<twb> I just got bit on the arse by a permission change on vmlinuz files (kernels) on my netboot server.
<twb> Just to rule this out: was there any change in lucid where new kernels are supposed to be (say) r-------- instead of rw-r--r-- ?
<twb> Never mind, I can demonstrate the problem is not in ubuntu's code
<twb> Wait, no I can't
<twb> http://paste.debian.net/103860/ is what I'm seeing
<mrroth> hi
<mrroth> anyone know if there is any type of thing like the drobo raidz system for ubuntu or linux
<mrroth> that is open source and free
<mrroth> but works the same way
<JanC> mrroth: what's drobo "raidz" ?
<mrroth> drobo is the nas, that allows you to add disks of diff sizes
<mrroth> and still have redundancy
<mrroth> and allow the raid array to grow in size
<twb> If by "RAIDZ" you mean ZFS -- no, you can't have a good ZFS on Linux.
<JanC> it's some sort of RAID 5 it seems
<twb> I've seen RAIDZ before sold as "ZFS uses RAID5, but fixed to be more gooderer"
<JanC> but different
<mrroth> what does drobo call it
<mrroth> so drobo is basically zfs
<mrroth> "http://www.drobo.com/products/drobo-fs.php"
<JanC> actually, it might be similar to btrfs
<twb> Well, ZFS *is* similar to btrfs :-)
<mrroth> "Data Robotics BeyondRAID solves these two critical issues, delivering flexibility without downtime. Built on an advanced storage virtualization platform, BeyondRAID chooses the correct RAID algorithm based on data protection needs at any given moment. Since the technology works at the block level, it can write blocks of data that alternate between data protection methodologies."
<mrroth> oh
<mrroth> I wanted to see if I could bulid a ubuntu server that had the same features
<twb> Ubuntu cannot guarantee no downtime
<mrroth> oh
<twb> That's, uh, technically impossible
<mrroth> that's what their website said
<mrroth> :(
<mrroth> it seems illogical
<JanC> drobo can't guarantee that either  ;)
<twb> Right
<mrroth> oh
<mrroth> but at least the feature to keep adding new disks or replacing a smaller disk with a bigger one
<mrroth> to achieve more space
<mrroth> they offer that. I wonder if it was just some repackaged open source stuff, or maybe there was something in the open source community that did the same
<twb> I would say: buy two 2TB disks and put them in a system that can take at least three disks.
<JanC> you can do that with normal linux software raid, but the smallest disk will decide how much of every disk will be used then...
<twb> RAID1 them.  At upgrade time, buy two nTB drives and migrate to a new RAID1 of those.
<JanC> you can always move from RAID 1 to RAID 5
<twb> This is a simple, predictable and comprehensible solution.
<mrroth> oh
<mrroth> so at upgrade time, just copy data from the mirror to the bigger nTB disk
<mrroth> then remove the two smaller disks
<mrroth> and mirror the second bigger nTB disk
<twb> mrroth: you add the new disks to the array, wait for the sync to finish, remove the old disks, then tell mdadm to grow to the new disks' size
<mrroth> oh so I can remove disk from the raid 1 array
<JanC> that will allow you to do everything without downtime
<mrroth> sweet
<mrroth> that's actually a good idea
<mrroth> and mdadm will expand the array to the new disks' size
<mrroth> but I would have to have four bays right twb?
<twb> Technically you can do it with only two bays
<twb> It's just a little less fiddly if you can put all four disks in at once
<mrroth> how do i get to add two more members to the array
<mrroth> or I remove one smaller disk
<mrroth> add the bigger one
<twb> mrroth: you forcibly degrade the array
<mrroth> and when it done synching
<mrroth> oh so you degrade by removing one of the smaller disk from the raid 1 array
<mrroth> put the bigger one in, it will sync and have the same space as the smaller disk
<mrroth> then remove smaller disk, add second bigger disk
<JanC> always make sure you have backups of important stuff when you do such things though  ;)
<mrroth> rebuild the array
<mrroth> and then use mdadm
<twb> JanC: well, the backup is one half of the array you degraded and removed :-)
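The upgrade procedure twb describes can be sketched as a dry run. All device names are hypothetical (sda1/sdb1 the old disks, sdc1/sdd1 the new, larger ones), and `RUN=echo` only prints each step instead of touching any array:

```shell
# In-place RAID1 capacity upgrade, as discussed above. Device names are
# made-up examples; RUN=echo turns every command into a harmless echo.
RUN=echo

# 1. Add the new, larger disks and mirror onto them (temporarily a 4-way mirror):
$RUN mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1
$RUN mdadm --grow /dev/md0 --raid-devices=4
# ... wait for /proc/mdstat to show the resync has finished ...

# 2. Retire the old disks and shrink back to a 2-way mirror:
$RUN mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
$RUN mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
$RUN mdadm --grow /dev/md0 --raid-devices=2

# 3. Let the array use the new disks' full size, then grow the filesystem:
$RUN mdadm --grow /dev/md0 --size=max
$RUN resize2fs /dev/md0
```

As JanC says, take a backup first anyway; one of the removed old disks is itself a point-in-time copy of the array.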
<mrroth> IS THERE A ubuntu based NAS appliance?
<mrroth> yea because it mirrors
<mrroth> but always have an offsite backup
<mrroth> raid is not backup :)
<JanC> there are several Debian-based ones
<twb> mrroth: probably, but Ubuntu itself is a generic OS, not a NAS OS.
<mrroth> oh there's a debian based one hmm
<mrroth> oh
<mrroth> JanC which debian one do you recommend, or I can google debian + nas
<JanC> I think FreeNAS is now Debian-based?
<twb> mrroth: normally one would roll their own setup on top of Ubuntu or Debian to get a NAS
<mrroth> oh http://www.debonaras.org/
<twb> http://wiki.debian.org/DebianPureBlends
<mrroth> freenas is what I was using
<twb> A NAS is not a very complicated thing
<twb> Just install ubuntu-server and then install (say) samba and nfs-kernel-server.
<mrroth> OH I see
<JanC> well, the nice thing about some NAS OSes is that they automatically detect additional disks etc.
<mrroth> but I would need a frontend webbased management right
<mrroth> oh they do hmm
<mrroth> I like drobo but it 600 bucks
<mrroth> and NO disk
<mrroth> the NO disk part is fine, but the cost is huge
<twb> Unless you're handing the system on to idiots, a CLI is the Right Thing.
<mrroth> true that true
<mrroth> and I can use ubuntu software raid
<mrroth> :)
<mrroth> I want three things: ssh (sftp), samba, NFS, and a web-based torrent client, to download linux ISOs
<mrroth> :)
<twb> linux isos my ass
<JanC> most commercial home NAS boxes have torrent clients already  ;)
<mrroth> oh
<JanC> even many home routers have that nowadays, if they have a USB port
<mrroth> yea but no redundancy :(
<mrroth> hmm
<mrroth> thanks dude
<JanC> I have a 4-bay Netgear device which cost about 300-350 EUR without disks IIRC, but I think it was an end of series or something  ;)
<twb> mrroth: two USB sticks, then :P
<jforman> mrroth: if you want a file server with backup, try what i do...raid-1 between two disks that export via nfs and samba, and use rdiff-backup to copy that data to a 3rd hard drive on the same machine
<mrroth> JanC does it allow extra APPS, or is it just for samba
<mrroth> right now I have three 500 gig disk
<JanC> it runs some sort of minimal Debian-derivative and there is a toolchain available
<twb> jforman: not rsnapshot? :P
<mrroth> one USB enclosure
<jforman> twb: never heard of it.
<JanC> it has a SPARC CPU though, so not exactly mainstream
<mrroth> I would like to mirror both disks, and just use the enclosure for backup
<twb> jforman: it's a perl shim for cron+rsync+cp -al
<jforman> twb: yeah, thats basically what rdiff-backup is
<twb> Yeah, I thought so.
<mrroth> JanC is your 4-bay Netgear device fast (throughput)?
<twb> who cares
<mrroth> oh
<JanC> it's certainly not the fastest, but in general fast enough I guess
<mrroth> cool
<Mark_> Hi, i just installed a server with lamp to find PHPmyadmin wasn't included. then ln'd it from my usr/share/ to my www. But now i get error code 1045 anyone know what this means?
<Mark_> nevermind i needed to install mcrypt
<JohnFury> ahoy... people of ubuntu
<JohnFury> anyone who successfully created a ubuntu image for openstack compute?
<thewrath> how does one get on the ubuntu server team?
<blueyed> I wonder why this build fails:
<blueyed> http://launchpadlibrarian.net/61691370/buildlog_ubuntu-lucid-i386.php5_5.3.5~snap201101060130-0~blueyedppa1_FAILEDTOBUILD.txt.gz
<blueyed> error seems to be: Zend/zend_stream.c:239: error: 'PROT_READ' undeclared (first use in this function)
<blueyed> I am basically using a mashup of the Debian/Ubuntu packaging and the PHP 5.3 snapshot.
<blueyed> This worked for 5.3.4 snapshots some weeks before, but now fails.
<JohnFury> hello... anyone who successfully created a ubuntu image for openstack compute? Please...
<blueyed> This is somehow triggered by the apache2filter package, where HAVE_SYS_MMAN_H is undef because of "checking for sys/mman.h... no" during configure.
<JohnFury> how did you guys do it? any documentation that tells the steps?
<twb> JohnFury: I do not know what an "openstack compute" is.
<theamazingbeat> hi is this the channel to be for openssh issues
<theamazingbeat> anyone?
<c0nv1ct> theamazingbeat, it is the channel for server issues, just ask
<theamazingbeat> okay So I have a sshkey dilema
<theamazingbeat> you can follow up: #ubuntu-beginners-team
<theamazingbeat> err wrong link
<theamazingbeat> sorry
<theamazingbeat> http://ubuntuforums.org/showthread.php?t=1659447&page=2
<theamazingbeat> there you can follow up^
<theamazingbeat> anyway I have tried basically everything to set this thing up and it simply just won't work out the way I want
<c0nv1ct> theamazingbeat, you should be able to just copy or append the key to ~/.ssh/authorized_keys on the server side and it should work
<theamazingbeat> well when you use puttyGen doesn't it edit the key and change it
<c0nv1ct> assuming /etc/ssh/sshd_config allows for RSAAuthentication and PubkeyAuthentication
<theamazingbeat> better question, should i generate a key on my windows machine, or should i take the key already generated on my ubuntu machine
<theamazingbeat> they are and PasswordAuthentication is not allowed
<twb> My NFSv3 server is using 10% of the CPU just for lockd at the moment.
<c0nv1ct> it really shouldn't matter, you should be able to import the private key into putty or export the public key to the server
<twb> That's a bit excessive; how do I "restart" lockd (it's a kernel thread)?
<twb> 10% of a quad-core Xeon, that is.
<c0nv1ct> i don't use putty that much, i use SecureCRT on the rare occasion i am on a windows box, so i cant help  much with that
<theamazingbeat> should i edit the authorized_keys with VIM editor or text editor? or does it not matter
<theamazingbeat> cuz i hate VIM editor anyway
<c0nv1ct> lol it shouldn't matter, and you don't need an editor really
<c0nv1ct> unless you have more than one key, you can just copy the public key over and rename it to authorized_keys
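c0nv1ct's point, as a safe-to-run sketch: the public key just needs to be appended to ~/.ssh/authorized_keys with the permissions sshd expects. This version uses a scratch directory standing in for the server-side home, and a placeholder key line, so it touches nothing real:

```shell
# Install a public key the way sshd expects. $DEMO stands in for the
# server-side home directory; the key line is a placeholder, not a real key.
DEMO=$(mktemp -d)
echo "ssh-rsa AAAAexamplekey user@laptop" > "$DEMO/id_rsa.pub"

install -d -m 700 "$DEMO/.ssh"                            # sshd wants 700 here
cat "$DEMO/id_rsa.pub" >> "$DEMO/.ssh/authorized_keys"    # append, don't overwrite
chmod 600 "$DEMO/.ssh/authorized_keys"                    # and 600 on the file

grep -c '^ssh-rsa' "$DEMO/.ssh/authorized_keys"           # → 1
```

Appending (>>) rather than copying matters once a second key is added; and as theamazingbeat discovered, the file must live in ~/.ssh/, not in the home directory itself.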
<theamazingbeat> okay i have 2 files in my ~/.ssh/authorized_keys.... one named: aithorized_keys.swp and one named .authorized_keys.swo?
<theamazingbeat> OMG
<theamazingbeat> i fixed
<theamazingbeat> it
<c0nv1ct> those other two look like temporary files from having an editor open to me
<theamazingbeat> lol for some reason my authorized_keys file got put into my Home folder
<theamazingbeat> so all i did was move it to the .ssh folder
<c0nv1ct> lol, there ya go
<theamazingbeat> and BAM
<theamazingbeat> dude i have been working this since thursday
<theamazingbeat> almost 1 week
<theamazingbeat> cuz I am brand new to ubuntu
<c0nv1ct> lol, i've been using rsa key auth for years
<c0nv1ct> it took me a while the first time too
<theamazingbeat> well now i know how to do it
<theamazingbeat> Persistence pays off
<c0nv1ct> now the trick is not to lose the private key ;)
<theamazingbeat> ya ima triple backup that file
<theamazingbeat> While I have you I just have one more question. How do I basically map my laptop (windows box) to my linux so if I am not home I can access both drives via SFTP
<theamazingbeat> map my laptop drive*
<c0nv1ct> best bet for that would probably be cifs
<c0nv1ct> just basic windows file sharing
<c0nv1ct> though for a server you should really have the laptop map a drive off the server instaed
<theamazingbeat> Basically what I want is if both my machines are at home, and I go to a remote location and connect to my linux box, how do i get to see my laptop drive as well at the same time
<c0nv1ct> you can just share a folder in windows like you normally would and map that to a directory in linux
<theamazingbeat> ya how do i do that in linux?
<c0nv1ct> check out both the samba server and client info ubuntu offers
<theamazingbeat> :P
<c0nv1ct> doing a client on the server is a bit more complicated because it is all command line
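Mapping the Windows share into the server from the command line, as c0nv1ct describes, comes down to one cifs mount. A dry-run sketch (hostname, share name, and username are made up; `RUN=echo` prints instead of mounting):

```shell
# Mount a Windows (CIFS/SMB) share into the Linux filesystem tree.
# "//laptop/share" and "winuser" are hypothetical; RUN=echo makes this a dry run.
RUN=echo
$RUN mkdir -p /mnt/laptop
$RUN mount -t cifs //laptop/share /mnt/laptop -o username=winuser,uid=1000
```

On Ubuntu this needs the cifs-utils (formerly smbfs) package for mount.cifs; the reverse direction, serving Linux files to Windows, is samba's job as discussed above.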
<JohnFury> how did you guys create the image of ubuntu for Openstack?
<JohnFury> the one that's on this site: http://wiki.openstack.org/RunningNova#Get_an_image
<twb> I have ipv6 disabled at the kernel level.
<twb> However, getent hosts (DNS) still returns IPv6 entries.
<twb> Can I tell nsswitch.conf that its DNS resolver should ignore AAAA records?
<WinstonSmith> hi ppl :) made myself an udev rule so that it loads my usb-wifi-driver. works well but i have to replug the device if i reboot. could anybody tell me where i have to make a udev rule so it runs at startup?
<WinstonSmith> or is there a better way?
<twb> udev shouldn't care whether the device was inserted at boot time
<WinstonSmith> twb, so there is no udev rule i could create that checks devices at boot time? so it would be easier to just use rc.local?
<noaXess> hey all.
<noaXess> to install de_CH locale i need just the package language-pack-de and it's dependencies, right?
<andol> noaXess: Well, at least as long as you want the de_CH UTF-8 locale. These are the locales setup by that package - http://paste.ubuntu.com/551004/
<uvirtbot> New bug: #698028 in backuppc (main) "Please merge backuppc 3.2.0-2 (main) from Debian unstable (main)" [Undecided,Confirmed] https://launchpad.net/bugs/698028
<noaXess> andol: yeah.. thats what i want :).. and if iso?
<noaXess> andol: how you checked that?
<andol> noaXess: Not sure if there's a package which will set that up for you, but you can always stick a file in yourself, under /var/lib/locales/supported.d
<andol> noaXess: you can check available locales by running "locales -a"
<noaXess> andol: do i need also language-support-de?
<noaXess> locale -a ;)
<noaXess> w/o s
<noaXess> äh no.. no need for language-support-de cause that will install.. openoffice.. uaaa
<twb> noaXess: -base
<twb> noaXess: or you can simply request a specific locale... "sudo update-locale de_DE.UTF-8", IIRC
<twb>         chroot $template_dir locale-gen ${LANG:-en_US.UTF-8}
<noaXess> do i need to restart anything after installing language-pack-de?
<twb>         chroot $template_dir update-locale LANG=${LANG:-en_US.UTF-8}
<twb> That's what I do, it provides only the locale I ask for
<twb> Note: I only use CLIs, so it may not be appropriate for a desktop
<noaXess> twb: i don't need to change the language.. de_CH is just for the correct decimal and thousands separator of numbers.. also for dates..
<twb> noaXess: just set LC_DATE and LC_NUMERIC, then
<noaXess> twb: we are in ubuntu-server ;)... so no desktop
<twb> Sorry, LC_TIME not LC_DATE
<noaXess> twb: set it in /etc/default/locale?
<twb> It's all in the locale(1) manpage
<twb> noaXess: in your .profile, unless you want it to be system-wide
<noaXess> twb: hm.. maybe just for one user.. that runs a specific server application, openerp
<twb> If you want it to be system-wide, try "update-locale LANG=en_US.UTF-8 LC_TIME=de_CH LC_NUMERIC=de_CH"
<noaXess> twb: thanks.. but i will test it just on that specific user.. is there also a command to do it just for a specific user instead of do it manually in .profile?
<twb> noaXess: not that I know of
<noaXess> twb: so manual way..
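The "manual way" twb describes can be sketched as follows: export only the time and numeric locale categories in the user's shell profile, leaving the system-wide LANG alone. A temp file stands in for the real ~/.profile here, and de_CH.UTF-8 is assumed to have been generated already (e.g. by language-pack-de-base):

```shell
# Per-user locale override, as twb suggests: set LC_TIME and LC_NUMERIC
# only, instead of changing LANG system-wide with update-locale.
profile="$(mktemp)"   # stand-in for the user's ~/.profile in this sketch
cat >> "$profile" <<'EOF'
export LC_TIME=de_CH.UTF-8
export LC_NUMERIC=de_CH.UTF-8
EOF
grep -c '^export LC_' "$profile"   # the two overrides just appended
```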
<azizLIGHTS> is it advisiable to put a template for joomla first, then install virtuemart or its ok to do in any order?
<zul> hggdh: can you see if the test rig is unbroken today?
<shaggy2> I have a problem: trying to install vhcs and got thisE: Package 'proftpd-mysql' has no installation candidate
<shaggy2> E: Unable to locate package libmd5-perl
<shaggy2> E: Package 'libperl5.8' has no installation candidate
<shaggy2> E: Package 'php4' has no installation candidate
<shaggy2> E: Unable to locate package php4-mcrypt
<shaggy2> E: Package 'php4-mysql' has no installation candidate
<shaggy2> E: Package 'php4-pear' has no installation candidate
<shaggy2> E: Package 'libsasl2' has no installation candidate
<shaggy2> E: Package 'apache2-common' has no installation candidate
<shaggy2> E: Package 'libapache2-mod-php4' has no installation candidate
<pmatulis> !paste | shaggy2
<ubottu> shaggy2: For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://tinyurl.com/imagebin | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<shaggy2> sorry
<shaggy2> do I need to paste to that now that it's there?
<pmatulis> shaggy2: no
<shaggy2> ok ummm just noticed I have php5 installed, thats trying to install php4 would that cause errors on my server?
<pmatulis> shaggy2: how are you installing vhcs?
<shaggy2> http://ubuntuforums.org/showthread.php?t=25722
<pmatulis> shaggy2: pastebin your sources.list file
<shaggy2> ok gimme a sec please
<pmatulis> shaggy2: also note that this forum post is from 2005
<shaggy2> http://paste.ubuntu.com/551072/
<shaggy2> I understand that, but it's the most recent that I could find for ubuntu, I don't trust any tutorials that are not on ubuntu forums as I found a few that tried to open security holes, as informed by a few people in the #ubuntu irc
<shaggy2> pmatulis: u still there?
<shaggy2> pmatulis: http://paste.ubuntu.com/551072/
<pmatulis> shaggy2: well downloading a shell script and letting it do whatever it wants on ubuntu is just not supported at all
<shaggy2> say what???
<pmatulis> shaggy2: do you have any idea what it's doing to your system?
<pmatulis> (wget http://www.siemens-mobiles.org/vhcs/vhcs.sh)
<shaggy2> well vhcs is meant to make management easy for a webhost
<shaggy2> no no no no I never went for that opt... I went for the other option to install manually
<shaggy2> I didn't trust it as it wasn't coming from ubuntu or the creators' (vhcs) website
<shaggy2> I would never use a shell script
<shaggy2> pmatulis: ok so can I continue with that tutorial or is it unsafe?
<shaggy2> pmatulis: keeping in mind I am not using the shell script
<pmatulis> shaggy2: i would drop it.  you are trying to install stuff in a non-standard way on ubuntu.  simply put, there is no vhcs ubuntu package
<shaggy2> just like there is no cpanel for ubuntu either, webmin does not have the features that these other 2 have
<pmatulis> shaggy2: webmin is not supported either
<pmatulis> !webmin
<ubottu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<shaggy2> which it did, being the reason I removed it
<shaggy2> so is there a system manager that is supported and can manage a Vhost?
<pmatulis> shaggy2: i don't know if the ebox framework can do what you want
<pmatulis> !info ebox
<ubottu> ebox (source: ebox): common library used by eBox platform modules. In component universe, is optional. Version 1.5-0ubuntu1 (maverick), package size 529 kB, installed size 3528 kB
<shaggy2> from what I find on google ebox is for ubuntu and xbox 360
 * shaggy2 confused
<pmatulis> xbox?
<shaggy2> google search = ubuntu 10.10 ebox
<shaggy2> top result = [ubuntu] Xbox Live and Ubuntu 10.10 - Ubuntu Forums
<pmatulis> shaggy2: anyway, you'll need to either take a different approach or risk the use of unsupported s/w + deal with the install problems
<shaggy2> ok but where could I find help on the different approach?
<popey> shaggy2: http://www.zentyal.org/ - ebox is now zentyal
<azizLIGHTS> what is the apache user and how do i chown /var/www/joomla15 to it
<cortex|sk> azizLIGHTS: sudo chown www-data:www-data -R /var/www/joomla15 ?
<azizLIGHTS> is that a answer? r question
<azizLIGHTS> *or
<azizLIGHTS> going to try it
<jforman> azizLIGHTS: you can find out the user who is running your web server by running 'ps auxww | grep apache'..the first column is the user running the process
<azizLIGHTS> youre right thanks
<azizLIGHTS> cortex|sk and jforman
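cortex|sk's and jforman's answers can be combined into one hedged sketch: discover the user the web server actually runs as, fall back to Ubuntu's default www-data if no apache2 process is found, and only then hand over the docroot (the chown itself is left commented out; "apache2" as the process name and the joomla15 path are the ones from the question):

```shell
# Find the user owning the apache2 process (first match wins); default
# to www-data, Ubuntu's packaged Apache user, when none is running.
web_user="$(ps -eo user=,comm= | awk '$2 == "apache2" {print $1; exit}')"
: "${web_user:=www-data}"
echo "$web_user"
# sudo chown -R "$web_user:$web_user" /var/www/joomla15
```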
<zul> can you run puppet and puppetmaster on the same server?
<gobbe> hmmh
<gobbe> you should be able, but im not 100 % sure
<RoyK> wtf is puppet(master)?
<gobbe> configuration tool
<gobbe> and central management
<RoyK> ah - that one...
 * RoyK forgot
<gobbe> however i dont see point to run puppet on puppetmaster
<RoyK> from http://bitfieldconsulting.com/puppet-tutorial <-- it should be pretty simple to run both on the same machine
<gobbe> yes it should
<mrroth> !backup
<ubottu> There are many ways to back your system up. Here's a few: https://help.ubuntu.com/community/BackupYourSystem , https://help.ubuntu.com/community/DuplicityBackupHowto , https://wiki.ubuntu.com/HomeUserBackup , https://help.ubuntu.com/community/MondoMindi - See also !sbackup and !cloning
<mrroth> !raid
<ubottu> Tips and tricks for RAID and LVM can be found on https://help.ubuntu.com/community/Installation/SoftwareRAID and http://www.tldp.org/HOWTO/LVM-HOWTO - For software RAID, see https://help.ubuntu.com/community/FakeRaidHowto
<guntbert> !askthebot > mrroth
<ubottu> mrroth, please see my private message
<RoyK> !zfs
<ubottu> For information concerning ZFS and Ubuntu, see: https://wiki.ubuntu.com/ZFS
<uvirtbot> New bug: #698210 in tomcat6 (main) "package tomcat6 6.0.28-2ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/698210
<azizLIGHTS> how do i do this command "sudo mv ~/VirtueMart_1.1.6_eCommerceBundle_Joomla_1.5.22/ /var/www" .... it makes a new dir called VirtueMart_1.1.6_eCommerceBundle_Joomla_1.5.22 ... but i want it to dump directly to /var/www
<gobbe> hmmh
<gobbe> you want to move data from ~/VirtueMart_1.1.6_eCommerceBundle_Joomla_1.5.22/ to /var/ww?
<azizLIGHTS> i want to move contents of that dir virtuemarket
<azizLIGHTS> and subdirs as they exist
<gobbe> go to the directory ~/VirtueMart_1.1.6_eCommerceBundle_Joomla_1.5.22/ and then sudo mv * /var/www
<azizLIGHTS> into /var/www and not /var/www/virtuemarket/
<azizLIGHTS> o
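gobbe's "cd into the dir and sudo mv * /var/www" works, with one caveat: a bare `*` skips dotfiles such as .htaccess, which a Joomla bundle may contain. A sketch that moves hidden entries too, using temp dirs in place of the real paths:

```shell
# Temp dirs stand in for ~/VirtueMart_.../ and /var/www in this sketch.
src="$(mktemp -d)"; dst="$(mktemp -d)"
touch "$src/index.php" "$src/.htaccess"
# find enumerates hidden entries and odd names that a bare * would miss
find "$src" -mindepth 1 -maxdepth 1 -exec mv -t "$dst" {} +
ls -A "$dst"
```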
<SpamapS> woot PPA stats
<SpamapS> clint@clint-MacBookPro:~/src/lptools$ ./get-ppa-stats cassandra-ubuntu stable
<SpamapS> cassandra	0.6.8-0ubuntu1	26
<SpamapS> kirkland: seen the PPA stats feature yet?
<SpamapS> doh
<lifeless> timing is everything
<RoyK> azizLIGHTS: the easiest way is to create a redirect from /var/www
<RoyK> or you could change the apache setting for that virtualhost
<seniorgregor> Hello, I am experiencing frequent system crashes with ubuntu-server (8.04, kernel 2.6.24-28-openvz) on many production servers.  The system hangs.  Is there a kernel guru around?  Here is a syslog extract of the bug: http://pastebin.com/P5pmLw2k
<gobbe> hmmh
<gobbe> are you admin of openvz-environment
<gobbe> or buying your machines as virtual machines
<seniorgregor> yes
<gobbe> i would checkout your NFS and openvz-environments CPU usages
<seniorgregor> it always happens on the same vz guest, always apache httpd.  CPU and NFS usages are high since it is a very busy website, but they are stable..  nothing out of control.
<deadsmith> anyone know when grub 1.99 will go into the installer ISO?
<genii-around> deadsmith: Probably April with Natty. apt-cache policy grub2 shows: Candidate: 1.99~20110104-2ubuntu1
<AndyGraybeal> is there an ubuntu image specifically for virtual machines?
<gobbe> yes there is
<gobbe> sorry, there was
<gobbe> cannot find it anymore
<RoyK> AndyGraybeal: there is a kernel for virtual machines, and there is minimal install from the boot menu for small VMs
<RoyK> AndyGraybeal: but the old JEOS has been abandoned
<RoyK> AndyGraybeal: under which hypervisor are you planning to run these VMs_
<RoyK> ?
<gobbe> yep, jeos was the thing i wass looking for him :)
<RoyK> it's available from the standard install disk
<RoyK> press F4 IIRC
<gobbe> ok
<BrixSat> what permissions should /var/www have so that the www-data user and proftpd user can read, write and execute?
<AndyGraybeal> RoyK: kvm
<AndyGraybeal> jeos yes
<AndyGraybeal> bad ass thank you guys.
<harrisonk> anyone here know how to turn on dynamic DNS in the service easydns?
<RoyK> fgfi
<harrisonk> RoyK: were you talking to me?
<RoyK> harrisonk: there are several dynamic dns systems out there
<harrisonk> RoyK: I know
<sparc> hey, is there a Cobbler analog for Ubuntu?
<sparc> where something can pxe boot, and get added to a list somewhere
<sparc> and then have an installation profile applied to it, so it can provision itself?
<kirkland> SpamapS: we're actually working on cobbler for ubuntu right now
<kirkland> sparc: ^
<sparc> aah ok
<kirkland> sparc: we do have something called "uec-provisioning", which does the pxe booting magic, etc.
<sparc> hmm ok, that's cool
<kirkland> sparc: but we're abandoning it in favor of getting cobbler working
<kirkland> sparc: we should have something alpha quality in ubuntu natty next week-ish
<sparc> i have PXE boot working now, i followed the automated install pdf directions
<sparc> my boss likes it when i can present pretty guis and things
<sparc> hehe
<kirkland> sparc: pointy hair?
<sparc> i'm fine with just running tftp and dhcpd
<sparc> tho
<sparc> kirkland: haha
<ideaman> Can anyone tell me a good PCI Wifi Card for that is linux compatible out of the box?
<sparc> he's technical, but his boss has the pointy hair
<ideaman> using Ubuntu server
<sparc> ideaman: the orinoco and the aironet have had support for 10 years or so
<RoyK> ideaman: there are several - most chipsets are supported
<ideaman> I was just checking Amazon and all the name brands said nothing about linux so I wasn't sure
<ideaman> thx
<sparc> yeah, they don't mention linux in the marketting :(
<ideaman> what about PCI/PCIE for desktops/servers, same ?
<ideaman> sparc: same story for a PCI slot on a desktop/server?
<sparc> yeah, PCI and PCI-e work fine in my servers
<ideaman> on either of those brands?
<g-hennux> hi!
<g-hennux> i need someone with pam+ldap expertise
<g-hennux> pam_groupdn, will  this have to be a posixgroup or groupofnames or what?
<g-hennux> because i have set pam_groupdn to some group and this seems to be just ignored, i.e. any user can login
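Nobody answers g-hennux in the log; for reference, with libpam-ldap the pam_groupdn entry is checked against the attribute named by pam_member_attribute, which is compared to the user's full DN. That implies a groupOfNames/groupOfUniqueNames-style group rather than a posixGroup's memberUid list. A hedged /etc/ldap.conf fragment (the group DN below is illustrative, not from the log):

```
# /etc/ldap.conf (libpam-ldap) -- illustrative names
pam_groupdn cn=loginusers,ou=groups,dc=example,dc=com
# attribute compared against the user's full DN; implies a
# groupOfNames/groupOfUniqueNames entry, not a plain posixGroup
pam_member_attribute uniqueMember
```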
<yann2> thanks for reopening #ubuntu-virt btw
#ubuntu-server 2011-01-07
<_Techie_> is there a group a user has to be in to allow them to listen on the network?
<Slyboots> Hmm.. Hello all
<Slyboots> Seems my server is.. slightly unwell
<Slyboots> "Gave up waiting for root device. " blah blah..
<Slyboots> ALERT! /dev/mapper/tank-root does not exist
<Slyboots> Whats odd is this is a RAID1 mirrored over 3 disks
<twb> Slyboots: nothing wrong with that
<RoyK> Slyboots: not playing with zfs, are you?
<Slyboots> no?
<Slyboots> its been running for months.. powered it down.. powered it back up
<Slyboots> AAHGHGHGHGHHGHGHGHGHGHG!!
<Slyboots> And so on
<JanC> Slyboots: no hardware failure?
<Slyboots> I thought a disk was dead but..
<Slyboots> SMART shows OK across the board
<uvirtbot> New bug: #699665 in openssh (main) "sshd crashed during a rsync" [Undecided,New] https://launchpad.net/bugs/699665
<JanC> Slyboots: is that a "fake" raid controller ?
<Slyboots> Linux softraid; MDADM
<JanC> with some other layer on top or below then?
<Slyboots> LVM
<twb> Slyboots: sorry, I mean that a three-way RAID1 is not odd
<Slyboots> The BIOS shows the SMART status of each disk.. comes up OK
<Slyboots> Im wondering if a kernel update might have hosed the modules or something
<JanC> Slyboots: what's the status of the software raid device ?
<Slyboots> No idea; trying to create a live bootable disk to go in and check
<Slyboots> right now as soon as it hits grub it bombs out witht "Gave up waiting for root device"
<JanC> heh?
<JanC> so it's grub that complains?
<Slyboots> I get a black screen with a flashing "_" under it
<Slyboots> Then after about 30 seconds
<Slyboots> "Gave up waiting for root device"
<Slyboots> ALERT! /dev/mapper/tank-root does not exist
<JanC> that's inside the initrd, so you should be able to see if the raid is assembled or not?
<Slyboots> Not sure.. basically after that it goes (initramfs) and ..
<Slyboots> Well nothing; tapping keys on the keyboard does nothing
<JanC> hm, that's not very useful indeed ☺
<JanC> try booting an older kernel?
<Slyboots> Dont have any.. least I dont think so
<Slyboots> I dont evne really get a grub prompt
<Slyboots> (Although i never clean out /boot.. so it should be there in theory
<JanC> you can enter the grub menu with Esc (grub1) or Shift (grub2)
<Slyboots> xMm..
<Slyboots> "Grub loading.."
<Slyboots> Then it just continues on
<JanC> just keep Esc or Shift (depending on grub version) pressed down during boot
<Slyboots> Tried that...
<twb> Does Caps Lock also work, as in extlinux?
<twb> That way at you can just push it and leave it down
<Slyboots> Its a wireless keyboard..
<Slyboots> Doesnt have indicator lights for caps lock
<twb> Slyboots: then it probably won't work at all
<twb> I doubt that grub has a bluetooth driver
<Slyboots> Its wifi through a usb interface
<twb> I guess that might work if grub has a USB driver
<JanC> I doubt it's WiFi  ;)
<Slyboots> OK; got grub using another keyboard :P
<JanC> but an USB keyboard might need a BIOS setting
<Slyboots> Its RF or something; but thats beside the point.. grabbed another keyboard and loaded up the last good kernel
<Slyboots> Well its doing something; ubuntu Usplash has come up
<twb> more likely plymouth
<JanC> twb: depends how old the server install is  ;)
<Slyboots> Okay; got fed up waiting for usplash to vanish and logged in via the cli
<twb> I guess if he sees "grub loading" than it isn't that stupid new "0 wait time" one
<Slyboots> Not sure how to continue from here though
<Slyboots> Should I.. fsck the disks or..?
<twb> Slyboots: what was the problem again?
<Slyboots> Well it was going "AGGH! Root not found"
<Slyboots> Loading an old kernel fixed that..
<Slyboots> But now I got run-parts: /etc/update-motd.d/90-updates-available exited with a return code 2
<Slyboots> Several times
<Slyboots> Mm.. brb. hold on
<Shinything_> Mm..
<Slyboots_> Right
<Slyboots_> Well back into the server.. not 100% if everything is OK though
<Slyboots_> First things first should be to remove that "bad" kernel from grub.lst right?
<Slyboots_> not sure where grub.lst is though
<twb> Slyboots_: /boot/grub
<twb> Slyboots_: just remove that kernel, even
<Slyboots_> Mmm.. using apt-get ?
<twb> ALthough if it were me, I'd probably try to work out WHY it wasn't working
<twb> Slyboots_: yes
<Slyboots_> Seems like its not loading LVM or something
<Slyboots_> And since its 2am in the morning I just want to fix it and go to bed
<Slyboots_> or play Dawn of war.. minions of Chaos to kill and whatnot
<Slyboots_> Mm..
<Slyboots_> Okay; there is one marked -virtual
<Slyboots_> Not sure why that is there
<twb> Slyboots_: I don't know what you're looking at.
<Slyboots_> Grubs list of kernels
<Slyboots_> I've removed one marked "virtual" see if that makes any difference
<twb> I believe the "virtual" flavoured kernels have many features removed because they are intended only for use as guest OSes in VMs
<Slyboots_> Aye; Thats what Im thinking
<Slyboots_> Im not sure what its on my server at all but I've removed it and updated grub; a reboot will tell
<kaje> I'm running Ubuntu 10.4 and installed BIND using this howto: https://help.ubuntu.com/community/BIND9ServerHowto
<kaje> I keep getting these messages in my log:     Jan  6 18:46:59 jupiter kernel: [1462769.470917] type=1503 audit(1294361219.337:72660):  operation="open" pid=13550 parent=1 profile="/usr/sbin/named" requested_mask="ac::" denied_mask="ac::" fsuid=105 ouid=105 name="/var/log/query.log"
<kaje> Any thoughts?
<kaje> My home machines all use that machine for DNS and when my wife goes to facebook, BIND is not resolving some of the servers that host facebook's css files and a few other things.
<kaje> One of the DNS names that aren't resolving is b.static.ak.fbcdn.net. When I do an nslookup on that address, I get some of those kernel messages in my logs. I'm hoping this is the issue.
<qman__> kaje, looks like apparmor is denying it access to that file
<kaje> yeah, I got it worked out in the bind channel. Thanks for the help though.
<qman__> ah, ok
<clayd> what command do i use to see how many cores a vps has available to it.  i am using ubuntu server 10.04
<gobbe> cat /proc/cpuinfo
<twb> Depending on the virtualization technology, that won't tell you how many your VPS is allowed to use
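The two answers side by side, as runnable commands: gobbe's /proc/cpuinfo count and `nproc`, which reflects what the current process may actually use; as twb warns, under container-style virtualization neither is guaranteed to show the VPS's real entitlement.

```shell
# Logical CPUs the kernel reports:
grep -c '^processor' /proc/cpuinfo
# CPUs available to this process (respects affinity/cgroup limits):
nproc
```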
<uvirtbot> New bug: #695985 in mysql-5.1 (main) "/etc/mysql/debian-start exposes debian-sys-maint users password to any users on the box via ps(1)" [Medium,Confirmed] https://launchpad.net/bugs/695985
<martinjh99> Is there a way to get byobu to save the tabs you have open over re-boots?  I have a shell, root shell and media server running and would like their tabs to be kept open over re-boot
<milligan> In logrotate.conf, is e.g. /var/log/myapp/* a valid pattern, when myapp contains subfolders that contain the logfiles?
<twb> martinjh99: screen cannot save state between reboots.
<twb> milligan: you could configure .screenrc to *launch* programs whenever it starts.
<martinjh99> i think you mean me - How would you do that then? Point me to some docs?
<twb> Sorry, yes
<twb> martinjh99: say you wanted to run bash and top and "w3m google.com".
<twb> martinjh99: you would put in three lines like "screen"; "screen top" and "screen w3m google.com"
<twb> martinjh99: in your .screenrc, I mean
<martinjh99> wouldn't that run 3 instances of screen though?
<twb> No
<twb> .screenrc takes screen commands, not sh commands.
<martinjh99> hmm ok
<twb> In any case, the sh command "screen foo" will create a new tab if it is run within screen (i.e. if $STY is set)
<twb> It is unfortunately an extremely confusing arrangement for newbies
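The arrangement twb dictates, written out as it would appear in the file (these are screen commands, not sh commands, which is why three `screen` lines do not start three instances):

```
# ~/.screenrc -- each "screen" command opens one window at startup,
# using the exact example commands from the conversation
screen
screen top
screen w3m google.com
```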
<martinjh99> certainly sounds like it  and why when i google "configuring screen" i get a load of windows pages ;)
<twb> Your best references are 1) the manpage/infopage; and 2) #screen channel on Freenode
<twb> You will have better luck (but still not good) in google by using `"GNU Screen"' rather than `screen'.
<lenios> using /bin/screen instead of screen might help too
<twb> Interesting idea
<martinjh99> ah ok thanks - just searching ubuntu forums to see if they have anything there
<twb> Pfft
<twb> web fora are just people too stupid to use usenet
<twb> Rather, too stupid to configure a newsreader
<JanC> or a mail client  ;)
<martinjh99> hehe well i found something I can use via Google... Thanks all
<twb> JanC: I suppose so, although I prefer newsreaders for reading mailing lists :P
<uvirtbot> New bug: #699737 in autofs5 (main) "automount[1275]: syntax error in nsswitch config near [ syntax error ]" [Undecided,New] https://launchpad.net/bugs/699737
<twister004> hi guys... how can i view my raid setup on an ubuntu server
<raphink> twister004, hardware or software raid?
<twister004> hardware
<raphink> that depends on the kind of raid you have then
<raphink> what is it?
<twister004> it's RAID1
<raphink> no I mean the brand
<raphink> the controller
<raphink> I'm used to array-info for smartarray (hp/compaq) for example
<twister004> 82801G(ICH7 Family) IDE Controller
<twister004> that's the adapter model
<raphink> hmmm, google tells me this is an audio controller
<twister004> it is?
<raphink> :S
<twister004> :D... my bad
<twb> twister004: lspci -nn | grep IDE
<raphink> or maybe there's two products with the same name
<twb> ICH7 is just a southbridge
<twb> Probably you've got an ICH7R or so
<twister004> sorry... it's the N10/ICH7 Family SATA IDE CTRLLER
<raphink> ok
<twb> IOW fakeraid
<twb> twister004: stick to md RAID
<twister004> ill keep that in mind.. right now, in the "Disk Utility"... i see two HDDs under the Adapter.. are these the two RAIDed HDDs?
<twb> Who cares?
<twister004> or will I be able to see only one of the RAIDed hdd?
<twister004> I want to know if there's a third hdd
<twb> twister004: why are you using a GUI
<twb> twister004: open the case, and count the drives, then
<twister004> :D
<twister004> im remotely located
<twb> Blergh
<twb> pastebin the contents of /proc/mdstat and /proc/partitions
<twister004> partitions?.. is there a command called partitions?
<twister004> i cant find it
<raphink> hrmm
<raphink> these are files...
<raphink> just cat them, or use pastebinit to do it faster ;-)
<twister004> oh.. here you go http://pastebin.com/jkR4HyJT
<twb> twister004: you have two disks set up with per-partition RAID1 arrays
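The two files twb asks twister004 to pastebin, as commands, with a guard since /proc/mdstat only exists once the md driver is loaded; the mdadm per-array query is commented out because the array name (/dev/md0) is assumed:

```shell
# Array status, if the md driver is loaded:
if [ -r /proc/mdstat ]; then cat /proc/mdstat; else echo "md driver not loaded"; fi
# Block devices and partitions the kernel sees (first few lines):
head -n 5 /proc/partitions
# Per-array detail, array name assumed:
# sudo mdadm --detail /dev/md0
```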
<twister004> sory.. it's software raid
<twister004> ok... but what about the third.. i remember it was in there
<twb> There's no third disk, as far as linux sees
<twb> There are three ARRAYS
<twister004> is there something like a "devfsadm" for linux?
<twb> I do not know what that is.
<twb> The tool to manage md arrays is mdadm.
<twister004> it's probing for devices(newly connected)
<twister004> under solaris
<twb> udev is infrastructure that responds to device events (including them being connected)
<twb> e.g. you can tell it "whenever the USB mass-storage device with the serial number XXX is connected, mount it and start backing up /srv to it"
<twb> But more generally it creates device files in /dev, and a bunch of desktop wankiness like mounting stuff in /media
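The "backup when disk XXX appears" rule twb describes would look roughly like this udev fragment (the serial "XXX" is his placeholder; the rules filename and script path are invented for illustration, and RUN+= programs should be short-lived, so a real backup script would detach itself):

```
# /etc/udev/rules.d/99-usb-backup.rules -- hedged sketch
ACTION=="add", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="XXX", RUN+="/usr/local/sbin/backup-srv.sh"
```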
<twister004> twb.. ok
<twister004> thanks
<twister004> looks like there is no 3rd disk
<twister004> weird
<twister004> i clearly remember there was a 3rd
<twb> Maybe it's not cabled properly
<twister004> twb.. yeah.. looks like that's the problem.. ill have to go onsite
<twister004> thanks for all your help and advice!
<twb> Well, I AM a genius
<RoyK> http://xkcd.org/844/ :)
<gobbe> win 17
<gobbe> sorry
<Frenk> Hey, I wanted to outsource my web-server so I created a virtual machine. Now I do not want to install Webmin, phpmyadmin again. Is there a software I can use to manage (add domain/add database) the virtual machine from the host?
<twb> gobbe: that's not out yet
<twb> Frenk: ssh
<incorrect> s
<Err404NotFound> how do i install http://pastebin.com/7u9AzMEh php extensions? i knew 2 so mentioned their packages, what about rest?
<Frenk> I want to otain a SSL certificate but I  read that if it is password protected I need to enter the password each time a service is restarted. I have a monitoring solution (monit) which restarts the services if something happens. Is it very insecure to have a SSL-cert without password or is it easier to configure monit with the SSL-password?
<pmatulis> Frenk: i don't think you can decrypt a certificate with a monitoring program
<pmatulis> Frenk: it is standard to not encrypt certificates on services that need to come up unattended
<Frenk> I mean the watchdog needs to start the service somehow ... even if ssl-cert is password protected... | Okay I try to set it up without password - but "The CSR key must have a length of 2048 bit" has nothing to do with wether its encrypted or not?
<pmatulis> Frenk: no
<patdk-wk> having a password on the cert is nice, if you think your box will be rooted
<patdk-wk> not having a password is fine, as long as your not rooted
<patdk-wk> then it can still be *ok* (depending on your definition and level of ok), if you revoke the cert, assuming you know you where rooted
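The unattended-restart setup pmatulis and patdk-wk discuss usually ends with an unencrypted private key kept root-readable only. A hedged sketch of generating the 2048-bit key and CSR Frenk's CA asks for (file names and the CN are invented; a temp dir stands in for /etc/ssl/private):

```shell
dir="$(mktemp -d)"                 # stand-in for a protected key directory
# 2048-bit RSA key, written without a passphrase:
openssl genrsa -out "$dir/server.key" 2048
# CSR from that key; -subj avoids the interactive prompts:
openssl req -new -key "$dir/server.key" -subj "/CN=example.com" -out "$dir/server.csr"
# Compensate for the missing passphrase with tight permissions:
chmod 600 "$dir/server.key"
ls "$dir"
```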
<Err404NotFound> how do i install these: http://pastebin.com/7u9AzMEh php extensions on ubuntu karmic? i have listed the two known ones, don't know packages for rest, tried apt-cache search name-here | grep php but no results
<Frenk> patdk-wk: pmatulis thx
<Error404NotFound> how do i install these: http://pastebin.com/7u9AzMEh php extensions on ubuntu karmic? i have listed the two known ones, don't know packages for rest, tried apt-cache search name-here | grep php but no results
<Frenk> I have postfix installed for sending e-mail. Does my reverse dns have to be mail.domain.com or can I just use domain.com?
<pmatulis> Frenk: the name that your MTA will expose to the internet should be both forward and reverse resolvable
<pmatulis> Frenk: forward is more important but some MTAs may refuse your mail if reverse is missing
<pmatulis> Frenk: but of course your actual domain name needs to be (at least forward) resolved as well
<Frenk>  Ill check it =)
<Error404NotFound> how do i install these: http://pastebin.com/7u9AzMEh php extensions on ubuntu karmic? i have listed the two known ones, don't know packages for rest, tried apt-cache search name-here | grep php but no results
<Frenk> And I have a strange thing happening pmatulis, I have cyrus and denyhosts, every time denyhosts bans any ip the permissions on /etc/hosts.deny are changed and cyrus can't read the file = refuses all connections. I didn't find any permission settings in denyhosts config.
<patdk-wk> Error404NotFound, they are installed by default in php-common (I think)
<Error404NotFound> patdk-wk: pcre is, not sure abour json, couldn't find its config in php.ini
<patdk-wk> heh, fail
<patdk-wk> check with phpinfo
<patdk-wk> there are no config options for json, so it won't be in php.ini
<Error404NotFound> patdk-wk: :P
<Error404NotFound> thanks :)
<Arcitens> Hi. I followed a broken tutorial online for setting up Drupal with a LAMP stack on Ubuntu and I think I made some bad changes to my /etc/apache2/httpd.conf file. I'm wondering how I can either restore it to the original settings or reinstall apache with the original .conf file
<zul> Daviey: can you have a look at 697753 its pretty simple
<pmatulis> Arcitens: use at your own peril, also, check the path to the deb: 'dpkg --force-confnew -i /var/cache/apt/archive/apache???.deb'
<pmatulis> 'archives'
<pmatulis> zul: so re nc fix, will users of previous releases be burned in any way?
<zul> pmatulis: as in?
<pmatulis> zul: well, as i understand it, change to nc was to enable simultaneous connections to a libvirt session, will continue to be like this?
<pmatulis> zul: w/o extra hoops?
<zul> pmatulis: the "-q" functionality was put back but with a warning message
<pmatulis> zul: i guess i don't grok the issue, i thought the 'q' thing was the change we put in
<pmatulis> zul: that caused the bug
<uvirtbot> New bug: #699845 in php5 (main) "php5 affected by http://bugs.php.net/53632" [Undecided,New] https://launchpad.net/bugs/699845
<zul> pmatulis: right it is...but its very tied into libvirt, users can use netcat-traditional if they need regular functionality
<pmatulis> zul: but what if they need both libvirt *and* traditional functionality?
<zul> pmatulis: good point lemme think about it
<mdeslaur> pmatulis: then they use nc.traditional
<pmatulis> mdeslaur: o_0
<mdeslaur> pmatulis: oh, zul updated the patch to the one debian just changed....so nc-openbsd now behaves like the upstream one
<mdeslaur> ie: behaviour is the same if no -q is given
<pmatulis> mdeslaur: ok, that makes sense then
<mdeslaur> but the default behaviour between nc-traditional and nc-openbsd is different, and that has nothing to do with the -q patch
<Arcitens> pmatulis: Sorry, I went afk. And I'm not sure what exactly you're suggesting I do there, sorry.
<mdeslaur> fedora uses nc-openbsd by default, and we do too now...so if anything, we'll be consistent
<pmatulis> Arcitens: it's a command
<Arcitens> pmatulis: and I should replace ??? with apache version?
<pmatulis> Arcitens: to the real path of the package that provides the conf file your're talking about
<Arcitens> pmatulis: ah, ok. thanks.
<soren> pmatulis: It wasn't to enable simultaneous connections to libvirt. It was to enable more than one *ever*. If you had connected to it with nc.traditional once, you could never, ever connect to it again.
<pmatulis> Arcitens: to be sure: 'dpkg -S /etc/apache2/httpd.conf'
<pmatulis> soren: ah
<mdeslaur> hi soren!
<Arcitens> pmatulis: says not found when I ran the second command. (Sorry I'm being a total noob here. I appreciate the help.)
<pmatulis> Arcitens: so the path you gave is not right
<Arcitens> pmatulis: it must be. I'm staring at the file in nautilus and I can open it with 'gedit /etc/apache2/httpd.conf'
<pmatulis> Arcitens: symlink?
<Arcitens> pmatulis: nope
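Why pmatulis's `dpkg -S` comes back "not found" here: on that era of Ubuntu, /etc/apache2/httpd.conf is created for the admin rather than shipped inside any package, so no .deb can restore it with --force-confnew. The ownership check itself, made safe for either outcome (the reinstall line is left commented out):

```shell
# Ask dpkg which package, if any, owns the file; guard for non-dpkg systems
if command -v dpkg >/dev/null; then
  pkg="$(dpkg -S /etc/apache2/httpd.conf 2>/dev/null | cut -d: -f1)"
fi
echo "${pkg:-no package owns this file}"
# if owned: sudo apt-get install --reinstall "$pkg"
```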
<soren> mdeslaur: dude.
<soren> mdeslaur: :)
<mdeslaur> :)
<Arcitens> pmatulis: I'm fine just starting over completely. is there a way I can uninstall the whole apache-mysql-php stack including the config files and then reinstall and work with a fresh slate?
<pmatulis> Arcitens: i'm not sure you can back out completely from a tasksel task
<Arcitens> pmatulis: that's unfortunate :(
<pmatulis> Arcitens: you'll need to google 'round.  maybe view what the task does and remove individual packages
<pmatulis> Arcitens: there's a file that explains the tasks
<pmatulis> Arcitens: man tasksel
<Arcitens> pmatulis: Hmm ok. what about just the apache part? Can I uninstall apache and purge the config files for that? I have a feeling it's apache I screwed up in.
<pmatulis> Arcitens: sure
<pmatulis> Arcitens: 'aptitude purge apache' should do it
<Arcitens> pmatulis: thanks very much for the help.
<soren> Tasks don't "do" anything.
<soren> They're just a set of packages.
<pmatulis> soren: possible to remove associated packages in one fell swoop?
<soren> I suppose "sudo apt-get --purge remove taskname^" should do it.
<soren> (Note the ^ at the end)
<pmatulis> right, ok
<AndyGraybeal> is it best to use the VM image or ubuntu server for vm?
<pmatulis> AndyGraybeal: say what?
<AndyGraybeal> aaah nevermind
<gobbe> AndyGraybeal: there is no more JEOS version available
<AndyGraybeal> f4  on install and it pops up for vm install!
<AndyGraybeal> i'm a bit behind the times.
<gobbe> aah, you mean that
<AndyGraybeal> i have ubuntu 10.04 server install and i hit f4 on what type of install i want to do .. and i see the virtual machine choice now, it didnt show up earlier becaues i was doing something wrong.!
<AndyGraybeal> thanks gobbe  and pmatulis
<gobbe> :)
<AndyGraybeal> this is the recommended way to start out with a VM correcT?
<pmatulis> AndyGraybeal: you want to create a KVM guest?
<AndyGraybeal> pmatulis: yes
<pmatulis> AndyGraybeal: use virt-manager is you are just beginning
<pmatulis> s/is/if
<AndyGraybeal> oh no no, i've made many images with virt-install but i have used the 'normal' install from ubuntu server disc
<AndyGraybeal> i just learned yesterday about the VM install!
<AndyGraybeal> i learned kvm about 2 years ago with 8.10
<AndyGraybeal> well learned enough to make it from point a to point b... throw a ringer in the mix and i'm confused.. but from point a to b and i'm fine.
<pmatulis> AndyGraybeal: i never used a VM install explicitly in the installer.  i assume it just makes a minimal install + the virtual kernel
<AndyGraybeal> i wouldn't know how to do that manually, so this is great :)
<pmatulis> AndyGraybeal: comparable to what vmbuilder does
<AndyGraybeal> i haven''t used vmbuilder since 8.10
<pmatulis> AndyGraybeal: so what's your question then?
<AndyGraybeal> i was asking about how to get to the virtual install; i hit f4 on the wrong screen, i eventually found it and answered my own question.
<Arcitens> pmatulis: Well, google wins in the end. All I had to do was add 'ServerName localhost:80' to 'httpd.conf' Now it seems to be working fine... Feel like a bit of a fool, but hey, learning experience, right? Thanks again for the help.
<AndyGraybeal> pmatulis: sorry for keeping you hanging.
<pmatulis> good, looks like everybody is happy now
<AndyGraybeal> :)
<uvirtbot> New bug: #699855 in autofs5 (main) "autofs.schema in wrong location" [Undecided,New] https://launchpad.net/bugs/699855
<hggdh> zul: I will ping mathiaz when he pops up re. uec-testing-scripts-devs (adding you)
<hggdh> (just noted you are pending there)
<zul> hggdh: thanks
<joe-mac> the preseed value for d-i for partman-md/device_remove_md doesn't actually remove md
<joe-mac> the install fails if md devices exist
<joe-mac> so i have to mdadm -S /dev/md* && mdadm --zero-superblock /dev/sd* before i start the installer
<joe-mac> and that can't be automated via early_command in a nice way, the mdadm udeb isn't loaded yet, and even if it was it can exit non-zero even if the intended operation completes successfully on some of the nodes that * expands to
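For reference, the pieces joe-mac is describing look roughly like this (a sketch assembled from the conversation; as he reports, the preseed value does not actually remove existing md devices on 10.04, and the mdadm cleanup cannot run from preseed/early_command because the mdadm udeb is not loaded at that point):

```text
# preseed value under discussion -- reportedly ineffective on 10.04
d-i partman-md/device_remove_md boolean true
# manual workaround run by hand before starting the installer:
#   mdadm -S /dev/md* && mdadm --zero-superblock /dev/sd*
```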
<hallyn> jdstrand: just to be sure - have all your bad kvm tests been using a snapshotted qcow2 guest?
<joe-mac> this is 10.04
<joe-mac> also it seems the partition priority and sizing doesn't behave as expected when doing lvm over raid
<jdstrand> hallyn: a snapshot was always involved, yes. in the 'upgrade one/install another' test (comment #12), only the upgraded one was snapshotted as the other was still being bootstrapped
<joe-mac> may have discovered how to fix the priority weirdness, apparently the order is dependent in this scenario
<joe-mac> maybe the device_remove_md is order-dependent? anyone using this preseed value with success?
<jdstrand> hallyn: also, I almost always was using a mix of i386 and amd64 installs, but I can't say with certainty that I always did (and therefore can't say it is 'ok' with just one or the other)
<yann2> hello! Could someone tell me what this means: Jan  7 15:14:32 leibniz kernel: [11819455.470672] type=1505 audit(1294413272.487:208):  operation="profile_load" pid=19195 name="libvirt-4e12e041-2ec2-587c-4655-8c51167c15cb"
<jdstrand> hallyn: and by 'installs', I mean 'guest installs'
<gobbe> yann2: apparmor is preventing
<jdstrand> yann2: libvirt uses apparmor to confine virtual machines
<jdstrand> gobbe: no, it isn't
<gobbe> ah, sorry
<yann2> I just stopped apparmor
<gobbe> well, apparmor yes, not preventing :)
<jdstrand> yann2, gobbe: that line is just telling you that the profile loaded
<gobbe> i didn't read carefully enough
<yann2> damn I stopped whole apparmor before :(
<jdstrand> yann2: no reason to stop apparmor for that line-- if apparmor is causing a problem, it will log a denial
<gobbe> and then you can tune apparmor
<gobbe> infact apparmor is working quite well
<yann2> k, is a dev server anyway
<jdstrand> profile_load and profile_remove are purely informational, and letting you know that everything is working properly
<gobbe> if you compare to selinux which might be pain in the ass
<yann2> Jan  7 15:14:32 leibniz kernel: [11819455.470672] type=1505 audit(1294413272.487:208):  operation="profile_load" pid=19195 name="libvirt-4e12e041-2ec2-587c-4655-8c51167c15cb"
<yann2> arg
<yann2> Jan  7 15:05:14 leibniz kernel: [11818898.104525] type=1503 audit(1294412714.157:188):  operation="open" pid=18580 parent=1603 profile="/usr/lib/libvirt/virt-aa-helper" requested_mask="r::" denied_mask="r::" fsuid=0 ouid=0 name="/var/lib/kvm/oasouth-itadmin/root.qcow2"  thats the one that got me concerned
<jdstrand> yann2: virt-aa-helper denials are not necessarily fatal
<jdstrand> yann2: look in /etc/apparmor.d/libvirt/libvirt-4e12e041-2ec2-587c-4655-8c51167c15cb.files
<jdstrand> yann2: if it has /var/lib/kvm/oasouth-itadmin/root.qcow2, you are ok
<jdstrand> yann2: virt-aa-helper is what generates the dynamic profile, tailored for your vm
<hallyn> jdstrand: btw i'm trying to use the security team docs examples to vm-clone etc - but when i vm-clone it starts the new machine but doesn't manage to connect to it over ssh to do the updates it wants.  Is this known (i.e. some script needs to add '.' to the hostname or something)?
<yann2> mmmh could it be that apparmor is making issues if I put my VM in tmpfs?
<jdstrand> yann2: it would show denials in the log
<joe-mac> i know selinux could
<joe-mac> i don't know much about AA but yea it would be in the audit logs
<jdstrand> yann2: /var/lib/kvm/oasouth-itadmin/root.qcow2 should be allowed though, because of this line in virt-aa-helper's profile:
<jdstrand>   /**.qcow{,2} r,
<jdstrand> yann2: is /var/lib/kvm/oasouth-itadmin/root.qcow2 a symlink?
<yann2> nope, thats where I put my vms
<yann2> the /var/lib/kvm/oasouth-itadmin/ is a tmpfs though
<jdstrand> hallyn: I haven't used vm-clone in ages
<yann2> when running the vm from disk instead of tmpfs it doesnt freeze anymore though :)
<jdstrand> yann2: what are the contents of /etc/apparmor.d/libvirt/libvirt-4e12e041-2ec2-587c-4655-8c51167c15cb*
<jdstrand> hallyn: vm-clone doesn't do snapshots
<hallyn> jdstrand: oh.  ok.  well, i'll keep trying, and will try a i386 one
<jdstrand> hallyn: (which is why I don't use it anymore)
<hallyn> what do you use then?
<hallyn> just vm-start -s?
<jdstrand> hallyn: that page has a section down below for using snapshots
<hallyn> kthx
<jdstrand> hallyn: you looked at 'Cloned virtual machines'. I use 'Snapshotted virtual machines'
<yann2> jdstrand, http://pastealacon.com/26508
<jdstrand> hallyn: someone from the team may use vm-clone-- perhaps kees or mdeslaur, I'm not sure. I used to, but don't any more
<mdeslaur> hallyn: I'm trying to debug that exact problem as we speak :)
<mdeslaur> I tried to use vm-clone this morning, and it's not working
<jdstrand> mdeslaur: does vm-clone use nc?
<mdeslaur> jdstrand: I looked at that, but that's not the issue
<mdeslaur> hallyn: you're on natty, right?
<jdstrand> yann2: from an apparmor perspective, it looks ok. I suggest looking in /var/log/libvirt/
<mdeslaur> jdstrand: it's not mounting the images successfully I think...anyway, I'm still poking at it
<jdstrand> yann2: if you want, you can add:
<yann2> thats where I started :)
<jdstrand>   /var/lib/kvm/** r,
<yann2> jdstrand, do you think that having /var/lib/kvm/oasouth-itadmin mounted as tmpfs changes anything for apparmor, versus having it not?
<jdstrand> to /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper and reload with 'sudo apparmor_parser -r /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper'
<yann2> for some reasons it doesnt work when mounted as tmpfs. It does work when not, but it's dog slow :(
<jdstrand> yann2: but that won't fix your issue-- you can see the dynamic profile is correctly generated
<jdstrand> yann2: having as tmpfs should make no difference
<yann2> jdstrand, my real issue is that its a windows vm - and it freezes after a few seconds now
<yann2> works fine without tmpfs, but am installing service pack on windows
<jdstrand> yann2: what version of ubuntu is the host?
<yann2> 10.4
<jdstrand> yann2: so, if you really think it is apparmor, you can sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd, then stop and start libvirt
<yann2> I got no idea if its apparmor, its the only indication I could find in the logs :)
<jdstrand> yann2: do you have enough ram? iirc a tmpfs can only take 50% of your ram. so that is half your ram for the disk, and then your vm still needs ram for itself
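jdstrand's 50% figure is the tmpfs default; it can be checked or raised explicitly with the size= mount option (the fstab entry below is illustrative, using the path from this conversation, and the guest still needs its own RAM on top):

```text
# tmpfs defaults to size=50% of physical RAM; an explicit size overrides it
# /etc/fstab (illustrative entry)
tmpfs  /var/lib/kvm/oasouth-itadmin  tmpfs  size=6g  0  0
# or live:  mount -o remount,size=6g /var/lib/kvm/oasouth-itadmin
```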
<yann2> oh that would explain yes
<yann2> host has 8GB and vm is 3.9GB :) host still had some ram left though
<yann2> I ll try again on a host with more ram - thanks a lot for the info, thats probably it
<jdstrand> sure
<hallyn> jdstrand: btw, as i was reading the securityteam/testenvironment page yesterday I *was* wondering why i'd want to first clone, then create a runtime snapshot :).  oops
<jdstrand> heh
<jdstrand> hallyn: feel free to clarify the page if it is misleading
<hallyn> jdstrand: i suspect i just read it too fast
<macno> I'm running vmbuilder to create a VM with a LV as disk but parted gives an error http://paste.ubuntu.com/551525/
<yann2> jdstrand, btw, it would be nice if qemu would throw out a small warning to syslog in case it fails to grow a growable qcow2 image - not sure where i should report that to
<jdstrand> yann2: probably against upstream qemu-kvm
<mdeslaur> hallyn: bzr update the ubuntu-qa-tools tree
<mdeslaur> hallyn: I fixed the problem, but now have hit a second problem
<mdeslaur> hallyn: seems unmounting a qcow image now hangs with natty
<hallyn> mdeslaur: zounds
<hallyn> thanks :)
<mdeslaur> hallyn: if you figure out why qemu-nbd is hanging, let me know
<hallyn> ok
<FunnyLookinHat> Ok you master admins - question for ya - I currently have sendmail installed and running on a box - it
<FunnyLookinHat> it's all working fine, but I want to start using an external service.
<FunnyLookinHat> Rather than rewrite all of my scripts, is there a way to create a smarthost for a single user with sendmail ?
<mrroth> how do I convert my ubuntuserver in to a nas
<mrroth> is there a how-to
<macno> FunnyLookinHat, do you mean forward to another server all the mails directed to a specific user?
<FunnyLookinHat> Yeah - and to do so with a different username/password.
<FunnyLookinHat> So take user1 at localhost smtp - and forward to user2/pass2 at external.smtpserver.com
<macno> add an alias
<FunnyLookinHat> That's all ?  Wow - easy.
<macno> that's all. remember to run newaliases after editing /etc/aliases
<FunnyLookinHat> Hmm wait.
<FunnyLookinHat> No that will just forward emails.
<FunnyLookinHat> I need to forward the SMTP request.
<macno> why?
<b0gatyr> mrroth: you might want to look into FreeNAS as well
<mrroth> yea
<mrroth> I am
<mrroth> I got freenas on my usb stick
<mrroth> but
<mrroth> it's saying 'starting devd'
<FunnyLookinHat> macno: Because I'm rolling our services into sendgrid - to improve deliverability, etc. - and rather than change every hard-coded setting for the SMTP stuff in our php scripts, I'd prefer to make just one change so I can roll it back easily if necessary :)
<b0gatyr> mrroth: boot it of a VM might be better
<FunnyLookinHat> http://c0001374.cdn1.cloudfiles.rackspacecloud.com/dcerb1a17.jpg
<FunnyLookinHat> Woops - sorry
<FunnyLookinHat> Ignore that please :)
<mrroth> so install ubuntu server
<mrroth> then install a vm solution for ubuntu server
<b0gatyr> nice pic, now my wallpaper ;)
<mrroth> hmm
<air^> :D
<FunnyLookinHat> b0gatyr: glad you liked it - source: pegshot.com/p/dcerb1a17/
<zul> php is awesome!
<FunnyLookinHat> zul: Yes.
<FunnyLookinHat> :)
<FunnyLookinHat> Question of the day - is there a way to run a find/replace on file contents recursively ?
<lau> how can i handle debconf "dialog issue TERM not set using Teletype instead" issue when aptitude -y safe-upgrade fia fabfile.py ?
<FunnyLookinHat> Heh - find + sed
<DrPoO> Hi, Im getting a "System information disabled due to load higher than 1" message upon reboot. I have no idea what causes this. Any ideas?
<shauno> FunnyLookinHat: it's going to be something along those lines, yeah.  I don't know anything that does it out of the box.  but plenty of things that could be duct-taped together to do it
<FunnyLookinHat> shauno: I'm trying to do something like this...
<FunnyLookinHat> find ./ -type f -exec sed -i 's/"$params['host'] = 'localhost';"/"$params['host'] = 'smtp.sendgrid.com';"/' {} \;
<FunnyLookinHat> But it won't find my switch and replace statements it seems
<shauno> that does look fun.  you're going to need to escape a lot of that.
<FunnyLookinHat> wonderful.
<FunnyLookinHat> I thought the " " surrounding would remove the need to escape ?
<shauno> $ echo one two three | sed 's/"one"/"1"/'
<shauno> one two three
<shauno> litmus test doesn't look too hopeful
<RoyK> I don't think sed interprets "
<FunnyLookinHat> blarg!
<zeknox> does ubuntu have a lighter version of server?
<RoyK> inside '', the shell won't interpret anything, so " is sent to sed, which sees it as a character without any special meaning
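RoyK's point can be checked directly. A minimal sketch (file path and hostname are illustrative): double quotes let the single quotes through to sed untouched, while backslashes stop the shell from expanding `$params` and escape the BRE bracket expression.

```shell
# Create a throwaway file mimicking the PHP settings line
printf "%s\n" "\$params['host'] = 'localhost';" > /tmp/settings.php
# Double quotes pass the single quotes to sed as-is; \$ prevents shell
# expansion and \[ escapes the bracket expression in the sed pattern
sed -i "s/\$params\['host'\] = 'localhost';/\$params['host'] = 'smtp.example.com';/" /tmp/settings.php
cat /tmp/settings.php   # -> $params['host'] = 'smtp.example.com';
```

Single quotes also work by splicing in `'\''` for each literal quote, but the double-quoted form is easier to read.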
<DrPoO> does anybody know what the  message "System information disabled due to load higher than 1"  means ? It appears upon login
<RoyK> zeknox: in the grub menu at the start of the installation, you can choose a miniature installation
<zeknox> DrPoO: your server has too high a load, so it doesn't have extra cpu cycles to handle the 'system info' on login
<zeknox> RoyK: thanks!  that might be just what we need
<RoyK> zeknox: hah ... it says that on my 24 core machines too, if they are running at load 1.1
<zeknox> RoyK: haha....> 1.0 isn't even that large of a load IMO, its a semi decent load but not huge
<RoyK> having that limit set at load 1 is quite low imho
<RoyK> most systems today have 2 or 4 or more cpus
<zeknox> RoyK: I concur
<RoyK> roy@tor:~$ uptime  19:23:10 up 55 days, 22:48,  3 users,  load average: 25.08, 25.06, 25.03
<DrPoO> zeknox, any suggestions as to how to find what is causing this problem?
<DrPoO> zeknox, the machine seems to work fine, except for that message....
<zeknox> DrPoO: run top, what is eating cpu cycles?
<RoyK> in /etc/update-motd.d/50-landscape-sysinfo, remove the check
<DrPoO> zeknox, right now, nothing.... I guess it peaks for some reason when it reboots
<RoyK> DrPoO: see that file
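A hedged reconstruction of the kind of guard that file performs (a sketch, not the verbatim Ubuntu script): the 1-minute load average is compared against a fixed threshold and the report is suppressed above it.

```shell
# Sketch of the load guard in /etc/update-motd.d/50-landscape-sysinfo
# (reconstruction under assumptions, not the shipped script)
sysinfo_allowed() {
    load="$1"        # 1-minute load average, first field of /proc/loadavg
    threshold="1.0"  # the hardcoded limit the channel finds too low
    # awk does the floating-point comparison; exit 0 means "show sysinfo"
    awk -v l="$load" -v t="$threshold" 'BEGIN { exit (l > t) }'
}
sysinfo_allowed "$(cut -d' ' -f1 /proc/loadavg)" \
    || echo " System information disabled due to load higher than 1"
```

Removing or raising the threshold in that file, as RoyK suggests, makes the message go away on multi-core boxes.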
<FunnyLookinHat> Any idea why this won't run?  It expects more input... http://pastebin.com/zv1Fuk2v
<RoyK> FunnyLookinHat: you're running that search and replace on all files in a dir and its subsequent subdirs?
<FunnyLookinHat> Yup :)
<FunnyLookinHat> Long story.
 * RoyK doesn't want to hear it
<FunnyLookinHat> Any idea how to fix that statement to work?  I think I'm escaping one thing too many...
<RoyK> FunnyLookinHat: how many files?
<FunnyLookinHat> ~ 50-75
<FunnyLookinHat> And I have to make 6 other changes similar to that one.
<RoyK> FunnyLookinHat: consider setting the settings in one file and then just include that
<FunnyLookinHat> Right - well I will.. but first I have to replace the hard-coded setting with a global variable first...
<FunnyLookinHat> and to do that - I have to figure out sed first :)
<FunnyLookinHat> So realistically my statement will be to switch the $params['host'] = localhost; with $params['host'] = $smtp_host;
<RoyK> or use perl :P
<FunnyLookinHat> And define $smtp_host in my include file.
<FunnyLookinHat> >_<
<RoyK> yes
<FunnyLookinHat> RoyK: you're saying use PERL to run the find/replace instead of bash ?
<RoyK> perl regex is a bit simpler or at least far more efficient than sed
<RoyK> that is
<RoyK> it's not simpler
<RoyK> but it's way better
<FunnyLookinHat> Then it's not solving my problem. :)
<FunnyLookinHat> Because I can't even figure out my escape problem ...
<FunnyLookinHat> :)
<RoyK> the escape problem is mainly because you're running it inline from the command line
<RoyK> which complicates things a bit
<RoyK> upload one of the files, please
<FunnyLookinHat> http://php.pastebin.com/Gk4b81TE
<FunnyLookinHat> That's a test file
<RoyK> http://pastebin.com/bUkHECiM
<RoyK> save that script, run it as ./fixme.pl file1 file2
<RoyK> or find . -type f -exec ./fixme {} \;
<FunnyLookinHat> Ok thanks
<FunnyLookinHat> !
<RoyK> or perhaps
<RoyK> http://php.pastebin.com/bS3cdZH4
<RoyK> FunnyLookinHat: mind, code not tested :P
<FunnyLookinHat> Right
<FunnyLookinHat> Definitely going to test, etc.
<RoyK> the open statement for NEWF is wrong, btw
<RoyK> http://pastebin.com/9hNErGHE
<lxcnovice> hi can anyone give me an xorg.conf for a headless server - to use with vnc?
<uvirtbot> New bug: #699967 in munin "Empty list of plugins/services with hostname containing uppercase letters" [Medium,Triaged] https://launchpad.net/bugs/699967
<yann2> I wonder if KVM vms support wakeonlan :)
<RoyK> yann2: ??
<yann2> RoyK, ?
<RoyK> yann2: just wonder wtf you would use wake-on-lan with a vm :)
<Adog> hi is there any network admins in here that could help me out with a RAS VPN Server
<yann2> RoyK, could be useful in my case :)
<yann2> plus just interested if it works at all :) I guess I'll give it a try ;)
<Adog> are you talking to me or someone else? lol
<ventz> I have a system configured to auth against ldap. Login/auth works, passwd works, sudo works (after installing nscd), but chsh does not
<ventz> wondering if anyone else has seen anything like this. I keep getting 'Cannot change ID to root.' when I try 'chsh -s /bin/$someshell $username'
<RoyK> ventz: first hit on google http://moduli.net/sysadmin/sarge-ldap-auth-howto.html
<ventz> RoyK: saw that, but everything in ldap looks correct
<ventz> interesting, so i think libpam-ldap is not installed
<ventz> hm, it is installed correctly
<ventz> RoyK: so I think a system update/upgrade somewhere along the lines replaced the sym links the previous person setup -- I used the divert
<ventz> now it's at least using the correct chsh
<Arcitens> when using the following: mysql> grant usage on *.* to user@localhost identified by 'password' should the "password" include single quotes in the terminal or just type it as password, no quotes?
<Arcitens> I've been looking at a few different tutorials that have confused me on the matter
<ventz> Arcitens: I always use single '
<ventz> Arcitens: create database a;
<ventz> CREATE USER 'a'@'localhost' IDENTIFIED BY 'password';
<ventz> GRANT ALL PRIVILEGES ON a.* TO 'a'@'localhost';
<ventz> FLUSH PRIVILEGES;
<ventz> (for example)
<Arcitens> ventz: so does that mean that my actual password will have single quotes. or are the single quotes part of the syntax?
<ventz> it will not have single quotes
<Arcitens> great. thanks.
<ventz> it's just to specify the string
<ventz> np
<RoyK> or just 'create user something@somewhere.org identified by 'password'
<RoyK> erm
<RoyK> grant all on something.* to someone@somewhere.org identified by 'asdf';
<RoyK> that'll create the user if it doesn't exist
<RoyK> the extra apostrophes can be left out
<Arcitens> oh. so the apostrophes are irrelevant either way and I was working myself into a tiff over nothing? :p
<RoyK> mysql is quite sloppy when it comes to syntax - quotes or no quotes, it just guesses
<Arcitens> heh, I see
<RoyK> but you need quotes around strings
<RoyK> such as you password
<RoyK> not the username or host, though
<Arcitens> that seems...consistent
<RoyK> the extra quotes around user/host was _added_ by mysql developers
<RoyK> though not strictly
<RoyK> the original SQL syntax doesn't require that
<RoyK> SQL93 IIRC
<Arcitens> ok
<RoyK> also, keep in mind that if you want to build a serious database, perhaps postgresql might be a better choice
<RoyK> but again, mysql is neat for small stuff
<Arcitens> yeah. i'm just getting started with all this business. so i think i'll stick to what most people say is best for beginners
<RoyK> postgresql has a steeper learning curve, but it's way cooler with its object model and caching paradigm
<RoyK> the sql syntax is, well, just sql
<Arcitens> Well, maybe (hopefully) I'll get there eventually. But I'm struggling along enough with this as it is. So for now I'll settle for the less steep learning curve.
<RoyK> not too much of a change whether you use oracle or mysql or sybase or mssql or postgresql or even sqlite
<Arcitens> i see
<RoyK> on complex joins and stored procedures and again, object models, that's where you start to see the differences
<RoyK> your average SELECT * FROM girls WHERE name != 'mom' AND age < mine; will probably run on all platforms
<Arcitens> What could be causing me to get a response: "Failed to connect to your database server. The server reports the following message: SQLSTATE[42000] [1049] Unknown database 'drupal7db'." after I just ran 'mysql> create database drupal7db' and did all the grant privileges etc. ?
<ventz> RoyK: got the chsh to work
<RoyK> ventz: cool
<RoyK> Arcitens: connecting from localhost or another box?
<ventz> there were partially two problems. One was someone symlinking it, and an update wiped the symlinks, the other was some modifications of the perl script and nscd was/still is a problem (caching)
<RoyK> ventz: file a bug :)
<Arcitens> Royk: no. trying to set up LAMP for Drupal development on my own desktop.
<RoyK> Arcitens: then your average grant all on dbname.* to someuser@localhost identified by 'somepass'; should work well
<ventz> I am thinking of adding an /etc/init.d/nscd restart at the bottom of that script and allowing that script to be run
<ventz> RoyK: i saw what happened to the 'sudo w/ ldap(s) enabled' bug -- still out there
<Arcitens> Royk: But it thinks the db doesn't exist for some reason?
<RoyK> Arcitens: pastebin list databases;
<Arcitens> Royk: sorry, where should I enter 'list databases'?
<RoyK> Arcitens: in the mysql console
<RoyK> just run mysql as root
<RoyK> or mysql -p if you've set a password
<Arcitens> yeah i'm in there
<RoyK> ok, list databases
<Arcitens> i get no response with 'list databases' and i get a syntax error with 'list databases;'
<RoyK> check if the database exists
<RoyK> erm
<RoyK> show
<RoyK> not list
<RoyK> my fault
<Arcitens> ah, no problem.
<RoyK> then show grants should list the grants
<Arcitens> hmm. i'm still actually getting no response on 'show databases'
<RoyK> add a ;
<Arcitens> i did
<RoyK> it should be like this http://pastebin.com/D0G74xA2
<Arcitens> ok. logged out of mysql and back in and it worked. none of the dbs I *thought* I created are there. So I did something wrong with the create db commands?
<RoyK> possibly
<Arcitens> I see information_schema, mysql, phpmyadmin
<RoyK> ok
<RoyK> type 'create database whatsitsname;'
<RoyK> without the quotes
<Arcitens> done
<RoyK> phpmyadmin is worthless if you want to learn :)
<RoyK> then
<Arcitens> heh. ok.
<RoyK> use whatsitsname;
<RoyK> then grant all on whatsitsname.* to someuser@somehost identified by 'somepass';
<RoyK> it's not really necessary to 'use' the database before the grant, but that makes your selects local to that db
<RoyK> or any sql query, really
<Arcitens> oy. I didn't put the closing ; on any of the commands I was running. Didn't realize that was essential. Hopefully this will work well now. show databases; shows the db I just created now.
<Arcitens> let's see if I can figure out the rest of this now.
<RoyK> if you forget the ;
<RoyK> just follow that after pressing enter
<RoyK> sql isn't line-based
<Arcitens> oh, interesting
<RoyK> some commands aren't sql, like 'use', so they don't need the semicolon
<Arcitens> awesome. I think my databases are working now. Thanks so much for your help.
<RoyK> ;)
<Arcitens> anything I can do to thank you? I don't know if people write other people positive feedback or anything...
<RoyK> just don't use phpmyadmin or some silly webapp if you want to learn
<RoyK> nah - just stay and help others
<RoyK> that's the best thanks you can give
<Slyboots> Hmm
<Slyboots> Is there anthing like htop for network usage?
<RoyK> iptraf?
<Slyboots> Neat
<RoyK> yeah
<Slyboots> Thanks.. Dont suppose you know if there is a better version of iotop? ;)
<RoyK> 10+YO app
<RoyK> not really
<RoyK> measuring i/o on linux is a bitch
 * Slyboots nods..
<Slyboots> Sometimes I can hear the disk spinning and rnning when they should be asleep
<RoyK> iirc there's a dtrace replacement for linux, but i don't remember its name
<Slyboots> Actually what is the linux power management called?
<Slyboots> ampd ?
<Slyboots> apmd..
<RoyK> apmd is quite old
<Slyboots> I'm a little worried about applying power management to the disks (They are greens and part of a RAID array)
<RoyK> iirc systrap is the one for linux
<RoyK> pretty advanced
<RoyK> no, perhaps I'm wrong
<Slyboots> I've tried dtrace before..bewildering
<RoyK> system tap
<RoyK> sudo su -
<RoyK> ops
<RoyK> systemtap can give you a bit of info, perhaps close to dtrace
<Slyboots> Mmm
<RoyK> Arcitens: IRC is still web 0.0, so no chance to add comments :)
<Arcitens> Royk :p sure, but perhaps you want some props on your ubuntu wiki page or something. i dunno. ;)
<RoyK> nah
<Slyboots> :P
<RoyK> no problem
<RoyK> but thanks
<Arcitens> hey, thank you. :)
<RoyK> Arcitens: IRC work by means of people wanting help and wanting to help, and, of course, a few trolls come by every now and then :P
<Arcitens> Royk: fair enough. Just new to a lot of this.
<RoyK> Arcitens: heh - welcome to the Old World
<Arcitens> Royk: ha. Happy to be here.
<eightphantomz> hi guys... im very much new in ubuntu and linux/unix as a whole... i decided to use ubuntu for my NAS project. this NAS will be my ftp/torrent machine. my question is, do i need all LAMP?
<RoyK> nope
<RoyK> eightphantomz: also, how much data will you have on this?
<eightphantomz> ok... i've googled and got mixed up with all the ubuntu server thingy...
<RoyK> eightphantomz: if it's terabytes, you might want to consider using zfs instead of the standard raid systems
<eightphantomz> yes tb
<RoyK> eightphantomz: it won't hurt the server if those lamp processes are running
<eightphantomz> oh ok... might aswell play around with LAMP for study purpose i guess..
<RoyK> the problem with big storage systems is that modern drives have the same fault rate per sector as the old 1GB drives had
<RoyK> and with a truckload of terabytes, you get silent errors, not detected by the drive, and thus not by the OS
<RoyK> so you want a filesystem with data checksumming
<eightphantomz> i need to read up zfs
<Slyboots> I thought ZFS didnt work with Ubuntu
<eightphantomz> lol
<RoyK> eightphantomz: also, if this is just a storage machine for nfs or smb/cifs, I'd recommend something like openindiana
<RoyK> eightphantomz: http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
<eightphantomz> yes it will also be my storage machine
<RoyK> Slyboots: zfs-fuse works well
<RoyK> Slyboots: a bit low on the write speed, but good otherwise
* Slyboots is just using ext4 in a RAID5 configuration with monthly backups
<eightphantomz> RoyK: thanks for the link
<Slyboots> 3.5tb
 * RoyK pats his 100TB boxes
<RoyK> remote-pat-by-ssh :P
<hallyn> mdeslaur: lol, i don't trust changes to qemu-nbd with body:
<hallyn>     Remove NULL checks for bdrv_new return value
<hallyn>     
<hallyn>     It's an indirect call to qemu_malloc, which never returns an error.
<eightphantomz> basically my plan is to do automation ftp process to download media files to the machine...
<eightphantomz> and stores it
<Slyboots> :D
<RoyK> eightphantomz: you can do that from any unix-like system
<Slyboots> I was going to just install FreeNAS on it.. but with ubuntu it does *so* much more
<Slyboots> Sabnnzbd/Sickbeard/Couchpotato/ssh/squid/irssi/dnsmasq..
<RoyK> Slyboots: that's why I use VMs with ubuntu and openindiana for the storage :P
<eightphantomz> Slyboots: I was thinking on trying FreeNAS as well... but I think I'm more comfortable with Ubuntu
<RoyK> eightphantomz: the bitch becomes real when your data, which you thought to be safe, corrupts because of so-called 'silent errors'
<eightphantomz> RoyK: And the bitch is? LOL.
<mdeslaur> hallyn: ouch!
<FunnyLookinHat> How would I limit this command to only *.php files ?      find ./ -type f -exec sed -i "s/\$params\['host'\] = 'localhost'/\$params\['host'\] = \$GLOBALS\[smtp_global_host\]/" {} \;
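For the record, the restriction being asked about is find's -name test; a sketch run in a throwaway directory (the GLOBALS key quoting here is an assumption, adjust to the real codebase):

```shell
# Work in a scratch directory so nothing real is touched
mkdir -p /tmp/phpfix && cd /tmp/phpfix
printf "%s\n" "\$params['host'] = 'localhost';" > app.php
printf "%s\n" "\$params['host'] = 'localhost';" > notes.txt
# -name '*.php' limits the -exec to PHP files; the pattern is quoted so the
# shell doesn't glob it before find sees it
find . -type f -name '*.php' \
    -exec sed -i "s/\$params\['host'\] = 'localhost'/\$params['host'] = \$GLOBALS['smtp_global_host']/" {} \;
# app.php is rewritten, notes.txt is left untouched
```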
<eightphantomz> RoyK: Do u have any recommendation? Not very old Atom pc. Download and storage purposes.
<RoyK> eightphantomz: for ubuntu or openindiana?
<eightphantomz> RoyK: Ubuntu. I'm checking on OpenIndiana as we speak.
<RoyK> solaris-based OSes are a bit picky, they can't address drives > 1TB on 32bit
<RoyK> for ubuntu, most things will work
<gobbe> i would go with solaris&zfs only with huge storages
<RoyK> huge being what?
<RoyK> gobbe: I've seen silent errors ruin data in as low as 2-3TB
<hallyn> mdeslaur: but syslog shows nbd is stuck in nbd_ioctl waiting on a mutex
<gobbe> RoyK: well 2-3TB is huge storage
<RoyK> lol
<gobbe> in home environment
<RoyK> not anymore
<RoyK> I'd say over 2ish TB, you want a safe filesystem, since backing up the stuff is hard
<RoyK> and then zfs is really the only one
<gobbe> yep
<gobbe> and zfs supports several nice features
<gobbe> like snapshots, deduplication, compression etc
<RoyK> forget about dedup
<RoyK> it sucks hard
<RoyK> I've spent my days testing zfs dedup
<RoyK> it's not good
<gobbe> i have one customer running sun's openstorage and deduplication has worked very well there
<gobbe> but they are mostly using it for vmware
<eric_hill> Out of morbid curiosity, why does zfs dedup suck?
<RoyK> small server, 12TB net storage on raidz2, 140GB L2ARC and some for the SLOG
<RoyK> write speed to the zpool was horrible after 2-3TB was stored there
<RoyK> gobbe: also, removing a deduped dataset can make the server hang for some days :P
<gobbe> 12TB created with what kind of disks?
<RoyK> gobbe: it's a good reason dedup didn't go into Solaris 10 update 9
<RoyK> gobbe: WD Black
<gobbe> sata?
<RoyK> 2TB drives
<RoyK> yeah
<gobbe> well, 2TB sata is slow like a cow
<RoyK> and they work very well
<RoyK> we have 160 of those
<RoyK> not at all
<gobbe> if you need performance you go with SAS
<RoyK> linear speed about 200MB/s
<gobbe> and that's where dedup works ok
<RoyK> but then, 7k2 drives sucks at seek speeds
<RoyK> SAS/SATA - that's just interfaces
<RoyK> doesn't matter
<gobbe> no it's not
<RoyK> it certainly is
<RoyK> 3Gbps SATA is about the same as 3Gbps SAS
<eightphantomz> Ok enough data for today. Thanks guys. Cya
<RoyK> SAS has better TCQ, right, and SATA has only NCQ, but still, same shit
<gobbe> jep
<RoyK> bandwidth is the same
<gobbe> but disk speed isn't
<RoyK> and a single drive can't sustain 3Gbps anyway
<gobbe> enterprise-sas disks run at 15k
<RoyK> that's the spin time
<RoyK> not related to the interface
<gobbe> well, you cannot find 15k sata disks
<gobbe> that are enterprise-ready
<RoyK> there are 10k sata drives
<RoyK> same drive, different interface
<gobbe> 10k to 15k is still huge step
<RoyK> seektime-wise, yes, linear transfer speed, no
<RoyK> a 2TB drive has far higher density
<gobbe> ofcourse
<RoyK> so linear speed is about the same
<RoyK> that's why you use l2arc/slog for zfs
<hallyn> mdeslaur: I'm tempted to blame the BKL removal patches in drivers/block/nbd.c :(
<gobbe> but in the end, you have to write it to disk, and the bigger the disk, the slower it is :)
<RoyK> gobbe: nope, the bigger, the faster, really, because of higher density
<gobbe> RoyK: it's just a cache, it just saves you sometime
<gobbe> if you shortstroke yes
<RoyK> gobbe: please, I don't mean to contradict you, but I've been working with storage systems for 10+ years and I know very well where the bottleneck is
<gobbe> but if you take 500GB enterprise-sas and compare it to 2TB sata disk
<gobbe> you see huge performance gain
<gobbe> RoyK: me too
<gobbe> :)
<gobbe> with huge enterprises
<gobbe> like ~50 betabytes of storage
<RoAkSoAx> zul: ping?
<eric_hill> What's a betabyte? :)
<RoyK> sure, but for the price of a 500GB enterprise drive, you can get 3-4 2TB drives, and if you compare the speed of those 3-4 drives with the one 500GB drive, well.....
<gobbe> typo
<gobbe> petabyte
<RoyK> eric_hill: :D
<gobbe> it's slow to type with n900 :-)
<eric_hill> I think a bettabyte is mo 'betta than just a plain byte.
 * RoyK hands gobbe a floppybyte of pr0n
<gobbe> RoyK: yes, like i told that if you do short-stroking :)
<RoyK> I did short-stroking back in 2001
<RoyK> for video streaming
<gobbe> it's still used with modern storages
<RoyK> on el-cheapo 120GB drives
<gobbe> like IBM SVC supports it :)
<RoyK> doesn't surprise me :)
<gobbe> and openstorage in fact
<gobbe> but the key is storage tiering :-)
<gobbe> and automation for that
<RoyK> so with your multib^Hpetabyte storage, have you used zfs with any of them?
<gobbe> of course, we sell oracle/sun ;)
<gobbe> i like zfs
<RoyK> ah
<gobbe> don't get me wrong :)
<zul> RoAkSoAx: yep
 * RoyK ended up on openindiana to get the fuck away from oracle
<RoAkSoAx> zul: could you please take a look at bug
<RoAkSoAx> bug #687986
<uvirtbot> Launchpad bug 687986 in openhpi "[FTBFS] package 'openhpi' (2.14.1-1) failed to build on natty" [Low,Fix committed] https://launchpad.net/bugs/687986
<RoAkSoAx> and sponsor it :)
<zul> RoAkSoAx: sure when i get back
<RoyK> gobbe: have you looked into btrfs?
<RoAkSoAx> zul: awesome, thanks ;)
<gobbe> RoyK: not quite much
<gobbe> RoyK: i thought that oracle killed it :)
<RoyK> I've followed the development a bit
<RoyK> nah, oracle gave away the code, so it's somehow alive still
<RoyK> raid[56] is on its way there
<gobbe> it would be nice to have zfs in the linux kernel...maybe someday
<RoyK> so, give it a year or two, perhaps it'll be comparable to zfs
<gobbe> yea
<RoyK> gobbe: I'm running a little benchmark with iozone on openindiana with native zfs and ubuntu lucid/maverick with zfs-fuse just to see how it performs
<RoyK> that is, I was, but then I broke my fucking leg, so I won't be back until a couple of weeks
<gobbe> :)
<gobbe> i thought to run little test with IBM's XIV and compare it to sun's openstorage
<RoyK> http://karlsbakk.net/xray.png <-- not very funny
<IdleOne> my bad word script is freaking out
<RoyK> lol
<gobbe> but...i'll head to bead
 * RoyK throws some beads after gobbe 
<Rav3nSw0rd> Installing Ubuntu Server 10.10 on an ancient Dell Desktop, removed "quiet" from the installation options and computer stops at "[1.507269] ohci_hcd 0000:02:0a.1: irq 3, io mem 0xff1fd000" with exception of time, this has happened repeatedly. I have tried irqpoll to no avail... help please? (btw, message is from a picture taken using my camera... can't access it directly from the knowledge I have so... yea)
<RoyK> Rav3nSw0rd: boot with ahci=off
<Rav3nSw0rd> thank you :D I shall go try that right now.
<RoyK> Rav3nSw0rd: that is, boot up, remove quiet etc, add ahci=off
<RoyK> or was that noahci?
 * RoyK isn't sure
<RoyK> Rav3nSw0rd: also, have you tried 10.04?
<Arcitens> If there's no DocumentRoot specified in apache's httpd.conf file, what does it default to?
<Rav3nSw0rd> RoyK, is this in addition to or instead of irqpoll? well, I've tried with no irqpoll both ahci=off and noahci, only ahci=off with irqpoll, and same stopping point with exception of just ahci=off, where it stopped before getting to that point
<ventz> RoyK: i modified that chsh a bit more, and now it actually takes care of nscd and it works for TLS :)
<RoyK> ventz: nice :)
 * RoyK just looked through some old stuff http://karlsbakk.net/hacker/
<Arcitens> good night
<hallyn> mdeslaur: it's definately the kernel.  on lucid, it works.  chroot into a lucid chroot on natty, it works.
<hallyn> (I think the check in nbd_ioctl for lo->magic is not sufficient, but that's a pure guess)
<RoyK> Guten Abend
<sabgenton> is https://help.ubuntu.com/community/NetworkConnectionBridge still the best way to bridge?
<sabgenton> uses pre-up brctl bla bla
<sabgenton> too many line have they made something simpler to go in the networking file?
<sabgenton> *lines
<hallyn> mdeslaur: (doh, of course i meant, "chroot into a lucid chroot on natty, it fails - qemu-nbd -d locks up")
<Slyboots> Hmm..
<Slyboots> Using SCreen.. can you split the terminal vertically
 * Slyboots hits his head against the screen..
<Slyboots> stupid regex
 * RoyK points to "man screen"
<kieppie> hi guys. I'm in the process of rebuilding a (file)server from scratch. planning on installing 10.04.1 LTS, and it has a hardware RAID installed. could anyone please recommend a filesystem that would be best-suited for optimum throughput & stability?
<hallyn> hm, no.  lucid userspace on natty kernel does behave differently - the qemu-nbd -d worked, but the qemu-nbd kthread becomes defunct
<qman__> kieppie, for filesystems < 8TB, I use ext3
<qman__> tried and true, never lost any data
<RoyK> for anything > 8TB, use zfs :P
<RoyK> with a hardware raid controller, zfs won't be of much use except it will find out when data is corrupted
<qman__> I've had minor trouble with reiser and JFS, nothing too big
<qman__> but I recommend against XFS
<qman__> I've lost several complete filesystems
<RoyK> qman__: really?
<qman__> yeah
<qman__> power loss and kernel bugs crashing the system
<qman__> resulted in total data loss
<RoyK> I haven't, yet, only been using it sparsely for 10 years, though
<RoyK> I don't use xfs anymore
<RoyK> for spooling it sucks hard
<qman__> in theory it's fine
<qman__> but the problem is when things do go wrong
<RoyK> not for spooling
<qman__> in ext3 and reiser, recovery is pretty easy and reliable
<RoyK> performance is 20% of ext3
<qman__> in XFS, it tends to be total failure
<yann2> I recommend also against reiserfs
<RoyK> killerfs
<yann2> critical bugs in kernel > 2.6.30, non fixed and leading to data loss
<binBASH> MurderFS!
<RoyK> :)
<binBASH> ;)
<yann2> https://bugzilla.kernel.org/show_bug.cgi?id=14826  burnt myself really bad 2 days ago on that one
<uvirtbot> bugzilla.kernel.org bug 14826 in ReiserFS "jdm-20002 reiserfs_xattr_get: Invalid hash for xattr" [Normal,New]
<RoyK> anyway - if you want your data safe, use zfs
<RoyK> no other filesystem (perhaps except btrfs) checksums data
<RoyK> and for large data storage that is a must
<qman__> zfs is great, the problem is solaris
<qman__> zfs in linux or stable btrfs would make my day for sure
<RoyK> there is freebsd, and openindiana and zfs-fuse
<yann2> I'd be careful with zfs too :) heard of version incompatibilities between zfs versions and hosts
<RoyK> yann2: haven't seen it yet, and I follow the zfs ml quite closely
<qman__> but as far as traditional filesystems, ext3 is the most stable and easiest to recover in my experience
<RoyK> yann2: obviously you can't mount a zpool v28 on a system not supporting > v19, but that's about it
<yann2> well I got a nas with ZFS i pray that if my controller crashes I can mount the ZFS partition on an ubuntu :)
<yann2> (solaris right now)
<RoyK> yann2: what sort of controller?
#ubuntu-server 2011-01-08
<RoyK> not any hardware raid?
<yann2> meh. don't ask. you'd know why I'm worried :)
<yann2> zfs raid, j4200, and a... t1000 as a controller
<RoyK> iirc that's pretty standard hardware - you should be able to grab the drives and mount them wherever you want
<yann2> yup I hope so ;)
<RoyK> but on the same rpool, make sure to remove /etc/zfs/cache or whatever it's called
<RoyK> zfs layout is being cached on the rpool
<yann2> t1000 not doing too well. Am a bit more cautious with that type of assumptions though since that reiserfs issue
<yann2> I learnt "filesystem regressions" the hard way ;)
<RoyK> I've been running a couple of 50TB opensolaris boxes for one and a half years
<RoyK> they just work
<yann2> I know, so do I... its the t1000 I'm worried about, its not very redundant, single ide disk
<yann2> hardware failure quite likely in the next 2 years I'd say
<RoyK> can't you just attach a new drive and create a mirror?
<yann2> I should really reinstall a new, better controller
<yann2> I could, but t1000 still sucks as a zfs controller :)
<RoyK> zpool attach rpool origdev newdev
<yann2> better get a new pci-express card for another server
<RoyK> yann2: install a new drive in a usb dock or whatever
<RoyK> yann2: attach it to the rpool
<RoyK> yann2: install grub, and you have a mirror
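The attach-a-mirror sequence RoyK outlines can be sketched as follows (a sketch, not a recipe: Solaris-family syntax, needs root on the storage box, and c1t0d0s0/c1t1d0s0 are placeholder device names — check yours with `zpool status`):

```shell
# Sketch of mirroring a single-disk root pool, per the steps above.
# Device names are placeholders for the existing and the new disk.
zpool attach rpool c1t0d0s0 c1t1d0s0   # turn the lone vdev into a two-way mirror
zpool status rpool                     # watch until the resilver completes
# make the new half bootable too (Solaris x86 boot-loader installer):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
```

Once the resilver is done, either half of the mirror can fail without losing the root pool.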
<yann2> the controller is setup on ufs I think
<RoyK> the controller?
<yann2> the t1000
<RoyK> oh
<RoyK> how is the storage attached?
<RoyK> FC?
<yann2> I got a raidz2 on 6 disks on the j4200, attached via sas to the t1000
<RoyK> ok
<RoyK> just order a new pizzabox from somewhere, move the SAS controller, done
<yann2> that SAS card is several hundred quids, is what has been blocking me for the moment :)
<yann2> yeah... you heard about the difference between pci-e and pci-x? :D
<RoyK> oh, pci-x card? :)
<yann2> I read about it when I tried what you suggested /o\
<RoyK> lsi has some pretty decent cards that don't cost too much
<yann2> "for some reason it wont fit.... press harder... GGNNNN"
<yann2> :P
<RoyK> 9211 is one of them
<yann2> the worst part is I think we got it to fit :'(
<RoyK> the new box will be pci-e, right?
<yann2> 9211? is good?
<RoyK> the 9211 will fit there
<RoyK> 9211 is very fast, but you'll get WWN-based device names
<yann2> writing that down... cant remember which one is -e which one is -x though
<RoyK> which somewhat sucks
<RoyK> -e is the new one
<yann2> want to replace the t1000 by a x4100 I have spare, should be much faster
<yann2> I m supposed to get a new budget in a couple of months, I'll see if it fits in the list :)
<RoyK> with only four drives, I think that will be the bottleneck
<yann2> cant be much worse than right now
<yann2> you're saying you trust 100% the implementation of zfs on linux though?
<yann2> installing that solaris box was like... urrrrrg :'(
<yann2> wouldnt really want to do it again :)
<yann2> thanks for your recommendation on the card btw, I'll write that one down ;)
<RoyK> yann2: we changed to some 3Gbps SAS boards instead
<RoyK> just to get rid of the WWN naming
<RoyK> supermicro and that controller didn't speak well
<RoyK> so we didn't know which drive was where
<RoyK> and with 160 drives, you don't want to lookup the WWN
 * He4D ist away (Forever Alone)
<RoyK> yann2: we ended up with some sas3081 controllers and their internal counterparts
<RoyK> 3801
<RoyK> works well
<RoyK> yann2: http://pastebin.com/Atkpzux5
<kieppie> hi guys. I'm in the process of rebuilding a (file)server from scratch. planning on installing 10.04.1 LTS, and it has a hardware RAID installed. could anyone please recommend a filesystem that would be best-suited for optimum throughput & stability?
<RoyK> kieppie: most will work, how much data do you plan to put there?
 * RoyK wonders if kieppie even watched the discussion after his initial question
<kieppie> hi RoyK: thanks for the response. currently there's about 600 GB I'm backing up (`rsync -av`), but it could very well grow well above the TB mark in the coming year or so.
<RoyK> kieppie: ext4 is safe, well tried and works, nothing fancy but it works
<kieppie> i did try & follow the discussion after my initial question, but it either didn't seem pertinent, or I missed out on a chunk...
<yann2> RoyK, hey, be happy to have cache disks :)
<RoyK> kieppie: and as I tried to tell you earlier tonight, try to read up on zfs if you want to do serious data storage
<RoyK> yann2: wot_
<RoyK> ?
<kieppie> cool, thanks. that's the de-facto default I would've gone with, but just wanted a second opinion whether something like zfs or btrfs wasn't better-suited
<yann2> http://pastebin.com/Atkpzux5  < you got ssd caches right? :)
<RoyK> kieppie: zfs kicks ass, if you need it
<kieppie> RoyK: thanks for the ZFS advice. I'll look into it
<RoyK> kieppie: http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
<kieppie> cheers
<yann2> RoyK, I think opensolaris is dead though, not sure about new deployments...
<RoyK> kieppie: don't misunderstand, zfs is great for large storage, but is a bit on the heavy side for small stuff
<RoyK> yann2: openindiana :)
<yann2> not sure about the exact status though but it didnt seem very bright
<RoyK> well, you might not be very bright, and that's ok
<kieppie> RoyK: sweet. by heavy, do you mean in terms of system resource overhead, or something else? I'm setting up a new server from scratch, so if ZFS will deliver excellent performance, then it seems a good idea...
<RoyK> zfs takes its toll on memory
<RoyK> you want 4 or 8 gigs of memory for a start
<RoyK> if you start playing with dedup, which you SHOULD NOT DO, well, if you still do, add a lot of L2ARC (that is SSD for caching)
<RoyK> but then, dedup on zfs is not stable
<RoyK> I've been testing it quite extensively
<RoyK> and chosen not to use it
<kieppie> cool. well, memory scales well, so I can add as much is needed.
<RoyK> 4-8gigs should be sufficient for a decent storage server
<RoyK> that is, 20TB or so
<RoyK> just don't use dedup
<RoyK> and if you're in doubt, don't use dedup
<RoyK> for your 4TB server, 2TB will be fine
<RoyK> 2GB even
<kieppie> well, this seems to be a old-ish server (2 or 3 years), & has only 1 G. I think I'll pop out & go get more RAM; 8 G (4 x 2G)
<kieppie> dell "poweredge" xeon 3.2 G
<RoyK> how much storage do you plan for that one?
<yann2> RoyK, for 4TB ext3 should be fine :)
<kieppie> so, other than "just selecting" ZFS as my FS of choice when doing the install, & avoiding dedupe, are there any other considerations I need to look out for?
<kieppie> At the moment it has 3 x 1 TB SATA's on a Hardware RAID (which I'll leave as-is) , which will have to do for a while. I'll clean it up a bit once the box is on-line again, but in terms of adding more storage, I'll rather wait for $$$ to become available & then build an entire new box from scratch, possibly with fibre-channel SSD's, etc
<RoyK> no need for fibre channel
<RoyK> sata is just as good
<kieppie> cool
<RoyK> just read up on zfs and you'll see why you should or should not use it
 * RoyK has gone to sleep
<kieppie> reading the wikipedia article now & have a few other relevant tabs open. seems a good fit; just need to throw more RAM at it
<RoyK> kieppie: just one thing - hardware raid is bullshit compared to zfs in terms of safety
<kieppie> RoyK: to be safe, I'll use both
<RoyK> kieppie: no, you misunderstand
<RoyK> kieppie: if zfs has access to the drives directly, it can prevent data loss, far better than any hardware raid
<kieppie> oooooh!
<RoyK> if you use zfs on top of hardware raid, zfs can merely detect data loss
<RoyK> I've seen that a few times
<kieppie> ok, then I think I'll hold off for now (may be a bit overkill just yet), & do some experimenting for a future build...
<RoyK> hardware raid systems with zfs filesystems on top and oops, corruption
<kieppie> oh, I see: "double correction"?
<RoyK> thing is, those hardware raid systems don't see data as a whole, only blocks, and merely that
<RoyK> if, no, when, you get silent errors from a drive, you want the filesystem to fix that
<RoyK> most filesystems rely on the drive reporting whether the data is ok or not
<RoyK> if it reports ok, the filesystem just sends the data up to user or kernel or whatever
<RoyK> but with terabytes of data, you _will_ get silent errors
<kieppie> but in terms of physicality, is it not always faster to have multiple disks to read from?
<RoyK> the difference isn't big between reads and writes
<RoyK> if you leave it to a dumb raid controller to sort out what's good or not, silent errors will make corrupt data
<RoyK> that's a fact, not fiction
<kieppie> so, drop the h/w raid controller & add system resources to handle FS overhead.
<kieppie> hmmmmm.... FreeNAS.....? I think it uses ZFS internally, & already optimized for throughput & FS functions....
<RoyK> just setup an opensolaris system or something with stupid controllers
<kieppie> & it's BSD ..... :D
<RoyK> direct access to the drives
<RoyK> freebsd if   you like
<kieppie> know FreeBSD better than solaris..
<RoyK> freebsd has a very old zpool version, so I wouldn't recommend it
<kieppie> think there's much life left in Solaris?
<RoyK> well, solaris isn't that far apart
<RoyK> there's a lot of solaris users
<RoyK> and openindiana
<RoyK> I just setup two 100TB boxes on OI, and I don't regret it
<kieppie> wow! 100TB? pretty sweet :)
<RoyK> two of them
<RoyK> 160TB raw storage in each
<RoyK> but leave some redundancy, and we're at 2x100TB
<kieppie> planning on starting a hosting co, or just a killer media-center?
<RoyK> heh - bacula backup storage :)
<kieppie> SaaS? hosted backups?
<RoyK> just some supermicro boxes with SAS controllers and a truckload of WD Black 2TB drives
<RoyK> cost us about $20k a piece
<kieppie> for long term storage, are the WD green's not better?
<RoyK> everything will probably work
<RoyK> but the scrub times for those greens will be terrible
<RoyK> I have a 30TB setup with those
<kieppie> ah
<kieppie> thanks for your help & advice, RoyK: I'll head out & go get some more RAM while I wait for this backup to finish.
<sabgenton> is https://help.ubuntu.com/community/NetworkConnectionBridge still the best way to bridge?
<sabgenton> uses pre-up brctl bla bla
<sabgenton> too many lines have they made something simpler to go the networking file?
<sabgenton> go in the
<sabgenton> is there an ubuntu debian way I mean
<sabgenton> rather than just using brctl directly
<sabgenton> no?
<pmatulis> sabgenton: that page is overly complicated
<pmatulis> sabgenton: i privated you a simple configuration
<Pupeno[work]> How do you configure which services start at boot time... any tool for that?
<patdk-lap> Pupeno[work], depends on what services it is :)
<Pupeno[work]> patdk-lap: mysql, postgresql, apache.
<sabgenton> pmatulis: so bridge_ports eth0 eth1 would do all that I need  without needing to type brctrl ?
<patdk-lap> use update-rc.d
<sabgenton> if i just want to bridge eth0 and eth1 with bridge utils
<Pupeno[work]> patdk-lap: actually, it is for crashplan... and it didn't work.
<sabgenton> pmatulis: ok sweet
<sabgenton> found http://manpages.ubuntu.com/manpages/lucid/en/man5/bridge-utils-interfaces.5.html
<sabgenton> sorry for ever doubting you
<sabgenton>  https://help.ubuntu.com/community/NetworkConnectionBridge is way out of date then
<sabgenton> works but not showing the ubuntu way at all
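The resulting /etc/network/interfaces stanza would look roughly like this (a sketch assuming two NICs named eth0/eth1 and DHCP on the bridge; with bridge-utils installed, its ifupdown hooks handle brctl, so no pre-up lines are needed):

```
# /etc/network/interfaces - bridge eth0 and eth1 the ubuntu/debian way
auto br0
iface br0 inet dhcp
    bridge_ports eth0 eth1   # ifupdown runs brctl addbr/addif for you
    bridge_stp off           # turn on if there is any risk of switching loops
    bridge_fd 0              # no forwarding delay for a simple bridge
```

The bridge_* options are documented in the bridge-utils-interfaces man page linked above.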
<mrroth>  hi, I have an esxi box, P4, that has two 500 gig disks and one usb flash key (2 gigs). I want esxi on the flash key, and I want raid 1 on the two 500 gig disks. then I want one freenas appliance and one ubuntu server appliance. is all that possible, and if so, how?
<pmatulis> mrroth: would you like some fries with that?
<mrroth> hmm
<mrroth> mp
<mrroth> no just help pmatulis
<ledzeplin1989>  hello all, i was wondering if someone could help me please with wordpress
<mrroth> ##wordpress
<ledzeplin1989> thanks
 * sabgenton put a one line update @  https://help.ubuntu.com/community/NetworkConnectionBridge 
<sabgenton> just told people to go to the bridge-utils-interfaces man page for the bridge_ports option
<ledzeplin1989> mrroth: thanks for passing me on to #wordpress but no one will talk to me on there, despite the fact there are people talking on there
<ledzeplin1989> :(
<ledzeplin1989> sorry, Im sorry to winge, I've just used irc a few times now for help and ended up getting no support at all
<delimiter> ledzeplin1989: what's the problem you need help with
<ledzeplin1989> I tryed to install wordpress earlier, I used a guide from the ubuntu site: https://help.ubuntu.com/community/WordPress
<ledzeplin1989> I followed all the instructions to install it on to my server but for some reason it will not load the wordpress page up to install it
<ledzeplin1989> thank you delimiter :)
<ledzeplin1989> got someone now talking to me on #wordpress but thank you very much
<ledzeplin1989> its greatly appritiated!! :)
<Error404NotFound> the version of zabbix is pretty old, any ppa out there?
<gobbe> 1.8.2, and newest is 1.8.4
<gobbe> not that old
<gobbe> 1.8.4 was released on january 4th
<skoude> Hi! I was wondering whether it would be possible or useful to run an enterprise datawarehouse inside an ubuntu server private cloud? What benefits could there be?
<skoude> And is it possible for example run PosgtgreSQL inside cloud?
<RoyK> skoude: you can run anything in a cloud, but for specific things like databases, I'd recommend dedicated hardware
<skoude> Yes currently we are planning physical servers:) This was just an idea... It would be nice to see which kind of performance loss there would be when running the DW in cloud :)
<RoyK> skoude: for a database server, I'd separate that from the cloud
<RoyK> and then just have the cloud clients access that db server
<skoude> hmm. are there any other solutions than greenplum, gridSQL or pgpool that use postgres?
<skoude> or is working with postgres:)
<skoude> sorry wrong channel :)
<ruben23> hi guys, what should i do? installing ubuntu server and it asks me for a driver - my SATA HDD is not detected.. any idea guys..?
<RoyK> ruben23: pastebin lshw output
<ruben23> where should i run this command? im on the installation menu.
<RoyK> ruben23: shift+left should take you to a console
<ruben23> ok ill do it now
<RoyK> you may have to boot on a live cd to get the output out of there :P
<ruben23> ok ill do it
<ruben23>  RoyK:ill still goto the console right..
<RoyK> with a live cd, you can just open a terminal to get that info
<ruben23> where can i get a livecd of ubuntu 8.04 LTS
<RoyK> the ubuntu desktop cd is a livecd
<RoyK> btw, have you tried 10.04?
<RoyK> might be newer drivers there for your controller
<ruben23>  RoyK: yes i tried, problem is i got an application that doesnt run at all on ubuntu-10.04 LTS
<ruben23> eaccelerator for php
<RoyK> ruben23: http://blog.up-link.ro/how-to-install-and-integrate-eaccelerator-into-php5/
<RoyK> first hit on google
<DrNick_> is anyone here knowledgeable regarding likewise-open?  in particular, verison 5.4 which comes with ubuntu 10.4 LTS server
<RoyK> DrNick_: I didn't know that one - gotta try that when I get back to work :)
<DrNick_> the basic authentication seems to work fine, however the problem i'm having (and some others) is assuming the default domain.  i can change the setting in the likewise "registry", and it applies and refreshes OK, however it doesn't actually work, i.e. trying to authenticate without the domain fails
<DrNick_> i'm tempted just to un-install likewise and do it manually with samba + winbind instead :)
<RoyK> well, that works too
<DrNick_> would be really nice if they backported the latest likewise-open (which apparently doesn't have the problem) to the LTS version of ubuntu server, as obviously I don't want to upgrade to a non LTS version of the distro - as this server will be in produciton
<DrNick_> * production
<DrNick_> test & dev servers I don't mind running whatever really, but production stuff I like to keep on LTS only versions
<ruben23> guys im connecting ubuntu 8.04 LTS and ubuntu server 10.04 - im installing php, but is it ok that it will communicate with the other php and mysql..? even if they dont have the same version..?
<RoyK> DrNick_: apt-get source likewise-open
<RoyK> then extract the new source in that dir, overwriting the old
<RoyK> make a new package, intall it
<RoyK> dpkg-buildpackage iirc
<RoyK> it's pretty trivial
<RoyK> most of the magick is in the debian/ directory of the source package
<RoyK> configure string and so on
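Spelled out, RoyK's rebuild-from-newer-source recipe goes roughly like this (a sketch: the package name is from the discussion, the tarball name is a placeholder for whatever newer upstream release you fetched, and apt-get source needs deb-src lines in sources.list):

```shell
# Rebuild an Ubuntu package against newer upstream source, per the above.
apt-get source likewise-open          # fetch packaged source incl. the debian/ dir
sudo apt-get build-dep likewise-open  # install its build dependencies
cd likewise-open-*/
# unpack the newer upstream tarball over the old tree, keeping debian/
tar --strip-components=1 -xzf ../likewise-open-NEW.tar.gz
dpkg-buildpackage -us -uc -b          # build an unsigned binary package
sudo dpkg -i ../likewise-open_*.deb   # install the rebuilt .deb
```

As RoyK says, most of the packaging magic lives in the debian/ directory, so the rebuild usually Just Works if the new source doesn't change the build system.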
<uvirtbot> New bug: #700288 in amavisd-new (main) "amavisd-release not working with quarantine_subdir_levels" [Undecided,New] https://launchpad.net/bugs/700288
<lenios> ruben23, there shouldn't be any problem
<ruben23>  lenios:..?
<lenios> running different versions on 10.04 and 8.04 of php and mysql
<ruben23> is it fine? actually im connecting these two servers
<lenios> unless you're using features only available on latest versions, yes
<ruben23> guys, any chance how to change resolution on ubuntu server 10.10 to something higher
<guntbert> ruben23: try with dpkg-reconfigure console-setup, in the end you are asked for the font size you want
<ruben23> its asking for keyboard model
<ruben23> guntbert: asking for keyboard model
<guntbert> ruben23: pc105 (intl) should be fine
<ruben23> what you mean..?
<ruben23> i mean resolution, not keyboard - sorry
<guntbert> ruben23: that reconfigures the "console", consisting of keyboard and screen
<ruben23> what should i select..? tell me
<guntbert> ruben23: for keyboard? I already suggested ^^
<RoyK> ruben23: Ålovakish?
<ruben23> RoyK: im using a US keyboard - im a Filipino - philippines
<Error404NotFound> i created a dir in /var/run, but every time i reboot, it disappears and a piece of software starts complaining about the missing directory
<gobbe> Error404NotFound: that's because /var/run is designed that way
<gobbe> Error404NotFound: you could create it on every boot via rc.local
<Error404NotFound> gobbe: hmmm, the init scripts run before or after rc.local?
<gobbe> rc.local is last one
<Error404NotFound> hmm, then its no use :p
<gobbe> well, you can configure your init-scripts also
<gobbe> anyway, quite weird that the software needs a folder and isn't able to create it
<Error404NotFound> yup, thinking to add a mkdir there.
<Error404NotFound> gobbe: its a source compiled install.
<gobbe> yeah
<gobbe> then manipulating init-scripts might be the best way
<RoyK> Error404NotFound: just create a new init script - see the skeleton file
<Error404NotFound> gobbe: both are S20, what if i create a new script and do it as S10?
<Error404NotFound> RoyK: thats what i thought as well
<Error404NotFound> where can i find skeleton file?
<Error404NotFound> found it
<gobbe> yep
<Error404NotFound> RoyK: actually why do i need skeleton? i just need to define PATH, do one mkdir and chown
<RoyK> it might look nicer if you do it the ubuntu way
<RoyK> but it's by no means necessary
<RoyK> originally, unix had one file /etc/rc
<Error404NotFound> i will need to manually make symlinks right? or can i specify a number with update-rc.d command
<binBASH> http://twitter.com/timmartin2/status/23365017839599616
<binBASH> rofl
<Error404NotFound> found it
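What Error404NotFound ended up wiring together might look like this minimal sketch ("myapp" is a placeholder name, and /tmp stands in for /var/run here so the snippet runs unprivileged; a real script would target /var/run/myapp and be registered with something like `sudo update-rc.d myapp-dirs defaults 10`, so its S10 links sort before the daemon's S20 ones):

```shell
#!/bin/sh
# Boot-time script to recreate a runtime directory.
# /var/run is emptied on every reboot, so a daemon expecting
# /var/run/myapp needs the directory recreated before it starts.
# "myapp" is a placeholder; /tmp is used so this runs unprivileged.
RUNDIR="${RUNDIR:-/tmp/myapp-demo}"

# install -d does mkdir -p plus chmod (and optionally chown) in one step
install -d -m 0755 "$RUNDIR"
echo "created $RUNDIR"
```

The same one-liner dropped into rc.local would also work, but as gobbe notes, rc.local runs last, so an init script with a lower sequence number is the reliable option.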
<uvirtbot> New bug: #700337 in migrationtools (universe) "No documentation for process "migration"" [Undecided,New] https://launchpad.net/bugs/700337
<RoyK> binBASH: ROTFL
<jeeves_Moss> how do I repair a pooched grub loader on 8.04?  it decided to die after I did a dist upgrade
<RoyK> jeeves_Moss: grub-install ?
<RoyK> is this grub1 or 2?
<RoyK> normally grub1 on 8.04
<jeeves_Moss> grub 1
<jeeves_Moss> I'm trying to get booted into the LiveCD.  Slow external ROM
<jeeves_Moss> RoyK, still here?  I'm booted via the liveCD now
<RoyK> I'm here
<jeeves_Moss> RoyK, ok, ideas on how to fix this?  LOL.  dumb upgrade
<RoyK> hopping around on fucking crutches
<RoyK> jeeves_Moss: I'd say, sudo in as root, mount the root volume, chroot into it, mount -a, and run grub-install
<IdleOne> RoyK: you going to be around all day?
<RoyK> no idea, but I guess I'll be around for a while
<IdleOne> ok so I'll ignore my client flashing
 * RoyK is pretty immobilised by a broken leg
<RoyK> client flashing?
<jeeves_Moss> RoyK, was the broken leg caused by a party induced accident?
<IdleOne> yeah every time you swear my irc client goes nuts
<RoyK> jeeves_Moss: nah, just tripped in a staircase in Iceland
<RoyK> on my way to a party
<RoyK> I just had one wish, to stay a few more days, and that was granted :P
<jeeves_Moss> that sucks!!  I could see it being better on the way HOME from the party
<compdoc> they do like their parties in iceland
<RoyK> iceland rocks!
<compdoc> nothing else to do there
<jeeves_Moss> RoyK, ok, I have the partition in question mounted into /mnt/temp, now just run grub-install?
<compdoc> I flew past it once - nothing but white
<RoyK> jeeves_Moss: chroot /mnt/temp
<RoyK> mount -a
<jeeves_Moss> ok,
<jeeves_Moss> then grub-install
<RoyK> that'll leave you with your old system
<RoyK> yes
<RoyK> but I'm unsure about the arguments as of now
<jeeves_Moss> kk
<RoyK> iirc it just takes the device name, as in /dev/sda or something
<RoyK> that's the device name of the drive you're on
<jeeves_Moss> ahh
<RoyK> obviously
<compdoc> the old /etc/fstab should show the correct partitions
<jeeves_Moss> I should just install grub2 and get it over
<RoyK> nah
<RoyK> keep it on grub1
<RoyK> grub2 is a PITA if you don't know it
<jeeves_Moss> true, I was thinking more that it's installer should fix the issues @ hand!
<RoyK> there is a choice in the boot menu for fixing things iirc
<RoyK> fix a broken system or so
<jeeves_Moss> hummmm
<jeeves_Moss> one sec,  going to reboot to see if I can find it
<RoyK> I don't know if that does grub install, but I would be somewhat surprised if it didn't
<jeeves_Moss> and to think, all of this came from an upgrade
<compdoc> happens a lot
<jeeves_Moss> weird, I booted into "recovery mode", and now, I get udev_monitor_new_from_netlink: error getting socket: invalid argument
<ruben23> hi are there any default firewall on ubuntu server 10.10...?
<KurtKraut> ruben23, configured and working by default? No.
<jeeves_Moss> RoyK, how can I mount this root partition on this server so I can do a dpkg-reconfigure on it?
<jeeves_Moss> RoyK, how can I get this dumb thing out of read only mode?
<RoyK> jeeves_Moss: just mount the root partition, chroot into it, mount -a (to mount proc etc)
<RoyK> that should be all
<jeeves_Moss> thanks
<RoyK> jeeves_Moss: I would think running an fsck -f on that filesystem first might be a good idea
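Collected in one place, RoyK's live-CD recovery recipe looks like this (a sketch, not a script: /dev/sda1 and /dev/sda are placeholders for the real root partition and boot disk, and each command is typed one at a time at a root shell in the live environment):

```shell
# From the live CD, as root; device names are placeholders.
fsck -f /dev/sda1        # check the old root filesystem first (while unmounted)
mount /dev/sda1 /mnt     # mount the installed system's root
chroot /mnt              # step into the installed system
mount -a                 # inside the chroot: mount /proc etc. per its fstab
grub-install /dev/sda    # reinstall grub1 to the MBR of the boot drive
exit                     # leave the chroot, then reboot without the CD
```

grub-install takes the device name of the drive you boot from, as RoyK recalls below - /dev/sda or similar, not a partition.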
<jeeves_Moss> I SOOOOOO don't want to do a fresh install right now
<jeeves_Moss> I'm wondering if I just copy all of my root drive to an external and do a fresh install.  My biggest concern is that I have a RAID array mounted (software) at MD0
<RoyK> the raid should be safe
<RoyK> linux will read the config from the drives
<jeeves_Moss> RoyK, that's a 100% sure that it'll be smart enough?  I can't lose that data (and yes, there's no backups)
<RoyK> jeeves_Moss: while in the livecd, can you see md0?
<RoyK> cat /proc/partitions
<RoyK> cat /proc/mdstats
<jeeves_Moss> doesn't like the last one!
<RoyK> jeeves_Moss: do you have a spare drive you can use for a new root?
<jeeves_Moss> RoyK, no :-(
<RoyK> or an usb pen?
<RoyK> s/an/a/
<jeeves_Moss> yes
<RoyK> well, unplug the root drive, install ubuntu server on the usb pen
<RoyK> you'll see quite quickly if you can address the raid
<jeeves_Moss> what I was thinking was backing everything up off of /dev/sda to an external drive, then reinstalling
<RoyK> jeeves_Moss: up to you :)
<jeeves_Moss> could I boot from the liveCD and see if it sees it?
<RoyK> I'm not sure all the drivers will be loaded
<compdoc> the livecd sees my 3ware raid
<RoyK> I was more concerned about the software raid
<jeeves_Moss> well, I guess this is a lesson learned then.
<jeeves_Moss> if it 'aint broke, don't f**k with it
<compdoc> what version did you upgrade to?
<jeeves_Moss> I just forced a dist upgrade from the CLI
<RoyK> jeeves_Moss: do-release-upgrade?
<RoyK> or just apt-get dist-upgrade?
<RoyK> the latter should be trivial
<jeeves_Moss> do-release-upgrade
<RoyK> hm...
<RoyK> :)
<RoyK> playing with matches
<RoyK> and a wee bit of gasoline
<RoyK> jeeves_Moss: if I were you, I'd disconnect the current root disk and try a fresh install on another drive, USB pen or spinning crap, doesn't matter
<jeeves_Moss> I think we may have something here!!!
<jeeves_Moss> http://www.linode.com/forums/viewtopic.php?t=5276%3E
<jeeves_Moss> well, we now have a 10.04 splash screen, so, it's time to see if this works
<jeeves_Moss> ...  and we have a login
<RoyK> :)
<RoyK> and the raid is there?
<jeeves_Moss> not yet.  I'm fixing broken packages right now.
<RoyK> dpkg --configure -a
<RoyK> :)
<jeeves_Moss> it's "limping" along right now.
<jeeves_Moss> lol
<RoyK> apropos limping http://karlsbakk.net/xray.png
<jeeves_Moss> bolly crap man!
<jeeves_Moss> *holly
<RoyK> it'll heal
<compdoc> do they take out the metal someday?
<RoyK> dunno, they said it could stay there if it didn't bother me
<jeeves_Moss> untill you hit a MRI machine!  LOL
<compdoc> just loks like they screwed one bone to the other, so I would think it would lessen movement
<compdoc> looks
<jeeves_Moss> brb, going to switch laptops
<RoyK> one of the screws goes through fibula and into tibia, but that'll break, the doctor said, and shouldn't pose a problem
<compdoc> wow
<RoyK> another operation, and I'll be grounded for some more weeks
<RoyK> good thing it's titanium, less hazzle at airports
<compdoc> dont you have some volcano going off up there?
<RoyK> Eyjafjallajökull has fallen asleep
<RoyK> some 8 months ago
<compdoc> ahh, good
<RoyK> and, no, I'm not Icelandic :)
<RoyK> but I've studied the language for some time
<RoyK> btw http://www.youtube.com/watch?v=9jq-sMZtSww
<compdoc> I could never pronounce it
 * RoyK can
<RoyK> but, believe me, it took me some time to learn the intonations in that language, and I'm still not there
<jkg> hi folks. I've just set up a Dovecot IMAP server on Ubuntu 10.04 (migrating from a host running old-ish Debian), I can connect to port 993 locally (e.g. with telnet) but not from other hosts. is there some obvious thing I should be checking, like a default firewall? (#ubuntu said I should try here!)
<jkg> oh, to pre-empt the obvious first answer, I have "ssl_listen = *:993" in my dovecot.conf, which is copied from the old server.
<RoyK> jkg: does lsof -p $pidofdovecotserver show that it actually listens?
<jkg> dovecot 14458 root    7u  IPv4            6911728      0t0     TCP *:imaps (LISTEN)
<jkg> suggests yes to me, but I'm not entirely sure what I'm looking at :-)
<gobbe> might be firewall
<jkg> yeah, that seemed the logical answer - but I don't _think_ I'm running one - iptables -L doesn't show any rules, and I've not installed anything else
<compdoc> looks like its listening.
<compdoc> ubuntu doesnt ask to install a firewall, if I remember
<compdoc> what are you typing into the remote host?
<compdoc> name or ip address?
<jkg> telnet nephos.uk-cvs.com 993
<compdoc> try ip address
<jkg> but I get the same result by IP - "Trying 84.22.181.182..."
<RoyK> jkg: pastebin iptables-save output
<jkg> http://paste.ubuntu.com/551855/
<gobbe> try tcpdump to see that traffic actually comes
<gobbe> in
<gobbe> why it is so hard to type correct with mobile phone :-)
<jkg> I'm a bit rusty on tcpdump - do I want: "sudo tcpdump 'tcp port 993'"?
<jkg> interesting: from the machine itself, I can connect to 84.22.181.182:993 (although my tcpdump command is obviously wrong, since it didn't pick that up ;) )
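For anyone following along, the "works locally, fails remotely" symptom has a short checklist (a sketch only; 993 is the IMAPS port under discussion, and these commands need root):

```shell
# Is dovecot bound to all interfaces, or only loopback?
#   sudo netstat -tlnp | grep :993      # want *:993 or 0.0.0.0:993, not 127.0.0.1:993
# Any local firewall rules that could drop the traffic?
#   sudo iptables-save
# Do remote connection attempts even reach the box? Watch while a client retries:
#   sudo tcpdump -ni eth0 'tcp port 993'
# If the daemon listens, iptables is empty, and no SYNs arrive, the block is
# upstream of the server (which is what it turned out to be here).
```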
<jkg> tcptraceroute output is interesting, http://paste.ubuntu.com/551861/
<gobbe> it could be your external firewall
<gobbe> it seems that you are doing somekind of natting
<jkg> I am behind NAT, but the server isn't -- but I get the same results from another machine not behind NAT, too.
<gobbe> aah, ok
<gobbe> can you open any other connection your server like ssh etc?
<jkg> yep, I can ssh to it fine, load web pages from it fine
<gobbe> jkg: upload your /etc/dovecot/dovecot.conf to pastebin
<Thorn> hello
<Thorn>  is php 5.3.5 available for 10.04 anywhere yet?
<Thorn> (I'm mostly interested in php5-fpm)
<gobbe> Version: 5.3.2-1ubuntu4
<gobbe> so no
<gobbe> if you mean supported packages
<jkg> ahah! after all that, it /was/ a firewall issue. I just called our hosting provider, by default they block port 993 inbound!
<gobbe> :)
<gobbe> jkg: asking from someone else helps usually ;)
<Thorn> I don't really care if the packages are unofficial since 5.3.5 fixes a pretty huge security hole
<guntbert> jkg: I understood it did work before (on debian) ?
<jkg> sorry, the debian machine was in a different network location too, I should have mentioned that.
<Thorn> http://www.dotdeb.org/2011/01/07/you-really-should-upgrade-to-php-5-3-5-or-5-2-17/
<guntbert> jkg: :-)
<RoyK> Thorn: the security fixes may be backported - that stuff happens a lot in debuntu land
<RoyK> Thorn: check that first
<jkg> I couldn't imagine it would be the network provider's firewall to blame, so I just mentioned the stuff I thought might be relevant :-) so this means /all/ their customers have 993 blocked inbound... oddness.
<jkg> I'd name and shame them, but I think I've disclosed enough information for anyone interested to work it out ;)
<RoyK> jkg: can you telnet to the port on the server's address?
<RoyK> not localhost
<jkg> from the server itself? yeah. that was the final clue that I needed to ring the network provider and ask them about their firewalling :-)
<RoyK> jkg: and preferably from a box on the lan
<RoyK> jkg: ufw status
<RoyK> jkg: or pastebin iptables-save
<Thorn> nope, last update 2010-09 https://launchpad.net/ubuntu/lucid/+source/php5/5.3.2-1ubuntu4.5
<RoyK> php5 (5.3.2-1ubuntu4.5) lucid-security; urgency=low ...... ECURITY UPDATE: arbitrary code execution via empty SQL query
<RoyK> arbitrary code execution is "low"?
<jkg> RoyK: it's cool, I spoke to the vendor and they're changing the firewall config. thanks, though.
<RoyK> jkg: heh - ic
<Thorn> that's not the bug I'm looking for
<RoyK> Thorn: building a new package from source won't be too hard, though
<RoyK> apt-get source
<RoyK> extract new source into that dir
<RoyK> dpkg-build-package
<RoyK> iirc
<Thorn> there is no source package
<RoyK> there are source packages for all ubuntu packages
<RoyK> just get the source, unpack the stock php tarball into that, and make a new package of that
<Thorn> but as far as I can see there is no ubuntu package which would include that fix
<RoyK> erm
<Thorn> oh, interesting
<RoyK> most of the magic is in the debian/ directory in the source tree
<RoyK> that says what to build and where to install it plus some package magick
<Thorn> I'll try that, thanks
 * RoyK leans back and watches the usual suspects
<lenios> RoyK, isn't pbuilder prefered over dpkg-build-package to build binary packages?
<ScottK> lenios: pbuilder uses dpkg-buildpackage.  It's a higher level tool.
<ScottK> Generally it's better to use it.
<lenios> any reason to recommend dpkg-buildpackage then?
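The rebuild workflow RoyK outlines (the command is `dpkg-buildpackage`, shipped in `dpkg-dev`), sketched end to end. Version numbers are the ones from the discussion and the steps are untested here:

```shell
#   sudo apt-get build-dep php5       # install the build dependencies
#   apt-get source php5               # fetch and unpack the Ubuntu source package
#   cd php5-5.3.2
#   uupdate ../php-5.3.5.tar.gz       # (from devscripts) graft debian/ onto the new upstream tarball
#   cd ../php5-5.3.5
#   dpkg-buildpackage -us -uc         # build unsigned binary packages
```

As ScottK notes, wrapping the last step in pbuilder instead builds in a clean chroot, which catches missing build dependencies.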
<ruben23> hi guys how to make my hostname resolve to my local IP on my own ubuntu server..?
<lcb> hi all. please give me a suggestion of a good (non CLI) GUI replacement for webmin, a  web interface to control some servers, mainly LAMP. but if possible, more servers on the same tool package . tks ppl.
<lcb> (let me add i intend to use the interface trough lan, i.e. blocking outside world)
<JanC> what else do you need outside LAMP?
<lenios> ruben23, define it in /etc/hosts
<JanC> lcb: what else do you need to configure outside LAMP?
<ruben23> lenios: got no idea what the format would be..
<lenios> format is : IP   hostname
<JanC> ruben23: read "man 5 hosts"
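Spelled out, the format lenios gives is one mapping per line: address, canonical name, then optional aliases (the values here are illustrative, not ruben23's):

```text
# /etc/hosts (illustrative fragment)
127.0.0.1       localhost
192.168.1.123   myserver.example.com   myserver
```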
<lcb> JanC: I'm trying to build now the server, so, let's say, FTP, for instance and as time goes i'll add some multimedia servers.
<lcb> JanC: but i'll be happy if it's onçy for LAMP, at this point.
<lcb> onçy/only
<lcb> JanC: due to the suspended webmin for ubuntu and debian i wonder if there is any workaround to avoid the glitches on it though.
<ruben23>  lenios: i got already : 127.0.1.1  Database..
<lenios> ruben23, paste your /etc/hosts
<JanC> paste on a pastebin!
<lenios> !paste
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://tinyurl.com/imagebin | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<ruben23>  lenios: ------------>http://pastebin.com/VM1JG4cK
<ruben23>  lenios: you there..?
<lenios> ruben23, what does "host Database" return?
<qman__> ruben23, instead of 127.0.1.1, you should use your interface's IP for your host's name, i.e. 192.168.1.123
<ruben23> http://pastebin.com/P3EH3TTj
<qman__> and for what it's worth, "Database" is a potentially very bad hostname if you ever have to talk to anyone about your server by its hostname
<qman__> example, "I can't connect to database!", does it mean they can't connect to the server called Database, or that they can't connect to a database?
<lenios> haha
<ruben23>  qman__: ok ill chnage it
<JanC> I also recommend you use lowercase hostnames...
<ruben23> JanC: ok, i tried testing it- replaced  127.0.1.1  Database-  then i get timeout
<lenios> replaced with what?
<ruben23> lenios: i put it- innermaxdb  (new hostname)
<lenios> with what IP?
<lenios> which*
<ruben23> on the  --> /etc/hosts , i have this ---> 192.168.2.2     innermaxdb  , is this ok..?
<lenios> what does host innermaxdb shows?
<gobbe> host-command doesn't use hosts-file
<ruben23> lenios: ---->http://pastebin.com/Hr7D9J7J
<ruben23> still not working
<gobbe> like i told, host-command is not using hosts-file
<gobbe> try ping
<lenios> oh true, host doesn't use hosts file
<gobbe> that might sound little silly, but that's how it is ;)
<lenios> i'll just have to remember that
<ruben23> ok, hope there are ways i can resolve my own hostname to my local ip
<gobbe> ruben23: ping
<JanC> if you have mysql server installed there is also resolveip
<gobbe> yes
<JanC> and you could write a quick script in about any scripting language too
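gobbe's point can be verified with `getent`, which resolves names the way applications do (a minimal sketch; `innermaxdb` is the hostname from the discussion above):

```shell
# `host` and `dig` talk straight to DNS and skip /etc/hosts entirely;
# `getent` uses the normal NSS lookup path (files first, per
# /etc/nsswitch.conf), which is what ping and most programs actually use.
getent hosts localhost                                      # resolved from /etc/hosts
getent hosts innermaxdb || echo "innermaxdb not in /etc/hosts yet"
```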
<jkg> anybody understand pam and/or exim? I'm seeing log lines like http://paste.ubuntu.com/551884/ in auth.log and can't send mail (my username is uk-cvs.com-james, and I'm getting my password right)
<jkg> as before, this is on exim 4.71 on 10.04, using an exim config that worked previously on exim 4.63 on Debian etch; I guess it's a pam issue, or an exim-running-as-wrong-user issue, rather than an exim config thing...
<jkg> the relevant authenticator in the exim config looks like http://paste.ubuntu.com/551886/
<MatBoy> guys is there a reason why dnssec-keygen does not work ?
<MatBoy> it seems to hang
<pmatulis> MatBoy: dunno, try using strace
<RoyK> good evening
<binBASH> moin
<binBASH> ;)
<RoyK> whatever :)
<binBASH> ;D
<_ruben> dnssec-keygen hanging .. lack of entropy comes to mind
<RoyK> _ruben: playing with openvpn?
<RoyK> or bind
<RoyK> I guess
<_ruben> RoyK: was a comment on an earlier q in here actually
<RoyK> i
<RoyK> erm
<RoyK> k
<RoyK> as in ok
<RoyK> laptop on the lap and moving around a bit can be challenging
<lcb> is there any problem if enter localhost's static ip (192.168....) as the LDAP server to use?
<RoyK> lcb: that's not localhost
<lcb> RoyK, indeed, sorry, i mean the machine ip
<RoyK> localhost is, by definition, 127.0.0.1 or ::1
<lcb> yes..
<lcb> i'll play around with it and see as i go if everything goes ok, with localhost, instead of any ip
<jeeves_> is there a "repair in place" option for Ubuntu server (like a repair option on XP)?
<LinuxAdmin> jeeves, what do you need to do?
<jeeves_> LinuxAdmin, I did a forced release upgrade, and now, my system is sitting @ a fsck (and has been for ~8 hours)
<jeeves_> the original release was 8.04, and now the splash screen shows 10.04
<jeeves_> LinuxAdmin, apparently, there is a LOT of complaints about that upgrade.  I think it was a lack of reason that I did the upgrade
<LinuxAdmin> jeeves_, you probably upgraded your kernel
<LinuxAdmin> have you tried to run a previous version of the kernel?
<LinuxAdmin> kernel upgrade can sometimes break things
<RoyK> LinuxAdmin: eeeeeerm
<jeeves_> LinuxAdmin, yes, and it says it can't find the kernel in question, and locks up
<RoyK> LinuxAdmin: most filesystems have been pretty rock solid for years, a minor upgrade won't do much
<jeeves_> lol, so....  since this is a production box (hangs head in shame), I've killed Apache, and our e-mail
<LinuxAdmin> have you checked your grub configurations?
<jeeves_> yes
<LinuxAdmin> I had ubuntu server box that broke my drbd shared storage
<LinuxAdmin> that's why I remembered that
<RoyK> LinuxAdmin: most filesystems are safe
<RoyK> drbd is another story
<LinuxAdmin> ok, but sometimes kernel upgrades break things
<RoyK> jeeves_: reboot to single and fsck it
<jeeves_> LinuxAdmin, I was able to get ONE boot out of it (before it went into this endless FSCK loop) on 10.04.  There was SOOOOO much broken stuff that I'm thinking that I'm just going to back up the 200Gb main drive, nuke it, and do a fresh install.  My biggest concern is that I have a 1Tb software RAID in that box as well, and I hope I can recover it
<LinuxAdmin> that's why he could see if he can boot with another kernel
<jeeves_> RoyK, hey man, it's the same issue from earlier today.
<RoyK> jeeves_: the raid stuff_
<RoyK> ?
<jeeves_> currently, I'm out for dinner with the wife, so I'm on my phone, I'll have to check tomorrow when I get it on the bench @ work.
<RoyK> jeeves_: just don't listen to that LinuxAdmin dude
<jeeves_> yes, I had a RAID0 set sitting @ /dev/md0 that is 2 500Gb disks
 * He4D ist away (Forever Alone!!!1)
<RoyK> sorry, sir, but raid0 is asking for trouble
<patdk-lap> royk, depends on what it's used for :)
<RoyK> for caching, spooling, something, ok
<RoyK> but I somehow doubt you'd need a terabyte for that
<jeeves_> it was to be a temp solution until I could get $200 for 2 2Tb disks that would be mirrored.
<jeeves_> RoyK, lol, spooling of Porn
<patdk-lap> royk, video, but anything <30tb is small for that
<RoyK> ;)
<jeeves_> ...  or Porn of spoling
<LinuxAdmin> ok RoyK, I'm not here anyway
<jeeves_> ok, so, lessons learned here...  back up, back up, back up.
<RoyK> real men don't use backups
<jeeves_> I just hope I can pull a copy of the configs for e-mail, data bases, etc
<RoyK> real men weep
<jeeves_> ....  then we get creative and put the damn thing in the freezer!
 * RoyK doesn't want his macbook pro in the freezer
<patdk-lap> heh, I had a raid50 die last sunday :(
<jeeves_> RoyK, FRUIT???  seriously?
<patdk-lap> 7 disk (6 + 1hotspare)
<patdk-lap> 5 bad drives
<patdk-lap> recovered almost everything
<jeeves_> patdk-lap, WTF?  please tell me there was a bottle of booze found in the server room
<RoyK> patdk-lap: zfs would have found the errors early
<RoyK> zfs ftw!
<patdk-lap> the raid card should have too
<RoyK> nope
<RoyK> the raid card listens to the drives only
<patdk-lap> no it doesn't
<RoyK> if the drives say it's ok, the raid card doesn't give a fuck
<patdk-lap> I had consistency checking on, every other day it does a sweep
<jeeves_> lol, that's why our "large" storage array in the data center is a 12 bay unit with 2Tb drives in it.
<RoyK> patdk-lap: that's just asking the drives
<RoyK> patdk-lap: no data consistency checks
<jeeves_> we're thinking ZFS is the way to go for it for us.
<patdk-lap> it takes 16hours to run a consistency check
<RoyK> only zfs and perhaps btrfs do that
<patdk-lap> it has to check it
<patdk-lap> half of it was also the scsi cable had gone bad
<RoyK> patdk-lap: there is no chance your controller can run consistency checks on the drive data without pre-stored checksums, which it probably doesn't have space for
<patdk-lap> drives that were fine, were marked bad
<jeeves_> anyone heard of the power requirements on a NAS with ZFS?  I'm thinking of buying a dual CPU board for the 2 3Ghz Xeon CPUs and 8Gb of RAM I have, but according to the specs, a P4 is the best
<patdk-lap> and drives that were bad, were still good
<uvirtbot> New bug: #700492 in ntp (main) "ntp complains about ipv6 errors every 5 minutes" [Undecided,New] https://launchpad.net/bugs/700492
<RoyK> jeeves_: there aren't really much power requirements for zfs, just your regular hardware
<jeeves_> RoyK, ok, cool.  saves me some $$!!!
<RoyK> jeeves_: and for zfs, not much cpu is needed
<RoyK> jeeves_: just make sure it's 64bit
<patdk-lap> hmm, isn't the network or disks the normal slowdown?
<patdk-lap> use quad qdr infiniband :)
<jeeves_> well, I figured with 12 disks to work with, and those 12 disks spaced out over 4 4port SATA cards, the slow part should be the PCI bus or the NIC
<RoyK> SAS3 or perhaps SAS6 should be sufficient for most use
<patdk-lap> what do the sata cards use? pci? pciex1?
<jeeves_> and for our production boxes, we have 2Gb FC cards in them
<jeeves_> patdk-lap, PCI (it's an old 2.4Ghz P4)
<patdk-lap> oh horrible
<RoyK> we're using SAS3 for this 100TB setup, works well
<patdk-lap> max the whole system will get is 100MB/sec
<jeeves_> RoyK, how much $$ backing do you have though?  My business partner and I are still in the "start up" phase
<RoyK> patdk-lap: depends how many drives you need, how much speed you want etc
<patdk-lap> you can get a cheap ass pcie system
<patdk-lap> that would be much faster than pci slots :)
<jeeves_> true
<jeeves_> I should get a better board
<RoyK> jeeves_: about $25k for 100TB net storage, 160TB raw space
<jeeves_> RoyK, nice.  well, we're still young/poor, so, everything we have comes from making "deals" and plodding along on a shoestring budget
<patdk-lap> heh, I did mine for cheap
<RoyK> jeeves_: this is off the shelf from supermicro
<jeeves_> once we get going, I'll start replacing stuff.  Consolidate all the web servers into a single blade cab, etc
<patdk-lap> a $30 mb, a simple intel 945 cpu, 4gig ram, pcie intel nic, and put in some pci sata dual port cards
<RoyK> patdk-lap: install something like openindiana on that, add a bunch of drives
<RoyK> setup zfs to do its business
<jeeves_> I was thinking a cheap intel board, 5 PCI-E slots, a dual (or quad NIC), and the rest are 4 port SATA cards
<patdk-lap> that was my network storage/mythtv system
<jeeves_> I was thinking FreeNAS
<binBASH> hi patdk-lap ;)
<RoyK> nah - use openindiana
<jeeves_> why?
<patdk-lap> remember, sata2 drive can max out a pcie x1 if it wants :)
<patdk-lap> hello binbash
<RoyK> jeeves_: freenas is based on freebsd, which has a very old zfs implementation
<RoyK> openindiana development is quite in the game
<jeeves_> lol,  that's the point.  I'm trying to reduce as many bottle necks as I can on that box
<jeeves_> RoyK, lol, maybe I should dust off my SunBlade 1500.
<RoyK> the zfs implementation in fbsd isn't good
<binBASH> Finally my voip phone works \o/ Installing Asterisk 1.8 fixed everything.
<jeeves_> ok boys, I'm outta here.  I have to go pick up my fiance, and go.  No more sitting in the car waiting!!!!  FINALLY!
<RoyK> binBASH: I used to work with asstrix
<binBASH> asstrix? :D
#ubuntu-server 2011-01-09
<pehden> hi all
<pehden> any one know how to set up openvpn
<pehden> any one know how to set up openvpn y/n
<pehden> well
<pehden> dead room
<pmatulis> pehden: try docs on openvpn.net
<pehden> I have been but i think im missing something cause what im reading doesnt show for one thing thats in webmin
<deadsmith> anyone know an acceptable grub argument for video on the efifb for an XServe2,1?
<pmatulis> pehden: webmin doesn't work with ubuntu very well so i understand
<pehden> I know
<pmatulis> pehden: what about https://help.ubuntu.com/community/OpenVPN
<pehden> and i have to find things that work with webmin and run on ubuntu
<pehden> I got http://openvpn.net/index.php/open-source/documentation/howto.html#config
<pehden> could remote (Remote IP)   and  ifconfig (Transport network)  be the same addresses?
<pmatulis> pehden: what's the problem anyway?
<deadsmith> or alternatively, if there's a flag for text login for ubuntu?
<pehden> could remote (Remote IP)   and  ifconfig (Transport network)  be the same addresses?
<pehden> based on the normal start up part of  http://openvpn.net/index.php/open-source/documentation/howto.html#config
<jforman> pehden: huh? what is 'transport network' ? (and there is no need to repeat yourself)
<no--name> hi. anyone know how i can get the right click --> extract here for file-roller? It is in -desktop by default but after I installed file-roller under -server it isn't there 8(
<pehden> in webmin it has both of those to put in for local and peer and then for remote it has 2 entries
<jforman> pehden: are you trying to configure your openvpn server or a client?
<pehden> the server
<pehden> i can get one running but then the client fails to connect
<pehden> so i guess i did it wrong
<jforman> i'm going to go out on a limb and say, ditch webmin in this case and edit the config file by hand. that way you can run openvpn on the command line, kick up the verbosity, and see what the problem is
<pehden> i would use ssh if i could figure out where they hide the .conf
<jforman> its normally in /etc/openvpn (you could also look at the ps output for the openvpn process)
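jforman's suggestion in concrete form (a sketch; /etc/openvpn is the Ubuntu default directory, and the config filename is illustrative):

```shell
#   ps -o args= -C openvpn                                    # shows which --config the daemon was started with
#   sudo openvpn --config /etc/openvpn/server.conf --verb 4   # run in the foreground with raised verbosity
```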
<pehden> from the url pmatulis posted it says it should be there called server.conf
<jforman> it can be called anything, that is just the default.
<pehden> looking there now for it
<pehden> do you have webmin on a server
<jforman> me? no.
<pehden> afk
<billybigrigger> any mail gurus around?
<billybigrigger> i'm having quite the time trying to figure out this relay access denied error
<billybigrigger> receiving my mail is no problem
<billybigrigger> but trying to send to my other gmail account results in a relay access denied problem
<deadsmith> is there an alternate ISO for installation from a USB key?
<pehden> back
<pehden> billybigrigger: who is your ISP
<billybigrigger> not sending mail through my isp
<billybigrigger> i have my own server setup for incoming/outgoing mail
<billybigrigger> ie outbound smtp via mail.thefrozencanuck.ca
<pehden> billybigrigger: Is this server at your house
<billybigrigger> no
<billybigrigger> vps
<billybigrigger> linode
<billybigrigger> sorry for the 1 liners... :D
<billybigrigger> would you like a paste of my main.cf and mail.log that shows the error?
<pehden> billybigrigger: hmm
<pehden> billybigrigger: send it private
<pehden> billybigrigger: the conf should be enough
<billybigrigger> ok
<billybigrigger> you talking main.cf or master.conf?
<pehden> billybigrigger: main is the one i was thinking of
<billybigrigger> k
<billybigrigger> there you go
<pehden> billybigrigger: the relayhost= needs to be localhost or the relayserver domain
<pehden> billybigrigger: if its blank it can fail
<pehden> billybigrigger: i would recommend putting the domain of the mail.thefrozen   etc.
<billybigrigger> ok
<billybigrigger> ill try that
<pehden> billybigrigger: let me know how it goes
<billybigrigger> billybigrigger@timmy:~$ sudo service postfix restart
<billybigrigger>  * Stopping Postfix Mail Transport Agent postfix [fail]
<billybigrigger> postfix: fatal: myhostname and relayhost parameter settings must not be identical: mail.thefrozencanuck.ca
<billybigrigger> oops
<pehden> try localhost
<billybigrigger> nope
<billybigrigger> pehden, postfix will restart but still same error
<pehden> billybigrigger: lpost me the log
<pehden> billybigrigger: prvt post me the log error
<pmatulis> billybigrigger: why don't you post the error you're getting in this channel.  that would be the first step if you want help
<billybigrigger> http://pastebin.com/YxFZy7KN
<billybigrigger> that's me tailing the mail.log as i attempt to send the message
<pmatulis> billybigrigger: what's this shawcable.net stuff?  you're sending to gmail.com
<billybigrigger> whois me
<billybigrigger> :P
<billybigrigger> that's me
<pehden> lol
<billybigrigger> does my ip address have to be included in mynetworks?
<billybigrigger> as anyone not listed there can't use postfix correct?
<pehden> billybigrigger: i gonna compare it to my config real quick
<billybigrigger> there we go
<billybigrigger> added 68.146.143.0/24 to mynetworks and i got no error
<billybigrigger> well that's stupid
<billybigrigger> what if my home ip changes, i won't be able to sendmail through my mailserver?
<billybigrigger> hmm
<pmatulis> billybigrigger: sigh.  pastebin 'postconf -n'
<pehden> you can set up local dns
<billybigrigger> http://pastebin.com/vhUsQL1G
<pehden> you should be able to add localhost or 127.0.0.1
<billybigrigger> seems those mails didnt go through
<billybigrigger> i sent 2 out, 1 to gmail and 1 to hotmail and both havent arrived yet
<billybigrigger> so evolution didn't complain but something is still wrong i think
<pehden> billybigrigger: is reject_unauth_destination  this list for denied or allowed only
<pmatulis> billybigrigger: what is the internal ip address of your server?
<pehden> thats what i was thinking too pmatulis
<billybigrigger> 192.168.143.210
<pmatulis> billybigrigger: so that should be a part of $mynetworks
<pmatulis> billybigrigger: just change the /24 to /16
<billybigrigger> yeah it is
<billybigrigger> 192.168.0.0/16
<billybigrigger> you mean?
<pmatulis> billybigrigger: yes
<billybigrigger> ok
<billybigrigger> should my home ip be included in mynetworks?
<billybigrigger> i removed it and i get relay access denied again
<billybigrigger> also i'm seeing this now after pehden inquired about it
<billybigrigger> Jan  8 21:54:53 timmy postfix/smtpd[29492]: generic_checks: name=reject_unauth_destination
<billybigrigger> Jan  8 21:54:53 timmy postfix/smtpd[29492]: reject_unauth_destination: donzavitz@gmail.com
<pmatulis> billybigrigger: pastebin 'postconf -n' again
<pmatulis> billybigrigger: as well as 'ifconfig'
<billybigrigger> http://pastebin.com/x8QgeBbJ
<billybigrigger> ahhh
<billybigrigger> i see a problem :P
<billybigrigger>           inet addr:69.164.212.132  Bcast:69.164.212.255  Mask:255.255.255.0
<billybigrigger> shouldn't matter though as 192.168.0.0/16 is included in mynetworks
<pmatulis> billybigrigger: i'm going to bed very soon, please comply
<billybigrigger> http://pastebin.com/LhWKGbLm
<billybigrigger> there ya go
<pmatulis> billybigrigger: so the client you're sending the mail from.  it's on what subnet?
<pmatulis> billybigrigger: hello?
<billybigrigger> this is me
<billybigrigger> 68.146.143.85
<billybigrigger> thats my client from my house
<pmatulis> billybigrigger: well you can put just that ip in mynetworks if you want
<pmatulis> billybigrigger: if it will just be you sending mail
<pehden> or set up a dyndns
<billybigrigger> no i have others using the mailserver
<pehden> I would suggest taking off the restr
<bgupta> I am trying to setup a multihomed home server with a public IP on one side and a private on the other. This will be setup with L2TP/IPSec. I have so far got the multihomed networking working, but I have two default routes..  (One to my internal gateway, and one to my public gateway.) I'm wondering from the networking side if two default routes is best practice in this case. (I want VPN clients to both be able to access internal
<bgupta> hosts and use the VPN connection for internet access)
<pehden> *restriction
<pmatulis> billybigrigger: so you have no local clients.  i didn't know that
<pehden> and make authentication be with username and password
<pehden> you could do a proxy for it
<pehden> set up a proxy server for the users to connect to the proxy and they would be local
<pmatulis> billybigrigger: the standard is to set up SMTP AUTH
<pmatulis> billybigrigger: in conjunction with STARTTLS
<billybigrigger> ok
<pehden> pmatulis : I agree
<pmatulis> billybigrigger: basically you need to authenticate the remote users who will be allowed to send mail through your server (relay)
<billybigrigger> ok
<pmatulis> billybigrigger: (SMTP AUTH)
<pmatulis> billybigrigger: but you should encrypt the connection otherwise p/w is in cleartext
<pmatulis> billybigrigger: (STARTTLS - SSL)
<pehden> pmatulis : I would love to do that on my server, do you have in mind an article that would show the conf for this set up
<billybigrigger> ubuntu server guide :)
<pehden> i have the ssl part
<pmatulis> pehden: no, but it's all over the internet
<billybigrigger> which i followed...but maybe i have something screwed up
<billybigrigger> https://help.ubuntu.com/10.10/serverguide/C/postfix.html#postfix-smtp-authentication
<pmatulis> billybigrigger: bingo
<pehden> pmatulis : I use webmin so it makes this a little difficult
<billybigrigger> eeewwwww
<pmatulis> billybigrigger: for now, you can hardcode your IP address in mynetworks
<billybigrigger> :P
<billybigrigger> no i'd rather get this figured out and working correctly
<billybigrigger> i go over the server guide again
<billybigrigger> s/i/i'll
<pmatulis> billybigrigger: mynetworks says who can relay mail, usually it is for your LAN-side clients.  that's why i was confused
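In main.cf terms, the split pmatulis describes looks roughly like this (an illustrative fragment, not the poster's actual config; the SASL plumbing is covered in the server guide linked above):

```text
# /etc/postfix/main.cf (illustrative fragment)
# LAN-side hosts listed in mynetworks may relay without authentication:
mynetworks = 127.0.0.0/8 192.168.0.0/16
# Roaming clients authenticate (SMTP AUTH), over STARTTLS so the
# password isn't sent in cleartext:
smtpd_sasl_auth_enable = yes
smtpd_tls_security_level = may
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```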
<billybigrigger> oh
<billybigrigger> i didn't know that :P good to know
<pmatulis> billybigrigger: ok, good night
<pehden> lol an extreme way would be for all the clients to have dyndns
<pehden> then add their dyndns domain in there
<pehden> and it should check for their ip
<billybigrigger> pmatulis, night, thanks for the help!
<billybigrigger> thats too extreme for me
<pehden> lol i just thought of it lol
<pehden> i wish i knew how to do the force login for sending mail
<pehden> so i didnt have to use webmail
<pehden> i like it cause it remembers my filters
<pehden> billybigrigger: do you know anything about openvpn
<billybigrigger> negative
<pehden> billybigrigger: do you know how to make postfix user login to send
<pienkie> hi guys. I've started a remote server installation & initiated the format of a 1 TB HDD, which will take a *looooooong* time. I'm now off-site & have connected OK (installer@xxxxxx), & I'd like to confirm that the format has completed before continuing with the installation. How can I check this, please?
<billybigrigger> pienkie, should have used screen
<billybigrigger> screen saves the ssh connection for you to reconnect to later
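The screen workflow billybigrigger means, sketched out (`install` is an arbitrary session name):

```shell
#   screen -S install        # start a named session, run the long job inside it
#   (press Ctrl-a, then d)   # detach; the job keeps running on the server
#   screen -ls               # later, from a fresh SSH login: list sessions
#   screen -r install        # reattach and check progress
```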
<pienkie> billybigrigger: i did not know that. unfortunately I'm not that intimately familiar with screen (I should probably swot up on that...)
<billybigrigger> hmmm
<billybigrigger> so now i can send out emails...but i'm getting relay access denied from gmail and hotmail when replying to my test mails...
<billybigrigger> so now i can't receive mail...wtf
<Delerium_> billy: home server?
<billybigrigger> no
<billybigrigger> vps
<Delerium_> K, most ISPs block port 25, that's why I was asking :(
<pehden> I asked that earlier
<Delerium_> I wasn't there...
<pehden> oh
<pehden> so now cant receive
<billybigrigger> yeah
<billybigrigger> haha
<billybigrigger> Jan  8 22:41:47 timmy postfix/smtpd[30537]: generic_checks: name=reject_unauth_destination
<billybigrigger> Jan  8 22:41:47 timmy postfix/smtpd[30537]: reject_unauth_destination: billybigrigger@thefrozencanuck.ca
<Delerium_> Billy: what VPS company are you using?  I'm looking for one in a near future
<billybigrigger> linode
<billybigrigger> i only had mail.thefrozencanuck.ca in mydestination
<Delerium_> thanks, are they in  US ?
<billybigrigger> now i have mail.thefrozencanuck.ca, thefrozencanuck.ca
<billybigrigger> all is working great now
<billybigrigger> yup
<billybigrigger> i couldn't find one in canada
<billybigrigger> well at least none worth spending my money on
<Delerium_> iWeb in Montreal looks good
<billybigrigger> im pretty sure linode comes highly recommended around here
<billybigrigger> i think it was here that suggested it to me
<pehden> I host my own site
<pehden> so i dont pay much
<billybigrigger> pehden, all fine and dandy until you want to run a mailserver :P
<billybigrigger> my ISP wouldn't allow it so i had to go with a vps
<pehden> i use webmail
<billybigrigger> wouldn't allow me to send out smtp on port 25
<pehden> you have to use their mail relay
<Delerium_> same here, just bought a home server with Custom DynDns and my ISP is blocking port 25 (in and out
<Delerium_> so I might by the DynDns Mail Relay or go for a VPS
<pehden> Delerium you in the UsA
<Delerium_> Montreal, Canada
<pehden> oh
<pehden> they should allow 25
<billybigrigger> now if i could only get effin email filters to work in evolution i'd be laughing
<pehden> you will need sieve
<pehden> to be on dovecot
<pehden> or use imap4
<Delerium_> pedhen: using my ISP smtp, I can send mail just fine, but incoming doesn't work
<pehden> thats port 110
<pehden> then not 25
<Delerium_> 110 is Pop, right?
<pehden> they shouldnt be blocking that one
<pehden> 110 pop3
<pehden> and 143 imap
<Delerium_> ok, but I have my own mail server at home. with a MX record, so it's goes to port 25 (if I'm not mistaken)
<pehden> sending mail aka smtp is port 25
<toddnine_> Hi guys.  I need a cluster change event that will allow me to fire system scripts.  Essentially I have 3 nodes of haproxy, and when the primary node fails, a new node needs to be elected and needs to set its ip.  Any systems you can recommend?  I'm looking at ucarp and pacemaker
<Delerium_> pehden: yup, there's a tool at DynDns that will redirect incoming email to another port (like 2025), but it costs 50$ a year..
<pehden> Delerium_:interesting
<pehden> Delerium_: incoming mail is 110 and 143
<Delerium_> pedgen: but a bit costly... I bought my domain name + DynDns Custom with them (around 70$) and now I have to put another 50$
<pehden> i got my domain for 6.95
<Delerium_> pegden: pop3 and imap are used to get mail from a mail server, in my case, I have my own mail server so I need to accept smtp request (I think... still learning on this one) ;)
<pehden> the server will send you the emails on port 110 or 143 to your client and when you send to your server to send to other emails it sends on port 25
<Delerium_> Jet's win: Colts out.
<Delerium_> pehden: I'm the server, not a client ;)
<pehden> outlook is client
<Delerium_> I have my own mail server, so I use Outlook (or wathever tool) to connect to this server, using, yes, port 110 or 143
<pehden> lol
<pehden> whats your web site
<pehden> sorry eatin ribs
<Delerium_> In construction, just started last week... There's only a basic wordpress install
<Delerium_> Nothing much to see
<Delerium_> Still working on my project ;)
<Delerium_> but you can always try: www.elezium.com
<pehden> ill add it to my spider
<Delerium_> Mad Men! :) I see you!!!
<pehden> youll get hits from it every night at midnight central
<Delerium_> I prefer not, since this is a temp domain name; it will change soon and this hostname will only be used for my own test purposes...
<pehden> k
<pehden> i can remove it when i finish eatin
<Delerium_> 98.217.65.X, it's you?
<pehden> my spider has few domains
<pehden> mine 69.*
<pehden> look for my useragent named pehden
<Delerium_> got it.. someone else is poking me! hhe
<pehden> lol
<Delerium_> bha.. this is a test server at home
<pehden> brb
<pehden> why not use both
<pehden> my server is hosting 5  domains
<Delerium_> pehden: I might go with a VPS, not sure yet
<pehden> and with most dns providers you can use a different IP to point to the same domain
<pehden> with a different subdomain
<Delerium_> pehden: gotta go, take care buddy
<pehden> take care
<CppIsWeird> when i run "watch" it says it's not installed, "please install package procps". apt-get says procps is already installed and up to date.
<StrangeCharm> if i just want to execute a single command via ssh, is there any way to specify this from the client? as in ssh --just-run-one-command "more file" user@host
<timo> StrangeCharm: yes, e.g. 'ssh user@host ls'
<timo> StrangeCharm: so just type the command after 'user@host'
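timo's tip, spelled out as a sketch (user and host are placeholders; this touches a remote machine so it is not runnable here). Quoting matters because the command string is handed to the remote shell:

```shell
# Run one command remotely instead of opening a login shell:
ssh user@host uptime
# Quote anything containing spaces or shell metacharacters, or your
# *local* shell will expand them before ssh ever sees them:
ssh user@host 'ls -l /var/log | head -n 5'
```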
<pienkie> hi guys. I think I really 5cr3w3d the pooch on this one. I'm busy with a remote install, & the partitioning (lookup) step hangs at 50%. the main & swap partitions are created & formatted, but I'm having difficulty continuing the installation. is there some way to manually (via CLI) complete that step (i.e. assigning the swap & "/" part), so that the installer can continue?
<pienkie> the only other option left to me, it seems, is to get in my car, drive all the way to the site, & restart the installation
<billybigrigger> hehe
<billybigrigger> should have used screen
<pienkie> hehehhehe
<pienkie> yea....
<billybigrigger> do you have gas in the tank? :P
<pienkie> but how would I use screen @ the `debian-installer`? it's not even in the commandset
<billybigrigger> oh fresh install?
<billybigrigger> i thought you were upgrading or something
<pienkie> yea... was hoping to avoid shooting myself in the foot like this...
<billybigrigger> yeah you kinda need physical access for that
<pienkie> no, no. fresh install, from scratch
<CppIsWeird> when i run "watch" it says it's not installed, "please install package procps". apt-get says procps is already installed and up to date. how can i get the watch command functioning?
<pienkie> phys access: bit silly... I've booted in, set & cleaned the parts & have remote ssh access OK. *theoretically* I was hoping to restart the installer (while retaining an SSH session) & start from the top
<jmarsden> CppIsWeird: Can you run /usr/bin/watch ?  Does that file exist?
<CppIsWeird> oh i renamed it, i remember now. :P
<CppIsWeird> thanks. :-)
<jmarsden> You're welcome :)
 * CppIsWeird renames it back
 * pehden is away: I'm busy
<andriijas> why is my network interface named eth2 and not eth0
<andriijas> how can i rename it eth0?
<oCean> andriijas: see /etc/udev/rules.d/70-persistent-net.rules I think you can change it there
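A sketch of the kind of edit oCean means, done on a temp file for safety; the MAC address below is a placeholder. In the real /etc/udev/rules.d/70-persistent-net.rules you keep the MAC already recorded for your NIC and change only the NAME value:

```shell
# Temp-file stand-in for /etc/udev/rules.d/70-persistent-net.rules:
RULES=$(mktemp)
cat > "$RULES" <<'EOF'
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"
EOF
# After editing the real file, reboot (or reload udev rules) and the
# NIC with that MAC comes up as eth0.
grep 'NAME=' "$RULES"
```

Stale entries for old NICs in that file are the usual reason a fresh interface gets bumped to eth2 in the first place.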
<KidBro> Hi. I have a problem with Apache/PHP on my Ubuntu 8.04 LTS. First please advise if there is a specific channel for this?
<oCean> KidBro: if it's very apache specific, there is #httpd channel you could try
<KidBro> I don't know how specific it is. The problem is that php has stopped working after running updates.
<KidBro> I am not really a power user when it comes to apache. Until now I have installed it and it has worked...
<KidBro> So I don't know where to start :-|
<oCean> The webserver works, but php doesn't?
<andriijas> oCean: thx
<KidBro> yes
<KidBro> I use php for webmail and gallery. When I try to enter any of these the browser will attempt to download the php file.
<oCean> KidBro: create a test.php in your webserver root, like this one http://pastebin.com/hUe1Vhfx, and open the file in the webbrowser. It should show you all php information
 * He4D ist away (Forever Alone!!!1)
<oCean> !afk > He4D|OFF
<ubottu> He4D|OFF, please see my private message
 * He4D|OFF ist away (Forever Alone!!!1)
<oCean> KidBro: in that case it seems php module is not loaded. Check /etc/apache2/mods-enabled if the php5.conf is linked there
<KidBro> Just a moment
<KidBro> It is not listed there
<oCean> KidBro: run "sudo a2enmod php5" then "sudo service apache2 restart"
<KidBro> oCean: No I can do the php.test you suggested. Webmail also works. Gallery not, but there may be other reasons for this
<gobbe> is there any errors on logs?
<oCean> well if the test works, then php is correctly enabled. Has to be in the gallery software then. Maybe logging in /var/log/apache2 can give some clues
<KidBro> Yes, I think that is an issue for a different forum. Roundcube webmail was more critical for me anyway, so thanks a lot! I have one new question though: Is the information listed by the test.php useful for anyone trying to break into the server or can I leave it freely available?
<gobbe> i would be
<gobbe> i wouldn't let it there :)
<KidBro> OK, I will hide it then. But I will keep it somewhere for future use. It seems to provide a great overview of the settings. Thanks for the help both of you. It would have taken me a week to find in forums and helpfiles :-D
<RoyK> oCean: or even apache2ctl graceful
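The fix oCean walked KidBro through, collected into one hedged sequence (these commands touch a live Apache install, so treat this as a sketch rather than something to paste blindly):

```shell
# Is the PHP module linked into mods-enabled?
ls /etc/apache2/mods-enabled/ | grep php5 \
  || sudo a2enmod php5        # creates the php5.load/php5.conf symlinks
# Pick up the change without dropping active connections:
sudo apache2ctl graceful      # or: sudo service apache2 restart
```

The browser-downloads-the-.php-file symptom is the classic sign the module isn't loaded, which is why checking mods-enabled comes first.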
<Fidelix> Hey guys, can please someone help me? My MariaDB is not starting: /usr/bin/mysqladmin: relocation error: /usr/bin/mysqladmin: symbol randominit, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
<RoyK> IIRC mariadb isn't included in ubuntu, is it?
<gobbe> no it isn't
<gobbe> Fidelix: i would turn to mariadb's support in this case
<Fidelix> gobbe, they said i should post on ubuntu forums, because it looks like a dependency problem. Well... i'll somehow figure it out..
<Fidelix> Thanks.
<gobbe> yeah, but it's not included on ubuntu so it's quite hard to support every package out there :)
<gobbe> it looks like somekind of problem with libmysqlclient
<dassouki> what solutions exist for a linux environment that allow for outlook, cal, mail and be able to do scheduling services (just like in ms exchange)
<gobbe> zimbra
<gobbe> for example
<chronos> hey people. someone have problems with bridged networking running ubuntu-server on virtualbox 4 ?
<RoyK> chronos: pastebin /etc/network/interfaces
<RoyK> chronos: I don't have it running for vbox, but it should be the same basics
<gobbe> chronos: and please tell more, what kind of problems you have?
<RoyK> chronos: http://pastebin.com/MuS3rDu7 <-- this is my setup, works well
<yerkin> hi
<yerkin> pleas help me
<gobbe> ask the question
<gobbe> without it it's quite hard
<yerkin> i install ubuntu server 10.10 then i try install yum
<yerkin> write   apt-get install yum
<compdoc> why yum?
<compdoc> apt-get isnt as good, but it works
<yerkin> eroor  e:not found packet yum
<yerkin> sorry for my bad engl
<gobbe> apt-get is great
<gobbe> yerkin: you should use apt-get, not yum
<gobbe> but anyway, there's a package for yum
<gobbe> if you want it, for some silly reason
<yerkin> sorry for the stupid question
<Wolfsherz> if you dont like apt-get try aptitude...
<RoyK> compdoc: bugger off
<RoyK> compdoc: apt-get is the preferred way
<RoyK> compdoc: using yum on debian/ubuntu is nonsense
<compdoc> for ubuntu, it is
<compdoc> no question
<RoyK> and debian
<compdoc> Im not telling anyone to use yum
<RoyK> and, last I checked, this was #ubuntu-server
<RoyK> yerkin: just don't use yum
<RoyK> that's just fedora/redhat gibberish
<RoyK> apt is far better
<RoyK> compdoc: sorry, I misunderstood - forget about it :)
<yerkin> ok
<RoyK> yerkin: migrating from fedora?
<compdoc> Ive had apt-get install, then remove xinetd, and it left all the files behind and left the service running, so I dont think its all that great, but its what I use
<RoyK> compdoc: apt-get remove --purge
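RoyK's point, sketched out: a plain remove keeps conffiles (and sometimes a still-registered service) behind, which matches what compdoc saw with xinetd. These commands need root and dpkg, so this is a fragment rather than a runnable test:

```shell
# Remove a package *and* its configuration files (xinetd as the example):
sudo apt-get remove --purge xinetd
# Packages that were only "removed" earlier are left in state "rc";
# this lists the leftovers a later --purge would clean up:
dpkg -l | awk '/^rc/ {print $2}'
```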
<yerkin> I read a book how to install openvpn. and there use the command         yum repolist
<Wolfsherz> good alternative is aptitude, really.
<Wolfsherz> this book is not written for ubuntu obviously
<RoyK> Wolfsherz: not too much of a difference these days
<Wolfsherz> RoyK: it takes better care of dependencies and of installed packages that become obsolete when removing another package.
<Wolfsherz> yerkin: what book is that?
<yerkin> yes. this book written for centos
<RoyK> Wolfsherz: apt does that too these days
<Wolfsherz> yerkin: centos is a whole other story... if you wish to install something on ubuntu apt-get is just the way to go.
<Wolfsherz> RoyK: may have improved, i didn't check for a while because i'm happy with aptitude
<yerkin> thank you. i undestand
<yerkin> there is a difference between ubuntu and other linux OSes
<RoyK> Wolfsherz: old package state was included, a year or two ago
<RoyK> yerkin: all linux distros differ somehow
<yerkin> who has a book for ubuntu?
<Wolfsherz> yerkin: amazon ;)
<RoyK> yerkin: google :)
<yerkin> )))
<Wolfsherz> yerkin: what country are you from?
<compdoc> openvpn is great
<RoyK> !guide
<ubottu> The Ubuntu server guide may be found at http://help.ubuntu.com/10.04/serverguide/C/
<yerkin> kazahstan
<RoyK> wow - can't remember talking to someone from there :)
<Wolfsherz> yerkin: ok there is a good book on ubuntu-servers, but it is in german. the net is full of english documentation though, and you will probably find some good, like the link ubottu gave.
<RoyK> yerkin: what language do you speak there? something like russian?
<yerkin> yes
<yerkin> rus
<RoyK> k
<RoyK> anyway - the online server guid is quite good if you understand English, which I guess you do, as we're using the language now :P
<RoyK> s/guid/guide/
<yerkin> English level intermediate
<yerkin> I'll read with an interpreter
<yerkin> спасибо ("thank you")
<gobbe> server guide is in fact quite good
<gobbe> much better than it used to be
<RoyK> Приглашаем Вас ("we invite you")
<yerkin> translated by  google,
<RoyK> indeed :)
 * RoyK doesn't know shit about the russian language
<yerkin> i write      sudo apt-get install dhcp3-server
<yerkin> error     e:not found dhcp3-server
<RoyK> apt-get update
<yerkin> my internet is not as fast
<yerkin> tell me how to specify a proxy
<yerkin> <RoyK> tell me how to specify a proxy
<RoyK> for what? apt?
<RoyK> yerkin: second hit on google http://ubuntuforums.org/showthread.php?t=96802
<yerkin> I have a network has a proxy server
<RoyK> see that link
<yerkin> ok
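The forum link RoyK points to boils down to one apt setting. Sketched here on a temp file (proxy host and port are placeholders; the real file would live under /etc/apt/apt.conf.d/):

```shell
# Temp-file stand-in for /etc/apt/apt.conf.d/01proxy:
CONF=$(mktemp)
echo 'Acquire::http::Proxy "http://proxy.example.com:3128/";' > "$CONF"
# Dropped into /etc/apt/apt.conf.d/, this makes apt-get update and
# apt-get install fetch packages through the proxy.
cat "$CONF"
```

Exporting `http_proxy` in the shell also works for a one-off `apt-get` run, but the conf.d file survives reboots and cron jobs.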
<uvirtbot> New bug: #700741 in dovecot (main) "dovecot won't start during boot" [Undecided,New] https://launchpad.net/bugs/700741
<MatBoy> heh, do-release-upgrade does not work, then we do it the hard way... update the sources.list and do a dist-upgrade :)
<RoyK> heh
<RoyK> do-release-upgrade has been working for me
<RoyK> but then, if you try to upgrade from an LTS release to a non-lts-release perhaps you should change /etc/update-manager/release-upgrades
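RoyK's caveat as a sketch, applied to a temp copy rather than the real /etc/update-manager/release-upgrades:

```shell
# Temp-file stand-in for /etc/update-manager/release-upgrades:
CONF=$(mktemp)
printf '[DEFAULT]\nPrompt=lts\n' > "$CONF"      # the default on an LTS install
sed -i 's/^Prompt=lts$/Prompt=normal/' "$CONF"  # offer non-LTS releases too
grep '^Prompt=' "$CONF"
```

With `Prompt=lts`, do-release-upgrade on an LTS box reports no new release until the next LTS exists, which looks a lot like "does not work".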
<MatBoy> RoyK: YEP always worked, but I get some server error
<MatBoy> I hope this works
<MatBoy> ok, the only thing I have is an error while mounting boot... I can skip it and it boots OK
<MatBoy> weird
<MatBoy> no-one an idea ?
<RoyK> MatBoy: I somehow doubt anyone will have an idea unless you post the error given
<MatBoy> RoyK: as I said... I need to check the logs... maybe I can see something there
<MatBoy> RoyK: weird is that it CAN boot
<MatBoy> RoyK: and when I do a mount -a, it's listed again
<niteria> Can I install ubuntu server from within another debian/ubuntu installation?
<niteria> I did that with gentoo once
<niteria> I guess debootstrap is what I'm looking for
<gobbe> haven't ever done it in ubuntu so can't say for sure, but i would say that might be possible; have you tried to google?
<uvirtbot> New bug: #700812 in openldap (main) "dpkg-reconfigure slapd doesn't ask for domain or admin pasword" [Undecided,New] https://launchpad.net/bugs/700812
<RoyK> niteria: a VM or reinstalling the box?
<Plecebo> I'm having trouble getting my raid array to start on reboot. It actually hangs the boot process and I have to press "S" to skip the mounting of my raid device. Once booted all of my devices show as spares in /proc/mdstat I have to "mdadm -S /dev/md0"  then "mdadm -A --scan" at which point the array rebuilds for 6-7 hours. it is pretty consistent so i'm usually pretty reluctant to reboot
<Plecebo> lucid server btw
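A hedged guess at Plecebo's symptom: an array that assembles fine by hand but comes up as bare spares at boot usually means /etc/mdadm/mdadm.conf (and the copy baked into the initramfs) no longer matches the array. A sketch of the usual repair (needs root, so not runnable here):

```shell
# Show the ARRAY line describing the currently-running array:
sudo mdadm --detail --scan
# Append it to the config, replacing any stale ARRAY line by hand,
# then rebuild the initramfs so early boot sees the new definition:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```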
<niteria> RoyK: reinstalling the box I guess
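For niteria: a minimal debootstrap sketch, since that is indeed the Debian/Ubuntu analogue of the Gentoo stage install. The release name, mount point and mirror are examples, and a kernel, bootloader, fstab and networking still have to be set up inside the chroot afterwards:

```shell
sudo apt-get install debootstrap
# Install a base lucid system into an already-mounted target partition:
sudo debootstrap --arch amd64 lucid /mnt/target http://archive.ubuntu.com/ubuntu
# Then finish inside the chroot: kernel, grub, fstab, users.
sudo chroot /mnt/target /bin/bash
```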
<pienkie> hi guys
<charlvn> hi pienkie
<pienkie> how's it going?
<charlvn> good thanks, you?
<charlvn> you from south africa?
<pienkie> originally, yes. residing in NZ @ the moment.
<pienkie> & u?
<RoyK> or new zealand?
<RoyK> heh
 * RoyK was peeking on the whois
<pienkie> hehhe :)
<charlvn> pienkie: i'm also south african, when i saw your nick it looked a little afrikaans to me, that's just why i asked
<RoyK> do people that know Afrikaans understand Dutch?
<pienkie> yea. actually using the wife's machine, so "pienkie" is actually her nick. where u from?
<charlvn> RoyK: the two languages are quite different, i speak both
<RoyK> ok
<pienkie> RoyK: for the most part, yes. may take a bit longer to interperit the dialect
<_Techie_> pienkie, hows your stay in New Zealand?
<charlvn> pienkie: ah i was about to ask if you're a girl :) lol
 * RoyK once met a guy from .nl called Tjalling by his last name, which is quite funny, since one of the nicknames for cannabis in Norway is Tjall
<pienkie> RoyK: most Afrikaans-speaking people can understand Dutch with a bit of effort (we do a bit of Dutch literature in school), but I doubt native Dutch'ers would be able to easily understand Afrikaans
<charlvn> RoyK: yes the dutch would know all about cannabis, but probably not more than most other nations
<RoyK> I know
<charlvn> RoyK: in the rest of the world lots of people smoke pot too, it just isn't legal
<pienkie> _Techie_: NZ's pretty good. welcome change of pace: much more relaxed, & have access to the "1st world" more readily
<RoyK> it's just that noone would be named Tjalling (freely translated to weedie) up here
<charlvn> haha, yeah i'm sure
<pienkie> charlvn: not "legal" in Holland either (legal nowhere in world); just decriminalized
<charlvn> pienkie: whatever :)
<RoyK> which is a good thing, keeps the cops from wasting time on minor stuff
<charlvn> yep totally
<RoyK> cops should spend more time on what's important
<RoyK> some guy smoking weed isn't worse than someone drinking
<charlvn> yeah they should leave ppl smoking the peace pipe
<pienkie> ppl tokin' it up still shouldn't drive though...
<RoyK> not at all
<charlvn> nope not recommended
<pienkie> ... or on serious over-the-counter meds
<RoyK> they're testing people for that up here now
<pienkie> starting here too
<pienkie> but mostly for meth-related jobbo's
<charlvn> as they say, don't drink and drive, smoke and fly
<pienkie> ... though the window
 * RoyK was thinking a little about the flying doctor
<pienkie> *through*
<charlvn> http://retardsunites.blogg.se/images/2010/image20070428030913420_80070584.jpg
<RoyK> first time I heard about the flying doctor, I was laughing my ass off
<pienkie> anyhow...
<pienkie> I'm having a bit of trouble with aptitude. I've set up a basic system (10.04.1 64-bit) with the alternative install image, but now I'm trying to strip out unwanted junk. I've marked a number of areas in aptitude, such as the entire x11 section, but when I want to commit the changes with "g", it indicates that it will proceed with installing a load of useless cr*p I don't want, such as compiz. what am I doing wrong?
<RoyK> why didn't you just install the stripped version?
<charlvn> interesting, i installed xorg+openbox on my mom's old laptop and didn't have any trouble with that
<charlvn> started out with a server install though
<pienkie> RoyK: looooong story. had loads of trouble installing, so had to install from the alternative disk. If I had the system here @ my "lab", I would've gone from the minimal ISO & used my local repo cache, but I had to work on-site, so had to do the best with what I had on hand. & it was getting late & I wanted to finish off before I started making *serious* mistakes, such as destroying the partition with the backups
 * RoyK wouldn't recommend a server install on a laptop
<pienkie> hehehe
<pienkie> yea..
<charlvn> how so?
 * RoyK is more worried about those two 100TB setups at work, while he is at home with a cast on his leg
<pienkie> sooo... aptitude front-end? how can I commit the removals without it installing anything new? (or please refer me to the correct RTFM-section, pls)
<pienkie> found a hacked-together solution: `deborphan --guess-dev | xargs apt-get -y remove --purge` @ http://www.linuxquestions.org/questions/debian-26/how-do-i-get-apt-get-to-completely-uninstall-a-package-237772/#post3889272
<dschuett> anyone know what script is called at the start of ubuntu server that prints out.... "System information as of ..... system load, users logged in, memory usage.... ?
<billybigrigger> thats landscape
<billybigrigger> well whatever it spits out is part of landscape
<dschuett> landscape?
<billybigrigger> err whatever spits it out
<RoyK> dschuett: /etc/update-motd.d/50-landscape-sysinfo
<dschuett> ahhh thanks RoyK!
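For the curious: that banner is stitched together from every executable file in /etc/update-motd.d/, run in numeric order. A sketch of inspecting or muting one piece (needs root to change the real files):

```shell
ls /etc/update-motd.d/                  # snippets, run in numeric order
head -n 3 /etc/update-motd.d/50-landscape-sysinfo
# Clearing the execute bit silently drops just that block from the motd:
sudo chmod -x /etc/update-motd.d/50-landscape-sysinfo
```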
<uvirtbot> New bug: #700846 in samba (main) "Samba install makes wireless flake (deconnect / stutter)" [Undecided,New] https://launchpad.net/bugs/700846
<uvirtbot> New bug: #700850 in mysql-dfsg-5.0 (universe) "package mysql-server-5.0 (not installed) failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/700850
#ubuntu-server 2012-01-02
<designatedqueen> is ubuntu a 64 bit OS?
<patdk-lap> if you want it to be, sure
<designatedqueen> ok cool
<designatedqueen> ubuntu is free right, hence its popularity
<designatedqueen> im looking into building my own computer and/or server in the next few years
<patdk-lap> next few years?
<designatedqueen> well... maybe in even 5 years time
<designatedqueen> all depends how things go
<designatedqueen> im waiting for GPU's to increase in their FLOP rates, up to teraflops i think, quite a few of them
<designatedqueen> which im guessing will come with successive die shrinks
<designatedqueen> so yeh.. im looking at 5 or so years when chips are made on 10-11nm scale, adding on a year or so for prices to drop too
<designatedqueen> does that sound weird or no?
<designatedqueen> i want to utilise nvidias CUDA or something similar for simulations and such
<designatedqueen> anyway.. im guessing ill be using a linux based OS to save on cash, and plus it offers more flexibility im guessing
<designatedqueen> so yeh, my plan was to build a mini supercomputer type thing around 2016-18 or so, but then i realised i might want to host my own website and so i will look into building a server too because by todays bandwidth in the UK, we can get 100mbps fibre optic but in a few years im guessing that up to 1gbps will be available
<designatedqueen> Oh and also im waiting for hard disk space to increase up to the 10s of Tb's
<designatedqueen> lordy sorry, not like anyone is talking
<designatedqueen> welcome back tightwork, i wont flood anymore
<yetisarehere> how does ubuntu have so many of its future versions already named out to the year 2017?
<yetisarehere> oh my bad, thats the date it is supported until
<yetisarehere> ill be back one day, knowing everything about ubuntu, you'll see
<strickly> angelabad in da HOUSE
<pythonirc101> I've a machine with 3 users, one of which used to be in the sudoers list -- for some reason, or maybe I forgot, I can't sudo into root anymore... and it seems like ssh root@localhost is barred
<pythonirc101> any ideas if I can fix this?
<Resistance> how can i force the use of a lesser-version-numbered package if a higher-version-numbered one exists already on the system, without uninstalling the package
<uvirtbot> New bug: #910722 in libpam-ldap (main) "Request support for multiarch in libpam-ldap" [Undecided,New] https://launchpad.net/bugs/910722
<Resistance> never got an answer, so i'll re-ask.  how can i force the use of a lesser-version-numbered package if a higher-version-numbered one exists already on the system, without uninstalling the package
<andol> Resistance: Want to have the newer version installed in parallel, or simply be able to downgrade without having the uninstall/remove be a separate step?
<Resistance> andol:  they're the same program, same version, just slightly different numbering because one's a fork of the original debian package
<Resistance> andol:  i need the newer version to be purged, but without the interruption of the program
<Resistance> downgrading directly would work better
<Resistance> without having uninstall/remove as an option
<andol> Well, not sure if there is a better way, but I have had luck calling "dpkg -i" directly on the deb package in question. In case there are multiple deb files all having to be of the same version you might have to list all those files as arguments to the same dpkg -i run.
<Resistance> andol:  i think i figured it out...
<Resistance> the newer version only exists because its installed... by me doing: aptitude install <package>=<version> repeatedly, for each package and version, it should fix it all.
<andol> Good to hear.
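Resistance's fix in general form (package name and version string are examples; needs a real apt setup, so this is a fragment): asking apt for an explicit version replaces the installed one in a single step, with no separate removal.

```shell
apt-cache policy somepackage             # shows every version apt can see
sudo aptitude install somepackage=1.2-3  # installs exactly that version,
                                         # downgrading in place if needed
# plain apt-get accepts the same syntax: apt-get install somepackage=1.2-3
```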
<Techdude101> squid acl .# is a subdomain of .# (Using HOSTS acl)
<Tm_T> I need to setup a server for vmware virtual machines, but looks like vmware server doesn't install on 10.04, any ideas?
<Tm_T> ah, always good to ask on IRC, as that's when you find solution: http://hmontoliu.blogspot.com/2010/04/installing-vmware-server-202-in-ubuntu.html
<cwillu_at_work> Tm_T, rubber ducky, you're the one
<cwillu_at_work> (http://c2.com/cgi/wiki?RubberDucking)
<Tm_T> cwillu_at_work: indeed
<uvirtbot> New bug: #910838 in quota (main) "Sync quota 4.00-3 (main) from Debian unstable (main)" [Wishlist,New] https://launchpad.net/bugs/910838
<_ruben> Tm_T: vmware server has been eol for ages now though
<Tm_T> _ruben: aware of that but thanks (:
<cwillu_at_work> _ruben, isn't that why we have vm's?  to run unsupported software when necessary? :)
<_ruben> cwillu_at_work: perhaps you do, it surely isn't the reason why we do virtualization :)
<pythonirc101> How do I keep an ssh tunnel alive ? I do have "TCPKeepAlive yes" but it seems to die in a few minutes of inactivity.
<patdk-lap> ssh tunnels don't die, unless you close them
<patdk-lap> most likely it's your routers nat timeout
<SpamapS> patdk-lap: TCPKeepAlive's usually work to keep the nat alive too tho
<SpamapS> pythonirc101: perhaps your router's nat timeout is very low
<patdk-lap> only if the other params for it are adjusted correctly for his nat timeout
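What patdk-lap and SpamapS are describing can be handled client-side: ServerAliveInterval sends keepalives inside the encrypted channel, so NAT table entries stay warm even where bare TCPKeepAlive probes get filtered. Temp-file sketch of the ~/.ssh/config stanza:

```shell
# Temp-file stand-in for ~/.ssh/config:
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
Host *
    # probe through the tunnel every 60s of idle; give up after 3 misses
    ServerAliveInterval 60
    ServerAliveCountMax 3
EOF
cat "$CONF"
```

The same options work per-connection as `ssh -o ServerAliveInterval=60 ...`, which is handy for testing a suspect router before editing the config.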
<magicblaze007> I want the user to be not able to login, but be able to create tunnels. How can I do this? I am trying to use "bash -r". Is there anything better i can use?
<SpamapS> magicblaze007: you can assign a specific command to a particular key in the user's authorized_keys file
<magicblaze007> SpamapS: Do you know if ssh uses the shell when creating a tunnel?
<magicblaze007> SpamapS: if not i'd like to disable shell logins- was looking at bash -r
<SpamapS> magicblaze007: no, the shell is only run when there's no set command, and the user has requested a tty
<magicblaze007> perfect, so how do i tell a login, that it can never get a shell?
<SpamapS> magicblaze007: in ~/.ssh/authorized_keys , for the user's key, you set command="..."
<magicblaze007> I was reading this -- http://linux1.ca/docs/restricted_env.shtml -- which is more about restricting public ssh accounts...not completely stopping logins.
<uvirtbot> New bug: #910899 in samba "Enumerating users over NSS doesn't work with idmap_ad" [Undecided,New] https://launchpad.net/bugs/910899
<SpamapS> magicblaze007: if you want to let users bounce off you, why not try openvpn instead?
<magicblaze007> SpamapS: its a headache to setup and maintain?
<SpamapS> magicblaze007: actually you can do 'ForceCommand' in sshd_config also
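Combining SpamapS's two suggestions into one per-key restriction line (the key material and the permitted forward target are dummies, written to a temp file rather than a real ~/.ssh/authorized_keys). With no-pty and a forced command the account can hold tunnels open but never gets a usable shell; clients connect with `ssh -N` so no command is ever requested:

```shell
# Temp-file stand-in for the user's ~/.ssh/authorized_keys:
AK=$(mktemp)
cat > "$AK" <<'EOF'
command="/bin/false",no-pty,no-X11-forwarding,no-agent-forwarding,permitopen="localhost:8080" ssh-rsa AAAAB3dummykey tunnel-only
EOF
cat "$AK"
```

`permitopen` additionally pins which host:port a -L forward may reach, closing the "bounce anywhere" hole.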
<magicblaze007> + I've to install client installers...whereas in ssh, i can use a ssh library in my code.
<patdk-lap> headache to setup and maintain?
<SpamapS> yeah
<patdk-lap> takes like 30min to setup
<SpamapS> openvpn is crazy simple
<patdk-lap> and like 0 matintaince
<magicblaze007> patdk-lap: its not the server -- its the client i'm talking about
<patdk-lap> heh?
<magicblaze007> I'm coding in python, and dont like executing external commands-- do you know of a python client for openvpn?
<patdk-lap> what is there to maintain on a client?
<SpamapS> magicblaze007: they're peers.
<magicblaze007> the only problem i've with openvpn is my client code.
<SpamapS> magicblaze007: yeah I can understand that.
<SpamapS> magicblaze007: what about just using SSL + stunnel?
<magicblaze007> never looked at stunnel
<SpamapS> magicblaze007: its pretty simple, a lot simpler than sshd
<magicblaze007> is it better to use that compared to doing a "ssh -R"? Right now i'm doing a reverse proxy
<SpamapS> magicblaze007: -R would require you to allocate a port randomly on the server wouldn't it?
<magicblaze007> what does the client need? In my case the client only needed an equivalent of "ssh -R"
<magicblaze007> I'm using a script on the client to post its public key to the server, and get back a port number right now
<SpamapS> magicblaze007: how are you posting that? via clear HTTP?
<magicblaze007> yes
<magicblaze007> if someone else uses that port, i just pick another one
<magicblaze007> SpamapS: how does one do reverse proxying using stunnel?
<SpamapS> magicblaze007: you realize you lose all the security of SSH then... somebody else might be man-in-the-middle and posting their own key.
<jamiemill> Hi - having an apache mod_rewrite problem: requesting an image which should be served directly but is instead failing RewriteCond %{REQUEST_FILENAME} !-f and being passed through to PHP. Any idea why - see rewrite log and vhost conf here: https://gist.github.com/9a83ad3155ee521d44c5
<SpamapS> magicblaze007: you really should use SSL for both bits here
<SpamapS> magicblaze007: SSL would be very easy in python as well.
<jamiemill> or I should say *passing RewriteCond %{REQUEST_FILENAME} !-f when it should fail. the file does exist.
<magicblaze007> SpamapS: how so? I've two levels of encryption -- useless but there -- IIS --> https --> ssh tunnel --> client -- even if the ssh keys are taken, the https keys encrypt everything, isnt it?
<magicblaze007> SpamapS: indeed, but what i dont know is how to patch the https server on the client, make it talk in python using ssl -- then patch it thru to stunnel
<magicblaze007> SpamapS: In case of ssh -- its just ssh -R port :  xx : port
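The shape of magicblaze007's reverse tunnel, spelled out (hosts and ports are examples; needs a reachable server, so a fragment only):

```shell
# The *server* starts listening on 8022; connections there are carried
# back through the tunnel to port 22 on the machine running this command:
ssh -N -o ExitOnForwardFailure=yes -R 8022:localhost:22 tunnel@server.example.com
# ExitOnForwardFailure makes the client exit loudly if 8022 is already
# taken on the server, instead of silently holding a tunnel-less session.
```

That failure mode is exactly the "if someone else uses that port, i just pick another one" case from earlier in the conversation.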
<SpamapS> jamiemill: Do you have other RewriteCond's before that? the first one that matches usually drops through to the rule IIRC.. but its been a while since I did deep rewrite-fu
<jamiemill> SpamapS: no there's nothing more than in that gist linked above
<SpamapS> magicblaze007: right but ssh is designed to have user sessions. You don't really need that. you just want an authenticated tunnel
<patdk-lap> If I remember right, RewriteBase only works in .htaccess
<patdk-lap> and you need to put your rewrites inside a location section, to do the same type thing in a config file
<SpamapS> patdk-lap: +1 , that sounds right to me as well
<patdk-lap> but then, I normally solve it via trial and error :)
<jamiemill> patdk-lap whoops you're right that was just a failed experiment, it's not actually in there
<magicblaze007> SpamapS: if i use ssh for this compared to stunnel, how much speed/efficiency am i losing?
<magicblaze007> SpamapS: Also, usually is stunnel an executable client that creates the port forwards?
<patdk-lap> stunnel just opens a tcp tunnel, is all, an SSL encrypted tcp tunnel
<magicblaze007> ah ok
<magicblaze007> doesnt ssh -R do the same thing...?
<jamiemill> I solved my problem with thanks to #httpd - i had put the rewrite statements outside a directory block in my vhost config and so they don't work without a %{DOCUMENT_ROOT} being prepended on the URI
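A temp-file sketch of the working form jamiemill describes: at vhost (per-server) level the request hasn't been mapped to the filesystem yet, so the file-exists test needs %{DOCUMENT_ROOT} prepended (the rule target here is an example, not jamiemill's actual config):

```shell
# Temp-file stand-in for the rewrite section of a vhost config:
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
RewriteEngine On
# Inside a <Directory> block, %{REQUEST_FILENAME} alone would work; at
# vhost level it is still just the URI, so anchor it to the docroot:
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule ^ /index.php [L]
EOF
cat "$CONF"
```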
<magicblaze007> is there a problem in setting authorized_keys of a login to have these settings: -rw-rw-r-- 1 mytunnel www-data      0 2012-01-02 10:55 authorized_keys ?
<cwillu_at_work> magicblaze007, ssh -R opens up a secure tunnel for arbitrary ssh channels, one of which is a port forward somewhat similar to stunnel in security properties
<cwillu_at_work> magicblaze007, yes, they need to be -rw-r--r--
<cwillu_at_work> and the .ssh folder needs to be rwx------ or stricter
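cwillu_at_work's permission requirements, demonstrated on a throwaway directory so the sketch is harmless to run (the real paths would be the user's home directory):

```shell
# Throwaway stand-in for a user's home directory:
D=$(mktemp -d)
mkdir "$D/.ssh" && touch "$D/.ssh/authorized_keys"
chmod 700 "$D/.ssh"                    # rwx------ on the directory
chmod 644 "$D/.ssh/authorized_keys"    # -rw-r--r--; no group write bit
stat -c '%a %n' "$D/.ssh" "$D/.ssh/authorized_keys"
```

sshd's StrictModes check refuses group-writable authorized_keys files outright, which is why the earlier -rw-rw-r-- setup is a problem beyond mere taste.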
<magicblaze007> the problem is that i've to add/delete keys from the web to this file
<magicblaze007> from another account
<magicblaze007> hence the group access...
<magicblaze007> if a file belongs to user:group and the user is not in the group -- and both of them have read/write accesss -- unless someone is on my machine already as that group -- can they create trouble?
<cwillu_at_work> you should really treat www-data as an insecure account
<cwillu_at_work> "the problem is that i've to add/delete keys from the web to this file" is something I would tread very carefully around
<cwillu_at_work> i.e., find some other way of doing it :p
<cwillu_at_work> you could probably have it owned by the user you want to modify it though, with read access for the group
<cwillu_at_work> (I'd still tend to avoid that sort of thing though
<cwillu_at_work> I have a similar issue with the common way of setting up ftp servers and mail servers:  I'm deeply uncomfortable creating system accounts for agents which have no reason to log into the system
<cwillu_at_work> (and the infrastructure required to switch effective users is quite error-prone)
<mgw> any cobbler pros here today?
<mgw> I'm trying to figure out where $iface.filename comes from (used in dhcp.template)
<SpamapS> mgw: is dhcp.template a pre-seed or a kickstart snippet?
<SpamapS> mgw: oh wait, n/m .. thats for generating the dhcpd configs
<mgw> yes
<mgw> it's in /etc/cobbler
<SpamapS> mgw: right, that stuff all is deeply embedded in code.
<mgw> it's putting this in dhcpd.conf:             filename "gpxe/menu.gpxe";
<mgw> but should be putting pxelinux.0
<mgw> I have this in settings
<mgw> # cobbler uses pxe booting by default, enable this option if you want to
<mgw> # use gpxe
<mgw> use_gpxe: 0
<SpamapS> mgw: yeah maybe thats being overridden in some profile
<mgw> that's my thought, but I can't even find where to set it in the profile
<mgw> (in fact it does have the gpxe stuff in the json profile)
<mgw> for the system, that is
<mgw> "filename": "gpxe/menu.gpxe",
<mgw> spamaps : any ideas by chance?
<Binsh> Hey guise. Im having trouble connecting to postgresql. Ive tried opening the port(5432) in iptables and in ufw, but nothing seems to work. Im not sure how to config iptables, but ufw seems okay. Does iptables "override" ufw, so my ufw rules wont work?
<cwillu_at_work> Binsh, back up a couple steps, and define what you mean by having trouble
<Binsh> mkay
<cwillu_at_work> you can connect via loopback?
<cwillu_at_work> and do things work correctly with the firewall disabled?
<cwillu_at_work> (typically one would do these sorts of tests of a machine other than the production machine, and then simply apply the required settings)
<Binsh> yeah, i can. when i nmap my server-machine, port 5432 doesn't appear, but it does on loopback
<cwillu_at_work> that's moving forward, not backing up
<cwillu_at_work> is postgres set to listen on a remote port?
<Binsh> Yea
<cwillu_at_work> prove it :p
<Binsh> but still, the port should appear as open on nmap?
<Binsh> hehe, 2sec
<Binsh> listen_addresses = 'localhost,192.168.0.195'
<Binsh> port = 5432
<Binsh> in the config file
<Binsh> /etc/postgresql/8.4/main/postgresql.conf
<cwillu_at_work> and from the local machine, can you connect to it at the address 192.168.0.195?
<cwillu_at_work> (and if you just made that change, you've restarted postgresql?)
<Binsh> ive restarted the machine multiple times after that change =\
<Binsh> and postgresql
<cwillu_at_work> and you connect to it at the address 192.168.0.195?
<Binsh> yeah, im sshing to it
<cwillu_at_work> (from the local machine)
<cwillu_at_work> no, I mean postgres
<Binsh> no
<Binsh> thats my problem ;)
<cwillu_at_work> even from the machine itself?
<Binsh> that works
<cwillu_at_work> okay
<cwillu_at_work> I believe you want to use only one mechanism or the other to configure your firewall
<cwillu_at_work> so if you previously used ufw to set up ssh, then that's what you'll want to configure to make postgres work, for instance
<Binsh> Yeah
<cwillu_at_work> looked at /etc/ufw/applications.d/?
<Binsh> Well, ufw was disabled until 1 hour ago
<Binsh> when i knew it existed
<Binsh> hehe
<Binsh> ssh just worked after installing
<cwillu_at_work> and have you looked at /etc/ufw/applications.d/?
<Binsh> hmmm
<Binsh> looking
<Binsh> apache2.2-common  openssh-server  samba
<cwillu_at_work> sense a theme? :p
<Binsh> hehe yeah
<Binsh> :P
<cwillu_at_work> make sure to read "man ufw"
<Binsh> yeah, i did, but i couldnt see it mentioning anything about that
<Binsh> i skipped down to the "allow" etc. sections
<Binsh> hehe
<cwillu_at_work> well, at the bottom it said "see also: ufw-framework"
<cwillu_at_work> which is also a good one to read
<cwillu_at_work> but in essence, I believe you just need to make a new file similar to the contents of an existing one, for postres, and then poke the relevant things
<Binsh> Yeah, im trying atm ;)
<magicblaze007> cwillu_at_work: thanks
<Binsh> cwillu_at_work: hmm, it doesnt seem to work :S
<cwillu_at_work> did you do anything more than making the file?
<Binsh> ahh, i found a typing-error ...
<cwillu_at_work> when in doubt:  you're doing it wrong :)
<Binsh> yeah i restarted ufw oO
<Binsh> hehehe
<Binsh> cwillu_at_work: okay, now im making some progress here. My rules says that 5432 is allowed, but when i nmap from the local machine, it says its closed :S
<Binsh> 5432/tcp closed postgresql
<Binsh> [PostgreSQL]
<Binsh> title=Postgresql server
<Binsh> description=databaseskjit
<Binsh> ports=5432/tcp
<Binsh> root@ubuntu:/etc/ufw/applications.d# sudo ufw status
<Binsh> Status: active
<Binsh> To                         Action      From
<Binsh> --                         ------      ----
<Binsh> 80                         ALLOW       Anywhere
<Binsh> 8080                       ALLOW       Anywhere
<Binsh> 22                         ALLOW       Anywhere
<Binsh> PostgreSQL                 ALLOW       Anywhere
<Binsh> this is giving me a headache oO
<Binsh> have you got any idea bout whats wrong?
<cwillu_at_work> didn't anyone ever teach you to pastebin?
<Binsh> nope, but thx for doing that ;)
 * cwillu_at_work has his doubts, given that you apparently know what a pastebin is :p
<Binsh> Hehe, i use it to post code and stuff to friends, but im not really the most experienced irc user^^
<cwillu_at_work> if it's more than two lines, pastebin
<Binsh> mkay
<cwillu_at_work> among other things, it makes it much easier to look at potentially complicated stuff while still talking about it
<Binsh> Yeah
<cwillu_at_work> what does this say: sudo lsof -iTCP -sTCP:LISTEN
<Binsh> http://pastebin.com/7zfLugm2
<Binsh> here u go
<Binsh> 2sec ill have a look
<Binsh> http://pastebin.com/2uwxxQ2A
<Binsh> hmm strange
<cwillu_at_work> postgres  1446            postgres    3u  IPv6   4736      0t0  TCP localhost:postgresql (LISTEN)
<Binsh> localhost:postgresql
<cwillu_at_work> postgres  1446            postgres    6u  IPv4   4737      0t0  TCP localhost:postgresql (LISTEN)
<Binsh> Yeah
<cwillu_at_work> restart postgres
<cwillu_at_work> are there any other interfaces on the machine?
<Binsh> eth0, 1 and wlan
<Binsh> 0
<cwillu_at_work> if not, you may save some trouble by just using "*" instead of listing the interfaces
<Binsh> eth1 is disabled in bios *
<Binsh> wlan is also disabled
<Binsh> the listen_addresses in postgresql is whats failing
<Binsh> it seems like it dont accept any other alternative than '*'
<Binsh> i tried inserting my ip 192.168.0.195, but it wouldnt work ...
<Binsh> well, at least now it works, thx for ur time cwillu_at_work ;)
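For reference, the working end-state of the conversation above can be sketched as follows (PostgreSQL 8.4 paths are the ones from the log; the ufw profile name mirrors Binsh's paste and is otherwise illustrative):

```
# 1. In /etc/postgresql/8.4/main/postgresql.conf, listen beyond loopback
#    (Binsh ended up needing '*' rather than a specific address list):
#      listen_addresses = '*'
#      port = 5432
sudo service postgresql restart

# 2. Define a ufw application profile, e.g. /etc/ufw/applications.d/postgresql:
#      [PostgreSQL]
#      title=PostgreSQL server
#      description=PostgreSQL database server
#      ports=5432/tcp
sudo ufw app update PostgreSQL
sudo ufw allow PostgreSQL

# 3. Verify it is listening on more than localhost:
sudo lsof -iTCP -sTCP:LISTEN | grep postgres
```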
<uvirtbot> New bug: #910955 in samba (main) "package samba 2:3.5.8~dfsg-1ubuntu2.3 failed to install/upgrade: ErrorMessage: package samba is not ready for configuration  cannot configure (current status `half-installed')" [Undecided,New] https://launchpad.net/bugs/910955
<pythonirc101> cwillu_at_work: would you mind testing my ssh public access account and tell me if there is a problem?
<pythonirc101> I put my own program as shell. (Which logs the user out automatically)
<pythonirc101> sftp/scp doesn't seem to work
<mgw> is this the correct installer preseed line to immediately age the ubuntu user?:
<mgw> d-i preseed/late_command string chage -d 0 ubuntu
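One caveat worth checking against the installation guide (this is an assumption on my part, not something confirmed in the channel): `late_command` runs in the installer environment, so the command usually needs an `in-target` wrapper to act on the installed system:

```
# hedged correction: run chage inside the installed system, not the installer
d-i preseed/late_command string in-target chage -d 0 ubuntu
```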
<aarcane_> When using KVM/QEMU on ubuntu server, can I in any way control the order in which machines boot after system boot?
<aarcane_> Because of memory ballooning, I have a handful of machines which I want to boot in sequence, instead of parallel.
<StevenR> aarcane_: have a look at the startup script, see what it does. It might do them in alphabetical order or something
<cwillu_at_work> pythonirc101, you should probably do some reading on what behaviours ssh supports :p
<cwillu_at_work> (be right with you)
<cwillu_at_work> back
<cwillu_at_work> whois pythonirc101
<qman__> pythonirc101, sftp requires a legit shell to work
<cwillu_at_work> qman__, sftp uses a completely separate subsystem, independent of the shell iirc
<qman__> for that reason, /usr/sbin/nologin exists
<qman__> I don't know at a technical level why it requires one, but it does
<qman__> you can trust me on that one
<qman__> if you set the shell to /bin/false or something that isn't a shell, it won't let you SFTP
<pythonirc101> qman__: I wrote my own shell in python , that kicks the user out
<patdk-lap> how evil
<qman__> that's my point, it's probably not doing whatever it is that /usr/sbin/nologin does to make SFTP happy
<pythonirc101> cwillu_at_work: I did read ssh as much as I could + port forwarding
<patdk-lap> you need to allow execution of sftp from the shell script
<pythonirc101> qman__: it doesn't let sftp work...i tried that...
<qman__> I also learned while setting up jailkit (before openssh had jailing built in) that it requires a minimal environment too
<patdk-lap> I run sftp after sanity checks, and print a messaging saying shell access disabled, sftp only
<pythonirc101> qman__: I like nologin better -- just changed to that
<pythonirc101> how else can I check for problems in an open ssh login?
<qman__> there's probably a way to start sshd with more verbose logging
<cwillu_at_work> qman__, the shell requirement will be in /etc/pam.d/sshd or something referenced therein
<cwillu_at_work> at least, I think :p
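As an aside on the shell-requirement debate above: OpenSSH's built-in `internal-sftp` subsystem sidesteps it entirely, since no login shell is spawned. A sketch of an sftp-only setup in `/etc/ssh/sshd_config` (the `sftponly` group name is illustrative; `ChrootDirectory` targets must be root-owned):

```
Subsystem sftp internal-sftp

Match Group sftponly
    ForceCommand internal-sftp
    ChrootDirectory /home/%u
    AllowTcpForwarding no
    X11Forwarding no
```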
<pythonirc101> what kind of a key is this -- http://paste.pocoo.org/show/529050/ ? Can this be used instead of ssh id_rsa.pub/id_rsa for logins?
<Resistance> i think that's a certificate key
<Resistance> so no
<pythonirc101> Resistance: what's the difference between certificate keys and the ones ssh id_rsa type?
<Resistance> pythonirc101:  different encryption technologies, different formats
<Resistance> pythonirc101:  just generate an SSH key
<Resistance> its easy
<Resistance> also, a certificate key decrypts the info in a certificate
<Resistance> an SSH key doesnt decrypt anything, but works as an identifier
<pythonirc101> Resistance: I've to do it from python
<Resistance> you have to create an SSH key from python?
<pythonirc101> yes
<Resistance> why the heck would you need to do that?
<Resistance> python doesnt have that capability
<pythonirc101> because my application is written in python
<Resistance> your application doesnt need an SSH key if its sitting at the server
<Resistance> your client does to ssh in though
<pythonirc101> my app is a client
<Resistance> Python doesnt have SSH key compatibility, to my knowledge
<Resistance> i'd ask around in the python channel
<pythonirc101> k
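For what it's worth, a Python application can simply shell out to `ssh-keygen` (via `subprocess`), or use a third-party library such as paramiko; neither option was mentioned above, so treat this as a sketch rather than settled advice:

```shell
# Generate an RSA keypair non-interactively; a Python client could run this
# via subprocess, or use a library such as paramiko instead.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -q -f "$keydir/id_rsa"   # -N '' = empty passphrase
cat "$keydir/id_rsa.pub"   # public key, authorized_keys format
```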
<jamiemill> Could do with some help brainstorming somthing here. I have set up a new Ubuntu 11 server with apache/php, and compared to my Ubuntu 10 LTS server (both on AWS EC2), benchmarking a phpinfo() page is 3-4x slower on the new server. Any idea why or how I might find the cause?
<jamiemill> a real site is also slower, but I'm using the phpinfo page to cut my site out of the equation for comparison
<patdk-lap> you sure both ec2 machines are the same?
<patdk-lap> did you benchmark them otherwise, than just php?
<patdk-lap> ec2 is a pretty random source to *expect* a certain speed
<jamiemill> patdk-lap: they are both m1.large instances. The new one I configured via chef, whereas the old one was not. But I have coped the exact php.ini and apache.conf files over from the old server to make sure the config is the same.
<patdk-lap> well, first, make sure both have the same exact php plugins enabled
<patdk-lap> second, try benchmarking the two machine, WITHOUT USING PHP/APACHE as a socalled test
<jamiemill> patdk-lap: I just benchmarked a plain html file and it seems the new server is actually *faster* in this case. so must be php-related
<patdk-lap> ec2 is nice and all, but it's a shared resource, your performance on it could be different from moment to moment :(
<jamiemill> patdk-lap: hmm yes I understand but I have spun up quite a new instance since yesterday and all day it has been consistently the same amount slower
<jamiemill> patdk-lap: are you suggesting benchmarking the machines CPU somehow?
<patdk-lap> dunno, I wouldn't know where the slowness would be
<patdk-lap> cpu, disk
<patdk-lap> it could be php
<patdk-lap> I know for me, enabling the php snmp module slows thing down by a few seconds
<jamiemill> patdk-lap: hmm - the new server has less modules enabled. in fact snmp is on the old (faster) server.
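For comparisons like jamiemill's, ApacheBench gives repeatable per-URL numbers, and a CPU micro-benchmark helps rule out instance variance (hostnames and paths below are placeholders):

```
# isolate PHP by benchmarking a phpinfo() page on each host
ab -n 500 -c 10 http://old-server/info.php
ab -n 500 -c 10 http://new-server/info.php
# compare raw CPU between the two instances, independent of Apache/PHP
openssl speed sha256
```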
#ubuntu-server 2012-01-03
<cwillu_at_work> pythonirc101, you're aware that twisted has a capable ssh server which will avoid some of the issues I was discussing (although will be slightly more complex initially)
<twb> I would be wary of any sshd implementation that isn't OpenSSH
<cwillu_at_work> pythonirc101, the actual implementation of crypto is extremely sensitive; it'd be quite rare to have a pure python implementation, and the security of such an implementation would be very questionable given the indeterminate timing of running that code
<twb> (Having said that, I do run lshd on prisoner desktops, because unlike sshd, it doesn't Depends: on an ssh client.)
<cwillu_at_work> twb, don't confuse the crypto with the protocol
<cwillu_at_work> it's quite straightforward to implement the multiplexing of ssh
<twb> cwillu_at_work: crypto can be correct and still used improperly and thereby result in weakness
<cwillu_at_work> twb, the multiplexing involved is not complicated from a security standpoint, that's the entire reason its there
<twb> All I'm saying is I wouldn't use it without careful consideration
<cwillu_at_work> you might as well say that you wouldn't trust a python web server that was hosted over https
<twb> cwillu_at_work: I wouldn't :-)
<cwillu_at_work> twb, then you're silly :p
<SpamapS> We're still on this? Why on earth would you use an SSH server unless you want crazy flexibility?
<cwillu_at_work> SpamapS, I'm not sure why he's taking that approach either, to be honest
<SpamapS> if you want authenticated, encrypted communication, without the saddle of "users" ... TLS is quite a bit more useful.
<cwillu_at_work> other than some requirement to use existing daemons (in an insecure way I might add)
<cwillu_at_work> given that there are mature implementations of both protocols available though...
<SpamapS> sshd has so many extra things
<SpamapS> so many places to screw up
<cwillu_at_work> most of which really aren't that scary when you're not actually hooking up to real system users
<SpamapS> TLS identifies each side (client, optional), and then encrypts. Very simple.
<OutOfControl> Hello
<cwillu_at_work> again, I don't disagree, it's pretty much guaranteed to be overkill for his purposes
<cwillu_at_work> (and he's been told this in both #python and #twisted)
<SpamapS> :)
<OutOfControl> I am wondering what would be better for a Amazon EC2, Ubuntu Server or Desktop and what is the difference besides Windows Manager in Desktop
 * SpamapS goes back to reading his mountain of unread email
<SpamapS> OutOfControl: um, what?
<SpamapS> OutOfControl: why would you ever run Ubuntu desktop on an amazon instance?
<OutOfControl> I'm not sure
<SpamapS> OutOfControl: We have official AMI's to run server in EC2
<SpamapS> OutOfControl: http://cloud-images.ubuntu.com/
<OutOfControl> Thanks
<SpamapS> OutOfControl: we also are working on making running specific apps in the cloud easier with juju. https://juju.ubuntu.com/
<OutOfControl> Ok
<OutOfControl> SpamapS: The server link on the cloud images is just a loop
<OutOfControl> Should I use the AMD64 AMI for the micro instance?
<tazmania> I have ubuntu-server 10.04 LTS with my nms and radius set up.  Is there a way in which I can create a customized distro based on my set up so that I do not have to go through the painful process of setting up the system?
<twb> http://paste.debian.net/150798/ why isn't APT::Default-Release DWIMming?
<twb> I told it I want oneiric to be D-R but it isn't using 990 priority for all my oneiric entries, only some.
<twb> Wait a minute, it looks like it's not applying them to Translation-en only, which is fucking strange and wrong, but I don't really care
<tazmania> How do I or can I create a customized ubuntu-server distro?
<twb> tazmania: jigdo
<tazmania> twb: jigdo? a new distro?
<smw> tazmania, what are you trying to do with this "custom distro"?
<tazmania> smw: So that I don't have to go through the same painful and lengthy process of setting up the system
<tazmania> smw: and also for the engineers who will take over the project next time
<twb> So you want an SOE
<smw> tazmania, it seems like an easier way would be to just make a script that does all the changes
<tazmania> What's SOE?
<twb> standard operating environment
<twb> Basically all your desktops end up configured the same way, same set of packages installed, same corporate wallpaper, etc.
<twb> https://en.wikipedia.org/wiki/Standard_Operating_Environment
<tazmania> smw: I do have the script which takes care of setting up additional packages.  But I still need to manually set up the mysql and other database
<twb> For the base install you want to read up on preseeding and/or kickstart
<tazmania> twb: yes. all the servers will have exact identical imaage
<twb> For post-install change management you should look at puppet/chef/cfengine
<tazmania> I just wondered how mythbuntu did it - myth-TV package + ubuntu desktop
<twb> Alternatively you can prepare a read-only / ephemeral image, then provision umpteen of them speaking to a persistent SQL backend -- this is basically what Amazon EC2/S3 is
<smw> tazmania, I have made custom disks before. It was awhile ago and I think jigdo did not exist
<twb> AFAIK mythbuntu does it differently
<twb> smw: jigdo has existed for decades
<smw> twb, in that case I was an idiot ;-)
<twb> smw: it was introduced about the same time as DVDs IIRC
<twb> To avoid having to host DVD-sized images on cdimage.debian.org
<smw> tazmania, I just made my own deb repo with everything I needed and put it on the cd.
<twb> smw: you don't even need a CD, just put it on your httpd
<smw> tazmania, then I made a custom preseed file and had some scripts that packaged it all into an iso
<smw> twb, yeah
<tazmania> So I can now do it with jigdo
<smw> twb, for my situation, the CD made more sense
<twb> preseeding is documented in an appendix of the installation guide; apt-get install installation-guide-amd64
<smw> twb, we gave them out and they put ubuntu on refurb computers to be donated to people
<twb> Ah, OK
<smw> twb, our windows licenses only worked for non-profits
<smw> twb, so we used ubuntu to give out functioning systems
<twb> Not freedos ? ;-)
<smw> lol
<tazmania> so I should try out jigdo
<twb> tazmania: preseeding + puppet is more likely what you want
<tazmania> let me check it out
<tazmania> twb: is it two different packages or in one source?
<OutOfControl> How do I update my Ubuntu Server in EC2 without rebuilding it?
<twb> preseeding isn't a package, it's part of the normal installer
<twb> puppet is a separate package that does a different job
<tazmania> ok thanks.
<tazmania> I will check out both jigdo and puppet as recommended
<twb> OutOfControl: I don't know, google for "ubuntu + cloud"
<twb> OutOfControl: it's some trendy buzzword bollocks, so you will need to filter out all the naff noise
<OutOfControl> Ok
<OutOfControl> https://help.ubuntu.com/community/EC2StartersGuide is good
<twb> I can't comment
<twb> I like my OSes to run where I can go kick the hardware when it inevitably shits itself
<twb> https://en.wikipedia.org/wiki/Cloud_computing#Issues
<glaukommatos> So, I've set up an ad-hoc wireless access point with dnsmasq and iptables; I can access things on the same network as the machine that's doing the sharing, but is it possible for me to forward all of the traffic in such a way that it would be effectively the same as having the clients be directly on the network? (For example, I'd like things like iTunes sharing to work and zeroconf to be working.) [Forgive my rather inarticu
<twb> glaukommatos: sure; use brctl as well as hostapd on the AP
<twb> i.e. bridge instead of routing
<glaukommatos> Does hostapd require that I be able to put my wireless device into master mode? I can only do ad-hoc afaik.
<twb> AFAIK if you want an AP and you run linux, you use hostapd.
<twb> I have only used it on managed networks but ISTR ad-hoc support
<twb> What evidence do you have that managed mode is not supported?
<glaukommatos> Oh, well I think managed mode does work- I had assumed that I needed it to run in master mode to run an ap (besides using ad-hoc); and when I tried to do master mode, it didn't seem to be supported by the driver (iwlagn).
<twb> I don't know what master mode is
<twb> hostapd should take care of putting it in the appropriate state
<glaukommatos> Alright, I'll try this out tomorrow. I'd do it now, but I'm in the middle of a very long file transfer on the connection as it is set up currently. Thanks.
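The bridge-instead-of-route setup twb describes can be sketched like this (interface names are assumptions: eth0 = wired uplink, wlan0 = the AP interface; hostapd puts the card in the right mode itself):

```
# In /etc/hostapd/hostapd.conf:
#   interface=wlan0
#   bridge=br0
#   ssid=myap
brctl addbr br0          # from the bridge-utils package
brctl addif br0 eth0
ip link set br0 up
hostapd /etc/hostapd/hostapd.conf
```

With clients bridged onto the LAN segment, link-local traffic such as zeroconf/iTunes sharing works without any forwarding rules.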
<twb> sleepd vs. powernap -- anyone got an opinion?
<SpamapS> twb: I've not used sleepd.. powernap is, IMO, a bit too aggressive in its default configuration for a server. But kirkland has shown me a few settings to back it off so its less dangerous.
<twb> I'm just wondering why it exists since it looks like a new project from lp, when sleepd has existed for a while and AFAICT meets the same requirements
<twb> I wish lp would buy the lp.net domain
<twb> The same way sf.net -> sourceforge.net
<twb> kirkland: why is powernap better than sleepd?
<hallyn> zul: hey, tftpd-hpa upstart job has a typo (needs 'start *on' runlevel and paren) - assume you've noticed?
<tero> What is the most easy way to setup ubuntu server as a NAT router?
<hallyn> tero: you can see the 'simple iptables example' at https://help.ubuntu.com/community/Internet/ConnectionSharing
<hallyn> also, the /opt/bin/share-wlan-eth0 script at http://s3hh.wordpress.com/2011/12/15/simple-netboot-setup/ shows what I always do
<hallyn> that's all i've got, i'm out
<twb> tero: install it, raise both interfaces, add -o upstream -j MASQUERADE to *nat table via /etc/ufw or /etc/iptables/rules.v4
<twb> tero: oh, and turn on ip_forward, of course
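twb's recipe, spelled out as a minimal sketch (interface names are assumptions: eth0 = upstream, eth1 = LAN; persist the rules via /etc/ufw or /etc/iptables/rules.v4 as he notes):

```
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```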
<NullEntity> I have a quick postfix question and every remotely related channel is completely dead. Could someone here help me quick?
<cwillu_at_work> NullEntity, ask a question, don't ask to ask
<SpamapS> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<NullEntity> I'm trying to use Dovecot with Postfix. I can receive with TLS fine, but I can't send. My client tells me "Authentication methods are not supported by server." I'm not sure if it's related, but EHLO fails the first time.  http://pastebin.com/QGaNq80Z
<SpamapS> NullEntity: is there something in /var/log/mail.log at the same time?
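A common cause of "authentication methods are not supported" on send is that Postfix has no SASL backend wired up at all. A hedged sketch of the usual Dovecot SASL glue (paths assume Dovecot 2.x; not confirmed as NullEntity's actual problem):

```
# /etc/postfix/main.cf -- use Dovecot for SMTP AUTH
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes

# /etc/dovecot/conf.d/10-master.conf -- expose the auth socket to Postfix
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0660
    user = postfix
    group = postfix
  }
}
```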
<chiggins> Hey guys. So I just installed bind from tasksel, now how do I add DNS entries to it? I'm new with DNS
<chiggins> And bind
<qman__> the configs are in /etc/bind/
<qman__> be aware that it's very picky about syntax
<qman__> you need to create zone files and then define them in the config
<qman__> if you mess up, it'll show in /var/log/syslog
<chiggins> Cool, thanks for that tip. Sorry that I'm not too dns experienced, but what's a zone?
<twb> https://en.wikipedia.org/wiki/Zonefile
<twb> Personally I can strongly recommend NSD for hosting zonefiles, and unbound for a caching recursive resolver.  I'm not a fan of bind's "all in one" approach
<chiggins> Right now I'm just looking to have a local dns setup so I don't have to remember all of my ip addresses
<twb> In that case I recommend you use dnsmasq for DNS and dhcp
<twb> Then it'll automatically know where everything is because it'll get it straight from the DHCP request
<qman__> I use bind for that purpose, local zones and internet name resolution for the network
<qman__> and yes, dnsmasq is superior in that feature
<qman__> I used bind because that's what I know
<twb> you can do it with isc but it's a PITA
<chiggins> Well, I figured I would just use bind, considering that's usually the result I get when I look up linux dns
<qman__> it's been around a long time
<chiggins> Yeah, from my light research I noticed, heh.
<twb> widespread ≠ goo
<twb> widespread ≠ good
<qman__> yeah, age and popularity have benefits, but that doesn't make it the best option
<qman__> it's not a poor choice, though
<chiggins> So would you two maybe recommend I look into dnsmasq instead?
<twb> chiggins: for your needs, yes
<qman__> you'll probably have a better experience with it
<qman__> but bind isn't nearly as bad as sendmail
<twb> I basically don't trust ISC anything anymore
<qman__> still get nightmares about it
<twb> It's a red flag for me
<twb> Not necessarily enough to shitcan it, e.g. I still run ISC (vixie) cron
<twb> And ntpd
<chiggins> ISC?
<twb> https://en.wikipedia.org/wiki/Internet_Systems_Consortium
<twb> They make reference implementations of internet software
<twb> (As in, software that the internet runs on, not the other way around)
<chiggins> Ah I gotcha
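A minimal dnsmasq setup of the sort recommended above might look like this in `/etc/dnsmasq.conf` (addresses, names, and MACs are illustrative):

```
domain=lan
expand-hosts                                  # resolve short names from /etc/hosts
dhcp-range=192.168.0.50,192.168.0.150,12h     # hand out leases on the LAN
dhcp-host=00:11:22:33:44:55,fileserver,192.168.0.10   # pin a known machine
```

DHCP clients that send a hostname then resolve by name automatically, which is exactly the "don't have to remember IPs" use case.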
<jamespage> morning all
<_ruben> g'day
<ttx> jamespage: morning, and happy new year
<jamespage> ttx: happy new year to you as well!
<jamespage> morning _ruben
<uvirtbot> New bug: #911135 in tftp-hpa (main) "tftp fails to install, broken and untested upstart job" [High,Triaged] https://launchpad.net/bugs/911135
<koolhead11> hi all
 * koolhead11 wonders if people are finally back from holiday hangover :D
<mhubig> Hello together, I need some advice: I want to restructure the IT in the small company I work for into a cloud. Now I have found a possible solution with two XenServer and three GlusterFS Servers. But I really like the idea of the Ubuntu Cloud Infrastructure. Now my question is if the Ubuntu Cloud Infrastructure is capable of running on "just" 5 servers?
<koolhead11> mhubig: i would say yes!! :)
<mhubig> koolhead11: But the docu says I need at least 5 servers for storage …
<koolhead11> mhubig: i would simply suggest you to ask the question in mailing list
<uksysadmin> happy new year y'all
<koolhead11> hola uksysadmin
<uksysadmin> how's koolhead11 ?
<koolhead11> rocking!! :D
<uksysadmin> awesome
<uvirtbot> New bug: #911155 in etckeeper (main) "Etckeeper should not complain about hardlinked files when using bzr VCS" [Undecided,New] https://launchpad.net/bugs/911155
<jamespage> smb: around? seeing lots of kernel oops on the latest precise ec2 images
<smb> jamespage, yes around. can you provide, point to more specific data?
 * smb wonders whether he had been providing a change to make precise bootable at all already... The holiday season somewhat blurred the memory
<smb> Ah, 3.2.0-7.13 already contains the revert from upstream...
<smb> <smb> oops
 * smb takes back the oops... that was for the wrong tab he wrote the line before... dammit cut and paste
<jamespage> smb: https://jenkins.qa.ubuntu.com/view/Precise%20ISO%20Testing%20Dashboard/view/Daily/job/precise-server-ec2-daily/ARCH=i386,REGION=eu-west-1,STORAGE=ebs,label=ubuntu-server-ec2-testing/29/artifact/None/i386/m1.small/ebs/i-2c54ca65/uec2-20120103-0416-9326cb93108049-terminated.console.txt/*view*/
<jamespage> that contains the actual oops message
<jamespage> they all appear to die in the same place :-)
<jamespage> https://jenkins.qa.ubuntu.com/view/Precise%20ISO%20Testing%20Dashboard/view/Daily/job/precise-server-ec2-daily/
<jamespage> although some do boot (odd)
<smb> jamespage, Ok, thanks. Hm, something bad with setting some timer...
<smb> jamespage, Interestingly not all of them seem failed... Err, I seem to get lost trying to move from the dashboard bullets to actual dmesgs... is there a simple path to follow that I miss?
<smb> oh... think I got it
<smb> some build and then console output...
<jamespage> smb: you have to look at the build artifacts for the failed tests
<jamespage> so clicking a red icon; then selecting the last failed built from the LHS of the screen; then click 'Build Artifacts' in the center of the screen and drill down to the console.txt files
<jamespage> (far to many clicks)
<smb> jamespage, Ok, thanks. Got to it now. So at least I found one case where it happened with a 3.4.3 Xen as well. So there is a chance I should be able to get the same on a local test system.
<jamespage> smb: great - do you want me to raise a bug report for this?
<smb> Yes, please. Then we got a place to track things.
<smb> At least it seems to be quite likely... Not like these after a day of doing work things...
<Ursinha> good morning :)
<jamespage> smb: bug 911204
<uvirtbot> Launchpad bug 911204 in ubuntu "precise ec2 images fail to boot with kernel oops" [Undecided,New] https://launchpad.net/bugs/911204
<jamespage> morning Ursinha - happy new year!
<RoyK> happy new year, all :)
<Ursinha> jamespage: happy new year! :)
<zul> heylo
<daff> I am trying to preseed an installation of Ubuntu 10.04.3 but the debian-installer keeps ignoring the pre-answered questions for mirror/country
<daff> how can I debug this?
<daff> after dhcp configuration is complete I keep getting asked to select a mirror
<koolhead17> hola zul
 * koolhead17 wishes every one happy new year!! :)
<koolhead17> daff: are you sure your preseed file is correct
<koolhead17> ?
<koolhead17> !cobbler
<koolhead17> daff: https://help.ubuntu.com/community/Cobbler/Preseed
<Error404NotFound> How can i configure the from email address that cron would use to send any output as email? i know MAILTO, MAILFROM doesn't seem to work.
<Error404NotFound> man 5 crontab only tells about MAILTO
<daff> Error404NotFound: the From: address in the mail headers is based on the user whose crontab is run
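To illustrate daff's point: vixie cron honours MAILTO for the recipient, but the From: header is derived from the crontab's owning user, so changing the sender is done at the MTA, not in the crontab (the script path below is hypothetical):

```
# crontab fragment: MAILTO controls only the recipient of job output
MAILTO=admin@example.com
*/10 * * * * /usr/local/bin/check-disk.sh
```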
<uvirtbot> New bug: #911244 in mailman (main) "prompt to change unmodified conf file /etc/cron.d/mailman during upgrade from oneiric to precise" [Medium,New] https://launchpad.net/bugs/911244
<daff> koolhead17: my preseed file was correct, but the server did not serve it correctly :)
<daff> it was essentially empty, but appeared to have been retrieved correctly (HTTP 200 and all), so d-i used it
<koolhead17> daff: you mean DHCP/cobbler was not correctly configured
<daff> Foreman in my case, yes
<koolhead17> daff: so was there some bug in Foreman?
 * koolhead17 is not aware of Foreman
<uvirtbot> New bug: #911256 in tftp-hpa (main) "tftpd-hpa won't start, typo in upstart 'start on' stanza" [Undecided,Triaged] https://launchpad.net/bugs/911256
<daff> koolhead17: yes and no, a permission problem prevented the provisioning process from running correctly, but that wasn't obvious until I looked into the preseed file that was actually downloaded
<ppetraki> hallyn, I plan to look at that libvirt/lvm bug later today, still catching up on email
<koolhead17> daff: it be cool to file a bug then :)
<hallyn> ppetraki: cool, thanks.
<raubvogel> In nfsv4, how do I define the global root (using terminology from https://help.ubuntu.com/community/NFSv4Howto)?
<pmatulis> raubvogel: it is user/administrator defined.  maybe i misunderstand you
<raubvogel> pmatulis: from what I understand, v4 defines a root dir for NFS. So, if it is /export and you have a /export/moose, you mount it by saying mount fileserver:/moose.
<raubvogel> How do you define /export as the nfs root or is it understood?
<pmatulis> raubvogel: the latter.  you could also just try
<raubvogel> pmatulis: in my case I am trying to mount the share in a windows box
<raubvogel> can mount it as nfsv3 but am having issues mounting as nfsv4
<raubvogel> and am trying to go down on the list of possible issues
<pmatulis> raubvogel: i would first test with a linux client
<pmatulis> raubvogel: i have never used (any version) nfs with 'doze
<raubvogel> pmatulis: if you picture extreme suckyness, you are not even close ;)
<zul> hallyn: you are running the meeting today arent you?
<Daviey> utlemming: will you be chairing?
<hallyn> zul: yes i am
<hallyn> Daviey: utlemming did 2 meetings ago
<pmatulis> raubvogel: obviously the 'doze client needs to support v4.  did you confirm that?
<hallyn> i'll straighten out the order in the mtg page
<utlemming> :)
<Daviey> ahh, cool -thanks hallyn
<raubvogel> Company claims it is the case; I am trying to validate it
<pmatulis> raubvogel: you're using WSFU?
<Daviey> hallyn: so is it SpamapS ?
<raubvogel> nope. Hummingbird/opentext. Because it does kerberos and it seems the WSFU does not
<pmatulis> Daviey: sent you an email
<pmatulis> raubvogel: ah.  haven't used humingbird in donkey's years
<raubvogel> Lucky you
<raubvogel> One of the issues it seems is that they made it be more in the Solaris flavour of nfs than the linux one
<pmatulis> raubvogel: can't you just, like, have a drone roam the office and obliterate the 'doze boxes one by one?
<raubvogel> pmatulis: https://wiki.archlinux.org/index.php/NFSv4 seems to answer my root question
<raubvogel> pmatulis: with a Lee Enfield?
<pmatulis> raubvogel: a little raw, but yeah
<pmatulis> raubvogel: fyi, i have never needed to mount the root
<raubvogel> Me neither. The hummingbird people suggested me to setup the root in the solaris way to see if that will allow me to mount as a v4 instead of v3.
 * pmatulis can't believe that any responsible software maker these days would cater to solaris at the expense of linux
<raubvogel> I agree completely. And I do like Solaris. Oracle on the other hand...
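The "global root" from the NFSv4 howto is just an export marked `fsid=0`. A sketch of `/etc/exports` matching raubvogel's `/export/moose` example (the client address range is an assumption):

```
# fsid=0 marks /export as the NFSv4 pseudo-root, so clients mount
# fileserver:/moose rather than fileserver:/export/moose
/export        192.168.0.0/24(rw,fsid=0,no_subtree_check)
/export/moose  192.168.0.0/24(rw,nohide,no_subtree_check)
```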
<uvirtbot> New bug: #911317 in bacula (main) "bacula crashes while starting backup" [Undecided,New] https://launchpad.net/bugs/911317
<pmatulis> 'xactly , and does oracle even support hp machinery anymore?
<pmatulis> they dropped support for something i seem to recall
<pmatulis> or maybe they just charge a whole lot more
<rbasak> mdeslaur: ping
<koolhead17> lynxman: sir
<mdeslaur> rbasak: yes?
<rbasak> mdeslaur: hey, regarding bug 858878 - it's a bit confusing - I prepared an update for -security but RoAkSoAx spoke to you about doing a PPA build before christmas?
<uvirtbot> Launchpad bug 858878 in cobbler "lack of csrf protection in cobbler-web" [High,Fix released] https://launchpad.net/bugs/858878
<mdeslaur> rbasak: I've subscribed ubuntu-security-sponsors. tyhicks is on community duty this week, so he'll look at it and upload it. Thanks!
<rbasak> mdeslaur: aha, thanks. I was unaware of that team.
<mdeslaur> rbasak: np, thanks!
<RoAkSoAx> adam_g: pinn
<RoAkSoAx> adam_g: bug #908895
<uvirtbot> Launchpad bug 908895 in orchestra "The name 'precise-x86_64' is invalid." [Undecided,New] https://launchpad.net/bugs/908895
<RoAkSoAx> adam_g: are you manually setting the hostname to precise-x86_64 or something? or is it being done automatically?
<adam_g> RoAkSoAx: if im installing a non-juju profile, during installation the hostname 'precise-x86_64' is automatically assigned to the node during installation, or at least attempted. d-i fails as its an invalid hostname
<RoAkSoAx> adam_g: right, though, orchestra doesn't really set the hostname automatically, so there's maybe something done differently in cobbler to automatically assign a hostname
<adam_g> RoAkSoAx: from what i could tell, it was inherited from the profile name. renaming the to something like precise-x86-64 fixed the issue
<RoAkSoAx> adam_g: yeah, that's why I'
<RoAkSoAx> adam_g: right, but the preseed doesn't really set the hostname
<RoAkSoAx> adam_g: but it's set in the kernel parameters
<adam_g> RoAkSoAx: *shrug* im not sure how thats set, but using "_" wherever it is set (during iso import / profile creation) needs to change
<Daviey> RoAkSoAx: hmm
<Daviey> We ARE looking for automatic hostname generation as a fallback
<RoAkSoAx> adam_g: the _ is set by cobbler automatically for the profiles
<RoAkSoAx> adam_g: what should change should be "inheriting" the hostname from the profile name
<adam_g> Daviey: for enlisted machines, i was attempting to boot a new machine that hasn't been enlisted, and choose a profile manually from the boot menu
<Daviey> i wold say that should be done in enlistment/
<RoAkSoAx> adam_g: cause, as far as I can remember, that wasn't happening before, so it might be a new feature
<Daviey> ahhh
<RoAkSoAx> adam_g: or simply replace the _ with - unless, it used to happen that way and now it's broken
<uvirtbot> New bug: #901804 in glance "split glance package into glance-api/glance-registry" [Medium,In progress] https://launchpad.net/bugs/901804
<zul> rbasak: ping are you still worknig on those horizon bugs?
<rbasak> zul: yes - I've done most of the lintian fixes but have been slowed down by the embedded jquery one. While you're asking - what's would be the quickest/easiest way to test the jquery functionality?
<zul> rbasak: because im cleaning things up getting ready for the MIR
<zul> and it was on my list of cleanup
<zul> what jquery functionality?
<rbasak> zul: I can send you what I have so far if you like? jquery functionality as in http://lintian.debian.org/tags/embedded-javascript-library.html - I've depended on libjs-jquery{,-ui,etc} and had just changed the path in the template
<zul> k sure if you want to do that
<rbasak> Is a patch series out of git OK?
<zul> rbasak: yep
<rbasak> zul: http://paste.ubuntu.com/791935/
<rbasak> zul: lintian is bug 899427 and apache reload 905527
<uvirtbot> Launchpad bug 899427 in horizon "not lintian clean" [Medium,Confirmed] https://launchpad.net/bugs/899427
<zul> rbasak: cool thanks
<rbasak> zul: also I have a suspicion that debclean doesn't work due to s/override_dh_auto-clean/override_dh_auto_clean/ in debian/rules maybe but I hadn't looked at it yet
<rbasak> I've not put the bugs in the changelog message yet
<zul> rbasak: k ill take a look
<rbasak> zul: shall I hand it back to you and unassign myself for now? I don't want to hold up your MIR
<zul> rbasak: nah when ill upload it will close the bugs
<rbasak> zul: ok but just to check you don't expect me to be doing anything more with these bugs then? I'm happy to, just want to make sure that we don't both expect the other to be doing it :)
<zul> rbasak: right i dont expect you to be doing anything more with those bugs
<rbasak> zul: cool, thanks
<zul> rbasak: the apache stuff isnt in that patch
<rbasak> zul: no I hadn't got that far
<zul> k
 * rbasak spent a while today reading debian policy to work out what to do about the jquery thing
<SpamapS> rbasak: which jquery thing?
<SpamapS> rbasak: I ran into a pretty nasty problem with Wordpress and jquery being out of sync on what API version to expect.
<rbasak> SpamapS: horizon made lintian complain about http://lintian.debian.org/tags/embedded-javascript-library.html because upstream embeds jquery and related rather than using libjs-jquery et al
<rbasak> SpamapS: ah. I was hoping it would be OK
<SpamapS> rbasak: you're allowed to embed it if the API is different.
<rbasak> SpamapS: well that would make things easier. Just a lintian override instead of all this messing about :)
<SpamapS> rbasak: otherwise you need to replace the embedded file with a symlink to the jquery on disk.
<SpamapS> rbasak: err .. s/on disk/in the package/
<rbasak> ah. I was going to change the template to point to a different URL, but a symlink would be easier and less invasive, thanks :)
<rbasak> although zul is doing it now
<SpamapS> just make sure they're the same API version.
<SpamapS> Its *hell* if they're not.. javascript doesn't exactly fail at compile time. ;)
<zul> js is fun
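The symlink approach SpamapS describes can be sketched as below. This is a throwaway demo with temp files, since horizon's real static-file paths aren't shown in the log; `link_packaged_js` is a hypothetical helper, not part of any package.

```shell
# Sketch of the "replace embedded copy with a symlink" fix. Helper name and
# demo paths are illustrative, not horizon's actual layout.
link_packaged_js() {
    embedded="$1"
    packaged="$2"
    [ -f "$packaged" ] || return 1   # refuse to link at a missing target
    rm -f "$embedded"
    ln -s "$packaged" "$embedded"
}

# Demo against temp files so it runs anywhere:
tmp=$(mktemp -d)
echo 'packaged jquery' > "$tmp/jquery.min.js"
echo 'embedded jquery' > "$tmp/jquery-embedded.js"
link_packaged_js "$tmp/jquery-embedded.js" "$tmp/jquery.min.js"
cat "$tmp/jquery-embedded.js"   # now reads the packaged copy through the link
```

On the real package the target would be the file shipped by libjs-jquery under /usr/share/javascript/jquery/, and per SpamapS's warning the embedded and packaged API versions must match.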
<zul> RoAkSoAx: ping have you actually powered on systems with cobbler?
<uvirtbot> New bug: #906937 in glance "Error installing python-glance 2012.1~e2-0ubuntu2 (dup-of: 907543)" [Undecided,Fix released] https://launchpad.net/bugs/906937
<RoAkSoAx> zul: yes
<RoAkSoAx> zul: we have the openstack qa lab working that way
<zul> RoakSoAx: ok sweet
<RoAkSoAx> zul: I had to write a fence-agent for the power device there though
<zul> RoakSoAx: how do you configure it?
<RoAkSoAx> zul: its already configured isnt it?
<zul> RoakSoAx: no i want to know how you configure cobbler to do it
<RoAkSoAx> zul: sudo cobbler system edit --name <system name> --power-user=<user> --power-pass=<pass> --power-id=<id on power device> --power-address=<power device address> --power-type=sentryswitch_cdu
<RoAkSoAx> fence_cdu -a <IP or host> -n <id on power device> -l <user> -p <pass> -o <action: on|off|status>
<zul> RoakSoAx: cool thanks
<uvirtbot> New bug: #911449 in samba (main) "smbd crashed with SIGABRT in __kernel_vsyscall()" [Undecided,New] https://launchpad.net/bugs/911449
<mgw> anybody here familiar with isc dhcp?
<mgw> especialy dhcrelay
<guntbert> !anyone | mgw applies here too
<ubottu> mgw applies here too: A high percentage of the first questions asked in this channel start with "Does anyone/anybody..." Why not ask your next question (the real one) and find out? See also !details, !gq, and !poll.
<mgw> sorry...
<guntbert> mgw: no need to apology, just ask your question and prepare for patience :)
<mgw> I'm having trouble getting dhcrelay to work... as a test, i have dhcrelay listening on two vlans, relaying to an ip on one of them
<mgw> and dhcpd is listening on that single interface
<mgw> dhcrelay logs that it is relaying the packets, but tcpdump does not see anything
<mgw> only the original inbound request
<mgw> (discover)
<mgw> tick tock... tick tock... patience engaging...
<RoAkSoAx> kirkland: ping?
<kirkland> RoAkSoAx: pong
<RoAkSoAx> kirkland: duuude how' it going
<RoAkSoAx> kirkland: hey did you happen to find a bug in byobu where it goes nuts when running byobu under another byobu session?
<kirkland> RoAkSoAx: hmm, i haven't;  but i don't do that often
<kirkland> RoAkSoAx: is this tmux or screen, or both?
<Troy_> quick question for home server and i want to have the ability to launch a gui such as xfce. should i install ubuntu-server on the machine and then install xubuntu-desktop or install xubuntu and add server software/packages?
<RoAkSoAx> kirkland: i was first running byobu screen, then upgraded and defaulted to tmux. Then by mistake con win 0 i run byobu and it just went completely nuts
<Patrickdk> troy, same diff
<kirkland> RoAkSoAx_: ugh, yeah, i just reproduced that
<RoAkSoAx_> kirkland: yeah it happens when running byobu tmux under a byobu screen
<kirkland> RoAkSoAx_: can you open a high bug?
<RoAkSoAx_> kirkland: sure ;)
<RoAkSoAx_> kirkland: did you reproduce this in precise?
<RoAkSoAx_> kirkland: i had it on lucid
<kirkland> RoAkSoAx_: yep
<kirkland> RoAkSoAx_: i just need to guard against an infinite loop there
<RoAkSoAx_> kirkland: happy new year btw!!
<kirkland> RoAkSoAx: you too, buddy!
<kirkland> RoAkSoAx: good to hear from you
<RoAkSoAx> kirkland: indeed!! how's downtown austin treating ya?
<kirkland> RoAkSoAx: loving it, actually
<kirkland> kirkland: it's fun being downtown
<RoAkSoAx> kirkland: i bet! I actually was looking into moving to austin but I think I'm gonna stay here for a little longer
<kirkland> RoAkSoAx: aw, come on over man
<kirkland> RoAkSoAx: you'd have a blast over here :-)
<osmosis> to use python pip or not to use python pip
<RoAkSoAx> kirkland yeah i'd love too but if i start studying again its gonna be more cost efficient fpr me here
<hallyn> kirkland: i'll need to check out your hq one day!
<kirkland> hallyn: yeah, come on down sometime
<kirkland> hallyn: do you find yourself downtown ever?
<hallyn> kirkland: not too much lately, unless we need somethig from that whole foods or feel an urge to eat at chez nous
<hallyn> but maybe i'll find a reason in february
<kirkland> hallyn: well drop by, we're right next to that whole foods
<hallyn> will do :)
<mgw> roaksoax, spamaps, et al... you always have good answers... do you have any ideas about the question i posed earlier regarding dhcrelay?
<hallyn> kirkland: oh hey, has anyone ever signed up to write the userspace ecryptfs file reader?
<kirkland> hallyn: you!
<kirkland> hallyn: would love to see that
<hallyn> maybe.  would like it on my phone.
<hallyn> but, i also have a compiz plugin or two to write :)
<hallyn> (shiny)
<pythonirc101> Anyone knows why my scp is getting stuck --  http://pbin.be/show/344/ -- It gets stuck at 0%?
<aarcane_> is there a "proper" way to use vmbuilder on natty to build an oneiric VM ?
<SpamapS> aarcane_: are you seeing a failure when trying to do so?
<aarcane_> SpamapS, I saw several failures, and managed to work around them.
<SpamapS> aarcane_: note that the cloud images should boot fine w/ kvm.. https://cloud-images.ubuntu.com/
#ubuntu-server 2012-01-04
<uvirtbot> New bug: #911539 in samba (main) "package samba 2:3.5.8~dfsg-1ubuntu2.3 failed to install/upgrade: ErrorMessage: package samba is not ready for configuration  cannot configure (current status `half-installed')" [Undecided,New] https://launchpad.net/bugs/911539
<smoser> SpamapS, around ?
<smoser> well, if you see this, see in #juju . looking for "how do i do the orchestra provider"
<samba35> i am @ home with 2 pc ,1st pc is utm (firewall/vpn/av.etc) and 2nd pc is linux with 3 interface ,i want dmz traffic on eth1 and lan traffic on eth0 how do i configure this usage scenario
<twb> UTM?
<twb> If you only have two hosts, where is the DMZ?
<samba35> sorry
<samba35> utm is unified threat mangment
<samba35> which provide anti-virus ,anti-spam, ips/ids ,firewall ,vpn ,routing ,proxy and more thing in one box
<twb> OK, an appliance
<samba35> yes
<samba35> u can get this on software based also /virtual also
<samba35> my utm network is 192.168.2.0/24 and dmz is 192.168.3.0/24
<samba35> lan is on utm network
<twb> Why do you have so many networks when you only have two hosts?
<samba35> i want to use web server ,mail server ,ftp server with linux on dmz
<twb> In English, the space goes *after* the comma.
<samba35> and i also want to host windows xp and server on ubuntu using kvm but that is later
<samba35> sorry for that i am not a native english speaker
<samba35> please forgive for that
<twb> No problem.
<samba35> if i want to access ssh or ftp or web server from outside (from differant location ) it should use dmz and when i am @ home it should use lan /utm
<twb> http://paste.debian.net/150904/ normally your network would look like this
<twb> Routing, DHCP, DNS, firewall, QoS, NTP, are done on the bastion.
<twb> Services like HTTP, IMAP, are done on servers hanging off the DMZ
<twb> And internal-only server like Samba would run in the LAN, along with the desktops
<twb> Now, it sounds to me like this "utm" is your bastion.  Is that right?
<samba35> yes
<twb> OK.
<samba35> but how do i configure network priority
<twb> I don't know what that means
<samba35> ok ,let me try to explain again
<samba35> say ,if u want to access my ssh ubuntu how u will ?
<samba35> i have nat rule configure for that
<twb> That configuration is done on the bastion
<samba35> ok
<twb> As you say, it is done with DNAT rules.
<samba35> yes
<samba35> can i send you pm so we will share more details
<twb> No.  Discussion should take place in public.
<samba35> ok
<twb> ubottu: /msg
<ubottu> Please ask your questions in the channel so that other people can help you, benefit from your questions and answers, and ensure that you're not getting bad advice. Please note that some people find it rude to be sent a PM without being asked for permission to do so first.
<samba35> what is best tool in ubuntu server to mange multiple nics
<twb> ip
<samba35> any gui tool
<twb> I don't approve of GUIs
<samba35> do you have any idea on kvm networking
<twb> It's the same as normal networking
<samba35> if i use bridge do i have to use same ip address on ubuntu bridge and guest nic ?
<twb> Just use kvm's built-in userspace networking
<samba35> ok
<airtonix> what are the compelling reasons to use nscd instead of bind?
<twb> nscd is not nsd
<twb> nscd should never be used
<twb> nsd3 serves zonefiles; bind does both that and caching recursive resolving -- two DNS roles that IMO are unrelated and should be kept separate.
<uvirtbot> New bug: #911584 in samba (main) "smbd crashed with SIGABRT" [Undecided,New] https://launchpad.net/bugs/911584
<samba35> i have uninstalled apache2 still there is apache2 is been use (check with lsof ) how do i check which package is using apache2
<twb> Pastebin the lsof output
<samba35> ok
<samba35> lsof -i 4 ? or
<twb> What I mean is: what is your evidence, that apache2 is in use?
<samba35> ok
<samba35> http://pastebin.com/0Aby1mCJ
<twb> OK, what is your evidence that apache2 was uninstalled?
<samba35> i tryed apt-get remove apache2 /httpd
<twb> "sudo apt-get remove apache2" ?
<samba35> apt-get purge apache2
<samba35> yes 0 package remove
<twb> OK, your problem is that apache2 is a metapackage
<twb> The "real" apache2 package is called apache2-mpm-worker, or one of the other apache2-*-worker packages
<koolhead11> hi all
<twb> Er, one of the other apache2-mpm-* packages
<samba35> ic
<samba35> how do i check /search all packages with apache
<samba35> apt-file ?
<twb> Try apt-get autoremove, otherwise try manually removing packages
<twb> samba35: apt-cache search
<samba35> it will search on internet right ,i want all locallly
<twb> Wrong.
<samba35> ok
<samba35> twb, thanks got it remove
<samba35> dpkg --get-selections |grep -i apache2
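The hunt above can be condensed into one pipeline. The sample selections below are made up so the sketch runs anywhere; on a real box you would feed it `dpkg --get-selections` instead.

```shell
# apache2 is only a metapackage: the daemon really lives in apache2-mpm-*
# plus apache2.2-common, so those must be purged too. Simulated selections:
printf '%s\n' \
    'apache2            install' \
    'apache2-mpm-worker install' \
    'apache2.2-common   install' \
    'vim                install' |
grep -i apache2

# Real-system equivalent:
#   dpkg --get-selections | grep -i apache2
#   sudo apt-get purge apache2 apache2-mpm-worker apache2.2-common
#   sudo apt-get autoremove
```

(The exact apache2-* package names vary with the chosen MPM; list before purging.)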
<koolhead17> hi all
<Tm_T> good day
<uvirtbot> New bug: #911680 in samba (main) "smbd crashed with SIGABRT in store_inheritance_attributes()" [Medium,New] https://launchpad.net/bugs/911680
<_johnny> hi, i'm trying to use curl on a non-standard http server which gives incorrect (or no) content-length. i've tried setting NOBODY to true, "Expected:" to empty array, but i still get "transfer closed with ... bytes remaining to read" (or blocked time out if NOBODY = true). any ideas?
<uksysadmin> _johnny, when you say non-standard... what is it you've got set up and what is the curl command line used?
<_johnny> uksysadmin: i have not set it up, and it's over SSL so debugging was a bit tricky at first. it turns out there's a IGNORE_CONTENT_LENGTH which solved it :)
<uksysadmin> glad I could help! ;-) lol
<_johnny> ;)
<Ursinha> good morning :)
<zul> morning
<uksysadmin> hi zul, ttx suggested asking you about weekly releases of openstack on precise... is there anything I need to add to my apt configs or is this something that happens weekly in precise from the standard repos?
<zul> no there isnt anything you need to do extra they will just magically appear
<ttx> zul: did we converge on packaging ?
<ttx> I was wondering if there were still differences
<zul> still a bit of convergence will be synching everything up today
<uksysadmin> thanks zul and ttx
<uvirtbot> New bug: #911747 in openssh (main) "[Feature] Add AuthorizedKeysCommand to OpenSSH" [Undecided,New] https://launchpad.net/bugs/911747
<uvirtbot> New bug: #911753 in openssh (main) "Wrong directive in config file cause server to crash" [Undecided,New] https://launchpad.net/bugs/911753
<raubvogel> I setup a machine to do kerberos authentication and now when I try to change the password of a local user it wants its kerberos password. How come?
<pmatulis> raubvogel: prolly b/c of pam.  pastebin /etc/pam.d/common-password
<raubvogel> pmatulis: the "minumum_uid=1000" in line 27 perhaps? http://pastebin.com/nWLs2rcN
<pmatulis> raubvogel: right, see http://manpages.ubuntu.com/manpages/lucid/man5/pam_krb5.5.html
<raubvogel> Yeah. Yesterday we switched to kerberos. I bet we will still find other fun issues down the road...
<philpem> Hi all. I've set up a machine with 11.10 Server, running headless and I'd like to use it as a virtual machine host. I'm thinking Orchestra might be the easiest way to set up the VMs, but how would I go about using this with KVM or Virtualbox?
<philpem> Also, are there any admin tool (command line or web) which would make it easier to set up the VMs, create/delete them, start/stop, and so on? The KVM command line tools seem a bit rough around the edges.
<pmatulis> !info virt-manager
<ubottu> virt-manager (source: virt-manager): desktop application for managing virtual machines. In component main, is optional. Version 0.9.0-1ubuntu3 (oneiric), package size 330 kB, installed size 2960 kB
<pmatulis> philpem: ⤴
<philpem> pmatulis, I get the impression that it's desktop only though, and not really intended for servers.
<philpem> As in, if I install it I may well get X11 and half of GNOME or KDE thrown in for good measure
<pmatulis> philpem: correct, you run it on a desktop and connect to the server
<philpem> Well that sounds easy enough. Is there a HOWTO for what I need to set up on the server?
<pmatulis> philpem: but you can install it on the server as well (giving you access to local iso files and the host's bridge, if necessary) and it doesn't drag in the kitchen sink
<pmatulis> (i.e. ssh -Y server virt-manager)
<pmatulis> philpem: re 'howto', virt-manager is independent of the server.  maybe i misunderstand your question
<philpem> I just need to know if there's anything that needs installing or setting up on the server to allow virt-manager to connect
<pmatulis> philpem: just standard kvm & libvirt
<pmatulis> philpem: do you have your kvm host set up yet?
<philpem> i installed the virtualisation stuff as part of tasksel
<pmatulis> there you go
<philpem> hmm, i get the impression Orchestra is going to be a non-starter.
<philpem> "Network interface does not support PXE"
<kirkland> philpem: if you want to install a hundred or more physical machines in parallel, in an consistent, automated fashion, you want Orchestra
<kirkland> philpem: if you want "simple" KVM creation/deletion from a GUI, virt-manager is the best there is
<kirkland> philpem: if you want something much more graphical, have a look at VirtualBox
<philpem> I just want an easy way to create a bunch of similar-or-identical build server VMs, LAMP servers and so on for different things. perhaps virtualbox would be a better option.
<philpem> and if virtualbox + phpvirtualbox + orchestra is that solution, then great.
<pmatulis> philpem: use virt-manager
<philpem> pmatulis, and just install from ISO instead of using Orchestra? (seeing as virt-manager + KVM doesn't seem to allow PXE, and Orchestra appears to require PXE to install)
<pmatulis> philpem: i install from pxe all the time with virt-manager
<pmatulis> philpem: i think you're over-engineering yourself
<philpem> very possibly :)
<pmatulis> !info kvm-pxe | philpem
<ubottu> philpem: kvm-pxe (source: etherboot): PXE ROM's for KVM. In component universe, is optional. Version 5.4.4-7ubuntu3 (oneiric), package size 124 kB, installed size 196 kB
<uvirtbot> New bug: #911812 in facter (main) "processor fact does not handle arm, others" [Undecided,New] https://launchpad.net/bugs/911812
<rbasak> SpamapS: around?
<SpamapS> rbasak: yes but I will be disappearing to take son to daycare in about 20 minutes
<rbasak> OK, I'll be quick
<rbasak> just want to check on something
<rbasak> bug 858860, cobbler users.digest world-readable, already fixed in precise
<uvirtbot> Launchpad bug 858860 in cobbler "weak default configured permissions on /etc/cobbler/users.digest" [High,Fix released] https://launchpad.net/bugs/858860
<rbasak> I included it in a security fix reviewed by the security team in bug 858878, Tyler would like the fix to also apply to the upgrade path
<uvirtbot> Launchpad bug 858878 in cobbler "lack of csrf protection in cobbler-web" [High,Fix released] https://launchpad.net/bugs/858878
<rbasak> I figure that the fix needs to go into precise too, so it would make sense to do that first
<rbasak> I've just written http://paste.ubuntu.com/792803/ (untested). Is that sensible to push for precise, or would there be a better way of doing it?
<rbasak> Then I figure I can do the same for security but just change the version it compares against
<SpamapS> rbasak: sorry got pulled away.. reading
<rbasak> ok. the paste is a new debian/cobbler.preinst file
<SpamapS> rbasak: that looks good... assuming the version mentioned is the absolute last version that had the problem. Might be better to do "lt" the version that fixed it.
<rbasak> thanks, lt sounds better, I'll do that
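The resulting preinst guard might be shaped like this. `version_lt` here is a portable stand-in for `dpkg --compare-versions ... lt`, and the fixed-version string and mode are placeholders, not the values from the actual upload.

```shell
# Sketch of the cobbler.preinst logic: tighten users.digest permissions only
# when upgrading from a version older than the one that shipped the fix.
# FIXED_VERSION is a placeholder; the real script would compare with
#   dpkg --compare-versions "$2" lt "$FIXED_VERSION"
FIXED_VERSION="2.2.2-0ubuntu2"

version_lt() {
    # true if $1 sorts strictly before $2 (GNU sort -V version ordering)
    [ "$1" != "$2" ] && \
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

maybe_fix_perms() {
    # $1 = version being upgraded from (empty on a fresh install)
    if [ -n "$1" ] && version_lt "$1" "$FIXED_VERSION"; then
        echo "would run: chmod 0640 /etc/cobbler/users.digest"
    fi
}

maybe_fix_perms "2.2.1-0ubuntu1"
```

Using "lt" against the first fixed version (rather than "le" against the last broken one) keeps the guard correct even if an intermediate broken version was skipped.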
<philpem> hm.. DHCP doesn't appear to be working for the virtual machines.
<philpem> I can see BOOTP/DHCP packets on virbr1, but there's no DHCP server response
<philpem> which DHCP server does Orchestra use? I can see it uses Dnsmasq, but it doesn't appear to configure it...?
<philpem> ok, it uses dnsmasq, but Cobbler sets it up a bit differently (obliterates the config and replaces it with its own). need to edit Cobbler's template so that it doesn't stomp all over my LAN DHCP server
<philpem> it did, however, boot very nicely into the VM :)
<philpem> and autoconfig works brilliantly
<philpem> /etc/cobbler/dnsmasq.template -- add "interface=virbr1" near the top
<andreserl> philpem, yeah in order to get cobbler's DNS/DHCP features working with orchestra, you should have been asked that question upon installation
<philpem> I was, but it started listening on eth0 and confused the hell out of my network server!
<philpem> after changing the template file to listen on virbr1, it works fine
<philpem> next step will be to add a 2nd ethernet interface and attach it to the LAN
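philpem's template change amounts to something like the following (the interface name comes from his libvirt setup; yours may differ):

```
# /etc/cobbler/dnsmasq.template -- near the top:
# serve DHCP/DNS only on the libvirt bridge, so cobbler's dnsmasq
# doesn't stomp on the LAN's existing DHCP server
interface=virbr1
```

Remember that cobbler rewrites the live dnsmasq config from this template on every `cobbler sync`, so edits to the generated file alone won't stick.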
<zul> ttx: http://paste.ubuntu.com/792852/
<ttx> zul: did you add nova-rootwrap to your sudoers ?
<ttx> nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap
<ttx> see http://wiki.openstack.org/Packager/Rootwrap
<zul> ttx: yep i did
<ttx> zul: well, apparently running "sudo nova-rootwrap iptables-save -t filter" as the nova user asks for a password on your system
<ttx> while it shouldn't, if that line is in your sudoers file
<ttx> looks like a bug in your setup, or in sudo :p
<ttx> (worked well back when I tried, though)
<RoAkSoAx> philpem: could you please file a bug report at https://bugs.launchpad.net/ubuntu/+source/orchestra/+filebug describing your issues and the process you followed to change the interface so it gets documented and I can make it work automatically (or request the interface to use on configuration)
<philpem> RoAkSoAx, certainly.
<RoAkSoAx> philpem: awesome, thank you!
<zul> ttx: damn it i know whats wrong
<zul> ttx: i suck
<philpem> Hmm. It installed without asking me what I wanted to set as a username and root password. Nice.
<RoAkSoAx> philpem: ubuntu/ubuntu and we are aware of that and we will work on it once we decide how is best ;)
<philpem> :)
<philpem> what I might do is rig up a provisioning script... create VM, wait for it to call in, SSH in as ubuntu/ubuntu, then sudo, create user accounts depending on the machine's role, disable the ubuntu user, set up PPAs and repositories and log out.
<philpem> though I suspect most of that can be done with.. what are they called, kickstart scripts?
<philpem> instructions to the installer along the lines of "I want you to do this, don't ask me, just do it."
<RoAkSoAx> philpem: in debian/ubuntu we use preseed's and can probably be done with late_commands
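A late_command preseed line along those lines could look like this; the user names and commands are purely illustrative:

```
# Illustrative d-i preseed fragment: do the post-install account setup in
# the installer itself rather than over SSH afterwards.
d-i preseed/late_command string \
    in-target adduser --disabled-password --gecos "" builduser ; \
    in-target passwd -l ubuntu
```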
<RoAkSoAx> adam_g: let me know when you are around
<GrueMaster> Daviey, rbasak:  Just saw server meeting notes re: ARM kernel issues.  I am already doing weekly precise (armel &armhf) installs on Panda and running the full QRT kernel test suite.  I am setting it up to run more autonomously in my jenkins setup.  If you have any questions or other tests, let me know.
<GrueMaster> I am also ramping up to automate as many of the tests that I ran in O for server as possible.
<rbasak> thanks GrueMaster
<rbasak> arosales: ^^
<arosales> GrueMaster: good stuff, thanks
<Daviey> neat
<philpem> RoAkSoAx, https://bugs.launchpad.net/ubuntu/+source/orchestra/+bug/911873
<uvirtbot> Launchpad bug 911873 in orchestra "No way to manually restrict DHCP to one interface" [Undecided,New]
<philpem> I seem to be doing rather a lot with Launchpad these days... two bugs filed in a week (one kernel build system, now a feature-req for Orchestra) and my first PPA... :)
<philpem> I really should port those fixed Debian linux-GPIB packages to Ubuntu... hm.
<RoAkSoAx> philpem thanks will work on it thus week :)
<philpem> RoAkSoAx, looks like virt-manager may have some issues with the way Orchestra works... dnsmasq is getting very upset. going to try a reboot and see if it behaves
<philpem> more specifically, virt-manager is spitting out dnsmasq errors when i create network interfaces
<philpem> Error starting network 'PXEReload': internal error Child process (dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/PXEReload.pid --conf-file= --except-interface lo --dhcp-option=3 --listen-address 192.168.100.1) status unexpected: exit status 2
<philpem> drat.
<RoAkSoAx> philpem: uhmmm but if you are running dnsmasq with orchestra + libvirt with dnsmasq it will most likely cause issues
<philpem> thing is libvirt SHOULDN'T be running dnsmasq at all
<RoAkSoAx> philpem: you might wanna take a look to: https://code.launchpad.net/~smoser/+junk/cobbler-devenv
<philpem> DHCP is disabled in virt-manager. Orchestra should have it enabled so I can do PXE reinstalls, but it most definitely shouldn't be enabled for that interface
<RoAkSoAx> philpem: try sudo cobbler sync
<philpem> just seems to crash when I run it
<philpem> if I ctrl-C it, I get "httpd does not appear to be running and proxying cobbler"
<RoAkSoAx> philpem: sudo cobbler sync? can you pastebin the output
<philpem> RoAkSoAx, no output -- http://paste.ubuntu.com/792888/
<philpem> the KVM virtual network interface isn't even up... huh?!
<RoAkSoAx> philpem: that's pretty weird. can you pastebin /var/log/cobbler/cobbler.log
<philpem> http://paste.ubuntu.com/792894/
<RoAkSoAx> philpem: seems that the change made in dnsmasq.template is not correct
<RoAkSoAx> philpem: as for the output it seems that it's a dnsmasq issue that apparently might have killed cobbler
<philpem> ugh
<RoAkSoAx> or there's an invalid parameter being passed to dnsmasq
<philpem> and the changes cobbler made might have killed the virtual network stuff
 * philpem goes to find a keyboard and a display
<RoAkSoAx> philpem: cobbler won't really mess up with any virtual network stuff
<RoAkSoAx> philpem: it will start/stop dnsmasq but if dnsmasq fails to start, then cobbler should continue to run successfully
<RoAkSoAx> philpem: but in the log,there's a dnsmasq output for -h, which means there's a invalid/not known option b eing passed to it that causes dnsmasq to output the help
<RoAkSoAx> the help menu
<philpem> ok, this is really really weird.
<philpem> I just removed that line from dnsmasq.conf and reverted the template (knew there was a reason I kept /etc under version control)
<philpem> virt-manager reports all VM interfaces as down, ifconfig says they don't even EXIST.
<philpem> try and bring up PXENetwork and I get:
<philpem> Error starting network 'PXEReload': internal error Child process (dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/PXEReload.pid --conf-file= --except-interface lo --dhcp-option=3 --listen-address 192.168.100.1) status unexpected: exit status 2
<RoAkSoAx> philpem: that's maybe libvirt?
<philpem> oh this IS interesting. 'sudo killall dnsmasq' and it works fine.
<philpem> this would seem to explain the issue -- https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/500484
<uvirtbot> Launchpad bug 500484 in libvirt "libvirt conflicts with existing dnsmasq installation (dup-of: 231060)" [Low,Invalid]
<uvirtbot> Launchpad bug 231060 in libvirt "packages dnsmasq and libvirt-bin conflict with each other" [Low,In progress]
<philpem> now how to fix it?!
<RoAkSoAx> philpem: so you are using orchestra in the host to provision VM's only?
<philpem> yeah
<philpem> "The above workarounds only work if you also set "bind-interfaces" in /etc/dnsmasq.conf, otherwise dnsmasq binds to the wildcard address."
 * philpem has a sneaky plan
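In dnsmasq's main config, that workaround is roughly:

```
# /etc/dnsmasq.conf -- stop the system dnsmasq binding the wildcard
# address, so libvirt's per-bridge dnsmasq instances can start
bind-interfaces
# and keep it to the interfaces it should actually serve, e.g.:
# interface=eth0
```

Without bind-interfaces the system dnsmasq grabs 0.0.0.0:53/67, and libvirt's dnsmasq then fails with exactly the "exit status 2" philpem saw.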
<RoAkSoAx> philpem: well I currently run orchestra as a VM to deploy VM's and let libvirt handle dnsmasq
<philpem> ah.
<philpem> I was running Orchestra on the VM host, alongside libvirt
<RoAkSoAx> philpem: I do have an orchestra server on a VM host, but I don't manage dns,dhcp
<philpem> hm, it seems the problem is virt-manager doesn't let you specify PXE/bootp parameters in the network settings, so you can't make it bring new VMs up from cold
<RoAkSoAx> philpem: but libvirt does.
<RoAkSoAx> philpem: <bootp server="$all_systems.cobbler.ipaddr" file="pxelinux.0" />
<RoAkSoAx> philpem: you might be interested in: lp:smoser/+junk/cobbler-devenv
<RoAkSoAx> philpem: once you setup that, you can use install orchestra in the cobbler VM
<RoAkSoAx> philpem: that's what I use, but we haven't had time to update that to use orchestra by defauklt
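For reference, that bootp element lives inside the network's dhcp block in libvirt's network XML; a sketch with example addresses (the cobbler templating variable from RoAkSoAx's snippet is replaced by a literal IP here):

```xml
<network>
  <name>pxe-net</name>
  <bridge name='virbr1'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.200'/>
      <!-- point PXE clients at the cobbler/orchestra server -->
      <bootp server='192.168.100.1' file='pxelinux.0'/>
    </dhcp>
  </ip>
</network>
```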
<philpem> "starting kvm... FAIL". yay.
<philpem> grr. had a stray left over in /etc/dnsmasq.d from my earlier attempts with kvm.
<philpem> hm. dnsmasq is bringing up the libvirt interfaces now.
<philpem> but libvirt keeps running it even though DHCP is disabled on that interface!
<RoAkSoAx> uhmm interesting
<RoAkSoAx> philpem: you might wanna consult that with hallyn
<zul> adam_g: around?
<philpem> ok, so the workaround is to either NAT the thing, or edit the XML files to bridge my "local LAN" interface to eth0
<philpem> blehhh...
<indstry> When adding a second drive to my fstab conf file,  I first create a mount point dir.  When creating that dir what permissions should i set?  would permissions on the drive itself override the mount points permissions?
<zul> Daviey: ping
<adam_g> zul: yo
<adam_g> RoAkSoAx: yo
<zul> adam_g: im switching over the nova_sudoers to use the nova-rootwrap, its been tested i just had to add a patch that has been fixed upstream
<RoAkSoAx> adam_g: yo! man! So I tried to reproduce bug #908895... with no success
<uvirtbot> Launchpad bug 908895 in orchestra "The name 'precise-x86_64' is invalid." [Undecided,New] https://launchpad.net/bugs/908895
<adam_g> RoAkSoAx: weird, gimme a few and ill make sure i can still reproduce
<adam_g> zul: cool. if its been fixed upstream wont the patch be unecessary after the next snapshot?
<RoAkSoAx> adam_g: are you setting the *system* name to precise-x86_64?
<zul> adam_g: yeah it wont be needed, but they are having devstack pypi issues right now
<adam_g> RoAkSoAx: im doing nothing special, installing, importing isos, booting something and selecting an entry off the menu
<Daviey> zul: hola
<zul> Daviey: nova has libguestfs support now is that something we should look at?
<RoAkSoAx> adam_g: 1. downloaded iso. 2 imported it. 3. created a system selecting that profile, 4. installation doesn't fail
<RoAkSoAx> adam_g: the hostname gets inherited from the system name if it is not set
<Daviey> zul: interesting, this came up a while ago IIRC
<Daviey> as a standalone thing.
<Daviey> I think we should certainly investigate
<Daviey> Out of interest zul, how are MIR's looking?
<philpem> ok, it looks like to get this working, i need to set the bridge up manually.
<adam_g> RoAkSoAx: right. id love to have it totally usable out of the box for non-juju installs, though, without worrying about enlistment.. if its just a matter of changing 1 character
<zul> keystone MIR has been thrown back over the wall and horizon has been cleaned up so that I can start writing the mir
<philpem> so I have br0 for the LAN and pxebridge to netboot the virtual machines
<adam_g> RoAkSoAx: tho i suppose it wouldn't matter when the whole enlistment thing comes together. still think the default profiles should be bootable/installable regardless
<philpem> that SHOULD stop libvirt interfering
<Daviey> zul: \o/
<pmatulis> philpem: are you still struggling with setting up a few guests on kvm?
<philpem> just about fixed it actually
<pmatulis> philpem: you don't need orchestra for that
<RoAkSoAx> adam_g: right, but this works out of the box for non-juju installs. The hostname is not inherited from the profile name
<RoAkSoAx> adam_g: if a system points to a profile, the hostname is inherited from the *system* name not, from the *profile* name
<adam_g> RoAkSoAx: no, it doesn't unless the system has been enlisted
<RoAkSoAx> adam_g: right, but what i'm saying is: In order to be able to have a cobbler *system* then it has to point to a cobbler *profile*
<RoAkSoAx> adam_g: but yes, now i'm seeing your point when you are trying to deploy *from* a cobbler profile
<adam_g> RoAkSoAx: yeah, i understand.  if there is no system configured, the menu presents user with a bunch of profiles, those should be bootable/installable.
<RoAkSoAx> adam_g: yes they are, though your bug description should have made that differentiation :)
<RoAkSoAx> more explicitly
<adam_g> RoAkSoAx: i put together a new file server over the holiday. thought to myself, "oh! ive got this orchestra server ive been using for the last month to do dev work, ill just boot and install from there" it would have been awesome to just boot, choose precise, install.. w/o worrying about enlisting/macaddr/etc.
<RoAkSoAx> adam_g: yes yes, I know what you mean now. You are booting off a profile, not off a system (speaking within cobbler terms)
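The profile/system distinction above can be sketched with the cobbler CLI (the profile and system names, MAC and hostname here are assumptions):

```shell
# list the profiles created by `cobbler import` -- these back the bare PXE menu entries
cobbler profile list

# a *system* points at a profile and pins a MAC; its hostname comes from the
# system name, not the profile name
cobbler system add --name=fileserver01 --profile=precise-x86_64 \
    --mac=00:16:3e:aa:bb:cc --hostname=fileserver01
cobbler sync
```

Booting a machine whose MAC matches no system record falls back to the profile menu, which is the case adam_g's bug is about.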
<philpem> oops. set up a bridge in /etc/network/interfaces and forgot the options to SAY it's a bridge.
 * philpem waits patiently for the server to give up trying to bring up an interface called 'vmpxe' which doesn't exist...
<adam_g> zul: are you talking about patching nova with the root wrapper stuff for precise, or oneiric/diablo?
<zul> precise
<adam_g> zul: ahh. is it getting cherry picked from master or from a patch thats still in gerrit review?
<RoAkSoAx> adam_g: so I just selected precise-x86_64 off the PXE Menu and so far so good
<zul> adam_g: still being reviewed: https://review.openstack.org/#change,2787
<adam_g> zul: ah
<adam_g> RoAkSoAx: weird. i haven't tried again with my setup. i will in a few
<RoAkSoAx> adam_g: i have an installation error with grub but nothing about the hostname
<philpem> woo! it works! :)
<pmatulis> what does?
<RoAkSoAx> philpem: awesome
<adam_g> RoAkSoAx: if you still have the error'd system, can you check what its got for a hostname?
<philpem> basically, you can't use the Routed or NAT interfaces because they start DHCP.
<philpem> what you have to do is configure a network bridge for PXE manually, in /etc/network/interfaces
<philpem> then run Orchestra on that interface, and tie the VMs to it
<pmatulis> philpem: so this is a limitation of orchestra?
<philpem> pmatulis, partly.
<philpem> Orchestra won't let you tell it "I want you to ONLY listen on interface blah" -- it listens on ALL interfaces unless you edit the dnsmasq templates.
<philpem> then run 'sudo cobbler sync'
<philpem> Libvirt doesn't do bridging like VirtualBox does -- VBox asks you to nominate the ethernet adapter you wish to bridge to. KVM/Libvirt/virt-manager expects you to set up the bridge in /etc/network/interfaces, then specify the name in the virtual machine config
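A sketch of the kind of stanza philpem describes (interface and bridge names are assumptions; the bridge_* options need the bridge-utils package):

```
# /etc/network/interfaces -- dedicated bridge for PXE-booting VMs
auto pxebridge
iface pxebridge inet static
    address 192.168.123.1
    netmask 255.255.255.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
```

The VM then references it in its domain XML with `<interface type='bridge'><source bridge='pxebridge'/>...</interface>`, which sidesteps libvirt's own dnsmasq entirely.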
<adam_g> zul: tftp-hpa is broke :(
<zul> adam_g: dist-upgrade please :)
<pmatulis> philpem: i routinely tell virt-manager to use my host's bridge
<pmatulis> when i set up a guest
<RoAkSoAx> adam_g: ok, I was able to reproduce it. For some reason my setup was using a different hostname than the profile name
<RoAkSoAx> adam_g: ok so I will replace all _ with -
<adam_g> zul: whats supposed to get upgraded to help? it looks like a bad upstart conf: start: Unknown job: tftpd-hpa
<adam_g> RoAkSoAx: ahh
<zul> adam_g: it was a bad upstart stanza which one are you using?
<zul> ubuntu2 or ubuntu3
<adam_g> 5.1-3ubuntu2
<adam_g> RoAkSoAx: should we also rename existing release profiles on upgrade? via cobbler profile rename should be safe
<zul> ok there is an ubuntu3 which fixes that
<adam_g> zul: ah, updating my mirror
<zul> adam_g: k
<adam_g> back in 10min
<RoAkSoAx> adam_g: you mean while updating the already imported iso's?
<philpem> hmm, OK, there is one issue with the install.... it can't handle the concept of having two Ethernet adapters.
<philpem> One ends up configured (the one used to PXE from), the other (the LAN I/F) does not.
<RoAkSoAx> adam_g: so this is the change needed if you wanna test it: http://paste.ubuntu.com/793002/
<RoAkSoAx> philpem: yes, unfortunately we cannot handle that in the preseed as we could with kickstarts. But you will have to look into late_commands. Also, if you want to file a bug report about it, that would be great
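A hedged sketch of the late_command route RoAkSoAx mentions, appended to the preseed file (the interface name is an assumption; the installer only configures the NIC it booted from):

```
# bring the second NIC up via DHCP on first boot
d-i preseed/late_command string in-target sh -c \
    'printf "\nauto eth0\niface eth0 inet dhcp\n" >> /etc/network/interfaces'
```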
<philpem> it *looks* like it doesn't use persistent-eth either.
<uvirtbot> New bug: #911922 in libnss-ldap (main) "libnss-ldap:1386 does not install" [Undecided,New] https://launchpad.net/bugs/911922
<philpem> and the rule for assigning eth numbers is that eth0 gets the lowest MAC address... or something along those lines
<RoAkSoAx> right, but in regular bare metal installations its always gonna be the same MAC ;)
<philpem> :)
<philpem> i usually swap the ethN IDs around in /etc/udev/rules.d/70-persistent-net.rules
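For reference, a line from that rules file looks roughly like this (the MAC address is a placeholder); swapping the NAME= values between two such lines swaps the ethN assignments:

```
# /etc/udev/rules.d/70-persistent-net.rules (auto-generated on first boot)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:aa:bb:cc", \
    ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```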
<philpem> I might have to save this scrollback for future reference :)
<adam_g> RoAkSoAx: ah, cool. that looks a lot cleaner than what i was thinking
<bitmonk> hey guys are there any kernel backports to lucid in a ppa somewhere? or examples of .config for -server oriented kernels? we need newer megaraid driver than lucid kernel has, and the last kernel someone built here is having stability issues under some workloads.
<hallyn> stgraber: do I remember incorrectly that you enabled libcgroup uploads for me?
<stgraber> hallyn: I thought I did yes. Looking...
<hallyn> stgraber: was trying to upload with http://people.canonical.com/~serge/libcgroup.rm.debdiff applied, but it came back saying I had insufficient perms
<stgraber> hallyn: could be that the ubuntu-server package set is autogenerated and libcgroup was lost with the last update. That or LP tried to fix the package set mess they created early in the cycle and that got dropped
<stgraber> hallyn: anyway, I added it again now
<hallyn> stgraber: thanks.  guess we'll find out in awhile if it is getting auto-regenerated :)
<stgraber> if it's auto-generated, I'm not exactly sure why lxc would be in the ubuntu-server package set and not libcgroup which is part of its recommends
<hallyn> was lxc in there?
<stgraber> yeah, I checked and lxc is in the package set
<stgraber> maybe lxc should be promoted to main at some point, as we seem to recommend it everywhere :)
<hallyn> stgraber: btw a bug raised by smoser reminded me that devpts is worse off than i thought, and to our apparmor security mitigation concerns for lxc we need to add 'mount -t devpts devpts /mnt' from a container, which will give it the host's devpts
<stgraber> then we'll have a very good reason to ensure libcgroup is in the ubuntu-server packageset (and stays there)
<hallyn> true.  it's been awhile since lxc MIR was rejected.  Wonder how it would fare now
<hallyn> i could see it being rejected on the basis that "we already have libvirt-lxc"
<hallyn> which would sink juju-lxc of course
<stgraber> hallyn: yeah, that's the same problem I had before I patched mountall where devpts would get remounted and mess with the host. It's indeed pretty bad when someone does it on purpose.
<stgraber> hallyn: I'm not sure how much flexibility apparmor will give us, but ideally I'd only allow perfectly safe filesystems (if that even exists) and loop mounting
<stgraber> hallyn: looking at what gets mounted by mountall, I'd at least add binfmt_misc, debugfs and securityfs to the list of stuff we don't want
<stgraber> for devpts, we'd also have to prevent the container from unmounting the ones we mount when we create the container
<hallyn> we wouldn't have to if lxc made sure the host's devpts was umounted before startup
<hallyn> good point, i'll add binfmt_misc to the list on the wiki page  (the others i'd added yesterday)
<hallyn> stgraber: finally (i assume we'll be chatting next week? :)  I've been trying to push Daniel to push the kernel patch for reboot, but we may have to go beg smb to take the patch as it stands
<hallyn> (it has sign-off by Oleg, so I see no problems...)
<stgraber> hallyn: yeah, missing that patch is currently making my containers fail at shutdown (I'm running the patched mountall and without /etc/init/lxcmount.conf) and it's really the next step to get rid of lxcguest
<hallyn> stgraber: somebody go tell linaro they're working him too hard :)
<stgraber> I'll definitely have some time to discuss lxc next week, I guess we'll just need to go buy a pack of beer for smb and we'll be good for our patch ;)
<hallyn> crossing my fingers...
<philpem> Question re. Orchestra. If I do a bare metal install on a VM, are the VMs supposed to register themselves with Nagios?
<philpem> Because Nagios Admin is only showing me what appears to be the VM server itself (calling itself "localhost")
<philpem> Cobbler can obviously see that the VM is up though, because it turned netboot off...
<g00gle> I would like to install ZendTo (http://zend.to) and I have the x64 version of Ubuntu, however - in order for php to handle downloads / uploads of 2GB+ I need to compile php for x64... is this something you guys can help me with? is there such a package in the ubuntu repositories for x64 PHP...?
<TJ___> Hello
<TJ___> Does anyone know if a LEMP stack will ever be one of the install options?
<philpem> TJ___, LEMP?
<TJ___> linux, nginx, mysql, php
<TJ___> as opposed to LAMP
<jmarsden|work> I see no letter "E" in nginx ... ?
<mgw> hey, i have a question regarding cobbler+dhcp (isc): If I have two interfaces (e.g., two VLANs) with the same MAC address, only one of them ends up in dhcpd.conf
<TJ___> it's because nginx is pronounced engine-x
<jmarsden|work> So either rename it to that, or ask for an LNMP stack :)
<mgw> manually adding the second interface to dhcpd.conf works fine though
<mgw> i think it's a bug, as it lets me add them to cobbler
<TJ___> Well then, does anyone know if a LNMP stack is being worked on? I hear nginx can be better for hardware-limited machines.
<TJ___> as one of the install options in tasksel that is.
<jmarsden|work> Sounds like a trivial patch to tasksel, but whether there is enough interest from people who can't just sudo apt-get install nginx mysql php5 , I don't know.
<g00gle> I'm trying to follow this: http://zend.to/phpfix.php and keep running into issues when I use the source from apt-get source .... any suggestions?
<philpem> jmarsden|work, AIUI Nginx is pretty niche-market at the moment. It is in the repos, though.
<philpem> So setting it up wouldn't be hard.
<philpem> If you're doing it on a lot of machines you might want to write instructions, an automated script (e.g. Python) or a package to set everything up, but that's it.
<jmarsden|work> philpem: Right, so I think it is more of a marketing "do we want this option in tasksel" question than a technical one.  And I'm more techie than marketing :)
<philpem> LAMP is, at least at the moment, the standard and I think you'll find it hard to overcome the inertia that it's built up.
<philpem> Also if you're at the point of using Nginx over Apache, you'll probably want to do the setting tweaks manually anyway (unless you have a box of identically configured machines, in which case... break out a SysRescCD image, a USB hard drive and a copy of PartImage)
<shade34321> hey...this is probably a stupid question but I can't seem to find my answer, probably because I'm looking for the wrong thing. Essentially what we've just found out is that our webserver is allowing people to go through the folders and access content that is not allowed to be accessed w/o login credentials and then only certain people are allowed to see certain things.
<shade34321> how can I get it set up correctly to do this? Also I took this system over almost a year ago just haven't had time to play with it much and hence why I haven't found it before...thanks for the help
<philpem> shade34321, Htaccess / Htpasswd is one way to do it
<shade34321> and the site is usiing drupal for the front end along with some trac
<philpem> and "Options -Indexes" in your htaccess will stop it generating directory indexes where there is no index.{php,html}
<shade34321> hmm..I will look into that, haven't looked/touched that in awhile but would that change anything in my current set up?
<philpem> well, Options -Indexes would replace the directory listings with a 403 Forbidden error.
<philpem> so if you go to http://foobar.local/images -- where you'd normally get a list of everything in /images, you just get a 403
<shade34321> ok
<shade34321> ill do that real quick
<philpem> the htaccess/htpasswd stuff would pop up a password requester whenever someone wanted to browse either the entire site or a specific directory (depending on where you put the .htaccess file)
<philpem> and for the love of $DEITY, don't put the htpasswd file in public_html! put it somewhere else that Apache can see, but that Apache won't serve to the world
<shade34321> lol
<philpem> otherwise someone can download the htpasswd file, then crack the password hashes....
<shade34321> i may actually take the site down for a week or so to play with this because according to the ubuntu docs htaccess is not the preferred way
<philpem> I usually mkdir /var/htpasswds and put them in there
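A minimal sketch tying the pieces above together (paths and the username are assumptions):

```
# .htaccess in the directory to protect
Options -Indexes
AuthType Basic
AuthName "Restricted"
AuthUserFile /var/htpasswds/site.htpasswd
Require valid-user
```

The password file lives outside the docroot, created with something like `sudo htpasswd -c /var/htpasswds/site.htpasswd alice`; note the vhost's `AllowOverride` must permit `AuthConfig` and `Options` for the .htaccess to take effect.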
<philpem> I'm suggesting quick ways to tighten up a site, i thought that's what you wanted :)
<shade34321> yup....right now quick
<philpem> but yeah, grab the Ubuntu Server Guide and read through the section on Apache.
<shade34321> and then fix it for good
<philpem> the base install is usually fairly tightly locked down.
<shade34321> (my philosophy in life...fix it quick then go back and do the shit right:)
<philpem> if you're running PHP, Suhosin is worth turning on (it's a security-hardening extension which makes it less likely that a hole in a PHP app will allow the server to be rooted).
<philpem> Ubuntu ship it by default, but make sure it's still enabled...
<philpem> if someone's turned directory indexes on, there's no telling what else has been done.
<shade34321> when i took this job the exiting admins told me google was my best friend and that was my training, we're all college students btw
<uvirtbot> New bug: #322327 in bzr "Integrated permissions/ownership diff output for etckeeper/bzr" [Wishlist,Confirmed] https://launchpad.net/bugs/322327
<philpem> shade34321, that is... shocking and yet not surprising.
<shade34321> lol...which part the google stuff or we're college students?
<philpem> the google stuff
<shade34321> lol...it's very annoying
<philpem> though to be honest, six years ago I took over a webhosting company (which still isn't making a profit but that's another story). never did a single Zend PHP course or anything.
<shade34321> well congrats on making it so far:)
<philpem> in six years I've learned enough to bring up a server from scratch, clean up a hacked server, trace intrusions and find out *exactly* what happened...
<philpem> this is the sort of thing that's learned best as.. well, an apprentice really.
<shade34321> lol
<shade34321> yes it is
<philpem> find someone who knows what they're doing and get them to teach you. kinda like what happened with blacksmiths, glassblowers and so forth :)
<shade34321> luckily for me I have a bunch of friends who are smarter than I am so i grill them whenever I can...but alas they have work just like i do
<shade34321> so time is always an issue
<philpem> Well, we're all friendly in here. More friendly than most of the trolls in #ubuntu anyway :)
<shade34321> that and they can't be allowed access to our systems...some national security stuff:(
<philpem> Say no more.
<shade34321> lol...and #centos:)
<shade34321> (we use primarily RHEL systems with some other distros so I ask #centos a lot of questions just to find it on google hours later and nobody answered it)
<philpem> if those machines are running RHEL, make use of the RedHat support contract :)
<philpem> but my server (the web hosting one) is Centos + CPanel, so I know how that works.
<philpem> hint: I spent nearly a day securing the stupid thing. on ubuntu it took 20 minutes, and most of that was ticking stuff off my checklist...
<shade34321> i would love to but alas I don't have any of the info or the credentials to get the info,  I work for my school and so we have a plethora of RHEL support
<shade34321> been working on trying to get it though
<philpem> "disable directory indexes... oh, it's already off. check."
<philpem> "set PHP to log errors to the apache error log... already done."
<shade34321> lol
<Folklore> whats the max number of connections
<Folklore> ubuntu server can handle by default
<Folklore> also is there a command to check that
<guntbert> Folklore: what sort of connections?
<Folklore> TCP
<Folklore> since UDP is connectionless :P
<Folklore> and others aren't on my radar
<guntbert> Folklore: you will likely reach the individual limit of any server before you reach the OS's limit
<Folklore> individual limit?
<guntbert> Folklore: about what kind of server programs are we talking?
<amstan> i'm trying to scp a lot of data over gigabit, but it seems like the cpu on my server's the bottleneck
<amstan> i suspect it's the encryption
<amstan> how can i disable it?
<guntbert> Folklore: and please use my nick, so I get alerted to your answer
<amstan> or pick a better cipher (which?)
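Stock OpenSSH won't let you turn encryption off entirely, but picking a cheaper cipher often relieves a CPU-bound transfer; a sketch (host and paths are placeholders, and arcfour was the classic low-CPU choice of this era):

```shell
# lighter cipher for a bulk copy
scp -c arcfour bigfile.img user@server:/data/

# or amortize per-file overhead with tar piped over ssh
tar cf - /data | ssh -c arcfour user@server 'tar xf - -C /backup'
```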
<Folklore> ubuntu server
<Folklore> GUNTBERT
<guntbert> !tab | Folklore
<ubottu> Folklore: You can use your <tab> key for autocompletion of nicknames in IRC, as well as for completion of filenames and programs on the command line.
<Folklore> you assume i'm using the same client as you
<Folklore> :p
<guntbert> Folklore: ubuntu server, yes of course but you are talking about connections, so you will have some server programs running (httpd, smptd,...)
<Folklore> well i'm using thread pool and epoll
<Folklore> so unless a new file handle(socket handle) takes up lot of mem
<Folklore> I want to see how far I can push the server limit, so do you know the default setting
<Folklore> and the memory usage
<philpem> drat. it looks like the install image uses the PXE source as eth0 and doesn't bother setting up eth1 :(
<guntbert> Folklore: it seems I don't understand what you are trying to do at all - so I'm obviously the wrong person to help you :)
<Folklore> guntbert
<Folklore> I want a simple simple server that can handle x number of tcp connections
<Folklore> my question is what the default of limit is
<Folklore> *os limit
<Folklore> and possibly what kinda memory i'm looking at per connection
<Folklore> not taking into account the memory I allocated, just from the connection itself, memory used at kernel level or whatever
<guntbert> Folklore: I don't know of any hard coded limit - and as for the memory - the biggest part will be the buffer (I guess, with a lot of hand waving...)
<Folklore> thanks
<Folklore> guess only way to find out, is to find out heh
<Folklore> and test
<RoyK> Folklore: check the tunables under /proc/sys/net
<RoyK> Folklore: current kernels (that is, recent as in the latest five years or so) will have reasonable defaults for most uses
<RoyK> Folklore: read http://fasterdata.es.net/fasterdata/host-tuning/linux/ or just google linux network tuning
<RoyK> [rw]mem can be worth tuning
<RoyK> and make sure you have a nic and driver that supports checksum offloading
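A sketch of the tunables RoyK is pointing at (the values are illustrative, not recommendations):

```shell
# inspect current socket buffer limits
sysctl net.core.rmem_max net.core.wmem_max
cat /proc/sys/net/ipv4/tcp_rmem    # min default max, in bytes

# raise them for high-bandwidth paths (persist in /etc/sysctl.conf)
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216

# check what offloads the NIC/driver supports
ethtool -k eth0
```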
<nxvl> adam_g: ping
<adam_g> nxvl: pong
<nxvl> adam_g: i've been pointed at you to get some cloud deployment docs?
<nxvl> adam_g: did you have those somewhere i can see them?
<adam_g> nxvl: the only stuff i can point you to is whats been published on blogs last cycle. exactly what are you deploying?
<nxvl> adam_g: a private cloud
<nxvl> adam_g: and i have not much idea on how to do that
<adam_g> nxvl: if you mean ubuntu + openstack, take a look at the series at http://cloud.ubuntu.com/2011/10/ubuntu-cloud-deployment-with-orchestra-and-juju/
<nxvl> that one
<nxvl> awesome, thanks
<adam_g> nxvl: np. thats a bit outdated but should get you going. if you make it as far as https://wiki.ubuntu.com/ServerTeam/UbuntuCloudOrchestraJuju and run into problems ping me again. there are a couple more charms you'll need to checkout and deploy if doing this on precise
<uvirtbot> New bug: #912030 in autofs5 (main) "The auto.net script that comes with autofs5 is broken" [Undecided,New] https://launchpad.net/bugs/912030
<Aison> when I turn off my ubuntu server, I always get the message: system halted but in fact it's not turned off
<Aison> why?
<philpem> I'm using Orchestra to deploy servers. Each server has two NICs, both need to be enabled in DHCP mode. How do I make the installer enable the second one ready for the first boot?
<Folklore> maybe echo DHCP=YES >> /etc/rc.conf
<Folklore> or add it manually
<Folklore> nano /etc/rc.conf
<philpem> well, it's bringing the one it used to PXE-boot up (with DHCP) but not the second one
<philpem> so I end up with eth1 active, and eth0 sitting idle.
<Folklore> try
<Folklore> ./etc/rc.d/dhcp start
<Folklore> without .
<shade34321> hmm...so my web server is allowing access to the directory view yet everything I can think of to disable this is failing. If I go to the base url I see the home page yet if I go to the file manager I can edit the url and eventually get to the root of the websites folder structure. Any ideas of what I'm missing?
<shade34321> I've tried to edit the .htaccess, get a new version of the .htaccess file, edit the virtual hosts file
#ubuntu-server 2012-01-05
<SpamapS> shade34321: apache's config is extremely complex. Its hard to tell what is wrong without understanding your whole config.
<shade34321> SpamapS: ok, thanks. I don't even know the entire config, took it over and haven't had a chance to look at it much, if you have any questions about it I will try my best to answer them
<lifeless> SpamapS: and even then .... ..
<uvirtbot> New bug: #912066 in cloud-init "/etc/fstab contains incorrect device for swap partition when no ephemeral disk present." [Undecided,New] https://launchpad.net/bugs/912066
<uvirtbot> New bug: #912069 in python-novaclient (main) "nova client does not work, fails to import argparse library" [Undecided,New] https://launchpad.net/bugs/912069
<shade34321> does apache have a irc channel?
<SpamapS> shade34321: You probably want to make sure that there are no 'Options' lines with 'Indexes' in the list anywhere
<shade34321> hmm...question...Options Indexes and Options -Indexes are not the same thing are they?
<philpem> shade34321, first one enables, second one disables
<philpem> most specific takes priority
<shade34321> <--idiot
<shade34321> thanks
<philpem> no, the word you're looking for is "newbie", or possibly "greenhorn".
<philpem> the only stupid question is the one you didn't ask.
<SpamapS> shade34321: Options -Indexes should turn it off
<shade34321> I've been looking at this too long...lol...I kept reading about -Indexes and then seeing Indexes and not seeing the difference
<shade34321> that it did
<shade34321> now to see if everything else works right
<shade34321> o.O
<SpamapS> shade34321: the real question is why did it get turned on.
<philpem> on the specificity rule, if you have Options Indexes set for /var/www, then clear it for /var/www/images, images will be unindexed but everything else will be indexed
<shade34321> thanks for the help!
 * SpamapS suggests version control for configs.. *soon*.. ;)
<philpem> SpamapS, some web developers turn it on to be lazy... I have it set on my development servers, and for only one directory (temp) on the release server
<shade34321> <--took over last may and I haven't touched any config stuff and the people who did do this graduated and will not answer questions as to why certain things are the way they are
<philpem> well.. release server == my website
<philpem> I already have version control for configs -- I use Mercurial to do it :)
<shade34321> i've actually had a lot of questions for them since my knowledge is fragmented to get told to google it
<shade34321> :)
<philpem> hg addrem / hg commit before I change anything in /etc, then hg commit after I change it. something goes wrong, I hg up -C to the rev before I changed it.
<philpem> I can roll everything back to when I last installed or upgraded if I need to.
<philpem> Or trundle through the change history and see what I changed, when, and why.
<philpem> "Don't explain what you did, you can get that from the change history and deltas. Instead, explain what the problem was and why you fixed it that way."
<philpem> "Years after you leave, your colleagues will praise you for your foresight, and thank the Gods of Computing that you were working on their codebase. Instead of, you know, wishing you a nasty end involving sharks with frickin' laser beams."
<Kyle__> philpem: Sure you're not talking about my predecessor at this job? For the last part.
<shade34321> or mine?
<Kyle__> philpem: But you forgot, sharks with laser beams, and syphilis.
<shade34321> lol
<Kyle__> I'm setting up a server with multiple-heads, to put up interesting graphs of what's going on on the network (manager distractors).  Does ubuntu-server's X-config work just like the desktop config?
<philpem> My goal at work: to have people remember me as "the one guy in the SE department who bothered to document his code, and write decent commit comments"
<philpem> Kyle__, should do.
<philpem> install ubuntu-desktop for GNOME3, kubuntu-desktop for KDE, xubuntu-desktop for XFCE.
<philpem> that'll pretty much turn an Ubuntu Server box into a desktop
<philpem> or just pick the app you want and install it... you can run X without a window manager but it isn't fun :)
<Kyle__> philpem: Lofty goal.  I was teased heavily for 6 months about committing every piece of documentation to a wiki.  Then cursed by other departments when it was documented that we did _EVERYTHING_ right on our side, thank you very much.  Then thanked heartily when I left, for leaving the new sysadmin with viable, up-to-date documentation :)
<philpem> if you want a manager distractor and don't care for functionality, you probably want GNOME3 (he says, only half joking, and with a grin like the Cheshire Cat's)
<Kyle__> philpem: Heh.  Gnome3 would eat up all the CPU.  I'm going to generate some little graphs via a lightweight ruby script and toss them up as X backgrounds, rotating through the monitors periodically.
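A sketch of the rotation loop Kyle__ describes (the graph directory, display and the use of feh are assumptions; any CLI wallpaper setter would do):

```shell
#!/bin/sh
# cycle freshly generated graphs across the X root window
while true; do
    for png in /var/graphs/*.png; do
        DISPLAY=:0 feh --bg-fill "$png"   # feh paints the root window and exits
        sleep 30
    done
done
```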
<philpem> Kyle__, You haven't seen my code... 1:1 code:comment ratio, and anything with a public API has Doxygen comments.
<philpem> you can throw Doxygen at it, and get fully searchable and *useful* documentation.
<Kyle__> Wowzers.
<philpem> every so often I read through the docs and make sure I haven't made any silly mistakes, so there are commits and CRs along the lines of "Fix stupid documentation mistake: open_device_fast does NOT return an int, it returns an E_CT_ERROR_CODE!"
<philpem> My manager used to ask why I was "filling the CR database up" with those. Until one of the other developers asked him to pass along a message: "Can you ask Phil if he's got time to document ES_CP_VIDEO_ENGINE? I know it's not his code, but his documentation rocks!"
<Kyle__> Sweeet.
<SpamapS> Kyle__: re the manager distractor.. if I were in your situation, I'd do all the graphics w/ HTML and just run a browser fullscreen.
<SpamapS> Kyle__: backgrounds and X programming seem like overkill for a simple rotating image. Also, ratpoison should suffice for your window management needs. :)
<Kyle__> SpamapS: Oh, not real X programming, just tossing up a background with a CLI program.  Backgrounds in X are held in graphics memory, and take up zero CPU.  Can't say that about firefox or webkit, even when they're idle.
<Kyle__> SpamapS: Ratpoison?  Humm.  Don't you need something to put that on, so the manager eats it?
<SpamapS> Kyle__: how crappy is this box you're putting this on that you're worried about most likely about 20MB of memory?
<SpamapS> Kyle__: and if you do it w/ HTML you can let the managers run it on their desks, and not even come over to look at the distractors ;)
<Kyle__> SpamapS: Not crappy at all, but it's going to serve as the head of an openstack cluster.  If my experience with eucalyptus is a judge, this system will occasionally swap under load, up until I get ~24GB of ram into it.
<SpamapS> Kyle__: do not compare openstack to eucalyptus so quickly. ;) openstack is the anti-euca when it comes to scaling
<Kyle__> SpamapS: God I hope so.  That's why we switched when moving from our proof-of-concept hardware to real hardware.
 * Kyle__ loves academia: you couldn't pull a change like that in business so quickly.
<SpamapS> err
<SpamapS> thats so not true of successful businesses.
<shade34321> Kyle__: what is this cluster going to be used for if you don't mind me asking?
<SpamapS> Kyle__: successful businesses know when to stop shovelling money into a hole just as fast as academics.
<Kyle__> shade34321: For students, it will run web-apps, or whatever server-code they're writing for class.  1-1 or 1-many student to vm ratio.  Also will be used for sysadmin classes.  More importantly, I'll spin down the student VMs at night, and be running hadoop and nexus jobs on it :)
<shade34321> nice
<Kyle__> SpamapS: Yes, but usually you'd have to re-do the proof of concept with the new software, before getting the new hardware.  At least that's what I've seen.
<shade34321> at my job I'm in charge of 3 HPC clusters, two running slurm, one running pbs
<SpamapS> Kyle__: depends on how burdened with big and clunky red tape the business is.
<Kyle__> shade34321: pbs?
<shade34321> one sec and ill give a link
<Kyle__> SpamapS: and how flush the manager running the project is feeling, coupled with their feelings towards the employee/contractor.
<Kyle__> These Dell C5000s are neat.  Like having 12 blades in 2 U.  Sadly without the fancy network interconnect, but a fraction of the price.
<Kyle__> Sorry, 3 u.
<shade34321> sorry....to many links
<Kyle__> Heh.
<shade34321> stands for Portable Batch System and it's used in unison with torque and maui
<shade34321> it's for academic research, AE to be exact
<shade34321> currently the boss wants to get a new cluster since our main one is dying:(
<Kyle__> Neat.
<Kyle__> Shame mosix isn't still going strong.
<shade34321> mosix?
<Kyle__> How big of a cluster, if I can pry?
<Kyle__> Right now I'm feeling lucky to have these 96 cores, but I know that's small potatoes.
<shade34321> one of them is about to be decommissioned, it's really old and we've lost 60% of the nodes, one is 256 nodes with 2 dual cores procs(it's an IBM machine from early 2000's) and the other is a 30 node cluster with dual quad core xeons
<SpamapS> mosix was a mess
<SpamapS> neat idea
<SpamapS> but too naive
<shade34321> lol
<Kyle__> SpamapS: Yea, but if they hadn't tanked, it may have been gorgeous by now.
<shade34321> our big cluster, with 1024 cores is currently at 214 active nodes at 856 cores:(
<shade34321> his current path is he wants to stick as many cores per node as he can
<Kyle__> shade34321: Yea.  Small university, smaller budget.  But if this pans out the budgets of next year and years after may get funneled right into my racks!
<shade34321> so 4 socket boards:D
<shade34321> understandable
<SpamapS> Kyle__: I'm not sure I agree.. the idea ignored things like map/reduce entirely and focused just on trying to magically make multi-processing scale out.
<Kyle__> shade34321: Wow does that sound fun.
<shade34321> the new opterons though seem a flop, our original idea so it'd be 64 cores per board
<shade34321> *seem to be a flop
<SpamapS> Kyle__: hadoop and its map/reduce cousins work because they acknowledge that big data requires specific strategies to break it up.
<Kyle__> SpamapS: Which for some jobs is the easiest way of comprehending them.  If you can get it "good-enough" it's worth it to trade some computational efficiency for programmer efficiency.
 * Kyle__ nods
<shade34321> and we're currently talking over the new intel ones with 10 cores
<Kyle__> shade34321: The i series xeons?  Why did I think they were at 12?
<shade34321> b/c they should be?
<SpamapS> I think all the mosix-friendly problems can be handled simply by writing generic workers and communicating jobs to them via message queues.
 * SpamapS got really excited about mosix when he first saw it tho
<shade34321> since we're talking about clusters...i have a question for you guys
<Kyle__> SpamapS: Especially compared to the PVM code I did for an undergrad project, it seemed like manna from heaven.
<shade34321> our clusters are running RHEL or CentOS (same difference) and currently we don't seem to have any temp monitoring software/hardware in place
<SpamapS> Yeah, definitely nicer to just fork and run than have to get into PVM muck
<shade34321> what do you guys recommend for it?
<Kyle__> shade34321: You tried the traditional i2c sensor package?  I forget what it's called in centos, been awhile since I tried it.  It never recognized the hardware in the Dells at my last gig, but once in awhile it would on nicer hardware.
<shade34321> well our dells are r410's with an IBM x3550 mgt node, home built cluster built before I arrived
<shade34321> and the IBM are all IBM x3550 nodes or some variant, don't remember exact numbers
<shade34321> wanted to use the iDRAC stuff from Dell but the head node stands in my way:/
 * SpamapS shuts down for the day
<Kyle__> shade34321: See if the dells came with a DRAC or iDRAC.  Dell Remote Access Controller.  They usually share the first NIC, and have a web-page that you can power the box off or on from, and that gives sensor readouts.
<Kyle__> Heh.
<SpamapS> Kyle__: good luck. There are a lot of openstack experts in here and in #openstack ... don't hesitate to ask. :)
<Kyle__> SpamapS: Thanks!
<shade34321> Kyle__: I couldn't find iDRAC installed but again I don't want to manually check all 30 nodes
<Kyle__> shade34321: Get into the network, behind the head-node.  See if the DRAC/iDRACs in your dells support SNMP.  If they do, you should be able to pull the sensor data using snmpget from the head-node.
<Kyle__> is there a DHCP server?
<shade34321> I want to say it's the head node
<Kyle__> Hey, some people hate DHCP (though I don't know why)
<Kyle__> Check for a bunch of DHCP requests that aren't your nodes.  Could be DRACs.  On some dells you have to enable it from the BIOS first though.
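The snmpget/snmpwalk idea Kyle__ describes can be sketched as a small loop run from the head node. Everything here is an assumption for illustration: the community string, the `nodeNN-drac` hostname pattern, and the OID (Dell's sensors live somewhere under its enterprise MIB, 1.3.6.1.4.1.674, but the exact subtree must be checked against the OpenManage MIB on your hardware).

```shell
#!/bin/sh
# Sketch: poll each node's DRAC for temperature readings over SNMP, from
# the head node. COMMUNITY, the hostname pattern, and TEMP_OID are all
# stand-ins -- verify against your DRAC SNMP settings and Dell's MIB.
COMMUNITY=public
# Illustrative subtree under Dell's enterprise MIB (1.3.6.1.4.1.674);
# look up the real temperature-probe OID in the OpenManage MIB files.
TEMP_OID=1.3.6.1.4.1.674.10892.1.700.20

for node in node01 node02 node03; do   # hypothetical node names
    echo "== ${node}-drac =="
    snmpwalk -v 2c -c "$COMMUNITY" "${node}-drac" "$TEMP_OID" 2>/dev/null \
        || echo "no SNMP response from ${node}-drac"
done
```

Once the exact sensor index is known, `snmpget` with the full instance OID works the same way and is cheaper than walking the whole subtree from a monitoring cron job.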
<shade34321> I'll take a look into it tmr afternoon when I stop by
<shade34321> and see what I can find, though last time I looked I couldn't find a drac option on start up...though I did not check the bios
<Kyle__> Good luck.  I should head out myself
 * Kyle__ waves
<shade34321> thanks
<shade34321> night
<twb> Does dnsmasq have an upstream VCS?   I can't find it.
<xInterlopeR777x> !ubuntu
<ubottu> Ubuntu is a complete Linux-based operating system, freely available with both community and professional support. It is developed by a large community and we invite you to participate too! - Also see http://www.ubuntu.com
<twb> re dnsmasq -- never mind, I isolated the fault on my own.
<uvirtbot> New bug: #912138 in samba (main) "smbd crashed with SIGABRT in dcerpc_lsa_lookup_names4() (dup-of: 911449)" [Undecided,New] https://launchpad.net/bugs/912138
<Vivek> Hi
<Vivek> I am facing issues with Enlisting a system at install time with the Ubuntu Orchestra Server.
<Vivek> The step where it fails is "select and install software".
<shang> Vivek: are you doing that from the server CD?
<Vivek> Yes I am doing it from the server cd.
<Vivek> The ubuntu client does not get authenticated to the ubuntu orchestra server.
<eagles0513875> hey guys, I am a bit lost. I'm trying to create a new install on a remote server via kvm and I want to set up lvm. I have a root partition which isn't part of the lvm and 960gb of free space which I want to set up as a logical volume
<eagles0513875> what are the steps to do this as i am a bit lost
<koolhead17> hi all
<atari2600a> hey
<atari2600a> what's the default location for KVM images?
<atari2600a> I'm setting up a server on my desktop & I want to give the VM its own partition & the more vanilla I do it now the easier it'll be when I end up wiping my / partition whenever
<koolhead17> !virt-manager
<koolhead17> atari2600a: https://help.ubuntu.com/community/KVM/VirtManager  see if this is what you need
<atari2600a> please don't be cryptic
<koolhead17> !virtmanager
<atari2600a> it's a simple question & if I can get this done BEFORE I create the image it'll make everyone's life easier
<jamespage> morning
<Daviey> hey jamespage !!
<jamespage> gudday Daviey!
<atari2600a> I'm...gonna go
<lynxman> morning o/
<lynxman> jamespage: morning sir ;)
<koolhead17> hey Daviey jamespage  :)
<koolhead17> elhoo lynxman
<lynxman> Daviey: morning! happy new year ;)
<Daviey> hey lynxman, trust you had a good one?
<lynxman> Daviey: internet disconnection for 2 weeks, it was very good indeed, how about you?
<Daviey> lynxman: wow, how did you cope?
<lynxman> Daviey: I did indeed :)
<lynxman> Daviey: didn't know I could but it definitely was refreshing :)
<Daviey> lynxman: sounds like an ordeal
<lynxman> Daviey: :)
<uvirtbot> New bug: #912212 in samba (main) "package samba 2:3.5.8~dfsg-1ubuntu2.3 failed to install/upgrade: ErrorMessage: package samba is not ready for configuration  cannot configure (current status `half-installed') - version upgrade from 10.04 LTS to 10.10 to 11.04.  various errors flashed up during upgrade from 10.04 to 10.10" [Undecided,New] https://launchpad.net/bugs/912212
<zul> morning
<lynxman> zul: morningtons
<uvirtbot> New bug: #911888 in samba (main) "gvfsd-smb-browse crashed with SIGSEGV in debug_lookup_classname_int()" [Medium,Confirmed] https://launchpad.net/bugs/911888
<uvirtbot> New bug: #912254 in drbd8 (main) "Please upgrade to 8.3.12 for Precise" [Undecided,New] https://launchpad.net/bugs/912254
<zul> lynxman: not exact internet disconnection though :)
<Deathvalley122> is it a bad sign of a bad drive when you get I/O error when trying to set up lvm?
<Daviey> SpamapS: Upgrading oneiric -> precise, mysql-server has asked me for the password 3 times, which i keep leaving blank
<Daviey> "If this field is left blank, the password will not be changed"
<lynxman> Daviey: Trying to come back slowly, do you remember the nomenclature we were talking about for a package including the git last checkout hash into its name?
<lynxman> Daviey: I think it was package-N.N+git~checkouthash right?
<Daviey> lynxman: date 20120105-shorthash is better IMO.
<lynxman> Daviey: ah that's the one, thanks :)
<Daviey> date will always increment in value, hash will not.
<Daviey> perhaps s/-/.
<lynxman> Daviey: indeed, that was the issue
<lynxman> Daviey: building new mcollective-plugins, so the version will be 0.0.0~git20120105-9b90c2b-0ubuntu1
<Daviey> lynxman: so one example, mythtv (2:0.24.0+fixes.20111207.40f3bae-0ubuntu1)
<lynxman> Daviey: would that be okay? :)
<lynxman> Daviey: ah ok, dot
<Daviey> yeah, but maybe use a . instead of -
<Daviey> winner
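The scheme Daviey and lynxman settle on (date first so versions always sort upward, a dot rather than a dash before the hash) can be sketched as a tiny script. The values are stand-ins mirroring lynxman's example; in a real checkout the date and hash would come from git itself, as the comments suggest.

```shell
#!/bin/sh
# Build a Debian-style snapshot version string from a commit date and
# short hash, per the discussion above. In a real checkout these would
# come from git, e.g. (syntax may vary by git version):
#   DATE=$(git log -1 --date=short --format=%cd | tr -d -)
#   HASH=$(git rev-parse --short HEAD)
DATE=20120105      # stand-in values matching lynxman's example
HASH=9b90c2b
UPSTREAM=0.0.0
VERSION="${UPSTREAM}~git${DATE}.${HASH}-0ubuntu1"
echo "$VERSION"
```

Because the date component increments monotonically, dpkg's version comparison always treats a newer snapshot as an upgrade, which a bare hash cannot guarantee.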
<lynxman> Daviey: :)
<stgraber> hallyn: around?
<hallyn> stgraber: sneaking off soon for breakfast.  what's up?
<stgraber> hallyn: was wondering if you had an idea why we don't get a console on /dev/console with a clean precise container
<stgraber> hallyn: getty is spawned but nothing shows up
<stgraber> hallyn: also lxc-console seems to behave a bit differently (like if it was resetting the console or something like that)
<hallyn> is that new?
<hallyn> someone mentioned something similar on lxc-users m-l i think
<Daviey> smoser: are you around?
<hallyn> stgraber: in short no i've not looked into it
<stgraber> hallyn: I only started noticing it a few weeks ago. I don't really use precise containers that much
<hallyn> oneiric containers on precise don't do that?
<stgraber> hallyn: well, at least lucid containers on precise don't
<hallyn> it's possible i messed something up with the console.conf...
<hallyn> (creating some new containers to play with)
<stgraber> hallyn: found the problem I think, well, at least what caused it. Downgrading util-linux to Oneiric's version fixes it
<hallyn> yuck
 * stgraber starts diffing the two source packages
<yakster> anyone know how to fix "/var/run/dbus/system_bus_socket: No such file or directory" when trying to install mysql?
<zul> Daviey: hes on holiday this week
<Daviey> zul: who approved THAT?!
<zul> gah?
<RoAkSoAx> lol
<uvirtbot> New bug: #912352 in nova (main) "euca-allocate-address is not acepting any parameters" [Undecided,New] https://launchpad.net/bugs/912352
<uvirtbot> New bug: #912355 in euca2ools (main) "euca-allocate-address is not acepting any parameters" [Undecided,New] https://launchpad.net/bugs/912355
<Koheleth> ubuntu support with webmin ok?
<Koheleth> or vice versa come to that
<Koheleth> dumping Plesk
<Koheleth> worst cp gui there is
<EvilResistance> !webmin | Koheleth
<ubottu> Koheleth: webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<uvirtbot> New bug: #912376 in elinks (universe) "elinks unmodified config promoted for confirmation on upgrade" [Undecided,New] https://launchpad.net/bugs/912376
<pmatulis> Koheleth: no, do not use webmin
<SpamapS> Daviey: hrm.. not sure why mysql would bug you repeatedly for a password on upgrade.. it shouldn't even *ask* on upgrade.
<SpamapS> Daviey: possible we need to port the mysql-5.1 values forward to 5.5.. hrm
 * SpamapS is hrming a lot
<SpamapS> no.. we use the generic mysql-server/root_password
<SpamapS> so debconf should be silent
<uvirtbot> New bug: #912403 in bind9 (main) "package bind9 1:9.7.3.dfsg-1ubuntu2.3 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/912403
<adam_g> RoAkSoAx: ping
<Daviey> Can everyone make sure blueprints are up to date please? :)
<Daviey> rbasak / jamespage: ^^ before you go for the day please? :)
<Daviey> (sorry for the late reminder)
<RoAkSoAx> adam_g: pong
<Daviey> Spamaps,  rbasak and hazmat: http://status.ubuntu.com/ubuntu-precise/group/topic-precise-servercloud-service-orchestration.html
<Daviey> ^^ can you update that please :)
<uvirtbot> Daviey: Error: "^" is not a valid command.
<SpamapS> Daviey: I'm working on the juju mir right now.. will mark them all as INPROGRESS
<RoAkSoAx> Daviey: should we start preparing orchestra/cobbler for MIR too or should I just wait post sprint?
<RoAkSoAx> err rally
<adam_g> RoAkSoAx: were you ever able to resolve that grub issue you had yesterday with orchestra? the same thing is blocking precise installs on the openstack cluster
<RoAkSoAx> adam_g: nope, still having the issue as of this morning. I'm waiting for this afternoon to see if it gets solved
<SpamapS> RoAkSoAx: MIR's are mostly just gathering of information for the MIR team to digest.. so I'd say go for it, unless you think we won't be MIR'ing cobbler. :)
<adam_g> RoAkSoAx: is there a bug for it somewhere?
<rbasak> Daviey, nothing to update. I think I've just about cleared most other outstanding stuff, was hoping to make a start with the pandaboard tomorrow.
<RoAkSoAx> SpamapS: indeed I guess I'll start then ;)
<RoAkSoAx> adam_g: hold on let me retry
<smoser> Daviey, i'm here now
<adam_g> RoAkSoAx: doing the same
<adam_g> RoAkSoAx: same thing
<RoAkSoAx> adam_g: I can't seem to find a bug with the issue, so i guess we'll have to file one
<adam_g> http://paste.ubuntu.com/794026/
<adam_g> RoAkSoAx: syslog ^
<RoAkSoAx> adam_g: yeah same error
<RoAkSoAx> adam_g: have you tried with i386?
<adam_g> RoAkSoAx: i have not
<adam_g> RoAkSoAx: filing a bug against grub-installer now
<hallyn> grr - the latest precise server mini iso fails on grub install
<hallyn> oh heh
<adam_g> :)
<hallyn> man, lag is insane.  takes like 30 seconds for chars to show up
<adam_g> hallyn: are you doing anything special in the preseed (orchestra / juju-wise) or just a straight up precise install?
<hallyn> adam_g: i've had this both with and without preseed
<adam_g> hallyn: ok, i guess thats "good"
<hallyn> first tried 'vm-new' from the vm-tools, and then just by hand with kvm on cmdline
<hallyn> on the bright side, you can just reboot from cd, rescue, and install grub2 by hand :)  so all the more i wonder why it fails in installer
<adam_g> hallyn: Bug #912431 if you want to weigh in / confirm
<uvirtbot> Launchpad bug 912431 in grub-installer "Preseeded 12.04 grub-install failed: Wrong number of args: mapdevfs <path>" [Undecided,New] https://launchpad.net/bugs/912431
<adam_g> RoAkSoAx: ^
<hallyn> thanks, will do
<hallyn> stgraber: you know, I wonder if the lxc /dev/console issue is related to all my recent server installs popping up on an empty console
<hallyn> oh, no, i bet that's bc it still wants to jump the X window even though there is no x
<hallyn> yeah, it's the vt.handoff=7 in /proc/cmdline.  getting rid of that fixes that.
<hallyn> oddly, that bit isn't in /etc/default/grub.
<yakster> hey all, have an issue installing mysql…. says that it cannot connect to dbus
<yakster> start: Unable to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
<SpamapS> yakster: installing?
<yakster> ok yeah now its not installing … sorry got ahead of myself
<SpamapS> yakster: that should only happen when you try to run 'start mysql' as a non-root user
<yakster> can't install mysql-server cause mysql-server-5.1 depends….
<SpamapS> yakster: please explain what you tried to do, what you expected to happen, and what actually happened.
<yakster> ok….
<yakster> installed mysql, and it failed, told me that
<yakster> Package mysql-server-5.1 is not configured yet.
<yakster> dpkg: error processing mysql-server (--configure):
<yakster>  dependency problems - leaving unconfigured
<SpamapS> ok, try 'sudo dpkg --configure -a'
<yakster> ok
<yakster> Setting up mysql-server-5.1 (5.1.54-1ubuntu4) ...
<SpamapS> don't paste it all here
<yakster> pastebin is down
<yakster> sry
<SpamapS> http://paste.ubuntu.com .. :)
<yakster> http://paste.ubuntu.com/794071/
<smoser> yakster, also, fyi, 'pastebin' command is your friend.
<smoser> er... pastebinit.
<smoser> utlemming, https://code.launchpad.net/~smoser/ubuntu-on-ec2/ec2-publishing-scripts.detach-vol-cleanups/+merge/87668
 * utlemming looks
<yakster> what is the pastebinit syntax?
<yakster> nm i got it
<yakster> SpamapS: anything?
<SpamapS> yakster: sorry I got distracted :-P
<yakster> np
<yakster> says it can't continue because of a previous failure
<SpamapS> yakster: ok, look in /var/log/mysql/* .. there may be some clues there
<SpamapS> yakster: most likely problem is an error in /etc/my.cnf
<SpamapS> err
<SpamapS> /etc/mysql/my.cnf
<yakster> I "rm -R /etc/mysql" when trying to reinstall from scratch
<SpamapS> yakster: that would be the problem then
<SpamapS> yakster: dpkg considers that intentional, and will leave the files there
<SpamapS> err, leave the files "deleted"
<yakster> so what….
<SpamapS> yakster: try using 'dpkg -i /var/cache/apt/archives/mysql-server-5.1*.deb --force-confmiss
<SpamapS> err, without the extra ' ;)
<RoAkSoAx> adam_g: how did you manage to pastebin the syslog from the failed installation?
<SpamapS> yakster: next time use 'apt-get purge mysql-server-5.1' if you want to get rid of the config files.
<SpamapS> yakster: though that will also remove the databases.
<yakster> didn't know that…. but now i do
<yakster> ok still mysql-server-5.1, errors were encountered
<SpamapS> yakster: it's one of the more confusing things about dpkg... but it's intentional since sometimes you do want to rm a config file and not have it come back.
<SpamapS> yakster: is /etc/mysql back?
<SpamapS> yakster: you may need to do mysql-common as well
<yakster> not yet
<yakster> ok mysql-common
<yakster> ok
<yakster> yes, mysql is back
<adam_g> RoAkSoAx: there is a menu option to dump debug logs, and the installer serves them to you via http
<RoAkSoAx> adam_g: where's that menu option?
<adam_g> RoAkSoAx: somewhere at the bottom, near 'execute a shell'
<RoAkSoAx> adam_g: ok cool ;)
<smoser> utlemming, please review that above.
<yakster> with all that you told me, here's what I finally did to get it all to work…. apt-get purge mysql*
<yakster> apt-get install mysql
<yakster> and now it is working
<utlemming> smoser: merging now
<yakster> thank you
<Combatjuan> I'm looking at iotop and finding that the Total DISK WRITE is often an order of magnitude or more larger than the actual sum of disk writes per process.  What does that mean?
<Combatjuan> The only thing I can figure is that it means that it's writing to swap, however iostat indicates that swap is not being touched.
<SpamapS> Combatjuan: journalling is also an issue
<Combatjuan> SpamapS - Interesting.  I'm seeing 20M/s in iotop (that's the total, the combined children come to about 1M/s).  That seems like an awful lot of journalling.  Could it possibly even be anywhere near that high?
<SpamapS> Combatjuan: are you using any kernel based services like kernel nfs server maybe?
<SpamapS> Combatjuan: also, dumb question.. RAID?
<Combatjuan> Not sure about kernel-based services.  Yes, it is RAID.  Not sure why that's a dumb question though.
<Combatjuan> (Hardware RAID)
<SpamapS> Combatjuan: another possibility.. if your processes are writing *tiny* things.. your disks are writing *blocks* .. so if writes are 400 bytes each, but you have 4k blocks.. that would explain it
<SpamapS> especially if things are highly random and not sequential
<Combatjuan> SpamapS - If the per-process settings are in fact per byte and the total is a sum of blocks, that would definitely seem like a possibility.
<Combatjuan> s/settings/numbers
<SpamapS> Combatjuan: I don't know for sure how iotop works, but its at least a theory. :)
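SpamapS's block-granularity theory is easy to sanity-check with arithmetic: if every small write dirties a whole filesystem block, the disk-level figure is a multiple of the process-level one. A back-of-the-envelope sketch, with all numbers hypothetical:

```shell
#!/bin/sh
# Back-of-the-envelope for the write-amplification theory above: many
# small application writes each dirty a whole filesystem block.
WRITE_SIZE=400        # bytes per write() call (hypothetical)
BLOCK_SIZE=4096       # filesystem block size
WRITES_PER_SEC=2560   # hypothetical write rate

app_bytes=$((WRITE_SIZE * WRITES_PER_SEC))    # what the processes report
disk_bytes=$((BLOCK_SIZE * WRITES_PER_SEC))   # what actually hits the disk
echo "app: ${app_bytes} B/s, disk: ${disk_bytes} B/s"
echo "amplification: roughly $((disk_bytes / app_bytes))x"
```

A 10x gap from block rounding alone is in the same ballpark as the 1M/s-vs-20M/s discrepancy reported here, before journalling or RAID overhead is even counted.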
<SpamapS> Combatjuan: I'd suggest stracing your processes and looking at how big the write's are.. and also looking at how often they fsync (or if the files are opened O_SYNC).
<SpamapS> Combatjuan: strace -e trace=fsync,write -p xxxx
<Combatjuan> Excellent idea.  Thanks SpamapS.
<Combatjuan> It's a postgres server.  It looks like most every write is exactly one block (which I think is pretty normal).  Sometimes a few blocks at a time.  Each line is an fsync though?
<SpamapS> yeah with postgres I'd expect it to be fairly efficient with the writes
<SpamapS> Combatjuan: perhaps iotop is misleading you about the total writes of all processes
<Daviey> RoAkSoAx: wait for the Rally.
<SpamapS> Combatjuan: note.. that the cheapest solution for all write problems (even cheaper than diagnosing like you're doing now) is to go buy a FusionIO card and stick your data on that. ;)
<Combatjuan> That's my best guess.  There are times when it claims that it wrote 50M/s.  It seems to use bytes though it doesn't explicitly say so everywhere.  I don't think the raid (simple mirror) can even do that.
<Kyle__> How do you put a desktop environment on ubuntu-server, and have it only start when you run startx?  I don't want it running all the time
<Combatjuan> SpamapS: Yeah, I do love solving things with hardware even though that's hard to do from half a continent away.  But this is the first time that the server has ever seemed to be disk-bound instead of something else.  So I'm not ready to do that yet.
<RoAkSoAx> Daviey: OK
<Kyle__> upstart seems so odd to me.
<SpamapS> Kyle__: you're not alone. :)
<SpamapS> Kyle__: its quite elegant once you get used to it though.
<thyrant> hi guys
<semiosis> hi all, i have a question about contributing ubuntu-specific improvements to a debian package
<semiosis> specifically, how would I go about doing that?
<semiosis> the package, glusterfs-server, is included in ubuntu from debian, but there's an ubuntu-specific bug resulting from our use of upstart/mountall.
<thyrant> I am thinking about updating my ubuntu server 9.10 .. how much trouble might I be in if I do so?
<semiosis> i have an upstart job which solves the problem, which can be added to the packaging
<semiosis> see https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/876648
<uvirtbot> Launchpad bug 876648 in glusterfs "Unable to mount local glusterfs volume at boot" [Undecided,New]
<Kyle__> thyrant: From 9.10 to 11.10?  They changed what they call the DHCP and DNS servers, even though they're the stock ISC ones in both versions.  I can't recall a whole lotta differences.
<thyrant> stock ISC ?
<Kyle__> thyrant: internet systems consortium.  Their DHCP server is the standard, and that's what debian ubuntu and most every other distro has always come with
<Pici> thyrant: You'll need to go through all the intermediate releases if you upgrade from 9.10 to 11.10.  May I suggest just upgrading to 10.04 and then to 12.04 when it is released (you can upgrade from LTS to LTS)
<Kyle__> thyrant: They also make BIND
<thyrant> the server works fine though.. I only wanted to update to get a supported version so I can install bittorrent. Can't figure out how to do it in 9.10
<Pici> thyrant: Then just upgrade to 10.04
<thyrant> thanks I'll do that
<RoAkSoAx> adam_g: do you have any orchestra server with dnsmasq?
<RoAkSoAx> in the lab
<RoAkSoAx> adam_g: nevermind
<cjz> can anyone help with a postfix issue, im trying to get amazon ses working through postfix and id like to up the debugging to see what pipe is actually doing
<cjz> I've done -vvvv on the pipe daemon entry but it's not showing how it's calling the actual argv command
<Patrickdk> !verbose
<Patrickdk> oh heh, wrong channel
<Patrickdk> increasing verbose in postfix is hardly ever needed by anyone
<semiosis> cjz: you know amazon recently announced SMTP support for SES?  so your apps can connect directly to SES via SMTP and not have to use a postfix gateway anymore
<cjz> my app is nagios
<cjz> would that work?
<semiosis> i'm not sure, i'm still using the postfix gateway :D
<semiosis> cjz: here's my config for example: http://pastie.org/3133795
<semiosis> cjz: i found the amazon-provided perl scripts to be completely worthless, so i wrote this nice & clean python script using boto (requires boto 2.0+) to do the ses delivery
<semiosis> cjz: however, thinking about this again, since SES now supports SMTP itself, you can probably skip the whole pipe-to-a-script method and just have postfix deliver to another SMTP server which is SES
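A sketch of the direct-SMTP variant semiosis mentions: instead of piping to a script, point Postfix's relayhost at SES's SMTP endpoint in main.cf. The endpoint hostname, port, and parameters below are illustrative and should be verified against current AWS and Postfix documentation.

```
# /etc/postfix/main.cf -- relay all outbound mail through SES over
# authenticated, encrypted SMTP (endpoint/port are illustrative)
relayhost = [email-smtp.us-east-1.amazonaws.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

/etc/postfix/sasl_passwd would hold the SES SMTP credentials for that endpoint and be compiled with postmap before reloading Postfix.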
<newbie|2> hi all
<newbie|2> i've a question about ldap configuration with proftpd
<newbie|2> when i try to connect on my ftp, i've this message : "invalid dn syntaxt" in proftpd.log
<newbie|2> syntax*
<semiosis> cjz: oops just realized that should be master.cf, not main.cf, in the pastie link i just sent
<newbie|2> my ldap server is a windows serveur 2003 active directory
<jwac> hello...anyone active?
<jwac> yo anyone got a sec?  I have a quick question about nullmailer
<SpamapS> jamespage: around?
<SpamapS> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<SpamapS> jwac: ^^
<jwac> k
<Kyle__> How long does patience take?
<Kyle__> !patience
<ubottu> Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com or http://ubuntuforums.org or http://askubuntu.com/
<SpamapS> Kyle__: I'll tell you tomorrow ;)
<Kyle__> !impatience
<Kyle__> Hu.  It really should have that one....
<robbiew> !yell
<ubottu> PLEASE DON'T SHOUT! We can read lowercase too.
<jwac> I use nullmailer.  Whenever I send an email from the CLI it just sits in my queue, not autosending.  I'm thinking it may be the permissions on the queue folder because I mistakenly deleted it and had to recreate it.  I want to start by asking: can anyone find out what the default permissions on the nullmailer queue folder are for me please?
<thyrant> hey guys, me with ubuntu server 9.10 again.. My question is now: what is the best procedure to backup the whole system, repartition the drives and restore the system to exactly the same state as before?
<Patrickdk> I thought 9.10 went unsupported a long time ago
<Kyle__> Is anyone here fairly familiar with dell's DRAC cards?
<cjz> woot
<cjz> semiosis:  that was it, got it working thank you very much
<semiosis> cjz: glad to hear it, you're welcome
#ubuntu-server 2012-01-06
<uvirtbot> New bug: #912588 in ocfs2-tools (main) "mount.ocfs2 doesn't accept mount option "uhelper=udisks"" [Undecided,New] https://launchpad.net/bugs/912588
<Xaev> Hi all :) Anyone do much work with Orchestra and Juju yet? Having a problem with what I thought would be some simple customizations to my juju.preseed file
<twb> No, but
<twb> !any
<twb> !anyone
<ubottu> A high percentage of the first questions asked in this channel start with "Does anyone/anybody..." Why not ask your next question (the real one) and find out? See also !details, !gq, and !poll.
<Xaev> Ok :) I'm editing juju.preseed to customize the initial user/pass (passwd/user-fullname, passwd/username, and passwd/user-password-crypted). The change works fine, and machines that are imaged using a profile that references juju.preseed come up with the expected initial username and password. However, "juju status" fails with "Invalid SSH key" errors.
<pythonirc101> If  I have an ubuntu server and a domina name (mydomain.com) -- can I tell a free dns system (zoneedit for instance) -- to forward all *.mydomain.com accesses to my IP?
<twb> pythonirc101: yes
<twb> Get the registrar to add mydomain.com NS <zoneedit IP> to the com domain, have zoneedit serve a mydomain.com zone that includes @ IN A <your IP> and * IN A <your IP> or so
<pythonirc101> twb: will this help : http://www.namecheap.com/support/knowledgebase/article.aspx/597/46/how-can-i-setup-a-catchall-wildcard-subdomain ?
<twb> No
<pythonirc101> what's the difference?
<twb> The difference is that's some company's website about that company.
<pythonirc101> twb: I bought the domain from them
<twb> "Please note that you can only setup this option for the domain if it's using our default nameservers."
<twb> i.e. not zoneedit
<pythonirc101> I have the option of pointing the dns to zoneedit, then to my ip. or I could use the trick that they mention in that article
<pythonirc101> am not sure if url redirection is the same thing as dns though
<twb> It is not.
<pythonirc101> twb: I'm currently using their dns, but I can easily point it to zoneedit.
<twb> Supposing your target host is 1.2.3.4, and the domain is example.net.  URL redirection involves adding a record "example.net. IN A 5.6.7.8" and running a web server on 5.6.7.8 that responds to all requests with "go talk to 1.2.3.4"
<pythonirc101> twb: I really don't know if I should go with zoneedit vs their own dns for what I need
<pythonirc101> indeed
<twb> pythonirc101: it doesn't really matter which you use
<pythonirc101> ok, in my case, I need https://joe.mydomain.com --> 1.2.3.4
<twb> The whole space is pretty shitty, because the components are cheap and not many people understand it.  So all the vendors try to basically trick you into thinking you need to buy extra components from them.
<pythonirc101> also https://smith.mydomain.com --> 1.2.3.4
<pythonirc101> I still have to figure out how to tell which hostname one used to come to my website...but I think https can do that...but first thing is to point
<twb> e.g. the registrar will try to convince you to get your zone hosting ("name server") from them as well
<pythonirc101> the registrar gets my $8/year for now. that is it.
<twb> pythonirc101: IMO get https working first by IP address, then worry about zones
<pythonirc101> ok -- @ IN A <your IP> and * IN A <your IP> or so -- what do I type in subdomain? Its right now "@"
<twb> @ means "myself", e.g. if the zonefile is for example.net, @ expands to example.net.
<twb> * means all subdomains (less any subdomains that are explicitly mentioned).
<twb> it would be better to define @ and www only, rather than @ and *
<pythonirc101> @ - NS - 7200 - ns19.zoneedit.com -- is already defined.
<pythonirc101> so I need : * A 7200 1.2.3.4?
<twb> That is the NS (name server) RR.
<twb> You want an A (address) RR
<twb> I strongly recommend you go read some wikipedia articles on zonefiles and DNS so you understand what all this means.
<pythonirc101> A = IPv4 -- since mine is static ip v4
<twb> Yes, A RR is for IPv4 addresses; AAAA RR is for IPv6 addresses
<pythonirc101> from what you told me @ and * should both point to my ip, so should perhaps www ?
<twb> I would recommend @ and www, no *
<pythonirc101> twb: in that case, how would I route joe.example.com --> to my ip. Same with smith.example.com
<pythonirc101> I want them both come to my server
<twb> Unless you have hundreds of such examples, I would name them explicitly
<twb> In fact, it would be best to simply use CNAMEs for those back to www, if they are just http vhosts
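Putting twb's suggested records together, a minimal zonefile for example.net might look like the sketch below. The IPs, nameserver, and SOA timers are placeholders from the discussion, and `fred` shows the explicit-CNAME style twb prefers over the wildcard.

```
; example.net zone -- sketch of the records discussed above
; (1.2.3.4 and the SOA values are placeholders)
$ORIGIN example.net.
$TTL 7200
@     IN  SOA   ns19.zoneedit.com. hostmaster.example.net. (
                2012010601 3600 900 604800 7200 )
@     IN  NS    ns19.zoneedit.com.
@     IN  A     1.2.3.4          ; example.net itself
www   IN  A     1.2.3.4
*     IN  A     1.2.3.4          ; wildcard: joe.example.net, smith.example.net, ...
fred  IN  CNAME www              ; explicit alias for an http vhost
```

Explicitly defined names like `fred` take precedence over the `*` record, so the two styles can coexist while individual subdomains are migrated to explicit entries.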
<twb> If you run "dig AXFR cyber.com.au @ns1.cyber.com.au" you can see one of my domains
<pythonirc101> twb:  I'm trying to have one webserver per user on my machine...and have 100s of users.
<twb> pythonirc101: you can't.
<pythonirc101> so I do need a *
<twb> You can't run one webserver per user unless you have one IP per user
<twb> Why do you want to?
<pythonirc101> my webserver proxies each user to a backend webserver
<pythonirc101> twb: so essentially my ubuntu box is just a proxy for 100s of webservers behind a firewall - do I make sense?
<twb> I guess so, but IMO you should just use www.example.net/~fred/
<twb> And disallow PHP so the users can't fuck up the webserver with broken scripts.
<pythonirc101> we used to use http://www.example.net/fred
<pythonirc101> but we have a new guy who is writing the proxy code, and he says he can do fred.example.net
<twb> Why do you want to?
<twb> It's a waste of resources
<pythonirc101> this is what I was told: setup a wildcard dns
<pythonirc101> and I guess you told me exactly how to setup a wildcard dns, right?
<twb> I did.
<pythonirc101> twb: what's the difference between url forwarding and dns setup? I can ask dns to point http://*.example.net --> IP or I can say redirect all URLs of http://*.example.net --> IP.
<twb> 13:09 <twb> Supposing your target host is 1.2.3.4, and the domain is example.net.  URL redirection involves adding a record "example.net. IN A 5.6.7.8" and running a web server on 5.6.7.8 that responds to all requests with "go talk to 1.2.3.4"
<twb> URL redirecting should be avoided; it's less efficient than plain DNS
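The records twb describes could look like the fragment below, printed as a dry run. This is a sketch only: the names and the address 192.0.2.10 (from the documentation range) are invented, not from the discussion.

```shell
# BIND-style zonefile fragment for the setup discussed above: explicit A
# records for @ and www, a per-user name as a CNAME back to www, and the
# wildcard that pythonirc101 ends up needing for hundreds of *.example.net
# names. All names and the IP are assumptions for illustration.
zone='
@    IN A       192.0.2.10
www  IN A       192.0.2.10
joe  IN CNAME   www
*    IN A       192.0.2.10'
echo "$zone"
```

With the wildcard in place, joe.example.net and smith.example.net both resolve to the same address, and the proxy in front of the backends picks the right one from the Host: header.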
<pythonirc101> I could do it with namecheap itself... :)
<twb> At this point, I've lost interest.
<pythonirc101> namecheap is awesome :)
<twb> Registrars are not awesome.
<cwillu_at_work> twb, even godaddy?
<twb> They get to charge money for no reason other than a RIR handed them write access to TLDs
<pythonirc101> twb: how much money does one have to pay to get that access?
<cwillu_at_work> pythonirc101, more than you have available :p
<twb> AFAIK you don't have to pay any money, you just have to be working on a research-grade internet at a lab in your country circa 1980
<twb> Maybe that's only the way RIRs get privileged access; I know more about AUNIC/APNIC history than godaddy or whatever
<twb> (I'm the master server for aunic.net, for purely historical reasons ;-)
<pythonirc101> cwillu_at_work: you should have said: "I don't know" :)
<cwillu_at_work> pythonirc101, what, you have enough money to buy a registrar?
<pythonirc101> twb: Thanks for all the info today :)
<cwillu_at_work> pythonirc101, I'm not the one who's been trying to set up hosting by programmatically modifying sshd authorized_keys files
<pythonirc101> cwillu_at_work: indeed I was trying -- you've anything against trying?
 * twb blinks
<twb> cwillu_at_work: do I even want to know?
<pythonirc101> *smiles*
<cwillu_at_work> pythonirc101, no, but you were also quite resistant to explaining what you were actually trying to accomplish
<cwillu_at_work> which could have easily saved you the last week of exploration on this :p
<pythonirc101> cwillu_at_work: Indeed what I do is my business, don't you think? :)
<twb> pythonirc101: not when you do it wrong
<cwillu_at_work> pythonirc101, and maintaining the snr in the support channels I frequent is mine :p
<pythonirc101> * goes back to work * - no noise. :)
<twb> Especially, not when you do it wrong on a shared network like the internet.
<cwillu_at_work> after all, every general web server ever has the ability to reverse proxy requests to other locations (including other http servers only listening on loopback on the local machine)
<cwillu_at_work> hell, twisted can do it quite easily, and would be trivial to programmatically configure the forwarding at runtime
 * cwillu_at_work notes that #twisted is one of the places pythonirc101 was asking about how to implement sshd with the "required" features :p
<cwillu_at_work> twb, I believe that will answer your question as well :)
<cwillu_at_work> (specifically:  no, you don't really want to know :p)
<twb> Yeah I switched off when I heard "twisted"
<cwillu_at_work> there's nothing wrong with twisted :p
<pythonirc101> cwillu_at_work: if you are free, I would love to get your help on a twisted program for starters? :)
<cwillu_at_work> pythonirc101, I was last week.
<twb> cwillu_at_work: it's not a C library? ;-P
<cwillu_at_work> (when I was asking :p)
<cwillu_at_work> now I'm rather busy :p
<cwillu_at_work> twb, sorry?
<twb> A python library is hard to use unless you're writing in Python
<twb> A C library has less lock-in
<cwillu_at_work> I'm sorry, you want a c wrapper around the standard non-blocking unix stuff?
<twb> i.e. all else being equal, I'd rather have a library or framework implemented in C than <not C>
<cwillu_at_work> I think you're confused
<pythonirc101> cwillu_at_work: simple twisted question for you: How do I do a "ssh -R"  using twisted with keys in files? -- and the keys have to be generated using twisted / python as well. Can you do this in python? :)
<cwillu_at_work> pythonirc101, you can, and I'm not in the mood to explain how
<pythonirc101> twb: I agree. :)
<cwillu_at_work> twb, pointing out that two thirds of the conversation I've seen was in python related channels, I think the use of python is probably a given
<twb> Maybe, but I'm not
<pythonirc101> cwillu_at_work: I asked a binary question :) not for an explanation :) And I know the answer is "no", unless you prove me wrong :)
<cwillu_at_work> lock-in doesn't even enter into it at that point
<cwillu_at_work> pythonirc101, given that I've done it...
<pythonirc101> cwillu_at_work: you've not.
<pythonirc101> prove it? show the code ? :)
<cwillu_at_work> you asked a binary question, I gave an answer
<pythonirc101> cwillu_at_work: perhaps you should ignore me in the rooms that I frequent.
<pythonirc101> cwillu_at_work: conversation with you only kills my time? -- if you can't help -- why put your nose in the conversation?
<cwillu_at_work> pythonirc101, I invite you to look at twisted/conch/ssh/channel.py and forward.py
<pythonirc101> cwillu_at_work: you think I've not
<pythonirc101> I need a way to generate the keys cross platform in python before I can even think of doing a ssh -R -- rule number one -- no subprocess or os.system
<pythonirc101> problem number two -- ssh-rsa keys are not the same as pem or ssl keys
<cwillu_at_work> there are crypto libs for python; that has nothing to do with twisted though (and so asking in #twisted was somewhat amusing :p)
<pythonirc101> cwillu_at_work: someone who writes a ssh client / server - and says, hey keys -- get it from somewhere else -- I can't take that seriously
<cwillu_at_work> still have no idea why you're trying to use ssh for the proxying though
<pythonirc101> because I thought it was easy to do this from inside python -- turns out its not
<pythonirc101> since I can do it in one line from outside python
<twb> cwillu_at_work: is he the guy that wanted to use ssh -w in production?
<pythonirc101> twb: For beta --> ssh -R actually.
<koolhead17> hi all
<koolhead17> grrr timezone timezone !!
<SpamapS> koolhead17: many of us will be in UTC+1 next week
<koolhead17> SpamapS: some come coming ahead? :P
<koolhead17> *conference
<SpamapS> koolhead17: just a meeting
<koolhead17> SpamapS: can i participate remotely !! :P
<SpamapS> koolhead17: its a company meeting
<SpamapS> koolhead17: but we'll still be in here to chat w/ everyone who isn't there. :)
<koolhead17> SpamapS: ooh. okey!!
<koolhead17> SpamapS: sorry for not poking you for help  with the php bug. I need to get back to it soon.
<SpamapS> koolhead17: I have plenty to do, just get to it when you can
<koolhead17> SpamapS: am currently working on one juju charm and get to see it working on eucalyptus in way you suggested
<koolhead17> i dont want to be very optimistic but i am thinking to give talk on Juju at one of the conf here, if it gets selected
<cloudgeek> iptables -I INPUT -p tcp --syn --dport 22 -m connlimit -- connlimit  - above 2 -j REJECT
<cloudgeek> iptables v1.4.10: You must specify "--connlimit-above"
<cloudgeek> Try `iptables -h' or 'iptables --help' for more information.
<cloudgeek> how to fix ir
<cloudgeek> it
<SpamapS> cloudgeek: you have a space between -- and connlimit
<SpamapS> -m connlimit --connlimit-above 2
<cloudgeek> okay trying
<SpamapS> cloudgeek: also, you may want to add --reject-with tcp-reset
<cloudgeek> SpamapS: can you give me the full command line? i also tried removing the space between -- and connlimit, and tried --connlimit-above 2
<cloudgeek> SpamapS: thanks, it's working, yeah :) iptables -I INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 2 -j REJECT
<cloudgeek> now it's working !!
<SpamapS> cloudgeek: unless you add --reject-with tcp-reset, some firewalls will drop the ICMP errors your firewall is sending.
<cloudgeek> SpamapS: do i also need to add this to the same rule?
<cloudgeek> iptables -I INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 2 -j REJECT
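Putting SpamapS's two corrections together, the full rule would be the one below. It is printed rather than executed here, since applying it needs root.

```shell
# Corrected connlimit rule from the discussion above: no space inside
# --connlimit-above, plus --reject-with tcp-reset so stateful firewalls
# along the path don't silently drop the default ICMP error.
rule='iptables -I INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 2 -j REJECT --reject-with tcp-reset'
echo "$rule"   # apply with: sudo sh -c "$rule"
```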
<twb> Is there a CLI utility that can read and write xattrs? apropos(1) and apt-cache search aren't finding anything
<SpamapS> twb: looks like there's a python library if nothing else
<twb> That's kinda plan B
<SpamapS> twb: I believe it includes a program 'xattr'
<SpamapS> twb: python-xattr
<twb> Hmm, /me checks apt-file
<twb> There's also python-pyxattr, I think a separate implementation
<twb> SpamapS: yep, you're right.  I'll try that
<twb> My goal is to make a little shell utility that increments/decrements a target file's "score", stored in xattrs.  And then to say "show me a random file weighted by score"
<twb> Argh, xattrs aren't supported on a tmpfs?
<twb> And mount -o remount has failed me: http://paste.debian.net/151194/
<twb> OK that is way too slow
<twb> It takes 3 seconds to print an xattr from 10 files!
<twb> ls takes 0.07s to print dirent data, that's the kind of speed I'll need
<SpamapS> strace to the rescue?
<twb> Maybe gnu find can just print it, or stat can...
<twb> SpamapS: well the cost is probably the ramp-up time for python
<SpamapS> yeah don't call it for each file! ;)
<SpamapS> twb: the C interface is pretty simple.. if you need speed
<twb> time &>/dev/null xattr x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x  ==> real        0m0.467s
<twb> time &>/dev/null ls x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x     ==> real        0m0.017s
<twb> I guess I will need to write C for this job.
<SpamapS> something weird with your system
<SpamapS> real	0m0.106s
<SpamapS> hmm actually no the same speed ratio on my system.. 0.003s for ls
<twb> ARM
<twb> And an SD card
<SpamapS> ah haha
<SpamapS> well strace -c says 60% of the time was spent int he stat syscall
<SpamapS> wait no.. in getdents
<twb> I blame python until there's conclusive evidence to the contrary :P
<SpamapS> actually it varies *wildly*
<SpamapS> strace -c xattr x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x
<SpamapS> twb: pretty interesting results, but not consistent
<SpamapS> hmmmm..
<SpamapS> twb: I think python may be searching its path for 'x'
<twb> haha
<twb> I will RTFS after I finish clocking this batman game
<SpamapS> twb: 1309 calls to 'open'
<twb> SpamapS: more likely its searching its path for pyxattr.py or whatever
<twb> xattr/__init__.py
<SpamapS> twb: indeed, that seems to be it.. like you said.. python spin up time
<twb> Probably what I should do is write xattr(1) in C and leave my main glue code as sh
<SpamapS> 1220 unique paths attempted
<SpamapS> twb: doing 1000 files closes the gap quite a bit
<twb> OK
<twb> But by then you start to hit MAX_CMDLINE_ARGS or whatever
<SpamapS> no such thing exists anymore
<twb> Well, I hit it...
<twb> Not just now, but I'm pretty sure I've hit it in the near past
<SpamapS> I just did 100000 files
<SpamapS> actually.. hrm.. no.. still sucks with 100000 files.. ls is 0.512s, xattr is 2.902s
<SpamapS> twb: it was solved somewhat recently.. last 2 years IIRC
<twb> That ratio is more acceptable tho
<twb> Hum, it just occurred to me there's no tcc on arm
<twb> No C "scripts"
<SpamapS> twb: looks like xattr probably calls stat too often, and I think it might exec something one time
<SpamapS> actually no
<SpamapS> its just that it does stat, and listxattr, where ls can just do stat
<SpamapS> twb: I don't know that rewriting in C is going to save you much
<SpamapS> twb: good luck
 * SpamapS passes out
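The "random file weighted by score" half of twb's plan is independent of where the scores live (xattrs read out however is fastest). Fed lines of "<score> <file>", the pick itself could be sketched as below; the function name and sample file names are made up.

```shell
# Weighted random selection: each file is chosen with probability
# score / total. Input lines are "<score> <filename>".
pick_weighted() {
  awk 'BEGIN { srand() }
       { total += $1; name[NR] = $2; w[NR] = $1 }
       END { r = rand() * total
             for (i = 1; i <= NR; i++) {
               r -= w[i]
               if (r <= 0) { print name[i]; exit }
             } }'
}
printf '%s\n' '3 a.txt' '1 b.txt' | pick_weighted   # a.txt ~75% of the time
```

Keeping this step in awk fits twb's conclusion of leaving the glue code as sh, with only the xattr read itself in C if speed demands it.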
<koolhead17> hi all
<jamespage> morning all
<koolhead17> smoser: hello there
<koolhead17> hey jamespage
<jamespage> morning koolhead17 - how are you today?
<koolhead17> jamespage: doing great thanks!!
<Daviey> jamespage: How is, [james-page] Investigate upstream co-operation from Hortonworks/Cloudera to ensure ongoing collaboration going forward: INPROGRESS
<Daviey> ?
<Daviey> SpamapS: when you see this, what is servercloud-p-juju-mir blocked on?
<jamespage> Daviey: slowly - we might get to a point where that entire spec if moot and can be canned
<jamespage> Daviey: due to discuss next week at the sprint
<Daviey> jamespage: thanks!
<Daviey> rbasak: How is https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-arm-service-orchestration looking?
<rbasak> Daviey: I'm looking at it as we speak. It's still pretty vague for me tbh
<Daviey> rbasak: what part?
<Daviey> jdstrand: How is [jdstrand] rewrite aa-complain and aa-enable/etc. in python and make sure they are installed in base installs: TODO
<Daviey> lynxman: [lynxman] write puppet external node classifier for juju status -> puppet: TODO ?
<rbasak> Daviey: most of the WIs. Eg. juju with LXC seems hardcoded for single machine operation, so the 1xmachine LXC idea would need to be written AFAICT. I should sync with juju people next week on this really
<Daviey> rbasak: Good thinking, there has been work in this area.  I think SpamapS would be the best PoC for that.
<SockPants> hi all
<SockPants> i'm having some basic cli trouble
<SockPants> i'm trying to use grep to search text files for the string: $user['id']
<SockPants> so i used grep -R '$user['\''id'\'']' .
<SockPants> but that still doesn't work
<SockPants> how can I do this?
<SockPants> cd #ubuntu
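SockPants's quoting was actually fine; the problem is that grep treats $, [ and ] as regex metacharacters, so the pattern needs -F (fixed-string matching). A minimal demo, with the file contents invented:

```shell
# -F makes grep match $user['id'] literally instead of as a regex.
demo=$(mktemp)
printf '%s\n' "x = \$user['id'];" "y = \$other;" > "$demo"
grep -F "\$user['id']" "$demo"    # matches only the first line
```

Recursively that becomes grep -RF "\$user['id']" . (the backslash stops the shell expanding $user inside double quotes; the original single-quote escaping works equally well with -F added).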
<lynxman> morning o/
<lynxman> Daviey: thought that one was on SpamapS, can take care of it, its todo indeed :)
<Daviey> lynxman: right, i just saw your name on it :)
<lynxman> Daviey: my name is on very compromising things ;) hehe
<caribou> lynxman: is it possible to run juju on an Openstack cloud or is it still limited to EC2
<caribou> ?
<lynxman> caribou: afaik it's possible, you just need to add some parameters to environments.yaml
<lynxman> caribou: gimme some mins and I'll give them to you ;)
<caribou> lynxman: sure, I'm just playing around with it & the FAQ says that EC2 is the only way right now
<lynxman> caribou: officially that's true
<lynxman> caribou: since the openstack ec2 api is lacking some commands so it's not production ready
<caribou> lynxman: ah, ok
<Daviey> sad, http://is.gd/zm3Vpo
<koolhead17> lynxman: i failed to get it working in my case :( because of my internal infra
<uvirtbot> New bug: #912701 in nova "Openstack Compute with Xen in Ubuntu 11.10 fails to load due to Domain-0 being considered an instance" [Undecided,New] https://launchpad.net/bugs/912701
<koolhead17> Daviey: sir!!
<lynxman> caribou: http://pastebin.ubuntu.com/794738/
<lynxman> caribou: remove the "branch" values as well since it's already in trunk :)
<caribou> lynxman: thanks !
<caribou> lynxman: what should be used for the ec2-key-name ,
<lynxman> caribou: you can either put the key name of a key you've created in openstack or just remove the option and juju will create a new one
<caribou> lynxman: ok
<joze> hello
<uvirtbot> New bug: #912716 in apache2 (main) "[improvement] allow graceful-stop via the init.d script" [Undecided,New] https://launchpad.net/bugs/912716
<joze> could you please tell me what i must do to have a secure openssh connection?
<patdk-lap> run ssh
<patdk-lap> only accept connections to keys that you have verified
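patdk-lap's advice boils down to a few standard sshd_config directives. A minimal fragment, shown as text; on a real system these go in /etc/ssh/sshd_config followed by a reload:

```shell
# Key-only logins: disable password auth so only verified keys get in.
sshd_fragment='
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no'
echo "$sshd_fragment"
```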
<zul> morning
<lynxman> zul: morningtons
<caribou> quick and easy question : Is Ubuntu server using Ubiquity as its installer and, if not, what's the name of Server's installer ?
<caribou> is it a different flavor of Ubiquity ?
<jdstrand> Daviey: re rewrite> you mean, what is the status of it?
<pmatulis> caribou: ubuntu-installer
<caribou> thanks pmatulis
<pmatulis> caribou: sometimes called debian-installer
<koolhead17> pmatulis: :P
<Daviey> jdstrand: right!
<Daviey> jdstrand: just going through the WI's and checking in.
<jdstrand> Daviey: well, *ahem* since it only recently showed up on my list and no one talked to me about it, I'm not sure yet. I hope to take a crack at it next week
<xlinux> <537 |)4748453 6( 57473/V\3/V7> ::= 537 |)4748453 6( <|_|/V5!6/V3|) !/V7363|2 1!73|241>
<jdstrand> Daviey: I hope that didn't come off as *too* snarky. some snarky, yes, too snarky, no ;)
<jdstrand> Daviey: let's go with 'very-slight snarky'
<xlinux>      if (isr != null) isr.close();             } catch (IOException ioe) {                 // TODO: Throw appropriate exception
<soren> xlinux: Stop it.
<soren> Daviey, zul: Are there any plans from your side to adopt the OpenStack packaging from Debian?
<soren> Daviey, zul: If not wholesale, then basing your packaging off of them?
<Daviey> soren: it's up in the air :/
<Daviey> two way communication is currenty poor.
<Daviey> jdstrand: How did it get on your list?
<jdstrand> Daviey: that is the source of the snarkiness. I don't know. someone just gave it to me without discussing it
<jdstrand> well
<Daviey> jdstrand: right, i'm going through my mail log now.
<jdstrand> we might have discussed it at UDS
<Daviey> ahhhh!
<zul> soren: ive cherry picked some stuff from them
<Daviey> jdstrand: is this backtracking the snark? :)
<jdstrand> Daviey: but it wasn't assigned then, and no one asked/reminded me before assigning it to me. ie, I didn't say at UDS that I would do that per se. just that someone should/could
<soren> zul: That's a "no", is it?
<jdstrand> Daviey: re backtracking> not in the least :P I am justifying the slightliness of it :)
<zul> soren: still up in the air though
<jdstrand> Daviey: it isn't a big deal. I was just surprised to see new work items on my list is all
<Daviey> jdstrand: looks like it's been there since at least 10th Nov
<jdstrand> as assigned to me?
<Daviey> jdstrand: right
<jdstrand> I am quite surprised by that but not saying it is outside the realm of possibility
<jdstrand> especially since I was doing various reports and things
<jdstrand> oh well
<Daviey> jdstrand: https://lists.ubuntu.com/archives/ubuntu-server-bugs/2011-November/066450.html
<jdstrand> Daviey: perhaps the bp was formatted in a way that it didn't show up properly. *shrug*
<Daviey> jdstrand: it's possible someone fudged the mail logs? :)
<Daviey> </snarky> :P
<jdstrand> I don't know. maybe it is the new year. I wiped my slate a bit too clean
<jdstrand> maybe the bp wasn't approved or something
 * jdstrand seriously doesn't remember seeing it in the work items tracker
<jdstrand> and I look at that more frequently than I care to admit
<Daviey> jdstrand: it was approved, https://lists.ubuntu.com/archives/ubuntu-server-bugs/2011-November/067295.html :D
<uvirtbot> New bug: #910296 in php5 (main) "Please backport the upstream patch to prevent attacks based on hash collisions" [Medium,Confirmed] https://launchpad.net/bugs/910296
<Daviey> jdstrand: sorry, i shouldn't be enjoying combating snarkyness so much :D
<raubvogel> When you install ubuntu it asks for an account with sudo (and other) rights. For some reason, when I am running synaptic or whatever gui-based program that needs to elevate rights, that is the account that is asked for (as opposed to the ldap account I am logged in as, even though it has sudo rights). Is there a way to change that?
<pmatulis> raubvogel: this behaviour is seen only for gui-based programs?
<raubvogel> pmatulis: yeah. And only a few
<rbasak> hallyn: ok, so lxc-create on armel (panda) fails. I get http://paste.ubuntu.com/795021/, running "lxc-create -n test_container -t ubuntu -f lxc.conf -- -r oneiric". debootstrap on its own works fine. Looks like an apt-get update is failing, maybe because sources.list is wrong? Only lxc-create helpfully wipes what it was doing after the error so I haven't got the sources.list out of it yet
<hallyn> rbasak: you're on precise, right?
<hallyn> rbasak: the work is being done by /usr/lib/lxc/templates/lxc-ubuntu.
<rbasak> hallyn: yes, but juju is hardcoded to oneiric inside the container so that's what I'm testing
<rbasak> hallyn: thanks, I'll dig around in there
<hallyn> rbasak: is your mirror bad?  the archive urls that are failing *should* be fine, right?
<rbasak> hallyn: I think it should  be s/archive/ports/
<hallyn> rbasak: are you taking a pandaboard with you next week?
<rbasak> yes
<hallyn> rbasak: ports.ubuntu.com?  that would be too bad...
<hallyn> i guess we'll need to special-case line 112 in lxc-ubuntu for armel
<rbasak> hallyn: http://paste.ubuntu.com/795030/
<hallyn> rbasak: glad you're trying it out then!  I swear this has worked for me on the arm netbook up to UDS-p...
<rbasak> I'm sure I've seen some special casing of that somewhere else
<hallyn> stgraber: ^
<hallyn> drat.  so i guess powerpc will end up being the same then.
<hallyn> one day
<rbasak> aha
<rbasak> /usr/share/debootstrap/scripts/oneiric for example has special casing at the top for this
<stgraber> hallyn: yeah, I think debootstrap is clever enough but looking at these it looks like your changes for updates and security don't use the same logic
<stgraber> so it indeed used to work until the change to make sure our containers are also using updates and security
<hallyn> all right i'm sure there's a right way to do that (rather than hack it in with a case stmt)
<stgraber> amd64 and i386 go to archive.ubuntu.com/ubuntu
<stgraber> the rest goes to ports.ubuntu.com/ubuntu-ports
<hallyn> man i don't know what is going on this week, but i'm typing pages ahead of the echo back from server, it reminds me of using the slow VAX decades ago
<hallyn> stgraber: is there a tool we can use, feed it $arch and get back the right archive url?
<stgraber> hallyn: I think python-apt has that logic in a queriable way, but we probably don't want to end up writing a python script just for that :)
<rbasak> bug 912842
<uvirtbot> Launchpad bug 912842 in lxc "lxc-create fails on armel" [Medium,Triaged] https://launchpad.net/bugs/912842
<hallyn> well, is that logic likely to change sometime?
<stgraber> hallyn: I think it's going to be much easier to just add the extra if statement to the template
<stgraber> assuming i386 and amd64 are on archive and the rest on ports works for all current releases
<stgraber> maybe one day armhf will be moved to archive.u.c but for now it's not possible due to space reasons (as in, we'd have to ask all our mirrors to buy more disks)
<hallyn> stgraber: ok
<hallyn> rbasak: thanks
<rbasak> hallyn: np. I'm happy to fix it although I won't get it done today
<hallyn> rbasak: that'd be great.  thanks.
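The special case stgraber spells out (i386/amd64 on archive.ubuntu.com, everything else on ports.ubuntu.com) is a small case statement in the template's shell. A sketch, with the function name my own invention:

```shell
# Pick the right Ubuntu mirror for an architecture, per the discussion above.
mirror_for_arch() {
  case "$1" in
    i386|amd64) echo "http://archive.ubuntu.com/ubuntu" ;;
    *)          echo "http://ports.ubuntu.com/ubuntu-ports" ;;
  esac
}
mirror_for_arch armel    # -> http://ports.ubuntu.com/ubuntu-ports
```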
<SpamapS> Daviey: servercloud-p-juju-mir is blocked on somebody from txaws upstream ACK'ing the patches
<uvirtbot> New bug: #912842 in lxc (main) "lxc-create fails on armel" [Medium,Triaged] https://launchpad.net/bugs/912842
<SpamapS> Daviey: actually all my patches were merged upstream, so I can upload and move forward with juju's MIR, w00t. :)
<Daviey> SpamapS: \o/
 * SpamapS would like to thank the academy, and Thomas Herve. :)
<Daviey> I do feel i can take most of the credit fwiw.
<uvirtbot> New bug: #833073 in gdm (universe) "oneiric gdm picker list contains system user, rabbitmq" [Undecided,Confirmed] https://launchpad.net/bugs/833073
<andrew667> test
<RoyK> one two
<genii-around> !test
<ubottu> Testing... Testing... 1. 2.. 3... ( by the way, remember that you can use #test )
<patdk-wk> doesn't let me know if this channel is broken though :)
<andrew667> that's OK
<pauliax> hello, i need to install, but i can't because the ubuntu 11.10 disk doesn't recognize gigabyte p35-ds3 RAID-0 hard disks, can someone help me?
<RoyK> does it see the drives?
<RoyK> pauliax: if it sees the drives and not the raid-0 logical volume, it means you don't have a real raid controller, just a controller with a fancy windows driver to do the raid stuff
<genii-around> Probably jmicron chipset
<RoyK> probably no reason whatsoever to dig further until pauliax tells us what really happens :þ
<grdnwsl> ls -alh
<grdnwsl> >_<
<pauliax> i reformatted the drives and configured RAID-0 on the gigabyte p35-ds3; when installing there's no hdd or partition - i think the raid driver is missing
<andrew667> use mdadm, because you have fake raid
<pauliax> what is fake raid, raid0?
<andol> pauliax: http://en.wikipedia.org/wiki/RAID#Firmware.2Fdriver-based_RAID
<andrew667> distro?
<andrew667> "fake" raid requires special software during the installation
<pauliax> i dont need soft raid - it would be bad
<pauliax> that's why i dont need it
<pauliax> i have two operating systems by now, and i need four, dont ask me why :)
<andrew667> soft raid is not as bad as that. a month ago my 2 old dell poweredge 1850s died (faulty RAM in the hardware raid controller) - at the first server, no data loss!
<andrew667> and at the second server the controller failed with partial loss of data
<KillMeNow> afternoon folks, anyone have a good page for configuring apache on ubuntu to use UCC SSL certs?
<KillMeNow> is it just like replacing a regular ssl cert in apache?
<eagles0513875> hey guys is it possible to get a net install version of ubuntu server
<adam_g> eagles0513875: you can grab the mini.iso from the netboot directory in, for example, http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-i386/current/images/
<Deathvalley122> is there a 64bit one?
<eagles0513875> adam_g: thats for 12.04
<eagles0513875> we need something already released that has the cobbler stuff, so 11.04 for now; we'll upgrade once 12.04 becomes LTS and is released
<eagles0513875> Deathvalley122: check that out and try that iso instead
<ikonia> eagles0513875: cobbler is available in 11.04 I believe
<ikonia> it's been around for quite a while
<eagles0513875> ikonia: its in 11.10
<eagles0513875> cobbler with orchestra is what i mean
<eagles0513875> Deathvalley122: want to explain to ikonia the issue we are having trying to get things installed on this remote server
<Deathvalley122> it's taking ages to erase the lvm data from Debian lvm
<ikonia> http://archive.ubuntu.com/ubuntu/dists/oneiric/main/installer-i386/current/images/netboot/
<ikonia> use that then
<eagles0513875> ok
<adam_g> eagles0513875: http://archive.ubuntu.com/ubuntu/dists/natty/main/installer-i386/current/images/
<eagles0513875> thanks adam_g
<adam_g> eagles0513875: you should be able to import the precise/oneiric/whatever ISO into cobbler
<eagles0513875> im trying to install this on the server we dont have anything installed yet
<ikonia> just net boot the iso then
<adam_g> eagles0513875: orchestra isn't in natty (i dont believe)
<xInterlopeRx|DT> !help
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<Deathvalley122> whats natty got to do with orchestra
<Deathvalley122> O_o
<eagles0513875> ikonia: ok
<eagles0513875> Deathvalley122: cobbler is in natty just not the whole orchestra stuff
<adam_g> eagles0513875: you'll want oneiric, or if you're just testing i'd advise precise. lots of new stuff going on around orchestra right now, and the precise version is more usable/complete
<Deathvalley122> I think there is a serious issue with erasing lvm data from another distro
<eagles0513875> the problem for now is we need something for production and 12.04 isnt ready for a live environment yet
<Deathvalley122> for ubuntu
<ikonia> trouble is 12.04 isn't
<ikonia> eagles0513875: then just wait
<eagles0513875> we are trying to wipe the stuff using 11.04
<eagles0513875> 11.10 i mean
<xInterlopeRx|DT> i want to set up a remotely controlled server at my father in laws house for android building. what are your suggestions?
<ikonia> "wipe the stuff" ???
<Deathvalley122> yes
<ikonia> eagles0513875: what are you looking to do
<Deathvalley122> the server has Debian on it with lvm
<Deathvalley122> we are erasing the lvm data from the debian one
<eagles0513875> ikonia: reformatting the drive for a clean installation; the DC tested with debian and it works fine, and now erasing one lvm partition is going really slowly
<Deathvalley122> but
<Deathvalley122> its taking ages to do that
<ikonia> or do you both have the same issue ?
<ikonia> are you and Deathvalley122 the same person ?
<Deathvalley122> no
<ikonia> (confused that you both seem to be having the same issue)
<eagles0513875> im working for Deathvalley122 ikonia
<Deathvalley122> cause we both are on kvm right now
<eagles0513875> we are using kvm to connect to the server during install
<ikonia> how are you trying to "wipe" the data
<eagles0513875> using manual partitioning that is part of the installer
<ikonia> ok - so are you deleting the partition ?
<eagles0513875> correct and it asks to confirm you want to wipe and erase the partition and that is whats taking ages
<ikonia> is lvm started ?
<ikonia> (eg: are volume groups active)
<Deathvalley122> no
<Deathvalley122> it is not
<eagles0513875> Deathvalley122: how sure are you though
<Deathvalley122> cause we aren't in Debian
<Deathvalley122> the drive isn't even active right now
<Deathvalley122> only the virtual drive
<eagles0513875> virtual drive which has the iso mounted
<Deathvalley122> correct
<ikonia> how big is the partition ?
<Deathvalley122> I have no freaking clue
<Deathvalley122> I didn't check that
<ikonia> roughly
<ikonia> you must have an idea
<Deathvalley122> just a guess
<Deathvalley122> 200
<ikonia> 200GB / MB ?
<Deathvalley122> GB
<ikonia> how long has it been running
<Deathvalley122> about 2 hours
<ikonia> is this a physical machine or a virtual guest
<ikonia> ok, so I suspect it's probably hung
<eagles0513875> physical machine
<Deathvalley122> physical machine the host machine
<eagles0513875> the DC already replaced the motherboard, the drive and the sata cables
<ikonia> why ?
<eagles0513875> cuz they found problems with the hardware not sure exactly what
<eagles0513875> with the motherboard they said sata controller
<ikonia> ok - boot a CD - run fdisk and remove the partition
<eagles0513875> hard disk was I/O errors
<eagles0513875> ok
<ikonia> if they have changed the drive - why does it have data on it also ?
<ikonia> surly if it was a new drive, there would be no data on it
<Deathvalley122> no
<Deathvalley122> they put data on it as a test
<Deathvalley122> to make sure
<Deathvalley122> there isn't anymore issues
<Deathvalley122> with the hardware
<ikonia> not sure I'd agree with that, but lets see what fdisk does
<Psi-Jack> Hmm
<Psi-Jack> Anyone know how I can shut down excessive non-needed getty's?
<Psi-Jack> They're upstart managed, so I'm not so sure how to stop them and keep them from running.
<ikonia> kill them ?
<Psi-Jack> Ahh, duh me..
<ikonia> kill -19
<Psi-Jack> tty1 - tty6
<Psi-Jack> I just stopped 3-6 ;)
<Psi-Jack> ikonia: Upstart would just respawn then. :p
<ikonia> ahhhh
<ikonia> ha ha, nice one
<ikonia> Psi-Jack: not sure it would with 19, not tried it
<Psi-Jack> It would.
<ikonia> I'll take your word on that
<Psi-Jack> upstart itself has to know to stop it, else it will keep bringing it back up
<ikonia> totally, I thought upstart monitored for process exit state though, so 19 it would ignore, but I've not looked into it in enough detail
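For the record, the clean way to do what Psi-Jack wants under Upstart is to stop the jobs through Upstart itself rather than signalling the gettys. A dry-run sketch (the commands are printed, not executed, since they need root on a real system):

```shell
# Upstart respawns a getty that is killed directly; "stop ttyN" tells
# Upstart to stop the job instead. To keep them off across reboots,
# edit or rename the corresponding /etc/init/ttyN.conf files.
stop_cmds=$(for n in 3 4 5 6; do echo "stop tty$n"; done)
echo "$stop_cmds"
```

ikonia's kill -19 (SIGSTOP) is the one signal that sidesteps the respawn, since the process never exits, but it leaves the getty frozen rather than gone.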
<Psi-Jack> Trying to figure out why our db server's chewing through memory like no tomorrow, when its maximum threshold limit is 5.5GB out of 8GB.
<Psi-Jack> Already dug through 755mb swap, out of 1gb total.
<ikonia> whoa
<ikonia> any signs of what's using it ?
<Psi-Jack> Nope.
<ikonia> top show any big footprints ?
<ikonia> not active, but something that maybe leaking a little
<Psi-Jack> Just MySQL, which has a VIRT of 4GB, and RES of 2.3GB
<Psi-Jack> Lots of those. ;)
<ikonia> what's your mysql memory limits set to in my.cnf
<ikonia> not exceeding those is it ?
<Psi-Jack> Nope. The MySQL tuning-primer script actually says, maximum utilization of memory is topped off at 5.5GB.
<Psi-Jack> And it's not even hit that yet.
<Psi-Jack> CPU usage itself is almost nil
<eagles0513875> are you running a web server or anything like that
<ikonia> ok, so at least you know that's good and no crazy process is found an obscure leak
<ikonia> Psi-Jack: watched vmstat for a while with say 5 second interval,
<Psi-Jack> Nope. This is a dedicated MysQL server.
<eagles0513875> ok
<ikonia> is the usage growing, high scan rate, anything like that ?
<Psi-Jack> swap si/so are fluctuating.
<Psi-Jack> cpu usage is minimal.
<ikonia> not a massive surprise if you are eating ram that si/so are moving around a bit
<Psi-Jack> memory usage is constant. I have this server under Zabbix monitoring, too,.
<ikonia> what about sr
<Psi-Jack> sr?
<ikonia> sorry, so it the equiv in vm
<ikonia> that is very odd then that nothing seems to be actually eating it, but it's in use
<ikonia> and if you're swapping it's struggling, but why ?
<ikonia> cpu is low so there is probably no wait time lock
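The vmstat check suggested above can be run as `vmstat 5` and read off the si/so (swap-in/swap-out, KiB/s) columns; a sketch that pulls them out of a captured sample (all numbers hypothetical):

```shell
# A captured `vmstat 5` sample (values hypothetical):
sample='procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0 768004  57592   4172  20860   12   30    45    60  210  340  2  1 96  1'
# Columns 7 and 8 of a data row are si and so; sustained non-zero
# values mean the box is actively swapping.
echo "$sample" | awk 'NR==3 { print "si="$7, "so="$8 }'
```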
<eagles0513875> ikonia: could it be vcpu allocation
<ikonia> what ?
<eagles0513875> not enough cpu time has been allocated to the vm
<ikonia> he's not running a vm
<ikonia> or not said he is
<eagles0513875> ikonia:  above you said sorry, so it the equiv in vm?
<ikonia> eagles0513875: yes, vmstat
<eagles0513875> ahh my bad
<ikonia> "so" is the equiv in "vmstat"
<Psi-Jack> Actually.
<Psi-Jack> It IS a VM.
<Psi-Jack> Under VMWare.
<eagles0513875> ha
<eagles0513875> Psi-Jack:  how many vcpus does this vm have
<Psi-Jack> But no.
<ikonia> what's the host doing ?
<ikonia> (or host showing)
<Psi-Jack> it has 4 VCPUs, and they're all pretty much idling.
<ikonia> the cpu usage is low, so that is not going to be an issue
<Psi-Jack> ikonia: It's a VMWare vSphere server, and it's on a very high end system. ;)
<eagles0513875> Psi-Jack: are you running vmware esxi or another piece of vmware kit
<Psi-Jack> eagles0513875: We're running the full vSphere server.
<eagles0513875> nice
<eagles0513875> ikonia: could the be a bug not with the os on the vm but with vsphere?
<Psi-Jack> On AMD 48-core Opteron servers with 256 GB RAM each. ;)
<eagles0513875> god all mighty lol
<Psi-Jack> Hooked up to an EMC SAN over FC 3
<ikonia> eagles0513875: you're just picking random things for no reason
<Psi-Jack> With FC Disks in the SAN.
<ikonia> eagles0513875: vsphere is enterprise class and tested, I'd have confidence that basics like this are covered
<ikonia> Psi-Jack: is the box actually doing "anything" at the moment ?
<ikonia> or is it just a typical day, typical usage etc
<Psi-Jack> The Host, or the VM itself?
<ikonia> vm
<Psi-Jack> Very little. Serving 57 clients that're 90% idle.
<Psi-Jack> And zabbix monitors which probe every 30s to 120s
<Psi-Jack> To pick up information about the system and db. ;)
<ikonia> sorry pretty solid standard style setup
<ikonia> it's odd that it's swapping when the memory isn't at threshold
<Psi-Jack> Yeah.
<Psi-Jack> Here's the weird part.
<eagles0513875> ikonia: i have noticed the same thing with my linodes they do a bit of swapping as well even when not fully using the vcpus
<Psi-Jack>              total       used       free     shared    buffers     cached
<Psi-Jack> Mem:       8196800    8139208      57592          0       4172      20860
<Psi-Jack> -/+ buffers/cache:    8114176      82624
<Psi-Jack> Swap:       905208     768004     137204
<ikonia> eagles0513875: that's not uncommon though to swap out idle stuff
<Psi-Jack> Cache is EXTREMELY low, memory usage is EXTREMELY high.
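Reading the `free` paste above: the "-/+ buffers/cache" line already accounts for memory the kernel could reclaim, i.e. free + buffers + cached. With these figures that reclaimable total is tiny, which is why the box swaps:

```shell
# Figures (KiB) copied from the `free` output above:
total=8196800; free=57592; buffers=4172; cached=20860

# Free column of "-/+ buffers/cache" = free + buffers + cached:
echo "reclaimable: $((free + buffers + cached)) KiB of ${total} KiB"
# 57592 + 4172 + 20860 = 82624 KiB, matching the 82624 shown above:
# about 1% of RAM, so almost nothing is free or reclaimable.
```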
<ikonia> Psi-Jack: yeah, that's not good
<ikonia> Psi-Jack: I have one terribly random thing I can think of that was not the same but had a similar situation
<Psi-Jack> Anything short of restarting mysqld? ;)
<ikonia> Psi-Jack: some large mysql 5 innodb tables had a corrupted index in ram, and each write the db was using the ram to create a temporary "at that moment" index, and had to keep getting flushed and re-written every $X writes
<RoyK> Psi-Jack: pastebin ps axfv
<ikonia> basically constantly re-creating an index on the fly, it was low load on the machine, but kept the ram swapping out/in as it updated the index
<RoyK> Psi-Jack: seems to me you need more swap, then more RAM
<Psi-Jack> RoyK: http://pastebin.com/hABWNnqE
<Psi-Jack> We actually just upped the RAM this morning at 3am from 4GB to 8GB
<RoyK> Psi-Jack: 24837 ?        Ssl  107:20  24834  8893 4315578 2439240 29.7 /usr/sbin/mysqld
<RoyK> but then that only eats 4,3GB
<Psi-Jack> Exactly.
<Psi-Jack> Which matches exactly what the tuner-primer script tells me MySQL is consuming, too.
<ikonia> which ties in clean with the limits in your my.cnf
<Psi-Jack> Yep.
<Psi-Jack> I tuned it low, so I could tune it up.
 * RoyK thinks postgresql is a *BIT* better than mysql for 99% of use cases
<Psi-Jack> I agree. :)
<Psi-Jack> but, still.
<Psi-Jack> mariaDB will definitely be better. I'm waiting on that one. ;)
<Psi-Jack> Gonna try out MariaDB, in fact, on my home server farm, replacing MySQL with MariaDB 5.2, unless 5.3 comes out real soon. ;)
<ikonia> that won't fix your current issues though
<Psi-Jack> Nope. ;)
<ikonia> looking at maria on a centos 6 setup at the moment for similar reasons
<Psi-Jack> this isn't my home system either, this is corporate. ;)
<Psi-Jack> Well, looks like there's updates to the kernel, mysql, so, I'm going to push that out tonight.
<Psi-Jack> May very well be a kernel level bug.
<ikonia> well, you'll need a reboot
<Psi-Jack> That, I know. ;)
<ikonia> almost a shame to boot it without a better understanding
<Psi-Jack> For now, I'm pulling the packages in, and might restart the mysqld real quick since it's pretty fast.
<Psi-Jack> One good thing I do like about Ubuntu. It also boots like super fast. Great for a server. :)
<Psi-Jack> This is 10.04 too BTW. :)
<Psi-Jack> Wow. Swap just dropped a crapload doing all these updates. ;)
<ikonia> that's so annoyingly wrong
<Psi-Jack> And now, Swap and Free RAM have crossed over each other.
<Psi-Jack> And now, mysql is probably stopping from the update and swap is relieving itself RAPIDLY.
<ikonia> lets see if it goes back up
<Psi-Jack> It did.
<Psi-Jack> Course mysql is still stopped, grrr!
#ubuntu-server 2012-01-07
<Psi-Jack> And, we're live again.
<Psi-Jack> Finally, sheesh. ;)
<ikonia> I half want it to break again so we can work it through
<Psi-Jack> Swap was 100% ejected by that whole thing.
<Psi-Jack> It might, I hope not, but it might still.
<ikonia> I don't actually want it to break really, just nice to do something interesting for a change
<Psi-Jack> I did the updates today at 3am, so it took around 13-14 hours to get that way.
<RoyK> Psi-Jack: porting the shite to psql will probably be better
<Psi-Jack> RoyK: Yeah. Good luck convincing R&D that/. ;)
<RoyK> Psi-Jack: not my job, just saying it ...
<Psi-Jack> Oh, i know.
<Psi-Jack> And there's tools to do it. :)
<ikonia> RoyK: easy on the language please,
<Psi-Jack> I'm hoping it was just a mysql bug that was fixed by that update.
<RoyK> yeah, and memory tuning psql is such a stroll in the park
<Psi-Jack> Memory at the moment is maintaining solid right now.
<Psi-Jack> RoyK: Indeed. :)
<RoyK> ikonia: what?
<ikonia> RoyK: no need for the swearing, please try not to
<eagles0513875> gd luck Psi-Jack :D
<RoyK> ikonia: what?
<Psi-Jack> heh
<ikonia> RoyK: "shite"
<ikonia> please don't use it
<ikonia> Psi-Jack: throws my broken index theory out of the window too
<eagles0513875> O_o
<RoyK> ikonia: what?
<Psi-Jack> hehe
<ikonia> RoyK: please don't be silly, I've asked you clearly to respect the rules of the channels in the ubuntu name space
<Psi-Jack> About to put up a screenshot of the memory utilization chart since the update till just now.
<Psi-Jack> Hmm
<Psi-Jack> What's a good simple photobucket?
<RoyK> ikonia: I know there are certain rules written by a set of homegrown religious fundamentalists, rules that belong in various churches and so on, but not, I repeat NOT, on IRC or in the real world elsewhere
<ikonia> RoyK: the rules of the ubuntu channel is no swearing, I've made a polite request 3 times for you to follow those rules
<ikonia> !guidelines > RoyK
<ubottu> RoyK, please see my private message
<ikonia> RoyK: please check the url ubottu has sent you
<RoyK> ikonia: shut up, please
<ikonia> RoyK: sorry, that's not going to cut it, if you can't be polite to people and follow the no swearing rules, that is unacceptable
<RoyK> ikonia: can you please stop the offtopic talk in here? we're trying to talk about things on topic, stuff like ubuntu server. this isn't a language channel, you know
<Psi-Jack> http://tinypic.com/view.php?pic=2dotv6&s=5
<Psi-Jack> There's the Zabbix monitor for memory utilization.
<ikonia> RoyK: if you want to behave like this just because I asked you to not swear - that is your issues and I'll deal with it
<RoyK> or did someone promote you to be an anti-swearing guardian to help salvage the world from evil people like me?
<RoyK> we have bots for that sort of jobs, you know
<ikonia> if you can't follow a simple request to not swear in the channel or be polite to people you will not be welcome in the channel until you can do so
<ikonia> !language | RoyK
<ubottu> RoyK: Please watch your language and topic to help keep this channel family-friendly, polite, and professional.
<ikonia> RoyK: there the bot has told you not to swear rather than me, so if you could please comply now that would be great
<RoyK> ikonia: please stay on topic. computer professionals are known throughout the world to use their language like anyone else
<ikonia> RoyK: I'll explain one more time
<RoyK> no need to
<ikonia> RoyK: the ubuntu channels has no swearing policy - if you can't follow that, you are not welcome
<ikonia> if you can, that is great.
<RoyK> I haven't uttered a single bad word after the initial word for which you started flaming me
<ikonia> RoyK: totally, but when I asked you to stop you tried 3 smart answers, and continue to suggest you will continue to swear, I've simply asked you not to
<ikonia> simple request, please don't swear, simple response "sure, no problem"
<RoyK> ikonia: I have not sworn after you told me, not asked me, not to
<ikonia> RoyK: correct, but you kept arguing it as you have been doing, suggesting it was acceptable
<ikonia> rather than "sure no problem"
<ikonia> if you are happy to not swear, then fantastic, thank you
<RoyK> so unless the rules are changed and also includes suggesting the rules are stupid, then I guess I yet haven't broken more of them
<Psi-Jack> Anyway. ;)
<Psi-Jack> ikonia: Did you see that chart?
<ikonia> RoyK: saying "what" every time I asked you to not swear is not anything but provocative
<RoyK> ikonia: what?
<ikonia> well done, enough now
<ikonia> Psi-Jack: I didn't
<AlanBell> ok, lets call this argument to a close now
<Psi-Jack> Yeah, that's why I'm trying to distract them back into the subject. ;)
<AlanBell> I am sure RoyK will be a model citizen from now on, lets talk about servers
<Psi-Jack> ikonia: Notice right at the maintenance window, the free memory jump up and go WAY back down the moment mysql started?
<ikonia> Psi-Jack: just finding the url for the image, one moment
<Psi-Jack> And again, around 3:45, which was after doing some tuning.
<Psi-Jack> http://tinypic.com/view.php?pic=2dotv6&s=5
<Psi-Jack> I think it was mysql somehow.
<uvirtbot> New bug: #737882 in cloud-init "cloud-init bails when any mimetype is non text" [Undecided,Confirmed] https://launchpad.net/bugs/737882
<Psi-Jack> Cause it just ATE the memory for lunch every time for no reason.
<ikonia> because it's not going beyond the bounds it must be something non-static as an issue and internal to mysql though
<RoyK> Psi-Jack: late lunch
<Psi-Jack> Yeah. There's a kernel update as well, but I'm not rebooting this server right now. :)
<Psi-Jack> I was pushing it by pushing out an update that I should've done this morning, but oh well.
<ikonia> it's got to be something it's doing at a regular time that's causing it rather than a bug of something just going out of control/leaking
<yakster> hello all….
<ikonia> it stays within the memory boundary, so it's not a "bug" in that respect
<ikonia> it just seems to struggle to do $X Task
<Psi-Jack> ikonia: Even now, the free memory itself is steadily dropping.
<yakster> was wondering if anyone can successfully mount a Airport-Ectreme USB drive, if so what is the FSTAB line for it
<ikonia> but it never oversteps that it's doing
<Psi-Jack> But it's not at the bottom 200MB.
<ikonia> I can only think of that index example
<Psi-Jack> Right now, it's dipped down to around 600MB free, and is holding there, with a very VERY slow continuing decline.
<Psi-Jack> I'm betting that's just it filling all its caches, like query cache, innodb buffer pool, etc.
<yakster> having issues getting an airport extreme USB disk to mount in… perhaps someone can help?
<Psi-Jack> I know this DB isn't TOTALLY optimal, but we have this running just fine on other servers.
<yakster> in smbclient i can see it as a share
<ikonia> Psi-Jack: are they configured in a similar scale setup ?
<Psi-Jack> Yes
<Psi-Jack> This one DB is scaled in a 2-pair cluster, Master and Slave.
<Psi-Jack> We have others of similar setup for R&D, QC, Staging, and specialized ones for Sales Demos and Partner's Sandbox.
<Psi-Jack> Along with that, another production pair; each of those is running under a Xen server with Rackspace Cloud servers, and that pair has 16GB each for its Master and Slave.
<Psi-Jack> That one's over-allocated by about 5% more total memory than it has, and it still hasn't even swapped yet.
<Psi-Jack> So, where the Rackspace MySQL pair has 16GB RAM each, their maximum memory allocation is around 16.5GB.
<mansion> Does anybody happen to know about hard drive caddies in relation to hard drive speed?
<ikonia> mansion: try ##hardware
<mansion> Say, the caddy says 15k on it; would i be able to put a 10k drive into it and have it work?
<ikonia> Psi-Jack: interesting,
<Psi-Jack> Yeah. That's my thought too.
<Psi-Jack> But THIS one server, which hadn't been updated in months, had some updates to kernel and mysqld, and before that, every time mysqld started up, it ate ALL free memory, and then swap gradually started to decline.
<Psi-Jack> Until it's almost out.
<Psi-Jack> So, I think we have a memory leak. ;)
<ikonia> Psi-Jack: I'd agree with that if it was going beyond its allowed memory
<ikonia> (or agree easier)
<Psi-Jack> The slave server was freshly rebuilt not but a few weeks ago, just before Christmas, and it's not had that issue.
<ikonia> is it at the same patch level (before you updated the master today)
<Psi-Jack> The slave?
<Psi-Jack> No, it was ahead. ;)
<ikonia> are they at the same level now
<ikonia> (assuming a reboot)
<Psi-Jack> ikonia: Yes, mysqld is exactly the same patchlevel version as it's slave now.
<ikonia> ahh, but the kernel isn't
<ikonia> that's the only thing thats different from what you're saying
<Psi-Jack> Not presently no.
<Psi-Jack> Kernel is different, at the moment.
<ikonia> if this stops after a reboot, it would be good but annoying
<Psi-Jack> Main thing that changed was mysql, I think even glibc maybe?
<Psi-Jack> Nope. glibc is the same
<Psi-Jack> apparmor, mysqld, and upstart, are the three things in effect now, that were updated, that's changed.
<Psi-Jack> And I just ran my tuner-primer script on the DB, and free memory shot down some more. LOL
<Psi-Jack> But, most of it's cache. :)
<Psi-Jack>              total       used       free     shared    buffers     cached
<Psi-Jack> Mem:       8196800    7990704     206096          0     111312    1215036
<Psi-Jack> -/+ buffers/cache:    6664356    1532444
<Psi-Jack> Swap:       905208       1696     903512
<Psi-Jack> Heh wow.
<ikonia> ok, I'm too curious now, I want you to reboot (I know you can't just do that) but I'm too curious
<Psi-Jack> I just turned on the graphs for cache and buffers for that same time frame.
<Psi-Jack> Buffers and cache free during that whole time, was as low as the total free memory.
<Psi-Jack> Since the mysqld update, cache rose while free memory dropped, criss-crossing over each other as it should.
<Psi-Jack> I'll screenshot that if you want to see. ;)
<Psi-Jack> Shows quite a difference.
<ikonia> please
<Psi-Jack> http://oi39.tinypic.com/21j3rfa.jpg
<Psi-Jack> Huge difference.
<ikonia> I see
<Psi-Jack> Before mysqld update, cache skyrockets, then gradually lowers until there's none left, while free memory's already gone.
<Psi-Jack> After update, free memory skyrockets but gradually goes down while it's building up cache.
<Psi-Jack> And now, the memory is maintaining itself at its current level steady. ;)
<ikonia> Hmmmmm
<ikonia> need to feed now, it's late
<Psi-Jack> hehe
<Psi-Jack> Vampire, eh?
<onekenthomas> 2233
<uvirtbot> New bug: #913009 in bacula (main) "package bacula-common-mysql (not installed) failed to install/upgrade: trying to overwrite '/usr/lib/bacula/libbaccats.la', which is also in package bacula-common 5.2.1-0ubuntu2" [Undecided,New] https://launchpad.net/bugs/913009
<uvirtbot> New bug: #913010 in tftp-hpa (main) "package tftpd-hpa 5.0-11ubuntu2.1 failed to install/upgrade: subprocess installed post-removal script returned error exit status 2" [Undecided,New] https://launchpad.net/bugs/913010
<Guest69559> modprobe won't load my module, I think it's a dependency thing. How do I update modprobe's list of kernel modules ?
<Guest69559> I found it, depmod
<eagles0513875> morning
<eagles0513875> morning
<Samic> can anyone one here check if my smtp server is secured enough?
 * Samic wonders  if anyone is actually here!
<koolhead17> Daviey: around?
<Deathvalley122> does 11.10 have issues with its usb drivers? it's unable to read a kvm now
<Psi-Jack> ikonia: Interesting results. So far, 4 hours later, and the memory report on that MySQL server is /COMPLETELY/ different.
<Psi-Jack> Free Memory is still >2 GB remaining, which is desired and expected; cache usage is now >2 GB, which is better than before, when it would go up or start extremely high and plummet within an hour.
<uvirtbot> New bug: #910962 in postfix (main) "installArchives() failed: dpkg: unrecoverable fatal error, aborting:   syntax error: unknown group 'postdrop' in statoverride file  Error in function:   SystemError: E:Sub-process /usr/bin/dpkg returned an error code (2)" [Undecided,New] https://launchpad.net/bugs/910962
<eagles0513875> what is the default url for cobbler on 11.10
<bastidrazor> a
<pmatulis> b
<RoyK> c
 * patdk-lap steals def
<cloudgeek> best network analyzer for network !!???
<patdk-lap> !best
<ubottu> Usually, there is no single "best" application to perform a given task. It's up to you to choose, depending on your preferences, features you require, and other factors. Do NOT take polls in the channel. If you insist on getting people's opinions, ask BestBot in #ubuntu-bots.
<cloudgeek> no bestbot there !!
<uvirtbot> New bug: #913166 in krb5 (main) "kprop will not find slave-kdc" [Undecided,New] https://launchpad.net/bugs/913166
<iToast> hey
<iToast> Im thinking of using my ubuntu machine to kill my freenas box.
<iToast> Whats more important, cpu or ram?
<iToast> Its using a amth cpu at 1.5 ghz. with 1gb of ram.
<iToast> I'm thinking of upping its ram to 2 / 4 gb :)
<Onepamopa> guys, some help with IP alias ?
<Onepamopa> auto eth0:0 iface eth0:0 inet static address <IP> netmask <mask>
<Onepamopa> interface does not appear
<Onepamopa> "cannot assign requested address"
<Onepamopa> mask is 255.255.255.255
<uvirtbot> New bug: #913252 in mysql-5.1 (universe) "package mysql-server-5.1 5.1.54-1ubuntu4 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/913252
<qman__> that's an invalid mask
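As noted, 255.255.255.255 is a /32 host mask, which is why `ifup` reports "cannot assign requested address" for the alias. A working /etc/network/interfaces stanza, with a hypothetical address and an ordinary subnet mask, would look like:

```
auto eth0:0
iface eth0:0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
```

then `sudo ifup eth0:0` should bring the alias up.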
<wmp> hello, is it possible to show what configure options a package was compiled with?
<uvirtbot> New bug: #913286 in squid (main) "Squid not loading upon Ubuntu start up" [Undecided,New] https://launchpad.net/bugs/913286
<jasonmchristos> Anyone here able to help with libpam-rsa?
<jasonmchristos> Installed it generated keys checked the config the lines to require rsa seem to be in place but my systems authentication looks unchanged.
#ubuntu-server 2012-01-08
<allowoverride> yawns
<uvirtbot> New bug: #913379 in ntp (main) "Migrate ntp from SystemV to Upstart" [Undecided,New] https://launchpad.net/bugs/913379
<sw0rdfish> hey
<sw0rdfish> RoyK, are you here?
<RoyK> sw0rdfish: yep
<sw0rdfish> pm?
<RoyK> k
<uvirtbot> New bug: #913464 in rabbitmq-server (main) "rabbit creates new PAM session" [Undecided,New] https://launchpad.net/bugs/913464
<samba35> why am i not able to access httpd after installing the apache2 package
<Resistance> samba35:  did you try http://localhost/ ?
<samba35> i am running on port 8050 but still i am not able to open http://localhost:8050
<ewook> samba35: port 8050 on what interface?
<samba35> you mean host or nic ?
<samba35> sorry i dont understand
<ewook> samba35: I mean nic, yes.
<samba35> oh
<samba35> i have 3 nics
<ewook> samba35: to what interface did you bind the httpd? * or a specific?
<samba35> how do i test that
<samba35> should i post config file
<ewook> do a netstat -natp and paste the result into pastebin or something.
<ewook> (as root)
<samba35> ok
<samba35> http://pastebin.com/uzhGJcDr
<RoyK> apache is listening to 8050 there
<ewook> tcp        0      0 0.0.0.0:8050            0.0.0.0:*               LISTEN      7808/apache2
<ewook> exactly.
<ewook> So, apache is listening on all interfaces on the assigned port. if you can't reach the desired content, I'd point to your site config.
<JanC> you do need a site configured, of course...
<samba35> how do i get httpd command in ubuntu
<RoyK> I'd start out reading apache's log files
<RoyK> samba35: you don't, that's redhat's naming of what's called 'apache2' in debian/ubuntu
<ewook> ya. /var/log/apache2/access.log and error.log or if you specified any specific logfiles for your site.
<RoyK> see apache2ctl for more info
<samba35> ok
<samba35> apache2 -S give me
<samba35> apache2: bad user name ${APACHE_RUN_USER}
<samba35> how do i fix that
<RoyK> what are you trying to do?
<RoyK>        -S     Show the settings as parsed from the config file (currently only shows the virtualhost settings).
<samba35> someone in the apache/httpd irc wants the output of that command to understand the problem
<RoyK> if you want to test if the config is valid, try 'apache2ctl configtest'
<samba35> ok
<samba35> it says syntax ok
<RoyK> what happens if you telnet to port 8050?
<samba35> yes
<samba35> i got output i dont get apache
<samba35> telnet localhost 8050: Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'.
<samba35> any other workaround
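On the `apache2: bad user name ${APACHE_RUN_USER}` error seen above: the apache2 binary, unlike the apache2ctl wrapper, does not read /etc/apache2/envvars, so the variable is never expanded. A sketch of the usual workaround:

```shell
# apache2ctl sources /etc/apache2/envvars for you; plain apache2 does not.
# Either load the environment first:
. /etc/apache2/envvars
apache2 -S
# ...or simply use the wrapper:
apache2ctl -S
```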
<uvirtbot> New bug: #913515 in apache2 (main) "Migrate Apache2 from SystemV to Upstart" [Undecided,New] https://launchpad.net/bugs/913515
<fikus> hello, my ubuntu version is 11.10 x64. I installed a Sun 4-port card in my computer and find that simple commands, ping for example, don't run correctly. My problem is the same as http://networkbroadcast.co.uk/2011/04/sun-quad-nics-and-x86_64-kernels/ . Does anyone have a solution?
<fikus> join
#ubuntu-server 2012-12-31
<jeeves_moss> how do I set up BIND to play nice with a Windows DNS server?
<JanC> jeeves_moss: you can run BIND as a Windows DNS server?
<JanC> (not sure what exactly your problem is)
<jeeves_moss> JanC, no, I need it to be a secondary.  I can't find the technet howto page
<JanC> it's been a very long time since I configured a Windows DNS server, but IIRC I just wrote a BIND configuration file and used that with the Windows DNS server
<jeeves_moss> JanC, ok, thanks.  I'm going to keep looking
<JanC> the BIND config file worked fine with the Windows DNS server
<JanC> but using the Windows GUI would break that config file
<jeeves_moss> JanC, that's about par for Windows
<JanC> so I had to avoid using the Windows GUI and maintain it as a text config file ☺
<JanC> that was almost 10 years ago though
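A minimal named.conf fragment for running BIND as a secondary to a zone mastered on the Windows DNS server might look like this (zone name and master IP are placeholders; the Windows side must also permit zone transfers to the BIND host):

```
zone "example.com" {
    type slave;
    masters { 192.0.2.1; };              // the Windows DNS server
    file "/var/cache/bind/db.example.com";
};
```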
<Louis__> ssh
<lvmer> Just want to make sure I get this right; where should I untar an ethernet adapter driver?
<lvmer> is it:  /lib/modules/3.5.0-21-generic/kernel/drivers/net/e1000e  ?
<kenetik> Anyone on? Need some help
<patdk-lap> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<lvmer> I'm having trouble setting up a pci gbe adapter. I think I installed everything correctly via intel's instructions, but I don't see the interface listed
 * lvmer feels invisible
<patdk-lap> lvmer, why did you attempt to install the driver manually?
<kenetik> I am running 12.04 lts, I have postfix and dovecot installed and using my mail server successfully. I have multiple domains and would like to have a web interface to create/edit email accounts. Any suggestions?
<lvmer> because that's what intel said
<lvmer> how do you do it automatically
<lvmer> I haven't changed any settings so I have no problem removing it and redoing it automatically, lmk how
<lvmer> well after I run $ sudo modprobe e1000e   I can not seem to find a valid interface for the following command: $ sudo ifconfig eth0 up
<patdk-lap> lvmer, who said it was called eth0?
<patdk-lap> use dmesg
<lvmer>  can not seem to find a valid interface
<patdk-lap> heh?
<lvmer> under modules I see it87, which im betting it is, but I can't seem to initialize it
<patdk-lap> you're betting your it87 temperature sensor is your gigabit network card?
<lvmer> can you help me find the device name?
<lvmer> I could not find it with dmesg
<lvmer> I'm having trouble figuring out one of my network interfaces. Does anyone have any ideas?
<lvmer> in case anyone else has this problem, the appropriate command is: lshw -C network
<patdk-lap> well dmesg shows the output of stuff you load, like the network module, and hardware detected
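A short checklist for the situation above: confirm the kernel actually saw the card and learn the interface's real name instead of guessing eth0 (the module name e1000e is taken from the thread):

```shell
sudo depmod -a            # rebuild the module dependency map after a manual install
sudo modprobe e1000e      # load the driver
dmesg | grep -i e1000e    # did the driver claim the card?
lshw -C network           # lists each NIC with its "logical name"
ip link                   # interfaces the kernel currently exposes
```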
<graphmastur> Hey, I have a question about apache and port 80 on my linux system. When running on port 80, I can't access apache from outside the server. When running on port 50123 for example, I can. So I set up a port forward from 50123 to 80. Now, I can't access through port 50123 from inside the server (where I can use 80), and I can't access 80 from the outside (where I can use 50123)
<uvirtbot`> New bug: #1094830 in openvswitch (main) "brcompat failed to be loaded" [Undecided,New] https://launchpad.net/bugs/1094830
<mattronix> Hi
<grab> Hello
<grab> I would like to install an ubuntu server in a virtual machine on mac pro hardware. When i install it, i get the message "please use a kernel appropriate for your CPU"
<grab> But after trying a lot of versions I don't know which is the right one
<vezq> grab: what virtulization software?
<grab> Virtual box
<grab> and my  cpu is a like i686
<vezq> have you tried different CPU options with virtualbox (can't remember exactly)
<grab> I'm sure I must install a 64 bits version.
<grab> I have a quadcore intel
<vezq> the standard 64-bit version should work with Virtualbox
<mattronix> what type of mac  do you have?
<mattronix> all of the newer macs are 64 and 32 bit cpu's
<kim0> Hi folks, any idea if ESXi (with vCenter) can somehow pass cloud-init data to instances? o/ @ smoser
<RoyK> grab: I have a few linux vms in virtualbox on my macs
<RoyK> mattronix: all intel macs are 64bit
<mattronix> what version of the ISO of linux are you using grab
<mattronix> yep
<mattronix> thats true all of the intel macs are 64 bit and can run any 32 bit software
<grab> I installed both the 32-bit and 64-bit versions but i get the same message
<RoyK> grab: have you disabled PAE and trying to boot a 32bit linux? that would trigger that error
<RoyK> grab: what mac model?
<grab> mac pro
<grab> quad core
<RoyK> Xeon?
<mattronix> what is the template for the VM?
<mattronix> ubuntu
<grab> hum good question, i'm seeing
<mattronix> i have found some weird things with that sometimes
<grab> yes xeon
<mattronix> Xeon so its a mac pro?
<grab> mattronix, I'm using virtual box
<grab> mattronix yes it's a mac pro
<mattronix> I know :) i mean when you create a VM you have options for ram hdd and template of the os
<mattronix> you can choose for example windows xp 64bit
<mattronix> and say ubuntu 64 bit
<mattronix> is all of the vm's settings correct
<grab> RoyK, what's ESXi? is it on Virtual box ?
<mattronix> no
<RoyK> grab: really, I have three mac's and it works on my installs
<mattronix> Vmware
<RoyK> grab: that's with virtualbox
<mattronix> yep i use mac too it works fine
<mattronix> i can look by team viewer if you want
<grab> Ok thanks guys, the problem was the template in virtualbox, i chose the wrong template
<mattronix> but its up to you
<mattronix> thought so :)
<grab> Do you know if it's possible to emulate failover with 2 virtual servers on the same machine ?
<mattronix> i have found these kind of errors occur when you choose the wrong template i do not know the exact technical reason behind it  but i guess its hardware emulation
<mattronix> yeah you can do anything with a vm that you can do on a physical machine
<mattronix> yep you can create virtual network
<mattronix> and a virtual network card on each of the hosts and join it to the same virtual network
<mattronix> so is the ubuntu install working?
<NikP> I can't shut down my server, when I enter "sudo shutdown -P now", all services end and the HDD stops, but the Power isn't switched off.
<mattronix> acpi?
<NikP> Yes, status is OK.
<mattronix> what is the -p option?
<mattronix> try sudo halt
<mattronix> "sudo halt"
<mattronix> i use that and it works for me :)
<NikP> OK, I will try that, wait a moment. (Currently over SSH to the server connected)
<RoyK> "halt" will only take down the OS, use "poweroff" to poweroff
<mattronix> ok then type sudo poweroff
<RoyK> using that over ssh, makes it rather hard to get contact with the server again...
<mattronix> should have the exact same result
<mattronix> it will do a complete powerdown
<mattronix> yep
<NikP> The same result. No power off.
<NikP> OK, I will try poweroff
<mattronix> what is on your console screen?
<mattronix> yep
<mattronix> i have not seen the option you use before
<RoyK> "halt" won't poweroff anymore - they changed that somewhere between 10.04 and 12.04
<mattronix> o i see
<mattronix> i still use 10.04.4   LTS
<RoyK> I thought "halt" still worked there
<mattronix> it does :)
<RoyK> ah
<mattronix> i have not seen the you must use power off option so i guess they changed it
<mattronix> let me know if poweroff works for you
<NikP> poweroff doesn't work, either.
<mattronix> ok
<mattronix> do you have a monitor connected to your server
<NikP> but ACPI is on OK, so what's the problem?
<NikP> Yes , I have a monitor. It shows at the end System halted
<mattronix> try it from the server console (with keyboard and mouse + monitor attached to your server) and see what you get
<RoyK> NikP: with poweroff?
<mattronix> o ok
<NikP> RoyK: With all.
<mattronix> and you can you still ssh into it
<NikP> No, I must reboot it.
<mattronix> hrrm
<mattronix> reverse your ACPI setting
<mattronix> and try again
<NikP> OK, I will try that.
<mattronix> i have found for me that is the main reason this happens
<mattronix> how old is the hardware
<NikP> Oh, not very old, ca. 2006, but I've tried two power supplies for this problem and it doesn't work. I have a PC with Mint 14 where all the commands work.
<mattronix> hmm, i have found the same with the fitpc: with freebsd i have this problem
<mattronix> and with debian on the same machine it works fine
<mattronix> maybe there is an option in ubuntu that it keeps the hardware online
<NikP> On the server I'm using Ubuntu 12.04 LTS.
<mattronix> herm maybe it is an option in 12.04 let me look it up
<mattronix> do you get system is ready to halt?
<NikP> Yes
<grab> Question : Do you download automatically your update for  your  system ?
<vezq> grab: I use unattended-upgrades package
<mattronix> 12.04 is still unstable if i am honest
<mattronix> i think personally that is safer to stay on 10.04
<mattronix> https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/880240
<uvirtbot`> Launchpad bug 880240 in upstart "system doesn't turn off if "sudo halt" is given" [Medium,Won't fix]
<mattronix> SNAP XD
<mattronix> looks like we both got the same ending
<mattronix> bug 880240
<uvirtbot`> Launchpad bug 880240 in upstart "system doesn't turn off if "sudo halt" is given" [Medium,Won't fix] https://launchpad.net/bugs/880240
<mattronix> whats the best command to halt an ubuntu server?
<mattronix> i use halt, but that's on 10.04 where it works fine
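Per bug 880240 above, upstart's plain `halt` stops the system without cutting power; the commands that explicitly request a power-off are (a sketch of the standard tools, run as root on the machine itself):

```shell
sudo poweroff            # same as "halt -p": halt, then cut power
sudo shutdown -h -P now  # -P forces power-off at the end of the shutdown
```

Whether the board actually powers down still depends on working ACPI, which is why NikP's case above resists all of them.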
<grab> I have installed my server in a VM, how do I get an IP address different from the host (macpro)?
<RoyK> what hypervisor?
<grab> VirtualBox
<grab> I want to create a virtual network equipement to connect by ssh my other machine
<RoyK> you can use virtualbox's port forwarding
<RoyK> or you can use bridge networking
<RoyK> the latter is probably the easiest
<grab> RoyK: is it a point-to-point connection?
<grab> actually  im using the NAT, but i have the  host ip address
<RoyK> just use bridge
<RoyK> that way you can setup an address "beside" the host, on the same network
<grab> ok that's right, thanks
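RoyK's bridged-networking suggestion can be applied from the VirtualBox CLI as well as the GUI; a sketch (the VM name "myserver" and host interface "eth0" are placeholders):

```shell
# VM must be powered off first
VBoxManage modifyvm "myserver" --nic1 bridged --bridgeadapter1 eth0
```

After that the guest gets its own address on the host's network, e.g. from the same DHCP server the host uses.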
<bjensen> Im trying to pick a rack server from dell... R320, R420 or so.. but I don't know if I can use software raid on the embedded SATA controller they have?
<patdk-lap> heh?
<patdk-lap> isn't that the whole definition of *software raid*?
<bjensen> patdk-lap: probably..just wanna make sure :)
<vezq> patdk-lap: I'd recommend hardware raid if possible
<bjensen> do you know if their S110 can be used with ubuntu?
<bjensen> raid controller*
<patdk-lap> oh, a *fakeraid* controller
<patdk-lap> that is not a raid card
<bjensen> the problem is that only the top server models have real controllers
<patdk-lap> not really, the problem is, dells website option selections
<bjensen> think their call service works today?
<bjensen> ;-)
<vezq> bought a T420 recently, you can get rack kit for it
<patdk-lap> vezq, that is called a r420 :)
<patdk-lap> I can't find any info about the s110
<bjensen> patdk-lap: you think I can get a R320 with a H310 raid?
<patdk-lap> likely it will work fine
<patdk-lap> but I wouldn't want to use it's *raid* support, but I would use md instead
<bjensen> I think i need to call them. When I google I found this: http://www.server-warehouse.co.za/index.php?main_page=product_info&products_id=1381&zenid=iatr515cp98imvl5vfa4mj3pf7 a r320 with a h310 controller
<bjensen> the h310 controller works with ubuntu 12.04LTS?
<patdk-lap> bjensen, sure, it gives me that option directly on their order system
<bjensen> what!?
<patdk-lap> though, h710 is going be much faster
<vezq> jep, h710 recommended
<bjensen> Its going to be running a Ruby OnRails Web app
<patdk-lap> it gives me the option of, s110, h310, h710
<bjensen> won't need that much io
<patdk-lap> bjensen, it isn't about io
<patdk-lap> io depends on the disks you use
<patdk-lap> the raid card you use helps decrease latency
<bjensen> Ohhh I see now. its their ordering system that is weird
<bjensen> bah i gtg. thanks for the help
<uvirtbot`> New bug: #1094921 in keystone (main) "keystone postinst fails with KeyError: <VerNum(4)>" [Undecided,New] https://launchpad.net/bugs/1094921
<disposable> i upgraded 10.04 to 12.04. after the upgrade, i noticed 2 weird things which may even be related. 1. even though i uninstalled all kernel images apart from the newest one, grub-update puts all my old kernels into the os_prober section of grub.cfg. 2. when i run update-grub, it complains: "error: found two disks with the index 1 for RAID md2. error: superfluous RAID member (2 found)" BTW, md2 is where i store data, not the OS. /proc/mdstat 
 * RoyK thinks ubuntu, even LTS versions, are getting less stable
<RoyK> I've seen similar issues, disposable
<uvirtbot`> New bug: #1094944 in squid3 (main) "squid3 remove doesn't delete logrotate conf file" [Undecided,New] https://launchpad.net/bugs/1094944
<SpamapS> q/win 11
#ubuntu-server 2013-01-01
<akerok> Hello everyone.  I have a bit of an issure with my ubuntu server.
<RoyK> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<akerok> I'm running ubuntu server 12.04, and I was trying to configure the IP address to my new static IP so it can be accessed online (it's a web server).  I have to run it in a VM (I'm using Virtual Box for this).  I can no longer access the server using ssh, and I can't get it to allow me to access it directly.  It doesn't even give me the option to log in directly.
<pdtpatrick> Why does pvcreate give an error when ran in a script that device is ignored or filtered but running the same command manually works just fine? http://pastie.org/private/unch4uvjjkmmtpimsxwv8q
<uvirtbot`> New bug: #1094984 in exim4 (main) "/etc/aliases does not work (dpkg-reconfigure error)" [Undecided,New] https://launchpad.net/bugs/1094984
<djx> hello, what's the default image viewer for ubuntu server?
<antix> image viewer? there's no gui for ubuntu server...
<madc|SPYnX> How to start with the GUI in ubuntu server?
<jonbaer> is this the right forum to ask cloud-init questions?
<slyboots> Hello
<slyboots> Anyone aware of a variant of ubuntu that I can install and "blind-bood"
<slyboots> *boot, preferably something that'll auto-start, grab DHCP and start up something like ssh/vnc?
<sw0rdfish> jeez
<sw0rdfish> what were the websites for european VPS' for 5 or 6 euros again?
 * slyboots faceplants
<slyboots> Anyone know how you delete a freaking mdadm RAID array?  It keeps rebuilding despite me deleting it..
<slyboots> And running mdadm --zero-superblock /dev/sdX doesnt work: "mdadm: Unrecognised md component device - /dev/sdX"
<RoyK> slyboots: mdadm --stop ?
<RoyK> slyboots: then just dd a bunch of zeros on the devices
<slyboots> Aye, Thats what Im doing now.. its crazy thats what its come to
<RoyK> a few megabytes will do
<RoyK> on each device
<slyboots> Aye, rebooted and its no-longer seeing the RAID array anymore, weird thats what the solution is though
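The full teardown RoyK describes, spelled out (device names are examples, and this destroys the array's data). Note that `--zero-superblock` wants the member devices the array was built from; pointing it at a non-member is what produces the "Unrecognised md component device" error:

```shell
sudo mdadm --stop /dev/md0              # stop the running array first
sudo mdadm --zero-superblock /dev/sdb1  # repeat for each member device
sudo dd if=/dev/zero of=/dev/sdb1 bs=1M count=8  # belt and braces: blank the first few MB
```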
<slyboots> Still trying to figure out the best way to build a NAS, 4x disks.. quite keen to try ZFS but.. eh
 * RoyK finds that a rather terrific solution ;)
<RoyK> ZFS is cool, but not as flexible as md
<RoyK> and under fuse, write speeds suck rather hard
<slyboots> I thought it was native, not fuse
<slyboots> (Although write-speeds are awful)
<RoyK> no native zfs in linux yet, that is, zfsonlinux exists, but probably not stable yet
<RoyK> and will probably *never* enter any distro because of licensing
<slyboots> Aye, mdadm seems a bit.. I dunno, build it once and it had to resync twice.. :P
<RoyK> MD is like, build a mirror, convert to raid-5, add a few drives and convert to raid-6, get low on space, convert to raid-5 again...
<RoyK> zfs is like, build a raidz1 VDEV and it'll stay in that form until you remove it
<slyboots> Well, thats not a huge problem. I've got a lil hp microserver so it only takes 4x disks anyway
<RoyK> but then, zfs has checksumming and support for l2arc and slog, ssd caching
<slyboots> But yea, the performance is *awful*
<RoyK> slyboots: then use omnios or openindiana or some other illumos distro - there it's native, and pretty neat
<slyboots> Yea I tried openindiana, spent about an hour sobbing in a corner lol..
<RoyK> performance with MD is quite good ;)
<slyboots> Yea with mdadm Im getting about 60-80MB/sec over GigE which is about.. well, as much as you can hope for
 * RoyK had 350TiB or so on openindiana at his last job
<slyboots> ZFS was about.. 20MB/sec?
<RoyK> that's because of fuse
<slyboots> Well, might try a native zfs, see if it really makes a difference
<RoyK> native zfs is quite fast
<RoyK> it won't be as fast as MD, though, because of the checksumming
<slyboots> well I figure it shouldnt be slower by a factor of 4..
<RoyK> of course, you can turn that off for a dataset, but that somewhat takes away the fun of zfs
<RoyK> slyboots: you can't compare native zfs with zfs under fuse...
<slyboots> Yea, Checksumming is good.. dedup not so much
<RoyK> don't use it
<RoyK> I mean DO NOT USE DEDUP ON ZFS
<RoyK> I've spent hours testing that
<slyboots> Oh neat, OmniOS has an esxi application, sweet
<RoyK> there's a very good reason why dedup never entered Solaris 11
<slyboots> Its just awful? :P
<RoyK> well, it doesn't work too well, unless you have a terabyte of RAM or so
<RoyK> slyboots: but with something illumos, you get snapshotting and perhaps SSD caching (if you have room for that in that microserver)
<slyboots> Nah, its pretty tight fit unless I did something over USB/eSATA
<RoyK> slyboots: but then, you get snapshotting if you're brave enough to use btrfs, and ssd caching if you're brave enough to use bcache ;)
<slyboots> lol, lots of interesting things you can do but.. eh, its only a lil server so.. playing it safe
<RoyK> ext4 on MD is safe
<RoyK> omnios or openindiana should be safe too
<slyboots> Aye, shall be interesting to test em
<RoyK> if you have windows clients, snapshots show up under "previous versions" in windows if you have an illumos server (given you use the CIFS server, and not Samba)
<slyboots> Ah cool, shadow-copy support?
<RoyK> yep
<slyboots> Neat :)
<RoyK> setup a server on that for a remote office in my last job - never heard a word of anything related to data restores later ;)
<RoyK> the autosnapshot service in Illumos is rather good for such things
<RoyK> unless you do something stupid as starting to use some file area with snapshotting as a spooling space or temp space
<slyboots> heh, we still have to take care of that manually at work :P
<RoyK> slyboots: as I said, zfs doesn't support extending VDEVs or changing RAID levels. It supports replacing drives with bigger ones, though (given zpool set autoexpand=on)
<slyboots> Kinda shitty like that, odd you cant expland
<slyboots> *expand
<RoyK> it'll require the famous block pointer rewrite mechanism that was reported to be worked on some three years ago
<slyboots> :P
<sw0rdfish> errr
<RoyK> er?
<sw0rdfish> i forgot the website with 6 euro VPS
<sw0rdfish> germany based vps
<RoyK> no idea
<RoyK> google sent me to http://www.alvotech.de/
<RoyK> google:vps 6 euro site:.de
<slyboots> RoyK: Dont suppose you know any good guides for this OmniOS :P
<RoyK> slyboots: just follow the guides for openindiana or even opensolaris
<RoyK> same thing, different wrapping
<RoyK> slyboots: but if you want to install on a microserver, I guess that only has four drives, are all four of them meant to be used for data?
<RoyK> if so, use a pen drive for the root
<RoyK> or two
<slyboots> Aye, already got that all setup
<RoyK> zpool create mydatapool dev1 dev2 dev3 dev4
<RoyK> probably something like c0t0d0
<RoyK> don't partition them
<RoyK> slyboots: btw, /j #omnios for this
 * slyboots nods
<slyboots> -So..... Filesystems!
<slyboots> Its on a home NAS, mostly large files like media.. performance is important so Im thinking ext4, which appears to be the go-to FS.. but I've heard xfs is quite good too
<RoyK> slyboots: no big difference
<RoyK> ext4 is a good allrounder, xfs is perhaps better on large files, but sucks sick through a straw when it comes to metadata operations, like lots of files in a directory
<slyboots> Mmm..
<slyboots> Aye, I've decided to sod ZFS, at least for now
<slyboots> Will stick with ZFS..
<slyboots> Found some awesome tweaks for mdadm that got the rebuild down from 10+ hours to about 3
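The usual resync-speed tweaks (likely what slyboots found) are the md sysctls; values are per-device KB/s and the numbers here are only illustrative:

```shell
echo 50000  | sudo tee /proc/sys/dev/raid/speed_limit_min
echo 200000 | sudo tee /proc/sys/dev/raid/speed_limit_max
# for RAID5/6, a bigger stripe cache can also help:
echo 8192 | sudo tee /sys/block/md0/md/stripe_cache_size
```

These only persist until reboot unless added to /etc/sysctl.conf or a boot script.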
<RoyK> zfs is good for dedicated storage systems IMHO
<slyboots> Aye, I've used FreeNAS and its worked well in the past, no idea why its so god awful inside a VM
<RoyK> erm
<RoyK> don't use a VM for hosting storage
<RoyK> just don't
<RoyK> it's stupid
<slyboots> Eh, I dont have the hardware to avoid it
<slyboots> Maybe in a few months when Im not crazy poor will buy something dedicated
<RoyK> then setup the system with native drives and use it for virtualization
<RoyK> not the other way around
<qman__> actually, that's not true anymore of xfs
<qman__> but I don't trust it because it's lost my data more than once
<qman__> I'm going to zfs myself
<RoyK> qman__: really? they fixed it?
<qman__> yes, as of ~3.0
<qman__> kernel 3.0 that is
<RoyK> ok
<slyboots> I dunno, its hard enough getting zfs going inside OpenIndiana, I would dread setting up virtualisation inside that too
<slyboots> Plus, I've no screen for my microserver lol
<RoyK> slyboots: it's no problem getting zfs going good in openindiana ;)
<RoyK> qman__: I chose !zfs because of the flexibility of MD
<slyboots> RoyK: yea but its using it for aother things too, its too.. "Strange"
<slyboots> :P
<RoyK> slyboots: well, I can't blame you for not knowing what you do, but I can try ;)
<jchamb2010> Hi, does anyone here have knowledge of dual NIC setups and wouldn't mind helping me troubleshoot an issue I'm having? More info: http://ubuntuforums.org/showthread.php?t=2100344
<slyboots> But yea, mdadm works "Fine" inside a VM environment
<slyboots> Get about 60-80MB writes, thats about the best I could hope for
<RoyK> slyboots: Just don't run storage servers virtualized
<RoyK> slyboots: really
<qman__> jchamb2010, you have two gateways, don't do that
<RoyK> slyboots: as in *really*
<jchamb2010> qman__ : is that really all it is?
<qman__> if you need both gateways you need to either set up metrics, or configure load balancing
<qman__> otherwise get rid of one
<jchamb2010> qman__ : IDK why I didn't try that
<qman__> yep, that's all it is
<qman__> as-is, they're treated as identical
<jchamb2010> I'll try that and see what happens
<qman__> and the kernel stupidly assumes that they are identical in every way, and just chooses at random where to send traffic out
<qman__> regardless of its nature
<qman__> and expects it to come back from either
<jchamb2010> I can disable the eth0 though, and eth1 still won't work
<qman__> it might not be configured right, or plugged into the right spot
<qman__> is 74.91.21.41 pingable through eth0?
<jchamb2010> yes
<jchamb2010> what you said works though :) i removed a gateway and it started working
<qman__> 74.91.31.45 is pingable from here, so it looks like you're all set
<jchamb2010> Thank you very much :)
<jchamb2010> I replied to the thread with the fix and thanking you. Have a good afternoon :D
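qman__'s fix in config form: in /etc/network/interfaces, only one interface carries a `gateway` line (addresses below are placeholders):

```
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

auto eth1
iface eth1 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    # no gateway here -- a second default route is what broke things
```

If both uplinks genuinely need to carry a default route, route metrics or a load-balancing setup are required instead, as qman__ notes.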
<uvirtbot`> New bug: #994521 in openldap (main) "package slapd 2.4.28-1.1ubuntu4 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 1" [Undecided,Incomplete] https://launchpad.net/bugs/994521
<SnowBro-> hello there
<SnowBro-> need some help please
<SnowBro-> first of all i want to say sorry for my english, it's not my native... now the issue: im NEW to Linux and i picked Ubuntu Server because of the great support community it has. im trying to put a machine to work as a server, i already installed the LAMP environment (from tasksel), and i have everything working inside my network (i can access the server from every machine inside my house), but i dont know how to expose
<SnowBro->  the server to the outside, thats my issue, any clues?
<rckrd> hey guys, so i've have a small personal server and i want to use it for a few different things (development, music server, etc).  Whats the best way to keep these functions separate?  Virtual machines?  Or just different users and groups?
<greppy> rckrd: unless you have LOTS of resources ( RAM & CPU ) just use different users/groups.
<uvirtbot`> New bug: #1039763 in cinder "Multiple nova-volume services fails to create volume on second storage server when using Nexenta driver" [Undecided,New] https://launchpad.net/bugs/1039763
<uvirtbot`> New bug: #1095145 in openssh (main) "package openssh-server 1:5.9p1-5ubuntu1 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/1095145
#ubuntu-server 2013-01-02
<slyboots> What the flying..
<slyboots> For some reason, when my server reboots.. the RAID5 array I've built as /dev/md0 appears as /dev/md127 and its always stuck in a "PENDING" state, then it moves into "RESYNC"
<qman__> yeah, I've had that before
<qman__> it was caused by a screwed up disk and some uuid confusion
<slyboots> Hmm..
<slyboots> I've given them the disk by its ID though
<slyboots> Which.. shouldnt change
<qman__> your array is probably fine, for me it was just a matter of correcting the problems and getting it to assemble again
<qman__> I did it by removing the troublesome disk and manually assembling, then letting it fail over to another
<qman__> then fixing the messed up disk
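A common cause of the md0-to-md127 rename (separate from qman__'s bad-disk case) is the array not being listed in the copy of mdadm.conf baked into the initramfs; a frequently suggested fix, sketched:

```shell
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u   # rebuild so early boot knows the array as md0
```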
<uvirtbot`> New bug: #1095180 in etckeeper (main) "incorrect bash-completion" [Undecided,New] https://launchpad.net/bugs/1095180
<uvirtbot`> New bug: #1095181 in etckeeper (main) "terrible breakage on git rebase" [Undecided,New] https://launchpad.net/bugs/1095181
<neologico> hi people!
<neologico> somebody knows about linux drivers installation?
<neologico> helpme please
<lifeless> ~ask | neologico
<lifeless> !ask | neologico
<ubottu> neologico: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<neologico> ok understand
<neologico> Can I copy a compiled driver (wl.ko) from one machine to another and install it? (wl.ko is the driver compiled for the broadcom 43xx wireless card)
<neologico> ?
<lifeless> neologico: for the same kernel version only
<lifeless> neologico: best to use dkms to trigger a compile and install on the other machine as well
<neologico> lifeless: is dkms an installable application?
<lifeless> neologico: its a standard package, if thats what you mena
<lifeless> *mean*
<neologico> yes understand; even though my language is spanish:)
<neologico> I can not even compile the driver after installing backtrack with updated repositories
<neologico> well; thanks lifeless, i'll continue this another time, it's too late here. Thanks!!
<neologico> bye everybody happy 2013
<inflex> hiya all, planning on using U12.04LTS server on a few HP-NAS boxes.  Out of curiosity, is there much trouble in getting it all to fit within 4GB? ( rsync, ssh, and samba are the only services being offered )
<vezq> inflex: will fit just fine
<inflex> tx.  Planning on using a Sandisk Cruzer Fit USB 4GB as the OS/boot.
<inflex> (Amazing how tiny those things are)
<LuizAngioletti> Hello there! How can a machine be resolving names if its resolv.conf is blank?
<LuizAngioletti> I do have a dns server running, but does it affect my external name resolution?
<RobCWDudley>  LuizAngioletti: DNS can be passed in via DHCP iirc and Network config handles DNS from outside of resolv.conf
<hallyn> ahs3: hey, new netcf release is out :)  prevents segfault in use by threaded app iiuc.  i assume now is still a bad time for debian upload/
<hallyn> zul: good morning!  could I persuade you to sync netcf from debian-experimental into raring?
<zul> hallyn: i could be persuaded
<hallyn> zul: looking at bug 591489.  i don't get it.  vnc doesn't fwd audio, so why would this bug matter?  (hoping you understand :)
<uvirtbot`> Launchpad bug 591489 in libvirt "No sound in virt-manager (QEMU_AUDIO_DRV set to none by libvirtd)" [Medium,Confirmed] https://launchpad.net/bugs/591489
<ahs3> hallyn: hrm.  good question; i'll check on the freeze, but yeah, i suspect it may be "hurry up and wait" a bit
<zul> hallyn: cant tell you but its in lucid have you tried something a bit newer?
<zul> hallyn: ill put the 1.0.1 up for you somewhere this week btw (with the lxc attach stuff backported)
<hallyn> zul: i haven't tried any, no :)
<hallyn> zul: IIUC, they're saying they want to start a VM with virt-manager, and have the qemu guest use the host's sound card even though they're using vnc
<hallyn> so if they were doing vnc remotely, the server would be playing sound
<zul> right it wouldnt
<hallyn> i see yeah that's what they want  (comment #8)
<bieb> I installed 12.04 server before the holidays.. everything was working fine, I have a static IP and downloaded software and updates.. today I am not able to get to any sites; apt-get update fails with "unknown host", and I get the same error if I try to ping anything. I have looked at /etc/resolvconf/resolv.conf.d/original and the correct DNS servers are in there.. any ideas where to look next?
<hallyn> zul: tempted to set it to wontfix rather than ignoring it...
<hallyn> but i guess i won't
<zul> i would
<zul> but i would also get them to test something newer
<zul> since they are like an lts behind
<hallyn> zul: the proposed fix requires running the vm as the logged-in user (to connect to the pulseaudio session) so i doubt it's "fixed"
<hallyn> (where "fixed" == broken imo, but...)
<hallyn> so i don't really want to ask - seems like a waste of time
<zul> it does....besides its not a fix that we would SRU imho either
<davmor2> hey guys on precise server I just ran sudo apt-get update and got this error any ideas? W: GPG error: http://gb.archive.ubuntu.com precise Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>
<melmoth> davmor2, you need to import a public key.
<melmoth> which key.. i'm not sure
<davmor2> melmoth: it's the archive repo key and seems to be in place
<melmoth> http://en.newinstance.it/2009/06/22/the-following-signatures-were-invalid-badsig-40976eaf437d05b5-ubuntu-archive-automatic-signing-key/
<davmor2> melmoth: many thanks
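The recipe at that link boils down to throwing away apt's cached index files and re-fetching them, roughly:

```shell
sudo apt-get clean
sudo mv /var/lib/apt/lists /var/lib/apt/lists.bad
sudo mkdir -p /var/lib/apt/lists/partial
sudo apt-get update
```

BADSIG here usually means a corrupted cached index (often a flaky mirror or proxy), not a missing key, which is why re-importing keys alone rarely helps.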
<sw0rdfish> whats the 6euro a month vps provider's website again?
<RoyK> sw0rdfish: you asked yesterday, and I even did some googling for you
<slank> I notice that the default conf for rsyslog specifies $FileGroup adm, but then $PrivDropToGroup syslog. The latter appears to negate the former - I'm seeing new files created as syslog:syslog.
<sw0rdfish> errr
<sw0rdfish> sorry RoyK..... plz highlight me next time, and I found what you said, thanks man
<sw0rdfish> damn man how did you do that... that is a nice search technique ;)
<RoyK> sw0rdfish: I guess I need to prefix with your nick...
 * RoyK knows search engines :)
<sw0rdfish> hehe
<hallyn> zul: not sure you care to follow my process on this :)  but in case you do, i've pushed my changes to the debian qemu tree to github.com/hallyn/qemu.  i'm going to now push that (based on 1.2 from end of october, i.e. what is in ppa:serge-hallyn/crossc) to raring, then i will merge in the new 1.3.0 qemu from debian's git tree (which hasn't been tested)
<hallyn> though i see it's in debian-experimental
<zul> hallyn: okie dokie
<hallyn> so, seems impossible that *something* would not go wrong with the 1.2.0 qemu push,
<hallyn> which will remove qemu-kvm and qemu-linaro source packages
<hallyn> oh wait.  what am i thinking
<hallyn> clearly i won't have rights for this
<RoyK> hallyn: famous last words...
<hallyn> RoyK: i'm not happy about it :)  but it's gotta be done...
<jtd> hi guys.
<jtd> so, I have Ubuntu Quantal authing to AD for logins and it seems to be working well. I created /home/DOMAIN for domain accounts to have their home directories created, but instead of creating home directories it's just complaining they don't have one and putting them in /. How do I get it to create their homedir on login if it doesn't exist?
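jtd's question goes unanswered in-channel, but the usual mechanism for creating a home directory on first login is the pam_mkhomedir module; a hedged sketch of the common setup:

```
# /etc/pam.d/common-session -- add below the existing session lines
session required pam_mkhomedir.so skel=/etc/skel umask=0022
```

With this in place, a domain user logging in for the first time gets /home/DOMAIN/username populated from /etc/skel instead of being dumped in /.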
<hallyn> zul: so actually i think i'll have to (a) ask someone (you?) to push the source from my ppa, and (b) ask for the new src pkg to be added to ubuntu-server group
<zul> hallyn: most likely :)
<zul> hallyn: or anyone else on the team
<hallyn> zul: you'd prefer someone else?
<zul> hallyn: nah but if you can spread the love that would be cool
<hallyn> zul: since i expect problems with this one, i'd (sorry :) rather have you do this, and ask roaksoax to do the netcf sync
<zul> hallyn: sure
<hallyn> zul: thanks!  so after you push, i'll ask on #ubuntu-devel to have qemu added to server set
<hallyn> roaksoax: ping (when you're around) :)
<roaksoax> hallyn: here
<hallyn> roaksoax: could you sync netcf from debian-experimental into raring?
<roaksoax> hallyn:  sure :)
<hallyn> roaksoax: thanks.
<hallyn> (honestly i need to package 0.2.3 too, but i prefer to go through debian first for all netcf now)
<hallyn> (and that potentially awaits freeze)
<sw0rdfish> w00t! my american alienvps is doing ok for paltalk usage!
<sw0rdfish> I'm soo happy
<sw0rdfish> in that case I don't need a european one :D
<roaksoax> hallyn: yeah I agree,  but if we are close to freeze and it is not in debian yet, then I would suggest to package it
<hallyn> roaksoax: we're not close to freeze just yet right?
<roaksoax> hallyn: nope
<roaksoax> we still have time :)
<hallyn> do you remember offhand when FF is?
<roaksoax> hallyn: March 7th is FF
<roaksoax> hallyn: https://wiki.ubuntu.com/RaringRingtail/ReleaseSchedule
<hallyn> thx just found it :)
<hallyn> ok so i'll put this down in my feb tickler file to package it for sure
<roaksoax> hallyn: yeah, another thing you could do to help Debian is package it yourself and ping the maintainer to try to get the new version in (especially if there are packaging changes)
<hallyn> roaksoax: no no, i'm the maintainer :)
<hallyn> roaksoax: i just want to wait until i'm sure wheezy is not in freeze
<roaksoax> hallyn: yeah just noticed :)
<hallyn> roaksoax: thanks again.  (going to step away now for a few mins)
<roaksoax> hallyn: syncpackage: Request succeeded; you should get an e-mail once it is processed.
<daguz> I'm trying to install quantal under xen(suse) . Last time I followed these directions for natty ( http://www.mmacleod.ca/blog/2011/05/ubuntu-natty-narwhal-and-xen/) and it worked fine.  Now I cannot seem to get it to go with the latest version.  After install, I get "boot loader didn't return any data"
<Daviey> adam_g: Hey, are you able to fix up the CA report to present for grizzly aswell?
<adam_g> Daviey: yeah, *should* be able to just add a new set of cronjobs to pull grizzly/raring instead of folsom/quantal. ill confirm in a bit and let you know what to add
<smw_> "Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces" What should I do then?
<RoyK> restart networking?
<smw_> RoyK, when I ran networking restart, it gave that warning
<smw_> so if one way is deprecated, another should be used instead, right?
<smw_> anyways, I am running into other issues :-\
<RoyK> did you try that command I just gave you?
<smw_> oh, that is a command XD
<smw_> nope
<smw_> doesn't appear to be an upstart job
<smw_> http://fpaste.org/M2vw/ I added eth1:1 to my /etc/network/interfaces and then did /etc/init.d/networking restart
<RoyK> root@bcache:~# restart networking
<RoyK> networking start/running
<RoyK> that's on quantal, though
<smw_> I am on precise
<smw_> works for me too
<RoyK> dunno - I have no VM on precise, and wouldn't like to test on a box in roduction ;)
<RoyK> *production*
<smw_> RoyK, this is odd
<smw_> RoyK, http://fpaste.org/SGCd/
<LuizAngioletti> ;quit
<adam_g> Daviey: yeah, the current script in lp:ubuntu-reports should work the same way. './gather-versions.py grizzly && ./ca-versions.py -r grizzly' should dump a ca_versions_grizzly.html
<Daviey> adam_g: Ah, i thought you might try and put it on the same output file.. but yeah, ok, that makes sense
<Daviey> Thanks
<adam_g> Daviey: oh, ya i could do that i suppose.
 * hallyn looks for utlemming
<hallyn> oh well
<Daviey> adam_g: hmm, unless you think it is easy.. don't stress about it.. two pages is fine.
<ribo> so, I used to be able to install 32bit packages like so: apt-get install libx11-6:i386 in server 12.04, no longer, it seems for 12.10
<ribo> are i386 packages still avilable?
<patdk-lap> yes
<ribo> is package:i386 no longer the way to do it?
<TheLordOfTime> last i checked that's still how to do it
<TheLordOfTime> unless there's a bug in apt/apt-get which i missed during my rounds of watching the bug announce channel
<ribo>  sudo apt-get install libx11-6:i386 .... E: Unable to locate package libx11-6
 * TheLordOfTime pulls up a quantal VM
<ribo> (I am using the canonical EC2 image)
<TheLordOfTime> and which mirror is it pulling from :P
<ribo> us-west-2
<TheLordOfTime> bleh my VM segv'd.
<TheLordOfTime> REINSTALLATION TIME
<ribo> hehe
<ribo> might have figured it out
<ribo> yep
<ribo> dpkg --add-architecture i386
<ribo> apparently that used to be default in 12.04 amd64 server
<ribo> and it no longer is
<ribo> woo success
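ribo's fix as a complete sequence, for anyone hitting the same "Unable to locate package" error on 12.10 amd64:

```shell
sudo dpkg --add-architecture i386   # no longer enabled by default on amd64
sudo apt-get update                 # fetch the i386 package indexes
sudo apt-get install libx11-6:i386
```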
<mkander> What is the best bare-metal backup strategy for an ubuntu server?
<patdk-lap> veeam? acronis? backupexec?
<mkander> checking them out now
<mkander> both acronis and backupexec seems to do what I want
<patdk-lap> or did you mean free backup software?
<mkander> It doesnt have to be free if it works
<patdk-lap> I played with backupexec, but never put it into production
<patdk-lap> I mainly use veeam and acronis
<patdk-lap> only two free ones I know of, and I haven't digged deep into either
<patdk-lap> bacula, and hmm, something that starts with an a
<mkander> Seem like you have to boot from a CD
<patdk-lap> you always have to
<patdk-lap> to do a baremetal restore
<sarnold> coworkers use rsnapshot, suggested looking into e.g. horcrux for scripting around rsnapshot
<sarnold> but that isn't a back-to-working sort of tool, more of a "keep my data backed up" sort of tool
<patdk-lap> ya, I do multiple types of backups
<patdk-lap> including rsync, backuppc, and baremetal
<patdk-lap> all depending on how paranoid, and what type of recovery I want
<mkander> I looked into clonezilla
<mkander> that also required a boot
<patdk-lap> also, nice, to know if some backup method fails, the others will likely still be working
<patdk-lap> acronis installs onto ubuntu, it shouldn't need reboots at all to backup
<mkander> Hmm interesting
<patdk-lap> other thing you could do
<patdk-lap> use lvm
<patdk-lap> create snapshot
<patdk-lap> backup snapshot
<patdk-lap> release snapshot
<patdk-lap> but now your basically building your own backup solution
<patdk-lap> all depends what you want
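patdk-lap's snapshot-backup steps, sketched (VG/LV names, the snapshot size, and paths are all placeholders):

```shell
sudo lvcreate --snapshot --size 5G --name rootsnap /dev/vg0/root
sudo mount -o ro /dev/vg0/rootsnap /mnt/snap
sudo tar -czf /backup/root-$(date +%F).tar.gz -C /mnt/snap .
sudo umount /mnt/snap
sudo lvremove -f /dev/vg0/rootsnap   # snapshots cost write performance, so drop it promptly
```

The snapshot size only needs to hold the writes that happen during the backup, not a full copy of the LV.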
<mkander> I think I use lvm now.. =P I am not that pro with this
<patdk-lap> I'm all virtualized, so I just use veeam to backup everything
<mkander> Lvm really makes the snapshot consistent? I have a database running as well
<adam_g> hallyn: ping
<patdk-lap> it makes a *consistent* disk snapshot
<mkander> okey
<patdk-lap> flushing your applications to disk is a different story, and all backup solutions will have that issue
<patdk-lap> that is why microsoft created the vss stuff, so databases and things just plug their own vss driver in, and when someone requests that, they all flush
<patdk-lap> though really, I would not depend on that for a database backup
<patdk-lap> while it is nice, I would definitely also backup the db's separately, using a db-specific tool
<hallyn> adam_g: what's up?
<patdk-lap> it's more likely for your db to have issues, or get corrupted from user error or something
<patdk-lap> and easier to restore that way also
<mkander> Ok, so the db is most prone to these kind of problems
<patdk-lap> well, the more you use sometihng, the more prone it is to issues
<mkander> the rest of the os would backup fine
<sarnold> patdk-lap: too true.
<mkander> okey
<patdk-lap> likely, doing weekly backups baremetal, and daily backups of the db would be good
<patdk-lap> but all depends on what it is you need, and what goals you want
<adam_g> hallyn: slowly making my way thru comments in bug #1057024 but, testing some openstack updates in quantal-proposed and running into a permissions issue on /dev/kvm  with the qemu-kvm version thats in quantal-proposed. what is responsible for setting group ownership of /dev/kvm with the newer package?
<uvirtbot`> Launchpad bug 1057024 in qemu-kvm "kvm kernel module always loaded, without setting /dev/kvm permissions" [High,Fix committed] https://launchpad.net/bugs/1057024
<mkander> Goal #1: Not having to reboot the server once a week and carry a monitor into the basement :P
<patdk-lap> like my db servers, nothing changes on them, so just doing database backups is enough, as long as I have one baremetal, or I could just easily rebuild it quickly enough
<mkander> maybe there is a way to automate a boot-backup solution?
<patdk-lap> mkander, likely, doing a baremetal backup once every 3 months might be enough
<patdk-lap> and backing up /home /var/lib /etc daily
<patdk-lap> well, if websites and mail on it, /var/www and /var/mail also maybe
<hallyn> zul: if you're pushing a libvirt soon-ish, there's a trivial buglet in bug 1095140
<uvirtbot`> Launchpad bug 1095140 in libvirt "documentation flaw "libvirt", rather than "libvirtd" group" [High,Triaged] https://launchpad.net/bugs/1095140
<mkander> Yeah agree, but I want everything automated
<mkander> maybe I could get clonezilla to do it automated
<patdk-lap> well, you could automate all that easily, except for the baremetal backups
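The automation patdk-lap describes (crash-consistent LVM snapshot, daily rsync of the changing directories, plus a separate database dump) could be sketched as a cron script; the volume group name "vg0", LV name "root", the MySQL credentials, and the /backup target are all hypothetical placeholders, so treat this as an outline rather than a drop-in job:

```shell
#!/bin/sh
# Nightly backup sketch (assumed names: VG "vg0", root LV "root",
# a local MySQL instance, and a backup target under /backup).
set -e

# Dump the database separately, as recommended above.
mysqldump --single-transaction --all-databases > /backup/db-$(date +%F).sql

# Take an LVM snapshot for a crash-consistent view of the filesystem.
lvcreate --size 5G --snapshot --name rootsnap /dev/vg0/root
mkdir -p /mnt/rootsnap
mount -o ro /dev/vg0/rootsnap /mnt/rootsnap

# Copy the directories that actually change day to day.
rsync -a /mnt/rootsnap/etc /mnt/rootsnap/home /mnt/rootsnap/var/lib /backup/daily/

# Tear the snapshot down again.
umount /mnt/rootsnap
lvremove -f /dev/vg0/rootsnap
```

The bare-metal image (e.g. via Clonezilla) would still be a separate, less frequent job, as discussed above.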
<hallyn> adam_g: /dev/kvm perms are supposed to be fixed by the end of qemu-kvm.postinst
<hallyn> by the call to udevadm
<hallyn> adam_g: i thought i'd verified that one already fwiw
<adam_g> hallyn: hmm. im installing nova-compute, which pulls in qemu-kvm and i end up with root:root. ill keep poking
<mkander> Seems like Clonezilla is an open source program that creates complete disk backups. Looks like some people have managed to automate the process as well.
<hallyn> adam_g: hm
<hallyn> adam_g: oh wait!  yes there is (we believe) a bug in udev
<hallyn> lemme see where i filed it
<hallyn> adam_g: https://bugs.launchpad.net/ubuntu/+source/udev/+bug/1092715  lemme ping slangasek in -devel
<uvirtbot`> Launchpad bug 1092715 in udev "udevadm trigger --action=change not working since quantal?" [Undecided,New]
<adam_g> hallyn:  yippie, thanks!
<adam_g> hallyn: so is it unlikely the qemu-kvm update will be moving to -updates anytime soon?
<hallyn> not sure
<hallyn> in precise it should be :)
<hallyn> maybe i should spend tomorrow looking into the udev bug
<adam_g> hallyn: im concerned with quantal ATM. testing these nova SRUs assumes qemu-kvm works as expected. :\ i suppose i can work around the issue by manually setting permissions on /dev/kvm during deployment but i'd consider it a regression in the qemu-kvm in quantal-proposed, no?
<hallyn> adam_g: no, bc as slangasek points out that never worked right in quantal, so it's not a regression
<hallyn> adam_g: symptom is slightly different of course - before it would mark it the right group, but it wouldn't give group rw perms
<hallyn> adam_g: i'm sneaking away - ttyl
<adam_g> hallyn: well
<adam_g> hallyn: 1.2.0+noroms-0ubuntu2 gives me correct crw-rw---- 1 root kvm 10, 232 Jan  2 17:08 /dev/kvm
<hallyn> adam_g: getfacl /dev/kvm
<hallyn> (gotta run, bbl)
<adam_g> cya
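The checks being traded here, spelled out as a sketch; the getfacl output shown is what a correctly configured host would roughly report (an extra ACL entry, e.g. for a libvirt user, may also appear):

```
$ ls -l /dev/kvm
crw-rw---- 1 root kvm 10, 232 Jan  2 17:08 /dev/kvm
$ getfacl /dev/kvm        # shows ACL entries layered on top of the group bits
# file: dev/kvm
# owner: root
# group: kvm
user::rw-
group::rw-
other::---
$ sudo udevadm trigger --action=change --sysname-match=kvm   # re-run the rule by hand
```

If the udevadm trigger call fixes the ownership but the postinst did not, that points at the udev bug hallyn links below.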
<jeeves_moss> how do I setup a proper bidirectional replication FROM a windows DNS server TO a bind server?
<sarnold> bidirectional? I thought the whole point of dns replication was master / slave
<jeeves_moss> sarnold, ideally, however, the BIND server will need to do updates to the Windows environment.
<jeeves_moss> sarnold, Ideally, the BIND server will be the slave
<sarnold> jeeves_moss: slave-only ought to be easy, configure the axfr access controls on your windows server to let your bind zone transfer, and set up the bind system to do zone transfers..
<sarnold> jeeves_moss: .. but getting that bind to update the windows dns, well, that just doesn't seem 'normal' to me.
<jeeves_moss> sarnold, right now, I need to setup the BIND server to be a slave, and pull all newly created domains, etc FROM the windows servers.  there is a technet howto posted by microsoft that's a step by step, but I can't find it
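The slave-only side sarnold describes is a few lines of named.conf; the zone name and master address below are placeholders for the AD zone and the Windows DNS server, and the Windows side must still be configured to allow zone transfers to the BIND host's IP:

```
// /etc/bind/named.conf.local (sketch; "example.local" and 192.0.2.10
// stand in for your AD zone and your Windows DNS server)
zone "example.local" {
    type slave;
    masters { 192.0.2.10; };
    file "/var/cache/bind/example.local.db";
};
```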
#ubuntu-server 2013-01-03
<F3Speech> looking for some help with a networking error using the cli - basic prob is the usb wifi adaptor will only connect to the network if i reset my router
<genii-around> This is my last game, need to get going after
<genii-around> Misdirect :(
<Free99> Hello everyone. Trying to make a high-availability KVM setup, but the only thing that is giving me trouble is figuring out how to use iptables
<Free99> the setup is pretty simple, two machines that replicate the disk images for the VMs via drbd, and the VMs running on both machines for quick failover
<Free99> issue is, there would be an ip conflict for each VM... any suggestions?
<Free99> my idea is to use arptables and iptables to block off all of one of the machine's VMs while still letting DRBD and administrative access through
<sarnold> Free99: wild guess here, but could you solve your problem by giving each vm two interfaces? give both vms a private IP and one vm a public IP, and use linux-ha or similar tools to failover to the other VM and takeover the public IP on its other interface?
<sarnold> (you may be able to do it entirely with one interface, but two may be easier to use existing tools..?)
<Free99> sarnold: well... I don't really know anything about linux-HA.. I have DRBD setup already, bridged networking, etc.. I'd just like to block everything that isn't going to the host VM, then use a sysctl to disable iptables and arptables if the other host goes down
<Free99> I guess I'll go check on Linux-HA, but that implies adding new interfaces to every single one of my VMs, right?
<patdk-lap> heh, would be so simple to use linux-ha (pacemaker)
<patdk-lap> doesn't need to
<patdk-lap> all depends on how you do your iptables config
<Free99> oh wait, I'm using linux ha, specifically heartbeat! lol
<patdk-lap> that is really old
<Free99> patdk-lap: to be honest, haven't gotten to that point yet, still doing the DRBD setup, stuck on the iptables though
<Free99> oh I see, I should have said pacemaker
<Free99> my bad
<la_> need help understanding what im doing in Aptitude
<la_> i think it just so simple i do not understand
<blim_> hi, Does anyone know a guide on how to set the ToS (type of service) on a client os?
<jabba_> hello
<jabba_> i just set up an encrypted partition (an SR for a Xen server) and added it to /etc/crypttab with the option "timeout=30". sadly the option seems to be ignored (the boot process just keeps going) and i am not able to enter the passphrase at boot. anyone have an idea what's wrong?
<jabba_> referring to this bugreport: https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/468208
<uvirtbot`> Launchpad bug 468208 in cryptsetup "cryptsetup passphrase prompt at boot not working if waiting too long (w/o usplash)" [Undecided,Triaged]
<jabba_> is it possible that it's an upstart related problem?
<xnox> jabba_: cryptsetup unlocking is done in initramfs while talking to plymouth (these days)
<xnox> jabba_: I didn't experience it timing out, but I did experience three attempts only.
<xnox> jabba_: is your root file system encrypted or some other partitions?
<jabba_> no, the encrypted partition is an SR (Storage Repository) for a Xen Hypervisor (XCP-XAPI). It is essential that this device is decrypted before the XCP-XAPI service starts.
<jabba_> so is it possible at all to halt the boot process for a defined time (waiting for the passphrase) and, in the case of a timeout, continue booting without decrypting the partition?
<jabba_> why is the timeout option not listed in the crypttab manpage anymore?
<jabba_> xnox: any idea?
<xnox> jabba_: so when root is not encrypted, indeed upstart jobs unlock the partitions. it should be accepting options.
<jabba_> xnox: so upstart is responsible for calling the cryptsetup scripts, BUT the main problem is, that the boot process doesn't get interrupted, for the defined time (timeout=30 in crypttab).
<xnox> the same way it does when rootfs is encrypted.
<xnox> jabba_: where did you find reference to crypttab "timeout" option?
<jabba_> http://manpages.ubuntu.com/manpages/hardy/man5/crypttab.5.html
<xnox> jabba_: but that's for hardy. Are you running hardy? that option is not present in lucid and later.
 * xnox is not entirely sure how cryptsetup is managed in hardy.
<jabba_> xnox: the timeout option is missing in precise... is there a new way to handle this?
 * xnox is looking into the history behind this: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=495509
<uvirtbot`> Debian bug 495509 in cryptsetup "cryptsetup: timeout option does not work anymore" [Important,Fixed]
<xnox> "* Completely remove support and documentation of the timeout option, document this in NEWS.Debian."
<jabba_> can't believe it... that's really sad
<xnox> well your problem seems to be that it's way too quick, instead of non-existing.
<jabba_> xnox: could you explain this please?
<xnox> jabba_: we are processing udev triggers and we should have approx. udev settle timeout to enter the passphrase which is ~30s long. yet earlier you said "i am not able to enter the passphrase at boot" meaning that even if there was a prompt, it vanished too quick.
<jabba_> ok.. i must admit: "i can't follow" :/
<jabba_> xnox: my udev settles too fast?
<xnox> jabba_: I guess I need to experiment with a setup similar to yours. E.g. regular install and one non-essential luks encrypted mountpoint.
<jabba_> and that's why the passphrase prompt times out too early?
<xnox> I am speculating a little bit. But i am not spotting anything that will block boot to capture the passphrase prompt.
<xnox> for non-rootfs luks volumes.
<xnox> bug 518964
<uvirtbot`> Launchpad bug 518964 in cryptsetup "cryptsetup can't get input at boot." [Undecided,Confirmed] https://launchpad.net/bugs/518964
<jabba_> so i am lost :(
<jabba_> xnox: did you change the State of the bug-report?
<xnox> no, it's been there since 2010, I assigned it to myself to check if it can be fixed or not.
<jabba_> xnox: you're my hero :) , thanks!
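For reference, the crypttab semantics under discussion; the mapper name "xensr" and the UUID are placeholders, and `noauto` plus a manual unlock is only one possible workaround once `timeout=` is gone:

```
# /etc/crypttab -- fields: <name> <device> <keyfile> <options>
# hardy-era entry, where timeout= was still honoured:
#   xensr  UUID=0123...  none  luks,timeout=30
# from lucid onward the timeout option was removed (Debian bug 495509);
# a workaround is to keep the volume out of the boot path and unlock it
# later by hand:
xensr  UUID=0123...  none  luks,noauto
```

With `noauto`, the volume can be unlocked after boot with `cryptdisks_start xensr`, before starting the service that needs it.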
<zul> yolanda: ping can you have a look at https://code.launchpad.net/~zulcss/python-cinderclient/fixtures/+merge/141751
<yolanda> hi
<yolanda> ok
<yolanda> done
<zul> yolanda: thanks one more https://code.launchpad.net/~zulcss/python-swiftclient/ftbfs/+merge/141754
<yolanda> done
<zul> yolanda: if you want to have a look at whats wrong with glance and quantum be my guest :)
<yolanda> zul, what's the problem? i saw a fbtfs in your other reviews, is that the same?
<zul> yolanda: glance patches need to be refreshed or dropped and quantum its the testsuite
<yolanda> ok, i'll take a look
<Lovelidge> Is anyone on this channel?
<Free99> hey everyone, I have a bridged connection that I need to filter. Should I use iptables+arptables, or will ebtables do the job of both?
<patdk-lap> heh?
<patdk-lap> that depends on what you want
<patdk-lap> all 3 of those do totally different things
<patdk-lap> if they did the same thing, they wouldn't all exist
<Free99> yeah, that's kind of why I'm a little confused. I basically need to isolate my VMs which are connected via a bridge.. the VMs are replicants of the VMs on a different physical host, so there would be an ip conflict
<Free99> the hosts watch each other using pacemaker, if one machine goes down, the other unblocks its VMs
<patdk-lap> are you doing some kind of memory replication?
<Free99> no, just DRBD. I had been wondering if that would work actually
<patdk-lap> ya, your whole setup is fail
<patdk-lap> think about this: what happens when two computers use the same harddisk?
<patdk-lap> it causes complete corruption
<patdk-lap> your vm should only be running at one location at a time ever, then no harddrive corruption, no ip issues, ...
<Free99> patdk-lap: so what could I do instead? I suppose I would just keep the VMs off on the other machine, keep their disk images synced though
<patdk-lap> drbd keeps the disk images synced
<patdk-lap> just have pacemaker start the vm on the other machine on failover
<Free99> lol I dunno why I thought that'd work
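The pattern patdk-lap describes (DRBD promoted on one node, the VM started only there) can be sketched in pacemaker's crm shell syntax; the resource names "vm1"/"drbd_vm1" and the domain XML path are made-up placeholders:

```
# crm configure -- sketch only; names are hypothetical
primitive drbd_vm1 ocf:linbit:drbd params drbd_resource=vm1
ms ms_drbd_vm1 drbd_vm1 meta master-max=1 clone-max=2 notify=true
primitive vm1 ocf:heartbeat:VirtualDomain \
    params config=/etc/libvirt/qemu/vm1.xml hypervisor=qemu:///system
colocation vm1_on_master inf: vm1 ms_drbd_vm1:Master
order vm1_after_drbd inf: ms_drbd_vm1:promote vm1:start
```

The colocation and order constraints are what guarantee the VM only ever runs where DRBD is primary, avoiding both the disk corruption and the IP conflict.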
<Free99> well, just for my knowledge, if I need to filter these VMs, like for fencing or anything, how would I do that?
<patdk-lap> depends :)
<patdk-lap> you could use iptabls/ebtables/...
<patdk-lap> all depends on what level/layer you want to do it at
<Free99> getting conflicting advice saying iptables doesn't work on br0 and such, others saying it does
<Free99> kind of don't know my dick from my elbow when it comes to networking haha
<patdk-lap> heh?
<patdk-lap> iptables works on everything
<patdk-lap> lets put it this way
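For the record, the bridge-level fencing Free99 originally asked about would be ebtables rather than iptables, since ebtables sees layer-2 frames (ARP included) crossing br0; the MAC address below is a placeholder, and as discussed above not running duplicate VMs at all is the better design:

```
# Drop all frames from a standby VM's (placeholder) MAC at the bridge;
# this covers its ARP traffic too, which plain iptables would not see.
ebtables -A FORWARD -s 52:54:00:aa:bb:cc -j DROP
ebtables -A INPUT   -s 52:54:00:aa:bb:cc -j DROP
# On failover the surviving host simply flushes the rules:
ebtables -F
```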
<uvirtbot`> New bug: #1095710 in linux-lts-quantal (main) "update-grub-legacy-ec2 does not consider 3.5.0-generic as valid for Xen (dup-of: 1005551)" [Undecided,New] https://launchpad.net/bugs/1095710
<uvirtbot`> New bug: #1005551 in cloud-init (main) "Quantal does not boot on EC2" [Critical,Fix released] https://launchpad.net/bugs/1005551
<uvirtbot`> New bug: #1095757 in krb5 (main) "krb5 packages should be updated to v. >=1.10.2 to workaround bug with gssapi kerberos authentication" [Undecided,New] https://launchpad.net/bugs/1095757
<tflgen2> hi guys, i asked this question on the #ubuntu channel but it may have fallen on deaf ears. I've just got a new installation of 12.04 and am trying to set up a mail server following this guide: http://flurdy.com/docs/postfix/edition11.html but I'm having issues with SASL. It shows up as Error: authentication failed: no mechanism available
<sarnold> tflgen2: anything in the logs?
<tflgen2> mail log shows "mailsrv postfix/smtpd[8927]: warning: unknown[192.168.40.180]: SASL PLAIN authentication failed: no mechanism available"
<genii-around> Sounds like a PAM issue
<tflgen2> in /etc/pam.d/smtp:                auth required pam_mysql.so user=mail passwd=QQuz43 host=127.0.0.1 db=maildb table=users usercolumn=id passwdcolumn=crypt crypt=1
<sarnold> tflgen2: looks like pam_mysql.so is part of that guide; do you have it installed?
<sarnold> tflgen2: does the guy say why he has both "mailPASSWORD" and "aPASSWORD"? Why are they different..?
<tflgen2> sarnold: no idea. i just use the same
<sarnold> tflgen2: and can you connect to your mysql database on the command line with the specified username and password?
<tflgen2> yep
<tflgen2> hang on
<tflgen2> for a sec there, thought I had missed some files to install… not the case
<tflgen2> sarnold: i'm kinda at a loss of where to look here. what else can i look at regarding pam?
<sarnold> tflgen2: are there any errors in the auth log?
<tflgen2> no, it doesn't even look like it hits the auth log.
<sarnold> you could always strace it. it's not much fun (especially if its your first time iwth strace), but you can at least watch what it tries to do..
<tflgen2> https://bugs.launchpad.net/ubuntu/+source/cyrus-sasl2/+bug/875440
<uvirtbot`> Launchpad bug 875440 in cyrus-sasl2 "Cannot authenticate with saslauthd and mysql" [High,Confirmed]
<sarnold> tflgen2: wow. be sure to click the "Does this bug affect you?" thing..
<tflgen2> https://bugs.launchpad.net/ubuntu/+source/cyrus-sasl2/+bug/875440/comments/61 that is the fix
<uvirtbot`> Launchpad bug 875440 in cyrus-sasl2 "Cannot authenticate with saslauthd and mysql" [High,Confirmed]
<uvirtbot`> New bug: #1095775 in maas (main) "maas 0.1+bzr482 (precise) incompatible with python-django 1.4 (cloud-archive)" [Undecided,New] https://launchpad.net/bugs/1095775
<sarnold> tflgen2: oh, woo, so the guide is outdated and needs fixing. that's more pleasant. :)
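One way to test the saslauthd/PAM side in isolation, independent of postfix, is testsaslauthd from the sasl2-bin package; the username and password are whatever was configured in the pam_mysql line, and the line after the command is the expected output on success:

```
# -s names the PAM service file, i.e. /etc/pam.d/smtp here
$ testsaslauthd -u testuser -p secret -s smtp
0: OK "Success."
```

If this fails while a direct mysql login with the same credentials works, the problem is in the PAM/saslauthd layer rather than in the database.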
<genewitch> so i am trying to use the cloud live image, the http://localhost comes up but does not accept the password in the GettingStarted.txt file
<genewitch> What up wit that
<tflgen2> case sensitive?
<genewitch> yeah it just says ubuntu:ubuntu123
<genewitch> i can't even think where to look for the config file with that password in it so i can change it
<tflgen2> "Point your browser to the public address of the openstack-dashboard node, "http://node-address/horizon" , login using admin/openstack"
<genewitch> horizon isn't a valid URI and that password combo doesn't work. it seems like the glance API isn't working on port 9292
<tflgen2> hrm
<genewitch> yeah the stock commands don't work.
<genewitch> It's ok, i mean i am sure i can set it up manually, i was a bit enthused by being able to check it out this way
<genewitch> oh, eth3 (the nic i am using) keeps getting unassigned
<genewitch> so this is a networking issue perhaps
<genewitch> and there it goes
<genewitch> it keeps getting wiped out
<RoyK> genewitch: pastebin ifconfig -a
<genewitch> what's the pastebin package called on ubuntu
<RoyK> pastebinit
<genewitch> i killed network-manager
<genewitch> should i not have done that
<genewitch> http://paste.ubuntu.com/1492939/
<RoyK> if you reconfigured /etc/network/interfaces and restarted first, yes
<genewitch> no
<RoyK> eth[012] not configured, eth3 configured with rfc1918 address, what do you want to know?
<genewitch> RoyK: It kept getting UNconfigured.
<genewitch> like 4 times.
<RoyK> pastebin /etc/network/interfaces
<genewitch> nothing in there but lo
<RoyK> well, then configure it
<RoyK> !interfaces
<RoyK> !network configuration
<RoyK> no idea how to get that from the bot - google ubuntu network config
<genewitch> i know how to manually configure a network. My point was that's not in the GettingStarted guide.
<genewitch> nice: failed to get http://169.254.169.254/latest/meta-data/public-keys
<genewitch> this is a bang-up job.
<genii-around> Looks like no dhcp and it falls back to the 169.254.x.x
<genewitch> genii-around: no, 169.254.169.254 is a metadata server
<genewitch> i guess this cirros image is meant to boot on amazon cloud, and openstack doesn't have a metadata server by default
<genewitch> oh, it does have a metadata server, just doesn't have curl :-p
<tflgen2> genewitch: i just booted into the livecd of ubuntu cloud on virtual box and was able to get an instance of cirros to boot
<genewitch> yes, i have 3 running.
<genewitch> i'm testing ephemeral
<tflgen2> oh, my bad
<genewitch> it is KVM >.<
<Lovelidge> Question to all. Where can I get documentation on how to install mono on uBuntu server?
<Lovelidge> Hello?
<guntbert> Lovelidge: does http://askubuntu.com/questions/11921/how-to-install-mono-on-a-server help?
<tflgen2> has anyone here migrated from courier to dovecot? I'm gonna try and give dovecot a try to see if the imap behavior is better.
<guntbert> tflgen2: I never "migrated" - and dovecot gave no reason for complaints
<tflgen2> guntbert: i guess i'm not actually looking to migrate as this is a fresh install. Doing this for a customer, they currently are hosted elsewhere and using outlook with pop3. I'm hoping to get them on my server with courier/dovecot (preferably imap).
<tflgen2> do you have a good tutorial on installation of dovecot on 12.04?
<_KaszpiR_> Question - I have a home server running ubuntu-server LTS on some quite old motherboard, SATA (AHCI) in bios, booting from a single drive
<genewitch> that's not a question
<_KaszpiR_> now, I'd like to migrate to  new motherboard that uses EFI - any suggestions what can go wrong/
<genewitch> _KaszpiR_: nothing, if you turn EFI off
<lastninja> hi, can we speak Polish here?
<_KaszpiR_> hm
<_KaszpiR_> ok, looks like I've found some more info
<guntbert> tflgen2: sure - see https://help.ubuntu.com/12.04/serverguide/dovecot-server.html :)
<tflgen2> gah, dovecot sucks :( can't even get it to start "dovecot: config: Warning: Killed with signal 15 "
<patdk-lap> heh? dovecot is great
<patdk-lap> I switched to it in 2003 I think, and I wouldn't call anything else currently close to it
<tflgen2> i can't get the damn thing to start :)
<patdk-lap> well, it has a lot of config options
<patdk-lap> use dovecot -n, to see what you did
<tflgen2> ! thanks for that
<ubottu> tflgen2: I am only a bot, please don't think I'm intelligent :)
<patdk-lap> ubottu, oh, go get a life
<ubottu> patdk-lap: I am only a bot, please don't think I'm intelligent :)
<tflgen2> patdk-lap: ok, so that helped me weed out some crap. now i'm staring at fatal: unknown database driver 'mysql'
<patdk-lap> that is a debian/ubuntu thing
<patdk-lap> likely didn't install the dovecot-mysql package
<tflgen2> dovecot --build-options |tail -n4
<tflgen2> Mail storages: shared mdbox sdbox maildir mbox cydir raw
<tflgen2> SQL driver plugins: mysql postgresql sqlite
<guntbert> tflgen2: how did you install dovecot? I had no troubles at all
<patdk-lap> hmm
<tflgen2> turns out i didn't have the dovecot-mysql package installed…
<tflgen2>  /facepalm
<uvirtbot`> New bug: #1095840 in nova (main) "No logrotate config for nova.log" [Undecided,New] https://launchpad.net/bugs/1095840
<anti-neutrino> hi guys
<anti-neutrino>  I have a ubuntu-11.10 (Desktop) installation.. which I was using as a server (not using any gui feature of it)
<anti-neutrino> can I just rip off the ubuntu-desktop package .. to make it as efficient as ubuntu-server
<anti-neutrino> as I see on the ubuntu website .. they merged the generic and server kernel after 12.04 .. but I have an 11.10 installation
<anti-neutrino> this machine was supposed to be only for testing .. but then we want to use it in production too
<anti-neutrino> please suggest .. thanks!
<sarnold> anti-neutrino: feel free to uninstall any packages you don't use.
<sarnold> anti-neutrino: you'll make most impact by removing applications that are using the most amount of memory, but that can be a bit difficult to quantify
<anti-neutrino> thanks for the reply @sarnold
<sarnold> anti-neutrino: for my little pandaboard, I just asked upstart to not start X, it seemed like a nice half-way point. it doesn't run, but the libraries are all there in case I want to ssh -X an application some day.
<anti-neutrino> yeah I am removing all the unwanted services .. (making list  of resource hungry services from the top command)
<anti-neutrino> yeah I can do that too ..
<anti-neutrino> so once again a naive question .. if I kill X .. it pops up again
<anti-neutrino> i understand this should be in something like 'startup'
<sarnold> aha, here we go: http://upstart.ubuntu.com/cookbook/#override-files
<sarnold> anti-neutrino: the book says these override files were added in upstart 1.3; if so, something like: echo manual >> /etc/init/lightdm.override    ought to save you a few hundred megabytes of RAM/swap
<anti-neutrino> cool... thanks a lot sarnold
<anti-neutrino> I can try this on one of the VMs ..
<anti-neutrino> you guys are awesome .. this seems to do the trick :)
<sarnold> anti-neutrino: you may also wish to look at the powertop output and find which processes are responsible for the majority of your CPU wakeups
#ubuntu-server 2013-01-04
<anti-neutrino> ohh thanks once again .. i didn't know about this utility
<F3Speech> Having problems moving a file system from usb to hdd. have moved the file system, installed grub, updated it, updated the fstab mounts. on reboot with the usb still in i can choose the new disk and it boots fine, no problems, however as soon as i remove my usb i appear to lose access to sudo etc and the system starts churning errors about the missing fs. Any ideas how to finalise the fs transfer so i can remove my original usb?
<sarnold> F3Speech: you're probably looking for a way to run pivot_root and re-exec init off the 'new root'
<F3Speech> think i might have sussed it 1sec will let you know :)
<F3Speech> sucess :)
<F3Speech> even if it is with just 1 'c' lol
<F3Speech> the grub menu was still setting the fs root to the usb even though it was mounting the fs from the other disk
<sarnold> hehe
<F3Speech> lshw -C disk
<F3Speech> opps
<tflgen2> hey guys, I've almost got everything working with my mail server but I'm having dovecot-lda temporary failures. where can I look? log file: mailsrv postfix/pipe[3888]: 6C2DB41E97: to=<testuser@indycase.com>, relay=dovecot, delay=4719, delays=4719/0.01/0/0.02, dsn=4.3.0, status=deferred (temporary failure)
<tflgen2> nvm, found the issue
<tflgen2> dovecot-sieve wasn't installed
<NoReflex> hello! One of my computers is not able to mount the root file system. I get dropped to busybox and it looks like fsck is not included in busybox
<NoReflex> wget is there ...
<NoReflex> I found examples that mention e2fsck but it is not there
<sarnold> NoReflex: you can download the e2fsprogs off one of the mirrors, unpack it using ar and tar, and use e2fsck that way
<sarnold> hah, busybox has dpkg.
<NoReflex> sarnold, dpkg not found...
<NoReflex> on the sourceforge page there's only the source for e2fsprogs from what I can tell
<NoReflex> ls
<sarnold> NoReflex: yeah, you'll want one of the compiled ones, e.g. http://us.archive.ubuntu.com/ubuntu/pool/main/e/e2fsprogs/e2fsprogs_1.42-1ubuntu2_amd64.deb (that's the e2fsprogs that my AMD64 Precise system has installed)
<NoReflex> sarnold, the server is an older version (Karmic I think)
<NoReflex> but that should not be a problem
<sarnold> NoReflex: this path is for lucid: pool/main/e/e2fsprogs/e2fsprogs_1.41.11-1ubuntu2_amd64.deb
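Assuming a reachable mirror, the ar/tar route sarnold suggests would look like this from the busybox shell; the mirror path is the lucid one just quoted, the device node is a placeholder, and whether the data member is .tar.gz or .tar.xz varies by package vintage:

```
wget http://us.archive.ubuntu.com/ubuntu/pool/main/e/e2fsprogs/e2fsprogs_1.41.11-1ubuntu2_amd64.deb
ar x e2fsprogs_1.41.11-1ubuntu2_amd64.deb   # yields debian-binary, control.tar.gz, data.tar.gz
tar xzf data.tar.gz                         # unpacks into ./sbin, ./usr, ...
./sbin/e2fsck -f /dev/sda1                  # placeholder device node
```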
<NoReflex> sarnold, the problem is that inside busybox the network is not configured
<sarnold> NoReflex: oh man. :/
<sarnold> NoReflex: busybox does have a dhcp client, and the 'ip' program in case you just want to statically configure something quickly...
<NoReflex> it has the ip program but the interfaces do not appear in the list
<NoReflex> only lo
<sarnold> NoReflex: you may need to modprobe the module that provides the interface. hrm.
<NoReflex> sarnold, I've modprobed it (bnx2) but eth0 is still not created
<sarnold> NoReflex: well, perhaps it's time for a change; can you boot to USB or CD? Perhaps the thing to do is to boot to a livecd or liveusb and just fix it with access to _good_ tools.
<NoReflex> sarnold, unfortunately I don't have physical access to the machine; I'm accessing it via iLO from HP
<sarnold> NoReflex: oh man.
<NoReflex> I will try to mount an image using iLO Virtual Media and see how that goes
<NoReflex> thank you for your time
<sarnold> NoReflex: oh, iLO can do that? neat.
<mgw> I have a slightly odd scenario I'm trying to use start-stop-daemon for
<mgw> an unprivileged user who has sudo only on /usr/sbin/ngrep needs to start ngrep as a daemon
<mgw> that same unprivileged user needs to be able to stop the daemon
<mgw> I thought I had this working on a system some time ago, but what's happening is ngrep appears to be dropping privs to nobody, but the original user cannot stop the daemon
<mgw> any suggestions?
<sarnold> mgw: if it were me, I'd write an upstart job configuration file for ngrep, and then you can grant your unprivileged user sudo access to the commands "start ngrep" "stop ngrep" and "restart ngrep"...
<mgw> sarnold: thanks
<mgw> I'll look into that
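sarnold's suggestion amounts to two small files: an upstart job and a sudoers fragment. The capture interface, filter expression, and user name "alice" below are hypothetical; on Ubuntu's upstart, start/stop/restart live in /sbin as symlinks to initctl:

```
# /etc/init/ngrep.conf -- minimal upstart job (sketch)
description "ngrep capture daemon"
respawn
exec /usr/sbin/ngrep -q -d eth0 'port 80'

# /etc/sudoers.d/ngrep -- edit with: visudo -f /etc/sudoers.d/ngrep
alice ALL = (root) NOPASSWD: /sbin/start ngrep, /sbin/stop ngrep, /sbin/restart ngrep
```

This sidesteps the privilege-drop problem entirely: upstart tracks the daemon's pid, so the unprivileged user never has to signal a process owned by nobody.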
<Super_Dog2> Having some problems logging in and did a "df -h" command and find that my server's hard drive is full.  I presume this could be a problem. :-)
<Super_Dog2> Have a 80gb hard drive (SSD) with a 3TB LVM...
<sarnold> Super_Dog2: apt-get clean is a fast way to free up a few gigs in /var
<Super_Dog2> Any good ideas to look at for places to delete files from?  All I have right now is SSH without sudo privileges...
<Super_Dog2> I'll try the apt-get clean...
<sarnold> heh, it'll require sudo..
<sarnold> check if there's stuff in /tmp you can get rid of
<Super_Dog2> Looks like I've got some big *** log file...  Running Zentyal which is Ubuntu 10.04 LTS server base.
<Super_Dog2> deleting the big log files?  That shouldn't hurt anything right?
<sarnold> Super_Dog2: well....
<sarnold> Super_Dog2: normally best if you can get the programs involved to do logrotate and the like
<sarnold> if you delete the file but the program is still logging to it, the space won't come back _and_ you can't read your log files.
<sarnold> but rm /var/log/*gz ought to be fine. :)
<Super_Dog2> cool... looks like I've got sudo now...
<Super_Dog2> Let me see if I can login at the machine.  (i.e. not SSH)
<Super_Dog2> crap... still no machine login...
<Super_Dog2> Are those *gz files pretty much useless archives / backups of old logs?
<sarnold> Super_Dog2: "useless" is in the eye of the beholder, of course :)
<sarnold> Super_Dog2: but with a full disk, they probably look useless. :)
<Super_Dog2> Is it *gz or *.gz?
<Super_Dog2> I'm used to Winbloze delete command
<sarnold> they're probably the same; at least I haven't seen anything named e.g. /var/log/logz  :)
<Super_Dog2> There were some big log files in there but df -h still shows "0" left in the "/" directory...
<Super_Dog2> Do I have to empty the trash or something via the command line..?
<sarnold> how have you been deleting files?
<Super_Dog2> With your "rm" command above?
<sarnold> oh, good. there's no trash. once you do that, they're gone. :)
<sarnold> Super_Dog2: root is allowed to go beyond the "full" mark
<Super_Dog2> Looks good.  It canned all the *.gz files it looks like when I do an "ls -al".
<sarnold> Super_Dog2: if you haven't gotten below that full mark yet, it will probably still read '0'...
<Super_Dog2> So you're saying "my cup overfloweth..."?
<Super_Dog2> ?
<Super_Dog2> With files that is?
<sarnold> Super_Dog2: that's a possibility. I'm trying to recall where the details are for that..
<Super_Dog2> What about the file "user.log.1" that looks pretty big...
<Super_Dog2> And the file "messages.1"?
<sarnold> Super_Dog2: you can delete those too, if you're that bad for space
<sarnold> Super_Dog2: did you run the 'apt-get clean' ?
<Super_Dog2> How about "sudo rm /var/log/*.1" ?
<sarnold> Super_Dog2: rm /var/log/*.[1-9] may be useful
<sarnold> sure
<Super_Dog2> Don't seem to have any log files beyond *.1...
<Super_Dog2> Yep...  ran "sudo apt-get clean"... Didn't get any error so I assume it worked...
<sarnold> okay
<Super_Dog2> What's a good command to list large files that are not in the /mnt directory?
<sarnold> Super_Dog2: probably "du -s /bin /boot /dev /etc /lib /lib64 /opt /root /srv /usr /var" is the easiest way to skip /mnt, /proc, /sys, etc.
<sarnold> Super_Dog2: oh, better, "du -a /path1 /path2 | sort -n"
<sarnold> Super_Dog2: you could also delete some kernels, if you've got a dozen kernels installed, you may want to clean some up
<Super_Dog2> Where are the old kernels hiding out?
<sarnold> Super_Dog2: /boot
<Super_Dog2> I keep deleting stuff and still df -h says 100% used....
<Super_Dog2> Is there a trash empty function or something I'm missing?
<bradm> Super_Dog2: by default, there's 5% reserved for root on a partition
<sarnold> Super_Dog2: hrm. pastebin your df output..
<bradm> Super_Dog2: so what you're probably seeing is that you've got over that, and you're bringing it back down
<sarnold> bradm: he's only got 80g /, he should have dropped below that 5% again some time ago, I think...
<bradm> maybe deleted files with open filehandles then
<sarnold> bradm: mm. good thinking.
<sarnold> Super_Dog2: sudo lsof | grep " (deleted)"   -- see if soething has huge files open that you don't mind killing. :)
<Super_Dog2> Which df command?
<sarnold> Super_Dog2: df -h
<Super_Dog2> Sorry.  have to go back from one machine to the other.  Pastebin will be a PITA in this scenario.  Can't SSH from this Ubuntu box for whatever reason.
<Super_Dog2> I just never thought you'd need more than 80G for the server files.  The data files / mounts sure.  But that's on a separate LVM volume mounted in the /mnt directory..
<Super_Dog2> It's an SSD...
<bradm> Super_Dog2: you really shouldn't
<sarnold> Super_Dog2: my / takes 18 gigs.
<bradm> Super_Dog2: well, depends on how many logs you keep, and how busy the server is
<Super_Dog2> That's what I thought.  I've had this thing setup for over a year.  Basically been set it and forget it until today...
<sarnold> Super_Dog2: that includes a pile of lxc things, 150+ apt repository lists, and over six gigabytes of cached deb packages for ten different ubuntu distributions. :)
<Super_Dog2> Going to pastebin... hang on
<Super_Dog2> df -h command output here:  http://pastebin.com/QTujKUtX
<sarnold> (just to answer any potential curiosity, the difference I was thinking of was the minixdf vs bsddf behavior described in the mount(8) manpage, which keeps back the blocks in the filesystem that can't be used for files; it doesn't have anything to do with the 5% saved for root)
<sarnold> heh you even got the columns lined up :) nice
<Super_Dog2> I presume the 100% in "/" is not good...
<bradm> Super_Dog2: try a du -sk [a-lo-z]* | sort -n ?
<bradm> Super_Dog2: trying to avoid doing a du on /mnt/data there
<sarnold> ha! du -x "skip directories on different file systems"
<sarnold> awesome. 18 years in, still learning things about the basic tools. :)
<Super_Dog2> That's where all the big data files are - in /mnt/data...
<bradm> Super_Dog2: right, we want to narrow down where the disk usage is
<bradm> Super_Dog2: so is the du running?
<Super_Dog2> Here you go - http://pastebin.com/7GUe99bu - apologies - but I have to switch desks to do this.  SSH only works on one of my Windows workstations running putty...
<sarnold> oh. I just now notice that there's no /home filesystem...
<Super_Dog2> I'll try SSH on this workstation again...
<bradm> its not that bad though, 11G in /home by the looks
<bradm> is there anything else in /mnt other than the data dir?
<sarnold> 11+7+3+2+1 .. I just don't see how his drive shows full at 80g.
<Super_Dog2> All right cool.... Now I can log in with a terminal on this Ubuntu workstation.  Should go lots faster now....
<bradm> either there's something else on /mnt, or there's stuff in /mnt/data thats had the other FS mounted over it
<bradm> or a _lot_ of deleted files with open filehandles
<sarnold> what does sudo lsof | grep " (deleted)" | wc -l  return?
<sarnold> (granted, number isn't everything, it could be one huge 45gigabyte file...)
<Super_Dog2> running...
<Super_Dog2> says just "8"....
<Super_Dog2> Is that files?
<sarnold> yeah
<sarnold> go ahead, leave off the | wc -l and you can see them...
<Super_Dog2> mostly stuff in Apache2
<Super_Dog2> You want me to PasteBin that?
<Super_Dog2> Something in "Asterisk"...
<bradm> Super_Dog2: if its /var/log/apache2 its probably just log rotation stuff
<Super_Dog2> Not using asterisk...
<sarnold> mm, apache stuff...
<Super_Dog2> Here's the output of your command:  http://pastebin.com/NT4yC9qq
<sarnold> oh man, two megabytes. :/
<bradm> Super_Dog2: can you do a ls on /mnt?  is there anything else in there than data ?
<sarnold> how about "du -xk / | sort -n" ?
<sarnold> time for me to bail..
<Super_Dog2> This is kind of a good description of my problem that started today when I re-booted the server.  http://forum.zentyal.org/index.php/topic,11306.msg45782.html#msg45782
<Super_Dog2> Nothing in /mnt other than the /lan and /nas...
<sarnold> Super_Dog2: you may wish to check dmesg output, perhaps the kernel is screaming something at you about a corrupted filesystem or something
<Super_Dog2> Big /mnt/data directory where all the files are.
<Super_Dog2> I see something shifty....
<Super_Dog2> Was trying Rsync yesterday to an Rpi and it errored out...
<Super_Dog2> Let's take a look...
<Super_Dog2> Thanks sarnold... think I found the problem.  When the Rsync process errored out yesterday it left a big boatload of Rsync files in the /home/user directory...
<Super_Dog2> going to have to bone up on the rsync command.  Must have done something wrong.
<Super_Dog2> That "du -xk / | sort -n" is a keeper.  Thanks for that one...
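Spelled out, the command the channel converged on (a sketch: `-x` keeps du on one filesystem so mounts like /mnt/data are skipped, `-k` reports KiB, and the numeric sort puts the biggest directories last):

```shell
# Twenty largest directories on the root filesystem only; other mounted
# filesystems are skipped thanks to -x.
sudo du -xk / 2>/dev/null | sort -n | tail -20
```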
<bradm> Super_Dog2: it still looks like there's more files hiding somewhere
<bradm> since the du you pasted doesn't show 70G worth of stuff
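One way to put a number on the gap bradm is describing is to compare df's "used" figure against what a du walk can actually reach; a large difference points at deleted-but-open files or data hidden under a mountpoint. A sketch, not a precise diagnostic:

```shell
# Space that df says is used on / but that du cannot account for.
used_kib=$(df -k --output=used / | tail -1)
seen_kib=$(sudo du -xsk / 2>/dev/null | cut -f1)
echo "unaccounted for: $(( (used_kib - seen_kib) / 1024 )) MiB"
```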
<Super_Dog2> I'm pretty sure it was an rsync gone wrong...  I'll have to figure out how the heck it wrote the files to the source /home/user directory....
<bradm> Super_Dog2: wait, you said something back up there about lan and nas in /mnt ?
<Super_Dog2> There must be an rsync switch I missed.
<Super_Dog2> Not much in /lan and /nas in the /mnt directory though.
<Super_Dog2> Let me check again.
<bradm> in /mnt I'd do a du -sk lan nas to see how much it uses
<Super_Dog2> You guys are frickin' Ubuntu geniuses...
<Super_Dog2> :->
<Super_Dog2> Check out this Paste Bin:  http://pastebin.com/ss9vxCYk
<Super_Dog2> 59G though... Still a little heavy for what I'd expect a 10.04LTS / Zentyal 2.27 install to be...
<bradm> that looks better
<bradm> but, yes, there's still some other data somewhere I'd say
<Super_Dog2> Yes.  Thanks a lot.  I am not worthy of your and sarnold's terminal skills...
<bradm> at least its not completely full now though, you've got some breathing room
<Super_Dog2> Yeah... Any ideas for hunting for other useless files?
<bradm> did you do the du on /mnt/lan and /mnt/nas ?
<Super_Dog2> That wonderful "sudo du -xk / | sort -n" has revealed some other gems.  Looks like someone - me or my associate - put a big boatload of PDF files on the web server.  Problem is we don't use that webserver anymore for anything other than logging in to the server management interface.
<Super_Dog2> So I can get rid of a big load right there, too.
<Super_Dog2> That "du -sk lan nas" says 12 and 4 respectively.  I presume that's just bytes for the empty directories...
<bradm> yeah, you can ignore them
<bradm> thats really odd, it seems like that covers most of where stuff is stored
<Super_Dog2> Yep... nothing in there when I "ls -al" inside the directories...
<bradm> the du -xk should have showed you where everything was
<Super_Dog2> Yeah, that was brilliant stuff dude...
<Super_Dog2> That sort switch is the key...  Killer idea there...
<bradm> is it possible that the server was ever running without the /mnt/data mounted?  and files got put in there?
<Super_Dog2> Now we're down to 53 G out of 71 G.  Good....
<Super_Dog2> Lots of stuff in /var/cache , /lib , /var , /usr ...Not sure if anything that can go away though.
<bradm> it really depends on what it is
<bradm> but in general for stuff in /lib or /usr, I'd leave it alone
<bradm> its _possible_ there's stuff in /var/cache you can clean up
<Super_Dog2> Yeah.  don't see anything in there that looks like a candidate for deletion...
<bradm> Super_Dog2: /var/cache/apt/archives might be a candidate, depends on how much stuff you've installed
<qman__> apt-get clean will do that for you
<bradm> indeed.
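A sketch of qman__'s suggestion, checking the cache's size before clearing it:

```shell
# See how much the cached .debs occupy, then clear them.
du -sh /var/cache/apt/archives
sudo apt-get clean        # removes all cached package files
sudo apt-get autoremove   # optionally drop now-unneeded dependencies too
```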
<Super_Dog2> This is an old USB hard drive I used to back up...  "924860	/media/500GB_WDGreen".
<Super_Dog2> This shows on the "sudo du -xk / | sort -n" command.
<bradm> that sounds like you were trying to backup when it wasn't mounted
<Super_Dog2> But it doesn't seem like there's anything there... An old media mount or something?
<bradm> Super_Dog2: try a ls -a in that directory, possibly its hidden files?
<qman__> did you unmount /mnt/data to see if there was anything underneath it?
<Super_Dog2> Permission denied.
<qman__> because that's where my money would be, given the size of the data that's just plain missing
<bradm> qman__: I've asked that a couple of times and been ignored
<Super_Dog2> I presume that needs "sudo umount /mnt/data"  ?
<bradm> qman__: not to unmount it, but if there's possibly stuff under it
<bradm> Super_Dog2: don't do that
<bradm> Super_Dog2: if you've got stuff writing to it, that could be bad
<bradm> Super_Dog2: if you know there's nothing writing there, or processes with it open, go ahead
<bradm> Super_Dog2: but don't just blindly unmount it without checking
<qman__> right
<qman__> make sure it's not busy first
<Super_Dog2> I'm checking the directories in that mount and everything is what I'd expect there.
<bradm> Super_Dog2: but is there possibly stuff that was copied into /mnt/data without that big filesystem mounted?
<qman__> that's all stuff that's on the mount
<Super_Dog2> That's where all my core digital files are on the server.  TV, PDF's, spsheets, word docs, photos, etc....
<qman__> it's possible to have stuff there, and then mount another drive on top of it, and as a result not be able to see those files
<qman__> that can happen if you accidentally copy stuff while it's not mounted
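One way to check for that case without unmounting anything is to bind-mount the root filesystem somewhere else and look underneath the mountpoint. The paths here are illustrative and this is only a sketch; make sure nothing else is using the scratch directory:

```shell
# Bind-mount / to a scratch directory; files hidden under /mnt/data by
# the volume mounted on top of it become visible at /tmp/rootview/mnt/data.
sudo mkdir -p /tmp/rootview
sudo mount --bind / /tmp/rootview
sudo du -sk /tmp/rootview/mnt/data   # size of anything hidden underneath
sudo umount /tmp/rootview
```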
<Super_Dog2> interesting.  never thought of that...
<Super_Dog2> I set this up like right when I built the server...
<Super_Dog2> The odds are lower I'd think...
<Super_Dog2> let's see if I can get this 10.04 server to boot with the Zentyal login screen...
<Super_Dog2> thanks a ton...
<Super_Dog2> You guys totally saved my !@# with that "sudo du -xk / | sort -n" command.
<Super_Dog2> Still wondering why I have all these files on the server though....
<Super_Dog2> May have to go to a much larger hard drive.
<bradm> Super_Dog2: still feels like there's something missing though.
<Super_Dog2> Anybody have a good recommendation for a high quality / capacity SSD boot drive?  I'm thinking I better up this 80G to about 200+ something...
<Super_Dog2> Well - imagine not being able to even log in to your server...   That's where I was at.  So you guys are awesome..
<Super_Dog2> I will now donate to some charity.  I was going to call my $150/hour linux administrator over here...  Saved me some coin boys.  Thanks...
<koolhead17> Super_Dog2: buy ubuntu goodies even better :)
<Super_Dog2> It's karma....  I was freaking out there for a bit...
<koolhead17> Super_Dog2: cheers!! :)
<Super_Dog2> Couldn't even get sudo privileges on an SSH into the server...
<Super_Dog2> Thanks again...
<Super_Dog2> Still don't see much else to delete though.  Any good server quality drives you guys are using?  Has to be a 2.5" as this is only a 1U blade...
<Super_Dog2> Need 200gb or better probably...
<patdk-lap> hmm, 10 2.5" per 1u, or 4 3.5" in 1u, if you want hotswap
<patdk-lap> I really like my samsung 830, plextors are also good
<Super_Dog2> My 1u only has room for a 3.5" and a 2.5".  That's it.
<Super_Dog2> Is that Samsung 830 an SSD?
<Super_Dog2> Wow... Sammy 830 seems sold out everywhere unless I want a 500GB unit...
<Phibs> anyone know why: https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1003845 has not been backported into Precise LTS ?
<uvirtbot`> Launchpad bug 1003845 in sssd "Invisible symbols in sssd upstart config causes sssd to not start if /bin/sh is a link to /bin/bash" [Medium,In progress]
<Phibs> tjaalton: ping :)
<Phibs> amazing number of idlers in here
<patdk-lap> phibs, it's not a security issue, and probably no one has requested it from the backporters
<qman__> it's probably not even affecting that many people, since it requires a non-default system setting to appear
<Phibs> qman__: what, using SSSD? If you aren't using it you should :)
<Phibs> patdk-lap: pretty lame/bad bug
<patdk-lap> sounds like additional security issues if I were to use it, plus it's in universe, so security patches won't be applied
<jdstrand> patdk-lap: that is inaccurate. things that are not Canonical-supported receive community support. The Ubuntu security team will sponsor security patches that are submitted by the community. Therefore, since sssd is in universe, it can receive security support from the community, if people step up to do it
<BlackDex> Hello there
<BlackDex> I get the message: connect_to localhost: unknown host (No address associated with hostname)
<BlackDex> i have searched, but didn't find a helpful answer
<melmoth> BlackDex, well, looks like there's no entry for localhost in /etc/hosts
<BlackDex> there is :(
<melmoth> then your resolver is not using it.
<melmoth> in /etc/nsswitch.conf be sure you have "hosts: files"
<melmoth> you may also have other entries, but be sure the "files" one is there.
<BlackDex> melmoth: could it have something to do with ipv6?
<melmoth> dont know
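Two quick checks that exercise exactly what melmoth describes; getent resolves through nsswitch.conf, so it also answers the IPv6 question (an ::1 result means the v6 entry is winning):

```shell
# Confirm the resolver order, then resolve localhost the same way
# applications do.
grep '^hosts:' /etc/nsswitch.conf
getent hosts localhost   # expect 127.0.0.1 (or ::1 if IPv6 is preferred)
```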
<BlackDex> !deur
<BlackDex> oeps
<tizz> Hi all! i have a strange problem: After upgrading a server from 10.04.4 to 12.04.1 dhcp3-server is replaced by isc-dhcp-server. But now the server is unable to start, no matter what I do it wants to start isc-dhcp-server6 instead of isc-dhcp-server. this complains about dhcpd6.conf not present, since only dhcpd.conf is present. how can i get rid of this 6?
<aFeijo> hi folks!
<aFeijo> I'm in a process of installing a new ubuntu server that will replace our 4 years old one. I have ubuntu 12.10 x64 installed, now I need to migrate the email server services. What would be the best options today? I need smtp, imap integrated with mysql
<aFeijo> no hints?
<TeTeT> aFeijo: I'm not up to date with 12.04, unfortunately, but the server guide in the online help might be the best start to see what servers are default
<vezq> aFeijo: I would use 12.04 LTS
<vezq> aFeijo: Postfix for SMTP and Dovecot for IMAP
<aFeijo> vezq, that was my idea too :) thanks
<uvirtbot`> New bug: #1095988 in nova (main) "nova-api-metadata update causing corruption" [Undecided,New] https://launchpad.net/bugs/1095988
<sazawal> I need to install ubuntu from networkboot
<sazawal> Please help
<sazawal> I have followed the instructions here, https://help.ubuntu.com/community/Installation/QuickNetboot
<sazawal> What next?
<ogra_> netboot your machine
<ogra_> it should dump you directly into the installer
<sazawal> ogra_, I have connected my machine to the another one(target) machine via ethernet cable
<sazawal> I guess my server is working, but how to provide the path of iso file?
<ogra_> so do you end up in the installer booting your target machine ?
<sazawal>  I guess so, but I have nowhere provided the path to iso file
<sazawal> It wouldnt know, from where to copy the files
<ogra_> it would default to use archive.ubuntu.com
<sazawal> none of the two machines are conected to the internet
<ogra_> you can mount the iso on your server and export it through http
<ogra_> and then point the installer to the ip of your server during installation
<sazawal> yes, but how to do it? Where do I mount the iso file?
<ogra_> well, if your webserver is configured to have /var/www as / you mount it under that dir
<sazawal> ogra_, well I do have apache server running on my computer
<ogra_> depends what webserver you install and how you set it up
<sazawal> Does it work for installation?
<ogra_> i guess so
<sazawal> ogra_, Have you tried the netboot yourself?
<sazawal> I googled and got different tutorials on different websites
<ogra_> i usually use my local mirror for that so i dont need to mount any iso :)
<sazawal> I dont know which one will work
<sazawal> ogra_, Can you help me step by step?
<ogra_> well, if you have the installer running on the target machine thats 90% of the work
<sazawal> I have http server running
<ogra_> well, sudo mount /dev/cdrom /var/www/ i guess
<ogra_> and check with a browser if yoou can see the content
<sazawal> Yes it works
<sazawal> I mean, I frequently use it to transfer files over LAN
<sazawal> ogra_, What to do next?
<sazawal> I need to install ubuntu via networkboot, please help
<ogra_> sazawal, what else do you want ?
<ogra_> it should fully work now
<ogra_> in the installer point to your server IP that exports the iso
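Putting ogra_'s steps together as one sketch (assumes apache2 serving /var/www, as discussed above; the ISO filename is illustrative):

```shell
# Loop-mount the installer ISO under the web root, read-only. At the
# installer's mirror prompt, enter this server's IP with /ubuntu as the
# mirror directory.
sudo mkdir -p /var/www/ubuntu
sudo mount -o loop,ro ubuntu-12.04-server-i386.iso /var/www/ubuntu
```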
<sazawal> ogra_, How to specify the IP to the target computer
<ogra_> in the installer dialog that asks for it
<patdk-lap> you download the netinstall iso, and either boot the cd, or copy the netboot files to a pxe server, and pxe boot
<ogra_> that part he has done (he said)
<patdk-lap> oh, I didn't see that
<ogra_> so once you get to the question about which mirror should be used, you give it the IP of the server
<sazawal> ogra_, Yes the DHCP connection is made. I have this folder /tftpboot/ubuntu-installer/i386
<sazawal> Do I have to mount my iso file somewhere here?
<ogra_> you said you already booted into the installer
<ogra_> no
<sazawal> Sorry, I have not booted to the installer
<ogra_> you just netboot your client and will end up in the first installer screen
<sazawal> I must have misunderstood something
<ogra_> the installer is inside the initrd
<ogra_> the iso is only needed for the debs later
<sazawal> ogra_, Okay
<donvito> VERSION=`cat /etc/slackware-version` how can i edit this line so in motd ill get ubuntu version
<sazawal> So, how to start the installer in the target machine?
<ogra_> PXE should
<ogra_> you just need to PXE boot the target machine
<ogra_> (if everything is set up properly indeed)
<donvito> DISKUSAGE=`du -sh ~/|awk '{print $1}'`
<donvito> or this on ubuntu?
<hallyn> stgraber: I don't like the patch for lxc-shutdown for busybox
<ogra_> what does /etc/slackware-version contain ?
<donvito> ogra
<donvito> fixed that
<hallyn> ISTM busybox is simply broken and we shouldn't hack lxc to work around it
<donvito> i now need command for diskusage
<donvito> so in motd will tell the disk usage
<donvito> at the moment
<donvito> DISKUSAGE=`du -sh ~/|awk '{print $1}'` so i need to edit this line and make work on ubuntu motd
<ogra_> how did you fix it ? on ubuntu you should use the lsb_release command for it
<ogra_> the du should work
<donvito> VERSION=`cat /etc/issue.net`
<donvito> i fixed with this line
<ogra_> ugh
<donvito> du: cannot read directory `/home/nertil/.subversion/auth': Permission denied
<donvito> i got this
<donvito> like error
<ogra_> VERSION="$(lsb_release -ds)"
<ogra_> use that (and depend on the lsb stuff if it is a package you work on)
<ogra_> well, du indeed needs the permissions to index the dir you use it on
<ogra_> but that won't differ on any other linux
<ogra_> nothing ubuntu specific
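A small portable version of donvito's motd snippet, following ogra_'s advice (hedged: assumes lsb_release is installed, which the lsb-release package provides on Ubuntu):

```shell
# Distro version via lsb_release instead of a distro-specific file, and
# du's permission errors silenced rather than chased down.
VERSION="$(lsb_release -ds 2>/dev/null)"
DISKUSAGE="$(du -sh ~ 2>/dev/null | awk '{print $1}')"
printf '%s -- home directory uses %s\n' "$VERSION" "$DISKUSAGE"
```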
<donvito> how to fix the permission
<ogra_> well, why dont you have access to the dir in the first place if it is in your own home ?
<donvito> i dont even know how to
<donvito> :)
<donvito> im new at this
<ogra_> (assuming you are logged in as netril)
<donvito> im root
<donvito> its my home box
<ogra_> if you are root you should have all permissions you need
<donvito> nertil@Precise:/var/etc$ sudo su
<donvito> root@Precise:/var/etc#
<donvito> sure
<stgraber> hallyn: yeah, not fond of it myself, mostly because we have the SIGPWR stuff in the API too and we don't really want to special case busybox in there...
<stgraber> hallyn: I "think", I'd be much happier with making those signals configurable in the container config, then make lxc-shutdown a part of lxc-stop so it can use the API (instead of being a shell script)
<stgraber> then everything would be using the same code and it'd be the template's job to set any non-standard signals
<hallyn> that sounds good
<uvirtbot`> New bug: #1096081 in php5 (main) "server reboots because of php5 cron job" [Undecided,New] https://launchpad.net/bugs/1096081
<caribou> roaksoax: ping
<koolhead17> zul, Daviey smoser hallyn are you folks still on vacation :P
<jabba_> can anyone tell me what to do to get pci-passthrough running on a 12.04 ubuntu domU on a 12.04 ubuntu dom0 with xcp-xapi stack?
<hallyn> koolhead17: not by a long shot
<jabba_> i already bound the desired pci device to pciback, so that lspci -k tells me pciback as driver for that specific device (usb2.0 extension card)
<jabba_> then i told the vm to use this device (via xe vm-param-set blabla...)
<jabba_> then i told the vm to use iommu=soft as kernel-arg
<jabba_> but the damn vm doesn't get any pci-device passed through....
<jabba_> why the hell are there so many variations of xen, toolstacks, pciback as module, build in, etc., etc..  this is so confusing!
<roaksoax> caribou: pong
<caribou> roaksoax: I have someone asking for some fence script added to the precise version of pacemaker which is already in Quantal
<caribou> roaksoax: what would be the best way to work at it : get the whole 1.1.7 version of pacemaker in Precise or just add the one fence script ?
<caribou> roaksoax: to the current pacemaker version in Precise  I mean ?
<roaksoax> caribou:  pacemaker doesn't ship fence agents
<caribou> roaksoax: they must be fence scripts, lemme find the reference
<roaksoax>  there is a package called fence-agents
<caribou> roaksoax: my bad; he's talking about /usr/sbin/fence_pcmk which is in Quantal but not in precise
<roaksoax> jamespage:  howdy that drbd import is failing cause it's trying to import an older version?
<roaksoax> caribou: so you want to ship that file in precise?
<caribou> roaksoax: yeah, make this available in Precise
<roaksoax> if the precise version doesn't ship it, that's because it's not supported, unless the file is just not being installed
<roaksoax> zul: why are you installing that?
<zul> roaksoax: centos?
<feisar> probably a simple question, is MySQL 3306 tcp or udp?
<roaksoax> zul: yeah
<zul> roaksoax: so i can make sure our openstack stuff runs centos/rhel images
<_ruben> feisar: tcp
<roaksoax> zul: cool
<feisar> _ruben: thanks : )
<roaksoax> jamespage: you back/win 26
<roaksoax> arr
<zul> hallyn: uploading now god help us all
<hallyn> zul: i should be in front of some shrine or other right now
<zul> hallyn: excactly
<zul> hallyn: done
<hallyn> zul: thanks
<hallyn> stgraber: now that zul has uploaded the new qemu source, what does it take to get that into the server set so i can upload to it (when it invariably breaks)?
<SpamapS> hallyn: +1 on putting it in the server package set. Its critical to a huge portion of the things servers are doing these days.
<hallyn> "but no pressure on the drastic upload you just did which will break it for everyone" :)
<koolhead17> hallyn, :P
<dekatrom> Hi
<TheLordOfTime> so, anyone know where apache2 sets up its site configs by default?
<TheLordOfTime> does it still uses /var/www ?
<vezq> TheLordOfTime: may vary depending on distro
<TheLordOfTime> vezq, i meant in ubuntu
<TheLordOfTime> :P
 * TheLordOfTime thought that might've been implied.
<vezq> yes :)
<vezq> it's still /var/www
<vezq> have too many open channels here :)
<TheLordOfTime> okay, that breaks from Debian policy...
<TheLordOfTime> at least afaict
 * TheLordOfTime was reading the debian packaging policy booklet
<TheLordOfTime> who here's more fluent with the apache webserver packages than I?
<TheLordOfTime> because i certainly am not :p
<TheLordOfTime> (question on its packaging and whether its violating debian policy)
<wilmaaaah> hi all
<wilmaaaah> how well does ubuntu 12.04 work on machines with 2 cpus? do i need to make adjustments? i don't get the performance i like
<TheLordOfTime> wilmaaaah, are we talking server or gui version?  (server's commandline only unless you add a GUI)
<sarnold> wilmaaaah: how well something runs on SMP systems depends upon the workload, more than anything else. my laptop
<wilmaaaah> it's the server edition, i'm using it as a kvm-host. sysbench showed slower performance than my 2 core athlon. i'm speaking of 2 opterons
<sarnold> wilmaaaah: my laptop's got four cpus, and I can run make -j 4 on it without driving the temperature too high but still getting excellent compile performance...
<wilmaaaah> it's 2 cpus à 6 cores here
<wilmaaaah> sarnold: can i trigger that somehow?
<sarnold> so 12 procs total?
<wilmaaaah> yep
<sarnold> cool. :)
<wilmaaaah> 12 cores
<wilmaaaah> 2 cpus
<wilmaaaah> it's slower than my four year old athlon :(
<sarnold> wilmaaaah: does vmstat and top show you're IO-bound or compute-bound or are the guests just not running as quickly as you'd like?
<wilmaaaah> i did a quick sysbench test and compared the result with my desktop machine
<wilmaaaah> the video of the guests seems laggy, i thought until now it's maybe vnc, but now i tried spice - same lagginess
<sarnold> wilmaaaah: what does sysbench test?
<wilmaaaah> lemme check vmstat though
<wilmaaaah> cpu speed, among others
<wilmaaaah> not sure about the algorithm
<sarnold> is it an accurate reflection of the workload you're going to put on the machine?
<wilmaaaah> as i said, i have no idea what is being measured
<wilmaaaah> let me run another test to compare
<wilmaaaah> this one: time echo "scale=4000; a(1)*4" | bc -l
<wilmaaaah> calculates 4000 digits of pi
<wilmaaaah> my athlon stays faster
<wilmaaaah> :(
<sarnold> by how much?
<sarnold> .. and could you run _twelve_ of those simultaneously on your four year old system and still be done more quickly? :)
<wilmaaaah> well, check it on your machine, my opteron gives 26.8s,26.8s,0.004s
<sarnold> I see what you mean. 10.5 seconds.
<wilmaaaah> sarnold: that's why i asked if i need to take actions
<wilmaaaah> 16.3s on my athlon
<sarnold> wilmaaaah: I wonder if you need to fiddle with your BIOS settings -- ram speeds, clock speeds, etc. :/
<sarnold> wilmaaaah: some systems are quite a bit slower if the RAM isn't installed "just right" -- spread over the banks in an optimal way, for example.
<wilmaaaah> mmh, lemme look into this
<wilmaaaah> that's been done according the manual
<sarnold> because I wouldn't expect any hexacore kind of system to be more than half the speed of my i7 laptop on a compute-intensive workload that has near zero disk io...
<wilmaaaah> i need more ram, two more modules
<wilmaaaah> obviously i need two for each cpu
<sarnold> wilmaaaah: that may help. I know nearly nothing about internals, but I could easily imagine that if the CPUs have to work together to share the memory, you could be running at less than half the speed you expected.
<wilmaaaah> these are dual channel modules, they need to be paired to work as expected, major mistake on my part here
<ogra_> wilmaaaah, also i highly doubt that bc does any multithreading, you will only stress one core with it anyway
<wilmaaaah> that may be true as well, the problem remains the faulty planning though
<wilmaaaah> i think if i removed one cpu it'd be faster
<sarnold> ogra_: hehe, yes, that was my initial thought of course, but the beauty of the bc test is that it shows the machine to be 2.5 times slower than my laptop on a single-threaded compute-intensive task.
<ogra_> yep, i also immediately tried it on my various machines
<sarnold> ogra_: of course, for something as small as 4000 digits of pi, one might expect it to _also_ run nearly entirely in cache, but that's probably not the case..
<ogra_> my chromebook (running ubuntu) is only three times as slow as my core i5 3500k
<ogra_> five times for my nexus7 though
<sarnold> heh, 52 seconds for my pandaboard.
<ogra_> funny
<ogra_> 50 for the nexus7
<ogra_> and 30 for the chromebook
<ogra_> (10 on intel)
<sarnold> haha, your tablet beats a computer that's plugged into the wall. awesome. :)
<ogra_> that clearly shows that it is only single core ... nexus7 is massively faster than the panda usually
<sarnold> in the twenty minutes or so that I had my panda running X, I was reasonably impressed with the performance for the price, size, heat, etc.
<sarnold> cool to hear the nexus is nicer :)
<ogra_> yeah, its already okayish
<ogra_> but yep, the nexus is so much better
<wilmaaaah> sarnold: anyway, thanks for pointing me in the right direction!
<sarnold> wilmaaaah: sure thing, I hope you get the system you want in the end :) hehe
<sarnold> i'd be curious to know The Solutions, too, if you've got the time.. (don't worry if i'm responsive or not, I read highlights. :)
<TheLordOfTime> sarnold, where should i report potential Debian packaging policy breakage for packages?
<TheLordOfTime> in Debian?
<TheLordOfTime> or also in Ubuntu
<ogra_> TheLordOfTime, if it is caused by an ubuntu modification, file it in ubuntu
<ogra_> otherwise its a debian issue
<TheLordOfTime> ogra_, okay, will do thanks.
<sarnold> TheLordOfTime: debian seems to like bugreports for everything, though something as big as "apache doesn't follow standard for websites in /var/www/" or whatever, maybe bring it up with the maintainer first -- it feels like the sort of thing to have been covered before
<TheLordOfTime> i'll start there, then, after I spin up a 13.04 VM to try and find a specific bug
<TheLordOfTime> (unrelated to apache)
<sarnold> TheLordOfTime: aha :)
<domino> hey everyone. I've recently encountered an issue and have tried numerous suggestions (to no avail) to try and correct it, so im coming here to see if anyone has any ideas. when I run an apt-get update it fails with this output: https://gist.github.com/d9045e2660167ac33617
<domino> im running ubuntu-server 12.04.1 on virtualbox
<domino> has anyone encountered this before?
<njin> domino, try commenting out that line in /etc/apt/sources.list and run again apt-get update
<sarnold> it's perhaps four lines, deb lines and src lines, for both updates and security.. one wouldn't want to run that way for long :)
<sarnold> domino: are you using any caches?
<domino> sarnold. sorry i'm not sure
<domino> also, when I run: ls /var/lib/apt/lists I get this strange output
<domino> http://d.pr/i/6Tww
<sarnold> domino: I've seen those sorts of hash mismatches before if the package list is retrieved off one mirror, the hashes off another mirror, and sometimes caches can hold onto stale data too long
<domino> when I wc on the file I get 96, I believe its only supposed to have 90
<domino> hmm
<domino> sarnold, how can I check if I have caches (and clear them)
<sarnold> domino: I was thinking more along the lines of squid or apt-cacher-ng ..
<sarnold> domino: those are some funny files in /var/lib/apt/lists -- I've got nothing that looks similar
<domino> sarnold. no I don't believe im using anything like that. this is just a vanilla virtualbox install from a fresh 12.04.1 dl.
<sarnold> domino: was your /etc/apt/sources.list or any of the lists.d/ files updated with, say, a windows text editor?
<domino> i don't believe so. after the OS finished installing the first thing I did was just apt-get update and then ls that file. the VM is probably like 5 minutes old
<sarnold> haha
<sarnold> oy :)
<domino> i've also tried to rm -rf /var/lib/apt/lists/* and then run apt-get update again, and those 6 weird files always come back, and the update fails in the same way
<sarnold> I think I'd rm /var/lib/apt/lists/* and run 'apt-get update' again and see what happens..
<sarnold> okay, that's a time saver :)
<domino> ;)
<sarnold> paste your /etc/apt/sources.list and the list.d/*list files?
<domino> sure. one sec
<DarylXian> Hi.  A general maintenance 'apt-get update' on my Ubu12LTS_64 box today updates grub.  During the process, I get a warning that "The GRUB boot loader was previously installed to a disk that is no longer present, or whose unique identifier has changed for some reason." and instructing me to choose the right GRUB location:  http://pastebin.com/KGquUbLj
<DarylXian> I'm not sure what's happened -- if there's a problem or not. My current partition plan (/etc/fstab) is: http://pastebin.com/raw.php?i=G2U7Xg8r
<DarylXian> What should I do here -- just check one of the boxes and proceed?  Or is there a problem underway?
<sarnold> DarylXian: oof. uh. hrm. does that uuid still exist in /dev/disk/by-uuid/ ?
<DarylXian> sarnold: hi.  No. ls -al /dev/disk/by-id/ | grep "5d26d773-d323-5b7a-e946-8e64e64cc978" ==> (empty).  but,
<DarylXian> tune2fs -l /dev/sda1 | grep UUID  ==> Filesystem UUID:  5d26d773-d323-5b7a-e946-8e64e64cc978
<DarylXian> oops.  by-uuid.  sec ...
<DarylXian> sarnold: yep. ls -al /dev/disk/by-uuid/ | grep "5d26d773-d323-5b7a-e946-8e64e64cc978" ==> lrwxrwxrwx 1 root root  10 Dec 20 16:29 "5d26d773-d323-5b7a-e946-8e64e64cc978" -> ../../sda1
<sarnold> DarylXian: hrm... that's a partition, so perhaps it was using the un-recommended blocklist mechanism? (large guess here..)
<DarylXian> sarnold: I'm not familiar with that at all ... what can I check to answer that for you?
<sarnold> DarylXian: I don't know, it's beyond my experience. :(
<DarylXian> sarnold: well, then ... oh-oh?
<sarnold> DarylXian: I _think_ I'd be inclined to install it to both /dev/sda and /dev/sda1 here. This looks like a server system that's never going boot a different OS by selecting a different boot drive in the BIOS, right?
<sarnold> DarylXian: perhaps you've got an msdos-style MBR installed on /dev/sda, and perhaps it automatically selects the "active partition", /dev/sda1, and that's where grub is installed and does its work.
<sarnold> but if grub doesn't like that, maybe overwrite the one on /dev/sda and let grub own the drive
<sarnold> and I think update /dev/sda1 just in case something else is booting that partition through a mechanism I don't understand. :)
<DarylXian> sarnold: Yes, it's a single-purpose, single-drive Zimbra mailserver install on 12LTS UbuSvr.  It will ONLY boot Ubu.
<DarylXian> sarnold: re-reading your comments, I'm in over my head atm.  I'll do my homework on the various options -- but, for now, I'm mainly concerned with NOT fubaring this server.  If I check-to-install both "[X]   /dev/sda (1000204 MB; WDC_WD1003FBYX-027GC3)" & "[X] - /dev/sda1 (270 MB; /boot)", will I be 'safe'/able to boot?
<sarnold> DarylXian: I think so. But it's not my ass on the line :) so I completely understand your skepticism. It's healthy...
 * DarylXian has demonstrated strong fubar-foo in the past ...
<DarylXian> er, fubar-fu
<domino> sarnold: sorry for the delay. Here is my poor attempt at getting you my /etc/apt/sources.list (https://gist.github.com/f616f64bb7679a1ccb77), and it turns out my list.d/* is empty
<sarnold> domino: ehe, I think you'd have more fun with this system if you install putty and ssh into it for things :) then copy-paste will work way better... or, another option, install "pastebinit" :)
<sarnold> domino: but I don't see anything in there that looks surprising. :(
<Guest84637> Hi. I'm trying to install Ubuntu on VMWare Fusion but I get an error during installation -> "Unable to install the selected kernel. An error was returned while trying to install the kernel into the target system. Kernel package: 'linux-generic'"
<sw0rdfish> my provider's site says the VPS has 1 cpu core yet when I do cat /proc/cpuinfo it shows cpu cores: 8
<sw0rdfish> does that mean the vps thinks it has 8 cores?
<RoyK> sw0rdfish: I guess they're running xen and just allows you to use a single core for real work
<sw0rdfish> I see.
<sarnold> sw0rdfish: /proc/ isn't very well mangled for containers... chances are good you've got cgroups or something limiting you to a single core
<sw0rdfish> OpenVZ actually
<RoyK> I didn't know you could limit cores in openvz
<sw0rdfish> hehe
<sw0rdfish> well damn they have a cheaper and better vps out as specials offer for christmas/new years....
<sw0rdfish> I'm definitely gonna get that and let this one expire i geuss
<sw0rdfish> guess*
<DarylXian> sarnold: fyi, it appears that install to disk MBR (/dev/sda) was the correct choice.  system's rebooted.  thanks.   still no idea what the installer's actual problem was :-/
<sarnold> DarylXian: *pfew*! :)
<sw0rdfish> what is cpanel for
<sw0rdfish> I think solusvm is more than enough right?
<sw0rdfish> oh its for webhosting and stuff and I don't need that.
<domino> sarnold, thanks for taking a look at it. I'll keep digging and see if I come up with anything
<sarnold> domino: good luck :)
<uvirtbot`> New bug: #1093918 in multipath-tools (main) "grub-probe auto-detection fails on raid" [Undecided,New] https://launchpad.net/bugs/1093918
<DarylXian> sarnold: thx agn!
<adam_g> hallyn: hmm. im getting the same (bad?) results across precise, quantal + raring for the test case you posted to bug #1092715 http://paste.ubuntu.com/1497057/
<uvirtbot`> Launchpad bug 1092715 in linux "chown does not update acls if there are >1 user acls (in quantal)" [High,Confirmed] https://launchpad.net/bugs/1092715
<hallyn> adam_g: yes.  and now i am too.  right before i added that in there, i swear it updated the acl on my laptop
<hallyn> i dunno
<hallyn> adam_g: but note that if you dont do 'setfacl -m user:ubuntu:rw- xxx', then chown does update the group acl
<hallyn> maybe until you do that it doesn't actually set a real acl?
 * hallyn tries out attr
<adam_g> hallyn: :) fwiw, i wasn't checking ACLs yesterday, but it seemed udev did pick up the newly installed rule if i moved it from /lib/udev/rules.d/ to /etc/udev/rules.d/ or (i think) if i modified the rule in /lib/udev/.
<hallyn> adam_g: ok my test was bad.
<hallyn> in the cases where chown seemed to update it, setfacl simply hadn't set an acl (bc it didn't need to)
<zastern> Anybody know if the other_vhosts_access.log that apache uses on debian/ubuntu when you don't configure per-vhost logging includes errors also? E.g. what would show up in a per-vhost error log?
<hallyn> adam_g: the newly picked up rule *is* found even in /lib/udev/rules.d, because otherwise group wouldn't be changed to kvm
<adam_g> hallyn: the group wasn't changing to kvm for me
<adam_g> lemme try again
<hallyn> adam_g: oh, wow.  then that's a different bug than i'd been seeing
<adam_g> hallyn: i think its some inotify wonkiness wrt udev picking up the new rule: http://paste.ubuntu.com/1497150/
<hallyn> adam_g: ext4 rootfs?
<adam_g> hallyn: hm, ext3 actually
<hallyn> adam_g: i've been testing in a quantal vm and not had the same results.
<hallyn> lemme try one.  more.  time.
<adam_g> hallyn: fs might be my issue? dont personally know much about inotify support in ext3 vs ext4
<hallyn> adam_g: yeah my current vm is xfs, but it gets /dev/kvm chgrpd to kvm when i apt-get install qemu-kvm (just tried)
<hallyn> and previously i had it ext4
<hallyn> so yeah this could be ext4 bug.  you created this vm just for this test?
<hallyn> or where are you testing?  (wondering if anything else could be modified)
<adam_g> hallyn: this is in a hardware lab where i'm testing other stuff from quantal-proposed, that depends on qemu-kvm (nova/libvirt). ive got ext3 root just to speed up d-i installation, i can try ext4
<hallyn> hold on.  maybe i'm still being an idiot
<hallyn> nope - i mean probably, but my results are still the same
<mikeey> Some sysadmin of ours managed to run Bastille, and it made a couple utilities "root only", how do I reverse it if we do not have the restore file? For instance, ifconfig = -bash: /sbin/ifconfig: Permission denied
<Free99> mikeey: try running "ls -l /sbin/ifconfig" and put the results here
<mikeey> -rwxr-x--- 1 root root 72320 Mar 31  2012 /sbin/ifconfig
<hallyn> adam_g: clue.  if you apt-get purge qemu-kvm; modprobe kvm_intel; apt-get install qemu-kvm, does it then change the /dev/kvm group owner?
<hallyn> adam_g: I'm thinking the very first time you install qemu-kvm, it may refuse to change the group bc /etc/group hasn't been updated yet (though it *should* have been, which confuses me)
<hallyn> i ask bc that's what i just saw on raring
 * hallyn curses fragile magic crap
<Free99> mikeey: sudo chmod +x /sbin/ifconfig
<Free99> and also +r
<adam_g> hallyn: need to try again on a fresh box. fwiw, kvm_intel is already loaded before installing
<Free99> the permissions say that it is read/write/exec by the user root, read/exec by anyone in the root group, nobody else
<Free99> mikeey: (you know how to read permissions right?)
<hallyn> adam_g: yes, but when you apt-get purge qemu-kvm, it gets unloaded
<mikeey> sort of. I should really learn it
<Free99> mikeey: when you did "ls -l", the part of the response that says "-rwxr-x---" is the permissions for a file, or whatever you're looking at
<adam_g> hallyn: ugh. of course when i try again on another box, /dev/kvm looks good after installation. :)
<hallyn> adam_g: then do getfacl /dev/kvm
<Free99> r=read,w=write,x=execute
<adam_g> hallyn: group::rw-
<Free99> mikeey: it's 5, I'm out. But try to find whatever things you need that have weird permissions, and just do "sudo chmod +x <file>" and "sudo chmod +r <file>"
<Free99> peace guys
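Free99's advice boils down to putting world read/execute back on the binaries Bastille locked down. A minimal sketch, assuming the normal mode for a tool like /sbin/ifconfig is 0755 (check against an untouched machine first, since a few tools are deliberately more restrictive):

```shell
# restore_mode FILE: put a binary back to a known-good 0755 (rwxr-xr-x).
# 0755 is the usual mode for /sbin/ifconfig; verify against a healthy
# system before applying this to anything else.
restore_mode() {
    chmod 0755 "$1"
}

# usage (as root): restore_mode /sbin/ifconfig
```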
<hallyn> adam_g: !  in what?  quantal?
<adam_g> hallyn: oh jeez, disregard
<hallyn> ok
<adam_g> hallyn: i wasn't testing from proposed on those systems heh
<hallyn> no need to even explain :)
<adam_g> what a LONG short week :)
<hallyn> so easy to do a bad test with this
<hallyn> oh, the week was short - phew - that explains why i feel so unproductive
<hallyn> anyway i'm trying one more precise installation to test
<hallyn> tbh i'm still not sure that part of it isn't some consolekit daemon racing with udev...
<adam_g> hallyn: apt-get purge qemu-kvm; modprobe kvm_intel; apt-get install qemu-kvm gets me the correct group perms
<sazawal> I need to install Ubuntu via Networkboot. My DHCP server is working. I have also configured TFTP and I am able to get files on the server-system using tftp localhost -c get testfile. But the client system is showing pxe-e32 tftp open timeout
<adam_g> hallyn: and yeah, if i also remove kvm from /etc/group between purge + install, /dev/kvm remains root:root after install
<hallyn> adam_g: you getting the correct groups, which release is that on, and is that with you being logged in on a console?
<adam_g> hallyn: quantal/quantal-proposed. im ssh'd in
<adam_g> hallyn: trying again once more on another system, but adding kvm group before anything
<hallyn> adam_g: ok ssh'ing in is not enough
<hallyn> you have to be logged in on console to make consolekit set acls
<adam_g> hallyn: thats  beyond what im able to test ATM. :|
<hallyn> adam_g: the kvm group thing really scares me.  qemu-kvm.postinst clearly, serially first adds the group, then much later calls udevadm trigger
<hallyn> yeah that's why i'm having to test on nested kvm VMS on my laptop :)
<hallyn> (which only had 80G ssd, so i'm short on space to keep things around)
<adam_g> hallyn: yeah, well, just confirmed group perms look good if the kvm group exists prior to doing anything
<hallyn> adam_g: so that would suffice for the tests you need?
<hallyn> it might be worth creating an empty dummy package to test this
<adam_g> hallyn: well, chown'ing vs addgroup'ing prior to install is an easy enough workaround for me.
<sazawal> ogra_, I am using tftp-hpa as tftp server. It is working fine when I try to get files on the server-system using this command "tftp localhost -c get testfile". The client system gets connected via DHCP server and then shows this error "pxe-e32 tftp open timeout"
<hallyn> adam_g: not sure i follow.  you're saying you're ok with what is there now?
<adam_g> hallyn: no. :) but its not blocking me from testing the other things in quantal-proposed.
<hallyn> ok
<adam_g> hallyn: if it were simply an issue of the group not being there, reruning '/var/lib/dpkg/info/qemu-kvm.postinst configure' should fix it like a reboot, no? i still think udev isn't picking up the new rules
<hallyn> adam_g: i believe you
<hallyn> adam_g: at this point i feel like my life is a lie :)
<adam_g> :P
<hallyn> when we finish up some other thing, i'm going to beg stgraber or slangasek for some help in figuring out what on EARTH is going on
<hallyn> hopefully early next week
<hallyn> cause it's apparently still messing up precise - raring
<adam_g> is there a utility around to monitor inotify events?
<hallyn> google for a python based inotify monitor
<hallyn> adam_g: something like http://people.canonical.com/~serge/inotify3.py
<sazawal> I want to set up TFTP for ubuntu installation via Networkboot. I am using tftp-hpa as tftp server. It is working fine when I try to get files on the server-system using this command "tftp localhost -c get testfile". The client system gets connected via DHCP server and then shows this error "pxe-e32 tftp open timeout"
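For reference: when `tftp localhost` works but the PXE client gets a PXE-E32 timeout, the usual culprits are a firewall dropping UDP port 69 or the DHCP server's `next-server`/`filename` options pointing somewhere else, rather than tftpd itself. A typical tftpd-hpa setup lives in /etc/default/tftpd-hpa; the snippet below is a common-case sketch and the directory path is an assumption:

```
# /etc/default/tftpd-hpa  (sketch; adjust TFTP_DIRECTORY to where your
# netboot files actually live, e.g. the extracted netboot tarball)
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="--secure"
```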
<adam_g> hallyn: a quick look at events using that script: a CREATE for the 40-qemu-kvm.rules.dpkg-new but nothing when it's moved to 40-qemu-kvm.rules. assuming udev doesn't pick it up after the event since it's not a .rules file
<hallyn> adam_g: might add IN_MOVED_TO to the mask,
<hallyn> adam_g: but i'm trying to remember whether we can expect inotify to detect renames
<hallyn> it looks at the inode, so rename won't be marked as a change against the file itself for sure
<hallyn> it might be marked as a change to the parent dir
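The behaviour adam_g saw can be reproduced with a few lines of ctypes against the raw inotify syscalls (Linux-only sketch; the directory is a temp dir and the file names are stand-ins mimicking dpkg). As hallyn suspected, the rename does show up on a watch of the parent directory, but only if IN_MOVED_TO is in the mask; the constants below are from <sys/inotify.h>:

```python
import ctypes, os, struct, tempfile

libc = ctypes.CDLL("libc.so.6", use_errno=True)
IN_CREATE, IN_MOVED_TO = 0x100, 0x080  # values from <sys/inotify.h>

def watch_dpkg_style_rename():
    """Create foo.rules.dpkg-new, rename it to foo.rules, report events."""
    d = tempfile.mkdtemp()
    fd = libc.inotify_init()
    libc.inotify_add_watch(fd, d.encode(), IN_CREATE | IN_MOVED_TO)

    # mimic dpkg: write the .dpkg-new file, then rename over the real name
    open(os.path.join(d, "40-qemu-kvm.rules.dpkg-new"), "w").close()
    os.rename(os.path.join(d, "40-qemu-kvm.rules.dpkg-new"),
              os.path.join(d, "40-qemu-kvm.rules"))

    buf = os.read(fd, 4096)  # both events are already queued
    events, off = [], 0
    while off < len(buf):
        # struct inotify_event: int wd; u32 mask, cookie, len; char name[]
        wd, mask, cookie, length = struct.unpack_from("iIII", buf, off)
        name = buf[off + 16: off + 16 + length].rstrip(b"\0").decode()
        events.append((mask, name))
        off += 16 + length
    os.close(fd)
    return events
```

Running it yields a CREATE event for the .dpkg-new name and a MOVED_TO event for the final .rules name; drop IN_MOVED_TO from the mask and the rename vanishes, which matches the udev symptom in the paste.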
<sw0rdfish> hey, can my provider move my OpenVZ VPS to another plan that I bought?
<sarnold> it's up to them, probably. maybe they could just twiddle some configs, maybe they'd have to move your data to another system.
<sw0rdfish> i thought its as simple as moving the image or whatever
<sarnold> could be :)
<hallyn> mdeslaur: ^ I'm not going to look into it further today, but if you remember the /dev/kvm weirdness we had around UDS time, adam_g and I are having more oddness in backlog for last few hours
<hallyn> I'll be begging for help next week to figure out WHAT IS GOING ON
<hallyn> adam_g: heh, maybe it'll suffice to have qemu-kvm.postinst echo >> /lib/udev/rules.d/40-qemu-kvm.rules right before udevadm trigger :).  i'll look into it more next week
<hallyn> \o
<AlphaWolf> Hi. Running "sudo mountall" gives the following error: http://paste.ubuntu.com/1497640/ I also get this error on boot. It doesn't seem to be able to mount my swap partition. Here is my /etc/fstab: http://paste.ubuntu.com/1497645/ Any ideas?
<_KaszpiR_> AlphaWolf try swapon -a
<sarnold> if that device has been corrupted it may no longer be a 'valid' swap space; you may need to run 'mkswap' on it before swapon will use it.
<sarnold> (perhaps you used a suspend-to-disk that somehow did not get cleaned up correctly?)
<AlphaWolf> _KaszpiR_: Sorry, took a while: swapon: /dev/sda6: swapon failed: Invalid argument
#ubuntu-server 2013-01-05
<AlphaWolf> sarnold: Sorry, I did not see that message. I'm not 100% sure how to use 'mkswap', I only seem to get errors ("mkswap: error: UUID parsing failed" or "mkswap: error: Nowhere to set up swap on?"). I don't remember using a "suspend-to-disk" (hibernate?), so I can't help there
<sarnold> AlphaWolf: check dmesg output? maybe there's something more wrong..
<AlphaWolf> sarnold: Aha! "swap area shorter than signature indicates" appears a couple of times
<sarnold> AlphaWolf: any success?
<AlphaWolf> sarnold: No. I'm thinking I'll have to run a utility on it but I'm not sure which one, and I can't run it right now since I need the machine on so I'll have to continue tomorrow. Thanks for the help so far!
<pmp6nl> Hello, I want to change all server folders within one folder to 755, is the following the best way to do it?  find /home/brian/public_html/talk -type d -exec chmod 755 {} \;
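pmp6nl's command is correct; `-type d` keeps the chmod off the files. Two refinements worth knowing: `-exec … +` batches the chmods into fewer processes, and a second pass with `-type f` and 0644 avoids accidentally leaving files executable. A sketch, assuming the usual web-tree convention of 0755 directories and 0644 files:

```shell
# fix_tree DIR: directories 0755, regular files 0644.
fix_tree() {
    find "$1" -type d -exec chmod 0755 {} +
    find "$1" -type f -exec chmod 0644 {} +
}

# usage: fix_tree /home/brian/public_html/talk
```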
<Guest89826> anyone home?
<Guest89826> anyone had experience adding noapic to GRUB.cfg?
<Guest89826> anyone?
<ddfgt> hi
<ddfgt> how can i mount a windows share that doesn't use a username/password
<ddfgt> if i click in nautilus "file > connect to server" select "windows share" and put only the IP address - it is work fine..
<ddfgt> but when i try to mount it, it asks for a password..
<ddfgt> any idea>
<ddfgt> any idea?
<Glitchd> does anyone in here use teamspeak?
<Glitchd> hellooooooooooooooooooooooooooooooooo...............?
<ddfgt> me
<ddfgt> Glitchd,
<ddfgt> me
<Glitchd> huh with the wha?
<Glitchd> ddfgt, ^^
<ddfgt> what?
<Glitchd> you just said "me"
<Glitchd> what did u want?
<ddfgt> <Glitchd> does anyone in here use teamspeak?
<Glitchd> can i offer u a kidney punch?
<Glitchd> ohhh
<Glitchd> i was trying to get my teamspeak server working with this new shitty att hardware
<Glitchd> i think i got it now
<Glitchd> care to help me test it?
<Glitchd> ddfgt, ..?
<ddfgt> i used teamspeak time ago..
<Glitchd> well then that would mean u dont actually use it. u used to use it.
<Glitchd> meaning past tense
<Glitchd> im talking present tense
<Glitchd> cuz i presently need to know if people can connect to it
<ddfgt> mmm
<ddfgt> btw you know how i can mount my NAS to my ubuntu?
<Glitchd> mmhmm.
<Glitchd> not a clue
<Glitchd> i know google can help you tho
<ddfgt> i can get to it with nautilus.. but i cant mount it..
<Glitchd> google that shit son.
<Glitchd> sounds like something to do with  permissions
<ddfgt> i'm after alooooooot of goooogleing
<Glitchd> what version ubuntu?
<Glitchd> 64 or 32 bit?
<Glitchd> uhh...
<Glitchd> alright then
<Glitchd> adios
<ddfgt> Glitchd, ubuntu 12.04
<ddfgt> 32 bit
<Glitchd> mmk ill see what i can find out
<Glitchd> what version ubuntu?
<Glitchd> oh duh.
<Glitchd> it has something to do with putting the username and password in fstab
<Glitchd> do u want it to automount or to just mount in general?
<Glitchd> http://ubuntuforums.org/showthread.php?t=1380583
<Glitchd> Solution for 12.04
<Glitchd> sudo apt-get install smbfs
<Glitchd> Change cifs to smbfs in fstab listings.
<Glitchd> Although it was primarily trial and error. This Ubuntu wiki page was highly useful and contains information about other common setups:
<Glitchd> https://wiki.ubuntu.com/MountWindowsSharesPermanently
<Glitchd> kbye
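The wiki page Glitchd links is the canonical reference; the usual shape of the fstab entry is below. The share path, mountpoint, and uid are assumptions here, and on current releases the type is `cifs` (from cifs-utils) rather than the older `smbfs`:

```
# /etc/fstab entry (sketch -- //nas/share, /mnt/nas and uid=1000 are
# placeholders; requires cifs-utils)
//nas/share  /mnt/nas  cifs  credentials=/home/user/.smbcredentials,uid=1000,iocharset=utf8  0  0

# /home/user/.smbcredentials (chmod 600, keeps the password out of fstab):
#   username=shareuser
#   password=sharepass
```

For a share with no password at all, `guest` can replace the `credentials=` option.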
<taofd> anyone know how to mount a smb share in 12.10?
<taofd> mount -t smbfs doesn't work and gives me a smbfs unknown filetype error (what is 12.10 using to handle smb by default?)
<wilmaaaah> taofd: do you have the cifs-utils installed?
<taofd> well i can mount smb/cifs sahres
<taofd> shares*
<taofd> my main problem is when using freefilesync to sync files, it doesn't show my shares
<taofd> is there somewhere gnome keeps virtual mount points?
<taofd> i don't see .gvfs… has that been deprecated?
<wilmaaaah> taofd: is your share mounted?
<wilmaaaah> to mount a samba share use -t cifs
<taofd> meh, i found it, it's mounted under /run/user/taofd/<sharename>
<taofd> i'm on 12.10 and gnome seems to be able to mount shares automatically
<taofd> was just searching for the mount point...
<webwurst> hi! is it possible to install raid with ubiquity in ubuntu 13.04?
<jacobw> does anyone use UML to document networks? i'm looking for some information about using UML for infastructure
<jabba_> hello... is there some special trick one might know to enable pci-passthrough to a xen/pvops virtual machine?
<jabba_> i mean besides binding the pci-device to pciback, passing the pci-identifiers to the vm-config and setting iommu=soft as kernel parameter in the guest?
<Daniel988> Hey. I am testing a server for one month. provider wrote: raid1 (2x1TB hdd). How can I check if raid is enabled. cat /proc/mdstat shows nothing.
<jabba_> i am searching for 3 days now, yesterday i spent hours with a kernel-hacker (friend of mine, who does kernel programming all day at work) until 5am, but we didn't get it working... i am really desperately searching for help.
<jabba_> used configuration was dom0: ubuntu12.04, domU: ubuntu 12.04. until now both are on 12.10 - as we both suggested a kernel update might help (which didn't)
<jabba_> Daniel988: hardware or software raid?
<jabba_> xen pci-passthrough can't be that hard under ubuntu... maybe it is the xapi stack i am using? might that be? perhaps not stable yet?
<jabba_> the weird thing is: i got it all working using the xcp distribution from citrix. but they use kernel 2.6.[something] which could not handle the device i want to passthrough...
<ariel> anybody here?
<jacobw> ariel: yes
<ariel> can you tell me what you see at this address http://192.168.1.2/phpBB3
<r3fresh> anyone know where I can find the ubuntu server iso Md5's?
<r3fresh> nvm
<andol> ariel: Well, this is what I get when I enter that url into my web browser - https://dl.dropbox.com/u/322162/19216812.png
<ariel> so how do i get it to work in that ip address
<andol> ariel: Well, here is the thing, your 192.168.1.2 isn't the same as my 192.168.1.2. You see, all 192.168.*.* addressses (aka 192.168.0.0/16) are individual for each local network, and not routed across the public Internet.
<andol> (See also http://en.wikipedia.org/wiki/Private_network)
<samba35> there are two nic .1st is dhcp and 2nd i want to use as a bridge with kvm
<sw0rdfish> how do I make sure openvpn starts automatically on restart
<RoyK> well, upstart should do
<RoyK> or an init script
<andol> ...unless your OpenVPN connection has some dependencies the stock upstart/init script isn't aware of.
<RoyK> otherwise, if you want to restart if failing, use something like puppet or cfengine
<qman__> when you install it using the package manager, it starts by default, provided you set up the config
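On 12.04-era Ubuntu, which connections start at boot is controlled by /etc/default/openvpn, which the init script sources. A sketch (the config name is an assumption):

```
# /etc/default/openvpn (sketch)
# "all" starts every /etc/openvpn/*.conf at boot; alternatively list
# specific config names, e.g. AUTOSTART="client1"
AUTOSTART="all"
```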
<M0rsa> Hello
<RoyK> hi
<TheLordOfTime> so, who do i need to stab about the php5 packaging
<TheLordOfTime> because apparently packages were dropped from building in raring which are causing complaints
<TheLordOfTime> nevermind, i figured it out...
<andol> TheLordOfTime: How to fix the build, or who to stab? :)
<TheLordOfTime> ehh nevermind,.
<TheLordOfTime> figured out the cause, apparently 5.4.9 in Raring has packages disabled which prevent backporting
 * TheLordOfTime leaves it be
<TheLordOfTime> they disabled building of modules... because there's separate ones in universe...
<TheLordOfTime> not entirely sure... *why* they did that, though...
<subman> I currently have a /home directory being backed up successfully via rsync.  I also want to back up the /var/mail/user file.  Could I create a symbolic link in each of the users /home directories and rsync would then backup the mail file in /var/mail/?
<subman> I think I found my answer, thanks.
<TheLordOfTime> SpamapS, ping.
<magma> hi, I have a cluster with 6 machines, is it possible to execute a sudo apt-get upgrade in all of them with pssh?
<atyoung> i script that and run it from my "master" node.
<magma> how do you script that?
<magma> with a for loop in a bash script?
<magma> doing something like ssh user@host "sudo apt-get upgrades; y" ?
<atyoung> yeah
<atyoung> I have ssh key auth, my script essentially passes the apt-get command arguments, captures the stdout and sends it to the console on the master node.
<atyoung> There are probably other ways but it was quick and dirty and with 2 nodes made it simple
<magma> humm
<magma> would you share your script?
<tasslehoff> first server install ever. this be the place to ask install advice, or should I go to ubuntu-installer?
<atyoung> I would but I'm at work, they frown on remoting to our personal networks heh
<atyoung> o_O paranoid
<atyoung> it's not hard though, just set up key auth ssh to your nodes with the same key, and then have ssh send the commands; man ssh will give ya the syntax
<tasslehoff> I have a 60gb ssd as systemdisk, and 4 hdd's that I want to run in RAID5. Should I just format the ssd as ext4, and use it as / ?
<magma> just the thing of capturing the stdout, I don't know how to do
<magma> but I will try something
<magma> I already have the keys
<atyoung> if I remember correctly that's a ssh feature, you can grab the output on the remote with a flag on your ssh command. The syntax escapes me at the moment, i've used it for years so I just don't remember ;)
<magma> ok I will check that out :)
<atyoung> Maybe I'll look at it when I get home, make it more universal and post it on github heh
<atyoung> I think I figured there was already a solution for that
<atyoung> But perhaps not.
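The loop atyoung describes can be sketched in a few lines; NODES and the ssh options here are assumptions, and BatchMode makes a missing key fail fast instead of prompting:

```shell
# run_all CMD...: run one command on every node in $NODES, serially,
# echoing each node's output (stdout and stderr) back to the master.
NODES="${NODES:-node1 node2 node3}"
SSH="${SSH:-ssh -o BatchMode=yes}"

run_all() {
    for host in $NODES; do
        echo "=== $host ==="
        $SSH "$host" "$@" 2>&1
    done
}

# usage: run_all sudo apt-get -y upgrade
```

Note `apt-get -y` rather than piping a literal `y`, and the remote sudo must be passwordless (NOPASSWD) for this to run unattended. Tools like pssh or parallel-ssh do the same thing with parallelism added.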
<taofd> anyone know what the syntax is for mount -t cifs? i can't find anything in the man pages, and not sure what "mount.cifs" man is pointing to or supposed to be
<taofd> i want to access a remote smb share
<magma> atyoung: just a thing I have the keys for my user set and I can connect with no problem. But if I want to run as sudo it asks me for a password
<magma> do I have to put the keys in the /root/.ssh/?
<jacobw> magma: no, your user needs to have sudo access
<magma> jacobw: my user has user access and the keys are ok
<magma> I can connect to other node without password
<jacobw> magma: SSH has granted you a shell as your user, not the root user, in the same way as gnome-terminal grants you a shell as your user on the desktop
<jacobw> magma: sudo works in the same way on the desktop and on the server
<magma> jacobw: If I try sudo ssh myuser@node2 it asks me for password
<magma> oh I think I know what the problem is
<jacobw> magma: it's asking for the passphrase for your SSH key
<magma> my localhost key is not on my localhost authorized keys
<magma> jacobw: no
<magma> if I execute any command with sudo it asks me password
<jacobw> magma: yes, sudo requires a user to enter their password to execute a command as another user
<jacobw> magma: http://askubuntu.com/questions/192050/how-to-run-sudo-command-with-no-password
<magma> thanks
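The askubuntu answer comes down to a one-line sudoers drop-in. A sketch (the user name is magma's, and restricting NOPASSWD to apt-get alone is safer than a blanket ALL; always edit with `visudo -f` so a syntax error can't lock you out):

```
# /etc/sudoers.d/apt-upgrade  (sketch; create with: visudo -f /etc/sudoers.d/apt-upgrade)
magma ALL=(root) NOPASSWD: /usr/bin/apt-get
```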
<magma> btw, I can't ping my machines through their hostname
<magma> what could be the problem?
<jacobw> magma: are you using just their hostname (foo) or their fully qualified domain names (foo.bar.com)?
<jacobw> magma: if you're on the same subnet, try pinging foo.local
<magma> just the hostname
<jacobw> does the machine have an FQDN? try pinging the local address; if it has an FQDN, add the root of your host's domain to your search domains
<jacobw> magma: you can use something like .bar.com as a search domain to query foo.bar.com instead of foo first
<jacobw> magma: you can set search domains in /etc/resolvconf/resolv.conf.d/head or /etc/resolv.conf or network-manager
<magma> ok I will try that
<magma> I don't know if I have FQDN
<magma> "DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN"
<magma> do I need to reboot so that my changes take effect?
<magma> jacobw: I added the search domain
<magma> do I need to reboot the machine?
<jacobw> magma: no
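Since resolvconf regenerates /etc/resolv.conf (hence the "DO NOT EDIT" warning magma saw), the persistent place for a search domain is under /etc/resolvconf/; a sketch, with the domain as a placeholder:

```
# /etc/resolvconf/resolv.conf.d/base  (survives regeneration, unlike
# editing /etc/resolv.conf directly; bar.com is an assumed domain)
search bar.com
```

After editing, `sudo resolvconf -u` regenerates /etc/resolv.conf immediately; no reboot is needed.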
<jacobw> do `dig node2` and check that it matches the IP address of node2
<magma> I don't see the ip of node2
<magma> in the SERVER I see something like XXXXX.41.1#53
<jacobw> that's address of the name server the records came from
<jacobw> can you describe your environment for me?
<magma> I don't know exactly
<magma> :S
<jacobw> ok
<jacobw> do you know if your on the same subnet as node2?
<magma> I've been trying to figure that out via ssh
<magma> the ips are the same
<magma> the first node ends in 141, the second in 142
<magma> I can ping using the ip of node 2
<jacobw> which is the host that your using?
<jacobw> (it's address)
<magma> but using its hostname I can't, unless I have it in /etc/hosts
<jacobw> /etc/hosts overrides the domain name system
<magma> where is located the domain name system?
<jacobw> so node1 is at 141 and node2 is at 142, where is your pc?
<magma> my pc is on another network
<magma> I'm connected to 141 via ssh
<magma> and trying to ping node2 through its hostname
<jacobw> see if you can ping node2.local
<jacobw> from node1
<SpamapS> TheLordOfTime: pong, wassup?
<magma> no
<magma> ping: unknown host XXXXXX
<magma> I think I have to restart something for the edits to /etc/resolv.conf to work
<magma> where is located the domain name system file?
<jacobw> magma: the domain name system is distributed; all name servers on the internet are either delegated to by another name server, or delegate to other name servers, or both
<magma> ok
<jacobw> magma: the resolver on your host uses which ever name server you've configured to query for any name it doesn't know the address of
<magma> I see
<jacobw> magma: the resolver is configured in /etc/resolv.conf, in recent version of ubuntu /etc/resolv.conf is dynamically generated by resolvconf which is configured in /etc/resolvconf/
<jacobw> magma: the domain .local represents all the hosts discoverable via mDNS (avahi), i.e. all the hosts in the same subnet
<jacobw> magma: can you ping node1 from node2?
<magma> no
<magma> what's the arp command to see its cache?
<magma> arp
<magma> I can see the hosts there
<jacobw> if the other hosts have names, you should see them in the addresses column
<jacobw> what does `hostname` say on node1 and node2?
<magma> does it matter?
<jacobw> i don't know
<qman__> alright, this is really starting to piss me off
<qman__> for some reason, when I boot the ubuntu 12.04 installer on this UEFI board, networking doesn't work
<qman__> networking works fine in windows and in systemrescuecd
<qman__> I have two network cards, one's an r8169 and one's an e100
<qman__> I booted from PXE FFS
<Ubiquity4321> good day, all
<Ubiquity4321> so i'm trying to automount a drive for a samba share
<Ubiquity4321> i know how to get the uuid. i know how to edit /etc/fstab
<Ubiquity4321> i want to know how to copy a specific part of blkid
<Ubiquity4321> so that don't have to write it down on a piece of paper
<Ubiquity4321> i'm new to 100% cli
<Ubiquity4321> or to output it directly to the file, if that's possible
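blkid can print just the value Ubiquity4321 wants: `sudo blkid -s UUID -o value /dev/sdb1`, which can be appended straight to a file with `>>` or `| sudo tee -a`. As a fallback, an awk filter over the default `blkid` output does the same; a sketch (the device name is a stand-in):

```shell
# uuid_of DEV: extract the bare UUID for one device from blkid-style
# output on stdin, e.g.:  blkid | uuid_of /dev/sdb1 >> /tmp/fstab.frag
uuid_of() {
    awk -v dev="$1:" '$1 == dev {
        for (i = 2; i <= NF; i++)
            if ($i ~ /^UUID=/) { gsub(/"/, "", $i); sub(/^UUID=/, "", $i); print $i }
    }'
}
```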
<zastern> Anybody have any thoughts about this - http://askubuntu.com/questions/236748/service-tab-completion-broken-as-root-not-even-sure-where-to-start-looking - tab completion for service names when using the service command works fine as a regular user, but doesn't work as root.
<RoyK> happy MMXIII
<magma> oh now I can ping adding the '.local'
#ubuntu-server 2013-01-06
<uvirtbot`> New bug: #1096491 in etckeeper (main) ""etckeeper unclean" is broken" [Undecided,New] https://launchpad.net/bugs/1096491
<uvirtbot`> New bug: #1096495 in etckeeper (main) "manpage talks about /etc/.metadata" [Undecided,New] https://launchpad.net/bugs/1096495
<uvirtbot`> New bug: #1096505 in etckeeper (main) "manpage fails to document "etckeeper init" for restauration of file permissions and ownership" [Undecided,New] https://launchpad.net/bugs/1096505
<mand0> anybody get a Active Directory Samba4 server running?
<qman__> not yet
<qman__> I plan to soon though
<qman__> I tried to join one to an existing AD and it wouldn't work, apparently some issue with a 2003 AD and exchange 2007
<mand0> i tried to join an existing one just now (in a test lab) and I am getting this same error: http://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg1088003.html
<samba35> i have installed windows 7 on ubuntu 12.04 but the last stage of windows did not finish and the machine goes into a reboot loop. how do i repair this, or is there a way to repair windows?
<arrrghhh> Hey all.  I am migrating to a new server, and I want to setup LVM.  I can only do this on new, unpartitioned disks correct?
<arrrghhh> I can't "add" LVM to old hard disks that already have data/are formatted...?
<arrrghhh> anyone around?  curious if the ubuntu server installer changes the flags of all my disks to 'lvm'... if I can change them back to 'ext4' and the data still be intact?
<arrrghhh> to my knowledge no mkfs commands were passed....
<arrrghhh> so can I migrate an ext4 disk to LVM without losing data...?
<arrrghhh> I haven't formatted any disks, so I honestly hope not...
<zapotah> Hi. Is there yet a way to install an efi system to mdraid to be bootable from both disks.
<samba35> i have installed windows on linux with kvm. i want to access the windows machine over the internet; what should i do? i have a utm (firewall/vpn/routing/av/spam/proxy) and from the utm i plan to use nat
<TheLordOfTime> SpamapS, see privmsg.  or -motu
<TheLordOfTime> or both :p
<bcbrown19> Anyone know a good guide for setting resolutions with Ubuntu Server 12.10 (on a virtual machine)? I tried changing some things in /etc/default/grub, but to no avail
<Ubiquity4321> hello everyone
<Ubiquity4321> I'm trying to set up a headless samba
<Ubiquity4321> server
<Ubiquity4321> i've got everything installed
<Ubiquity4321> and i've auto mounted disks and everything
<Ubiquity4321> but I can't seem to figure out the headless part
<RoyK> erm - what is it you don't understand about headless?
<Ubiquity4321> well, i've configured ssh
<Ubiquity4321> i've got xming installed on my windows box
<Ubiquity4321> I can't figure out how to connect the two
<Ubiquity4321> i've done googled my heart out at this point
<RoyK> just launch putty and enable x11 forwarding
<RoyK> or are you trying to get the whole linux desktop on windows?
<RoyK> that's not a server thing, though
<Ubiquity4321> no, it's 100% cli
 * RoyK looks at the channel name
<Ubiquity4321> RoyK: let me see if I can figure that out
 * Ubiquity4321 looks at his 12.04 server install
<Ubiquity4321> that was the simplest explanation i've come across tbh RoyK
<arrrghhh> Hello all.  I committed some sin last night, and I'm hoping it's reversible...
<arrrghhh> I was messing around with LVM while installing Ubuntu last night, and I converted existing ext4 disks to LVM.  I didn't run mkfs so the drives should still have all the data... can I convert them back to ext4, or somehow otherwise get the data off the drivers?
<RoyK> arrrghhh: what did you do? pvcreate?
<arrrghhh> RoyK, unfortunately it was done by the ubuntu installer... so i'm not sure exactly.
<RoyK> so, is the partition table the same?
<arrrghhh> should be
<arrrghhh> I tried changing the flags in fdisk, to no avail
<RoyK> and was /home on a separate partition?
<RoyK> fsck might help
<arrrghhh> /home doesn't really matter, it's the data disks
<RoyK> check what flags are available there
<arrrghhh> cool
<arrrghhh> just fsck /dev/sddX?
<arrrghhh> Hrm... I might have to do this from a liveCD
<RoyK> you probably should
<RoyK> erm
<arrrghhh> alrighty rebooting
<RoyK> have you reinstalled?
<arrrghhh> Yes, I finished the install last night... but I didn't install on the LVM drive
<RoyK> if the filesystem is recreated, then all you have left is a restore
<arrrghhh> or any of the LVM drives
<RoyK> did you have separate ext4 filesystems on all of them?
<arrrghhh> yes
<arrrghhh> they were all flat ext4 drives before I did the stupid deed last night
<RoyK> do you have a backup?
<arrrghhh> Off-site, but yes
<arrrghhh> I'd obviously rather avoid that if I can, but there is backups of pretty much everything.... even /etc
<RoyK> arrrghhh: btw, there's no point of doing this from a live cd if the filesystems aren't in use
<arrrghhh> oh ok cool
<arrrghhh> when I sudo fdisk -l the disks no longer show as /dev/sdX, they show as /dev/mapper/OldOS-OldOS for example
<RoyK> sorry - don't know that
<RoyK> arrrghhh: what does cat /proc/partitions have to say?
<arrrghhh> Hrm... I'm not sure how to fsck this
<arrrghhh> lets see
<arrrghhh> hey, those look familiar
 * RoyK doesn't know his way around the device mapper
<arrrghhh> hehe.  I appreciate your help :)
<arrrghhh> If I can revive without going to backups it will save a LOT of time.
<RoyK> how many drives?
<arrrghhh> uhm... let me pastebin the output
<arrrghhh> http://pastebin.com/WdTGpFAu
<arrrghhh> 6 disks.  1 of those (sdg) is a flash drive I used to install Ubuntu
<RoyK> arrrghhh: I'd use a RAID for such amount of drives if I were you
<arrrghhh> RoyK, they're all different sizes
<RoyK> but then, looks like they're of different sizes
<arrrghhh> well, except for two
<arrrghhh> :)
<RoyK> 'cept sdb and sdc
<arrrghhh> I'm going to do 4tb drives and RAID5 eventually
<RoyK> better start with smaller ones - more bang for the buck
<arrrghhh> But... the first 4tb has yet to arrive :)
<RoyK> you can always add more drives to the raid later
<arrrghhh> Yea, but they have to be the same size...
<RoyK> yes, so start out with 2TB drives
<RoyK> or 3TB
<RoyK> 4TB are expensive
<arrrghhh> I couldn't find any WD caviar black's in 3tb
<arrrghhh> and 2tb... I guess I could've
<arrrghhh> but that's neither here nor there
<RoyK> I have a few 2TB WD drives
<RoyK> and a few Hitachis
<arrrghhh> Some of these drives are going away my friend
<arrrghhh> But unfortunately they cannot today
<RoyK> arrrghhh: http://paste.ubuntu.com/1503573/
<arrrghhh> RAID6 eh?  fancy :)
<RoyK> arrrghhh: also, if you replace all drives in a raid with larger drives, the raid will grow :)
<RoyK> just don't use partitions
<arrrghhh> Hahaha
<RoyK> raid6 == paranoia == safe
<arrrghhh> indeed
<arrrghhh> survive a double drive failure
<arrrghhh> RoyK, I'm working my way towards RAID....
<RoyK> I find it comforting to know my data is safe ;)
<arrrghhh> I wanted to learn about LVM, and like an idiot I added all my drives to it last night
<RoyK> raid-5 works too, though
<arrrghhh> when I realized it had done that, I tried to undo it and seemingly could not...
<RoyK> arrrghhh: I'm afraid it might be hard to rollback on that
 * RoyK blames saturday nights beers
<arrrghhh> oy
<arrrghhh> I rushed into the install a bit.  I was tired and excited... stupid.
<RoyK> anyway - if you can afford 3-4 2TB WD Red drives, that should be rather good for a home raid
<arrrghhh> Yea, I'll get there :)
<arrrghhh> I just put a mobo in that can handle more disks
<arrrghhh> and RAM
<arrrghhh> Wanted to rebuild my 32-bit server as 64-bit
<RoyK> why not just a 2port SATA controller?
<RoyK> or two
<arrrghhh> old mobo only had 2 RAM slots
 * arrrghhh wanted moar speed
<RoyK> well, for a home server, how much do you need?
<arrrghhh> lol
<RoyK> it won't be much faster with more RAM, unless you have very little
<arrrghhh> my indexer site...
<arrrghhh> it had built a pretty big freakin db
<RoyK> ah
<arrrghhh> I haven't really spent anything on this server since I built it in 2008
<arrrghhh> and I went really cheap on it then :)
<arrrghhh> So the server had done a great job until I started this stupid indexer...
<RoyK> I bought some 2-port controllers off ebay
<RoyK> works like a dream ;)
<arrrghhh> how's the speed?
<RoyK> arrrghhh: about the same as my pci-e v1 can deliver
<arrrghhh> Tryin to think of the expansion slots left on that oldboard too
<arrrghhh> meh
<RoyK> the speed is only limited by the spinning rust
<arrrghhh> So... you don't think I should even try fsck?
<RoyK> two drives on a single pci-e v1 lane is ok
<RoyK> well, you can, but I don't know if it's worth the time and the headache, if you have a backup
<RoyK> also, if you reconfigure this system, stick to single drives for now, not lvm
<arrrghhh> well.... I'm looking at this here...
<RoyK> since if one of the drives die, it may take down the whole lvm volume
<arrrghhh> oh
<arrrghhh> I didn't realize LVM was bad like that... eek
<RoyK> no, it's not bad at all
<arrrghhh> I guess I need to wait for RAID before I LVM...
<RoyK> but it requires the underlying devices to work
<RoyK> I use LVM on top of RAID-6
<RoyK> so that I create a volume for each purpose
<arrrghhh> Yea.  I was planning on using LVM for that...
<arrrghhh> but before I took the RAID jump
<RoyK> that's rather flexible
<RoyK> and then the MD code can take care of the redundancy
<RoyK> I lost a drive just after christmas
<RoyK> no problem, not even a reboot, the spare took over, rebuilt in a day or so
<RoyK> if that had been a concatenated LVM, all my data would have been lost and I'd have to restore from backup (on crashplan, and that's *slow*)
<arrrghhh> RoyK, which is what I am restoring from
<arrrghhh> and I'm trying to avoid...
<RoyK> hehe
<RoyK> how much data?
<arrrghhh> you really want to know?
<TheLordOfTime> I think he does.
<RoyK> arrrghhh: it may be faster from where you are - I'm in .no
<arrrghhh> 2.4tb
<RoyK> ok
<RoyK> give it a month ;)
<arrrghhh> yup
<RoyK> I can understand why you want to try to fix this...
<arrrghhh> fsck
<arrrghhh> I had to do every single drive too... lol
<RoyK> fsck -N
<RoyK> is a good start
<arrrghhh> when i fsck myself, I do it gooood.
<arrrghhh> hrm OK
<RoyK> or start with one of the smaller drives for testing
<arrrghhh> ok
<RoyK> arrrghhh: /dev/mapper/raid-ymse    6.3T  4.2T  2.1T  67% /raid
<arrrghhh> :)
<RoyK> most of that's on crashplan - I hope I don't have to restore from there...
<arrrghhh> lol
<arrrghhh> So I can't seem to fsck -N /dev/sdf... and if I do sdf1 it says "fsck.LVM2_member: not found"
<RoyK> fsck.ext4
<RoyK> or 'fsck -t ext4', which is the same thing, only portable
<RoyK> btw, what does lvscan have to say?
<arrrghhh> ok, what should I pass?
<arrrghhh> lvscan lets see
<RoyK> or pvscan
<RoyK> pastebin those and lvscan
<arrrghhh> http://pastebin.com/jUj5Dttc
<arrrghhh> the only "healthy" one is VG OS...
<RoyK> I guess starting with lvremove/pvremove on one of those, perhaps oldos?, might be worth a try
<RoyK> the tv shows and the movies are perhaps downloaded?
<RoyK> you can't do anything until the partitions aren't in use
<RoyK> they are now
<RoyK> so perhaps better use a live cd, but then, that might start lvm as well
<RoyK> so, if you can afford to lose one of those, like oldos, test with that first
<arrrghhh> ok
<arrrghhh> backup would ironically be the one to test with since its the smallest
<arrrghhh> so rebooting to livecd...
<RoyK> yeah, try that, if lvscan shows the volumes active after bootup, well, it's the same thing
<arrrghhh> I'll probably have to install lvm2 on the livecd
<RoyK> don't
<RoyK> just don't
<arrrghhh> que?  livecd won't have it
<RoyK> try to fsck that device
<arrrghhh> ubuntu desktop doesn't support lvm
<arrrghhh> oh
<arrrghhh> ok
<RoyK> never mind
<RoyK> you didn't use it on the old install
<RoyK> so you don't need it
<RoyK> ubuntu desktop supports lvm, but you don't want it for this
<arrrghhh> ok
<RoyK> just try to fsck -t ext4 -N /dev/sdf1
<RoyK> -N is "don't do anything"
<RoyK> just to see if it can find the filesystem
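An editor's sketch of the dry run RoyK is suggesting; the device name is the one from this session, so substitute your own:

```shell
# Dry-run filesystem check: -N makes fsck print what it *would*
# execute without touching the disk, and -t ext4 forces the ext4
# checker even though the partition type byte still says "Linux LVM".
DEV=/dev/sdf1   # example device from this session

cmd="fsck -t ext4 -N $DEV"
echo "$cmd"     # drop the echo to actually run it (as root)
```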
<arrrghhh> fsck.ext4: No such file or directory while trying to open ext4
<arrrghhh> hrm..
<arrrghhh> 1 sec
<arrrghhh> RoyK, ok... i got it to run on the live environment
<RoyK> did it find the filesystem?
<arrrghhh> but copy/paste is now difficult heh.  it says the superblock could not be read
<arrrghhh> "or does not describe a correct ext2 filesystem"
<arrrghhh> it says I should run e2fsck with an alternate superblock
<arrrghhh> "e2fsck -b 8193 <device>"
<RoyK> try that with -N
<arrrghhh> ok
<RoyK> so that it doesn't do anything
<arrrghhh> hrm
<arrrghhh> it says bad magic number in superblock while trying to open /dev/sdf1
<arrrghhh> then the exact same message about running e2fsck -b 8193 <device>
<arrrghhh> I guess I just run it on one of the drives and hope... I have a complete backup of this drive
<arrrghhh> and it's the smallest drive haha
<RoyK> http://linuxexpresso.wordpress.com/2010/03/31/repair-a-broken-ext4-superblock-in-ubuntu/
<RoyK> the superblock is located several places, but you need to know where to find it ;)
<arrrghhh> hrm
<arrrghhh> interesting
<RoyK> I didn't know the trick with fsck -n :)
<RoyK> s/fsck/mke2fs/
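To spell out the trick in that link: ext4 keeps backup superblocks at block-group boundaries, and with the sparse_super feature copies live only in groups 1 and powers of 3, 5 and 7. With the common 4 KiB block size a group spans 8 × 4096 = 32768 blocks (the 8193 in e2fsck's hint assumes 1 KiB blocks). `mke2fs -n` prints the real locations for a device without writing anything. A rough sketch, assuming 4 KiB blocks and default mke2fs options:

```shell
# First few backup-superblock locations for a 4 KiB-block ext4
# filesystem with sparse_super: copies sit at the start of block
# groups 1, 3, 5, 7, 9 (then 25, 27, 49, ...); each group spans
# 8 * block_size = 32768 blocks.
blocks_per_group=32768
backups=""
for group in 1 3 5 7 9; do
    backups="$backups $((group * blocks_per_group))"
done
echo "backup superblocks at:$backups"

# Against a real device you would run (read-only first):
#   mke2fs -n /dev/sdf1            # prints the locations; -n writes NOTHING
#   e2fsck -b 32768 -n /dev/sdf1   # trial repair from a backup copy;
#                                  # drop -n to actually fix things
```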
<arrrghhh> o wow
<arrrghhh> it's doing stuffs
<arrrghhh> this might actually work.  thank you so much RoyK :)
<RoyK> you'll have to re-run it without -N
<arrrghhh> yup, I am now...
<arrrghhh> well
<arrrghhh> the e2fsck -b command
<RoyK> it should rewrite the original superblock
<F3Speech> Having a problem with my cifs network shares after extended idle on the server; is there any way to tell the server not to sleep/suspend etc. so when services go to access the shares they are always there? Thanks
<RoyK> if that works, try to mount the filesystem
<RoyK> if that works, do the same for other filesystems and order a good champagne for me ;)
<patdk-lap> if it doesn't work, order a hit on royk
<arrrghhh> it's asking me to fix a ton of crap
<RoyK> that's normal
<RoyK> you've messed up ;)
<arrrghhh> hrm
<arrrghhh> ok, it's done
<arrrghhh> fdisk -l still shows the filesystem type as lvm?
<RoyK> try to mount it anyway
<arrrghhh> ok
<RoyK> patdk-lap: heh
<arrrghhh> yay!
<arrrghhh> i had to force it
<arrrghhh> -t ext4
<RoyK> :)
<arrrghhh> force the mount that is.
<arrrghhh> RoyK, mind if I PM you?
<RoyK> fdisk -l will probably remember what was set on boot
<RoyK> arrrghhh: not at all
<RoyK> arrrghhh: btw, try to reboot now, still on the live cd, and check what fdisk -l says
<RoyK> F3Speech: a server installation should never suspend
<arrrghhh> RoyK, so I rebooted and fdisk -l still shows it as lvm?
<RoyK> what is the partition type?
<arrrghhh> 8e
<arrrghhh> Linux LVM
<RoyK> change it
<arrrghhh> to 83?
<RoyK> mhm
<arrrghhh> k
<arrrghhh> reboot again?  I tried to just do a straight 'mount' after writing that change in fdisk, and it still thinks it is lvm until I -t ext4.... fdisk -l just shows the type as "Linux"
<arrrghhh> The installer showed disk types as ext4, but maybe it wasn't just showing fdisk output...
<arrrghhh> well, it showed them as ext4 before I screwed them all up lol
<RoyK> I don't think it'll change until a reboot
<arrrghhh> ok
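For the record, the same type change can be scripted instead of done interactively in fdisk. The option name below is from recent util-linux (older versions spelled it `--change-id`), so treat this as a sketch:

```shell
# Flip partition 1 on /dev/sdf from type 8e (Linux LVM) back to
# 83 (Linux). The type byte is only a hint -- mount and fsck go by
# the superblock, which is why the system behaves the same either way.
disk=/dev/sdf   # example disk from this session
part=1

cmd="sfdisk --part-type $disk $part 83"
echo "$cmd"     # drop the echo to apply (as root)
```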
<samba35> after mounting iscsi file (not disk) on linux how do i add /copy  (or create a file )file to that image (in the image )
<samba35> RoyK, hi
<RoyK> ho
<arrrghhh> RoyK, hrm... well without specifying the filesystem type it doesn't mount
<arrrghhh> but I don't think that really freakin matters at this point.  so long as fstab can mount it, I will be golden.
<RoyK> arrrghhh: then I suggest you move data around, recreate filesystems etc, and nothing should be lost
<arrrghhh> kk
<arrrghhh> 4tb hdd is going to be here tomorrow :)
<RoyK> arrrghhh: what does pvscan/lvscan say now?
<arrrghhh> hrm... I'd have to boot back to the server install
<RoyK> arrrghhh: then use that for backup storage, and get a small amount of 2TB WD Red drives for a RAID
<RoyK> arrrghhh: you can apt-get install lvm
<RoyK> even on a live cd
<arrrghhh> eh I guess
<arrrghhh> so
<arrrghhh> It says /dev/sdf1 is still a physical volume
<arrrghhh> Do I need to "remove" these disks from LVM before doing this?
<RoyK> guess so
<RoyK> never tried this type of recovery
<RoyK> do you have room on any of those 1TB drives for the backup data?
<arrrghhh> nope
<arrrghhh> hence why the 4tb is on the way :)
<RoyK> well, try pvremove, then
<RoyK> probably have to lvremove first
<arrrghhh> yay
<RoyK> then pvremove
<arrrghhh> mount works
<RoyK> then fsck again
<arrrghhh> on the sdf1 disk
<RoyK> fsck!
<arrrghhh> I will have to now fsck all the other disks
<arrrghhh> oh?
<RoyK> lvremove/pvremove probably changed something
<RoyK> so better fsck -f
<arrrghhh> clean
<RoyK> good
<RoyK> then pvremove the others
<demona> Network booting from my PXE server stopped working. I changed nothing, didn't even reboot. (UNtil now, to try fixing it.) Two copies of dhclient running (one is for an LXC container). dnsmasq now refuses to start saying the local IP address is already in use. Google is not helpful. Any ideas?
<RoyK> and fsck as last time
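Spelled out, the teardown RoyK describes runs top-down through the LVM stack. The VG/PV names are guesses from the pastebin, and `vgremove` is an extra step he didn't name that avoids pvremove complaining about a PV still belonging to a group:

```shell
# Tear down LVM metadata from the top of the stack downwards,
# then force-recheck the recovered filesystem underneath.
vg=OldOS        # example volume group name
pv=/dev/sdf1    # example physical volume

plan=""
for cmd in \
    "lvremove -f $vg" \
    "vgremove -f $vg" \
    "pvremove -ff $pv" \
    "fsck -f -t ext4 $pv"
do
    plan="$plan$cmd; "
    echo "$cmd"   # drop the echo to run for real (as root)
done
```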
<arrrghhh> sweet.
<arrrghhh> now to do the rest of the disks
<arrrghhh> I should be able to boot back to the server install now right?
<RoyK> yes
<arrrghhh> cool.  fstab is empty on this install so it won't try to mount 'em
<RoyK> lvm will scan for them
<RoyK> so remove them from lvm first
<zapotah> Hi. Is there yet a good way of installing ubuntu on mdraid on efi so that it is boot redundant as well?
<RoyK> zapotah: no idea, but the general setup is to install grub on both devices, so it should work
<patdk-lap> it all depends on your bios
<zapotah> RoyK: thats how it worked in the olden days with bios, but not with efi
<zapotah> im talking about pure efi
<RoyK> zapotah: sorry, don't know
<zapotah> the server system has no compat module whatsoever
<zapotah> info on this seems to be hidden behind lock and key and buried in the deepest depths of the ocean
<zapotah> and not just on ubuntu but on linux in general
<zapotah> and im quite baffled by the lack of information since efi is not _that_ new of a thing in computing
<arrrghhh> RoyK, thanks again for helping out with the lvm issue.  pvremove did the trick, now I just need to do the superblock trick on all my other drives...
<RoyK> arrrghhh: just don't create lvm on existing devices next time ;)
<arrrghhh> hahaha
<arrrghhh> I think I've learned my lesson thar
<patdk-lap> zapotah, no wonder you're confusing me
<patdk-lap> it's uefi
<arrrghhh> I'm going to wait for RAID before I venture into LVM on multiple disks...
<RoyK> arrrghhh: just setup md (raid) on a bunch of drives, use lvm on top of that, and you'll be all set
<patdk-lap> and it is pretty new, people have only been playing with it in linux for about a year
<arrrghhh> RoyK, will do :)
<RoyK> arrrghhh: make sure it's raid-5 or perhaps -6 - redundancy is good
<arrrghhh> I was thinking 5, 6 sounded kinda complicated
<arrrghhh> But I like the ability to lose 2 disks...
<arrrghhh> eh
<arrrghhh> Assuming I'm on top of it, 5 should be enough lol
<RoyK> arrrghhh: you can start off with raid-5 and change to raid-6 later if you get a lot of drives
<arrrghhh> cool
<RoyK> mdadm  --change --level=6 --add /dev/sdX (iirc)
<RoyK> perhaps add --raid-devices=x
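One correction to the half-remembered command: the reshape flag is `--grow`, not `--change`. A sketch for taking a 3-disk RAID-5 to a 4-disk RAID-6 (array and disk names are examples; older mdadm versions may also want a `--backup-file=` for the reshape):

```shell
# RAID-5 -> RAID-6 reshape: add the new disk as a spare first,
# then change the level and device count in one --grow step.
md=/dev/md0       # example array
newdisk=/dev/sde  # example new member disk

step1="mdadm $md --add $newdisk"
step2="mdadm --grow $md --level=6 --raid-devices=4"
echo "$step1"
echo "$step2"     # drop the echoes to run; watch progress in /proc/mdstat
```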
<zapotah> patdk-lap: it will sooner or later probably be doable more or less the same way it is done now so that the mdraid device itself maps the uuid pointers for the efi boot program from the device uuids and makes it bootable
<zapotah> or some such way
<zapotah> and grub is updated along
<zapotah> to support such a thing
<arrrghhh> uefi is sweet.  I updated from the internet within the "BIOS" screen.
<patdk-lap> everything is possible given time, but it's still too new
<patdk-lap> and uefi systems only started coming out a little while ago
<patdk-lap> I don't think any of the systems I bought last year are even uefi compatible
<zapotah> arrrghhh: uefi is full of opportunity but currently support and software implementations are limited to say the least...
<RoyK> patdk-lap: erm, most servers we got last year had uefi
<zapotah> RoyK: patdk-lap: every server bought last year were uefi
<patdk-lap> I didn't buy any servers last year
<patdk-lap> just about 15 workstations
<patdk-lap> I am buying several new servers next month
 * RoyK points patdk-lap to the channel name
<zapotah> and theres this weird ibm server thing that does not for some reason have a compat module...
<patdk-lap> royk, I can easily run ubuntu-server on a desktop system :)
<zapotah> ibm ofc wants to sell a raid card ;)
<RoyK> zapotah: yeah, let's put a small ARM system with limited memory bandwith into a badass server to do the I/O
<zapotah> RoyK: exactly and it costs a ridiculous amount of money as well -.-
<RoyK> yep
<zapotah> though seeing theres not much choice when comparing the risk of something going fubar with making a custom solution for this I just might go with it all the same...
<zapotah> definitely not happy with it...
<zapotah> i wonder if windows can manage it either just yet... time to fire up that esxi efi platform and test for the hell of it ->
<demona> Screw it. Moved PXE server to another machine. Working now. The Microsoft Solution.
<qman__> zapotah, I just went through a similar adventure in EFI just trying to get ubuntu server installed in the first place
<qman__> and in my case I was simply trying to get it to detect the cd or make the networking work so it could install
<zapotah> qman__: getting it installed with just the efi partition not raid backed was easy enough
<zapotah> but then the problem hit me
<qman__> I never did get netboot working, but I managed to make the CD work by manually setting the USB CDROM type to CDROM emulation
<qman__> mine boots normal grub on MBR disks just fine
<qman__> the point of course being, documentation is virtually nonexistent, and the consensus is "don't use it if you can avoid it"
<qman__> at least with everything I have been able to find
<zapotah> basically it would be easy if grub installed on both the partitions just made some guid mappings and somehow the partition to be mounted would be fed from grub instead of fstab of whatnot
<zapotah> though i realize how that would be a suboptimal solution
<zapotah> and the uefi firmware could boot either of the disks it sees
<zapotah> and it would result in the system being loaded
<zapotah> that would be a non-raid and more like a dual partition solution as in reference to a sub-optimal solution
<zapotah> there would then need to be a mechanism to be aware of both the partitions and update things to both accordingly if needed
<zapotah> again in my mind, sub-optimal
<qman__> well, the thing is
<qman__> you can use a raid 1 and read them like normal partitions through the boot stage
<qman__> that's how grub did it before it got smart enough to actually read more stuff
<qman__> that said I don't know how EFI booting actually works
<zapotah> qman__: I dont understand it too well either. Thats why Im sort of shooting in the dark :)
<qman__> mine's got everything in BIOS compatibility
<zapotah> qman__: I wish this thing had that option as well.
<zapotah> one of the more cool features of uefi is that you can load all kinds of uefi programs into the firmware as modules (with limitations this time in flash rom size) without the damn 128kb orom limitation
 * zapotah has dabbled with bios modification and hates this limitation
<stemid> http://pastebin.com/WQkebVbA can anyone say why this kickstart partitioning won't work? whenever it gets to that stage I am presented with the default partitioning scheme confirmation. also does 12.04 have kickstart support for logvol yet? I haven't gotten that far but I thought I'd ask since I couldn't find a definitive answer online.
<qman__> yeah, it seems like a good system, but it has to actually work first, and we're not there yet
<stemid> fyi the default partitioning scheme does not have a /boot partition.
<stemid> and I noticed the mistake I made in the volgroup line with the pv name.
<stemid> but I haven't gotten that far yet anyways.
<zapotah> stemid: the documentation on logvol in kickstart is old at best but many features are not supported apparently
<stemid> yes I've seen mentions that logvol specifically is not supported but couldn't be sure if it was for 12 or an earlier version
<stemid> but regardless, seems like part has problems too. https://bugs.launchpad.net/ubuntu/+source/kickseed/+bug/48311 for example. I'm having to trial&error my way to a working method. I think preseed is superior.
<uvirtbot`> Launchpad bug 48311 in kickseed "kickstart partitioning fails with --recommended or --asprimary" [Medium,Confirmed]
<F3Speech> RoyK: the cifs error only happens after i guess hour+ i dont have the exact time. Server can run all night while im using it with no problem as soon as i logoff and go bed about hour later i get "CIFS VFS: Unexpected lookup error -112" every minute or so printed to screen
<zapotah> also not sure if you need to specify the partition to be used as lvm
<stemid> kickstart docs say part pv.01
<stemid> and then volgroup pv.01 and so forth
<stemid> but it seems like I finally got part to work. now to try volgroup
<zapotah> partition / --size=1 --grow --ondrive=sda #Rest
<zapotah> im assuming that is the lvm partition
<stemid> that would be part pv.01 instead of part / for lvm
<stemid> it will be, later
<stemid> I just had to get past the part issues
<zapotah> https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s1-kickstart2-options.html
<zapotah> most of that should work
<stemid> thanks I had the old version open http://www.centos.org/docs/4/html/rhel-sag-en-4/s1-kickstart2-options.html#S2-KICKSTART2-OPTIONS-PART-EXAMPLES
<stemid> also centos
<zapotah> at the end is a pretty good example
<stemid> it actually looks like it's working with volgroup and logvol now! =)
<zapotah> the ubuntu documentation pointed to the version i pasted
<stemid> it's past the partitioning stage, I just have to boot it up and see how it looks before I declare victory over kickstart
<stemid> the ubuntu wiki actually hinted that there was no lvm support in kickstart
<stemid> yet
<zapotah> yes the documentation is old at best in some points :)
<stemid> that's why I had some hope left
<stemid> wanted to try it
<zapotah> btw what did you change?
<stemid> I'll paste the whole file
<stemid> it's going on my wiki in swedish anyways
<stemid> just booted it up and it looks good. here's the whole file as it is. http://wiki.sydit.se/dokument:naetboota_ubuntu_12.04?&#kickstart
<zapotah> ah so it was the lvm partition designation
<zapotah> it has been so long since i used kickstart last that i can barely remember what it included :)
<stemid> I'm glad I can use kickstart instead of preseed because I can also use it for centos and rhel.
<stemid> before I have only used preseed to install debian.
<arrrghhh> Hey, what do I edit instead of resolv.conf now?
<arrrghhh> oh right I include that dns-nameservers in my /etc/network/interfaces...
<patdk-lap> or delete resolv.conf and recreate it
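Context for readers: since 12.04, resolvconf generates /etc/resolv.conf, so hand edits get overwritten; the `dns-*` options in /etc/network/interfaces are the supported place, as arrrghhh remembered. A sketch of a static stanza (interface name and all addresses are examples):

```
# /etc/network/interfaces -- resolvconf picks up the dns-* lines
# and regenerates /etc/resolv.conf from them on ifup
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
    dns-search example.lan
```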
<Ubiquity> I'm attempting to set up wake on lan. My bios are set and ethtool is configured. I'm looking at my router right now and seeing the packet reach the connection but the computer isn't turning on. Can you help me troubleshoot?
<Ubiquity> (ubuntu 12.04 serrver)
<RoyK> Ubiquity: I don't know if wake on lan is an ubuntu thing
<Ubiquity> it's not an ubuntu thing. Just curious if someone can help me troubleshoot it
<Ubiquity> for some reason i can't speak on #linux
<Ubiquity> even though i've never visited before
<RoyK> perhaps you need to register with nickserv?
<Ubiquity4321> oh wow i feel dumb
<Ubiquity4321> i'll show myself out
<arrrghhh> Hey, is it possible to change the mountpoint of /var after I've installed the system?
<arrrghhh> I cp'd the contents of /var to another partition and tried to change fstab and reboot... it wasn't very happy.
<arrrghhh> hrm... that didn't work like I expected it to
<stemid> https://help.ubuntu.com/community/KickstartCompatibility "The post install script runs right before the reboot in the installation. " this is a very blurry statement imo because the postinstall script actually runs before user creation.
<stemid> which makes it useless for things like ssh keys.
<stemid> unless I create users in the postinstall script
<Zer0Glitch> Does anyone here have experience with hosting IRC?
<Zer0Glitch> 363 people and not one response... Wow. The NSA bots are running overdrive.
<qman__> !patience
<ubottu> Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com or http://ubuntuforums.org or http://askubuntu.com/
<qman__> this channel doesn't move that fast
<patdk-lap> 363 people is not very many
<yeats> Zer0Glitch: if you post your actual problem/question, someone may be able to help
<arrrghhh> can I change /var's mountpoint after installing the OS?
<arrrghhh> I'd like to move it to an expandable lvm volume...
<qman__> you can, but you'll have to do it offline
<qman__> too much going on there to try and do it live
<arrrghhh> qman__, i just cp'd /var live to the lvm partition
<arrrghhh> changed fstab
<arrrghhh> and rebooted
<qman__> that can cause problems
<qman__> because there's lots of lock files and things there
<arrrghhh> k.  so copy it offline?
<qman__> that normally get removed at shutdown
<qman__> yeah
<arrrghhh> cool
<arrrghhh> other than that, it should work ?
<qman__> yes
<arrrghhh> thx :)
<qman__> if you shut down, copy it over, adjust fstab, and boot back up, it should work
<qman__> use rsync instead of cp though, make sure the permissions copy
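qman__'s steps, sketched as the commands you'd run from a live CD. Device names are examples (the LV name is the one from this session), and the trailing slash on the rsync source matters:

```shell
# Offline /var move: mount the old root and the new LV, copy with
# attributes preserved, then point fstab at the new location.
oldroot=/dev/sda1            # example old root partition
newvar=/dev/mapper/OS-OS     # example LV from this session

plan="mount $oldroot /mnt/old; mount $newvar /mnt/new"
# rsync -aAX keeps permissions, ownership, timestamps, ACLs and
# xattrs -- a plain cp can silently drop some of these, which is
# exactly why rsync is suggested over cp here.
plan="$plan; rsync -aAX /mnt/old/var/ /mnt/new/"
echo "$plan"   # then edit the /var line in /mnt/old/etc/fstab and reboot
```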
<stemid> Zer0Glitch: you should perhaps be more specific. I've hosted a hybrid ircd with anope services about 8 years ago.
<stemid> not sure how much that means today
<arrrghhh> ah yes.  love me some rsync.
<stemid> I managed to unmount /var online once. I just shutdown all the services using /var. checked which ones using lsof. but at that point you're better off just doing it offline since the system is not operational anymore.
<Zer0Glitch> Two conjoined questions, actually:
<Zer0Glitch> - Are there resources you can recommend for the setup and secure hosting of an IRC server?
<Zer0Glitch> - Do I have to worry about any security issues with hosting IRC?
<qman__> running a public IRC server is a huge undertaking
<qman__> you have to manage it full time to defend against people using it to spam and run botnets
<qman__> I don't run one because it's too much work
<qman__> you need at a bare minimum to have at least one oper on line 24/7
<Zer0Glitch> I was wanting to run one so I could embed/link it with my hosted Wordpress blog.
<Zer0Glitch> Would it be better then to embed from someone else's service, who is running as a larger operator?
<qman__> unless you already have trustworthy staff that can run it, then yes
<Zer0Glitch> Thanks qman, I appreciate the feedback.
<qman__> it sounds to me like you just want to run one channel
<qman__> and in that case you should find a network to host you
<Zer0Glitch> Yes, only one channel... Next query: if any of you are hosting wordpress, are there plugins for Wordpress you wouldn't live without?
<patdk-lap> I would live without wordpress, every week there is a new security issue with it
<qman__> same, wordpress has an abysmal track record for security
<Zer0Glitch> So better to go hosted, rather than hosting it myself?
<qman__> if you're set on wordpress, then yes, find a host with a good reputation for fixing things when they break
<qman__> because they will
<Zer0Glitch> I presently use 1&1 and PEER1
<qman__> you know all those fedex/UPS/intuit phishing scam emails?
<qman__> those were all sent out using compromised wordpress sites
<qman__> at one point I had a content filter on my mail server filtering out URLs which contained wp-content
<qman__> it was that bad
<Zer0Glitch> Wow. That's pretty bad
<Zer0Glitch> Thanks for your help, guys
<arrrghhh> hey qman__ - I'm in the live environment... and I installed lvm2, I can see the logical volume (says it's active at /dev/OS/OS) but when I try to mount it, mount says that special device does not exist?
<arrrghhh> is there something else I have to do to initialize LVM in a live environment so I can mount it?
<qman__> you need to mkfs on the lv
<qman__> such as mkfs.ext4 /dev/mapper/logvolname
<arrrghhh> hrm.  I did that already... I guess I can do it again, I'm not going to lose anything heh
<arrrghhh> I did that in the install tho, not the live enviro
<qman__> well, your LVs should exist in /dev/mapper by name
<qman__> are they there?
<arrrghhh> ah if I mount /dev/mapper/OS-OS it works
<arrrghhh> for some reason /dev/OS/OS didn't work.  no biggie
<arrrghhh> rsyncing now :)
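For anyone hitting the same "/dev/OS/OS doesn't exist" puzzle: the /dev/VG/LV paths are just udev-managed symlinks to the /dev/mapper nodes, so in a live environment they may be missing until the volume group is activated. A sketch of the usual sequence (VG/LV name is the one from this session):

```shell
# Activate LVM volumes inside a live environment, then mount via the
# device-mapper node, which is always present once the VG is active.
steps="apt-get install lvm2"    # live CDs may not ship the LVM tools
steps="$steps; vgscan"          # scan attached disks for volume groups
steps="$steps; vgchange -ay"    # activate them -> /dev/mapper/* appears
steps="$steps; mount /dev/mapper/OS-OS /mnt"
echo "$steps"   # drop the echo and run each step for real (as root)
```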
<arrrghhh> lol what have I done
<arrrghhh> GRUB menu is now stalling...?  I am forced to make a selection.  How do I fix that?  I don't know how I even made it stall like this...
<qman__> check /etc/default/grub to ensure that it's got a default selection and timeout configured
<qman__> then run update-grub
<arrrghhh> k
<qman__> could also be a stuck key
<qman__> or accidentally pressed key
<arrrghhh> I unplugged the kb
<qman__> ah
<qman__> might be disk order or something getting shuffled around causing it, update-grub should fix that
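The two knobs qman__ means live in /etc/default/grub. Worth knowing: a stall like this can also come from Ubuntu's recordfail logic, which deliberately holds the menu after an unclean boot. A sketch (values are examples):

```
# /etc/default/grub -- run update-grub afterwards to regenerate grub.cfg
GRUB_DEFAULT=0      # boot the first menu entry
GRUB_TIMEOUT=5      # seconds; -1 or an empty value waits forever
# After an unclean shutdown, Ubuntu's recordfail logic can hold the
# menu regardless of the above; GRUB_RECORDFAIL_TIMEOUT=5 caps that too.
```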
<arrrghhh> hrm... boot still fails on /var.  fsck.ext4 says it can't check /dev/sda3 which is my lvm aka /var...
<arrrghhh> I wonder if my fstab statement is bad
<qman__> if sda3 contains the lvm, you need to not have sda3 in fstab
<qman__> you need to have by UUID or LV
<arrrghhh> I don't have any sda's in fstab
<arrrghhh> they're all UUID... perhaps that is the issue?
<arrrghhh> the LVM is also by UUID
<qman__> no, that's how it should be
<qman__> but did you update the UUID to be the new filesystem's UUID?
<arrrghhh> Yes, I believe so...
<arrrghhh> I guess let me skip the mount and doublecheck
<arrrghhh> yup, uuid is right
<arrrghhh> oh derp.  i put the type as ext4
<arrrghhh> i'm guessing that might cause an issue
<qman__> it should be whatever the filesystem is
<qman__> which I assume is ext4
<arrrghhh> crap... it was ext4.
<arrrghhh> So I don't know what's wrong then
<qman__> well, you could try the non-UUID name to test
<arrrghhh> ok
<arrrghhh> The /dev/mapper location?
<qman__> I don't have anything to reference because I don't actually have any set up that way
<qman__> I think so, yes
<arrrghhh> qman__, so by name worked
<arrrghhh> blkid shows a UUID for the LVM device...
<arrrghhh> hrm.  there's a separate UUID for the mounted device
<arrrghhh> I think I'll try using that UUID
<qman__> yeah
<arrrghhh> Cool!  Seemed to work, thx qman__
<qman__> great
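The gotcha in that exchange is worth spelling out: pvdisplay/lvdisplay show LVM's own UUIDs, while fstab wants the UUID of the filesystem inside the LV, i.e. the one `blkid /dev/mapper/OS-OS` prints. A sketch of the resulting fstab line (the UUID is a made-up placeholder):

```
# /etc/fstab -- use the *filesystem* UUID reported by blkid for the
# mapper device, not the PV/LV UUID shown by pvdisplay or lvdisplay.
# (placeholder UUID below; substitute your own)
UUID=6c2f36cc-1b2a-4e2a-9d1b-3f5a7c9e0d11  /var  ext4  defaults  0  2
```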
<TheLordOfTime> SpamapS, alive yet?
<TheLordOfTime> (probably not, in which  case i'll catch you tomorrow)
<anuvrat> need help installing ubuntu-server on a 2 TB hdd
<anuvrat> Does the bios_boot_partition need to be 1MB, can it not be 100 MB?
<qman__> what do you mean by bios_boot_partition?
<anuvrat> qman__: read it here http://velenux.wordpress.com/2012/07/12/grub-failing-to-install-on-debianubuntu-with-gpt-partitions/
<anuvrat> qman__: I* read it here http://velenux.wordpress.com/2012/07/12/grub-failing-to-install-on-debianubuntu-with-gpt-partitions/
<qman__> well, for one, GPT is not needed for a 2TB disk, the limit for msdos MBR is 2.2TB
<anuvrat> qman__: okay ...
<qman__> I'm looking for a more official/reputable document to answer for actually using GPT
<qman__> ok, from what all I've been reading, it needs to be mentioned that bios_grub partition and /boot are two different things
<qman__> I also see people making bios_grub partitions of all different sizes, not just 1MB
<anuvrat> qman__: yes, I have read about /boot and bios_grub being different
<anuvrat> qman__: I found something about the bios boot partition not being formatted. Is that the case?
<qman__> yes, it doesn't seem to have a normal filesystem, just grub
<anuvrat> qman__: what requires special support from motherboard, efi or gpt?
<anuvrat> qman__: installation on another drive uses efi and it boots perfectly, is it possible that my system can not boot from gpt?
<qman__> gpt allows for an mbr to be created for back compatibility, but efi requires board support
<qman__> native gpt requires board support
#ubuntu-server 2013-12-30
<linux_noob> Hopefully someone here can help me, I just finished installing Ubuntu server 12.04
<linux_noob> could not configure the network using the DHCP since I need a driver for my speedtouch adapter
<linux_noob> So "sudo apt-get update" fails
<linux_noob> in order to fix the plugin issue I followed these instructions http://techblog.mastbroek.com/all-articles/speedtouch-120g-adapter-in-ubuntu-12-04/
<patdk-lap> at some point you were planning on asking a question?
<patdk-lap> heh? what plugin issue?
<linux_noob> but "sudo apt-get install linux-firmware-nonfree" also returns "unable to locate package"
<linux_noob> question you say
<linux_noob> well, I'm just generally lost here. Was hoping for some input :)
<patdk-lap> well, that is cause it needs network access to install it
<linux_noob> and since it can't find the package "linux-firmware-nonfree" (which should fix the internet issue) I guess I need to dl it from another cpu and move it to the "new linux" one
<patdk-lap> or do it the simple way, install a temporay network adaptor that is supported
<SaberX01> the package is in multiverse but no much good if you dont have inet access, need to fix that first.  Do you have any machine with inet access?
<linux_noob> The one I'm using right now :) (windows) - My first linux exp. btw
<SaberX01> linux_noob, have a look at Apt-Offline then: https://help.ubuntu.com/community/AptGet/Offline/Repository
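A sketch of the offline route SaberX01 is pointing at, for a package stuck behind a missing network driver. It assumes access to some other machine with apt and internet (the Windows box could instead fetch the .deb from packages.ubuntu.com in a browser); the package name comes from the conversation above.

```shell
# On the internet-connected machine (multiverse must be enabled in
# /etc/apt/sources.list for this package):
apt-get download linux-firmware-nonfree

# Carry the resulting .deb over on a USB stick, then on the server:
sudo dpkg -i linux-firmware-nonfree_*.deb
```

Once the firmware loads and DHCP works, `sudo apt-get update` should succeed and normal package installs take over.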
<linux_noob> And to make things a little more complicated I'm also receiving an error on startup " " - floppy
<linux_noob> fd0 - blacklist has not helped there, but I'm guessing that its not the cause of my other troubles
<linux_noob> Thanks for the info SaberX01, will surely try it
<orogor> hi
<orogor> anyone would know why ifconfig is showing 0 traffic, when there's actually traffic on that interface?
<orogor> gkrellm and speedometer wont see anything but iftop will see something
<Raboo> does all official ubuntu packages have a ppa that you can add using add-apt-repository ?
<bekks> There are no official PPA.
<Raboo> i want to add a saucy package to precies
<Raboo> precise*
<bekks> That may break your box.
<Raboo> well i feel that the risk is small for that
<Raboo> it's bup
<Raboo> a python based backup software
<Raboo> precise got a old version
<bekks> Whatever it is, mixing distributions is a bad idea.
<Raboo> ok
<bekks> Then build a new version as a deb and install it.
<Raboo> i don't know how to build deb packages.. can i get like a "build" file or something from https://launchpad.net/ubuntu/+source/bup
<Raboo> is it the xxxx.debian.tar.gz?
<bekks> Raboo: http://packaging.ubuntu.com/html/packaging-new-software.html
<mardraum> why not just upgrade to saucy?
<bekks> Or waiting until April and update to the next LTS, Tahr.
<mardraum> I don't really get the attraction, but sure.
<jamescarr> is there a friendlier endpoint or RSS feeds of AMI releases somewhere?
<jamescarr> I am currently slurping http://cloud-images.ubuntu.com/locator/ec2/releasesTable?_=1388418182404#
<jamescarr> and dealing with it
<jamescarr> the JSON is unfriendly
<Plizzo> Hello! I just created a new installation of Ubuntu server 13.10 and configured samba and avahi. Now I wanted to access it but eth0 is somehow named p2p1 and I can't access the server from other devices in the network. Any ideas?
<Plizzo> Also, the server can reach other devices, but the other devices can't reach the server
<x98server-admin> Hello what is the alternate for webmin
<RoyK> Plizzo: the new naming of devices has been around for some time
<RoyK> !webmin
<ubottu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<jamescarr> for fraks sake
<RoyK> x98server-admin: not sure - there was something around, but I'd suggest learning to administer by hand
<jamescarr> don't use webmin
<jamescarr> don't use things like webmin
<x98server-admin> why
<patdk-wk_> cause if you EVER have to do something it can't do
<patdk-wk_> your screwed
<RoyK> x98server-admin: it's not that hard to learn to edit the config files
<jamescarr> I had a job before taking my current job where the admin installed webmin on all the boxes
<jamescarr> one of the reasons I left
<jamescarr> thanks, but no thanks
<x98server-admin> yea but i am a first time server guy with some experiance
<RoyK> x98server-admin: webmin and friends are just shortcuts, and for all shortcuts, it lacks reliability
<RoyK> x98server-admin: what are you trying to do?
<RoyK> better ask for that, and we might be able to help you ;)
<x98server-admin> administrate my server
<RoyK> what services?
<RoyK> adding/removing users? setting up samba? apache?
<x98server-admin> ftp samba media storage apache
<RoyK> well, start with what you're trying to do now
<x98server-admin> Plus other
<RoyK> x98server-admin: just - step by step - learn it
<x98server-admin> Nothing at the moment but many peole told me to remove webmin so i wqs asking if there is a alt of it
<jamescarr> x98server-admin: they told you to remove it for good reason :)
<RoyK> x98server-admin: just don't use webmin - learn the basics first
<jamescarr> x98server-admin: it tells me that you have competent people there. Ask them to teach you what you don't know?
<x98server-admin> I know the basic commands
<jamescarr> I've always been more than happy to mentor our newbies :)
<x98server-admin> So
<RoyK> x98server-admin: just start with the service you want to setup first
<x98server-admin> Can i admin my win pc through my server
<RoyK> probably not
<x98server-admin> Wait one sec let me  get on my server
<RoyK> that is, if you mean windows policies, that probably won't work from a linux box
<x98server-admin> Im using my tablet wait a sec plz
<jamescarr> RoyK: maybe he wants to use his puppet master to provision his windows servers? If that's the case he can
<jamescarr> I doubt that is what he wants to do though! :)
<RoyK> jamescarr: hehe - he seems like a newbie, so pushing him (or her) headfirst into puppet might not be a good start ;)
<x98server-admin> Head first seems good ;)
<patdk-wk_> you should always sample first, before going in head first
<RoyK> x98server-admin: really - ask what you're trying to do, specific to the service, and you may get help
<x98server-admin> Kk
<x98server-admin> ok i am in m server
<x98server-admin> how do i remove webmin
<RoyK> apt-get purge webmin # ?
<patdk-wk_> how was it installed?
<x98server-admin> by apt-get
<RoyK> seems it's not there on my 12.04 box
<patdk-wk_> ya,  Iwas pretty sure it was removed years ago
<RoyK> perhaps there's a ppa out there :P
<RoyK> x98server-admin: did you use a ppa?
<x98server-admin> maybe but don't think so
<patdk-wk_> dpkg -l | grep webmin
<x98server-admin> but i have done sudo apt-get purge webmin
<x98server-admin> it removed step 1 done :)
<RoyK> x98server-admin: did you do much admin stuff with webmin before you nuked it?
<x98server-admin> i did user creation proftpd system health and other
<RoyK> hopefully it didn't do much harm
<x98server-admin> lol
<RoyK> now - with webmin gone - what do you try to achive?
<x98server-admin> so what can i do to see m system usage without using m server
<patdk-wk_> define system usage?
<patdk-wk_> cpu? memory? disk? ....
<RoyK> patdk-wk_++
<x98server-admin> ram useage cpu uptime tings like that
<patdk-wk_> cacti? munin? ....
<x98server-admin> what ?
<RoyK> x98server-admin: ram: free, cpu: top/sar/something, uptime: uptime
<x98server-admin> munin ??????
<patdk-wk_> well, basics, are, uptime, vmstat, iostat, df
<RoyK> x98server-admin: learn the basic tools first, move to webstuff later
<x98server-admin> yea useful info like that
<patdk-wk_> if you want pretty graphs, cacti or munin
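The command-line basics RoyK and patdk-wk_ list, spelled out. All of these are read-only and safe to run on any install; only `iostat` needs an extra package.

```shell
uptime        # time since boot plus 1/5/15-minute load averages
free -m       # RAM and swap usage, in MiB
df -h         # disk usage per mounted filesystem
vmstat 1 3    # CPU, memory and IO counters: 3 samples, 1 second apart
top -b -n 1 | head -n 15   # one non-interactive snapshot of the busiest processes
# iostat (per-device IO stats) comes from the sysstat package:
#   sudo apt-get install sysstat && iostat
```

Web graphs (cacti, munin) sit on top of the same counters; these tools are the place to start.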
<x98server-admin> but how would i view them
<patdk-wk_> view what?
<x98server-admin> these graphs
<x98server-admin> without going to m server
<patdk-wk_> ie? firefox? chrome? opera?
<x98server-admin> damm Y key stuck
<RoyK> x98server-admin: munin and cacti etc are tools that create webgraphs
<RoyK> x98server-admin: just don't worry about that now
<x98server-admin> oh kk
<RoyK> x98server-admin: setup the server first and do the monitoring later
<x98server-admin> kk server is installed
<RoyK> all services working?
<patdk-wk_> setting up the minitoring collection server is always a pain
<patdk-wk_> but after that is up, setting up more servers is easy to monitor
<x98server-admin> how do i ckeck my services
<x98server-admin> taskman
<RoyK> x98server-admin: ps?
<patdk-wk_> depends on the service
<x98server-admin> ps shows bash and ps
<RoyK> x98server-admin: ps fax
<patdk-wk_> ps ax
<x98server-admin> kk much better :0
<x98server-admin> so ftp
<RoyK> x98server-admin: firstly, for what do you need ftp?
<x98server-admin> also what does tty mean
<RoyK> it's old and insecure
<patdk-wk_> ftp should die
<x98server-admin> ok :)
<RoyK> ftp over ssh works well
<patdk-wk_> it's extreemly insecure, and lots of viruses created to look for ftp passwords and steal them
<x98server-admin> what should i use for file sharing
<patdk-wk_> depends what you mean by filesharing
<RoyK> anonymous ftp is ok, but using ftp with user/pass is bad, very bad
<RoyK> use ftp over ssh with filezilla or similar clients
<patdk-wk_> normally sftp replaces ftp for upload/download security
<x98server-admin> media (videos, pics, music) Documents programs
<RoyK> sftp as in ftp over ssh
<x98server-admin> + other stuff
<RoyK> for normal use, just create a user (useradd -m username) and create a password (passwd username)
<RoyK> then sftp works out of the box
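RoyK's recipe, spelled out. "alice" is a placeholder username, and it assumes openssh-server is already installed (it is on a default server install with the OpenSSH task selected).

```shell
sudo useradd -m alice     # -m also creates /home/alice
sudo passwd alice         # set the password interactively
# That is the whole server-side setup: sftp rides on sshd, so from a client:
#   sftp alice@server.example.com
# (FileZilla: protocol SFTP, port 22, same credentials.)
```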
<patdk-wk_> royk, please though, ftp over ssh != sftp
<RoyK> patdk-wk_: ftp over ssh is very easy to setup, though ;)
<patdk-wk_> sftp is not ftp compatable or ftp like, at all
<x98server-admin> my house has many pc's
<patdk-wk_> ftp over ssh would just be evil
<RoyK> patdk-wk_: sftp == ftp over ssh
<patdk-wk_> no
<RoyK> no?
<patdk-wk_> sftp is not ftp
<Lizards|Work> speaking of, hypothetically (okay actually), i want to set up public key authentication for SSH/FTP
<Lizards|Work> sftp is authentication over ssh, then ftp after
<patdk-wk_> no
<RoyK> well, sftp runs over ssh
<x98server-admin> i need my laptop (server) to get the files and put them on my mac (bigger stoarge)
<RoyK> patdk-wk_: what sort of sftp are you talking about? ftp over ssl?
<patdk-wk_> sftp using ssh is not ftp
<patdk-wk_> it is not ftps
<Lizards|Work> is it a file transfer protocol?
<RoyK> well, no, but it uses ssh as transport
<RoyK> patdk-wk_: perhaps I've misunderstood something there :P
<x98server-admin> kk
<x98server-admin> i need my laptop (server) to get the files and put them on my mac (bigger stoarge)
<patdk-wk_> http://tools.ietf.org/html/draft-ietf-secsh-filexfer-13
<Lizards|Work> (it did get a little pedantic)
<patdk-wk_> is nothing like ftp rfc
<RoyK> patdk-wk_: ok
<Lizards|Work> an rfc is a request for comment
<Lizards|Work> not a gospel
<Lizards|Work> ;)
<RoyK> Lizards|Work: all rfc's are ;)
<patdk-wk_> that sftp isn't even an rfc yet
<Lizards|Work> good old 2324
<RoyK> :)
<x98server-admin> ok that matter aside :)
<patdk-wk_> hmm, my screens went blank, how strange
<RoyK> x98server-admin: just create a user and try sftp or filezilla
<Lizards|Work> or scp
<x98server-admin> how do i use sftp/scp
<patdk-wk_> scp requires shell access
<Lizards|Work> i missed the definition of the issue
<RoyK> patdk-wk_: works with rssh, though
<patdk-wk_> doesn't work if you don't give a user shell access :)
<RoyK> x98server-admin: if you don't want your users to have shell access, setup rssh, google it ;)
<RoyK> patdk-wk_: rssh isn't really like shell access
<patdk-wk_> oh, I had issues with that
<x98server-admin> ftp should be fine my laptop has 2 connections 1 to the internet and 1 though a local connection with no internet
<Lizards|Work> isn't rssh what kevin mitnick used to hack... uh... i think it was IBM
<ziyourenxiang> that was rsh
<Lizards|Work> right
<Lizards|Work> rlogin
<RoyK> rsh != rssh ;)
<patdk-wk_> oh, rssh doesn't support chroot, that is why
<Lizards|Work> i know
<RoyK> patdk-wk_: it does
<Lizards|Work> i was misrememberhing
<ziyourenxiang> there is also scponly
<RoyK> patdk-wk_: we're using it extensively at work for student access, all chrooted
<ziyourenxiang> not kevin mitnick, i think it was the morris worm you are thinking of.
<patdk-wk_> royk, ya, I don't like coping all that stuff into the user folders
<Lizards|Work> he social engineered somebody into writing an rlogin file for him
<patdk-wk_> restriction to sftp solves that though
<ziyourenxiang> maybe it was kevin mitnick too. :-) that tsutomo takedown thing. session hijacking.
<ziyourenxiang> this was like aeons ago.
<RoyK> 1988
<Lizards|Work> right
<Lizards|Work> before the dawn of time
<RoyK> hehe
<x98server-admin> i dont need rssh or scp or sftp
<RoyK> x98server-admin: it's easier that way
<RoyK> x98server-admin: if you want somewhat secure logins
<x98server-admin> but its going through a local network with no internet
<RoyK> then just install proftpd or vsftpd or something
<RoyK> vsftpd is good imho
<ziyourenxiang> home wifi? so your neighbours, the TLA and the FLA can hear.
<Lizards|Work> ++vsftpd
<RoyK> but then - setting up ssh accounts aren't very hard
<RoyK> x98server-admin: just try setting up accounts and tell your friends to use filezilla
<RoyK> x98server-admin: it's easy indeed
<jamescarr> sftp is easy to setup
<x98server-admin> kk what to do next ?? thinking
<RoyK> x98server-admin: or even - if it's on a closed network with windows machine, use samba
<jamescarr> heh, I think I'd have a harder time installing ftp than I would sftp
<Lizards|Work> ._.
<x98server-admin> ftp was eas to install
<Lizards|Work> sudo apt-get install vsftp
<patdk-wk_> heh, making ftp work is a pain
<patdk-wk_> defining ports for it
<RoyK> Lizards|Work: no need for an apt-get install for sftp to work ;)
<patdk-wk_> making sure it works through nat and crap
<x98server-admin> do i need to do  sudo apt-get install vsftp
<RoyK> Lizards|Work: it just works once you've installed ssh
<Lizards|Work> i wasn't sure if vsftp was default installed ;)
<RoyK> it's not
<x98server-admin> cant find it
<RoyK> find what?
<x98server-admin> E: Unable to locate package vsftp
<RoyK> vsftpd
<Lizards|Work> E:
<RoyK> but then - no need for that
<RoyK> just use sftp
<RoyK> it works out of the box
<x98server-admin> kk take me through the setup :)
<RoyK>  
<RoyK> done
<x98server-admin> kk good
<x98server-admin>  how do i config it
<RoyK> then try with filezilla
<x98server-admin> .
<x98server-admin> ..
<RoyK> default config is to allow all users access to whatever they can access based on file permissions
<patdk-wk_> the only config you would need to do, is limit users to sftp only, or chroot if you wanted to
<x98server-admin> kk thats ok so that is setup
<RoyK> x98server-admin: try with filezilla
<x98server-admin> i use that normaly
<Lizards|Work> chroot jail all da noobs!
<RoyK> Lizards|Work: let him see it works first ;)
<x98server-admin> something is stuck my "Y" key
<patdk-wk_> man sshd_config
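What `man sshd_config` leads to, for patdk-wk_'s "limit users to sftp only, or chroot": a Match block at the end of the config. A sketch; the group name "sftponly" is an example, and the chroot target plus every directory above it must be root-owned and not group/world-writable, so users typically get a writable subdirectory inside it.

```
# /etc/ssh/sshd_config fragment
Match Group sftponly
    ForceCommand internal-sftp
    ChrootDirectory /home/%u
    AllowTcpForwarding no
    X11Forwarding no
```

Then add users to the group (`sudo adduser alice sftponly`) and reload ssh (`sudo service ssh reload`).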
<RoyK> x98server-admin: hard to fix that over irc ;)
<x98server-admin> lol
<x98server-admin> i see i just need to remove
<RoyK> remove what?
<Lizards|Work> the y key
<x98server-admin> ea i broke it
<x98server-admin> crap
 * RoyK hands x98server-admin a hammer
 * Lizards|Work gets the duck tape and wd-40
<w0rmie> how can i erase a LVM crypted disk to perform a new ubuntu-server installation?
<x98server-admin> wait a sec
<x98server-admin> with a y key
<x98server-admin> trough the install ?
<patdk-wk_> w0rmie, erase?
<RoyK> w0rmie: should be quite easy to remove the old thing first ;)
<patdk-wk_> dunno why one would even bother
<patdk-wk_> just tell the new install to write a new partition table
<w0rmie> you mean disabling the encrypted mode first?
<RoyK> yeo
<RoyK> yep
<w0rmie> what about using gparted format?
<x98server-admin> postimg.org/image/8n7f0hqgi/e4f65084 (m broken ke )
<Lizards|Work> should be able to pop up the on screen keyboard
<Lizards|Work> annoying as all get-out
<Lizards|Work> but you can at least use the letter y
<x98server-admin> postimg.org/image/8n7f0hqgie4f65084
<x98server-admin> sorry
<Lizards|Work> =p
<x98server-admin> postimg.org/image/8n7f0hqgj/e4f65084
<x98server-admin> have a look at my broken key
<Lizards|Work> my key, look at it
<x98server-admin> i am using the rubber bit still there to use the "Y" key
<x98server-admin> ok so what else should i do/setup on my server
<x98server-admin> i have given up i will fix the y key later
<x98server-admin> so apache how do i set that up for lan
<Lizards|Work> http://localhost
<x98server-admin> kk yea i remember i have done that through webmin
<x98server-admin> so how can i store + upload/download files from the server else where (eg school)
<Lizards|Work> dyndns?
<x98server-admin> what is that
<x98server-admin> static dns
<x98server-admin> ?
<Lizards|Work> you need to get your external IP, if you don't have a static IP, idk
<mbnoimi> I'm looking for very small Ubuntu server edition for using it under free diskapce less than 900MB with RAM256... Does ubuntu-mini-remix is the one?
<x98server-admin> so like my public ip
<x98server-admin> mboini ckeck the on the site ?
<mbnoimi> x98server-admin: what?
<x98server-admin> mboimi: check on google what spec it is
<x98server-admin> spec it needs
<mbnoimi> x98server-admin: I googled before post my question here!!!
<x98server-admin> what about gentoo
<mbnoimi> x98server-admin: I'm accustomed to Ubuntu based distros
<Lizards|Work> debian
<x98server-admin> ok but gentoo like ubuntu-mini-remix it a distro u can build on
<Lizards|Work> surely you mean debian based
<ogra_> mbnoimi, a minimal server install should work fine on these specs
<mbnoimi> Lizards|Work: yes, debian
<Lizards|Work> probably no GUI ;)
<mbnoimi> ogra_: What do u mean?
<mbnoimi> Lizards|Work: certainly I don't need any GUI
<x98server-admin> mbnoini: y should really think on getting a bigger hdd or sss
<x98server-admin> ssd
<ogra_> mbnoimi, just a normal ubuntu server install should work under these specs as long as you dont select a ton of additional stuff in the tasksel setp during installation
<ogra_> *step
<x98server-admin> what are u planning to do whith it
<mbnoimi> ogra_: but the iso of server edition already about 800MB!!!
<ogra_> really depends what additional data or services you want to put on it
<x98server-admin> it has extra stuff thats why
<ogra_> mbnoimi, the iso brings all packages you can possibly install
<Lizards|Work> server edition ISO has all the GUI components too
<ogra_> the installed system should be under 400M
<x98server-admin> mbnoimi: what u planing to do with it
<ogra_> Lizards|Work, it doesnt
<Lizards|Work> thought it did
<Lizards|Work> that might be the dvd i'm thinking of
<ogra_> yeah
<mbnoimi> x98server-admin: I want to use it for Zentyal + OpenVPN only
<x98server-admin> kk
<mbnoimi> ogra_: I can't use the standard server edition because it's already too big to install
<ogra_> mbnoimi, how do you mean ?
<ogra_> mbnoimi, it wont take more than 500M
<ogra_> (rather less than 400 i think)
<Lizards|Work> are you trying to install to a VM or something?
<mbnoimi> ogra_: are you sure that server edition will not take more than 400M?
<ogra_> mbnoimi, dont judge it by the iso size ;) the iso brings a copy of the archive that you wont use at all
<x98server-admin> RoyK: would  Zentya be a alternate for webmin
<x98server-admin>  Zentyal
<mbnoimi> Lizards|Work: no, actually I install it under a PC built my me
<x98server-admin> bad pc spce
<patdk-wk_> !ebox
<Lizards|Work> i was just trying to figure out logic for the tiny install
<ubottu> zentyal is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/Zentyal (Project formerly known as eBox - including in Lucid/10.04).
<x98server-admin> spec
<ogra_> mbnoimi, http://archive.ubuntu.com/ubuntu/dists/saucy/main/installer-amd64/current/images/netboot/ ... the "mini.iso" has the actual installer components without any packages
<Lizards|Work> like if you had to load the ISO to the drive before installing
<x98server-admin> mbnoimi: why such a low spec pc
<ogra_> mbnoimi, thats about the same that is on the server iso, just without the archive that is shipped on the server iso
<mbnoimi> Lizards|Work, x98server-admin: because I install Ubuntu under memory not HD hehehe
<patdk-wk_> if, you use a 2gig disk, the minimal-install will be <500megs disk used
<Lizards|Work> ramdisk is cool and all, but i was just trying to figure out logic
<x98server-admin> yea
<mbnoimi> ogra_: does mini.iso available for Ubutnu 12.04?
<RoyK> x98server-admin: vim is my alterantive to webmin ;)
<ogra_> mbnoimi, just replace "saucy" with "precise" in the url
<patdk-wk_> hire sysadmin, is alternative to webmin
<mbnoimi> ogra_: thanks I'll try to use mini.iso
<ogra_> patdk-wk_, hmm, do you think thats opensource ? :)
<x98server-admin> Royk: don't u mean nano :)
<patdk-wk_> open source sysadmin? sure
<RoyK> x98server-admin: no, I do not ;)
<Lizards|Work> no, it's spelled V-I-M
<Lizards|Work> ;)
<patdk-wk_> damned women keep pushing them out every year
<x98server-admin> Royk: no pretty colours
<Lizards|Work> vim has pretty colors ;)
<RoyK> x98server-admin: lots of pretty colours in vim ;)
<RoyK> nano sucks big time ;)
<x98server-admin> RoyK: nano is the best :0
<Lizards|Work> nano isn't for me
<RoyK> well, until you've learned vim...
<x98server-admin> no colours for y then
<Lizards|Work> you can turn the colors off. :syntax off
<RoyK> x98server-admin: lots of colours in vim, beleive me
<x98server-admin> i know
<x98server-admin> i configed my vnc in vim
<x98server-admin> im going to try zentyal
<x98server-admin> for my self
<x98server-admin> ahh that 712mb of ram
<Lizards|Work> <.<
<Lizards|Work> i can't do that anymore
<Lizards|Work> too many IDEs open to live with less than 16GB
<x98server-admin> yea go ide hdds
<x98server-admin> and that 1.4 ghz intel celeron m with 1mb of l2 cache  kicking in
<x98server-admin> error http://paste.ubuntu.com/6663879/
<x98server-admin> see ya
<RoyK> heh - server based on a celewrong with 1GB ram
<RoyK> well, tough luck
 * ogra_ runs several machines with a lot less in his house 
<patdk-wk_> hmm
<ogra_> http://people.canonical.com/~ogra/watchthesun/ ... runs on a 600MHz celeron/256M off a 2G sub key
<Lizards|Work> rasp pi anyone?
<patdk-wk_> I don't have a single machine lower than a dual core 2.8ghz with 4gigs ram anymore
<patdk-wk_> no r-pi's here
<Lizards|Work> so many rasp pi
<ogra_> take a beaglebone black instead, that can at least run ubuntu :P
<RoyK> rpi doesn't work too well with ubuntu
<Lizards|Work> debian though
<Lizards|Work> that's enough for me ;)
<ogra_> no, raspbian
<RoyK> same thing
<Lizards|Work> tomato potato
<ogra_> not the same thing
<Lizards|Work> it's armv6 debian
<RoyK> i know
<ogra_> its a complete recompile with different compiler options
<Lizards|Work> it has to be a recompile, it's armv6
<ogra_> which might or might not cause unwanted issues
<RoyK> guess my next server will be on debian
<RoyK> ubuntu has too many issues
<ogra_> does it ?
 * ogra_ never had any
<RoyK> lots of stuff not fixed
<Lizards|Work> heh
<Lizards|Work> our production servers are all CentOS
<Lizards|Work> for... reasons.
<RoyK> same here - but debian is dead stable
<RoyK> ubuntu is a bit cutting edge
<Lizards|Work> centOS is dead stable ;)
<RoyK> indeed
<Lizards|Work> like no updates
<Lizards|Work> dead
<Lizards|Work> but stable
<RoyK> not
<Lizards|Work> backports of security patches
<Lizards|Work> not updates lol
<RoyK> RHEL and centos focus on stability, not updates
<Lizards|Work> fedora though
 * ogra_ doesnt really get what issues you guys have with ubuntu server ... especially in the light that many high load sites use it 
<Lizards|Work> i don't have any issue
<ogra_> (not to mention that it by far smarts out any fedora based installs in the cloud)
<Lizards|Work> uh
<ogra_> (which admittedly has not much to do with real world HW servers)
<Lizards|Work> it's not really apples to apples
<ogra_> well, server Sw is server SW :)
<ogra_> the more users it has the more feedback devs get (bugs ... patches)
<MrAndy> How is it possible for me to link 9 extra IP addresses beside the main one?
<Lizards|Work> NAT
<MrAndy> And how does that work with a VPS?
<Lizards|Work> yes.
<MrAndy> I mean, how does it work?
<Lizards|Work> right. i'd have to do some googling.
<Lizards|Work> it depends on what you need to do with the extra IP addresses
<Lizards|Work> are they for failover/availability/redundancy/BGP?
<MrAndy> Just normal IP addresses that has a use for a znc server.
<MrAndy> Then made into subdomains with dns/rdns.
<MrAndy> Lizards|Work, it's an easy question. The 9 extra IP addresses I have I want to work as the 1'st.
<Lizards|Work> are these internal or external IP addresses?
<MrAndy> external
<Lizards|Work> i would look into linux NAT, i've only ever done cisco NAT
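For extra addresses routed directly to the box (as on most VPSes), NAT isn't actually needed; ifupdown can bring up the additional IPs as interface aliases. A sketch; all addresses below are RFC 5737 documentation placeholders, and the alias stanza repeats (eth0:1, eth0:2, ...) once per extra IP.

```
# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1

auto eth0:1
iface eth0:1 inet static
    address 203.0.113.11
    netmask 255.255.255.0
```

A service like ZNC can then be bound to each address individually, and rDNS for the subdomains is set at the provider.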
<MrAndy> What does the 'Network' parameter mean in /etc/network/interfaces?
<mattintech> ce /etc/network/interfaces
<mattintech> oops.
<w0rmie> is there a way to create a eth0 interface instead of p2p1?
<ikonia> w0rmie: it's just a udev rule
<ikonia> w0rmie: why does it matter the device name ?
<w0rmie> i would work with isc-dhcp-server, is it ok if i set p2p1 as interface on dhcp server settings?
<ikonia> yes
<ikonia> it's just a device name
<w0rmie> even the p2p1 entry doesn't show in the 70-persistent-net-rules?
<ikonia> why does that matter ?
<w0rmie> i am just trying to master my parameters thats all :)
<ikonia> it has 1 parameters, listen interface
<ikonia> why is this being made a big deal out of, point it at the interface you want it to listen on
<ikonia> what does udev rules have to do with it
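The udev rule ikonia mentions, for anyone who does want the old name back. A sketch in the format of the 70-persistent-net.rules file discussed above; the MAC address is a placeholder, to be copied from `ip link show p2p1`, and the rename takes effect on the next boot.

```
# /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"
```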
<knoxy> Hi all. I'm trying to boot my server after kernel upgrade and I get message kernel panic "Cannot open root device unknown-block(0,0)". Is possible to restore this installation?
<markthomas> knoxy: You can use the grub menu to select the previous kernel
<markthomas> knoxy: Do you use a /boot partition?
<knoxy> markthomas, is a default installation (datacenter installation) of my dedicated server.
<knoxy> markthomas, how can I do to select the previous kernel?
<markthomas> knoxy: http://ubuntuforums.org/showthread.php?t=1718918
<stetho> Wonder if anyone can help me. For the past couple of days I've been trying to create my own local Ubuntu mirror. I've tried apt-mirror and rsync methods and various different source mirrors around the world. Every time the process gets to pool/main/g/gdb/gdb_7.4-2012.04.orig.tar.bz2 and stops. Just hangs while copying this file and I cannot figure out why.
<markthomas> stetho: For troubleshooting, have you tried to --exclude= that file?
<markthomas> i.e. via rsync
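markthomas's suggestion, shown on a scratch tree so the effect of `--exclude` is visible without touching a real mirror (the paths are made up to mimic the archive layout):

```shell
# Build a tiny fake pool with the problem file plus one other file.
mkdir -p /tmp/mirrordemo/src/pool/main/g/gdb
touch /tmp/mirrordemo/src/pool/main/g/gdb/gdb_7.4-2012.04.orig.tar.bz2
touch /tmp/mirrordemo/src/pool/main/g/gdb/gdb_7.4-2012.04-1.dsc

# --exclude matches the basename, so only the .dsc is copied.
rsync -a --exclude='gdb_7.4-2012.04.orig.tar.bz2' \
      /tmp/mirrordemo/src/ /tmp/mirrordemo/dst/

ls /tmp/mirrordemo/dst/pool/main/g/gdb/
```

Against the real archive, the same flag goes onto the `rsync://...` command already being used for the mirror run.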
<knoxy> markthomas, great, I can boot the machine using previous kernel... but I remove the new kernel and reboot the machine...
<knoxy> markthomas, I dont set update-grub.... omg
<markthomas> knoxy: you will want to set the kernel that is booted, if the latest is not the default.  The next challenge, though, is to figure out why the most recent kernel wouldn't boot.
<stetho> markthomas: No - I'll try it now. Takes a few minutes....
<markthomas> stetho: I don't doubt it.
<knoxy> markthomas, is possible to change the 'menuentry' in /boot/grub/grub.cfg and remove the entries of my crashed kernel?
<bekks> knoxy: Just uninstall the crashing kernel
<knoxy> this action can stop my server in boot? or is the solution to remove this kernel of the boot? because I remove this crashed kernel using apt-get, but it is listed in menuentry
<knoxy> bekks, the crashed kernel is listed in grub.cfg and in boot
<markthomas> knoxy: I guess it depends on whether you are planning to trace down the cause in short order, as I suggested, and how often you expect the server to reboot.
<bekks> knoxy: Pastebin ls -lha /boot/ then please.
<markthomas> knoxy: If the answers are "yes" and "not often", then just set the default boot entry (I believe it's in /etc/default/grub) and start working on the cause of the boot problem.
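markthomas's pointer, sketched: list the titles grub knows about, pin one in /etc/default/grub, and regenerate grub.cfg. The menu title below is a placeholder; an exact title from the grep output must be used.

```shell
# List the available menu entry titles:
grep ^menuentry /boot/grub/grub.cfg | cut -d"'" -f2

# Pin the known-good entry (placeholder title) and rebuild grub.cfg:
sudo sed -i "s/^GRUB_DEFAULT=.*/GRUB_DEFAULT='Ubuntu, with Linux 3.2.0-58-generic'/" /etc/default/grub
sudo update-grub
```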
<knoxy> http://paste.ubuntu.com/6665699/
<bekks> then use apt-get purge to get rid of it.
<markthomas> knoxy: you probably don't need to purge.
<knoxy> root@srv01:/boot# dpkg -l | grep 3.8.0
<knoxy> root@srv01:/boot#
<markthomas> knoxy: you're missing the relevant initrd file.
<knoxy> no packages
<TJ-> knoxy: As I suspected... you are missing the initrd.img for 3.8
<TJ-> knoxy: You're the 2nd person in 2 days to have that issue after a 3.8 kernel update
<bekks> knoxy: Pastebin df -h please
<knoxy> http://paste.ubuntu.com/6665709/
<stetho> markthomas: It's sailed straight past. On to gstreamer now. So there's something wrong with gdb_7.4-2012.04.orig.tar.bz2 in every mirror.
<markthomas> knoxy: please try update-initramfs -c -k 3.8.0-33
<stetho> Obviously, being mirrors, not really a surprise. But a bit odd that gdb_7.4-2012.04.orig.tar.bz2 would hang rsync and apt-mirror?
<markthomas> stetho: Not sure.  But it does narrow things down a bit.
<knoxy> markthomas, http://paste.ubuntu.com/6665721/
<markthomas> knoxy: Okay, try -k all
<markthomas> knoxy: and make sure you have the -c
<knoxy> http://paste.ubuntu.com/6665725/
<TJ-> knoxy: markthomas That should be "update-initramfs -ck 3.8.0-33-generic"
<markthomas> TJ-: I wouldn't expect that to matter, but okay.
<knoxy> http://paste.ubuntu.com/6665736/
#ubuntu-server 2013-12-31
<TJ-> knoxy: pastebin the output of "env"
<knoxy> LD_PRELOAD=/lib/lib__mdma.so.1
<knoxy> I just need to remove the entries from grub... I can remove the files *3.8.0* from /boot and run update-grub ?
<markthomas> stetho: I just did this successfully: rsync -v rsync://us.archive.ubuntu.com/ubuntu/pool/main/g/gdb/gdb_7.4-2012.04.orig.tar.bz2 .
<TJ-> knoxy: Have you recently edited a bashrc script, either /etc/profile or /root/.bashrc or similar? Because that error is usually caused by having a bash variable definition that has spaces either side of the "="
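A minimal reproduction of the failure mode TJ- describes: with spaces around `=`, bash stops treating the line as an assignment and instead looks for a command named after the variable.

```shell
FOO=bar && echo "$FOO"      # correct assignment: prints bar
FOO = bar 2>/dev/null || echo "spaces around = : not an assignment"
# the second line fails with "FOO: command not found", so the || branch runs
```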
<knoxy> # some more ls aliases
<knoxy> alias ll='ls -alF'
<knoxy> alias la='ls -A'
<knoxy> alias l='ls -CF'
<knoxy> ?
<knoxy> no
<TJ-> knoxy: show us a pastebin of "env" please
<knoxy> http://paste.ubuntu.com/6665794/
<markthomas> knoxy: looking...
<TJ-> knoxy: What is this "lib__mdma.so.1" ? Is it a custom library you've built?
<knoxy> TJ-, no, this is a "unknown" library
<knoxy> if I move this library to another folder, for example, all commands stops (ls, pwd, ps, and more)
<knoxy> when I move this file to another folder, I can restore the machine using SCP to move from target folder to previous folder...
<knoxy> because all commands (includes mv) stop to work
<knoxy> I dont know what is lib__mdma.so.1
<knoxy> when I run "ls" in /lib... I can't see this file
<knoxy> when I run ls -l, so I see this file
<TJ-> knoxy: I can't find any references to it on the 'net ... I suspect it could be part of a rootkit
<knoxy> du -sh /lib/lib__mdma.so.1 - access denied
<knoxy> TJ- I talked several times about this file here
<knoxy> TJ- no one could tell me what is
<knoxy> TJ- I already suspected it was part of a rootkit
<knoxy> TJ- to disable this file, I can disable from "env" variables?
<TJ-> knoxy: Seriously, you need to rebuild that machine and secure it! make sure no other systems are re-infecting it, ensure your local PC isn't part of the problem
<TJ-> knoxy: I would guess if it is a root-kit that other files have been compromised. You need to do a clean rebuild.
<knoxy> TJ- I checked all process, users, folders, rkhunter runs, and I cant see the rootkit and other similar problem
<knoxy> TJ- Yes, this server will be reinstalled, but all files (my php application) need to be migrated
<TJ-> Can you do "objdump -t /lib/lib__mdma.so.1" and show the output to us via a pastebin?
<knoxy> TJ- yes, but the datacenter is migrating this server to another rack, please wait
<knoxy> I'm waiting for their reply
<knoxy> TJ- on my Zimbra ZCS server (ubuntu) this week, I found 3 files in /var/tmp
<knoxy> TJ- because a 0day remote exploit for Zimbra has been published
<knoxy> I removed the files and blocked the 7071 port for Zimbra Admin
<knoxy> the files:
<knoxy> root@srv001:~/exploits# ls
<knoxy> meep.pl  minerd32  minerd64
<knoxy> anyone knows these files?
<knoxy> ?
<knoxy> minerd64 and minerd32 are used for bitcoin?!
<bekks> knoxy: Someone is using your server for bitcoin mining.
<knoxy> bekks, I found these files on my Zimbra ZCS server
<knoxy> bekks, the url for the 0day exploit is http://www.exploit-db.com/exploits/30085/
<knoxy> bekks, I don't know if the server is being used for other things too
<bekks> knoxy: Yeah. Someone is exploiting your server. Backup important data, reinstall your server from scratch.
<knoxy> bekks, I have other servers and will check them for other exploits and dangerous files
<bekks> Or get the datacenter guys to investigate further, for legal action.
<knoxy> bekks, my contract is restricted to just using the DC's infrastructure
<knoxy> bekks, the datacenter guys don't have access to my servers
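[A sketch of the integrity checks discussed above, using /bin/ls as a stand-in target since the suspicious library only exists on knoxy's machine. On a suspected-compromised host the tools themselves may be trojaned, so ideally run such checks from a rescue/live environment against the mounted disk.]

```shell
# Compare the live checksum of a binary with the one dpkg recorded at install
md5sum /bin/ls
grep 'bin/ls$' /var/lib/dpkg/info/coreutils.md5sums 2>/dev/null \
    || echo "no dpkg md5sums record found on this system"

# For an unknown library like /lib/lib__mdma.so.1, inspect it without
# loading it, e.g.:
#   objdump -t /lib/lib__mdma.so.1
#   strings /lib/lib__mdma.so.1 | less
```

If the two checksums differ, the binary has been modified since installation, which is consistent with a rootkit replacing core utilities.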
<aarcane> Does anybody know how to force DKMS to rebuild a module it says is already built?
<Neytiri> this is sort of a odd question, but how woud i use box A as ddos protection for box B.  both boxes are on the public internet but on different subnets and different datacenters
<stetho> Does anyone know if ubuntu servers rate limit you or block you in other ways?
<balachmar> Hi, I am looking for some documentation how to set up a multisite config for drupal using the ubuntu packages
<markthomas> stetho: I don't believe there is any kind of rate limit.  Large Ubuntu clouds mirror the mirrors just as you are attempting to do.
<aarcane> on ubuntu 12.04.3 with kernel 3.5.0-43-generic and openvswitch (A setup I have working on two other systems at present)  I get the following error when starting a guest in libvirt:
<aarcane> Unable to add bridge br0 port vnet0: No such process
<aarcane> Google provides no helpful results, and I'm unable to glean anything from the logs.
<TJ-> aarcane: See https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Virtualization_Host_Configuration_and_Guest_Installation_Guide/#App_Bridge_Device
<aarcane> TJ-, that's a similar error, but I've found that possible resolution before, and it's not a fruitful path to pursue.
<TJ-> aarcane: Permissions of the user are sufficient? "No such process" error can show when elevated permissions aren't available
<boldfield> I'm having some issues with dhcpd I was hoping someone could help me with.  I've got a host defined in dhcpd.conf setting a fixed-address, but the client is never assigned the correct IP.  I've triple checked the MAC address and tried clearing the client's leases and removing the relevant entries in server's leases
<boldfield> anyone have any ideas of other things I might try to troubleshoot
<bekks> The client still has a lease file.
<boldfield> I've tried killing the client and clearing the lease files
<boldfield> the host just gets a lease on another dynamically assigned IP, not the static one the server is configured to give
<boldfield> or... supposedly configured to give, though I can't spot the error in the config, and I've spent a while looking
<TJ-> boldfield: maybe you should pastebin the config?
<boldfield> TJ-: I've got to check that it's kosher that I do, if so I will
<aarcane> TJ-, sudo.
<markthomas> boldfield: I assume you checked the obvious, such as restarting dhcpd, making sure there were no other DHCP servers on the subnet, etc.
<boldfield> markthomas: you are correct
<markthomas> boldfield: Just checking.  Then it's time to look at the config.
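[For reference, a minimal dhcpd.conf sketch of the setup boldfield describes; addresses and the MAC are made-up examples. A reservation placed inside the dynamic pool is a frequent cause of the exact symptom above, the client being handed a dynamic lease instead of its fixed address.]

```text
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;     # dynamic pool
  option routers 192.168.1.1;
}

# The fixed address should lie OUTSIDE the dynamic range above,
# and the MAC must match the client's interface exactly.
host workstation1 {
  hardware ethernet 00:11:22:33:44:55;
  fixed-address 192.168.1.50;
}
```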
<sheptard> initramfs seems to be trying to make drivers for a kernel I removed using apt
<sheptard> any suggestions?
<JanC> sheptard: initramfs doesn't "make" drivers
#ubuntu-server 2014-01-01
<markthomas> sheptard: can you pastebin ls /boot and indicate which kernel you removed with apt?
<sheptard> initrd.img-3.12.0-031200-generic
<sheptard> is the one I removed
<sheptard> well thats the version
<sheptard> and dkms/etc keep trying to regenerate that kernel
<markthomas> sheptard: Please do a pastebin of ls /boot
<sheptard> http://paste.ubuntu.com/6671076/
<Brad_> hello?
<Guest14915> does anyone know why my server does not recognize the command "add-apt-repository"?
<bekks> Because it doesnt exist. It is "apt-add-repository".
<Guest14915> thanks
<markthomas> Guest14915: Also, make sure you apt-get install python-software-properties
<Guest14915> thanks you guys, i guess the juju website made a mistake
<sheptard> markthomas: I think its python3-software-properties now
<markthomas> Guest14915: Please paste the link.
<markthomas> sheptard: Depends on which release.
<markthomas> sheptard: Please do a dpkg --get-selections |grep linux-image and pastebin
<Guest14915> https://juju.ubuntu.com/install/
<sheptard> http://paste.ubuntu.com/6671091/
<markthomas> Guest14915: Oddly enough, both commands exist on my 12.04 system
<Guest14915> im running 12.04.3
<markthomas> Guest14915: Confirmed, both commands are present.  You need to install that package I mentioned.
<Guest14915> already done
<Guest14915> thanks you guys
<markthomas> sheptard: looking...
<Guest14915> ok so i installed juju and its giving me this error about no public ssh keys found
<markthomas> sheptard: You might want to try uninstalling the linux-image-generic metapackage.  That package's job is to make sure the kernel is updated as new kernels become available.
<sheptard> markthomas: thats what I was thinking looking at the list of packages
<sheptard> markthomas: thanks
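[markthomas's suggestion sketched as commands; the exact package names depend on what the `dpkg --get-selections` listing shows on the affected system, and the 3.12.0-031200 version string comes from sheptard's earlier message.]

```shell
# See which kernel packages are still installed
dpkg --get-selections | grep linux-image

# Purge the leftover packages for the removed kernel version
sudo apt-get purge 'linux-image-3.12.0-031200*' 'linux-headers-3.12.0-031200*'

# Optionally drop the metapackage that pulls in new kernels automatically
sudo apt-get purge linux-image-generic

# Regenerate the initramfs for the kernels that remain
sudo update-initramfs -u -k all
```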
<bekks> !ssh | Guest14915
<ubottu> Guest14915: SSH is the Secure SHell protocol, see: https://help.ubuntu.com/community/SSH for client usage. PuTTY is an SSH client for Windows; see: http://www.chiark.greenend.org.uk/~sgtatham/putty/ for it's homepage. See also !scp (Secure CoPy) and !sshd (Secure SHell Daemon)
<markthomas> Guest14915: You used apt-add-repository to enable the cloud archive?
<Guest14915> i think so
<markthomas> Guest14915: What was the syntax you used?
<markthomas> (Guest14915: I think the ubuntu cloud keyring did not get installed properly)
<Guest14915> well im trying to install juju, so ive been following the website, was there anything else i was supposed to do first?
<bekks> markthomas: Which is not an SSH key at all.
<markthomas> bekks: That's true.  It's a gpg key, isn't it.
 * markthomas is trying to do too many things at once.
<bekks> Guest14915: Pastebin the entire error you get please - in a pastebin service.
<Guest14915> ok will do
<Guest14915> http://pastebin.com/HTD4RWLp
<bekks> Guest14915: http://stackoverflow.com/questions/19069876/juju-error-error-parsing-environment-amazon-no-public-ssh-keys-found
<Guest14915> xD
<Guest14915> command not found
<Guest14915> for juju bootstrap
<Guest14915> so i think i found the problem
<Guest14915> i was following instructions for amazon webservice
<Guest14915> but i should be following openstack
<Zephree> Looking for a point in the right direction, I have an Ubuntu server with Apache2/PHP5/Postfix, when I run the mail() function in PHP I get the error "sh: 1: /etc/postfix: Permission denied" Any chance you folks know what exactly is producing the error?
<Delemas> I'm trying to test upgrading a server from 13.04 to 13.10 using do-release-upgrade -s. grub-pc is blowing up causing a fatal error. Should it actually work?
<hitsujiTMO> Delemas: i'd clone first. Whats the actual error?
<Patrickdk> it should
<Delemas> Unfortunately I can't clone. It is a physical. sec digging to exact errors.
<hitsujiTMO> can't clone a physical machine? what?
<Delemas> Well not easily.... The server is in colo and the remote access card isn't cooperating...
<Delemas> http://pastebin.ca/2520820
<cfhowlett> Delemas, issuz u haz ...
<Delemas> lol yep...
<hitsujiTMO> ahh i see
<Delemas> I note the "none" is part of the sandbox trick....
<hitsujiTMO> Delemas: http://irclogs.ubuntu.com/2013/03/09/%23kubuntu-devel.txt is the only reference to the exact error. LXC related it seems alright
<hitsujiTMO> Delemas: any references to "Path `/boot/grub' is not readable by GRUB on boot. Installation is impossible. Aborting." i find suggest that it will fail to boot after
<Delemas> dpkg -l shows: rc  liblxc0
<Delemas> but nothing else lxc related....
<Delemas> Mind you that is after the upgrade blows up...
<Delemas> hmm at least I fixed the remote access controller....
<aarcane> So I've spent about 24 hours non consecutive trying to solve this error message.  It's the only error message I have from this particular error.  There's another error I'll paste next that's related.  I can find NO information on how to resolve this error on google.  Does ANYONE know anything about this error:  error: Unable to add bridge br0 port vnet0: No such process
<aarcane> The other error occurs when attempting to define a logical network:   Unable to create bridge virbr0: No such process
<cfhowlett> aarcane, this is with virtualbox
<cfhowlett> ?
<aarcane> kvm
<aarcane> and libvirt
<aarcane> and I have two other nearly identical systems that work just fine.
<SoulRaven> hello
<SoulRaven> please help me
<SoulRaven> i can't config the snmpd to be visibile from outside
<Robert34> Hello, I want my apache webserver page to require a password (.htaccess) and if it's an internal ip (192.168.x.x) there should be no password needed. Can someone help me?  Ubuntu server 13.10
<bekks> Robert34: Here is the best documentation on that topic: http://httpd.apache.org/docs/current/howto/htaccess.html
<SaberX01> Robert34, your .htaccess should look similar to this: order allow,deny   allow from 192.168.x.x   deny from all
<Robert34> bekks: Already looked at it, ty
<Robert34> SaberX01: I will try, thank you very much
<SaberX01> Robert34, Or better yet: order deny,allow    deny from all    allow from 192.168.x.x
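[SaberX01's snippets use Apache 2.2 directives, but Ubuntu 13.10 ships Apache 2.4, where the policy Robert34 actually asked for, "password required unless the client is on the internal network", is expressed with `RequireAny`. The file path and realm name below are examples.]

```apache
# .htaccess sketch for Apache 2.4 (Ubuntu 13.10)
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/apache2/.htpasswd
<RequireAny>
    Require ip 192.168.0.0/16
    Require valid-user
</RequireAny>
```

On Apache 2.2 the closest equivalent combines the `Order`/`Allow` directives above with `Satisfy Any` plus the usual `AuthType`/`AuthUserFile` lines.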
<Kehet> okay, stupid question but is there any sensible reason why my old website host was allowing ssh only from subdomain sftp.domain.com ?
<Kehet> = should I do same on my vps
<SaberX01> Kehet, as for why, only they would know that, probably. As to the 2nd question, if I were to allow ftp, it would be in a jailed shell only, for better security.
<Kehet> so it wouldn't be a huge security crime to allow ssh/ftp/imap/whatever on my main domain without any subdomain?
<patdk-lap> it also makes it many times easier to control and manage when they are on different servers
<gpled> anyone have a virtual server host that they like?
<holstein> ive heard great things about linode.. maybe a general linux channel would be more helpful.. or an ot ubuntu one
<gpled> holstein: thanks
<Kehet> im using gandi .. no problems so far
<mgw> what scripts are there (standard or otherwise) for managing debian/changelog and debian/control in package source?
<aarcane> can someone please paste the output of md5sum /lib/modules/3.5.0-43-generic/kernel/net/bridge/bridge.ko
<Zephree> Hello, I am running Ubuntu with Apache2/PHP5/Postfix. When running php's mail() command through apache or command line as root, I get "/etc/postfix: Permission Denied". I'm hoping someone can point me in the right direction as I've been doing google searches for the past 1.5 days with no luck. Thanks!
<TJ-> Zephree: Can you show the command line you use that results in that so we can try reproducing it?
<Zephree> TJ-: Sure, one second please.
<Zephree> php -a, then mail('example@example.com', 'Subject', 'Test');... OUtput: "sh: 1: /etc/postfix Permission denied"
<Zephree> I thought there may be an issue with the www-data user that Apache runs as, but I'm currently running as SU on the server on command line and getting the same result.
<TJ-> Zephree: IS that using sudo or as your regular non-privileged user? Also, does postfix work in other situations?
<TJ-> Zephree: If postfix works otherwise, then that narrows the possibilities
<Zephree> TJ- Tried it both ways, it never works through php mail().
<TJ-> Zephree: Then it sounds like a php configuration error for the smtp client settings
<Zephree> TJ-: Got it, I'll check again. I followed some tuts online, perhaps they steered me in the wrong direction. :)
<TJ-> Zephree: I'll look at some configs here, see what they're like
<James_Epp> I installed the dhcp3-server package and have it configured. However, I do not want it to start automatically on boot. How can I configure it so that it is not made to start on boot, but instead to be left out completely?
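[James_Epp's question doesn't get a reply in the log; for reference, one common way on 12.04-era releases, where the daemon runs as the isc-dhcp-server service (dhcp3-server is a transitional package name), to keep it installed but not started at boot:]

```shell
# Upstart job: an override file marks the service manual-start only
echo manual | sudo tee /etc/init/isc-dhcp-server.override

# SysV-init fallback: disable the rc symlinks
sudo update-rc.d isc-dhcp-server disable

# Start it by hand only when wanted
sudo service isc-dhcp-server start
```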
<Zephree> TJ-: It looks like the only line they had me set in php.ini was: sendmail_path = "/usr/sbin/sendmail -t -i"
<TJ-> Zephree: Have you checked permissions on /etc/postfix ?
<Zephree> TJ-: Let me do it now to be sure I did the right thing.
<Zephree> TJ-: It says the following when I do -l in /etc: drwxrwsr-x 3 root root 4096 (date) postfix
<TJ-> Zephree: I can't reproduce it here... "mail(...);" sends the email correctly. The "sh ..." error suggests that php is failing to invoke the sendmail command via the shell
<Zephree> TJ-: So you think the permissions are okay at the postfix level and at this point it is likely an issue in my php config?
<TJ-> Zephree: It looks like something to do with the shell being invoked... not sure if that can be affected by the PHP config but as everything else is configurable I'd suspect that is too
<Zephree> TJ-: Dumb question, if you were to google this, at this point how would you go about narrowing it down? I've searched the error, and various combinations of Ubuntu php postfix denied, etc.
<aFeijo> hi guys, I was wondering if there is any script solution that I can use to automatically install and configure all stuff that I usually use in a new server?
<aFeijo> I was wondering if there is any script solution that I can use to automatically install and configure all stuff that I usually use in a new server?
<SaberX01> aFeijo, if it's a new server install, look at using preseed; that will automate a lot of it for you, maybe not every detail though. If you need a highly scalable solution, look into Landscape.
<aFeijo> thanks SaberX01
#ubuntu-server 2014-01-02
<krababbel> Hi, why is there a relative path in openssl.cnf for CA_default? Where should I keep my files like certs if I want to be my own CA for a LAN? Some say in /root/ca but others suggest /etc/ssl
<krababbel> Also I want to create a certificates for my webserver and mailserver. Without my own CA signing both certs, I'd need to install multiple certificates on clients, correct? With my CA, a client could verify both mail and web certs using only the CA cert, correct?
<patdk-lap> depends on what you do
<patdk-lap> you could have some other ca sign them
<krababbel> patdk-lap: I need to sign them myself, it is just a test LAB.
<krababbel> patdk-lap: So if I sign them myself, my clients in the lab could verify all server certificates, which were signed by my CA, and the clients would only need to install my CA certificate, correct? I am a bit unsure now.
<krababbel> patdk-lap: Otherwise I could just self sign the certs on the mailserver and webserver for example each.
<patdk-lap> yes
<patdk-lap> but the server would need it's certificates and any intermediate certs (that doesn't sound like your making)
<krababbel> patdk-lap: OK, thank you a lot.
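[The lab-CA workflow krababbel describes, sketched with openssl; file names, lifetimes, and subject fields are made-up examples. A client that trusts only ca.crt can then verify every certificate the CA signs, which is the single-trusted-cert property asked about above.]

```shell
# 1. CA key and self-signed CA certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 365 -subj "/CN=Lab CA" -out ca.crt

# 2. Server key and certificate signing request
openssl genrsa -out web.key 2048
openssl req -new -key web.key -subj "/CN=www.lab.local" -out web.csr

# 3. Sign the CSR with the CA
openssl x509 -req -in web.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out web.crt

# 4. A client holding only ca.crt can verify any cert the CA signed
openssl verify -CAfile ca.crt web.crt   # prints: web.crt: OK
```

Repeat steps 2-3 for the mailserver cert; the client still needs only ca.crt installed.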
<rsd> any good suggestions for a MON replacement (if any)?
<patdk-lap> what is a mon
<rsd> system monitoring, alert, etc
<krababbel> I am unsure about LDAP authentication and /home on an NFS server. If the LDAP and NFS servers are different machines on the network, could pam_mkhomedir create the homedirs on the NFS server on first login?
<krababbel> Why is it a problem with having both local and LDAP homedirs in /home? I read that usually you should separate them, but I don't see why. Aren't UID and GID enough?
<krababbel> Maybe that's only for users which already exist locally.
<krababbel> Or is the problem that a local user trying to login and mounting their /home/... could be rejected by the NFS server because NFS may not find that user in LDAP and locally, I guess.
<krababbel> So if the same local user already exists on all machines, and the only additional users in /home would be LDAP users, then separating /home wouldn't be necessary?
<Rar9> morning. need some help with an Error 503 for installing Solr4 with Tomcat7  .. Anyone?
<ikonia> Rar9: 503 is service unavailable suggesting that it's not listening on the port you have defined, or it is listening but the application is not configured (which is common with solr)
<krababbel> Why is it a problem with having both local and LDAP homedirs in /home? I read that usually you should separate them, but I don't see why. Aren't UID and GID enough? But if there is only the same local user on all machines, and the only additional users in /home would be LDAP users, then separating /home wouldn't be necessary?
<zul> rbasak/hallyn: im adding that arm64 patch before uploading a new libvirt (1.2)
<ahnkle> i am thinking of getting a Proliant DL140 G3 for personal use. there is an Ubuntu 10.04 release. is this retired now?
<rbasak> zul: I'm not sure we should right now.
<rbasak> zul: I don't want to cause a future conflict with a Linaro patch.
<rbasak> I emailed Clark (Linaro) to get his view.
<zul> rbasak:  arrgh after i rediffed it
<rbasak> Since he's doing the libvirt armhf/arm64 enablement work which involves pushing it upstream.
<rbasak> zul: well, I did say in the bug
<zul> rbasak:  yes but im not awake yet :)
<hallyn> zul: bug 1264955 - any objections to nfs-common being in libvirt build-dep?
<uvirtbot> Launchpad bug 1264955 in libvirt "libvirt: find-storage-pool-sources work unexpected" [Undecided,New] https://launchpad.net/bugs/1264955
<zul> hallyn:  nah
<zul> hallyn:  1.2.0 has been uploaded like a half hour ago
<hallyn> yeah - and on the one hand i don't want a new upload just for that, but otoh if we don't do it now we'll never remember :)
<hallyn> well i've added it to my long list of libvirt bugs to work on when i have time
<zul> hallyn:  sweet...just batch them up :)
<krababbel> Hi, is there a problem sharing one public folder over samba and nfsv4 at the same time?
<jrwren_> no, no problem.
<krababbel> jrwren_: OK thanks, I'll try that.
<jrwren_> why would there be a problem :)
<krababbel> jrwren_: I asked because NFSv4 uses usually this special folder /exports
<krababbel> So I was unsure if they'd work nicely together. (samba and nfs)
<jrwren_> oh no.
<jrwren_> that /exports is just a default config. you can export anything
<krababbel> jrwren_: Thank you a lot. :)
<hallyn> zul: is ppa:ubuntu-cloud-archive/havana-staging "the havana cloud archive" ?
<zul> hallyn:  its the staging area http://www.ubuntu.com/download/cloud/cloud-archive-instructions
<hallyn> where is the real havana cloud archive then?
<hallyn> oh i see, thx
<hallyn> how do i add the apt-key?
<hallyn> eh, nm.  no matter for the test
<hallyn> zul: are you doing anything right now on libvirt apparmor bugs?
<hallyn> jdstrand: can I (later today/tomorrow) point you to some debdiffs relating to libvirt-apparmor?
<zul> hallyn:  nope just getting libvirt-python ready for mir
<hallyn> oh i thought with merge from debian you didnt' have to
<hallyn> ok.  just one more lxc thingie and then i'm hitting libvirt-apparmor hard.
<zul> hallyn:  nah i wish it was like that
<adam_g> zul, if you get some minutes today could you plz take a look at the 2013.2.1 branch updates at https://code.launchpad.net/~ubuntu-server-dev/+activereviews ?
<zul> adam_g: 2013.2.1?
<adam_g> zul, the first havana stable release
<zul> adam_g:  cool gimme a sec
<adam_g> zul, no rush
<zul> adam_g:  +1
<adam_g> zul, nice thanks
<krababbel> I want to export /home directories over NFS. Why do people say it is a problem if I do not separate the remote home folder from the local home?
<krababbel> For example like described here in the second paragraph: http://nickportertech.blogspot.co.at/2010/02/ubuntu-machine-with-nfs-home-and-ldap.html
<jrwren_> krababbel: its only a problem if you want to login to an nfs client system when the nfs server is down
<krababbel> jrwren_: OK thanks a lot, of course, I am tired. :)
<jrwren_> if that is not a requirement, then it is no problem.
<krababbel> jrwren_: I see, yes.
<hallyn> jdstrand: in qrt test-libvirt.py, there are two lines restoring "/etc/apparmor.d/abstracations/libvirt-qemu" <sic>.  Is that some intended genius, or a typo?
<tclarke> I'm setting 12.04 MAAS and I'm having trouble following the install docs...I get to the point where I need to d/l initial boot images and run "maas-cli mynam node-groups import-boot-images" where mynam is the name of my login profile
<tclarke> node-groups: error: argument COMMAND: invalid choice: u'import-boot-images' (choose from 'register', 'list', 'refresh-workers', 'accept', 'reject')
<stdaro> https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/257857 is pretty unpleasant, still applies today
<uvirtbot> Launchpad bug 257857 in openjdk-6 "openjdk-6-jdk should depend on openjdk-6-jre-headless too" [Low,Triaged]
<bravvve22> hello, I'm a newbie. On a vps with ubuntu 10.04 installed, ifconfig gave me venet0:0 and no eth0. What does that mean?
<bigjools> tclarke|AFK: you need to run the version of maas in the cloud archive
<adam_g> zul, ping
<krababbe1> Hi, if I enable no_root_squash on an export, could it be dangerous for the NFS server, or would that "just" allow a remote root to do anything within that export folder?
<bekks> krababbe1: yes, it could be dangerous, depending on what you are sharing.
<krababbe1> The problem is, that I want to have an NFS server export /home to clients. These clients are LDAP accounts, and I want to use pam_mkhomedir to create their homes on first login. But I get 'permission denied', and I guess it has to do with the fact that remote machines are restricted by root_squash. With no_root_squash it seems to work.
<krababbe1> bekks: The NFS server, LDAP server and client are three different machines on the LAN.
<bekks> krababbe1: Which doesn't matter, and doesn't clarify which folders you are going to share with no_root_squash
<krababbe1> bekks: The /home folder would be shared with no_root_squash.
<krababbe1> bekks: On the NFS server it would be /mnt/home, since I separated local home from LDAP user's homes
<bekks> krababbe1: As long as root isn't going to use stuff from /home, it's nasty, but somewhat safe.
<krababbe1> bekks: I guessed so. :) Is there a similar alternative?
<krababbe1> using pam_mkhomedir I mean
<krababbe1> I am doing this for the first time.
<hitsujiTMO> krababbe1: i'd also ensure subtree_check is used
<krababbe1> hitsujiTMO: Thanks, I'll try that.
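[The setup discussed above as an /etc/exports sketch; the path and subnet are examples taken from the conversation. Scoping no_root_squash to this one export on a trusted lab subnet limits the exposure bekks warns about.]

```text
# /etc/exports
# no_root_squash lets client root act as root inside this export,
# which pam_mkhomedir needs to create home directories on first login.
/mnt/home  192.168.1.0/24(rw,sync,subtree_check,no_root_squash)
```

After editing, `sudo exportfs -ra` reloads the export table without restarting the NFS server.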
<adam_g> zul, anyway, /me needs sponsorship for http://people.canonical.com/~agandelman/heat-2013.2.1/ . can you help? guess heat is not seeded properly?
#ubuntu-server 2014-01-03
<krababbe1> bekks, hitsujiTMO Thanks, creating the LDAP users locally on the fileserver seems to work. :)
<jtran> when i do apt-get install mysql-server , it auto creates a mysql user and assigns it a uid.  how do i control which uid to give it?
<krababbel> Is there a major problem creating users all with GID=100?
<jtran> krababbel:  /scarcasm?
<jtran> sarcasm lol
<krababbel> jtran: Just a necessity
<jtran> oh u said GID100
<jtran> i misread that as uid 100
<krababbel> jtran: Oh, it is 100, or group=users. I have LDAP for authentication, and I didn't want to create group entries in LDAP.
<jtran> i don't see any reason why having all users in same GID would be bad as long as it is no privs in that gid
<krababbel> So I assigned all accounts stored in LDAP GID=100. It is just a small lab.
<lifeless> jtran: you can pre-create the user if you want.
<krababbel> jtran: OK, I see. Well, they won't be able to put each other in "their" groups to share files restrictively, for example, but I was wondering if there was some major issue. :)
<jtran> lifeless: cool that's what i was thinking. thanks for confirming
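[lifeless's "pre-create the user" idea, sketched; the package's maintainer scripts reuse an existing "mysql" user rather than creating one. UID/GID 27 are arbitrary example values.]

```shell
# Create the mysql group and user with a chosen UID/GID first
sudo addgroup --system --gid 27 mysql
sudo adduser --system --uid 27 --gid 27 --home /var/lib/mysql \
     --no-create-home --shell /bin/false mysql

# Then install; the installer picks up the existing user
sudo apt-get install mysql-server
```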
<krababbel> jtran: I misread uid as well :)
<krababbel> Is there a way to create a group as non-root?
<krababbel> Or something similar, so that I as a user can put another user in that group, so only the two of us can access a file with group access for that group?
<jtran> krababbel:  no groups are 'root'
<jtran> other than sys
<jtran> and root
<jtran> by def users don't have the ability to add another user to a group
<krababbel> jtran: I mean, as a user, I cannot create a group
<jtran> so i dunno how you'd do that other than by granting sudo
<krababbel> jtran: I see
<krababbel> of course, sudo may do it
<krababbel> Maybe I am tired, but this means you cannot give access to only specific users to some of your files without root?
<krababbel> I guessed you could create non-system groups as user. :)
<jtran> krababbel: u can create non-system groups?
<jtran> krababbel: u might get more hits on your question right now in #ubuntu or #linux
<krababbel> jtran, no, I want to. I am new to Linux and sysadministration, and didn't know that users could not create groups or assign users to them.
<zul> adam_g: heat is still in universe
<zul> adam_g: done
<adam_g> zul, oh? hm
<adam_g> zul, thanks
<zul> np
<emko> any reason why, when i access a url with https, it will autoload the index.php, but with http it won't and i have to type index.php for it to work?
<emko> htaccess has DirectoryIndex index.html index.php
 * ockert is here
<brendon1981> hi everyone.... hoping someone can help me - tried to follow a guide to install headless transmission.  Now I give up and just want to fix my permissions, as when I try to mkdir in ~ directory it says "no space left on device", but when I sudo, it works fine.
<danwest> http://pastebin.ubuntu.com/6685353/
<danwest> ^^ do-release-upgrade fail
<danwest> seems I'm not the only one https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/1178245
<patdk-wk_> from? do?
<rbasak> danwest: I just asked bdmurray in #ubuntu-devel.
<rbasak> (about that bug)
<rbasak> danwest: could you join #ubuntu-devel and help bdmurray, please? He asked:
<rbasak> 16:18 <bdmurray> rbasak: ah, okay.  do you know what kind of install (for  testing) the upgade was being attempted from?
<danwest> rbasak: will do, thx
<Untouchab1e> What email server would you all recommend?
<TheLordOfTime> rbasak, did you see this?  http://www.jorgecastro.org/2014/01/02/nginx-coming-to-main-in-14-dot-04/  http://arstechnica.com/information-technology/2014/01/ubuntu-puts-nginx-web-server-on-equal-footing-with-apache/
<TheLordOfTime> s/this/these/
<rbasak> TheLordOfTime: indeed!
<rbasak> TheLordOfTime: I don't think jcastro was expecting that reaction! How do you feel about it?
<TheLordOfTime> rbasak, i did have a minor spat on twitter with the arstechnica author about them jumping the gun, but jcastro's comment was made "promoted" about how it's not yet in main.
<TheLordOfTime> rbasak, a mix of happy, excited, and not caring
<jcastro> hah
<TheLordOfTime> the not caring part because i'm more concerned about things outside of Ubuntu at the moment
<jcastro> welcome to the ubuntu server team!
<TheLordOfTime> like the frigid cold
<jcastro> every day is like that! :p
<TheLordOfTime> xD
<TheLordOfTime> rbasak, at this point, though, i'm hoping nothing catastrophic happens to prevent the main inclusion
<rbasak> I hope so too.
<TheLordOfTime> because this is something everyone is happy to hear about
<rbasak> Though, if anything, the reaction demonstrates why we should have it in main.
<TheLordOfTime> indeed.
<TheLordOfTime> rbasak, i do have a feeling the Apache2 die-hard fans who despise nginx are going to have some complaining
<TheLordOfTime> but ultimately this is good news, and this reaction should serve well in favor of nginx in main (if only a little)
<TheLordOfTime> jcastro, thanks for the recognition though :)
<jcastro> :)
<rbasak> Well, there's no suggestion that apache2 will leave main.
<TheLordOfTime> rbasak, true.
<TheLordOfTime> either way, this is a good thing :)
<gdeeble> Can someone direct me in the right direction here? I have a server running Zentyal 3.2. Followed their tutorial on setting up a domain, however, only have 1 NIC. My problem is, I can get my laptop(only computer I'm trying right now) to join the domain. It says it can't find the controller, but I have my DNS set up right any all.
<hallyn> zul: bug 1244694 , could you test the proposed line in comment #17?
<hallyn> (make sure to apparmor_parser -r /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper after adding the line, of course)
<tclarke> hi, trying to get the ubuntu cloud live usb working on a laptop..when I boot, not seeing eth0...restarting networking show's it in stop/waiting...lspci shows an Intel controller available
<nwilson5> best way to allow a process to copy files between servers efficiently? ex: web server, user uploads image, web server saves image on separate computer to serve said image.
<nwilson5> can scp with web user having ssh key on image server w/ appropriate permissions
<nwilson5> although i'm not sure that's ideal
<krababbel> Hi, I got a client which can access my LDAP server for user accounts. 'getent passwd' shows them fine and I can login too with those. Local accounts are OK too. My problem is that I cannot get proper write access to another server running Samba. This server does not know about LDAP user accounts, but I tried enabling them and it wouldn't help. I created a Samba share which should be accessible to users with GID=100. My LDA
<krababbel> The files created from the client have the UID of winuser on the server. I can touch them, but not edit them. I can also delete them, although it always tells me they are write protected.
<krababbel> not even the same local user on both systems seems to be able to write. I have to enable force user for a share and set it to the local user.
#ubuntu-server 2014-01-04
<krababbel> OK, I forgot to smbpasswd the local user
<xperia> hi. i am trying to upgrade an outdated ubuntu server, but with apt-get dist-upgrade i always get the error message that the urls cannot be found on the ubuntu servers because the release is no longer supported. how else can i upgrade an outdated ubuntu server?
<makara> hi. How can I change which servers to update from, from the command line? (I'm on 12.04 server)
<bekks> edit /etc/apt/sources.list
<makara> bekks, ok. I think it won't solve my problem though. On sudo apt-get update it returns this: "W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://archive.canonical.com precise Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <ftpmaster@ubuntu.com>"
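[A commonly suggested recovery for BADSIG errors like makara's, assuming the cached index files are corrupt rather than the mirror itself being bad: discard the cached lists and fetch fresh copies.]

```shell
sudo apt-get clean
sudo rm -rf /var/lib/apt/lists/*
sudo apt-get update
```

If the error persists after this, trying a different mirror in /etc/apt/sources.list is the usual next step.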
<luminous> question: say I have some systems on an internal network that is unable to reach launchpad. I want to setup some services, which need to access a set of PPA's from launchpad, say updates for nginx and mongo or something. is it easier to automate the setup/update for a mirror of select PPA (using the PPA as the source for the pkg), or just automate the package build process to setup a custom mirror?
<Patrickdk> luminous, personally, I just use apt-cacher-ng, or you could use another proxy server if you wanted
<Patrickdk> will save bandwidth and speed up downloads too
<luminous> Patrickdk: yea, I think I need both to some degree. eg ability to build / host, but then also proxy/cache
<TheLordOfTime> is there a guide to setting up PostFix, IMAP, and SSL for both SMTP and IMAP anywhere?
<MavKen> are you using tasksel ?
<TheLordOfTime> no
<pmatulis> TheLordOfTime: very old and well-known tech, so lots of guides
<fhf> Hi all. Can any1 explain to me why this config: http://paste.ubuntu.com/6693390/ is blocking outgoing connections on my server? I mean I can ssh to machine but pinging other machines and wget http://www.google.com/ fail? What else should I add to my config to allow outgoing connections?
<RoyK> fhf: generally, using ufw, it's easier - just post iptables -vnL
<RoyK> oh, btw, why are you usin stone-age ftp?
<fhf> RoyK: I love ufw but... this is an OpenVZ VM so ufw doesn't work :/ and ftp will later be replaced by sftp.
<RoyK> ok
<RoyK> pastebin iptables -vnL
<fhf> http://paste.ubuntu.com/6693443/
<RoyK> nothing drops outgoing there, but the ESTABLISHED,RELATED rule seems to be missing
<fhf> this: "$IPTBLS -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT" ? its on the top of config file
<RoyK> just can't see it in the output
<RoyK> perhaps I'm blind? ;)
<RoyK> why doesn't ufw work with openvz, btw?
<fhf> hah you are not mby this VM is haunted
<RoyK> ufw is just an iptables wrapper
<fhf> RoyK: ufw is using modprobe and it fails to start ERROR: problem with running ufw-init
<fhf> RoyK: and ufw sets some setting via sysctl and sysctl doesn't work on OpenVZ VM
<fhf> RoyK: I figured it out. I inserted "$IPTBLS -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT" instead of "$IPTBLS -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT" thanks for help
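(fhf's working fix, expanded into a minimal sketch: on older OpenVZ kernels the `conntrack` match can be unavailable, so the legacy `state` match does the same job. The OUTPUT rule is an assumption, only needed when the OUTPUT chain's policy is DROP:)

```shell
# Allow reply traffic for connections this host initiated.
# 'state' is the legacy match; 'conntrack' may be absent on OpenVZ kernels.
iptables -A INPUT  -m state --state RELATED,ESTABLISHED -j ACCEPT
# Assumption: only needed if the OUTPUT chain's policy is DROP:
iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
```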
#ubuntu-server 2014-01-05
<Contigi> join #apache
<spinez> hey guys was wondering if someone could help me troubleshoot why i can't connect to my apache server outside my local network
<spinez> apache listening on all ports and port forwarding active.... ip address typed in correctly
<Patrickdk> spinez, routing
<spinez> could you be more specific?
<mickd> i recently had to move my hdd from one computer to another. everything has gone fine so far it seems except for accessing my server through the ddns while on the local network. i have a dd-wrt on my router and i know it supports NAT loopback because it did before. would swaping the hdd like this casue the problem? i think not, just looking for some insight/what to look for. thanks.
<spinez> guess you could try to setup dnsmasq on the services tab
<spinez> address=/www.domain.com/192.168.1.105
<mickd> spinez: ok, looking at the services tab i dont see dnsmasq straight away
<mickd> but i can google it
<spinez> on mine, its right under the static leases section
<mickd> spinez: would i need to set this up just because of the hdd swap? i know its not ideal, and is only temporary, as of right now its unusable anyway
<mickd> i dont want to bother if a fresh install will set it all right
<spinez> is that hdd the one running all your services?  i cant imagine putting a hdd in a different pc would cause that unless that hdd was the server
<mickd> spinez: it is. deal is its an old IDE and the new machine is SATA. so i was going to try and get it going off of usb boot. all went well, except the networking.
<Patrickdk> it's cause it has a new network controller, the mac address changed
<Patrickdk> guess you didn't setup a static ip
<spinez> well, i would imagine you need to update your settings for the new IP
<mickd> i do indeed have a static ip
<Patrickdk> who does? the *server* or dd-wrt
<Patrickdk> what is in /etc/network/interfaces
<mickd> the services work if i use the local ip the
<Patrickdk> and that local ip is the same ip as in dd-wrt for forwarding?
<mickd> the server has a local ip, i set it in the interfaces forever ago
<mickd> yes, port forwarding has the right ip and ports
<Patrickdk> when you do, ifconfig
<mickd> the interface eth0 is the same
<Patrickdk> it shows the same ip as in interfaces file?
<mickd> yes
<Patrickdk> normally it changes
<mickd> 134
<spinez> but you're trying to access them via a domain name?
<mickd> yes, dlindkddns.com
<mickd> myname.dlinkddns.com:port used to take me where i wanted
<spinez> does that match your external ip ?
<mickd> yes
<spinez> well, i do similar but i just use masqdns for the domain i want
<spinez> point dnsmasq to the domain name and local ip of the server
<spinez> so the router will just use its host file for that domain giving you the local address instead of querying dns servers for it
<spinez> works perfect for me
<Patrickdk> dnsmasq is on dd-wrt, easy to add the options
<spinez> thats what i mean.... dnsmasq ;)
<mickd> cool. i will do that. thanks for the help guys.
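(What spinez and Patrickdk describe is a single line of dnsmasq configuration on the dd-wrt box; the hostname and LAN IP below are assumptions pieced together from the log:)

```
# dd-wrt: Services -> Additional DNSMasq Options
# Answer queries for the DDNS name with the server's LAN address,
# so LAN clients bypass NAT loopback entirely.
address=/myname.dlinkddns.com/192.168.1.134
```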
<spinez> now i need to figure out why my router is showing a different external ip than what showmyip websites are telling me
<Patrickdk> due to *double-nat* :)
<spinez> yea well im not sure it is double nat'd
<Patrickdk> or your isp implemented a new web cache accelerator/intercepter :)
<spinez> could be... but im in china and i doubt china telecom would be doing that haha
<Patrickdk> china? no wonder
<spinez> it was working up until a couple months ago...  gonna try clearing nvram on router and setting it back up
<spinez> both ips that i see are routeable
<mickd> i see spinez has left. i still cant seem to get in with dnsmasq. web browser timesout almost immediatly
<Plizzo> Hello, I made a mistake which lead me to reinstall Ubuntu Server. I have a RAID5 array created with mdadm and I now wish to re-assemble it. From what I understand I should be able to by using "sudo mdadm --assemble --scan", but is there any way to do so without the rebuild as I know the RAID is intact and fully functional?
<bekks> Plizzo: cat /proc/mdstat
<Plizzo> Bekks, I haven't run the command to assemble yet, so there is no mdstat to be seen
<Plizzo> bekks: But If I run the command, won't that initiate a rebuild?
<bekks> It will assemble the raid, not set it up from scratch.
<Plizzo> bekks: So I should be able to use it directly if everything is well?
<bekks> Of course.
<Plizzo> bekks: Great, it worked, thank you! :)
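(The assemble-without-rebuild sequence, sketched out; `--assemble` only restarts the array from the metadata already on the member disks, it does not resync an intact array:)

```shell
# Assemble existing arrays found by scanning member superblocks.
sudo mdadm --assemble --scan
cat /proc/mdstat        # confirm the array is active, not rebuilding
# Optional: record the array so it assembles automatically on boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```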
<Plizzo> Which is the best way to install rtorrent on Ubuntu Server? Does the one in aptitude contain xml-rpc or do I need to configure and compile it myself?
<bekks> Plizzo: apt-get install rtorrent
<Plizzo> bekks: And If I configure RPC with RuTorrent, Avalanche etc it should work properly ootb?
<zxd> what settings to change maybe in /etc/apt ? for  Update Manager not to popup and download only twice weekly announcing only LTS releases then install the updates, I am trying to deploy  automation via chef to multiplie fresh ubuntu installations
<iKb> i own blablabla.com, is possible to redirect lan.blablabla.com to my home server that has a dynamic ip but a ddns that always point to updated ip
<iKb> on my ow server i have a dns server also
<bekks> Plizzo: I never used RPC, RuTorrent, Avalanche.
<hitsujiTMO> Plizzo: transmission-daemon is what all the kool kids use these days. it even has a web interface
<Plizzo> hitsujiTMO: The only downside is that transmission is not based on libtorrent and doesn't give the same speeds as rtorrent does.
<arc__> Hello i am having problems with my pdc on ubuntu server, i cant connect to it, can anyone help me plz
<arc__> Win xp says invalid dommain
<Patrickdk> what is a pdc?
<arc__> Primary domain controller
<Patrickdk> those died out with windows 2000
<arc__> Ok :D
<arc__> But i am using samba 4 to have my dc
<arc__> What do u say that i should use then ?
<Patrickdk> dunno
<Patrickdk> I normally stick to windows for windows things
<Patrickdk> and I have yet to use samba4
<arc__> Win server is expensive
<Patrickdk> thought it was like $600
<Patrickdk> but doesn't really matter
<arc__> Yea i dont have $600 lying around
<arc__> Lol
<dddns> Good day! Tell me please, which squid log analizers can save page title?
<arc__> How can i test my domain
<SaberX01> arc__, what you wanting to test?
<arc__> if i can logon to my domain
<SaberX01> arc__, well try:  ssh <user>@<domain-name.com>  and see if you ccan get it.
<arc__> Kk will try that
<arc__> Damm did not work nodename not know
<SaberX01> arc__, try using the IP address  .. what is you domain name ?
<arc__> x98sever
<SaberX01> arc__, Is this an internet hosted site, or a local server you have setup ?
<arc__> Sorry its local server
<arc__> not online
<SaberX01> arc__, Ok. first can you ping the IP address of the server
<arc__> yes i can
<SaberX01> arc__, and did you install openssh-server
<arc__> Yes
<SaberX01> arc__, and Apache2 I assume ?
<arc__> Most likely
<SaberX01> arc__, to connect via ssh:   ssh user@ipaddress    .. to test Apache2:  http://IPADDRESS
<arc__> Kk
<arc__> But how will that check my domain
<arc__> ?
<SaberX01> http://ipaddress
<arc__> But this is a samba4 domain
<SaberX01> You dont have DNS setup Im sure, so FQDN not going to render
<arc__> ok whats fqdn
<SaberX01> arc__, you ahve to use IP addy's unless you use DNS and nameservers
<SaberX01> Fully Qualified Domain Name .. which a local server install is not going to have.
<hitsujiTMO> arc__: or you can add entries to your /etc/hosts        for testing
<SaberX01> Thats only going to render if he's on the local host though.
<mbnoimi> How can I set specific path for log file of OpenVPN? default logging of OpenVPN stored in syslog
<odinho> How does grub find the UUID it uses? I'm moving / (root) on my server from my RAID1 to an SSD, and I ofc want grub to do the right thing. But I never see the UUID mentioned anywhere.
<raggy> hi noob question here - if i add a command to the crontab to run every minute & the task takes longer than a minute, what happens? it just runs the next task asap, or does fire up another instance?
<hitsujiTMO> odinho: as in you don't see: sudo blkid
<hitsujiTMO> ?
<odinho> hitsujiTMO: I found out I could override whatever it found by itself by using   GRUB_DEVICE_UUID=<hardcode>   in /etc/default/grub  :)
<odinho> hitsujiTMO: I guess what happens is that it looks at "what is the uuid of the current / mount right now?" and use that.
<odinho> hitsujiTMO: And obviously that won't work when I want to change where the root mount should be.
<odinho> raggy: Another instance. There's several locking ways to deal with that problems. serverfault has quite a few.
<hitsujiTMO> odinho: yes i believe it generates from /etc/mtab
<raggy> odinho: thanks, i'll have a look. any you'd recommend?
<odinho> raggy: I have normally just done it in the program itself, so that I am also locked out (or the cron is, if I did it manually). -- However, for other stuff, I'd use  run-one.
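(The two locking approaches mentioned, as a sketch; `/usr/local/bin/myjob.sh` is a hypothetical job script:)

```shell
# In the crontab, skip a run while the previous one still holds the lock:
#   * * * * * flock -n /tmp/myjob.lock /usr/local/bin/myjob.sh
# or use Ubuntu's run-one wrapper (package 'run-one'), which does the same:
#   * * * * * run-one /usr/local/bin/myjob.sh

# flock outside cron, for a quick demonstration:
flock -n /tmp/demo.lock echo "got the lock"
```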
#ubuntu-server 2014-12-29
<Stuxnet> Hi all, I am a major ubuntu server newbie (using it for home server). During upgrade to release 14.04, I took a screen shot of a message about PostgreSQL. Is there a way to "paste bin" it here?
<Stuxnet> I hate to ask a question that I flat out know nothing about but trying to figure out what it's telling me is lke greek right now.
<Stuxnet> bbl
 * Stuxnet[A] is now away - Reason : 
<teward> Stuxnet[A]: turn off the away announce
<teward> Stuxnet[A]: you can upload it to imgur or somewhere
<jnollette> anyone here have any experiance with zfs?
<jnollette> my usb pool isn't mounting on reboot
<bearface> jnollette: any errors? #zfsonlinux might be helpful..
<jnollette> beatface, basically they're not mounting, so they're unavailable
<jnollette> having this issue with all usb zfs devices
<jnollette> sata devices, no problem
<bearface> 'zpool import -d /dev/disk/by-id poolname' work?
<jnollette> it says pool already created
<jnollette> created/imported
<bearface> 'zpool status'
<jnollette> http://hastebin.com/buzefijewi.vhdl
<bearface> bermuda being the usb devices pool?
<jnollette> yeah
<jnollette> and the cache for nevis
<bearface> and in the zpool import command, did you replace poolname with bermuda?
<jnollette> yep!
<jnollette> 'zpool import -d /dev/disk/by-id/wwn-0x5000c5004f2c431f bermuda'
<jnollette> minus quote
<bearface> just /dev/disk/by-id   not the wwn
<jnollette> can i destroy the device then reimport
<jnollette> i have the data on another drive
<bearface> if not, possibly a 'zpool export bermuda' followed by 'zpool import -d /dev/disk/by-id bermuda'  again not the wwn at the end
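(bearface's suggestion as one sequence; note that `-d` takes the directory of stable device links, not a single device node:)

```shell
zpool export bermuda                     # release the pool cleanly
zpool import -d /dev/disk/by-id bermuda  # re-import using persistent names
zpool status bermuda                     # devices should now show by-id names
```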
<bearface> yes, you can, might not be nessecerry though
<jnollette> now its online!
<jnollette> hmmm
<bearface> and probably with either ata-* or scsi-* names under zpool status? i've found it more reliable than wwn names
<jnollette> how could i force umount
<bearface> of a zfs dataset?
<bearface> or a whole pool?
<jnollette> trying to export and import the nevis pool to bring the cache back online
<bearface> if it is failing at export saying it's being used. you have to stop whatever is using it.. nfs-kernel-server being a common one if you have exports on the pool... cache are non important though and can easliy be removed and re-added
<bearface> non-important as the data cached are already stored on the raidz1
<jnollette> bearface, thats for the help, i got an rc.local blob to fix me up on load
<jnollette> *thanks
<bearface> :)
<jnollette> zfs is relatively painless other than this one issue... its really fast once you strip down a few of the extras
<jnollette> im getting like 200mb/s off 5 drives
<bearface> if the proper paths /dev/disk/by-id is in the /etc/zfs/zpool.cache the mountall bundled with the unbutu-zfs cache should take care of importing things at boot
<bearface> ubuntu-zfs package*
<bearface> and aye, i quite like zfs myself too :)
<grendal_prime> DonRichie, hey dude
<grendal_prime> i got this all hammered out
<grendal_prime> working it like a boss!!
<DonRichie> Nice to hear grendal_prime, I also improved my knowledge a bit since yesterday
<grendal_prime> ya my problem from what i can triage out of our session and the results was that simple client config portion.
<grendal_prime> would you like to see the tool im using to manage this now?
<DonRichie> maybe later, I will soon need to go to work
<grendal_prime> ewwww avoid that if you can
<grendal_prime> I tell all young men i meet... "Mary rich!"
<grendal_prime> it makes everything so much easyer...just remember ugly rich chicks need loving to!
<grendal_prime> I got lucky...my wife is beauty full and rich.
<grendal_prime> heheh
<DonRichie> I unfortunaly never met such a woman. Maybe some time in the future
<grendal_prime> allright well i got that entire setup process down to about 60 seconds now.  Mostly just a few packckage intstall and the config process is all done with a gui  (addin the servers and their ips that is)
<grendal_prime> and i got to tell ya its made everything on my internal network much faster
<grendal_prime> peace out im studdying
<DonRichie> Nice to hear that. Have fun!
<YamakasY> Patrickdk: found the real issue
<linocisco> anybody used OCSInventory-NG on ubuntu server?
<linocisco> with windows 7 clients agent?
<xFFFF> I've got a dumb bash problem I'm sure someone here would be able to help me out with...
<xFFFF> I'm trying to rsync stuff but failing when passing certain variables to my bash script...
<xFFFF> rsync -arvz --timeout=0 --partial --progress 'username@someserver.com:/some/path/$1' /some/other/path/ 2>&1
<xFFFF> Provided $1 is a valid file / folder / ... shouldn't the above work?
<maxb> "" quotes and '' quotes mean something different.
<maxb> The difference is pretty important here
<jerrcs> not even sure if the quotes are needed in this case, i don't believe they are.
<xFFFF> Well if for instance $1 contained spaces, rsync would fail.
<jerrcs> then use double quotes
<xFFFF> Yeah, I think maxb has it.
<xFFFF> Thanks guys :)
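(maxb's quoting point in runnable form: inside single quotes `$1` is literal text; double quotes let the shell expand it while a spaced filename still stays one word:)

```shell
#!/bin/sh
set -- 'Some File.txt'         # simulate the script's first argument
echo 'remote:/some/path/$1'    # single quotes: prints remote:/some/path/$1
echo "remote:/some/path/$1"    # double quotes: prints remote:/some/path/Some File.txt
# So the rsync line from the log wants double quotes:
#   rsync -arvz --timeout=0 --partial --progress \
#     "username@someserver.com:/some/path/$1" /some/other/path/ 2>&1
```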
<ppetraki> does kdump work on ec2 instances? I'm getting a hard hang when I test, the systems don't reboot.
<patdk-wk> according to: http://xenbits.xen.org/docs/4.3-testing/misc/kexec_and_kdump.txt
<patdk-wk> no, it won't work on ec2
<ppetraki> patdk-wk, looks like I'm switching to openstack, thanks
<Crell> Hi folks. I've been trying to track down an issue with a 14.04 install for the past week.  The problem is mixed; sometimes it's a kernel panic, sometimes "recursive error detected", most often the network silently crashes when copying data.
<Crell> I had been running it on a RAID 5 array; someone in #ubuntu suggested trying it without RAID, so I wiped and reformatted one of the drives and installed 14.04 solo to try that.
<Crell> It just lost the network again doing a large transfer.
<Crell> Any idea what to try next?
<Crell> Aside from the hard drives, the rest of the system is ~9 years old and had no issues of this kind under 12.04 previously.
<rberg-> in the past I had a very similar issue with writing data over the network to a raid array, in my case the problem was interrupt sharing, I had to force msi-x on my network adapter and this solved my stability problems.
<Crell> Hm.  I don't know what msi-x is.  It also is happening even without the RAID now.
<ppetraki> Crell, just a form a transporting an interrupt request, usually more efficient
<patdk-wk> the solution rberg said, had nothing to do with raid
<patdk-wk> but just that the disks and network used the same irq
<rberg-> I cant remember what I noticed in dmesg that lead me to look at /proc/interrupts
<patdk-wk> and one of those two drivers had a bug :) and msix worked around it
<Crell> Would that also sometimes cause kernel panics and such rather than just network fails?
<Crell> The last kernel panic I saw DID say something about scsi and irqs in the backtrace.
<ppetraki> starving interrupts can cause all sorts of fun stuff, so yeah
<patdk-wk> for me, my wifi card caused kernel panics
<patdk-wk> I had to disable my wifi, install, upgrade, then I could use the wifi
<ppetraki> Crell, you can see it pretty easily in htop, the cpus being pounding on get sys % much higher than the rest
<Crell> I didn't fully understand it, but it involved those
<rberg-> for me I didnt have the issue until I added a large array because more sata controllers need more interrupts
<Crell> hm.  OK, so how do I set this network setting?
<Crell> (Presumably if this works, then I would try going back to the RAID array next and see if it works there?)
<rberg-> I would start by taking a look at /proc/interrupts to see if you have much sharing going on
<ppetraki> Crell, well you want to confirm where the problem is first before you start changing stuff. watch -d 'cat /proc/interrupts'
<ppetraki>  while you load the network and disk and see how the interrupts are distributed.
<Crell> OK, so reboot to reset the network, continue copying gobs of data while that's running, and see what happens?
<ppetraki> yup
 * ppetraki lunches, bbiab
<Crell> OK, stand by.  It could take a while to crash again.
<Crell> Thanks.
<jdzielny> hello everyone.  I'm trying to install Ubuntu Server 14.04 LTS and configure 2 of the drives in my system as a RAID 1 array, but every time I try to use the installer to do it (using the Configure software RAID option) it freezes
<Crell> OK, so IRQ 23 has eth0, as well as usb5.  It's going nuts with over 700k counts.  (Whatever the CPU0 column means.)
<Crell> I don't know what port the hard drive is on... there's several usb irq ports but none that say scsi.  (It's a SATA drive, which usually lists as SCSI, IIRC?)
<Crell> OK, there's the broken pipe message again from rsync.
<Crell> ppetraki rberg-: I have proc/interupts up on watch.  What am I looking for on it?
<ppetraki> Crell, the counters associated with the driver name of the device, the label is all the way to the right like enpFOO for network device
<Crell> OK...
<Crell> IRQ 23 is the most active, and is labeled IO-APIC-fasteoi    uhci_hcd:usb5, eth0
<Crell> Which I think means it's being shared by usb5 and eth0.
<Crell> Which, if usb5 actually means "SATA drives", would explain why it's well above 7 million after rsyncing across the network.
<ppetraki> Crell, can you identify which one is your storage controller?
<Crell> None stand out.
<Crell> There are two pata_via lines, but they're almost empty which makes sense as I have no PATA drives installed.
<Crell> usb4 is also ahci,
<ppetraki> so what does your RAID run off of? the onboard sata or like a LSI HW RAID?
<Crell> usb1 is also uhci_hcd:usb3
<Crell> RIght now there's no RAID, just one SATA hard drive.
<Crell> When I had it configured for RAID it was a 3 drive software RAID setup, created by the Ubuntu installer.
<ppetraki> OK
<ppetraki> so AHCI would be the storage controller
<Crell> So that's on interrupt 21, whcih has a counter of about 6k.
<Crell> IRQ 23 is the network, and the one that's super high.
 * Crell hasn't thought about IRQs since the DOS days...
<patdk-wk> well, that is cause mostly, irq sharing was solved
<patdk-wk> but that doesn't mean drivers do it correctly, yet
<Crell> Apparently.
<Crell> What's weird is that I never had this issue in older kernels, only in 14.04.
<Crell> But... my laptop (where I'm rsyncing the data from) is also 14.04 (Kubuntu), and it hasn't fallen over once.
<ppetraki> there's something to be said for not upgrading :)
<patdk-wk> what kind of network card?
<Crell> on-board VIA something or other.  I forget the model.
<Crell> VT6102, Rhine II (according to lspci)
<Crell> So does this sound like the msi-x thing you mentioned would be applicable, if they're not sharing irqs?
<ppetraki> doubt it
<Crell> drat
<Crell> What would be?
<Crell> Is there even a theory as to the root issue?
<patdk-wk> msix doesn't fix the irq sharing, it just makes irqs happen much less
<Crell> Ah.
<ppetraki> install and configure kdump, collect a dump, post backtrace to lp bug.  http://blog.zedroot.org/linux-kernel-debuging-using-kdump-and-crash/
<Crell> Oy.
<Crell> That's the "punt to the developers" option then?
<ppetraki> or install the 12.04 kernel, might be able to get away with it using pinning, not sure if it'll work though
<adrian_lc> hello, is there a way to install a package without launching its service?
<Crell> Running a 2 year old kernel on a current distro sounds like something well out of my expertise area.
<ppetraki> mostly, sometimes we get lucky and can find that same backtrace on interwebs and viola patch!
<Crell> Ie, I fear even more "weird things" happening.
 * Crell tries to make sense of this blog post to get a dump.
<Crell> ppetraki: Would kdump work if I'm not getting a panic most of the time, just a network disconnect?
<ppetraki> Crell, it only covers half of your cases, correct, but if you deliberately stress it you'll hopefully run into it sooner than later
<Crell> Lovely.
<Crell> So install this, and then beat on it until I can cause a kernel panic?
<ppetraki> yup
<Crell> Sigh.  Not how I had hoped to spend my day.
<ppetraki> there's option C, use this as excuse to buy a new all Intel MB
<Crell> I have had that suggested to me.
<Crell> Why intel MB specifically?
<patdk-wk> well, intel southbridge and nic :)
<Crell> That would essentially mean whole new computer, as everything else is a 9 year old AMD Athlon-based system. :-)
<ppetraki> it's more like, why AMD?
<ppetraki> not to mean anything that AMD is substandard or anything it's just for production stuff Intel is the most well traveled
<Crell> But overall it sounds like a Linux kernel incompatibility with something or other on the motherboard?
<Crell> Oh you mean Intel CPU compatible, not specifically non-Asus/Gigabyte/whoever else makes motherboards these days.
<ppetraki> yup, that's right, which leads to the decision point of 1) downgrade 2) instrument and hope someone can help you 3) upgrade
<patdk-wk> well, most of those use intel chipsets
<patdk-wk> the ones that use intel cpu but via/nvidia/... chipsets, can still be flaky
<Crell> Blargh.
<ppetraki> where 3) is upgrade HW
<patdk-wk> it might be, just installing a different nic, will get around the issue
<patdk-wk> if the issue is in the nic driver code
<ppetraki> good idea
<Crell> hm.
<Crell> Let me see if I have a spare NIC lying around.  If I do it's probably as old as the rest of the computer. :-)
<patdk-wk> then it will likely be compatable with that system :)
<Crell> I have several modems and multiple sound cards, but no wired NIC.  I DO, however, have some old wireless PCI cards. :-)
<Crell> Worth a shot, I guess.
<patdk-wk> last month, I finally trashed all my old cards
<patdk-wk> I no longer have any pcix, pci, isa nics
<Crell> I have a closet full of hardware I was going to audit and then recycle when I finished the server.
<Crell> That was a week ago...
<Crell> I used to be a hardware geek, a decade-plus ago.  Then I decided being a software geek was cheaper. :-)
<patdk-wk> garage+storage unit
<patdk-wk> finally removed the old cisco 5500's
<patdk-wk> had like 180 100mbit ports on those things
<Crell> When I moved from my apartment to a house a few years ago, I donated 7 computers to Free Geek.
<Crell> That was only half my closet.
<patdk-wk> oh, around 440ports
<Crell> I have a Pentium II CPU in my drawer; 2 Pentium III systems.
<patdk-wk> ya, I finally got ride of mine a year ago of those
<Crell> A box that I THINK has a Pentium II 400 MHz in it... :-)
<patdk-wk> I did still have some of the p3 cpu blanks
<patdk-wk> just have no reason to run pcix anymore, everything I have is pcie now, and the old pcix stuff is too slow or has limited 64bit support making it annoying
<Crell> I couldn't actually tell you which of my cards was which...
<Crell> And I fail at getting a wireless card to work in LInux.
<lnxmen> hello
<lnxmen> is there anyone?
<lnxmen> I need help with ubuntu 12
<lnxmen> and nginx
<lnxmen> every vhost returns 500/502
<teward> lnxmen: error logs - look at them
<teward> /var/log/nginx/error.log by default
<teward> read that, check what the errors are, see if you can figure it out, if not, copy-paste the error logs into a pastebin and give us the link
<teward> (note I need to head home for the end of the day so I will be slower to respond)
<lnxmen> teward: I did that.
<lnxmen> I will paste it somewhere.
<lnxmen> Generally, php-fpm says that connection was refused.
<lnxmen> http://pastebin.com/ifDRuyWF
<nxvl> is launchpad down?
<pmatulis> no
<Tobbe-82|ServerI> Good evening guys, so I'm working on setting up a webserver and I keep running into problems (on my 5th re-install). First step is to get a static ip going and it's here that I'm running into trouble. I know I should edit /etc/network/interfaces and put iface eth0 inet dhcp to iface eth0 inet static and add address, netmask, network, broadcast, gateway & dns-nameservers. My problem is
<Tobbe-82|ServerI> that I'm not 100% sure as I'm behind a normal router in a home network. I've asked my ISP to set up a static IP adress on their end so if anyone could help me understand this it would really mean a lot to me
<pmatulis> Tobbe-82|ServerI: what is your ultimate goal?
<Tobbe-82|ServerI> my ultimate goal is to be able to run a few personal website project on my own computer in my own home network. I have been following a tutorial called The Perfect Server - Ubuntu 14.04 but ran into issues that I think comes from not having configed a static ip properly in the beginning
<Tobbe-82|ServerI> (I rather like the idea to being able to use a web interface to create databases etc)
<pmatulis> Tobbe-82|ServerI: who will access the website(s)?
<pmatulis> (clients on private/home network or on the public internet)
<Tobbe-82|ServerI> At start myself and my brother for a more private information archive based website. I also have a community/forum idea that I'd like to start off in my own server enviroment until it grows
<Tobbe-82|ServerI> (I also would like to learn how to set up a proper server in the process).
<pmatulis> so not available from the internet then right?
<Tobbe-82|ServerI> available from the web so it needs to be public
<pmatulis> Tobbe-82|ServerI: need to ask again.  do you want people from the internet accessing your website?
<Tobbe-82|ServerI> No problem pmatulis, I appreciate the help. Yes people from the internet need to be able to access my websites
<pmatulis> ok
<pmatulis> so you'll need to come up with a DNS strategy so people can access your website with a name
<Tobbe-82|ServerI> right now I'm just trying to get this working on my local network. but end goal is that yes. (Still waiting for my static ip from my ISP, then i'll need to config my domain name to somehome use nameservers to link this all together).
<Tobbe-82|ServerI> Have no clue on how to do that but figure one step and problem at a time ;)
<pmatulis> there are 2 modes to do this.  ① get a static IP address from your ISP and set up a static name:address mapping in DNS or ② get a dynamic IP address from your ISP and change the mapping whenever that address changes (using a utility often called a 'dyndns client')
<pmatulis> the IP address here is 99.9% not for your ubuntu server.  it is for your router, usually bundled with your modem these days
<Tobbe-82|ServerI> I understand that my public ip adress towards the web is different from my local area network one.
<Tobbe-82|ServerI> so in order to set up a proper ubuntu server static ip I would need to have the static ip from my ISP first?
<pmatulis> when traffic hits your router/modem you redirect it to your server (called 'port forwarding')
<Tobbe-82|ServerI> ahh yes, it what I had in mind, just don't know the practicalities of it. I figured i'd work on getting the server set up and then put that on or is that thinking backwards?
<pmatulis> yes, backwards
<pmatulis> set up your website.  test it locally using it's private IP address.  you can set up DNS after when you want to provide access to the internet
<pmatulis> s/it's/its
<Tobbe-82|ServerI> allright, so first wait to get the static IP from ISP then configure my server with port forwarding. Wouldn't I need to have set a static IP in ubuntu in order to have a static ip to do port forwarding to?
<Tobbe-82|ServerI> ahhh good
<Tobbe-82|ServerI> so I guess then my question becomes how do I set up a "local" static IP for ubuntu that will just work for my local network?
<pmatulis> yes, port forwarding should be sending to a static address.  but with dhcp (dynamic) you can reserve an address for your server's hardware (MAC address); it's effectively static
<Tobbe-82|ServerI> so there is a way to use the MAC adress as a static adress and then automatically compensate for the dhcp given ip adress?
<pmatulis> every network card's port has a hardware address (in hexadecimal format).  when a dhcp client asks for an IP address it broadcasts it using it's hardware/MAC address.  you can instruct a dhcp server to provide a specific IP address to that specific MAC address
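(On routers that run dnsmasq, the reservation pmatulis describes is one line; the MAC and IP below are placeholders, and most home routers expose the same mapping in their web UI as a "static lease":)

```
# Always hand this MAC the same address - effectively a static IP:
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50
```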
<Tobbe-82|ServerI> interesting, I did not know that. Would setting that dhcp setting be in my router or a dhcp server I create in my ubuntu server setup?
<pmatulis> these days every router/modem has a DHCP server built in.  use it
<pmatulis> it will have a web interface that you will allow you to configure these things
<Tobbe-82|ServerI> Oh yes, I just need to figure it out in regards to my router. Thank you so much pmatulis, I'll start here and will most likely frequent this channel daily from here on :)
<pmatulis> that's also where you configure port forwarding
<Tobbe-82|ServerI> Learning quite a lot in the process :)
<pmatulis> Tobbe-82|ServerI: np, good luck
<Tobbe-82|ServerI> thanks
#ubuntu-server 2014-12-30
<teward> any of our glorious server team devels around to answer a question?
<Patrickdk> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<teward> Patrickdk: it's developer-specific - getting an FTBFS on the powerpc build of the nginx merge that just got uploaded, looks like it's unrelated to code issues, but i wanted a second opinion - https://launchpadlibrarian.net/193675980/buildlog_ubuntu-vivid-powerpc.nginx_1.6.2-5ubuntu1_FAILEDTOBUILD.txt.gz
<teward> looks like maybe a ddebs issue, but not entirely sure
<teward> also asked in -devel but got nothing back
<teward> i'm not that worried about powerpc though
<rsully> when I run a livecd/liveusb does it load the OS into ram, or does it actually boot and run from the physical media?
<rsully> whoops probably should have been on plain #ubuntu, but either way
<MrZhi> Hey, is it possible to do an install of server over top something else like fedora core? Like would it import existing data? I'm googling and have been up for 18 hours so far with a long night ahead so any quick guidance is appreciated
<Tobbe_> Good morning everyone, I have a quick question as I am quite new to working with ubuntu and setting up a server (ubuntu server 14.04). So far i've reinstalled it 5 times due to various errors and now I am doing things block by block then verifying that it don't throw errors by rebooting. Is there a way to run the same checks that is done during boot (ie. check the load network [ ok ] styled tests etc. without rebooting?
<blackyboy> Which software can do a incremental backup in ubuntu server, I want to do incremental backup from my ubuntu server to Amazon s3, i have tried with backup gem but it's gives lot of problem. Any other software which can provide tar incremental to s3 ?
<andol> blackyboy: If you are fine with the classic model of full and incremental backups, then duplicity isn't a bad choice. It supports backing up directly to S3, and it does encryption.
<blackyboy> andol: thanks let me have a look into it. :)
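For reference, the classic full-plus-incremental setup andol describes might look roughly like this with duplicity (bucket name, paths, and GPG key ID are placeholders; duplicity's S3 backend reads AWS credentials from the environment):

```shell
#!/bin/sh
# Sketch: encrypted incremental backup of /var/www to S3 with duplicity.
# All names below are placeholders -- adjust for your own setup.
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"

# duplicity does a full backup the first time and incrementals afterwards;
# --full-if-older-than starts a fresh full chain once a month.
duplicity --full-if-older-than 1M \
    --encrypt-key YOUR_GPG_KEY_ID \
    /var/www s3+http://your-bucket/www

# Keep only the two most recent full chains.
duplicity remove-all-but-n-full 2 --force s3+http://your-bucket/www
```

Run from cron, this gives you the "tar incremental to s3" behaviour blackyboy asked for, with encryption on top.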
<ElfBot> Anyone mind having a really quick look at my DNS file and see if they can spot the issue? http://pastebin.com/sFCgWD5A
<johncarper> I've made a special FTP user on my server that's running vsftpd with a chroot jail, made a folder "FTP-shared" in the /home directory with the FTP user's access limited to /home/FTP-shared, and set the shell to /bin/false. Inside the FTP-shared folder I have a folder "download" with chmod 755 and "upload" with chmod 777
<johncarper> is the upload folder with chmod 777 a security risk on my server in this scenario?
<bananapie> my ISP has given me an IP and a subnet (the subnet is a /30, so one IP and one gateway). I only have 1 server with 1 network card. Is there any way I can set up the subnet so I can run services on the IP in the subnet without a router or second server? I was thinking maybe with a virtual interface or something
<pmatulis> bananapie: give a concrete example of what you actually want to achieve, without the mumbo jumbo
<bananapie> actually, I really don't know how to make it more concrete. I think what I want to do is impossible and I have to drive back to the data centre and install a router.
<bananapie> :(
<maxb> bananapie: Try explaining what you actually want to do instead of what you think you need to do to achieve that
<maxb> It sounds to me like what you need to do is precisely nothing, not go buy a router
<vexati0n> hey -- i have an ubuntu 14.04 server with 2 NICs. until now, the 2nd nic has been inactive, but i need to assign it an address. i edit /etc/network/interfaces, then ifup eth1, and i get the IP correctly. but 30-60 seconds later, the system removes the IP and leaves the interface unconfigured
<vexati0n> i need to fix this *without* rebooting
<dasjoe> vexati0n: find out why it's deconfiguring the interface. /var/log/syslog or kern.log should help
<bananapie> ok pmatulis and maxb, i'll try again.
<bananapie> I have a server that is connected to a new internet connection. They gave me two IPs. One is the IP that can reach the ISP's gateway, the other is an IP that I have to route via the first IP. I don't want to install a router.
<maxb> You mean they gave you two subnets?
<bananapie> so on the one interface, I have IP 192.168.20.150 on a /30. The ISP's gateway is 192.168.20.149. My second IP is 192.168.20.1 it's also a /30.
<bananapie> yes
<bananapie> so normally, I would have to install a router to use the IP 192.168.20.1 and 192.168.20.2, but I don't want to install a router. Can I just stick 192.168.20.1 on a virtual interface and the kernel will pretend to route it via 192.168.20.150 ?
<maxb> Do you actually mean they gave you those IPs, or are you anonymising them?
<maxb> Those are rfc1918 private IPs, so not reachable from the internet
<bananapie> I know they are "private" IPs. So the server on their end has a firewall that will only allow packets from 192.168.20.1, but their modem/router is expecting 192.168.20.150 on the connected NIC.
<bananapie> I think if I set ip_forward in the kernel to 1 and I set both IPs on a network card, the kernel will figure it out.
<maxb> That's a pretty weird setup they've given you
<bananapie> yes.
<maxb> Setting ip_forward is irrelevant if you've just got the one machine
<bananapie> Normally, I should stick a router in there that has 192.168.20.150 on one side with a default gw of 192.168.20.149 / 255.255.255.252, and on the other side of the router 192.168.20.2, with my server connected to this interface with 192.168.20.1
<bananapie> but I don't want to install a router.
<qman> That is an impossible configuration
<qman> Oh wait, /30
<bananapie> No. it's not impossible. It's confusing because they subnetted a 'private' subnet
<qman> Why do you need both at all then?
<qman> The one that's connected to the isp side should be all you need
<bananapie> ok
<bananapie> thanks
<qman> In your theoretical router setup
<maxb> I can't figure out what this ISP is trying to accomplish. I guess I'd first try to see if I could actually talk to the internet using the .150 IP as the source
<bananapie> ok. It works. so I did this 'ifconfig eth4:1 192.168.20.1 subnet 255.255.255.252' ; 'ifconfig eth4 192.168.20.150 192.168.20.1'; 'route add default gw 192.168.20.149' and everything works. I can ping using ping -I 192.168.20.1 [ any ip here ]
<bananapie> thanks.
<qman> This is a really weird setup, and I can think of a couple possible scenarios, but they'd be really silly
<bananapie> ok, the above commands seem to work.
<bananapie> thanks guys
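For the record, the same setup in the modern ip syntax (addresses and the eth4 interface name taken from bananapie's commands; untested sketch) would be:

```shell
# Address facing the ISP's gateway (192.168.20.148/30 network):
ip addr add 192.168.20.150/30 dev eth4
# The routed, firewall-permitted IP as an additional address on the same NIC:
ip addr add 192.168.20.1/30 dev eth4
ip route add default via 192.168.20.149
# Confirm the second address works as a source, as bananapie did with ping -I:
ping -I 192.168.20.1 -c 1 192.168.20.149
```

No ip_forward or virtual eth4:1 alias is needed: the kernel happily holds both addresses on one interface and picks the source address you ask for.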
<Annoyed> Greetings
<Annoyed> Quick question about the isc-dhcp-server config file.   It has a commented-out declaration for a 10.* subnet, "subnet 10.152.187.0 netmask 255.255.255.0", with a comment above it that it won't be used.
<Annoyed> Anyone know what that's about?
<kriskropd> I accidentally stopped a do-release-upgrade in the middle of reconfiguring samba on an LTS box - the machine thinks it is running the latest stable LTS now, however about 60 packages now have errors within them and I don't know how to fix them - I've tried 'sudo apt-get update; sudo dpkg --configure -a; sudo apt-get update' but that ends with the last apt-get update failing due to "Processing was halted because there were too many errors.", which it states after listing the 60 or so packages that seem to be damaged
#ubuntu-server 2014-12-31
<rzeka> I am about to set up automatic backups on a server but I'm wondering: is it better to connect to the target machine from the source, or to the source from the target? In the 1st case, when I have 3 different sources, I cannot tell if the previous backup is done, so I might get 2 backups running at the same time. In the 2nd method, if the backup server is hacked, anyone may get access to the other servers with ease (login through ssh keys)
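For the first concern (two backups of the same source overlapping), a lock file sidesteps the race regardless of push or pull direction. A minimal sketch using flock from util-linux (lock path and the rsync target are illustrative):

```shell
#!/bin/sh
# Take an exclusive lock before backing up; -n makes flock fail
# immediately instead of queueing a second run behind the first.
LOCK=/tmp/backup-web1.lock

if flock -n "$LOCK" true; then
    # the real job would run under the lock, e.g.:
    #   flock -n "$LOCK" rsync -a /srv/www/ backuphost:/srv/www/
    echo "lock free: backup would start"
else
    echo "previous backup still running, skipping"
fi
```

Each source gets its own lock file, so three sources can run independently while never doubling up on themselves.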
<vidarne> with taskset you can set a specific running program to a core as root/superuser, but is there a way for a regular user to be allowed to set what core a program shall use? i have 3 game servers running and i don't want them to use cores/threads 1-3.
<vidarne> is it visudo i have to use for that ?
<DonRichie> vidarne: You can give root permission for specific commands with sudo
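A sudoers drop-in along the lines DonRichie suggests might look like this (the user name is illustrative; always edit with visudo -f so a syntax error can't lock you out of sudo):

```shell
# /etc/sudoers.d/gameadmin-taskset  (hypothetical file)
# Let user 'gameadmin' run taskset as root, and nothing else:
gameadmin ALL=(root) NOPASSWD: /usr/bin/taskset
```

That user can then run e.g. `sudo taskset -cp 4-7 <pid>` to pin a running game server to cores 4-7, away from cores/threads 1-3. Note the grant is broad: taskset as root works on any PID, which may be more than you want to hand out.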
<Kartagis> postfix/smtp[8844]: connect to mail.example.com[xxx.xxx.xxx.xxx]:25: Connection refused, how come?
<Kartagis> guys, it seems that I can't receive any mail and I receive this in the log
<Lartza> Not sure if I should ask this on php or apache so... Running Apache2 with php5-fpm and proxypassmatch, problem with aliases on webapps like phpmyadmin
<Lartza> Getting a "primary script unknown" error but there's no better info anywhere
<Lartza> I use ProxyPassMatch pointing to fcgi://127.0.0.1:9000/var/www/html/$1 and Alias /phpmyadmin /usr/share/phpmyadmin
<Lartza> Fixed I think :)
<dav1dp0101> Hey, does anyone know how to remove the X windowing system and any display manager and graphics manager I may have installed? I think I installed a few different types but I don't remember what.
<jsonperl> hiya, I have a fairly non-ubuntu related "disaster recovery" question, but you folks are sharp :)
 * patdk-wk goes around popping balloons
<jsonperl> I want to host a backup web app, ready to failover if my "primary" app has issues
<patdk-wk> and?
<jsonperl> My initial thought was to take care of it via a CNAME change when said issue occurs
<jsonperl> And keep an A record already pointing to the "spare"
<patdk-wk> doesnt matter
<jsonperl> Though now I'm thinking I'll still have the same issues with TTL as I would with just a straight A record
<patdk-wk> so your question is ONLY related to dns failover?
<jsonperl> failover in general
<patdk-wk> nothing else in the scope? like failover disk data, the application itself, webservers, ....
<jsonperl> You would likely use a load balancer, and redirect?
<jsonperl> Just the webserver
<patdk-wk> there are like 5 good ways to do it
<jsonperl> Do you have a pref?
<patdk-wk> but every method has it's own time and scaling issues
<jsonperl> I like the CNAME route, since it's drop dead simple
<jsonperl> but DNS prop may be a bit of an issue
<jsonperl> and it's obviously not automated
<jsonperl> but this is a disaster situation, so I'm somewhat unconcerned about that
<patdk-wk> why is it not automated?
<jsonperl> cname switching?
<patdk-wk> ya
<jsonperl> I mean, it could be I spose
<patdk-wk> it always was for me
<patdk-wk> I had each dns server test reachability
<patdk-wk> and only serve whatever one was usable
<jsonperl> I'm not running my own dns
<jsonperl> I'm on route 53
<patdk-wk> if the dns servers can't test, then you just have to go *best* guess
<jsonperl> perhaps they have some automation for that though
<patdk-wk> and assume the dns servers have the same reachability as your testing location
<jsonperl> btw happy new year pat
<patdk-wk> they do
<patdk-wk> but I hadn't used it too much, so can't remember if it's good enough, think it is though
<jsonperl> that may be the best solution
<jsonperl> least moving parts
<Novice201y> Hi. How can I limit the TLS version to ignore SSL3 on my Ubuntu 12.04 Server?
<Novice201y> Hi. I run OpenVPN Access Server on a VPS's Ubuntu 12.04 and want to limit the TLS version so that accessing /admin via https uses something higher than SSL3.
<qman> You will need to adjust the web server's SSL configuration
<qman> How precisely you do that depends on which web server you're running
<Novice201y> qman: I don't think so - I installed only OpenVPN Access Server on this Ubuntu, changed options under the SSL tab, but it still asks for SSL3 on connection.
<qman> A pretty decent guide for securing SSL on some common softwares: https://wiki.mozilla.org/Security/Server_Side_TLS
<qman> If the OpenVPN Access Server runs its own web server, you will have to check with their documentation on how to configure it
<qman> The algorithm selection is handled by the service, it's not a systemwide setting
<Novice201y> qman: Thanks
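For a stock Apache the Mozilla guide qman linked boils down to one directive; whether OpenVPN AS's bundled web server reads an equivalent option is a question for its own documentation, as qman says:

```apache
# e.g. in /etc/apache2/mods-available/ssl.conf on 12.04:
# allow all protocols except the broken SSLv2/SSLv3
SSLProtocol all -SSLv2 -SSLv3
```

After the change, `apache2ctl configtest` and a reload apply it; an external scanner (or `openssl s_client -ssl3`) confirms SSLv3 is actually refused.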
<Aison> hello
<Aison> I would like to setup several vlans
<Aison> with network/interfaces it works
<Aison> and I guess with network/interfaces vconfig is used
<Aison> but now I would like to use GVRP to announce the VLAN
<Aison> this can be done eg. with
<Aison> ip link add link eth0 eth0.260 type vlan id 260 gvrp on loose_binding on
<Aison> can I somehow define a VLAN device inside network/interfaces so that GVRP is used?
<Aison> or maybe I need to define a inet manual device?!?
<jvwjgames> Hi
<jvwjgames> I need emergency help with my website
<jvwjgames> my website is suffering severe problems
<jvwjgames> i need help can someone please help me
<cryptodan> what kind
<jvwjgames> let me explain
<jvwjgames> here is my website
<jvwjgames> try to goto it
<jvwjgames> http://jvwjgames.net
<cryptodan> whats the issue
<jvwjgames> my customers receive an http error 504 gateway timeout error
<cryptodan> I can get to it
<jvwjgames> really
<cryptodan> http://i.imgur.com/YmI2sPF.png
<jvwjgames> hmmm
<jvwjgames> on my phone i use mobile data and get an http error 504
<jvwjgames> but the internal network can get to it
<cryptodan> when i use www.jvwjgames.net I get a not found
<jvwjgames> ya that expected cause i don't have an A record for www.jvwjgames.net only jvwgjames.net
<cryptodan> but without the www's I get to it
<jvwjgames> *jvwjgames.net
<jvwjgames> ya
<jvwjgames> but for some odd reason my phone can't from out side my network
<jvwjgames> is it my server issue or is it tmobile issue
<cryptodan> tmobile
<jvwjgames> yes tmobile
<cryptodan> you can check via www.network-tools.com or another site like it
<jvwjgames> ok this is strange
<jvwjgames> it says it is ok on a website tester i used but my phone it has an http error 504
<jvwjgames> hmmm
<cryptodan> then its a tmobile issue
<jvwjgames> hmm i just don't get it
<jvwjgames> cause all other websites work
<jvwjgames> on tmobile
<jemejones> does anyone know of a tool that can see if any packages that i have installed have a known vulnerability (by pulling a feed from a cve database)?
<jemejones> i think maybe nessus can do it
<cryptodan> yes nessus can
<jemejones> i'm trying to figure out if openvas (a fork of nessus from way-back-when) can do it
<jemejones> ok - awesome
<cryptodan> jvwjgames: could be an issue between tmobile and comcast
<jvwjgames> comcast?
<jvwjgames> did you do an ip lookup
<jvwjgames> lol
<cryptodan> jvwjgames: yup
<jvwjgames> nice
<cryptodan> but it works for me so its an issue with tmobile
<jvwjgames> i have a tracert program on my android phone and it can do a tracert just fine
<jvwjgames> from mobile data
<cryptodan> traceroute is performed via ICMP or UDP it doesnt care about port 80
<jvwjgames> this is really strange
<jvwjgames> i can get to the site via mobile data if i use https but can't if i use http
<jvwjgames> ok
<jvwjgames> i found the problem
<jvwjgames> This is a known problem with US T-mobile - they fail to route to certain web services, including to Memsource. However, there is a workaround: Always use https for all communication, including using https from Memsource Editor. That should get you connected.
<jvwjgames> You should report this issue to T-Mobile, so that they fix this. Their users seem to have similar problems connecting not just to Memsource:
<jvwjgames> so this is a known issue on T-Mobile apparently
<cryptodan> now I know another reason not to use tmobile
#ubuntu-server 2015-01-01
<Aison> I asked about GVRP Support in Ubuntu Server, now I found my own solution
<Aison> maybe somebody is interested
<Aison> in "/etc/network/if-pre-up.d/vlan" replace "vconfig add $IF_VLAN_RAW_DEVICE $VLANID" by "ip link add link $IF_VLAN_RAW_DEVICE $IFACE type vlan id $VLANID gvrp on loose_binding on"
<Aison> and in "/etc/network/if-post-down.d/vlan" replace "vconfig rem $IFACE" by "ip link delete $IFACE"
<Aison> this change should even be made in the official release. vconfig should be replaced by ip link add anyway
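An alternative to patching the vlan hook scripts is an inet manual stanza that runs the ip commands itself, so the stock hooks never fire; a sketch using the interface name and VLAN id from Aison's example:

```shell
# /etc/network/interfaces fragment -- GVRP-enabled VLAN without vconfig
auto eth0.260
iface eth0.260 inet manual
    pre-up ip link add link eth0 name eth0.260 type vlan id 260 gvrp on loose_binding on
    up ip link set eth0.260 up
    post-down ip link delete eth0.260
```

This keeps the change local to one interface instead of altering /etc/network/if-pre-up.d/vlan for every VLAN on the box, at the cost of repeating the boilerplate per VLAN.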
<TimR> Hi guys, I am having apache2 issues: the server isn't redirecting to the correct site. I have the sites enabled but it's not redirecting
<DonRichie> TimR: Error 204: Not enough information to compute solution. Aborting...
<lnxmen> Probably he has wrong root path in virtual host.
<lnxmen> Or wrong configured .htaccess
<lnxmen> Or simply he did not restart apache.
<stiv2k> hi
<lnxmen> Hi
<stiv2k> what is the simplest way to run a command as root once a day
<lnxmen> Use cron
<stiv2k> to restart my tf2 server i want it to run service tf2-server restart
<stiv2k> but it has to run with root permission
<lnxmen> Yes, that's possible.
<Patrickdk> ssh into the server
<stiv2k> lnxmen: i just put it in /etc/cron.daily or whatever?
<lnxmen> stiv2k: /etc/crontab
<stiv2k> that's roots crontab?
<Patrickdk> na, don't use /etc/crontab
<Patrickdk> use /etc/cron.daily/xxxx
<Patrickdk> root has its own, using crontab -e -u root
<stiv2k> oh
<lnxmen> Patrickdk: Why not?
<lnxmen> I am just curious.
<stiv2k> so make a sh script, #!/bin/sh, then on the next line service tf2-server restart, save the script in /etc/cron.daily with +x permission
<lnxmen> I always used it in this way.
<Patrickdk> lnxmen, cause then you will break every UPGRADE you do to the system
<Patrickdk> why have to manually fix that file? when /etc/cron.daily was provided for this purpose?
<Patrickdk> yes you can, but you have to FIX it yourself too
<lnxmen> Okay. Thanks for the explanation.
<cryptodan> server scripts dont go in cron
<stiv2k> ?
<cryptodan> stiv2k: what are you trying to do?
<lnxmen> He tries to execute script as root daily.
<cryptodan> stiv2k: bad idea
<cryptodan> and also the tf2 server daemon will auto restart upon crash
<cryptodan> stiv2k: read this https://wiki.teamfortress.com/wiki/Linux_dedicated_server
<stiv2k> cryptodan: its about the updates
<stiv2k> cryptodan: they update the server version pretty often, every week or so, and it checks for updates when it restarts
<stiv2k> so if it doesnt restart, it falls behind in version and nobody can play on it
<cryptodan> stiv2k: you still should not be running it as root
<stiv2k> the server itself does not run as root
<stiv2k> it runs via init script
<stiv2k> that starts it under a normal user
<cryptodan> stiv2k: but if you run the tf2server script as root in cron it runs as root
<stiv2k> using start-stop-daemon
<stiv2k> start-stop-daemon --start --chuid $USER --user $USER --chdir $BINARYPATH --exec "/usr/bin/screen" -- -dmS $SCREENREF $BINARYPATH/$BINARYNAME $OPTS
<stiv2k> ^^^
<stiv2k> so the server never runs as root
<cryptodan> so then you install the cron setting under the user as your tf2 server runs under
<stiv2k> no
<stiv2k> that user cannot run init scripts
<stiv2k> init script can only be run from sudo
<cryptodan> then when that cron entry starts, it starts all processes after that as root
<cryptodan> so follow that guide I linked to https://wiki.teamfortress.com/wiki/Linux_dedicated_server to do it right.  and your server should never fall behind as when you start it up it should autocheck for updates
<stiv2k> yes it does but it needs to be restarted periodically to check the updates
<cryptodan> it auto checks
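stiv2k's plan from the exchange above, written out as an actual cron.daily job, might look like this (name and path are hypothetical; note that run-parts skips files whose names contain a dot, so no .sh suffix):

```shell
#!/bin/sh
# /etc/cron.daily/tf2-restart  (hypothetical)
# cron.daily jobs run as root; privilege dropping is handled inside the
# init script via start-stop-daemon --chuid, as shown earlier in the thread,
# so the game server itself still runs as the unprivileged user.
service tf2-server restart
```

It must be executable (`sudo chmod +x /etc/cron.daily/tf2-restart`); the daily restart then triggers the server's own update check on startup.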
#ubuntu-server 2015-01-02
<benpardo> I'm very new to setting up my own server. I'm working with Nodejs and it seems like things are going well, but I'm having trouble making my site accessible beyond localhost. Is anyone willing to take just a little time to talk with me and answer questions? I would really appreciate it. Thank you.
<pmatulis> !ask | benpardo
<ubottu> benpardo: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<Patrickdk> !ask pmatulis
<Patrickdk> it won't ask you :(
<benpardo> Thank you ubottu! Glad to!
 * ObrienDave blinks
<benpardo> Why is my Ubuntu server only working with localhost and not fulfilling requests from the public internet?
<Patrickdk> you have the firewall turned on
<Patrickdk> you configured your application incorrectly
<Patrickdk> you could have done a million other things
<pmatulis> there is no cable where it should be
<Patrickdk> we could guess for the next 10years
<benpardo> I'm hosting on digital ocean
<benpardo> How can I check that my firewall is turned on?
<pmatulis> benpardo: what is the public IP address?
<benpardo> Patrickdk, I'm glad to make it easy for everyone to gather knowledge about my problem. I'm just trying to find ways to work at it.
<Patrickdk> netstat would be a good starting place
<Patrickdk> followed by tcpdump
<benpardo> http://104.131.102.147
<Patrickdk> and your config
<benpardo> what is my config?
<benpardo> How can I evaluate it
<Patrickdk> I dunno, I didn't install nodejs
<Patrickdk> you claim you did though
<benpardo> pmatulis, would you like me to make the server listen?
<benpardo> Patrickdk, I did install nodejs
<benpardo> I'm not running a highly sophisticated operation. I have metalsmith.io generating static pages and serving them with metalsmith-serve
<pmatulis> benpardo: what port is you web server supposed to be listening on?
<benpardo> pmatulis: I just launched the app. I'm listening at port 8080
<pmatulis> benpardo: confirmed
<benpardo> pmatulis, what is confirmed?
<pmatulis> benpardo: it is listening on that port
<benpardo> pmatulis, how do you know that?
<pmatulis> benpardo: but nothing loads when i point my client at that IP and port
<benpardo> pmatulis, so it is publicly available
<pmatulis> benpardo: i scanned that IP for that port
<benpardo> what application did you use on linux?
<pmatulis> benpardo: but it says 'filtered'
<benpardo> pmatulis: what does that mean?
<benpardo> pmatulis: also, the site works on local host.
<benpardo> when I connect
<pmatulis> benpardo: since this is a cloud instance, normally you need to set up a security group.  it's essentially a firewall at the cloud level.  did you do that?
<benpardo> pmatulis: I did not. How do I?
<pmatulis> benpardo: using the digital ocean control panel thingy
 * pmatulis has never used DO
<benpardo> pmatulis: I think I found the lever. It has a button you can push to make it publicly available via ipv6
<benpardo> Do you think that is it?
<pmatulis> benpardo: looks like i am AWS-centric.  DO does stuff differently.  it looks like you simply need to configure the firewall local to the instance (directly with iptables)
<benpardo> pmatulis, can you try what you did again?
<pmatulis> https://www.exratione.com/2013/06/a-few-notes-on-migrating-an-ubuntu-instance-from-aws-to-digital-ocean/
<pmatulis> benpardo: no change
<pmatulis> benpardo: hint, use ufw as a frontend to iptables
<benpardo> pmatulis: Thank you. This is helping me. I don't feel like I'm floating on a raft any more. I've got stuff I can try.
<benpardo> I really appreciate it.
<pmatulis> benpardo: great.  come back with more specific questions and someone will surely help
<benpardo> pmatulis: Thank you so much. I've got some stuff to work with now
<benpardo> pmatulis: just setup my firewall. Still having the same problem at the same port: 8080
<pmatulis> benpardo: what is the output of 'sudo iptables -L -n' ?  use a pastebin
<benpardo> http://pastebin.com/hWH24Yxr
<benpardo> pmatulis, voila: http://pastebin.com/hWH24Yxr
<cryptodan> benpardo: what is your site's url?
<benpardo> mindfire.vision
<cryptodan> I dont think .vision is a TLD
<benpardo> cryptodan: .vision is a new top level domain
<cryptodan> I dont see it listed on godaddy
<cryptodan> benpardo: port 80 is not available on that domain
<pmatulis> benpardo: i do not see TCP port 8080 in your rules and i believe (i'm not an iptables expert) you have 'deny by default' for the 'public' chain
<benpardo> cryptodan: do you want to see what ufw status says?
<cryptodan> benpardo: [cryptodan@alphacentari ~ ]$ curl 104.131.102.147
<cryptodan> curl: (7) Failed to connect to 104.131.102.147 port 80: No route to host
<cryptodan> [cryptodan@alphacentari ~ ]$ curl mindfire.vision
<cryptodan> curl: (7) Failed to connect to mindfire.vision port 80: No route to host
<pmatulis> benpardo: you can always save your rules, flush them, and test.  in addition, you should see block messages in the kernel log
<pmatulis> benpardo: of course make sure the server is listening on TCP port 8080 on the appropriate interface.  use netstat or lsof to test that
<benpardo> pmatulis: This is helping.
<benpardo> cryptodan: you both are helping very much.
<benpardo> I'm going to try more and come back
<cryptodan> also check your port assignment in your config and see if its on 8080 or 80 there
<pmatulis> sudo lsof -i:8080 -n
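Collecting the suggestions above into one checklist (port 8080 as stated; the commands are standard, but the bind-address fix depends on benpardo's Node app):

```shell
# 1. What address is the app bound to? If lsof shows 127.0.0.1:8080, the app
#    is localhost-only and no firewall change will expose it -- make the Node
#    server listen on 0.0.0.0 instead.
sudo lsof -i :8080 -n

# 2. If ufw is managing iptables, open the port and verify:
sudo ufw allow 8080/tcp
sudo ufw status verbose

# 3. Test locally first, then from an outside host:
curl -I http://127.0.0.1:8080/
```

The 'filtered' result from pmatulis's port scan is consistent with either a firewall drop or a localhost-only bind, which is why step 1 comes first.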
<lickalott> gents, when I run a showmount -a I see mounts to my server from an internal class C address that doesn't currently exist on my network.  <-- that's 1.
<lickalott> #2 is; I had to rebuild my windows box (reload OS) and now I can't access the NFS mounts from my ubuntu server without changing permissions for the world to 5.  it wasn't like this before the rebuild.  I don't want to leave all that stuff open like that.  Any ideas?
<samba35> i am facing a problem with apt-get install xxx; when i try to install any package i get the error http://paste.ubuntu.com/9658949/
<samba35> some time back i was getting an error with half-installed
<samba35> i dont have kernel  3.13.0-43  i have 3.13.0-41-generic
<lnxmen> hello
<lnxmen> Why does my server refuse to load images via SSL?
<lnxmen> What do you need to help me?
<lnxmen> Developer Tool (Chromium) shows me: net::ERR_INSECURE_RESPONSE error.
<Patrickdk> hmm, servers don't load images
<Patrickdk> browsers do
<lnxmen> Patrickdk: Yes, sorry.
<Patrickdk> so what is the actual problem?
<lnxmen> When I load page via browser I can see net::ERR_INSECURE_RESPONSE error for every image.
<lnxmen> But it's only if I am using SSL.
<Patrickdk> well, fix your webpage then
<lnxmen> Yes, it's mine.
<Patrickdk> this isn't exactly #html
<lnxmen> It's apache configuration, and os is Ubuntu.
<Patrickdk> what does that have to do with anything?
<Patrickdk> did apache or ubuntu write your html page?
<lnxmen> Actually, paths are okay.
<Patrickdk> no one said anything about paths
<lnxmen> So html page is also okay.
<Patrickdk> no it's not
<Patrickdk> you just said you got err_insecure_response
<lnxmen> html code is okay itself.
<Patrickdk> no it's not
<Patrickdk> if it was, you wouldn't have err_insecure_response
<lnxmen> I am sure it is.
<Patrickdk> that is why you are having a problem
<Patrickdk> cause you refuse to look at the source of the problem, cause you believe it's not the problem
<lnxmen> Patrickdk: I am moving site from old server.
<lnxmen> It worked.
<lnxmen> I think the problem is vhost/htaccess configuration.
<lnxmen> But I can't figure it out.
<lnxmen> Patrickdk: okay, the browser needs to accept the SSL certificate after some time.
<lnxmen> Just two clicks, and it works. lol
<lnxmen> Patrickdk: Anyway, thank you for your help.
<rberg-> I find that in Ubuntu 12.04 ssh gets started in the chroot when I am imaging with pxe / debootstrap, even with a policy-rc.d diversion, does anybody know of a solution for that?
<blackyboy> is btrfs ready for production environments now? Can i use it in production?
<Annoyed> Greetings
<Annoyed> Are there any known issues with ufw on server 14.04 LTS ?
<bekks> Annoyed: Why?
<Annoyed> Following the directions at https://help.ubuntu.com/lts/serverguide/firewall.html (specifically the IP masq. section) doesn't allow inside machines to access the net... they can get to the server's inside interface, but nothing further. The server box itself is fine... and killing ufw, rebooting, and manually entering the iptables command does allow access
<Annoyed> iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -o p2p1 -j MASQUERADE   does work though.
<cryptodan> p2p1 does not look like a valid interface
<Annoyed> It is on this system
<cryptodan> can you do an ifconfig and dpaste.com the results
<Annoyed> damned renaming tning
<Annoyed> thing, even
<Annoyed> http://dpaste.com/0VC3J0H
<cryptodan> Annoyed: what kind of interfaces are those dial up?
<Annoyed> ethernet
<cryptodan> they should be eth0 and eth1 then
<Annoyed> Ubuntu renames things
<cryptodan> no it doesnt
<Annoyed> Yes, it does
<cryptodan> Uh no
<cryptodan> I have Ubuntu 14.04 LTS Server and it specifies eth1 for my ethernet port
<Annoyed> Well, you can see the ifconfig output. And those designations work for everything else networking-related.. as well as manually entering the iptables command
<Annoyed> Could it be that ufw doesn't like the renaming bit?
<cryptodan> why not rename them from p2p1 to eth0 and eth1? the p's to me signify some kind of point-to-point or dial-up interface
<Annoyed> That's what it did out of the box, and everything else is configured using those names.
<cryptodan> so is this an android?
<Annoyed> Nope. PC
<cryptodan> because I did a google for p2p0 and it came up with mobile phones and nothing about personal computers
<rberg-> thats caused by the biosdevname package I believe
<rberg-> p for pci
<Annoyed> That is what the system refers to the interfaces as.
<Annoyed> Anyway, I don't have time to spend on it any more right now. Just wanted to see if there are any known issues
<rberg-> ok, I dont know anything about ufw, I do know why your interface is named that way :)
<cryptodan> remove that package and see if ufw will work
<Annoyed> heh. I know what files I've set up using the p2p1 / p3p1 names, but I have no idea what may have been autogenerated, and am hesitant to break that.
<rberg-> delete /etc/udev/rules.d/70-persistent-net.rules and add "biosdevname=0" to the kernel cmdline and reboot to disable
<Annoyed> Maybe I'll try that... but later. I got to git.. thanks for the suggestion
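rberg-'s disable recipe as concrete steps on 14.04 (the GRUB line shown is illustrative; merge biosdevname=0 with whatever options are already there):

```shell
# Remove the persistent-net rules file if present:
sudo rm -f /etc/udev/rules.d/70-persistent-net.rules

# In /etc/default/grub, add biosdevname=0 to the kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="biosdevname=0"
sudo update-grub

# IMPORTANT: update every config that references p2p1/p3p1 (most notably
# /etc/network/interfaces) to eth0/eth1 *before* rebooting, or the box
# will come back up with no configured network interfaces.
```

The warning in the last comment is exactly the failure mode reported at the end of this log, where the machine "doesn't even see either ethernet card" after the rename.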
<Crell> Mystery kernel errors are the bane of my existence...
<kevinde> On a Ubuntu server, what would be the best place to download a tar.gz file that has to be extracted, compiled & installed?
<kevinde>  /tmp?
<andol> kevinde: The best place wouldn't be on the server at all, but rather on your workstation, where you build a Deb package to install on the server. Yet, if you insist on compiling it on the server, I guess I'd go with /usr/local/src, it always being a good thing to keep a copy of the install source.
<kevinde> andol: Thank you :)
<cryptodan> kevinde: first look for the package or application via apt-cache search
<kevinde> That is always the latest version right?
<cryptodan> its the latest of whats ever in the repo
<Annoyed> Greetings again
<Annoyed> Still having fun w/iptables..  I wasn't able to get it going with ufw..
<cryptodan> what made you install that biosdevname package?
<Annoyed> tried entering "iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -o p2p1 -j MASQUERADE" at the command line, and it worked.. allowed inside machines to see the net.. Since then, I have shut the machine down.. and restarted it. and it still works.. but now I don't understand why it is working. I had NOT saved the rules before shutdown
<Annoyed> cryptodan: I didn't install it.. the installation put it in there
<teward> Annoyed: check and see if the rules are still in place - iptables -t nat -L
<teward> if it's still present just remove the rule/
<Annoyed> It's not.. since I didn't save the rule, I don't get how it can still be active
<Annoyed> root@unimatrix0:/etc# iptables -t nat -L
<Annoyed> Chain PREROUTING (policy ACCEPT)
<Annoyed> target     prot opt source               destination
<Annoyed> Chain INPUT (policy ACCEPT)
<Annoyed> target     prot opt source               destination
<Annoyed> Chain OUTPUT (policy ACCEPT)
<Annoyed> target     prot opt source               destination
<Annoyed> Chain POSTROUTING (policy ACCEPT)
<Annoyed> target     prot opt source               destination
<Annoyed> where would it save it? there is no iptables directory in /etc
<cryptodan> iptables.save
<Annoyed> root@unimatrix0:/etc# ls -l iptables.save
<Annoyed> ls: cannot access iptables.save: No such file or directory
<cryptodan> I would simply remove that package and go back to eth0 or eth1
<cryptodan> if it was a required package, I am sure that I would have it installed on my install
<Annoyed> cryptodan: I don't think the device name is the problem. If it was, the raw iptables command wouldn't have got it working, 'cause iptables would not know the device name
<Annoyed> Well, it installed on its own when I installed 14.04.1 server
<cryptodan> well if you want to use ufw, then youll need to go back to eth0
<Annoyed> Now i can't figure out why it is STILL working after reboot. iptables isn't persistent unless the rules are saved SOMEWHERE.
<cryptodan> once you write them they are saved
<Annoyed> That's just it. I *didn't* successfully write them
<Annoyed> Tried a few times, didn't recall the command right and got errors
<cryptodan> can you scroll up in your buffer and see the commands?
<Annoyed> iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -o p2p1 -j MASQUERADE
<Annoyed> iptables save /root/iptables
<Annoyed> iptables-save /root/iptables
<Annoyed> and there is no file called iptables in /root
<cryptodan> look for iptables-save
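For the archives: the attempts quoted above fail because `iptables-save` takes no filename argument — it writes the ruleset to stdout. A minimal root-only sketch of persisting rules by hand (the file path is just an example):

```shell
# iptables-save prints the live ruleset to stdout, so capture it with a redirect
iptables-save > /etc/iptables.rules

# load it back later, e.g. from /etc/rc.local or an if-up.d hook at boot
iptables-restore < /etc/iptables.rules
```

Alternatively, the `iptables-persistent` package saves to /etc/iptables/rules.v4 and restores it automatically at boot.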
<Annoyed> so I'm at a loss as to how the inside machine can get on the 'net
<cryptodan> are you trying to setup a router?
<Annoyed> yes
<Annoyed> root@unimatrix0:/etc# grep -i -R iptables-save *   comes back with nothing
<cryptodan> Annoyed: here https://help.ubuntu.com/community/Router
<Annoyed> bah
<Annoyed> is it possible that iptables sees forwarding turned on in sysctl.conf and figures it out on its own?
#ubuntu-server 2015-01-03
<cryptodan> Annoyed: it might be
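For reference, the sysctl.conf setting only flips the kernel's forwarding switch; it doesn't create NAT rules by itself. A root-only sketch of how that switch is set on a stock Ubuntu box:

```shell
# enable forwarding for the running kernel
sysctl -w net.ipv4.ip_forward=1

# persist it across reboots by uncommenting this line in /etc/sysctl.conf:
#   net.ipv4.ip_forward=1
# then apply the file:
sysctl -p
```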
<Annoyed> ls
<Annoyed> how would that work?
<cryptodan> I wouldnt know as I do not use a PC for a router
<Annoyed> I just don't get how the thing can retain the masq. settings after reboot.
<cryptodan> you can run a tcpdump session and analyze the traffic with wireshark
<bekks> Annoyed: ufw e.g. saves and loads settings upon reboot.
<Annoyed> That's not enabled right now.
<Annoyed> Ok, that mystery is cleared up.
<bekks> How did it clear up?
<Annoyed> The other machine on the "inside" was getting out through its wlan interface.
<Annoyed> killed that, and now it's behaving as expected; can get to the router box, but no farther.
<Annoyed> So, client machine can get DHCP address & DNS from the server box. but can't get out. Going to try ufw "by the book" again
<Annoyed> cryptodan: by the  way, that /etc/udev/rules.d directory you mentioned  is empty
<Annoyed> There's a readme that sends you to /etc/udev/rules.d/
<Annoyed> I think I'm gonna turn that damned thing off.. no frakkin' idea why they want to rename things anyway
<cryptodan> it shouldnt be empty
<Annoyed> Well, just turned it off in grub.
<cryptodan> Annoyed: http://dpaste.com/0KDQCFV
<Annoyed> Mine has the  readme, that's it
<Annoyed> Well, that went well. It doesn't even see either ethernet card now
<Annoyed> ifconfig shows lo, that's it
<cryptodan> Annoyed: time to reinstall
<Annoyed> This IS a new install. just doing initial setup.
<Annoyed> and there is no /dev entry for eth* of any sort
<jerrcs> you should be using "ip addr" or "ifconfig -a"
<jerrcs> the interface COULD be down.
<jerrcs> (just as a best practice, no one else seemed to comment on that)
<Annoyed> Well, there should STILL have been a /dev entry for eth(x)
<Annoyed> Apparently, you have to let it rename things.
<Annoyed> The cynical side of me thinks they are overcomplicating this in order to generate paid support calls
<jerrcs> Annoyed: so it shows up there?
<jerrcs> or not
<Annoyed> the only way the machine sees its ethernet interfaces is with biosdevname turned on. then it sees p2p1 and p3p1, both enet cards
<Annoyed> Not sure yet... but I think I might have it
<Annoyed> Setting the default policy on the input chain to accept allows the inside machine to work.... So. maybe you have to add established/related rules via ufw
<cryptodan> Annoyed: you say biosdevname exists in the latest server iso?
<Annoyed> That's what I installed. 14.04.1, downloaded last week
<cryptodan> im downloading now and will install in a VM
<Annoyed> so much for the idea of needing established/related rules.
<Annoyed> They're in the before.rules file already
<cryptodan> 3 more minutes on download
<benpardo> If I'm not supposed to do things on root, how do I get to run my reverse proxy on port 80?
<Annoyed> sudo su to get root permissions temporarily
<Annoyed> "sudo su" that is
<benpardo> Annoyed: Is that secure? That's the best way to do it?
<benpardo> Annoyed: don't mean to be a pain in the ass, I'm just new to this.
<Annoyed> As far as I know. the root account is disabled by default, but if you want to enable it, you can. But "sudo su" gives  you temp. access, usually all you need
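A hedged alternative to running the whole proxy with root privileges: grant the binary only the low-port capability, or redirect port 80 to an unprivileged port. The binary path and port numbers below are examples:

```shell
# (run once as root) let the node binary bind ports <1024 as a normal user
setcap 'cap_net_bind_service=+ep' /usr/local/bin/node

# or: listen on 8080 as a normal user and redirect 80 to it (as root)
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```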
<cryptodan> Annoyed: installing
<benpardo> what folder on ubuntu should I put the generated static files being served?
<cryptodan> Annoyed: I just installed a fresh copy of Ubuntu Server 14.04.1 and my devices for ethernet are Eth0
<teward> benpardo: it depends on the website configuration - if you're on standard Apache, I think it's on /var/www/ somewhere, if you're on nginx, you should make your own docroot somewhere
<benpardo> teward: I'm nodejs, does it matter?
<benpardo> teward: although nginx is going to be the reverse-proxy
<teward> benpardo: then refer to the nodejs configuration
<teward> benpardo: i've never used nodejs, but in all web servers and setups, the docroot varies based on the configurations
<teward> benpardo: so refer to your configurations and find where the document root is
<benpardo> teward: ah, I see. It may not actually matter and may be something I can set myself
<Annoyed> cryptodan: Maybe because I'm using UEFI setup on the drives?
<cryptodan> that wouldnt matter Annoyed
<teward> benpardo: yes, it really depends on what nodejs lets you configure.  it may have a fixed document root or a variable one, it really depends on the configurations, and really the docroot can be anywhere so long as the web server has the access it needs to the docroot
<benpardo> teward: that really helps
<Annoyed> cryptodan: Well, I dunno. I have no idea why it's renaming them. I don't really like it, but it's not worth redoing the past week's work to re-install to see what I get. I can live with odd names. And I really don't think that's why I'm having ufw issues. UFW / Iptables IS working now, 'cause I have the default policy for  the input chain set to accept. If the device names were the issue, I don't think it would work
<Annoyed> But I shouldn't have to set input chain policy to accept to get NAT to work
<Annoyed> Either UFW can't handle NAT and firewalling right, (which I doubt) or there's something I'm not seeing
<cryptodan> UFW can
<Annoyed> I would think it would be able to, but I'm not seeing something.
<Annoyed> Thanks, folks.
<Annoyed> enough on this for today
<lordievader> Good morning.
<lnxmen> Good morning. ;)
<samba35> how do i assign an ip address to one guest from another guest that has a dhcp server (both as guests)
<samba35> using ovs version 2.0.2 on ubuntu
<samba35> using openvswitch
<mustti> happy new year 2015 to all
<hariom> I have added an init script. Ran the update-rc.d command to run it after reboot (ps: http://paste.ubuntu.com/9664927/) but after reboot it doesn't run. Manually it runs fine.
<hariom> Here is my init script: http://paste.ubuntu.com/9664956/
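Without seeing the paste, a common cause of "runs manually but not at boot" on 14.04 is a missing or incomplete LSB header, which insserv/update-rc.d use to order scripts. A minimal sketch (service name and dependencies are hypothetical):

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          myservice
# Required-Start:    $remote_fs $network
# Required-Stop:     $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example service started at boot
### END INIT INFO
# ... start/stop/restart case statement goes here ...
```

After fixing the header, re-register with `sudo update-rc.d myservice defaults` and check that the S symlinks appeared: `ls /etc/rc2.d/ | grep myservice`.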
<jefinc> anyone awake?
<ObrienDave> barely
<jefinc> uh oh too many brown bottles
<fabiofranco85> (Ubuntu 14.04 LTS) Need to change locale settings for a specific country (pt_BR). The problem is when try to use resources that use these setting it returns the wrong results. Example: In java if I try to get the currency symbol it gives me BRL when it should give me R$ and the decimal separator is , and it gives me . (and the other way around too). I came to the conclusion the problem
<fabiofranco85> is with the operating system configuration since I tried on a machine running windows and it worked perfectly. Any suggestions?
<bekks> fabiofranco85: I guess thats correct so far (at least for the currency), since the international identifier for your currency is BRL, not R$ (which is the national one). It is the same for the Euro with EUR vs. €, and for the US Dollar with USD vs $.
<fabiofranco85> bekks: I understand but is there a file or some place where I can set the Display symbol for the currency?
<fabiofranco85> bekks: I'm asking this because as I said it works on windows but when I run it on ubuntu server it goes wrong... and it's not just the symbol
<fabiofranco85> bekks: the decimal and thousand separator are also wrong
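For what it's worth, the JVM picks its default locale up from the environment, so the usual fix is to generate and activate the locale system-wide (requires root); a sketch:

```shell
# generate the Brazilian Portuguese locale and make it the system default
locale-gen pt_BR.UTF-8
update-locale LANG=pt_BR.UTF-8

# verify it is now available
locale -a | grep -i pt_BR
```

Alternatively, force it for the JVM only: `java -Duser.language=pt -Duser.country=BR ...`.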
<jefinc> how do I create a server that is then setup so that no matter what computer I access on the network I login with the same user/password and all my preferences are the same?
<Patrickdk> ldap+nfs
<Annoyed> Greetings
<Annoyed> Any of the folks who were helping me yesterday around?
<Annoyed> anyone here good with iptables?
<jerrcs> what's your question?
<Patrickdk> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<jerrcs> 400 ppl in the channel, i'm sure someone will know something about iptables.
<Patrickdk> do bots count?
<jerrcs> yup
<Patrickdk> and I count 3 times?
<jerrcs> yep
<Patrickdk> can I get paid 3 times?
<jerrcs> absolutely
<Annoyed> Can I specify a list of ips on an allow line? such as -A input -i [interface_name] x.x.x.x, y.y.y.y, z.z.z.z -j ACCEPT ??
<Patrickdk> no, and that is highly invalid even if you didn't
<Annoyed> Yeah, I know.. the exact syntax isn't right
<jerrcs> Annoyed: have you tried CIDRs instead? or are the IPs in different ranges?
<Annoyed> jerrcs: totally different
<jerrcs> then negative, it doesn't work that way
<Patrickdk> the solution is to use, ipset
<Annoyed> bah. I have to allow ssh and a few other things from several ips and I wanted to do it on one line
<jerrcs> first result on google - http://www.gossamer-threads.com/lists/gentoo/user/210361
<jerrcs> they give a few ideas for creating "sets" of rules
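Plain iptables accepts only one `-s` address per rule, so the usual workarounds are a shell loop or an ipset. A loop that only prints the rules, so they can be reviewed before being piped to a root shell (addresses and interface name are placeholders):

```shell
# emit one ACCEPT rule per allowed source address into a review file
rules=/tmp/ssh_allow.rules
: > "$rules"
for ip in 192.0.2.10 198.51.100.7 203.0.113.25; do
  echo "iptables -A INPUT -i p2p1 -s $ip -p tcp --dport 22 -j ACCEPT" >> "$rules"
done
cat "$rules"   # inspect, then as root: sh "$rules"
```

With ipset (as Patrickdk suggests) the list lives in one named set instead: `ipset create allowed hash:ip`, `ipset add allowed 192.0.2.10`, then a single rule `iptables -A INPUT -m set --match-set allowed src -p tcp --dport 22 -j ACCEPT`.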
<Annoyed> Jerrcs, if you recall, I was having difficulty with ufw yesterday? couldn't get it to do NAT unless the default input policy was accept?   I ended up giving up on it and writing a manual ruleset.. that does work
<Patrickdk> do what?
<Patrickdk> input policy has NOTHING to do with nat at all
<Patrickdk> nat ONLY uses the forward rules
<Annoyed> Well, following the ufw  directions at https://help.ubuntu.com/14.04/serverguide/firewall.html to the letter, I couldn't get a natted box online unless I opened the input chain.
<Patrickdk> yes, your diagnostics were wrong though
<Patrickdk> maybe your NAT server also did DNS?
<Patrickdk> and you didn't open up DNS? on input/output chains?
<Patrickdk> therefor it *seemed* like nat was broken?
<Patrickdk> and you did apply the correct rules to allow the required icmp through both?
<Annoyed> Hmmm... Well, the machine does run DNS (full server, not cache) and it is able to resolve its own needs fine
<Patrickdk> but can the machine you tested nat on resolve fine?
<Patrickdk> you have to test the whole stack
<Patrickdk> not just the end result
<Annoyed> Not sure about ability to resolve. I think it could, though.
<Patrickdk> my recommendation though, would be to use shorewall
<Patrickdk> after years of doing iptables and ipchains myself, and finding my own issues, like multiple rules interacting to cause holes I didn't want and stuff
<Patrickdk> shorewall just makes my life so much easier
<jerrcs> i honestly use raw rules and not much of these programs
<jerrcs> so i cannot speak on  them
<Patrickdk> I do raw iptables too, but shorewall on anything harder now :)
<Patrickdk> but sometimes I need to get creative, and use raw iptables for things, especially stuff like ipvs and manual protocol violations
<Patrickdk> cause shorewall isn't made to actually break things
<Patrickdk> things can get a little fun, when you have like 15+ nic's on a system, it's just a royal pain to do all that manually in iptables
<Annoyed> But I don't have any allowances for port 53 in my current ruleset, and it works. I assume established,related allows the returns for outbound DNS queries
<Patrickdk> yes
<Patrickdk> but what accepts inbound from your machines from *behind* the nat?
<Patrickdk> workstation -> nat -> outside dns
<Patrickdk> you have to accept workstation -> nat first, before nat can go outside
<Patrickdk> that wouldn't be a forward rule, cause you're contacting a dns server on your nat box, most likely
<Annoyed> the nat box IS the dns server also.
<Patrickdk> what I said is full of assumptions about how you set things up
<Annoyed> yes, exactly
<Patrickdk> but normally that is how people do it
<Annoyed> but I do recall errors in the logs regarding port 53
<Patrickdk> on a normal home nat setup, you need to accept dhcp, dns, probably just all of icmp, and then if you get more fancy, upnp
<Annoyed> Jan  2 21:56:22 unimatrix0 kernel: [ 6682.487057] [UFW BLOCK] IN=p2p1 OUT= MAC=10:c3:7b:db:99:5a:48:5b:39:1e:29:5b:08:00 SRC=172.16.0.13 DST=172.16.0.254 LEN=79 TOS=0x00 PREC=0x00 TTL=64 ID=48164 DF PROTO=UDP SPT=5088 DPT=53 LEN=59
<Annoyed> so, if I get you right, I would have to allow port 53 from any inside interface ?
<Annoyed> I'm no iptables expert, so I assume something like ufw could write a better ruleset than I can, so I tried that.
<Annoyed> If I want to use ufw, just add a rule to allow anything from the inside interface to the input chain?
<Annoyed> Patrickdk: You still here?
<Annoyed> If so, thank you. It was indeed DNS that was being blocked
<Patrickdk> ufw doesn't write rulesets
<Patrickdk> it only is an interface between you and iptables
<Patrickdk> personally, I think it's a pointless interface
<Patrickdk> but since ubuntu/debian has no persistent iptables interface, it needed something
<Annoyed> Well, thank you anyway. I added lines to /etc/ufw/before.rules as follows
<Annoyed> # allow all on inside interface
<Annoyed> -A ufw-before-input -i p2p1 -j ACCEPT
<Annoyed> -A ufw-before-output -o p2p1 -j ACCEPT
<Annoyed> basically copied the settings for lo   and was all set
<Annoyed> working as desired
<Annoyed> And I assume that ufw can do a better job writing a ruleset than I can. =)
<Patrickdk> if p2p1 is your *local* network, should be good enough :)
<Annoyed> Yes, p2p1 is my inside interface
<Annoyed> p3p1 is outside. For some reason, Ubuntu renamed them.
<Annoyed> I tried to disable the biosdevname thing but it wouldn't even see ANY network interfaces then.
<Annoyed> As long as it works, I'm not going to worry about what it calls them
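The before.rules edit Annoyed describes can also be expressed with the ufw command line, which keeps the rules in ufw's own state; interface name taken from the discussion:

```shell
# trust everything arriving on the inside interface
ufw allow in on p2p1

# or, more narrowly, allow just DNS and DHCP from the LAN
ufw allow in on p2p1 to any port 53
ufw allow in on p2p1 to any port 67 proto udp
```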
<jefinc> so on my work's windows network I sign in with the same username/password from any computer within the network and it saves my settings/info etc., how do I do that with ubuntu?
<SchrodingersScat> jefinc: https://help.ubuntu.com/community/SettingUpNFSHowTo ?
<jefinc> SchrodingersScat: I will give it a go, thanks :)
<NineTeen67Comet> Hello all .. been a while since I set up Ubuntu server (I've been running 13.04 for a while) .. this time however I moved 000-default to sites-available and restarted with my virtual files in sites-enabled but it still goes to the default Apache page .. is there another command I'm missing?
<NineTeen67Comet> apachectl something?
<NineTeen67Comet> I have directories in /var/www to represent the sites I'm running (re-building) ..
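Roughly, on 14.04's Apache 2.4 the usual sequence is as follows (the vhost filename is an example; note that 2.4 requires site files to end in `.conf`):

```shell
# enable your vhost (symlinks sites-available/mysite.conf into sites-enabled)
a2ensite mysite.conf

# disable the default site so it stops answering first
a2dissite 000-default.conf

# apply the change
service apache2 reload
```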
<zol> Hi! I'm having trouble setting up my home network configuration. I have two NICs, one configured for WAN and the other for WAN on a subnet with range 10.0.0.0/24. The router has static ip 10.0.0.1, I can ping the router from the LAN clients, but I can't ping the clients from the router. The LAN clients can't reach outside of the LAN. I am using ufw as a firewall, I have NAT enabled to the best of my knowledge. I'm feeling terribly lost, have been trying to fix this for 6 hours now.
<zol> and the other for LAN*
<zol> I can't seem to get outgoing packets to be allowed by UFW.
<jdstrand> zol: I suggest you look at the section on 'IP Masquerading' in 'man ufw-framwork'
<jdstrand> zol: sorry, 'man ufw-framework'
 * jdstrand wanders off again
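For the archives, the masquerading section of `man ufw-framework` boils down to a few edits; a sketch assuming `eth0` is the WAN side and 10.0.0.0/24 the LAN:

```shell
# 1. /etc/default/ufw:
#      DEFAULT_FORWARD_POLICY="ACCEPT"
#
# 2. /etc/ufw/before.rules, above the existing *filter section:
#      *nat
#      :POSTROUTING ACCEPT [0:0]
#      -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
#      COMMIT
#
# 3. make sure forwarding is on in /etc/ufw/sysctl.conf:
#      net/ipv4/ip_forward=1

# then restart ufw to pick everything up (as root):
ufw disable && ufw enable
```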
#ubuntu-server 2015-01-04
<lickalott>  gents, having an issue I can't seem to solve....   I have some NFS shares that are accessed by 3 different machines on my network.  It's worked in the past with 770 permissions (all three machines can access fine.  Same username and password, different UID's obviously).  I had to reload the OS on a windows box (one of the 3 machines) and now I have to have the permissions @ 777 to access the folders.   I'm wondering what changed and how do I get it back?
<Patrickdk> lickalott, there is a lot of questions there
<Patrickdk> are you using nfs3 or nfs4
<Patrickdk> what are these guests? you said something about windows?
<Patrickdk> nfs3 depends on the uid for permissions
<lickalott> i want to say nfs4.  whatever the latest nfs-kernel-server is for 14.04.
<Patrickdk> nfs4 depends on what security model your running, but it generally also still needs uid and usernames to match between systems
<Patrickdk> what does, latest nfs-kernel-server have to do with version?
<Patrickdk> the latest nfs-kernel-server supports nfs2 nfs3 and nfs4
<lickalott> 1 "guest" is a WDTV media streamer (can do samba and NFS, but have it set up for NFS), 1 is a windows box, and the other is a fedora laptop.  The windows box is having the trouble right now.
<Patrickdk> obviously you configured nfs, so what did you use?
<lickalott> wait 1.  let me check.
<Patrickdk> I doubt the wdtv can do nfs4
<Patrickdk> and dunno if the windows one will default to nfs3 or nfs4
<Patrickdk> but sounds like, making it work the way you want, is going to be near impossible with nfs
<lickalott> how can I tell which version?
<Patrickdk> normally? on the mount option
<Patrickdk> type nfs (rw,noatime,bg,noacl,nfsvers=3
<Patrickdk> as far as server, running nfs4 is a lot of work, and config
<lickalott> it did work like that (for a long time) before I had to reload the OS on the windows box.  It doesn't really make much sense to me which is why i can't even attempt to recreate it to work again.  Nothing has changed on the NFS side.  the only thing that changed was the UID that is being used/seen from the windows machine.
<Patrickdk> atleast if you planned to do it correctly
<Patrickdk> as I said, it's normally luck if it works
<Patrickdk> I have no idea how windows does nfs
<Patrickdk> but nfs DEPENDS on uid's to match
<lickalott> do you mean version on the windows side or the ubuntu server side?
<Patrickdk> so you need windows to match the uid when using nfs
<Patrickdk> I mean both sides
<Patrickdk> UID must match over ALL systems
<Patrickdk> or you lose all security over nfs
<Patrickdk> when using non-kerberos based nfs
<Patrickdk> that means, unless you use kerberos based nfs4, fully configured, you depend on uid matching over all clients and server, for security
<lickalott> i'm going to say nfs3.   I don't think the built in windows NFS client module can handle nfs4.
<Patrickdk> and even with kerberos based nfs4, you still need uid and usernames to match, or it won't work right, but it will still be *secure*
<Patrickdk> I know nothing about windows nfs
<lickalott> and on the server....i used this https://help.ubuntu.com/community/SettingUpNFSHowTo
<Patrickdk> but somehow you need the uid's to match
<lickalott> that's what I was thinking.   I even went as far as to make a user based on the UID from the windows box (something like 429128847).  he now shows up with an actual username.  Then added that user to a group that has "group" privileges on the folders in question.
<lickalott> but still can't access unless the permissions are wide open.
<lickalott> I'll look into the UID match thing and report back if I get anywhere (just incase someone else ever asks.)
<Patrickdk> I don't know about the group thing
<Patrickdk> if it's verified on the server or the client
<Patrickdk> you're assuming server side though
<lickalott> true
<cryptodan> lickalott: is this Windows 7?
<lickalott> yes
<cryptodan> ill install NFS Client on my VM and see if it supports NFS4
<lickalott> looks like it does, just not natively - http://www.citi.umich.edu/projects/nfsv4/windows/readme.html
<cryptodan> I connected to my NFS4 share just fine via the client in Windows 7
<cryptodan> Here are some of the command line switches for it http://technet.microsoft.com/en-us/library/cc754350.aspx
<Patrickdk> cryptodan, that document lists a lot of nfs3 stuff
<Patrickdk> you SURE it's using nfs4?
<Patrickdk> oh there it is
<cryptodan> I would imagine that my client would have gotten a protocol mismatch
<Patrickdk> -o sec=....
<Patrickdk> looks like windows 8 does
<lickalott> mount -o mtype=soft 192.168.1.108:/media/cyclops Z:
<lickalott> mount -o mtype=soft 192.168.1.108:/media/iceman Y:
<lickalott> mount -o mtype=soft 192.168.1.108:/media/wolverine X:
<lickalott> mount -o mtype=soft 192.168.1.108:/home/weed/ipcam1 W:
<lickalott> mount -o mtype=soft 192.168.1.108:/home/weed/ipcam2 V:
<lickalott> that's what mine looks like now.
<cryptodan> wrong slashes I think
<lickalott> it works.  its just the permissions thing.
<cryptodan> C:\Users\cryptodan>mount \\192.168.1.8\home\cryptodan\public_html U:
<cryptodan> U: is now successfully connected to \\192.168.1.8\home\cryptodan\public_html
<Patrickdk> cryptodan,  that is nfs? that looks like smb/cifs
<cryptodan> its nfs
<cryptodan> dont have SMB on my server at all
<lickalott> i can't get it to work with the slashes that way.
<lickalott> C:\Users\weed>mount -o \\192.168.1.108\CYCLOPS U:
<lickalott> Network Error - 53
<lickalott> Type 'NET HELPMSG 53' for more information.
<cryptodan> you need the full path
<cryptodan> and I didnt use the -o
<cryptodan> lickalott: and are you sure that the nfs admin and client are installed on the client machine?
<lickalott> got it to mount.
<lickalott> pretty sure....  i went into programs and features and enabled everything NFS.  Even went as far as to install "services for NFS"
<cryptodan> thats all I did and followed one of the examples in that post on that site from MS to mount it
<lickalott> what are your permissions on the server side for the folders/files?  I've made the main folders 777 but the subdirs and files that i haven't touched yet I can't access from the windows side.
 * teward coughs at 777
<lickalott> serious
<cryptodan> default for the mount
<lickalott> are you saying default for the mount as in "whatever the default is" or 777 is default for the mount?
<cryptodan> 755
<cryptodan> so its mounting via nfs3
<lickalott> yep, that's how I had it.  and it worked fine before.  For some reason I can't get to them now unless I changed the permissions (which I don't really want to do)  This server has an internet facing interface and I'd rather not have my stuff wide open.
<cryptodan> as does my server but my firewall disallows nfs from the outside
<lickalott> but you're running nfsv4 on the server side?
<cryptodan> yes
<lickalott> i have to be missing something....
<lickalott> I'm even sharing out NFS mounts from the windows box fine.  (Hanewin NFS server)
<cryptodan> let me create a new user in my VM and see if I can mount it
<cryptodan> the new user mounts fine, and even though the user doesnt exist on the server it can access files but cant create them
<lickalott> will you do me a favor?  can you pastebin your /etc/exports file?
<cryptodan> /home/cryptodan/public_html 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,sync)
<lickalott> /media/CYCLOPS         *(rw,sync,no_subtree_check)
<lickalott> what does the insecure switch do for you?
<cryptodan> unauthed access I believe
<lickalott> thanks cryptodan.  It seems to be working now with 755
<cryptodan> you are welcome
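Comparing the two /etc/exports lines above: the main differences are the client scope ("*" vs a subnet) and the `insecure` flag, which actually permits client source ports above 1023 rather than "unauthed access". A config sketch with the path and subnet from the discussion:

```shell
# /etc/exports -- restrict the export to the LAN instead of any host ("*")
/media/CYCLOPS  192.168.1.0/24(rw,sync,no_subtree_check)

# apply changes without restarting the whole NFS service (as root)
exportfs -ra
```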
<jdzielny> hello everyone.  I'm looking for some help converting a server with a single hdd over to bootable RAID 1.  It's currently hdd > LUKS > LVM
<lordievader> Good morning.
<FullEraser> hi all
<lordievader> o/
<FrEaKmAn_> hi.. today I got this email http://pastie.org/9812344 from my VPS
<FrEaKmAn_> does this mean something bad?
<FrEaKmAn_> I'm checking mail.err log and noticing a lot of invalid email address errors. is this spam?
<Patrickdk> FrEaKmAn_, it would be helpful if you told us something
<Patrickdk> we can't tell you anything so far
<FrEaKmAn_> Patrickdk, neither do I :)
<FrEaKmAn_> I don't know in which direction to go..
<Patrickdk> where are the logs? what is the name of your server?
<FrEaKmAn_> in var/log.. name?
<jdzielny> hello everyone.  I'm trying to install Ubuntu Server 14.04 LTS using RAID/LVM/LUKS.  I correctly configure all the partitions and mount points, and everything in the install goes smooth, until it gets to the bootloader install.  It runs "grub-install /dev/sda /dev/sdb" and crashes with the unhelpfully vague message that grub-install failed
<jdzielny> anyone know what the problem is?
<cryptodan> it should only be trying to install on one drive
<jdzielny> in order for it to be bootable from either disk in RAID 1 it has to be on both drives
<jdzielny> otherwise you have one drive with a bootloader and the other without.  defeats the point of RAID 1 if you have to manually install a bootloader when one of the drives fails
<cryptodan> I have a RAD 1 for my system partition, and when I installed it, it selected only one drive
<cryptodan> RAID 1*
<jdzielny> if you pull that drive out (to simulate a drive failure), you won't be able to boot
<jdzielny> if only one of the drives in the array has a bootloader, if it fails, the system can't boot
<cryptodan> it should mirror it during install and as such the data should be mirrored
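As jdzielny says, for a bootable software RAID 1 the bootloader must be written to every member disk — mirroring /boot does not mirror the MBR itself. A root-only sketch assuming the members are /dev/sda and /dev/sdb:

```shell
# write the bootloader to both mirror members
grub-install /dev/sda
grub-install /dev/sdb
update-grub

# on Ubuntu, record both disks as install targets for future grub upgrades
dpkg-reconfigure grub-pc
```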
<jdzielny> here's the steps I followed.
<jdzielny> 1.  Create 2 identical partition tables on /dev/sda and /dev/sdb -- /dev/sda/b1 are 250MB type FD (RAID), /dev/sda/b2 are 100% (all remaining space) type FD
<jdzielny> 2. Create md0 from sda/b1 and md1 from sda/b2
<cryptodan> so this is software raid?
<jdzielny> yes.  configured using the Ubuntu Server installer
<cryptodan> ah mine is hardware raid via controller so your setup is different
<jdzielny> yeah :-\
<cryptodan> and mdraid is gone and as such dmraid is in play I believe in Ubuntu 14.04 LTS
<jdzielny> the issue seems to be with ubiquity, not with grub
<jdzielny> then again, I could be wrong
<jdzielny> anyway, here's the rest, maybe someone else can see and give some insight
<jdzielny> 3. md0 is configured as ext3 mounted at /boot (intended to be used for the boot files with /dev/sda and /dev/sdb as the bootloader disks)
<jdzielny> 4. md1 is configured as physical volume for LUKS
<cryptodan> try not using encryption and see if you succeed
<jdzielny> brb
<jdzielny> had to restart the laptop i'm on
<jdzielny> cryptodan, trying the raid 1 install without any encryption or LVM.  just 4 partitions on each hdd (/boot, /, swap, and home), arrayed into /dev/md0, md1, md2, and md3 respectively
<cryptodan> kk
<jdzielny> cryptodan, okay it installed without the same error as before (I think)
<jdzielny> Aside from pulling one or the other hdd out, how can I check that it actually booted from /md0 and not from /sda or /sdb?
<cryptodan> thats the only way
<jdzielny> using grep /dev/md /etc/fstab I verified at least that the filesystems actually are on the RAID devices
<jdzielny> confirmed that / is on md1, /boot on md0, etc. all where they should be
<cryptodan> so it sounds like the issue was in your luks setup
<jdzielny> so far anyway
<jdzielny> LoL cryptodan for future reference, make sure /proc/sys/dev/raid/speed_limit_min and _max are set high
<jdzielny> was trying to figure out why it was so slow (1000KB/sec resync, really?!?!), speed limit was set to 1k
<jdzielny> lol
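The knobs jdzielny mentions are ordinary sysctls, so a sketch of checking and raising them (values are KB/s per device; 50000 is only an example):

```shell
# current floor and ceiling for md resync speed
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

# raise the floor so the rebuild isn't throttled to a crawl (as root)
sysctl -w dev.raid.speed_limit_min=50000

# watch the resync progress
cat /proc/mdstat
```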
<jdzielny> I'm gonna try redoing the install with crypto on and see if it works
<jdzielny> the only annoying thing is that if you use crypto for all of the partitions, you have to type in the passphrase that many times at least once.  once the system is up and running you can create keyfiles for all the partitions, but holy f**k it's annoying that the installer doesn't just work right
<cryptodan> lol
<lnxmen> Hello
<lnxmen> Is there any magic that I changed one file and my site is totally broken?
<lnxmen> I returned everything to previous situation.
<lnxmen> And site still does not work.
<lnxmen> No error.
<lnxmen> How come?
<bekks> Depends on the file you changed :)
<lnxmen> bekks: It was a template.
<bekks> And?
<lnxmen> And nothing more.
<bekks> Whats the filename?
<lnxmen> template.html.php
<lnxmen> As I said I returned original one.
<lnxmen> But it's still blank page.
<bekks> Could you provide a bit more information please? Like which Ubuntu do you have, which webserver, where did you get that file from, where is it stored, erc.?
<bekks> *etc
<lnxmen> bekks: surely.
<lnxmen> It's Ubuntu 14.04 LTS
<lnxmen> apache2
<bekks> You dont have to press enter after every three words ;)
<lnxmen> File was sent by my client - he said it's original one to the framework (PHPFox)
<lnxmen> Okay.
<lnxmen> So, I changed them.
<lnxmen> And next I got blank page. (wtf?!)
<lnxmen> Okay, I thought file is broken or something, so I changed files one more time (to the file that was there earlier).
<lnxmen> And despite I reloaded everything (apache, mysql) I get blank page.
<lnxmen> No error.
<lnxmen> And yes, I have set php to display errors.
<lnxmen> In index.php as well as in php.ini.
<lnxmen> bekks: I don't really know what is the problem.
<lnxmen> No error given, no warning. I checked every log that is related to this site.
<lnxmen> Literally nothing.
<lnxmen> Do you need something more?
<lnxmen> Any idea?
<lnxmen> I configured everything on this server, so I believe it's well done.
<lnxmen> Permissions are okay.
<lnxmen> I reloaded cache. It works again.
#ubuntu-server 2016-01-04
<vivek_> hello all, is there a way to setup default quota limits for each new user that gets created in the system on ubuntu server?
<repozitor> is there any idea why my firewall doesn't work?
<repozitor> i installed firewall-cmd and now its status is running, rules are fine
<repozitor> but some denied ports are open from outside of the server
<repozitor> for example nmap shows me 587 is open, but i expect firewall-cmd to deny it
<ankitkulkarni> Hey Guys, I have to create my own distributable ubuntu iso so that I can install it again on my own pc using a pendrive . I have tried distroshare, pinguy builder , remastersys (although not available now for 14.04) . ISOs are created but no luck in installing them . I created a 14.04 iso , it was of 2.2 gb around but takes a lot of time to install . Normal ubuntu would take only a few minutes while the distroshare one takes hours to install . Any
<ankitkulkarni> good way to create small distributable iso from my current installation .
<ikonia> I'd look at what in the process is taking time to install and why
<ikonia> and define " a lot of time "
<ikonia> a default install is actually very small, and if you're installing 2.2GB media it will take longer
<ankitkulkarni> ikonia, yeah . "a lot of time " is actually different when created with different methods . From distroshare it takes almost 3+ hours to install the system .  With remastersys it got stuck on choosing the install option .
<ankitkulkarni> ikonia, I will try to debug what process exactly is taking time
<lordievader> Good morning.
<teward> anyone know a way to migrate a dovecot install completely from one server to another?
<teward> it's on an old 9.10 server, now, trying to migrate it to a 14.04 server, but i'm getting dovecot auth issues :/
<teward> (and yes I know 9.10 is old, but i'm not asking for help with that part, merely migrating it to a 14.04 system)
<Montgallet> http://serverfault.com/questions/105804/how-can-i-migrate-dovecot-from-one-server-to-another
<Montgallet> migrate dovecot to postfix possible too google search
<teward> erm, i'm trying to migrate dovecot -> dovecot
<coreycb> jamespage, keystoneauth1 has an optional dependency on requests-kerberos as a plugin, which I'm considering moving to Suggests in d/control instead of MIRing python-requests-kerberos
<jamespage> coreycb, that sounds sensible
<coreycb> jamespage, ok
<teward> Montgallet: what's unclear from the imapsync is whether I have to provide every single user's authentication to make it sync - if so, most of these have authentication that's not readily available - there's an in-house listserv solution that relies on this, so the passwords are stored internally, NOT readily able to be provided...
<coreycb> jamespage, zul: can you add a team subscriber to python-senlinclient, python-openstacksdk, python-keystoneauth1?
<jamespage> coreycb,
<jamespage> yes
<coreycb> jamespage, thanks
<jamespage> coreycb, done
<coreycb> jamespage, can you sponsor these uploads from the debian/mitaka branches?  http://paste.ubuntu.com/14401092/
<jamespage> coreycb, hey do we have a bug for the liberty point releases? I have a spare 1hr so could help with those updates if need be
<coreycb> jamespage, ah thanks. let me check.
<coreycb> jamespage, I don't think we have one yet
<rbasak> nacc: o/
<nacc> rbasak: hey!
<rbasak> nacc: so looking at bug 1318317, I first wondered what the intention was of the original author of the init script.
<ubottu> bug 1318317 in openipmi (Ubuntu) "openipmi startup script removes kernel modules" [High,Confirmed] https://launchpad.net/bugs/1318317
<rbasak> nacc: which led me to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=614394 and https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=539416 that suggest that it isn't packaged with an init script in Debian, so the bug relates to the functionality Ubuntu added and is missing from Debian.
<ubottu> Debian bug 614394 in openipmi "Suggestion: init.d script and config file for openipmi" [Normal,Open]
<ubottu> Debian bug 539416 in openipmi "debian/rules: debian/openipmi.init: added init-script." [Wishlist,Open]
<rbasak> nacc: and you say that the init script comes directly from upstream?
<rbasak> Doing some archeology, it looks like the init script was added by Ubuntu back in 2008! From https://launchpad.net/ubuntu/+source/openipmi/+changelog
<rbasak> nacc: so usually I'd like to minimise our work in Ubuntu by coordinating with the original upstream source of anything we want to change, to see if we can do it there instead so we don't have to maintain being different from the world.
<rbasak> nacc: in this case it sounds like nobody else is looking after the init script anyway, so I think it's fine for you to polish it up and we can upload that.
<nacc> rbasak: well, i think it came from upstream, at least? and has since been modified ... but they are too similar to not have a common parent, it seems like
<rbasak> nacc: perhaps we should update to the latest upstream script then, assuming it works?
<nacc> rbasak: the last upstream change to ipmi.init was june 2010
<rbasak> nacc: and then fix the bug if necessary, and send upstream that patch?
<nacc> rbasak: i don't think it does ... but i can test it separately
<nacc> rbasak: upstream git vs. xenial: 34 insertions(+), 80 deletions(-)
<nacc> rbasak: but i see what you are driving at ... and that's why i sent an e-mail upstream
<nacc> but crickets since; let me subscribe to that list and see if my e-mail was just ignored/lost
<rbasak> nacc: thanks. As long as we do our best to contact upstream and send patches I think that's the best we can do.
<nacc> rbasak: yep, so ... would you prefer i fully cleanup the init-script? it seems to be logically a bit circuitous; or is it better to just make the minimal changes locally?
<rbasak> nacc: I'm not sure there's much point in fully cleaning it up if upstream aren't responsive and aren't likely to accept it.
<jamespage> coreycb, gah - our charms will need updating to support a .1
<jamespage> that decision lacked foresight...
<rbasak> nacc: so I think making the minimal change in Ubuntu is fine for now.
<nacc> rbasak: ok, I'll wait for testing results and then fixup my patch (should be a no-op for testing purposes still)
<nacc> rbasak: i'll also take a peek at the effect of using the upstream init-script
<rbasak> nacc: as long as it doesn't break any use cases our users might have except in some reasonable way. So for example I don't want to fix POWER at the cost of some regression on some Intel hardware. If that happens then we might need to clean it up to support both cases or something.
<rbasak> nacc: OK, thanks. Let me know when you're ready. If you think that on balance your change fixes things and probably doesn't break things, that'll be fine I think.
<rbasak> nacc: ideally IMHO /etc/modules shouldn't be required to be used for this at all, since the logic should be in one place.
<rbasak> nacc: I think that's the case with your current patch?
<coreycb> jamespage, darn, hopefully it's not too invasive
<balrogg_cs> Good afternoon everyone , someone could direct me to a good tutorial on VPN for Ubuntu Server ?
<nacc> rbasak: right, notionally, /etc/modules is separate and unconsidered; what my patch does functionally is make it non-fatal if some HW modules (as specified in the config file) fail to load
<rbasak> nacc: that sounds reasonable given that the package tries to load those by default (I think?) and hardware exists that works without those modules. Ideally we'd detect and load only required modules by default but I don't think there's any need to go that far.
<nacc> rbasak: yeah, it's more to do (at least with this bug, in particular) with some confusion (IMO) on the OP's part as to how to control the IPMI modules at runtime. Basically, the openipmi 'service' just loads whatever the config file says to load; which by default is only ipmi_si and ipmi_devintf -- hence why other stuff appears as unloaded, because that's the OpenIPMI configuration
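The configuration nacc describes lives in /etc/default/openipmi. An illustrative fragment (only IPMI_SI is named in the discussion above; the second variable name is assumed, not verified):

```
# /etc/default/openipmi (illustrative fragment)
IPMI_SI=yes        # load the ipmi_si hardware-interface module
IPMI_DEVINTF=yes   # assumed name: load ipmi_devintf so /dev/ipmi0 exists
```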
<nacc> but then secondarily, the Power folks complain because ipmi_si fails to load at all on their system (and the default config has it required) and so the init-script bails out
<nacc> it might be better to treat it as two separate bugs? not sure
<rbasak> I think I also see it as two bugs, but maybe I draw the line in a different place.
<rbasak> I think "fails by default on Power" is the real bug here from a UX perspective.
<nacc> rbasak: yep, I'd agree
<rbasak> "Doesn't intelligently pick what modules to attempt to load by default" is a secondary bug I think, which I guess is much lower priority since we aren't aware of a way that would actually impact a user right now.
<jamespage> coreycb, hey - I've uploaded what I can to https://launchpad.net/~james-page/+archive/ubuntu/wily/+packages for testing
<rbasak> Even though fixing that one would be nice.
<nacc> rbasak: so the alternative fix, which might be more palatable, is to just change the default value for IPMI_SI to "no" on Power
<coreycb> jamespage, thanks
<jamespage> however I need to think about the ch fix for version detection - may need to match on major/minor version only
<rbasak> nacc: sure, but then you need to detect when to do that, and it's tricky to do in packaging because of conffile rules.
<nacc> rbasak: right, and that's what I have sort of asked the upstream community about ... and whether the init-script should be mucking with modules that aren't set to "yes" in the first place (since they might be being controlled by something else, e.g.)
<rbasak> (/etc is the sysadmin's domain, and changing it from scripts needs to respect any local changes the sysadmin made, and that's tricky)
<nacc> rbasak: yeah, I wasn't sure that would be "easier" by any means :)
<rbasak> :)
<rbasak> Perhaps the default value should be "autodetect" and the script enhanced to be able to use that.
<rbasak> But anyway, that's to fix the secondary bug, which I'm happy to leave.
<nacc> rbasak: yep
<rbasak> File it if you wish. It is useful to keep track of these things, since this bug will get resolved and forgotten.
<rbasak> Then if anyone in Debian, Ubuntu or upstream wants to make things better at least there's a list.
<nacc> rbasak: so the remaining/open question is if anyone relies on some /etc/default/openipmi configured modules not loading as fatal
<rbasak> Right.
<rbasak> That's a very good question.
<nacc> it's still detectable, for what it's worth
<nacc> the $? will have bit 1 set
<nacc> but it's not going to just quit
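The behaviour nacc describes can be sketched in shell. This is a hypothetical stand-in for the init script, with a dummy loader in place of modprobe, not the actual openipmi code:

```shell
# Hypothetical sketch of the proposed non-fatal behaviour: a failed
# module load sets bit 1 of the return value instead of aborting.
try_load() { false; }   # stand-in for `modprobe "$1"`; always fails here
RETVAL=0
for mod in ipmi_si ipmi_devintf; do
    try_load "$mod" || RETVAL=$(( RETVAL | 2 ))
done
# The script no longer quits early, but a caller can still detect the
# failure because bit 1 is set in the accumulated return value:
echo "RETVAL=$RETVAL bit1=$(( RETVAL & 2 ))"
```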
<rbasak> Presumably if someone is depending on that being fatal, then something will be fatal later?
<rbasak> That is: something will be fatal later anyway, if we make the current failure non-fatal?
<rbasak> If you've considered it and can't think of any other way, I think it's OK to fix it as you proposed unless/until someone points something out.
<nacc> yeah, that's true; and as far as the end-user is concerned, the "fatal"ness of it is detected the same (technically), by that $? value. They just won't see "[fail]" from the init-script ... but I'll take a closer look today, given what you've said
<nacc> rbasak: thanks for the talkthrough, I appreciate it -- I probably will have some follow-up questions later about your archeology & investigation
<rbasak> nacc: no problem!
<coreycb> zul, can you sponsor this upload to xenial? https://code.launchpad.net/~corey.bryant/ubuntu/+source/python-keystoneauth1/+git/python-keystoneauth1
<zul> coreycb: uscan --verbose --download failed
<coreycb> zul, I've fixed that up and I'll push that to the debian repo separately
<nacc> rbasak: ah ha! since we are setting RETVAL still, even though we proceed (rather than aborting early), we still see the service 'fail' to start/restart
<rbasak> nacc: good catch!
<genii> Does anyone know of a way to use a secondary screen as the primary with just CLI, no X/xrandr ?
<TJ-> genii: per-output framebuffer controls are via 'fbset', to swap primary<>secondary I suspect you'd need to do something to change the default from /dev/fb0
<TJ-> genii:  'con2fbmap' should do what you want
<genii> TJ-: My main prob is I have server installed headless on an old laptop which has had its screen removed. Ssh-ing in works fine but I want to set it up so that I can plug a monitor in locally and have it work if something happens
 * genii investigates con2fbmap
<TJ-> genii: you'll need to ensure there are framebuffers configured for the active outputs and a DRI/KMS driver active
<wxl> hey folks is there anyone within the server community that tends to deal with ppc stuff?
<rbasak> wxl: ask your question and find out!
<wxl> rbasak: it's not so much a support issue as an architectural one. i have been contacted by a guy with some ppc64 hardware who's trying to get *ubuntu* (so not just server) working on it and he had some question about the potential of releasing multiple kernels, which apparently debian does upstream. i'm just trying to find him a good contact to chat with.
<RoyK> wxl: it may be a good idea to use debian instead, then, since ubuntu is rather focused on x86/x64
<wxl> RoyK: except for lubuntu, ubuntu mate, and server.
<smoser> i dont know that "ubuntu is rather focused on x86/x64"
<smoser> right.
<RoyK> wxl: if you look at the updates on ubuntu, it holds a rather strong record of not updating anything that's not mainstream
<sarnold> wxl: are we talking ancient stuff like a g5 mac or new shiny stuff from ibm or others? :)
<wxl> sarnold: new, shiny.
<RoyK> wxl: may I ask what sort of hardware?
<wxl> RoyK: while i don't argue with you, this is orthogonal
<sarnold> wxl: http://cdimage.ubuntu.com/releases/14.04/release/
<sarnold> wxl: ppc64el iso is probably a fair starting point :)
<wxl> here's what he's told me so far: dual-core, 64-bit, non-Altivec-enabled e5500 core
<wxl> he also sent a pic http://tinypic.com/view.php?pic=1zbtlk9&s=5
<wxl> he really just needs someone to chat at that might have the slightest inkling as to what he might be talking about XD
<sarnold> "remove at risk of death"
<sarnold> there's a warning you don't see every day
<wxl> apparently he's pushed support for it into the 4.4 kernel on his own
<wxl> but i guess this is a Book-E CPU and the kernel needs to be compiled with Book-E support rather than Book-S support thus the question about alternative kernels
<wxl> here's an older article that discusses Book-E if it's helpful https://www.kernel.org/doc/ols/2003/ols2003-pages-340-350.pdf
<sarnold> aha :) now we're on to something. perhaps #ubuntu-kernel then?
<wxl> ah maybe that would be best. great idea sarnold. thanks!
<sarnold> wxl: if book-e vs book-s is a mutually exclusive choice, I wouldn't be too surprised if someone wants a support contract to make it worthwhile :) hehe
<wxl> sarnold: money always helps XD
<tsimonq2> ^
<lost1nfound> hey guys, ive had a pretty serious production problem since upgrading from 15.04 to 15.10 on my EC2 m4.xlarge/c4.xlarge high-traffic web instances. the instances just become unreachable and fail status checks 2-4 times per day. seen ifquery segfaults in logs as well, but nothing logs once they go into this state, so this is a bit tricky to debug. been working with aws support for 3 weeks and no
<lost1nfound> luck. has anyone else had similar issues? or could maybe point me in the right direction?
<lost1nfound> my other 15.10 instances are fine (20+) but these 10 machines crash 2-4x a day. i have an askubuntu post about it but it hasn't gotten any response, and im not sure where to go from here: https://askubuntu.com/questions/710747/after-upgrading-to-15-10-from-15-04-ec2-webservers-have-become-very-unstable
<TJ-> lost1nfound: have you kept an open ssh session tailing the /var/log/kern.log to try to capture useful clues?
<lost1nfound> i have, but the connection times out and i never see anything around the time of the crash
<TJ-> OK, and have AWS shared any diagnostic clues with you?
<TJ-> things they've ruled out maybe?
<TJ-> those segfaults in ifup all look to be ocurring early in the life of the boot, and we could deduce there is a problem with the virtualised interface not being completely ready in some strange way
<lost1nfound> TJ-: sorry, someone pulled me into a meeting. they have not, other than telling me they've thoroughly examined the host/hardware diagnostics and determined there's no problem
<lost1nfound> fairly early in the boot, one im looking happened right after remounting the root filesystem
<TJ-> is there any pseudo-console (IPMI/KVM) options other than SSH to connect to the guest?
<lost1nfound> at 2.817xxx. so not very early; the network adapter had already been initialized, etc
<lost1nfound> i dont think aws offers that :( let me look again
<TJ-> 2.8 seconds is early in terms of uptime though
<TJ-> many things happen in parallel at boot-time; those could be due to some unusual race condition
<lost1nfound> ah here we go, i found the console log and indeed, one of my frequently crashing machines logged [171009.844097] general protection fault: 0000 [#1] [ 0.000000] Initializing cgroup subsys cpuset
<lost1nfound> immediately before reboot
<TJ-> typical that there's no stacktrace with it
<lost1nfound> right, yeah. i wonder if theres any way to enable any debugging information on one of the systems
#ubuntu-server 2016-01-05
<EmilienM> jamespage, coreycb: would it be possible to have Rally ( https://launchpad.net/ubuntu/+source/rally ) into Trusty?
<sarnold> lost1nfound: investigate kexec/kdump, that might work in aws..
<[Mew2]> is it possible to forward incoming connections on port 80 to another port?
<[Mew2]> or i must open port 80 to do this
<lost1nfound> sarnold: thanks! i configured kdump per https://help.ubuntu.com/stable/serverguide/kernel-crash-dump.html on 4/10 machines so hopefully i can get some good debugging info sometime later today when one crashes
<lost1nfound> and then i guess i should file a bug report with that data?
<sarnold> lost1nfound: sweet! yeah, that's probably the next step
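For reference, the serverguide's kdump setup boils down to enabling it in /etc/default/kdump-tools and reserving crash memory on the kernel command line; the reservation value below is illustrative, not a recommendation for this workload:

```
# /etc/default/kdump-tools (illustrative)
USE_KDUMP=1

# and on the kernel command line (e.g. via GRUB_CMDLINE_LINUX_DEFAULT),
# an illustrative crash-memory reservation:
# crashkernel=384M-:128M
```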
<sarnold> [Mew2]: you could probably achieve the same thing using iptables NAT portfowarding, but that's not exactly trivial
<lost1nfound> awesome, thanks for the help, will do
<sarnold> good luck lost1nfound :)
<lost1nfound> thanks :)
<sarnold> [Mew2]: (note that's a guess on my part that iptables NAT can do it -- iptables can be made to do nearly anything, and I do know that you can do port forwarding / manipulating as part of NAT processing.. doing it onthe same IP might be different though.)
<[Mew2]> ok thanks sarnold
<imincik> Hi all, does anybody knows when we can expect Vagrant images for Xenial on https://cloud-images.ubuntu.com/vagrant/ ?
<xmj> moin!
<xmj> I just saw one of our engineers reporting that machines were 'stuck' on "Stopping System V runlevel compatibility", when connecting a terminal to them
<xmj> where 'stuck' means that they perform as expected, but to get to a login prompt you'd have to CTRL ALT F1.
<xmj> Anyone else seen this on 12.04.1?
<[Mew2]> sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 6000 <-- is this the correct command to forward all traffic from port 80 to 6000?
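The rule [Mew2] pasted is valid iptables syntax (the option is canonically spelled --to-ports, though --to-port is accepted). The equivalent in iptables-restore format, with the interface name carried over as an assumption:

```
# iptables-restore style fragment (illustrative, e.g. /etc/iptables/rules.v4)
*nat
# Redirect inbound TCP port 80 arriving on eth0 to local port 6000
-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 6000
COMMIT
```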
<jamespage> smb, got to the bottom of that jdb process issue i had prior to xmas - looks like the cgmanager backport to trusty is causing problems when the mount happens prior to cgmanager starting
<jamespage> indeed I'm running with the one in updates, not backports...
 * jamespage sighs
<halvors1> Hi. Ubuntu 16.04 Beta 1 ships with Apache 2.4.17. Isn't it supposed to include the mod_http2 module?
<halvors1> Or is it in a separate package?
<rbasak> halvors1: on advice from the security team, we are not building http2 support deliberately since 2.4.17-1ubuntu1
<rbasak> mdeslaur, sarnold: do we have a release note bug for the http2 drop, OOI? Shall I create one?
<halvors1> rbasak: Is there a security risk?
<rbasak> halvors1: that's a question for the security team that I can't really answer, sorry. All I know is that they don't want it for security reasons. Could be LTS-length supportability rather than an immediate risk.
<rbasak> I accept that we need to communicate their real reason, so I'll try and make sure that there is something in the release notes with an explanation at release time.
<halvors1> You're saying the LTS release of Ubuntu won't include mod_http2 when it is released?
<mdeslaur> rbasak: I don't think it was a security concern, it's the fact that it's marked "experimental", and libnghttp2 needs a MIR security audit
<mdeslaur> rbasak: if you want to support the code, including possibly doing a substantial backport if it changes, I don't have an issue
<rbasak> mdeslaur: ah, my misunderstanding, thanks. Is that the same reason as nginx? I don't see a libnghttp2-dev build-dep in nginx in Debian.
<mdeslaur> rbasak: I have no idea why it was disabled in nginx, I wasn't part of that discussion, sorry
<mdeslaur> we definitely do need to ship http2 in the lts release at some point, whether that's shipping "experimental" code right away, or releasing an SRU of a backport of whatever is considered stable at some time in the future is up to the server team to decide, IMHO
<mdeslaur> rbasak: libnghttp2 is all new code, and even got a CVE this week, so if we need it in main, now would be the time to do the MIR paperwork
<rbasak> teward: ^ who were you working with on that, please?
<rbasak> mdeslaur: noted, thanks.
<teward> mdeslaur: rbasak: sarnold
<teward> mdeslaur: I asked sarnold whether the Security team had any concerns with the 1.9.6 merge, sarnold said to disable HTTP/2
<teward> boooo, evil IRC client double post >.<
 * teward kicks hexchat out the window
<mdeslaur> teward: and nginx doesn't require libnghttp2? it has its own implementation?
<teward> mdeslaur: AFAIK yes, I've not had to pull that in as a build dep to make HTTP/2 work
<teward> tested with 1.9.6 in a Xenial VM too just to make sure, with three SSL / HTTPS checker scripts to confirm h2 was a valid proto offering
<mdeslaur> ok, perhaps it's worth discussing all 4 of us once sarnold is online
<teward> ok
<teward> https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1510096/comments/2 may be relevant
<ubottu> Launchpad bug 1510096 in nginx (Ubuntu) "Please merge 1.9.6-2 (main) from Debian Unstable (main)" [Wishlist,Fix released]
<teward> as I asked sarnold to post to the merge bug so there's at least a paper trail
<teward> perfect timing for the pings, too, I was about to put a 1.9.9 update into Xenial since Debian's dragging their heels
<teward> so i'll put that on hold
<mdeslaur> ok, lets discuss this again with sarnold
<rbasak> acvk
<rbasak> ack
<teward> ack
<teward> mdeslaur: what's missing here is that sarnold and I discussed this as well, i think the idea was to turn it back on once the Sec team was OK with the 'real world exposure'
<teward> either as SRU or before release date
<teward> though, i will say that with SPDY now *gone* in 1.9.5+, HTTP/2's the only thing that would replace it
<mdeslaur> having spdy gone is a big plus
<teward> +1 there
<teward> mdeslaur: may want to wait for the reply to my question to the nginx-devel mailing list
<teward> i've pushed the question on HTTP/2 up there to be *certain*
<teward> when anyone from @nginx.org replies then that's pretty much an authoritative answer, but AFAIK it's their own implementation that has built and worked without libnghttp2
<teward> http://mailman.nginx.org/pipermail/nginx-devel/2016-January/007751.html is the base message, please let me know if I missed anything
<teward> actually
<teward> mdeslaur: rbasak: confirmed authoritative answer: what I thought I knew is correct - NGINX's HTTP/2 implementation is their own, and only has a dependency on OpenSSL 1.0.2+ with ALPN TLS extensions.   http://mailman.nginx.org/pipermail/nginx-devel/2016-January/007752.html
<teward> mdeslaur: rbasak: so nginx won't need to require libnghttp2
<teward> perhaps this is why sarnold wanted http/2 disabled for now, to let the nginx implementation get some 'real world' exposure for a while to rule out security risks?
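For context, enabling it on the nginx side (1.9.5+, when built with HTTP/2 support) is a one-word change to the listen directive; server name and certificate paths here are placeholders:

```
server {
    # `http2` on the listen line enables NGINX's native HTTP/2 support
    listen 443 ssl http2;
    server_name example.com;                   # placeholder
    ssl_certificate     /etc/ssl/example.crt;  # placeholder
    ssl_certificate_key /etc/ssl/example.key;  # placeholder
}
```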
<teward> stupid question, but when's the next server team meeting
<arges> teward: isn't it in 8 minutes?
<arges> jgrimm: ^^^
<jgrimm> arges, correct
<genii> According to the fridge, 4pm ( I assume GMT)
<teward> arges: unfortunately phone calls are evil
<teward> rbanffy: ^
<teward> erm
<teward> rbasak: ^
<rbasak> teward: ?
<arges> teward: its an IRC meeting in #ubuntu-meeting going on right now
<arges> rather it just ended
<jge> Hey all, my server is notifying me that it has 7 updates which are security updates but when I do apt-get update && apt-get upgrade, I only see 3 upgrades which have been held back
<jge> where are the remaining 4?
<jge> only main and security repos enabled
<Sling> jge: and dist-upgrade ?
<jge> Sling: that did it, but it installed regular updates too.. was that because the security updates needed them?
<Sling> dist-upgrade installs all available upgrades
<Sling> including kernel updates
<jge> ahh, even if I only have main/security repos available?
<jge> enabled*
<yoink> jge usually it's because the packages listed with security updates also require new packages which aren't installed
<yoink> jge if you enable the "mail" option in the unattended-upgrades config, you'll get a detailed email about which packages listed as security updates were held back.
<yoink> I usually manually 'apt-get install' those packages and will be prompted for the extra dependencies that aren't installed.
<yoink> for example this recently happened with the mariadb security updates on 14.04
<nacc> rbasak: quick q on logwatch ... I noticed that one of the remaining changes was moving libsys-cpu-perl from Recommends to Suggests (your change in vivid/7.4.1-2ubuntu2). Debian added libsys-meminfo-perl to Recommends in the meanwhile. The change is trivial to move libsys-meminfo-perl also to Suggests in control, but can i just put that in the same line in the remaining changes section as the libsys-cpu
<nacc> -perl in the changelog?
<rbasak> nacc: if it's a new change, then it isn't a remaining change, so it should be in a separate top-level bullet point below rather than as a subitem of the "remaining changes" bullet.
<rbasak> nacc: note that we usually move things from recommends to suggests in an Ubuntu delta because the package is in main and the recommend is not, which isn't allowed. A suggests is allowed in that case though. So libsys-meminfo-perl does need to be moved if it is universe, but not if it is main.
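Concretely, the delta rbasak describes is a one-line change in debian/control; the surrounding field is illustrative:

```
# debian/control (illustrative excerpt)
Package: logwatch
# Moved from Recommends: a package in main may not recommend packages
# in universe, but a Suggests on them is allowed.
Suggests: libsys-cpu-perl, libsys-meminfo-perl
```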
<nacc> rbasak: ok, i'll verify that, thanks
<nacc> rbasak: yeah, i understand the point about remaining vs. not; but was just wondering as logically it's the "same" change. So in a future merge, we could combine them to one line?
<rbasak> nacc: that's right. In a future merge we'd combine them to the same line, but for the first merge that includes it, we call it out.
<nacc> rbasak: ok, and for tracking this, should I open a new LP bug? similar to LP 1387817
<ubottu> Launchpad bug 1387817 in logwatch (Ubuntu) "New Upstream Version 7.4.1" [Undecided,Fix released] https://launchpad.net/bugs/1387817
<rbasak> nacc: if we want a bug to track this, we usually have one that is entitled "Please merge logwatch 7.4.1+svn20150731rev294-1 from Debian" or similar with a tag "upgrade-software-version". And the changelog should auto-close that merge bug.
<rbasak> nacc: assign yourself to the bug too please. That helps avoid someone else working on it concurrently.
<rbasak> (though that sort of thing still does happen)
<nacc> rbasak: ok, will do
<nacc> rbasak: still working through the steps from the wiki, etc, but working on it
<nacc> rbasak: ok, does this look reasonable? still want to test, etc, but: https://launchpad.net/~nacc/+archive/ubuntu/logwatch/+packages
<jak2000> how to check if port 80 is opened?
<jak2000> and wich service use port 80
<tarpman> nc -v 1.2.3.4 80
<tarpman> netstat -ltpn | grep 80
<jak2000> 1.2.3.4 is the local ip?
<tarpman> is whatever ip you're wondering about port 80 on
<jak2000> http://pastie.org/10672122
<tarpman> -p only works if you run it as root
<jak2000> ahh
<jak2000> 1050/nginx
<tarpman> sounds right
<jak2000> tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1050/nginx
<jak2000> how to disable ?
<jak2000> killing?
<tarpman> if it's nginx from apt -> service nginx stop
<sarnold> probably service nginx stop or something similar
<tarpman> to disable it permanently -> systemctl disable nginx
 * tarpman wonders whether update-rc.d ever ended up learning about systemd
<tarpman> oh, it did! :)
<jak2000> systemctl: command not found
<jak2000> cant disable permanently
<tarpman> oh, are you running a pre-systemd release?
<tarpman> then echo manual > /etc/init/nginx.override
<tarpman> (IIRC)
<tarpman> been a while since I touched upstart, someone had better confirm that :)
<jak2000> sudo echo manual > /etc/init/nginx.override
<jak2000> -bash: /etc/init/nginx.override: Permission denied
<tarpman> well, yeah
<tarpman> echo manual | sudo tee /etc/init/nginx.override
<tarpman> would work
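The pitfall is that the shell performs the `>` redirection before sudo ever runs, so the file is opened with the caller's own privileges. The fix can be demonstrated without root by substituting a temp directory for /etc/init:

```shell
# `sudo echo manual > /etc/init/nginx.override` fails because the
# redirection happens in the *unprivileged* shell; piping into tee
# moves the file write into the (normally sudo-elevated) tee process.
dir=$(mktemp -d)                               # stand-in for /etc/init
echo manual | tee "$dir/nginx.override" > /dev/null
cat "$dir/nginx.override"                      # prints: manual
```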
<bearface> update-rc.d nginx disable
<sarnold> sudo and echo .. >  don't combine :)
<tarpman> bearface: does that handle upstart?
<jak2000> echo manual | sudo tee /etc/init/nginx.override
<jak2000> say manual
<tarpman> yes, that's what tee does.
<jak2000> done
<jak2000> update-rc.d nginx disable
<tarpman> jak2000: please read manpages of commands if you don't know what they do.
<jak2000> tarpman tahnks
<jak2000> and yes
<jak2000> reading
<bearface> tarpman: err, fair point, not sure
<jrwren> pretty sure it does.
<jrwren> err, nope.
#ubuntu-server 2016-01-06
<lynxman> morning o/
<Jeeves_Moss> has anyone had an issue with VMWare not keeping the same order of NIC cards?  Mine randomly swaps the NICs around in the OS, and I can't figure out how to lock a MAC address in the OS to a specific ETH#.  Ideas?
<rharper> rbasak: o/
<kickinz1> o/
<nacc> o/
<genii> Jeeves_Moss: That's usually done in regular *buntu by editing the /etc/udev/rules.d/70-persistent-net.rules file, but I'm not sure if it applies inside a VM
<Jeeves_Moss> genii, that's the route I was thinking of taking.  it's just getting REALLY annoying that every time I reboot, things "move around" on me
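The file genii mentions pins a MAC address to an interface name, one rule per NIC; the MAC below is a placeholder:

```
# /etc/udev/rules.d/70-persistent-net.rules (illustrative; MAC is a placeholder)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:aa:bb:cc", ATTR{type}=="1", NAME="eth0"
```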
<rharper> nacc: kickinz1 tells me that rbasak is out for now, I think we'll reschedule
<nacc> rharper: ah ok, thanks
<norc> Hi. We are operating many dozen virtual Ubuntu servers on some large ESX hosts. Currently we are hitting our limit with manual setup, configuration and maintenance. Things we want to avoid: People making local unauthorized and undocumented changes. Manual provisioning.
<norc> Things we want: A central interface from which to maintain and provision our servers.
<norc> Automated and orchestrated changes (for things like when Heartbleed was fixed) across the infrastructure.
<norc> What real options do we have here?
<norc> (The servers run things like databases, apache web servers, mail related things, radius, ldap)
<nacc> norc: http://www.devopsbookmarks.com/config-management may be helpful
<szer> #ubuntu just told me about this place. Yay!
<szer> Ubuntu Server with Samba in a windows environment. Using windows groups in smb.conf like: valid users = @"domain\users group" And as part of that group, I get a popup for credentials when trying to connect to the share.
<szer> (i am logged in as a user who is part of that group, though too)
<szer> This is in a domain. If I try typing in my creds, i get cred prompt again
<szer> I see in the samba howto for ubuntu server a thing about apparmor and I do not have a apparmor.d/usr.sbin.smbd file
<szer> But I do have other shares already on this machine (but they are all guest ok = yes)
<szer> Any ideas?
<nacc> rharper: so i've been looking at the logwatch merge ... and searched briefly on lp and see there are 12 open bugs, many of which are not yet resolved ... does it make sense to fix some of those at the same time as merging? Or is it generally that a merge doesn't add new fixes and then a follow-on update might?
<OerHeks> szer, maybe the domain needs capital letters >> valid users = @"DOMAIN\users group"
<szer> darn, I did put it in as DOMAIN\
<szer> Thanks OerHeks
<szer> Which is correct >.<
<szer> My bad on the description
<OerHeks> np, we keep looking, maybe you can pastebin the smb.conf ?
<szer> Surely can
<szer> I just inherited this server from another guy that was a little sketchy
<szer> I have replaced all instances of domain (which is upper caps) with DOMAIN
<szer> https://www.pastery.net/xcfqyw/
<szer> Near the bottom, there is the compliance share listed. This is the one I am trying to create
<szer> my user is part of compliance domain group
<szer> my admin user is part of Samba Admins domain group
<szer> in my head, either one should be able to connect to this share
<szer> I did restart smbd
<szer> and restart nmbd according to the samba wiki
<OerHeks> hmm i see nothing odd here
<szer> I guess I could try making another share and copying the format of all of the guest ok ones
<szer> just to make sure that it is something with the permissions
<szer> and not my process
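For reference, a share stanza of the kind being debugged might look like this; the group name is from the discussion, the path and remaining options are assumed:

```
[compliance]
   path = /srv/samba/compliance        ; placeholder path
   valid users = @"DOMAIN\compliance"
   guest ok = no
   writable = yes
```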
<stratus_ss> good day all. I am looking for some help with adding a Fedora 23 image to my PXE server (ubuntu 14.04). Can anyone help diagnose my problem? I believe its an issue with the menu but I cannot quite pinpoint it
<szer> OerHeks: Yup, copied one of the others, created the folder, restarted smbd
<szer> able to browse fine and create files fine
<szer> stratus_ss: What PXE server are you using?
<OerHeks> check the permissions on the folders themselves with ls -la. You can set valid users = any to make check if there are errors or not. The testparm command is also very helpful for the samba config file part. .. these steps helped me a lot
<OerHeks> oh, it should work ..
<stratus_ss> szer: I am using what I believe is the standard... with tftpd-hpa, dhcpd, pxelinux
<szer> OerHeks: i started out with group "DOMAIN\compliance" on the folder
<szer> and nobody for owner
<szer> also tried DOMAIN\Samba Admin
<szer> no joy when testing both
<stratus_ss> the issue is that every other distro is working (Ubuntu, Mint, CentOS 6 and 7, Clonezilla etc) but when I tried to add Fedora 23, I get dropped to a dracut shell
<stratus_ss> so I suspect its something wrong with my menu entry
<stratus_ss> http://pastebin.com/TsGng0Je
<szer> swapped to DOMAIN\Domain Users (the built in group), still no joy.
<stratus_ss> some text was cut off from the original paste: http://pastebin.com/kAMsa9j6
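Without seeing the paste resolved here, a common cause of being dropped to a dracut shell is the installer not being told where its stage2/repo lives. A typical pxelinux entry for a Fedora network install looks like this; paths and the mirror URL are placeholders, and this is a guess, not a diagnosis of the pasted menu:

```
LABEL fedora23
  MENU LABEL Install Fedora 23
  KERNEL fedora23/vmlinuz
  APPEND initrd=fedora23/initrd.img inst.repo=http://mirror.example.com/fedora/releases/23/Server/x86_64/os/
```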
<szer> stratus_ss: Hmmm, sorry, I don't know the fedora stuff :(
<szer> maybe fedora channel would be able to spot something?
<szer> Another place you could try is FOG imaging
<szer> can't recall what IRC server they are on
<szer> but they might have experience with it. I know they have forums as well. Just another option for getting some help
<szer> hmmm, even if I put in my user name in the smb.conf
<szer> and I chown the folder to my user name "DOMAIN\myuser"
<szer> I still can't browse to it
<szer> (of course running restart smbd between and testing)
<stratus_ss> szer: The Fedora guys actually sent me over here insisting that because its an Ubuntu PXE server its the server's fault
<szer> LOL
<stratus_ss> I know... *rolls eyes*
<szer> so the fedora peeps don't know what switches to use for their distro to install
<szer> Classic.
<stratus_ss> that was the implication...
<stratus_ss> well thanks for the reply anyways
<szer> yup. sry couldn't help ya
<szer> I was serious about FOG though
<szer> they are all using pxe booting
<szer> might take longer with the forums, but help is good :)
<stratus_ss> alright I will look into it... is FOG the Free Opensource Ghost project?
<szer> yup
<stratus_ss> I thought they only did windows stuff
<stratus_ss> though its been years since I looked at them
<szer> I think they even have a how to on live booting other distros from NFS
<szer> I know that I've seen forum posts about it
<stratus_ss> alright thanks for the pointers
<pmatulis> stgraber: is LXD container migrations supposed to work OOB? i got a "CRIU" binary does not exist error
<stgraber> pmatulis: no, it currently needs you to install CRIU (not in main) and be lucky enough to have a container that will migrate, this currently needs a rather bleeding edge version of both criu and the kernel
<stgraber> pmatulis: tych0 could tell you more
<tych0> yep, what stgraber said :). i just talked to the kernel folks, and they said 2-3 weeks for a 4.4 version, which will have all the seccomp support
<tych0> (a 4.4 version in xenial, that is)
<pmatulis> stgraber, tych0: thanks guys
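For reference, the experimental path stgraber describes might look roughly like this; the container and remote names are hypothetical, and as noted it needs a bleeding-edge criu and kernel on both ends.

```shell
sudo apt-get install criu                    # not in main; needed on both hosts
lxc move mycontainer otherhost:mycontainer   # stateful (live) move via CRIU
```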
<[Mew2]> Guys how do admins monitor ubuntu server?  Isn't to know everything that could compromise me, all ip trying to connect, all processes running and their usage etc..
<[Mew2]> Isn't = I want*
<[Mew2]> What if some apps don't log properly, can I have Ubuntu monitor?
#ubuntu-server 2016-01-07
<`jpg> It actually looks like Semisync in MySQL might be enough to port the Manatee state machine with reasonable safety guarantees.
<`jpg> The AFTER_SYNC mode will probably be required to ensure safety invariants are violated but it could definitely work.
<`jpg> *aren't
<[Mew2]> Anyone?
<sarnold> [Mew2]: there's loads of different monitoring tools.. nagios, icinga, bro, collectd, ..
<sarnold> [Mew2]: there's also loads of different log monitoring tools; kibana and elasticsearch seem to get a lot of press lately but I can't tell if that's just because the pictures are pretty or if there's something useful there
<sarnold> in fact the biggest problem is there's so many different tools that picking one and building on it might be difficult :)
<[Mew2]> Friend of mine had mentioned nagios
<[Mew2]> What things do server admins look for
<[Mew2]> Connections, usage, what else?
<sarnold> used memory, used syslog, used file descriptors, number of processes, number of blocked processes, ping latency, application response latency..
<[Mew2]> Nagios will do all of this?
<sarnold> maybe not all of it..
<sarnold> there seems to be a distinction between "measure these things and graph them over time" vs "check that things are up and responsive". I'm not sure why.
<[Mew2]> Hmm
<[Mew2]> Will nagios tell me what IP address have connected to that server?
<[Mew2]> Across all ports
<sarnold> probably not; that might take some custom iptables rules, though you might be able to collect them centrally using nagios once you have those rules written..
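One hedged sketch of the "custom iptables rules" idea: log every new inbound connection to the kernel log, and collect the entries from there.

```shell
# Log each new inbound connection (rule is illustrative; consider
# rate-limiting with -m limit on busy hosts):
sudo iptables -I INPUT -m conntrack --ctstate NEW \
    -j LOG --log-prefix "NEW-CONN: "
# Matching lines then appear in the kernel log:
grep 'NEW-CONN:' /var/log/syslog
```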
<[Mew2]> Hmm ok
<[Mew2]> I think I will start with nagios
<[Mew2]> Thank you so much sarnold :) <33
<sarnold> have fun [Mew2] :)
 * [Mew2] excited
<AlecTaylor> hi
<[Mew2]> So nagios is accessed through webbrowser correct?  Does it require a login? Can I use fail2ban on incorrect logins?
<ikonia> [Mew2]: no
<ikonia> [Mew2]: it is a web gui, yes it requires a login
<ikonia> you'd have to setup fail2ban bad bots to log scrape and pattern match incorrect logins
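Assuming the Nagios web UI sits behind Apache basic auth (the common setup), fail2ban's stock apache-auth filter can watch the error log instead of a custom bad-bots pattern; a sketch of a jail.local fragment:

```
[apache-auth]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/error.log
maxretry = 5
```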
<[Mew2]> Thank you ikonia :) <33
<willemgf> Hi, We have a strange situation with vm's running ubuntu server 14.04 LTS on esxi-5.5. It happens that a VM doesn't boot well, it gets stuck on plymouth-upstart-bridge and doesn't go further. after performing ctrl-alt-del we see there are errors found on the disk, but we are at this point no more able to select the fix-option.
<willemgf> what could be the reason the system gets stuck and get no interaction regarding the disk-error?
<hateball> willemgf: are you running open-vm-tools on it, and have you fsck'd the disks?
<willemgf> for now we edit the grub by adding init=/bin/bash in order to perform the e2fsck to get the system back fully operational. But I would like to see the console allowing us to select the action on the error during boot.
<willemgf> hateball: yes, open-vm-tools is installed on the vm
<hateball> something seems off if they keep corrupting
<willemgf> once the fsck is done as described above, a reboot of the vm goes perfect
<willemgf> But I thought we would be able to interact by default when a disk-error appears without performing our (temporary) method to perform the fsck
<ikonia> willemgf: remove the boot splash
<hateball> if errors are found it should indeed prompt you
<ikonia> willemgf: see if there is anything else going on that you miss
<ikonia> boot into single user mode too, see if you get a clean minimal boot
<hateball> does pressing ESC remove the splash after it's "hung"? I cannot recall
<ikonia> I don't think so, it depends on how/why it's hung
<ikonia> most cases not, only if it's super slow will that work (I think )
<hateball> I think they wanted to not have to reboot to grub and pick modes/alter options
<hateball> rather see at once that it needed interaction to fsck
 * hateball removes quiet splash in /etc/default/grub
<ikonia> yeah, for me it's worth altering the splash on the fly just to see the boot process
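Making the splash removal permanent might look like this (a sketch; inspect GRUB_CMDLINE_LINUX_DEFAULT by hand rather than trusting the sed blindly):

```shell
sudo cp /etc/default/grub /etc/default/grub.bak     # keep a backup
sudo sed -i 's/quiet splash//' /etc/default/grub    # drop the splash options
sudo update-grub                                    # regenerate grub.cfg
```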
<willemgf> ikonia: from the grub, 'quiet splash' has been removed, so this is not the problem. We also tried to apply noplymouth, since we saw at the ctrl-alt-del that plymouth-upstart-bridge was killed by TERM.
<ikonia> if you're running a server, I'd question if you ever need that splash in place
<ikonia> willemgf: so you actually get to see the full boot process
<willemgf> indeed we see the complete boot process, where it suddenly gets stuck. No notification of the error in order to apply one of the options.
<ikonia> so what's the last thing you actually see
<willemgf> even ESC does not allow us to continue. Seems like something else is blocking in order to go further.
<willemgf> any way to provide a record of our situation?
<ikonia> willemgf: whats the last thing you see on the screen
<willemgf> Will paste some screenshots from my recordmydesktop ...
<willemgf> here it gets stuck: http://i.imgur.com/ZPbeWwm.png
<willemgf> when performing ctrl-alt-del we see the following: http://imgur.com/RWoIZhP
<ikonia> so plymouth looks like it's in a loop
<ikonia> which is creating a wait loop in the boot process
<ikonia> rather than hanging
<ikonia> can you boot into single usermode ?
<willemgf> that is what we thought too, but using noplymouth in grub does not solve the issue, it still gets stuck.Even in single user mode we have the same issue.
<willemgf> As I said, our only way to solve it at this moment is to apply init=/bin/bash in order to get a shell, perform the e2fsck and reboot, whereafter the vm comes up as expected.
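willemgf's workaround, spelled out as a sketch (the device name is hypothetical; at the GRUB menu, press 'e' and append init=/bin/bash to the linux line first):

```shell
# In the resulting emergency shell the root fs is mounted read-only:
e2fsck -fy /dev/sda1    # force-check and auto-fix the root filesystem
reboot -f               # hard reboot; no init is running under init=/bin/bash
```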
<ikonia> after you run the fsck and reboot normally it comes up ok ?
<willemgf> yes, then it comes up normally
<ikonia> so I would question if your actual problem is based around disk problems
<ikonia> willemgf: if you reboot that now working VM a few times, does it stay working ?
<willemgf> We tried even that already: after the e2fsck, rebooting the VM a couple of times, it comes up normally.
<ikonia> ok, so that does suggest to me - disk problems on the VM from the host is a problem
<willemgf> It only happens in case we have a situation were disk errors appeared on that vm
<ikonia> how are you installing these machines ?
<willemgf> Might be, a colleague also pointed to disk timeout as described on http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009465
<willemgf> is it worth trying this out?
<willemgf> these are VM's installed by a template
<willemgf> And they all worked fine after installation.
<ikonia> I'd look at your template first
<ikonia> is your template made out of a corrupted disk
<ikonia> because if you are having a disk time out, I doubt an fsck would have an impact on that
<willemgf> certainly not. We have other VM's made prom the same template that reboot correctly.
<willemgf> *from*
<ikonia> so how many of your VM's are having this problem
<willemgf> Yesterday I had  VM's, today I had one. Maybe nice to know: these VM's were stopped as I needed to migrate the OS disk to another datastore.
<ikonia> roughly how many out of how many VM's had this problem
<ikonia> eg: 5 out of 100 built
<willemgf> the VM's were initially installed on slow datastores, and are now migrated to faster datastores
<ikonia> so they have broke in the migration ?
<ikonia> or broke at install time
<willemgf> I'm still performing this migration process. It's a new VMware environment which we currently have VM's running on.
<ikonia> how were they migrated
<ikonia> on the fly using vmware data migration tools ?
<ikonia> or shutdown and the vmdk's moved
<willemgf> through the migration process in the vsphere web client. The VM's were first brought down before migrating.
<ikonia> roughly how many have had this problem ?
<willemgf> Nevertheless, I would have expected the boot of the VM to inform me correctly of any disk error, on which I should be able to choose which action to perform. And this is not the case, so I guess some kind of bug in upstart, no?
<ikonia> you are getting a disk warning error on boot though, are you not ?
<ikonia> I suspect if you left it long enough for the plymouth-bridge to stop trying and time out, it would move on
<ikonia> but complain of errors
<willemgf> As we have not finished our migration process, no idea how many of the VM's, but for the (about) 12 I did yesterday, I had this issue for 5 of them.
<ikonia> for "them"
<ikonia> how many did you have
<ikonia> out of how many machines
<willemgf> the error appears after applying the ctrl-alt-del. From that point on I saw extra lines appear on the screen, but they disappeared too fast as a result of the reboot. That is why I recorded the situation and saw what came afterwards.
<willemgf> Sorry num-loch was off: I had it for 5 VM's of the (about) 12 migrations I performed yesterday.
<willemgf> *num-lock*
<ikonia> thats a higher ratio than I'd like
<willemgf> Me too, that is why I came to ask on this channel for any possible help
<willemgf> As of this morning I'm trying out different grub-options for a VM having this issue as well, so far no solution found yet.
<willemgf> it gets stuck till I perform the ctrl-alt-del and can see the message regarding the disk-error. the VM for which I provided the screenshots.
<willemgf> BTW, strange that, when noplymouth is used in grub (edit at boot) the startup still performs actions for plymouth-upstart-bridge!
<ikonia> I dont think grub options will help disk problems
<ikonia> it really looks, from everything you're showing me, like certain boxes have migration disk corruption
<willemgf> I totally agree, but nevertheless I would expect to see on my console the error found on the disk(s), on which I should choose which action to apply, but we don't get this at all.
<ikonia> willemgf: I suspect if you leave it long enough it will move onto the disk error
<willemgf> now trying to boot in recover single user mode with grub options 'single nomodeset noplymouth'. Very slowly I see new lines appearing on the console, see: http://i.imgur.com/inDJV1l.png
<lordievader> Good morning.
<willemgf> ikonia: meanwhile, the boot in single user mode is still stuck, no message regarding the disk-error so far.
<willemgf> This is not the way it should happen for each vm having this issue :(
<ikonia> willemgf: how long are you waiting
<ikonia> I'm just wondering how long it would take for the plymouth-bridge to give up retrying
<ikonia> that console does just suggest disk errors to me
<repozitor> i have configured my firewall very well, it's state is running
<repozitor> but seems firewall don't work
<repozitor> because still some ports are open!
<willemgf> ikonia: just back from lunch, and the VM is still stuck. So for 1h30, not what we should expect
<repozitor> any idea to fix it?
<hateball> !ufw | repozitor
<ubottu> repozitor: Ubuntu, like any other Linux distribution, has built-in firewall capabilities. The firewall is managed using the 'ufw' command - see https://help.ubuntu.com/community/UFW | GUI frontends such as gufw and ufw-kde also exist. | An alternative to ufw is the 'iptables' command - See https://help.ubuntu.com/community/IptablesHowTo
<repozitor> ubottu, i'm using firewalld and firewall-cmd
<ubottu> repozitor: I am only a bot, please don't think I'm intelligent :)
<repozitor> hateball, ^_^
<hateball> repozitor: Nothing I use, but at least you've provided more detail than "don't work" now
<hateball> So someone else may know
<repozitor> hateball, for example port 21 is not allowed. but nmap scanning show me it is open!
<shauno> are you scanning from the same host or another?
<repozitor> shauno. of course, other host
<repozitor> shauno, any idea?
<repozitor> i'm using firewalld for my ubuntu server
<repozitor> but if ufw work better, i can use it instead
<shauno> I have no idea about firewalld at all I'm afraid.  just grasping for low-hanging fruit ( / false positives)
<repozitor> shauno, http://dpaste.com/2MNQR54
<repozitor> i used ufw, but i still see my firewall doesn't block programs' internet activity
<repozitor> anyone have any idea?
<lordievader> repozitor: What is the output of 'sudo iptables-save' (and 'sudo ip6tables-save' if you use ipv6)?
<repozitor> http://paste.ubuntu.com/14429768/
<repozitor> OMG, before configuring ufw, i used iptables -F
<repozitor> why these command exist?
<lordievader> repozitor: See lines 111 and 112.
<repozitor> what is wrong with 111, 112?
<repozitor> they allow me to connect by ssh
<repozitor> as you see, ufw allow ssh output on above link
<lordievader> You complain that port 22 is open, looking at line 111 that is correct.
<lordievader> Oh wait, I misread the port.
<repozitor> lordievader, i know ssh is open, it's ok
<repozitor> i want to block 8080 for example
<repozitor> why doesn't ufw block 8080 by itself?
<repozitor> i never allowed 8080, but ufw can't block it
<repozitor> so my question is: why doesn't my firewall work?
<repozitor> because ufw status shows me only ssh is allowed, i say my firewall doesn't work
<lordievader> repozitor: 8080/tcp filtered http-proxy looks okay from here.
<repozitor> REALLY? are you familiar with nmap?
<repozitor> right now it show me it is open!!!!!!!!
<repozitor> http://paste.ubuntu.com/14429812/
<repozitor> also you can connect by telnet to 8080
<repozitor> telnet 82.102.12.142 8080
<lordievader> Hmm that is odd.
<lordievader> Since your firewall rules do suggest it is being dropped.
<lordievader> Is that machine behind a nat?
<repozitor> no
<repozitor> hhmm, it is located on RED station datacenter
<repozitor> really i dunno
<patdk-wk> does netstat show something listening to 8080?
<lordievader> repozitor: You run tomcat on that port?
<repozitor> patdk-wk. yeah
<repozitor> yeah
<repozitor> oh my god!!!!!
<repozitor> now i uninstalled it and installed it again, and before i enable ufw, i allowed ssh
<repozitor> and now when i enable ufw, my server is completely unreachable.
<repozitor> i dunno what i should to do!!!!!
<lordievader> repozitor: Try and get a console a different way (through a web service or something).
<lordievader> Not sure if your providers has that service.
<repozitor> lordievader, i know!
<lordievader> repozitor: Let me know when you have access again ;)
<repozitor> yeah, i fixed it again :))
<repozitor> last week this problem occurred, and i requested a direct console, they granted me
<repozitor> anyone have any idea?
<lordievader> repozitor: Yes, try 'sudo iptables -I INPUT 1 -p tcp --dport 8080 -j DROP'.
<repozitor> lordievader, the problem is not 8080
<repozitor> i want to know why the firewall doesn't work!!!
<repozitor> i should find a wise reason for this problem
<lordievader> repozitor: This is a way of finding out why things don't work...
<repozitor> lordievader, i DROP 8080, but it still is open!!!!!
<repozitor> test  it by telnet
<repozitor> needing a better idea
<lordievader> My browser takes a long time to connect to it...
<repozitor> lordievader, go to matrix world :D
<shauno> fwiw, 8080 is timing out for me
<lordievader> Telnet hangs on Trying 82.102.12.142...
<lordievader> repozitor: Iptables works fine if you ask me. I guess the flaw is somewhere in the ufw rules.
<shauno> is there a possibility it's being held open for your specific case by a conntrack rule?  so iptables thinks it's related to a previously-existing connection and passes your specific connection?
<repozitor> shauno, http://paste.ubuntu.com/14429957/
<repozitor> are you kidding me?
<repozitor> i swear to GOD 8080 is open!!!!
<patdk-wk> it is open
<patdk-wk> but doesn't mean that it is HITTING your server
<patdk-wk> do a tcpdump on your server for port 8080
<shauno> I get http://paste.ubuntu.com/14429966/
<patdk-wk> and see if you actually DO see the connection
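patdk-wk's check, concretely: watch for the packets on the wire and compare against the iptables counters.

```shell
sudo tcpdump -ni any tcp port 8080   # do the SYNs actually arrive here?
sudo iptables -L INPUT -v -n         # do the matching rules' counters move?
# If tcpdump sees nothing, the port is being answered upstream (NAT, proxy,
# provider firewall) and no local rule can close it.
```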
<ogra_> did you make sure to flush all rules you did set with firewalld before (by rebooting after removing all bits and configs of it) before you tried other fw tools ?
<YamakasY> guy do you store your antispam loggings ?
<YamakasY> archive
<repozitor> YamakasY, me?
<repozitor> no
<YamakasY> ok
<YamakasY> why not if I may ask ?
<YamakasY> afraid for your wife ? :P
<repozitor> no, usually antispam tool need money :D
<shauno> if you have an antispam you can train, keeping hold of a pile of ham & spam to feed it can be beneficial.  if you don't/can't, there's not much worth holding on to
<YamakasY> true
<YamakasY> I mean as it's also a mail gateway, it might be handy
<repozitor> shauno, something were wrong with apache, now i fix it
<repozitor> any idea?
<repozitor> what is wrong with ubuntu?
<repozitor> firewall is running, and blocking all ports except 22
<repozitor> but nmap shows me there are about 8 ports opne
<repozitor> open*
<repozitor> any idea?
<repozitor> how can i remove all iptables & ufw rules?
<repozitor> i want to remove everything about firewall
<lordievader> repozitor: First set your policies so that you still have access then: sudo iptables -F && sudo iptables -X
<lordievader> That should leave you with just the empty built-in chains.
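Put together, the reset lordievader describes looks like this (run it from a real console, not over SSH, in case access is lost):

```shell
sudo iptables -P INPUT ACCEPT    # open policies first, so you keep access
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F                 # flush all rules
sudo iptables -X                 # delete user-defined chains
```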
<repozitor> lordievader, where i should set my policy?
<lordievader> repozitor: ACCEPT
<repozitor> hey buddy, i know how to work with iptables, ufw, firewall-cmd
<repozitor> which of them do you mean?
<pmatulis> tych0: hi, re yesterday's container migration question and the necessity of a 4.4 kernel. will that kernel, or the magic it brings, be backported to trusty or will people be expected to run the new LTS?
<patdk-wk> repozitor, hmm, using a new tool won't fix it
<patdk-wk> ufw and firewall-cmd are just tools that work ontop of iptables
<tych0> pmatulis: 4.4 is going into X, so it should be available when linux-generic-lts-xenial comes available i think
<patdk-wk> shorewall also
<lordievader> repozitor: Looking at the command I gave you, iptables policies ofcourse.
<patdk-wk> you should do like I said, and make sure that port is actually routed to your machine first
<pmatulis> tych0: ta
<lordievader> By the by, if you don't want my help, say so, it ain't my problem...
<an3k> Hi everybody. I already googled and found plenty of commands to use but neither of them gave any information. I just installed Ubuntu Server 14.04.3 LTS on my Server using its Intel RSTe RAID. In the installation process Ubuntu found RAID Container and AHCI RAID Container. I selected "use them" for both. Now after the system is running neither dmraid nor mdadm knows about the raid (status,
<an3k> etc.). I'm new to Linux Software Raids so I have no clue if the RAID is working well or not, especially because none of the commands shows useful information.
<an3k> http://paste.ubuntu.com/14430372/
<jrwren> pmatulis: I'm pretty sure trusty will get a 4.4 HWE kernel
<pmatulis> jrwren: yeah, like tych0 said, the backport kernel
<jrwren> pmatulis: ah, I didn't understand. Thanks.
<jdstrand> repozitor: you might check the output of 'sudo /usr/share/ufw/check-requirements'. it is possible your kernel doesn't have everything ufw expects and isn't fully configuring itself (this can happen in hosting environments)
<repozitor> now i'm tired, i dunno how to fix it!
<repozitor> maybe i need to fresh install ubuntu
<ogra_> repozitor, did you make sure to remove all traces of firewalld (never heard of it) and reboot before you started playing with the other FW tools ?
<repozitor> ogra_, i can't find all traces of firewalld, but i use apt-get to remove
<repozitor> apt-get remove, apt-get clean, apt-get purge.
<ogra_> remove or purge ?
<repozitor> both of them
<ogra_> and did you reboot to make sure to get back to a virgin state
<repozitor> yeah
<ogra_> (though if firewalld saves some iptables rules they might still persist, no idea how it works)
<repozitor> i dunno whether ubuntu has this feature or not
<repozitor> in macosx i can save all setting of application, and after installing we can restore them by time machine
<repozitor> do such settings exist?
<repozitor> features*
<jrwren> repozitor: how do you do that? i've never seen that time machine feature.
<repozitor> jrwren, did you ever use time machine?
<jrwren> repozitor: yes, only to restore full system or single files, not app settings.
<repozitor> install a fresh macosx, and put the time machine backup disk into your os, and then use it
<jrwren> repozitor: ah, so you mean full restore.
<repozitor> no
<repozitor> full restore change your main os to backup version
<jrwren> repozitor: how is that not full restore?
<repozitor> what?
<jrwren> repozitor: ah, i've never used time machine that way. are you saying that i can have TM backup with older OSX and use newer OSX to restore it and it will not overwrite my newer OSX?
<repozitor> jrwren, yeah, i have used it 2 times.
<repozitor> probably you know all app settings are stored in this path
<repozitor> /Users/username/Library and this path is restored when you restore app settings
<an3k> Do I really need a swap with 128 GB RAM?
<repozitor> if you reach at 100GB memory usage, yes
<an3k> ok, then I don't. Thanks :)
<shauno> I'm always tempted to keep swap, yeah.  stuff that's barely used can stay paged out and leave you with more free for whatever you bought 128 for
<repozitor> shauno, i think the os doesn't start paging in/out if it doesn't reach 100GB or 80GB
<shauno> I have about 700Mb paged out, and over 50% of my ram is just caches
<an3k> I'll barely maybe never reach the 60 GB ram usage mark and I don't want swap to be on the SATA-DOM or on the raid
<patdk-wk> shauno, what is wrong with that?
<shauno> absolutely nothing.  that's why I'm advocating at least a little swap so that stuff that's not being used, doesn't need to be wired
<repozitor> shauno, in newer linux, apps won't swap until RAM usage reaches 75%; instead the kernel caches files for faster IO, so your 700mb is full of file contents, i guess
<patdk-wk> how can 700mb of swap contain files?
<patdk-wk> why would you put into disk, what is already on disk?
<an3k> Microsoft knows :)
<repozitor> patdk-wk, because they are dirty
<shauno> that's what I mean though.  caching files is more efficient use of ram than holding onto a process that hasn't done anything in days
<patdk-wk> heh?
<patdk-wk> cache is not dirty
<patdk-wk> that is not possible
<patdk-wk> 700mb paged out can only be application memory
<repozitor> patdk-wk, they can be written to the original file's disk, because in future the os moves them into RAM again
<patdk-wk> not file cache, not file buffers
<patdk-wk> then it WOULD NOT BE paged out
<patdk-wk> but written to disk
<patdk-wk> and would not show up in the 700mb of paged to swap
<patdk-wk> so the 700mb would not apply to those
<repozitor> patdk-wk:700mb paged out can only be application memory
<repozitor> prove it
<patdk-wk> I don't need to
<patdk-wk> read the linux kernel documentation on memory management
<jrwren> an3k: skip the swap partition, apt-get install swapspace, and it will be zero swap until you need it.
<nacc> an3k: the short answer is, it depends ...
<patdk-wk> you are the one saying the documentation is wrong
<nacc> an3k: and you can always use swapfiles, or swapspace
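The swapfile route nacc mentions can be sketched as (size and path are illustrative):

```shell
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots by adding to /etc/fstab:
#   /swapfile none swap sw 0 0
```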
<repozitor> patdk-wk, show me a reference indicating this topic
<repozitor> patdk-wk, i never said "documentation is wrong"
<repozitor> if you think so, prove it
<patdk-wk> https://www.kernel.org/doc/gorman/html/understand/understand014.html
<patdk-wk> you think so too, prove yours :)
<patdk-wk> you did read that first sentence right?
<shauno> you don't page out disk caches, you just commit them.  if they're gonna hit the disk anyway, may as well do it properly
<patdk-wk> you don't commit disk caches
<patdk-wk> you DROP them
<patdk-wk> your commit disk buffers
<patdk-wk> cache == read, buffers = dirty writes
<repozitor> patdk-wk, answer me
<repozitor> imagine you have 8GB memory, and you are working with gimp; you load a 6GB image into it, and now the os wants to load other applications, so the os needs 5GB of memory
<repozitor> you think os drop gimp file?
<repozitor> or swap it into swap area, so user can still work with it?
<patdk-wk> http://linux-mm.org/Low_On_Memory
<patdk-wk> the authorative source on it
<repozitor> read doc CAREFULLT
<repozitor> CAREFULLY*
<patdk-wk> repozitor, you obviously are confused and don't know what memory is
<patdk-wk> loading a file into gimp != cache
<patdk-wk> it is GIMP memory, assigned to gimp, unless gimp mmapped it, and it doesn't
<patdk-wk> so that file is no longer a file it is application memory assigned to gimp
<an3k> I think https://help.ubuntu.com/community/SwapFaq helps :)
<patdk-wk> to claim it is a file or cache is completely wrong
<patdk-wk> and that *file* is no longer a file in gimp, cause that image is not 6gigs anymore, cause gimp will have to decode/decompress and create structures to use that image
<patdk-wk> so it's likely much much larger
<patdk-wk> there is no way to claim that is a *file* anymore once it is loaded into an application
<patdk-wk> now if you want to state it correctly
<patdk-wk> you loaded your 6meg image into gimp
<repozitor> patdk-wk, hhhm, sorry, i remember another thing
<repozitor> my question was totally wrong!
<patdk-wk> the 6meg image is in the disk cache, and the decompressed gimp copy of that image is in gimp application memory as 50megs
<patdk-wk> the 6megs will be dropped at any time linux feels like it
<patdk-wk> but the 50meg one would have to be swapped
<pbxman> is there a way to know what console commands were sent by a user and the time they were sent in Ubuntu?
<patdk-wk> if the user is forced to use sudo, yes
<patdk-wk> if not, and you didn't force the user to use a shell that logged that info, then no ( the default )
<pbxman> auth.log?
<patdk-wk> yes, that will show logins and sudo commands
<pbxman> Does a kill -9 show up?
<patdk-wk> was it run as, sudo kill -9
<patdk-wk> then yes
<patdk-wk> if not, then no
<pbxman> ok thank you for that patdk-wk
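To illustrate what patdk-wk describes, a minimal sketch against a made-up auth.log excerpt (the real file is /var/log/auth.log; the entries below are invented):

```shell
# Hypothetical auth.log sample; only sudo-driven commands carry a COMMAND=
# field, plain shell commands never reach this log.
cat > /tmp/auth.log.sample <<'EOF'
Jan  7 10:01:22 host sudo:    alice : TTY=pts/0 ; PWD=/home/alice ; USER=root ; COMMAND=/bin/kill -9 1234
Jan  7 10:02:05 host sshd[999]: Accepted publickey for bob from 10.0.0.5 port 40000 ssh2
EOF
# Extract the recorded commands with their timestamps:
grep 'COMMAND=' /tmp/auth.log.sample
```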
<an3k> Is root login with password NOT permitted because one could steal the password entered when logging in?
<patdk-wk> it's just such a bad idea generally
<patdk-wk> and people don't generally use good password
<patdk-wk> so if they go through the work to enable that, heh, they should have learned the issues with enabling it too :)
<an3k> Indeed but the current approach isn't any more secure. If I MITM the password of root I am root, sure. But if I MITM the password of the user used to login then sudo I am root too
<patdk-wk> how are you doing MITM
<patdk-wk> and who said it was to protect against MITM at all? atleast I didn't
<an3k> dunno but that was one reason I got told of why direct root login is disabled by default
<patdk-wk> ssh fingerprints limit mitm
<an3k> yes but those don't help
<an3k> Let me try to explain.
<patdk-wk> these?
<patdk-wk> it matters not what authenication method you use
<patdk-wk> if you are MITM, it's already an issue when you make the connection
<an3k> There's server A. It allows SSH login as root with a password.
<patdk-wk> so I dunno how a root password has anything to do with mitm
<an3k> And there's server B. It does NOT allow SSH login as root with a password. Instead you have to login to your normal user account (which uses a password) and then sudo.
<patdk-wk> still dunno how mitm has anything to do with this
<an3k> In both cases somebody else gets root access as soon as he knows the password (of either root or the normal user).
<an3k> ok, stop focusing on MITM now ;)
<patdk-wk> yes, that is true
<patdk-wk> but the server A, they also need to know a valid username
<patdk-wk> in the second, they already know root is a valid username, and only need the password, and randomly guessing is *fine*
<an3k> vice-versa but you're absolutely right. Never really thought about this.
<patdk-wk> I do set a root password, but also don't allow root logins on ssh
<an3k> I do too but also allow root-ssh-login since it's a separate network and just a LAN server
<thebwt> we turn off password auth for root in the sshd conf.
 * thebwt resumes lurk mode
<an3k> yeah, already found the required setting yesterday when working on the server. PermitRootLogin without-password is nicely misleading :)
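For reference, the sshd_config directive under discussion; on these releases the value "without-password" (misleadingly) means key-only root login, while "no" disables root login entirely. Reload sshd after changing it.

```
# /etc/ssh/sshd_config
PermitRootLogin without-password
```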
<an3k> hmm, interestingly all the guides on how to setup ubuntu server on a software raid are wrong. RAID10 is supported by the installer and nothing has to be done in console or Live environment before the actual installation.
<patdk-wk> heh?
<patdk-wk> I thought it was always supported
<patdk-wk> atleast since 10.04 or maybe it was 12.04
<patdk-wk> there is only one guide that matters, and that is the ubuntu documentation though
<an3k> hmm, that isn't found using google
<an3k> and https://help.ubuntu.com/lts/serverguide/advanced-installation.html is actually not "correct". I haven't tested it so it may work but there's no need to manually create partitions on
<genii> AFAIK all RAID types are supported during install for server and alternate
<an3k> is that guide (mostly) in german for you too?
<an3k> for example I didn't create any partitions (swap and /) on the RAID drives. I simply created a partition table on each and then "Create Software RAID > Create MD"
<an3k> is there a difference between both approaches?
<binwiederhier> Hello there, I was wondering if there was still a chance to get PHP 7 into Ubuntu 16.04. I hear that it might be a bit late for that, but a little bird told me (https://bugs.launchpad.net/ubuntu/+source/php5/+bug/1522422) that rbasak might know more?! Apologies for the hightlight, I hope that's okay :-)
<ubottu> Launchpad bug 1522422 in php5 (Ubuntu) "PHP5 branch and PHP7 branch" [Undecided,Confirmed]
<ikonia> it's not released yet 7 is it ?
<binwiederhier> it has been released. a month ago or so.
<nacc> binwiederhier: I'm going to be taking a look at exactly that
<ikonia> was it really a month or so
<binwiederhier> but see it this way: "I don't know anyone running Ubuntu servers who's using the stock PHP because it's usually too old. People either use Ondrejs PPAs, compile themselves or use some other vendor (i.e. Plesk provided packages). PHP 5.6 support ends August 2017. That's 9 months of backporting security fixes. PHP 7.0 is supported until December 2018. If you ask me, this is a no-brainer. "
<nacc> binwiederhier: just starting on it now ... can try and keep you posted
<ikonia> maybe a bit too soon for a long term support release
<ikonia> pretty much every ubuntu server I see with php is the stock LTS version
<ikonia> it's no way too old
<nacc> binwiederhier: fwiw, it's in universe already just fyi
<binwiederhier> last i checked it was "proposed"
<nacc> https://launchpad.net/ubuntu/+source/php7.0
<nacc> 7.0.1-5 is in release
<nacc> 7.0.1-6 is in proposed
<binwiederhier> 4 hours ago!
<binwiederhier> ?!
<nacc> :)
<binwiederhier> wow
<Pici> ha
<nacc> enjoy!
<ikonia> universe is a good solution
<ikonia> use at your own risk, but a good proving ground
<an3k> oh great ... resolvconf is now not capable of simply adding the DNS server upon boot once VLAN is added ....
<patdk-wk> heh?
<patdk-wk> it does for me
<an3k> it did for me yesterday with the previous installation too but now it doesnt
<an3k> I use VLAN on bonding on two NICs. That worked perfectly with the previous install
<an3k> what's the logfile for all the stuff shown at booting with the [OK] at the right side?
<an3k> http://paste.ubuntu.com/14431912/ ... I hadn't these issues yesterday
<an3k> if the log entries are correctly ordered then why is IPv6 and 8021q processed before bond?
<an3k> is it possible that it doesn't work because Ubuntu thinks both NICs have the very same MAC address?
<patdk-wk> how did you configure your interfaces file?
<patdk-wk> but that isn't the real issue
<patdk-wk> bond0 was created, the vlans added
<patdk-wk> but the interface is down, cause the eth0/eth1 haven't been enslaved yet
<an3k> http://paste.ubuntu.com/14431994/
<patdk-wk> remove the bond-slaves line
<patdk-wk> actually
<patdk-wk> bond-slaves none
<patdk-wk> you doubled it up
<patdk-wk> with the bond-master
<patdk-wk> should only use one of them
<patdk-wk> is vlan_raw_device optional? I always set it
<an3k> I can't remember now but afaik I got a warning or error without bond-slaves set
<an3k> I never set vlan_raw_device and never had problems (until now ;)
<patdk-wk> http://paste.ubuntu.com/14432017/
<patdk-wk> is what I use
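patdk-wk's paste has since expired; a minimal sketch of what such an /etc/network/interfaces might contain, following the advice above, could look like this. Interface names, bond mode, VLAN ID 10, and the addresses are illustrative, not taken from the log:

```
# /etc/network/interfaces -- illustrative bond + VLAN sketch
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet manual
    bond-slaves none          # slaves are declared via bond-master above; don't use both
    bond-mode 802.3ad
    bond-miimon 100

auto bond0.10
iface bond0.10 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    vlan_raw_device bond0     # optional with the vlan package, but explicit is safer
```

The point patdk-wk makes is to declare enslavement exactly once: either bond-master on each NIC with `bond-slaves none` on the bond (as here), or `bond-slaves eth0 eth1` on the bond alone, never both.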
<an3k> I just do a new install and (try to) do the same I did yesterday and see if that works
<jge> Hey all, I wrote a init script for a JAR file I would like to run under a limited system user. My command looks like this: sudo su btds -s /bin/sh -c "nohup  java -jar $PATH_TO_JAR --propertiesFile /etc/btds/btds.properties 2>> /dev/null >> /dev/null &"
<jge> However, when I run it. Nothing happens.
<jge> I've seen it start for a second or two and then it shuts down
<jge> anyone's got a clue what I'm missing here :\
<shauno> no ideas, but perhaps remove the /dev/null redirects so you can see if/what it's complaining about during its short life?
<jge> shauno: ahh good idea, let me try that
<jge> bingo nohup: failed to open 'nohup.out': Permission denied
<jge> im running the command as sudo though..
<sarnold> nohup sudo   or sudo nohup? :)
<tarpman> jge: you can replace that entire sudo, su, sh, nohup machinery with a single start-stop-daemon
<shauno> you're not running it with sudo, you're running it with su btds.  you're running su with sudo.  so most likely it's trying to write to nohup.out in the current cwd as btds
<jge> tarpman: Yeah I came across a site which recommended this as well, no idea how it works though so I picked the easiest solution
<jge> sarnold: I did sudo nohup
<repozitor> patdk-wk, shauno, http://paste.ubuntu.com/14432573/
<repozitor> can you evaluate it?
<jge> and I get :  sudo: no tty present and no askpass program specified
<jge> shauno: you're right
<jge> what if I create a nohup.out file in the current cwd and chown to btds, would that work?
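The start-stop-daemon alternative tarpman mentions could be sketched roughly as follows; $PATH_TO_JAR is from jge's original command, while the pidfile location is an assumption:

```shell
# Illustrative init-script fragment replacing the sudo/su/sh/nohup chain
# with a single start-stop-daemon call (sketch, not a tested script).
start-stop-daemon --start \
    --chuid btds \
    --background \
    --make-pidfile --pidfile /var/run/btds.pid \
    --exec /usr/bin/java -- \
    -jar "$PATH_TO_JAR" --propertiesFile /etc/btds/btds.properties
```

Here --chuid drops privileges to the btds user, --background detaches (so nohup and the redirects are unnecessary), and the pidfile allows a matching `start-stop-daemon --stop --pidfile /var/run/btds.pid` in the stop action.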
<patdk-wk> repozitor, not really
<patdk-wk> you need to use -nn I think
<patdk-wk> or maybe you just didn't use -n
<patdk-wk> and you probably want to dump all 3
<patdk-wk> iptables -L -nv
<patdk-wk> iptables -L -nv -t mangle
<patdk-wk> iptables -L -nv -t raw
<patdk-wk> based on what you posted, assuming the /etc/services is ubuntu default, 8080 should not be accepted
<patdk-wk> but if it is being forwarded, or intercepted, those paths are open
<patdk-wk> cause your forward table allows all
<patdk-wk> but that normally isn't used unless nat is enabled or something
<repozitor> patdk-wk, i think about memory swap, i think you were right, my apologies :D
<repozitor> patdk-wk, you need the output of -L -nv?
<patdk-wk> it would help, your ports are mapped to services file
<patdk-wk> so I don't know if webmin is 8080 or 10000
<patdk-wk> could be anything :)
<patdk-wk> or even that http means 80
<repozitor> http://paste.ubuntu.com/14432662/
<repozitor> webmin is listening on 10000
<patdk-wk> that says you have no firewall configured
<repozitor> all ports are set by default, for example http is 80
<patdk-wk> via /etc/services
<repozitor> sorry buddy, before -L -nv i stopped iptables
<repozitor> http://paste.ubuntu.com/14432692/
<repozitor> this is the output of your command after starting firewalld
<repozitor> any idea?
<patdk-wk> ya, very interesting
<patdk-wk> you see lines 6,7,8
<patdk-wk> line 6, matches to line 107, does nothing, ignored
<patdk-wk> line 7 matches line 104, does nothing
<patdk-wk> line 8 matches 97, here EVERYTHING is accepted
<patdk-wk> no nothing is ever denied
<patdk-wk> somehow your network interfaces (eth0/tun0) are not mapped to zones
<patdk-wk> zones you have are dmz, external, home, internal, public, trusted, work, ...
<mkander_> I have 4 web servers that all host the same web page. When I change something in the php files I want to push it out to all servers. What is the best way to do this? Ill be doing this on Google Compute Engine, so it must be possible to just start a new node and it automatically pulls in the latest files.
<repozitor> i can just understand what you say, but can't fix it
<patdk-wk> I can't either
<patdk-wk> I don't use or really care how to use firewalld :)
<repozitor> yea, i have these zones
<repozitor> REALLY?
<patdk-wk> heh?
<patdk-wk> you're not paying me to help, ubuntu isn't, and I have to leave :)
<repozitor> you just said i don't use …
<repozitor> and i put a comment: REALLY?
<patdk-wk> atleast you know what is wrong
<repozitor> i can't sleep without firewall!!!
<patdk-wk> I said I don't
<patdk-wk> you can use anything you want :)
<patdk-wk> so I cannot answer you off the top of my head, and I don't have time to research the solution for you
<repozitor> of course
<patdk-wk> atleast you know where the problem is though
<repozitor> patdk-wk, change your mind about firewall later :D
<repozitor> i'm serious.
<patdk-wk> why?
<patdk-wk> I don't like firewalld or ufw
<repozitor> no, i mean firewall, not exactly firewalld or ufw
<patdk-wk> I normally use iptables., shorewall, cisco acl, asa, ...
<repozitor> patdk-wk, so maybe you know how i can delete these rules?
<EmilienM> coreycb, jamespage: last week you told me we would have mitaka this week, what is the status please?
<EmilienM> (in trusty)
<coreycb> EmilienM, have I pointed you to this before? http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/mitaka_versions.html
<coreycb> EmilienM, it's a good way to check status of packages in the mitaka cloud archive
<EmilienM> coreycb: maybe I missed that, thanks a lot
<EmilienM> I'll stop asking now :-)
<EmilienM> coreycb: is it for trusty?
<coreycb> EmilienM, I think we have most everything in proposed for trusty-mitaka
<EmilienM> awesome!
<EmilienM> I'm testing it right now
<EmilienM> we were mainly waiting for that for our bump
<coreycb> EmilienM, to ready that, left column is xenial, and the right 3 columns are trusty-mitaka
<coreycb> ready=read
<EmilienM> ok makes sense
<EmilienM> coreycb: thx again
<coreycb> EmilienM, you're welcome
<repozitor> which web-based tool is proper for ubuntu?
<repozitor> except webmin
<jc_> Hi All. I have the HP 40L. I've placed Ubuntu server on the 250GB hard drive and allocated it 35GB through LVM. I've assigned a further 80GB, also from the 250GB hard drive. In addition there are two 1TB drives that I would like to put media on. How can I link these two together so that they are in a RAID that offers redundancy?
<uxfi> Hey how do I test my HTML CSS code on Ubuntu server on a VM?
<an3k> patdk-wk: I'm done with the reinstall and guess what. The exact same configuration now works
<an3k> the only issue I have left is that bond0 still has its own IPv6 address. AFAIK it shouldn't have any?!
<rbasak> hallyn: a question on the ubuntu-devel-discuss ML might be relevant to your work (cgroups), I'm not sure.
<hallyn> rbasak: i'll look in a bit, thx
<hallyn> oh -discuss.  i'm not on that
<an3k> https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2016-January/016090.html
<hallyn> answer: systemd
<hallyn> we've talked about creating a new boot-time service to create cgroups that admins want, but
<hallyn> if we do that we'll constantly be stubbing toes against systemd which wants to own those anyway
<hallyn> oh i see, someone replied to that effect already :)
<rbasak> binwiederhier: I replied in the bug.
<rbasak> binwiederhier: no problem with you pinging us for status updates on this channel at all. It's the appropriate venue to ask.
<rbasak> binwiederhier: though I should probably reply in the bug when appropriate for others to see too.
<an3k> RAID Q: Why should I create the partitions on each RAID member hdd before I create the RAID instead of simply creating the RAID and then creating the partitions on the RAID device?
#ubuntu-server 2016-01-08
<an3k> anyone here who can help with multiple vlan interfaces on bond0? The behavior is really weird
<genii> an3k: I'm pretty sure from examining the backscroll on this subject that you need to ifenslave the devices in your interfaces file before bonding them
 * genii wanders back to things
<an3k> genii: thanks. I got the bonding working meanwhile but now I have serious issues with 11 vlan interfaces on bond0. simply said no default route gets created
<an3k> http://paste.ubuntu.com/14434427/
<an3k> is that normal behavior? and what about lines 23 - 29?
<an3k> it says link is up but then running with no active interfaces ....
<lordievader> Good morning.
<evanvarvell> ..
<evanvarvell> good morning to you too
<evanvarvell> ..
<lordievader> Why the dots?
<evanvarvell> its an unresponsive response
<evanvarvell> who are you?
<lordievader> Someone in this channel?
<evanvarvell> how are you?
<lordievader> Doing good, got coffee :)
<evanvarvell> me too...i like dark coffee
<evanvarvell> ..
<hypermist> is there a way to bypass the mirror step when installing ubuntu server? i dont have internet access on that machine currently
<hypermist> so. i need help for a work around haha
<theptr> some advice needed about a ubuntu server 14.04
<theptr> i have a vps ubuntu 14.04 , and a virtual 14.04 on my poliant at home . i used to run a rsync everyweek to have backups . Now the vps says no route to my home ip
<theptr> But when i try to ssh from my home to it works . Anybody who knows how to resolve this ? i think its something with dns
<lordievader> theptr: Can you connect on ip?
<theptr> lordievader, i can connect on ip from home to vps but not from vps to home
<lordievader> How do you try to connect to 'home'?
<theptr> lordievader, over ssh
<lordievader> What ip ;)
<theptr> i have an pfsense that has a forward in it to the virtual machine on my proliant
<lordievader> Does the pfsense box see incoming connections?
<theptr> No i checkt for that but i dont see anything in the logs
<lordievader> Right, so it never gets to the outside of your network ;)
<theptr> lordievader, so i tried a ping to my home from another vps and then the pfsense box answers
<theptr> but when i ping from the my main vps to my home i get a time-out
<theptr> So happy i got an backup :)
<lordievader> Can it ping other things?
<lordievader> The vps, I mean.
<theptr> yes it can ping google.be
<theptr> thats wy i dont understand the problem ...
<lordievader> theptr: Could you pastebin the output of 'ip r'?
<theptr> lordievader, its only 1 line but i will pastbin it
<lordievader> Only 1? Wut?
<theptr> lordievader, http://paste.ubuntu.com/14437116/
<theptr> the network was set by my vps provider
<theptr> i will paste you the network config that also looks strange to me
<lordievader> Please do. Along with the output of 'ip a'.
<theptr> lordievader, http://paste.ubuntu.com/14437125/
<theptr> lordievader, output ip a http://paste.ubuntu.com/14437131/
<theptr> i think its strange there is no dns line in the config of my provider
<lordievader> That doesn't matter for this problem.
<theptr> lordievader, what i can see is that it worked yesterday till 11pm
<theptr> in the logs of my pfsense
<lordievader> theptr: Try 'sudo ip r add <ip-home>/32 dev venet0:0' then try to ping your pfsense box.
<theptr> lordievader, okay will try that , i also going to eat really hungery :) thanks for your time
<ninja_> Hi, how does cgroups work in ubuntu? THere is no cgconfig and cgred daemons
<theptr> lordievader, added it but no succes
<theptr> maybe if i reboot everything ? vps and pfsense box and proliant
<lordievader> Hmm, that shouldn't matter, but I guess as a final thing...
<hypermist> Gah ubuntu server wont install without me wanting it on a network
<hypermist> D:!
<hypermist> can anyone help me ?
<hypermist> It just doesnt want to install without a network there, and well i can't give it a network
<lordievader> hypermist: What image are you using?
<hypermist> ubuntu-14.04.3-server-i386
<lordievader> Can't remember the hard depency on a network.
<hypermist> that the version im using lordievader
<tadziz> Hello. Do we have there mdadm raid specialists ? :) need some help
<tadziz> i have two 3TB disks with software RAID1 created with mdadm. After turning off the pc and disconnecting one of the drives and turning the pc back on my raid array is not working anymore, saying that the super block is missing, and when checking details using mdadm it shows as RAID0
<tadziz> can someone point me where to look to find a solution for this ?
<lordievader> hypermist: What error do you get, anyways?
<lordievader> tadziz: How is the raid configured in /etc/mdadm.conf?
<hypermist> lordievader, its basically telling me you either have a network error or you have got a bad mirror
<lordievader> And then it doesn't let you continue?
<lordievader> You'd expect it to be able to do a basic install, that tasksel doesn't work I can understand.
<theptr_> lordievader, Problem fixed , my isp had some routing/dns troubles , i called :)
<lordievader> theptr_: Ah, I see. Good to hear.
<theptr_> lordievader, thanks  for your time and help
<lordievader> theptr_: No problem ;)
<hypermist> lordievader, yea it doesnt let me continue
<tadziz> lordievader, take a look http://pastebin.com/emPVaa1n
<tadziz> it has info i put after running mdadm --details --scan
<lordievader> tadziz: What is the output of 'sudo mdadm --details /dev/md/0'?
<tadziz> lordievader, this is with both drives attached http://pastebin.com/VXj9Yf1h, and this one with one disconnected http://pastebin.com/TT0AQhrd
<lordievader> That is odd... Was there ever a raid0 on that drive?
<lordievader> Perhaps it detect some old raid config.
<tadziz> no, it was two clean new drives
<lordievader> There is no data on it now?
<tadziz> we had some data just for testing
<lordievader> You could try destroying it all and rebuilding it... (not really pretty, I know)
<tadziz> you think we should use dd ?
<tadziz> we tried yesterday to rebuild but without dd'ing it first
<lordievader> mdadm should be capable of destroying it, but perhaps dd the first couple of sectors.
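A teardown along the lines lordievader suggests might look like the following sketch; the device names (/dev/md0, /dev/sda1, /dev/sdb1) are assumptions, not from the log, and this destroys all data on the array:

```shell
# Illustrative destroy-and-rebuild of a misbehaving mdadm array (sketch).
mdadm --stop /dev/md0                 # stop the array
mdadm --zero-superblock /dev/sda1     # wipe the md metadata on each member
mdadm --zero-superblock /dev/sdb1
# or, as suggested, clobber the first sectors with dd instead:
#   dd if=/dev/zero of=/dev/sda1 bs=1M count=4
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```

Zeroing the superblocks also removes any stale metadata from a previous array that the kernel might otherwise mis-detect, which could explain the RAID0/RAID1 confusion described above.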
<tadziz> do you think we have other options before going for rebuild ? :) it takes alot of time, haha
<tadziz> one more thing, when we have both drives attached and it is showed as raid1 and we set one drive flag to failed with mdadm raid is still working without any issues
<lordievader> Err, I don't know. Perhaps others here have more input ;)
<lordievader> Must say my knowledge of raids is not that extensive ;)
<tadziz> same for me
<hypermist> so got any idea what i can do about it lordievader ?
<lordievader> hypermist: One really hacky way could be, if there is a squashfs on the iso, extracting that to the harddrive.
<hypermist> Dont know how i'd do that
<hypermist> Cause there is nothing on that hdd at all
<hypermist> D:
<lordievader> hypermist: Reading [1] it should be possible without an internet connection. [1] http://ubuntuforums.org/showthread.php?t=2215731
<hypermist> alright
<dannymichel> Is there a script/gui i can install that will allow me to add email addresses and forwarders to my dovecot postfix email server?
<tgm4883> where would be the proper place for mysql-server packaging questions (or maybe bugs)
<tgm4883> launchpad seems to have a bunch of msyql-server projects for bugs, but nothing really for packaging, and I'm not sure where I would ask questions regarding my packaging of conf files for mysql-server
<genii> Perhaps in #ubuntu-app-devel
<tgm4883> genii: thanks, I'll try there
<devster31> hi, is it possible to add a wily repository to a trusty installation for a single package?
<devster31> meaning apt should choose that repo only for that package?
<tarpman> devster31: technically, yes. man 5 apt_preferences. but I'd recommend building a proper trusty backport in a PPA, rather than using a package built for a different release
<tarpman> devster31: whether or not it's even technically possible without upgrading large chunks of your system to wily totally depends on which package you're talking about
<devster31> this isn't core stuff, it's roundcube , php mail web interface, shouldn't break anything important
<devster31> so, in this case I would add the newer repository then specify in apt_preferences all packages whose target should be the newer repository? can I do all of them in 1 go? also if they aren't installed I need to specify the -t flag or do they get automatically picked from the preferences?
<tarpman> by default APT will pick the latest version from all available repositories; you want to set the wily repository at a lower priority, so it doesn't get used in general, and set only roundcube to a higher priority (e.g. same as trusty-updates)
<tarpman> 'apt-cache policy' (with no other args) will show you a summary of the current priorities
<tarpman> once you've set up your preferences, you don't need to use -t
<devster31> thanks, I think I'll try in a vm first
<tarpman> good plan
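The pinning setup tarpman describes could be sketched as the following preferences file; the filename and the exact priority values are illustrative assumptions:

```
# /etc/apt/preferences.d/roundcube -- illustrative pinning sketch.
# Wily gets a low priority by default, so nothing is pulled from it...
Package: *
Pin: release n=wily
Pin-Priority: 100

# ...except roundcube, raised to the same level as trusty-updates (500).
Package: roundcube*
Pin: release n=wily
Pin-Priority: 500
```

With this in place `apt-cache policy roundcube` should show the wily version as the candidate while everything else stays on trusty, and no `-t` flag is needed.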
#ubuntu-server 2016-01-09
<extrememist> Hey all. I have a network issue. I've configured the network interfaces in the file to use dhcp, but it's not assigning an ip
<extrememist> And i sort of have no idea why haha
<sarnold> can you pastebin your /etc/network/interfaces file?
<sarnold> is networkmanager getting in the way?
<extrememist>  I sort of can't pastebin I'm on mobile
<sarnold> do you get any error messages in dmesg or syslog or dhcp-specific logs?
<extrememist> And sitting next to it
<sarnold> oh, ouch, that'll complicate things :/
<extrememist> Yeaa
<extrememist> The machine wasn't installed with internet so that's the other thing sarnold
<extrememist> But I can post images s
<sarnold> oh that's not terrible
<extrememist> My screen res is also off
<sarnold> not as easy as pastebin but not terrible ;)
<extrememist> So it'll be a bit bonkers
<extrememist> http://imgur.com/2FsjR6G
<extrememist> I had to go right upclose otherwise camera would've herped
<sarnold> bad news about the photo plan..
<sarnold> I can see some dust specs on your monitor but no text
<extrememist> Lol yea
<sarnold> apparently your camera manages to hit it between refreshes or something similar. I didn't know that was possible..
<extrememist> Sort of dusty
<extrememist> Oh eow
<extrememist> I see what you mean habba
<extrememist> Flash broke it
<extrememist> One moment
<extrememist> http://imgur.com/lDiolWc
<sarnold> man that is bad news..
<sarnold> what happens when you run ifup eth0 ? does it show up with ip link show eth0  ?
<extrememist> Nope sarnold
<sarnold> extrememist: how about 'ip link' ? does it just have a different name than you may expect?
<extrememist> Oh wait it just showed and IP for ifup
<extrememist> It just got an IP just then haba
<extrememist> Haha*
<sarnold> nice
<extrememist> Now I have to contact someone to give the machine internet heh
<extrememist> Alright logging off
<sarnold> have fun :)
<mkander_> I am setting up auto scaling on a server cluster and need to find a way for the new nodes to automatically download the web files (e.g. public_html). Also pull new updates to the html code automatically. What is the best way to do this?
<hypermist> mkander_, you could link them via an internal ip
<hypermist> that runs a cron job
<hypermist> to copy from the server to that server
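The cron-based sync hypermist suggests could be as simple as the following crontab entry on each web node; the host, user, and paths are assumptions for illustration:

```shell
# Illustrative crontab entry: pull the latest web files from a master
# node every 5 minutes (sketch; host/user/paths are hypothetical).
*/5 * * * * rsync -az --delete deploy@10.0.0.1:/srv/www/public_html/ /var/www/public_html/
```

For the "new node automatically pulls the latest files" requirement, a variant is to run the same rsync (or a git pull from a shared repository) once from a boot-time script before starting the web server.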
<repozitor> i want to setup mail server on my ubuntu server
<repozitor> is there exist any recommended option for me?
<Sling> repozitor: plenty of options, this guide is pretty basic: https://www.digitalocean.com/community/tutorials/how-to-configure-a-mail-server-using-postfix-dovecot-mysql-and-spamassassin
<repozitor> Sling, iknew digitalocean
<repozitor> thank you buddy
<vvassilev> Hi all, I am running a few ubuntu server machines (mostly with web services), what would be the recommended backup tool. I had bacula, but I had a lot of headache with it: too complex (for my mind), too fragile...
<vvassilev> Is there something else on the market...
<lordievader> I use dirvish, rsync based. Very lovely tool.
<bekks> There are a lot of commercial, enterprise backup tools.
<vvassilev> lordievader: does it have incrementals and differentials?
<vvassilev> bekks: I try to use free tools.
<bekks> Then you have to live with the free tools.
<vvassilev> As a friend of mine says "I don't pay for women and software" :)
<bekks> Sounds like your friend is neither in a relationship, nor engaged, nor married.
<vvassilev> Absolutely..
<lordievader> vvassilev: Yes, it is rsync based.
<vvassilev> lordievader: and one can backup things like ldap, mysql, ...
<bekks> Having the appropriate backup agents, sure.
<lordievader> vvassilev: You configure it by setting which directories need to be backed up.
<vvassilev> lordievader: sorry for being annoying but trying to save some future pain: are the restores easy and is there enough documentation?
<lordievader> There was some really good documentation, but unfortunately that has gone away.
<lordievader> Restore can be as simple as copy and paste.
<vvassilev> lordievader: sounds good, definitely will look into it...
<Schalla> Hello! I would like to install Ubuntu Server on my Poweredge T20 at home (as KVM host), I am running on this desktop Kubuntu 15.10. I got 2 questions regarding that: As pure KVM host, should I go with 14.04 or with 15.10 and upgrade then in april to 16.04? And additionally, is there a way to install the OS directly on a usb stick ? The server will use the stick itself as boot device, so the step of creating a bootable usb stick is
<Schalla> unnecessary for me
<RoyK> Schalla: I'm always using LTS on servers, but as you say, it's not long until 16.04 is released, so 15.10 should probably do
<RoyK> Schalla: as for using a usb stick for the root, it shouldn't be much of a problem, but I'd suggest using two in a mirror
<Schalla> Mirroring USB sticks? That's new for me
<Schalla> How would I do so?
<RoyK> just like mirroring anything else :)
<RoyK> linux sees it as a scsi device, so it won't make any difference from a disk
<Schalla> This will be a funny night + day. :P I will just install ubuntu server now via VM (Which is a easy way actually, just vm -> boot iso -> install to usb), then I will stick that into the server + 2 new harddrives, boot ubuntu and setup a software raid for the 2 harddisks, can I add the mirroring later on? Then I just order a 2nd USB stick
<Schalla> Its atm a 16GB Sandisk one, should be sufficient as host I guess
<RoyK> Schalla: if you don't have two usb sticks now, you'll need to setup a "broken mirror", that is, a mirror with only one disk. It's not trivial to change a non-mirrored thing to a mirror later
<Schalla> RoyK: Thanks for the warning. I guess this guide covers what I am trying to do?
<Schalla> https://help.ubuntu.com/community/Installation/SoftwareRAID
<RoyK> it does :)
<RoyK> if you're unsure, try in a vm first
 * RoyK has a vm with 12ish 'drives', each 16MB, for testing
<RoyK> and 16GB for just the root is sufficient - even 4 should do
<Schalla> Solid advice. :p
<Schalla> yeah the usb stick cost like 8€ for 16GB USB 3.0
<Schalla> I thought thats okay and safe enough ^^
<Schalla> Regarding the broken mirror, I would basically do everything the same way but just configure 1 partition table and then still say configure a software raid -> md drive -> raid 1? The guide covers that this is not possible
<Schalla> "Number of devices. RAID 0 and 1 need 2 drives. 3 for RAID 5 and 4 for RAID 6." which makes sense, does this mean the installer wont allow less or just that it won't work ?
<RoyK> no storage medium is safe. period.
<Schalla> Sorry, bad wording. Meant that it's enough space for sure
<RoyK> you have a couple of drives for storage too?
<RoyK> those USB things are usually rather low side performance-wise, USB3 or not
<RoyK> Schalla: the guide you pointed to is good, yes. just create a mirror with a single drive if you don't want to wait for the other usb stick
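RoyK's "broken mirror" can be sketched with mdadm's `missing` keyword; the device names are assumptions, not from the log:

```shell
# Illustrative degraded RAID1: create the mirror with one real member
# and one placeholder, then add the second USB stick later (sketch).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
# later, when the second stick arrives:
mdadm --add /dev/md0 /dev/sdc1
```

After `--add`, mdadm resyncs the new member automatically; progress can be watched in /proc/mdstat.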
<Schalla> RoyK: Yeah 2x WD Red 4TB
<Schalla> Planned to do also a software raid 1
<RoyK> Schalla: also, see Boot from Degraded Disk - you really want that
<RoyK> I have no idea who came up with the idea to disable boot degraded in the first place - the whole point of having a RAID is to allow for hardware failure
 * Schalla shrugs.
<Schalla> Maybe to avoid that you dont recognize the issue without monitoring?
<RoyK> possibly, but if you don't monitor your servers, you shouldn't be running servers in the first place :P
<Schalla> :P
<Schalla> Anyway, I sadly gtg now. Thanks a lot for your help RoyK, I will give it a shot with a VM tonight! :)
<RoyK> :)
<RoyK> good luck
<Schalla> Ty!
<mordor_> Hi. happy new year.
<mordor_> I have a question, is it possible to use ubuntu server without video card? I used one to install ubuntu server, but when I removed my video card, the server doesn't work anymore (can't connect with ssh)
<mordor_> I plugged and unplugged it multiple times to test, and I am sure it's the missing video card which prevents ubuntu server from booting.  Previously I installed debian server 8.2 on this computer, and it worked without it.
<bekks> mordor_: Sure.
<mordor_> I googled it, and maybe it's because of grub splash screen. But answers are too old and obsolete
<mordor_> my old computer is an core2duo E8400 (no gpu integrated)
<mordor_> bekks: you sure? you tried?
<bekks> mordor_: Yes.
<mordor_> I installed ubuntu-server 15.10
<mordor_> and your cpu doesn't have gpu integrated too?
<bekks> mordor_: How does my system configuration solve your issue? :)
<mordor_> bekks: don't know, just to be sure
<mordor_> do you know how to disable grub select screen?
#ubuntu-server 2016-01-10
<lordievader> Good morning.
<nasix> Hi every body!
<nasix> I have a question regarding apache. Actually I've done all my searching and found nothing!
<nasix> I want to run a simple CGI script to execute an application like xterm.
<lordievader> Xterm with apache as a parent? Sounds like a bad idea.
<bekks> OUCH
<nasix> Really?
<nasix> Ok
<bekks> Yes.
<nasix> Thank you for your response
<nasix> Actually I want to let some clients run some applications on my server
<nasix> and
<bekks> No need to use enter as punctuation sign.
<nasix> then send the application window to that specific user
<bekks> Thats not how a webserver works.
<nasix> Is there a good way to do so?
<nasix> I don't want to let those client have ssh or something like that to my server
<bekks> The only usable solution I do know of is a Citrix XENDesktop Web Server.
<nasix> let me google it...
<nasix> So this is a Desktop virtualization software.
<nasix> But my final target is to let my clients run some specific applications like firefox or so.
<nasix> I don't want to let them see the whole desktop environment
<Schalla> This sounds borked imho.
<Schalla> Where is the usecase for this?
<bekks> The usecase is to give a user a desktop or application via web.
<nasix> We have a single Linux machine which can access the internet.
<maswan> well, if you only want to run command line tools, you can have a browser terminal
<nasix> We should not connect any other machine to the internet
<maswan> graphical stuff the only reasonable way I know of is complete remote desktops
<maswan> or ssh with X forwarding
<nasix> can you check my cgi-script: http://paste.ubuntu.com/14457834/
<Schalla> bekks: Doesnt make that more sane? Or is it only me?
<Schalla> (The idea with apache spawning applications)
<bekks> Schalla: Basically its not apache which spawns the processes, but a small application (the citrix receiver, even webbased) which does it. On the client.
<nasix> when I run this as a bash script, it goes well
<Schalla> Got a small question regarding software raid, I created via ubuntu server installation a degraded raid 1 and added then later on a new partition (this is a vm, but I will do the same later on a physical server), is it normal that the number is 0 and 2?
<Schalla> https://i.imgur.com/TU4cE2V.png
<Schalla> The sync worked as intended after adding and the degraded state is also gone, just wondering about the number + minor
<nasix> bekks: Of course when I set DISPLAY variable to some thing like 192.168.1.108:0 , I can see its window on that client. (192.168.1.108)
<bekks> nasix: The mechanism behind that is totally different.
<nasix> bekks: I can see the window for a short time and then it closes automatically.
<nasix> bekks: what is that mechanism?
<nasix> bekks: can you shed light on that or guide me to some reference?
<bekks> nasix: citrix provides a lot of technical documentation.
<nasix> bekks: can I use critix for just sending a single application window upon client request?
<bekks> nasix: Yes.
<nasix> bekks: Thank you very much for your help. Is it so hard to achieve that?
<bekks> In a safe way - yes.
<nasix> bekks: Thank you. I'll try to do that. I was working on this since yesterday!
<mfaroukg> people, there is an issue with the built-in firewall not blocking the google IPs
<rokusani> anyone?? i'm stuck with a dlink router port forward works on some survialance ip box but not ubuntu server :(
<rokusani> online port checker says port is closed for ufw enable ports
<lordievader> mfaroukg: What do you mean?
<rokusani> but that same online port checker shows another port i..e cameras ip box as open and is accessible using static ip
<mfaroukg> lordievader, i have built-in iptables list they should block the google all sites but still can access google , the main function for the FW is not working
<mfaroukg> lordievader, this was discovered yesterday only
<lordievader> mfaroukg: Could you pastebin your firewall config?
<mfaroukg> lordievader, http://pastebin.com/2RJ7eFSg
<lordievader> mfaroukg: Your output policy is accept without any drop rules? So, yes you can still access google.
<mfaroukg> lordievader, but how ? i have used it for a long time, it simply redirected me to my local website before , now it sometimes redirects and sometimes passes it
<lordievader> mfaroukg: Are you talking about your forward table? If so, that is a mess.
<lordievader> mfaroukg: Anyhow, if you want to block outgoing connections then you need to specify drop rules in the output table.
<mfaroukg> lordievader, some thing happened after latest kernel update , or google have done something confuses ubuntu check this http://pastebin.com/TiGEeShm
<lordievader> What is wrong with that?
<mfaroukg> lordievader, and many many location with different IPs
<lordievader> Yeah, it is google ;)
<mfaroukg> lordievader, when i block some it passes others
<lordievader> That makes sense, doesn't it?
<lordievader> Much more effective to black hole the google dns records.
<mfaroukg> lordievader, but how do i stop the users from searching and watching youtube.. they don't stop and the network traffic is f**ked
<lordievader> mfaroukg: Like I said, use the output table of iptables.
<Aboodyman> Can you block a range of IPs
<Aboodyman> ?
<lordievader> That too, you can block google's ip range.
<Aboodyman> But how to know the ip range
<mfaroukg> lordievader, Aboodyman, how can i let only tun0 control that? i don't want a permanent blockage
<lordievader> mfaroukg: What?
<Aboodyman> mfaroukg: you can not do that unless you install third party software
<mfaroukg> Aboodyman, the range is in the pastebin
<mfaroukg> lordievader, Aboodyman, i have coovachilli controlling the traffic with virtual tunnel tun0
<Aboodyman> lordievader ?
<lordievader> mfaroukg: Do you handle the dns requests?
<lordievader> Aboodyman: ?
<Aboodyman> Why aren't you talking
<mfaroukg> lordievader, I changed the DNS to use the google's 8.8.8.8
<lordievader> Hmm, if you controlled it you could black hole google's domain ;)
<mfaroukg> lordievader, it was working like charm but suddenly it is throwing me down on the floor
<lordievader> Anyhow, i'd setup an ipset with all the google ip's and drop the output if the set matches.
<mfaroukg> lordievader, hard workaround :( -crying-
<Aboodyman> mfaroukg: What would you do then
<lordievader> mfaroukg: It ain't, it is actually quite lovely. Just one line in iptables and a flexible set.
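The ipset approach lordievader suggests can be sketched like this. The set name is arbitrary and the two ranges are placeholders only; Google's real netblocks change and would have to be pulled from their published ranges:

```shell
# Create a hash set that holds CIDR ranges (name "google-nets" is arbitrary)
sudo ipset create google-nets hash:net
# Add example ranges -- real entries must come from Google's published netblocks
sudo ipset add google-nets 173.194.0.0/16
sudo ipset add google-nets 74.125.0.0/16
# One iptables rule drops all outbound traffic whose destination is in the set
sudo iptables -A OUTPUT -m set --match-set google-nets dst -j DROP
# For routed clients behind this box, match the FORWARD chain as well
sudo iptables -A FORWARD -m set --match-set google-nets dst -j DROP
```

Updating the block list then means editing the set with `ipset add`/`ipset del`, without touching the iptables rules.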
<mfaroukg> lordievader, can you check this script : http://pastebin.com/T3kzb7uE it might just need some modifications
<lordievader> Ah, that is where the forward rules come from... did you write this?
<lordievader> Looking at your earlier paste of your iptables rules I'd say some variables evaluate to ''.
<mfaroukg> lordievader, i have contributed only
<mfaroukg> lordievader, do you want the iptables -S ?
<lordievader> No.
<lordievader> Like I said earlier, I'd go with the ipset approach.
<mfaroukg> lordievader, do you suggest DNS changing ?
<lordievader> mfaroukg: No, read my answer from before.
<mfaroukg> lordievader, but this firewall should redirect ALL to my local hotspot client
<lordievader> Then let it do that, besides it is not the firewall doing that, but the routing.
<mfaroukg> lordievader, you're right .... would you mind hinting
<lordievader> http://unix.stackexchange.com/questions/126595/iptables-forward-all-traffic-to-interface
<mfaroukg> :-*
<dannymichel> fail2ban keeps stoping dovecot from working. is there anything i can do about that? http://pastebin.com/YMDaZPhf
<Schalla> RoyK: Everything worked out fine btw! Tested the procedure first on a VM and did it today on the real host, everything worked fine. :)
<Schalla> Just have to configure now the software raid for the 2 data disks and then start with the KVM config
<axisys> failing to install lsscsi... looks like linux header dependency needs to be resolved... but apt-get -f install fails too.. any suggestion on how to get around it? here is the apt-get output
<axisys> http://dpaste.com/37ZWRJ1
<axisys> running Ubuntu 12.04.3 LTS
<axisys> on kernel 3.2.0-60-generic
<bekks> Read line 42. You ran out of disk space.
<axisys> bekks: doh! let me clean up /boot .. 81% now
<trippeh_> ahh, all the times I've had a full /boot :))
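Cleaning a full /boot, as axisys had to, usually means purging old kernel packages; a minimal sketch assuming standard Ubuntu kernel package naming (the version purged below is an example, and must never be the running kernel):

```shell
# Note the running kernel so it isn't removed
uname -r
# List installed kernel images
dpkg -l 'linux-image-*' | awk '/^ii/ {print $2}'
# Purge an old one (example version -- pick one that is NOT the running kernel)
sudo apt-get purge linux-image-3.2.0-58-generic
# Then retry the dependency fix that previously failed for lack of space
sudo apt-get -f install
```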
#ubuntu-server 2017-01-02
<Genk1> Hello folks
<Genk1> What is the fastest archiving tool in Linux world ?
<mybalzitch> tar?
<Genk1> I have GBs of files that I want to backup/restore but I am hesitating over which algorithm to choose
<Genk1> mybalzitch, I will need to send files over network too
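For GB-scale backups that also have to cross the network, gzip-compressed tar is the usual baseline; a self-contained sketch (all paths under /tmp are stand-ins):

```shell
# Create sample data, archive it with gzip, and verify the round trip
mkdir -p /tmp/demo/src
echo "hello" > /tmp/demo/src/a.txt
tar -C /tmp/demo -czf /tmp/demo/backup.tar.gz src
mkdir -p /tmp/demo/restore
tar -C /tmp/demo/restore -xzf /tmp/demo/backup.tar.gz
cat /tmp/demo/restore/src/a.txt
```

To stream over the network instead of writing a local file, the same archive can be piped through ssh, e.g. `tar -C /tmp/demo -cz src | ssh user@host 'cat > backup.tar.gz'` (user@host is a placeholder).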
<spinza> quick question on 14.04 do-release-upgrade gives me "No new release found"
<spinza> and /etc/update-manager/release-upgrades has Prompt=lts
<spinza> what am i missing?
<binia> hmm, i had that once, forgot what i did :/
<spinza> weird because 16.04.1 is out right
<spinza> if i set prompt=normal I still get no release
<spinza> when i do do-release-upgrade -c
<binia> do-release-upgrade -c -d
<binia> try this
<binia> spinza, is it fresh install?
<spinza> no an older one
<spinza> None found doing -c -d
<spinza> think i found the issue
<binia> ?
<binia> share for future references
<binia> weirdly tho, i did upgrade from 14.04 the other week, really not long ago
<binia> but it was proper fresh install
<spinza> this file /etc/update-manager/meta-release has links that seem wrong compared to what I read online
<spinza> http://pastebin.com/h39pjRpQ
<binia> mhm so they look for something they shouldnt or somewhere they shouldnt
<spinza> and those urls aren't found
<binia> ah makes sense
<binia> did you find the working ones yet and changed it
<binia> curious will it kick off
<binia> want me to check in my ubuntu 16.04
<binia> would assume those will be the working ones
<spinza> yep
<binia> [METARELEASE]
<binia> URI = http://changelogs.ubuntu.com/meta-release
<binia> URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
<binia> URI_UNSTABLE_POSTFIX = -development
<binia> URI_PROPOSED_POSTFIX = -proposed
<spinza> this seems to be the right one
<spinza> http://pastebin.com/jCq7ghF1
<spinza> yep like you have it
<binia> nice one
<spinza> do-release is picking up 16.04.1 now
<binia> \o/
<binia> good job
<spinza> thanks for being a sounding board :)
<binia> no probs
<binia> decided to help a lot in january so i dont have to help the rest of the year :D
<Mis-anthrope> anyone up?
<binia> depends on a question i guess :D
<Mis-anthrope> I am using ubuntu 16.04 server (x86) with vbox. I have installed openbox as WM. After installing VBoxGuestAdditions, my server crashes the minute I run startx. There are a lot of crash logs and I dunno which log to look up to see what the problem actually is.. I looked up Xorg log and it only has information about starting up X and not why it crashed.. Any suggestions?
<patdk-lap> Mis-anthrope, you probably want #ubuntu
<Mis-anthrope> uhmm.. I am using ubuntu server
<Mis-anthrope> anyways.. will go there.. ty :)
<patdk-lap> but xorg isn't ubuntu-server
<patdk-lap> if you want to get really technical: Maintainer: Ubuntu X-SWAT
<patdk-lap> and if you look under Task, ubuntu-desktop, there is no ubuntu-server listed
<MASM> wep
<shambat> having some problems with a disk in my btrfs array (btrfs-progs v4.4 in Ubuntu 16.04). I have 4 disks in raid10, where /dev/sde is showing 100% busy and reads from it are causing 100% iowaits on the reading core. I started a scrub which is taking a long time, and my kern.log has a lot of: "blk_update_request: I/O error, dev sde, sector <number>" in it.
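With persistent I/O errors on one member like shambat describes, a common sequence is to check the drive's own health, read btrfs's per-device counters, and replace the disk online; a sketch (the mount point /mnt/array and the new device /dev/sdf are assumptions, smartmontools assumed installed):

```shell
# The drive's own SMART health report
sudo smartctl -a /dev/sde
# Per-device error counters kept by btrfs
sudo btrfs device stats /mnt/array
# Progress of the running scrub
sudo btrfs scrub status /mnt/array
# If the disk is dying, replace it online with a fresh device
sudo btrfs replace start /dev/sde /dev/sdf /mnt/array
```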
<craysiii> Hey everyone, I am quite puzzled. I am trying to install ubuntu server 16.04.1 on my desktop, and later install a DE. I am trying to set up software raid with MDADM, and ive come to a point where i need to use fdisk, but when i use CTRL-ALT-F2 to access busybox shell, it says fdisk doesn't exist, nor can I install it via apt (doesn't exist either). how can i use fdisk in this situation?
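In the installer's busybox shell, parted is usually available even when fdisk is not; a sketch for prepping a raid member disk (device name, label type, and layout are examples, not craysiii's actual setup):

```shell
# parted can run non-interactively with -s where fdisk is missing
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 100%
# flag the partition for use as an mdadm member
parted -s /dev/sda set 1 raid on
```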
<karstensrage> for my ubuntu 16.04 to boot with my custom NSS library, i have to short circuit out for the several boot things... is this normal?
<karstensrage> http://pastebin.com/HYsJ72rB
#ubuntu-server 2017-01-03
<patdk-lap> karstensrage, that is not a very safe thing to do with security
<karstensrage> patdk-lap, what do you mean?
<patdk-lap> you are not matching the whole string, only the start of the process name
<karstensrage> so why is that bad? patdk-lap
<patdk-lap> whatever that check is doing, can be bypassed by using a process starting with the same name
<karstensrage> well its an nss library like nss_ldap
<karstensrage> and those processes are the ones that open the library but dont do anything with it
<karstensrage> or close it
<karstensrage> so that code is necessary to short circuit out if those processes open the library
<karstensrage> so if there is process that has the same starting name, i guess i would want to short circuit out as well
<karstensrage> this same problem is apparently with nss_ldap
<karstensrage> but they handled it differently
<karstensrage> debian sure makes things painful
<tarpman> karstensrage: was just about to say, nss-ldap doesn't seem to have any of those process names hard-coded in it; what did they do differently?
<karstensrage> tarpman, the work around afaict was to set a flag to either do a strong connection or a weak connection to the ldap server, in the former case keep trying after a failure, in the latter, abort if  it fails the first time
<karstensrage> i dont have that luxury
<karstensrage> i hate this way of doing it btw with the hardcoded names
<karstensrage> but im not seeing a good way around it
<karstensrage> tarpman, basically if you google "dbus nss_ldap" you can find all the discussions about the troubles nss_ldap had
<karstensrage> it was really hard to narrow it down to dbus, but once i did that, i was able to put in the right debugging to see these processes and filter them out
<tarpman> this is ringing some bells now... some of these bugs look very familiar
<karstensrage> tarpman, any other suggestions?
<rbasak> teward: I think you can set DEB_BUILD_MAINT_OPTIONS=hardening=-pie or something like that.
<rbasak> teward: https://wiki.debian.org/Hardening#dpkg-buildflags
<tarpman> karstensrage: in your position I'd be trying very hard to detect the "network unreachable" state from my module... that wasn't possible for libnss-ldap since libldap hides the network state behind the LDAP result code
<jakst> is this the right channel to ask for assistance with data recovery with Linux Raid 5 / LVM ?
<cpaelzer> jakst: it is one channel to ask - there is no one specific to ubuntu + lvm/raid
<cpaelzer> jakst: you can still go on to the wider community in #ubuntu if you find no help in the more server specific group around here
<jakst> cpaelzer: Of course, just wanted to make sure I wouldn't be chased away with torches and pitchforks for asking here :) Already tried #ubuntu but didn't get much of a response
<cpaelzer> jakst: it may still be a slow start of the year
<jakst> Well, I'll give it a shot
<jakst> The thing is, my physical and logical volumes have disappeared from LVM, and my raid array has status 'active, degraded, not started' and reports the wrong size
<cpaelzer> jakst: did any of the links that were linked there help you already ?
<jakst> No, not really
<cpaelzer> jakst: if all is gone (pv and lv and likely also vg) you have to start looking bottom up
<cpaelzer> jakst: so #1 are the raw devices like /dev/sd... still there?
<cpaelzer> jakst: from there go on with pvdisplay, maybe pvscan ... to find your pv's - and from there to vg and lv and so on
<jakst> I can see them with fdisk -l
<cpaelzer> jakst: the question is where it breaks
<cpaelzer> jakst: ok so disks are there - and for the moment we assume they are intact
<cpaelzer> jakst: you said LVM / Raid before - is it only LVM or is also an md involved?
<jakst> cpaelzer: pvdisplay and pvscan display nothing
<jakst> yes
<cpaelzer> stacked which way - are the pv's on the md array - or have you made a md array out of lv's ?
<jakst> cpaelzer:  /dev/md0 consists of raid5 of  sdc sdd and sde
<cpaelzer> jakst: ok and the pv(s) is on /dev/md0 to shape off lvms from there right?
<cpaelzer> jakst: is cat /proc/mdstat still happy about /dev/md0?
<lordievader> Good morning, happy new year!
<cpaelzer> good morning and year lordievader
<jakst> Happy new year! :)
<jakst> https://www.irccloud.com/pastebin/JcWed20J/
<jakst> This is /proc/mdstat
<cpaelzer> jakst: ok, so not lvm is broken (maybe it is later) but your md is down
<jakst> Yeah that seems to be the case
<cpaelzer> jakst: http://superuser.com/questions/603481/how-do-i-reactivate-my-mdadm-raid5-array
<cpaelzer> jakst: that should get you to activate it again
<cpaelzer> jakst: there are also commands to gather status on each member disk and such
<cpaelzer> jakst: I'd do so and store that away before starting/assembling it
<jakst> Do you mean mdadm --examine /dev/sdc etc?
<cpaelzer> jakst: and all other raid devs
<cpaelzer> jakst: I like to store debug info before changing something
<jakst> Thanks for the tip!
<cpaelzer> jakst: and then likely go with
<cpaelzer> jakst: mdadm --stop /dev/md0
<cpaelzer> jakst: mdadm --assemble --scan -v
<cpaelzer> jakst: and let us know if it worked or why not if not
<cpaelzer> jakst: the linked example has a case with out of date disks and uses force to reenable, but most of what follows depends so much on your case that you have to decide (e.g. if force is ok)
<jakst> https://www.irccloud.com/pastebin/2FlDN3O1/
<jakst> So this is the output of assemble
<cpaelzer> jakst: that is a good start
<cpaelzer> jakst:  as I read it it means it could reassemble the state and currently syncs up one of your devices
<cpaelzer> your /proc/mdstat should show it syncing with an ETA
<cpaelzer> jakst: after that you should be able to start it
<cpaelzer> jakst: what does proc/mdstat show now?
<jakst> https://www.irccloud.com/pastebin/nMMfc1Ir/
<cpaelzer> jakst: also the state of the examine output should have changed now - the disks are now part of an array
<cpaelzer> jakst: there is something like "Device Role" at the end of examine
<cpaelzer> hrm - does that mean they are all as spares (S)
<cpaelzer> need to check
<jakst> cpaelzer: Device Role is the same as before, Active device 0, 1 and 2
<cpaelzer> jakst: it very likely just needs the --force, but it is your data so I'm refusing to just say you should do so
<cpaelzer> jakst: do you have enough spare storage to dd away the raw disk content before you do so?
<jakst> No, I don't
<jakst> what does force do?
<cpaelzer> jakst: essentially it starts it anyway referring to the last line in https://www.irccloud.com/pastebin/2FlDN3O1/
<cpaelzer> jakst: from the bit I see in your case it is 98% fixing your issue, but 2% killing your data - that is why I need you to make the call
<cpaelzer> jakst: "if you search for "assembled from 2 drives and 1 rebuilding - not enough to start the array while not clean - consider --force" the net is full of recommendations to just do it
<jakst> Well I don't have enough space to backup, and it's not ultra critical to recover. Just very very nice if it works
<cpaelzer> jakst: so do the assemble with force, then start it
<jakst> It says my devices are busy -.-
<cpaelzer> jakst: it should be in recovery mode then
<cpaelzer> jakst: stop before reassemble
<jakst> Ok, but should I assemble manually? Don't know where it gets sr0 from and such
<jakst> Nvm that
<jakst> Now I forced it. Should I just mount it now?
<cpaelzer> jakst: now that you forced the assemble you should mdadm start it and check /proc/mdstat
<jakst> is that mdadm -A /dev/md0?
<cpaelzer> jakst: assemble might start it automatically - it is too long ago since mine just works for years now
<cpaelzer> jakst: what does /proc/mdstat show now (before searching for a start command that might not exists)
<jakst> https://www.irccloud.com/pastebin/8LzgkVch/
<jakst> Recovering
<cpaelzer> jakst: good
<cpaelzer> jakst: when that happened to me it was the day to read about upgrading to raid6 for the day two disks will break :-)
<cpaelzer> jakst: you can use it now, after the recovery is done it will provide the extra level of failsafe again
<jakst> Haha yeah, a lot of thoughts about upgrading have been passing through my head
<cpaelzer> jakst: I waited for it to be recovered before using it though
<jakst> cpaelzer: Yeah I'll just check if it mounts properly, then I'll leave it to recovering
<cpaelzer> jakst: in your case pvscan might be the next
<cpaelzer> jakst: as you have pvs on the md
<cpaelzer> jakst: and then vgscan, lvscan, mount
<jakst> cpaelzer: Well it appears in pvscan, but without a volume group
<cpaelzer> jakst: it appears without a vg in pvscan because the vg isn't active, I think
<jakst> It's supposed to belong to vg group0
<jakst> I think. Was a while since I set it up
<cpaelzer> jakst: so pvdisplay shows your pv's
<cpaelzer> jakst: but vgdisplay shows nothing - not even inactive?
<jakst> vgdisplay shows my volume group, but only containing a caching disk that I never bothered to activate
<cpaelzer> jakst: and vgscan is not re-finding your pvs now?
<jakst> Nope =/
<cpaelzer> jakst: sorry I'm out of remote-usable-skills now I guess
<cpaelzer> jakst: has the pvdisplay all your pv's at least?
<jakst> cpaelzer: pvdisplay shows md0, but not the individual drives
<lordievader> jakst: That makes sense? Right?
<lordievader> Ja
<lordievader> Whoops
<jakst> lordievader: Well I think I recall that each drive was listed under pvs
<cpaelzer> jakst: if you did pvcreate on /dev/md0 you will only see /dev/md0 in pvdisplay
<lordievader> For mdraid perhaps... but if you layer lvm on top of mdraid you won't see all drives in pvs/pvdisplay.
<cpaelzer> jakst: the member disks are no more to be accessed directly or you will kill your raid
<lordievader> ^ that.
<cpaelzer> lordievader: ack
<lordievader> If you'd let LVM do the raid5 then yes, you'd see all disks.
<jakst> Ok, but trying to mount the array I get 'mount: wrong fs type, bad option, bad superblock on /dev/md0'
<lordievader> jakst: You put lvm on the mdraid right?
<lordievader> LVM ain't a filesystem ;)
<jakst> lordievader: No I guess I haven't. How would I do that without destroying the data?
<lordievader> jakst: What is the output of 'sudo pvscan && sudo vgscan && sudo lvscan'?
<cpaelzer> that ^
<jakst> https://www.irccloud.com/pastebin/v8VXxrij/
<jakst> sdb is the device I was meaning to use as cache, but never did
<lordievader> Hmm, md0 contains a PV signature but is not assigned to any volume group?
<jakst> Before the crash I had a logical volume called data
<jakst> Heh, yeah
<lordievader> jakst: Could you pastebin the output of 'sudo lsblk -o NAME,KNAME,FSTYPE'?
<jakst> https://www.irccloud.com/pastebin/0oD8Q8cw/
<cpaelzer> jakst: it will likely just complain not knowing about "data" but what does this give you?: "vgchange -ay data"
<jakst> Yeah not found
<lordievader> Sda contains rootfs I presume?
<cpaelzer> jakst: sudo vgcfgrestore --list data
<jakst> yes
<jakst> sudo vgcfgrestore --list data
<jakst> No archives found in /etc/lvm/archive.
<cpaelzer> :-/
<jakst> But if I ls that directory I can see them
<cpaelzer> ?
<jakst> https://www.irccloud.com/pastebin/4xEbEngI/
<cpaelzer> jakst: well you have a backup of the group0 cache, but not of a data vg
<cpaelzer> jakst: I slowly lean to assuming you once had a data lvm, but stopped using it a while ago
<jakst> My system was up and running before new years
<lordievader> jakst: What happened that you lost it?
<lordievader> Power outage?
<jakst> Might have been, not sure. I was away
<jakst> But I also might have messed it up in my early rescue attempts
<cpaelzer> I just checked your former pastes - since /dev/md0 is a proper PV it was used as PV - I wonder why it would auto-backup the cache but not the data config
<jakst> But group0 contained the lv data, so it should be correct, right?
<jakst> data wasn't its own group
<cpaelzer> jakst: are the files in /etc/lvm/archive human readable - and if yes is data in there?
<lordievader> jakst: Is the data lv defined in /etc/lvm/backup/*
<lordievader> ?
<jakst> not in backup, but in archive
<jakst> # Generated by LVM2 version 2.02.98(2) (2012-10-15): Wed Jul 15 12:27:07 2015
<jakst> contents = "Text Format Volume Group"
<jakst> version = 1
<jakst> description = "Created *before* executing 'lvcreate group0 -L20M -n dataCacheMe$
<jakst> creation_host = "NAS"   # Linux NAS 3.16.0-43-generic #58~14.04.1-Ubuntu SMP Mo$
<cpaelzer> but that seems only to be the cache device
<cpaelzer> or came more before flood control kicked you
<lordievader> jakst: Could you pastebin that file?
<jakst> https://www.irccloud.com/pastebin/xrkslaQV/
<jakst> Yeah, I accidentally pasted raw :P
<jakst> Hard to copy long texts from console...
<cpaelzer> nice, it really has a backup
<cpaelzer> not sure but you might be able to reload that with vgcfgrestore
<jakst> I could!
<jakst> And it mounted!!! My data is back!!!!
<cpaelzer> yeah
<lordievader> Whoop whoop
<cpaelzer> gz jakst
<lordievader> jakst: Nice
<jakst> Love you guys cpaelzer lordievader
<lordievader> jakst: What was the actual command you used to restore the backup?
<jakst> sudo vgcfgrestore -f /etc/lvm/archive/group0_00008-621465970.vg group0
<lordievader> Ah, cool.
<lordievader> Thanks
<jakst> I really couldn't have figured that out on my own, and I already spent a whole day trying
<jakst> Now I learned a lot as well! Thanks :)
<cpaelzer> you're welcome
<jakst> So, futureproofing.... Raid6. Anything else?
<lordievader> I'd do the raid in LVM, but that is me ;)
<jakst> What's the upside?
<lordievader> More flexibility. LVM uses dmraid, like mdraid, but does so per LV instead of per disk.
<lordievader> So you can determine per LV if you want linear, raid0, raid1, raid-whatever.
<cpaelzer> jakst: also maybe share your own insight in something like http://askubuntu.com/questions/13981/recover-lvm-after-hdd-crash or a new post
<jakst> cpaelzer: Absolutely, I'll do that!
<jakst> lordievader: Okay, sounds nice. I'll have to look at that when I get more disks
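Condensed, the recovery walked through above is a handful of commands. Device names, the VG name, and the archive file are specific to jakst's box, and the mount point is an assumption; on another machine every one of these would differ:

```shell
# 1. Capture the state of each member before touching anything
sudo mdadm --examine /dev/sdc /dev/sdd /dev/sde > md-examine.txt
# 2. Stop and force-reassemble the degraded array (--force is the risky step)
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --scan -v --force
# 3. Restore the LVM metadata from the archived copy
sudo vgcfgrestore -f /etc/lvm/archive/group0_00008-621465970.vg group0
# 4. Activate the VG and mount the recovered LV
sudo vgchange -ay group0
sudo mount /dev/group0/data /mnt
```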
<ghostal> i need to set up sendmail on my ubuntu xenial server. the server just needs to send emails to users, it doesn't need to receive anything. i found this guide on DO, whose guides i've found to be excellent in the past, but this one seems a little more confusing to me https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-postfix-as-a-send-only-smtp-server-on-ubuntu-16-04
<ghostal> particularly, i'm confused about hostname settings. should it even matter if all i'm doing is sending email?
<rbasak> ghostal: unless you're using a service provider's email relay, if your hostname doesn't resolve to your source IP (and in reverse) then many hosts will block your emails for spam.
<ghostal> rbasak: well, i'm not using a relay, i know that much :)
<ghostal> my hostname is just "mir"
<ghostal> but there is a DNS a record for the machine
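rbasak's point about forward and reverse DNS agreeing can be checked, and a send-only postfix set up, roughly like this (mir.example.com and 203.0.113.10 are placeholders for ghostal's real FQDN and server IP):

```shell
# myhostname should be the FQDN whose A and PTR records point at this host
sudo postconf -e 'myhostname = mir.example.com'
# nothing needs to be received, so only listen on loopback
sudo postconf -e 'inet_interfaces = loopback-only'
sudo service postfix restart
# verify forward and reverse DNS agree
dig +short mir.example.com
dig +short -x 203.0.113.10
```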
<kirkland> dasjoe: hmm, I think it's supposed to default to the last LTS
<zul> coreycb: ping can you update your upstream report please?
<coreycb> zul, that's in progress, did you see I moved that btw?
<zul> coreycb: yeah im using the new location
<coreycb> zul, ok
<coreycb> zul, i'm working on barbican update-excuses failure.  waiting on a s390 instance to debug the neutron autopkgtest failure.
<zul> coreycb: ack
<coreycb> zul, do you have an MIR open for monasca-statsd?
<zul> coreycb: no there needs to be one i think
<coreycb> zul, ok i'll open one
<zul> coreycb: k
<jge> anyone ever used chrony in ubuntu before? I'm trying to query a chrony client on my network as "chronyc -h 192.168.1.22 tracking" but I'm wondering if it needs to be allowed first inside chrony.conf
<ikonia> allowed ?
<ikonia> if you're specifying it on the command line it won't take that parameter from the config
<jge> ikonia: chrony operates as an ntp client by default, if I allow a host inside chrony.conf then it becomes a server for that client (if it needs to) but I just want to query it for skews
<jge> I'll test it
<ikonia> jge: right, but you're specifying -h on the command line so it won't care about that option in the host
<ikonia> (host config)
<jge> ikonia: I'm querying the remote server from another host in the network
<ikonia> jge: yes, I understand that,
<ikonia> jge: however the fact that you're setting -h on the command line replaces that parameter from the config
<jge> same thing as an "ntpq -p
<jge> but the other end needs to allow the connection
<jge> no?
<ikonia> jge: so you're talking about the config on the remote servcer it's querying
<ikonia> rather than the client
<jge> yep
<ikonia> jge: ok, so yes you'll need to tell it to allow queries
<jge> yeah I did, let me test it
<jge> never worked with chrony so I wasnt sure
<ikonia> jge: it works 80% the same as ntp
<jge> yeah the guy who has it running here swears by it
<jge> "it's so much better than ntpd"
<jge> but no explanation as to why he thought that.. had to look it up.
<ikonia> I'm not sure why it's "better"
<ikonia> I've found it "fine" but nothing to write home about as a big song and dance
<ikonia> I don't see any real world benifit over ntp
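For the record, remote chronyc queries have to be permitted on the machine being queried; in chrony that is the cmdallow directive in chrony.conf (the addresses below are jge's example network):

```shell
# On the queried host (192.168.1.22), add to /etc/chrony/chrony.conf:
#   cmdallow 192.168.1.0/24
# then restart chronyd so the directive takes effect
sudo service chrony restart
# if a firewall sits in between, the command port (udp/323) must be open
# finally, from another machine on that network:
chronyc -h 192.168.1.22 tracking
```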
<ctjctj> Hello.  I'm attempting to mount a filesystem from an iscsi server.   My fstab has the _netdev option for the filesystem and it is using UUID.  The problem is that during the boot sequence iscsi-open start script hasn't run at the time the system attempts to mount the disk.  How do I get iscsi-open to run after network start and before the mounting of filesystems?
<nacc> ctjctj: i wonder if you need to include iscsi into your initramfs
<ctjctj> nacc, no.  I'm not booting off an iscsi disk.
<ctjctj> UUID="xyzzy" /var/lib/mysql defaults,_netdev 1 1
<ctjctj> So we boot of a local disk and then we should mount the iscsi disk before mysqld (mariadb) starts
<nacc> ctjctj: ah sorry
<ctjctj> nacc, it was a great answer, just not the one I needed.
<nacc> ctjctj: 16.04?
<ctjctj> 14.04 LTS
<nacc> ctjctj: hrm, so maybe an upstart ordering is needed?
<ctjctj> I thought that.  But we have S45open-iscsi in rcS.d which I *think* means to do this before we change out of single user mode and into a multi-user runstate.
<ctjctj> My understanding was that by putting the _netdev it would cause the mount of network devices to wait until after open-iscsi completed.
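One workaround people use on 14.04 for this ordering problem is to take the iSCSI-backed filesystem out of the early automatic mount pass entirely and mount it explicitly once the session is up; a sketch, with ctjctj's placeholder UUID and mount point, and the hook location being an assumption rather than a packaged mechanism:

```shell
# /etc/fstab -- keep the entry, but stop early boot from blocking on it:
#   UUID="xyzzy" /var/lib/mysql ext4 noauto,_netdev 0 0
# then, at the end of /etc/init.d/open-iscsi's start action (or from rc.local,
# which runs late), mount it and only then start the database:
mount /var/lib/mysql
service mysql start
```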
<coreycb> zul, monasca-statsd is optional so I added it to suggests
<coreycb> zul, upstream report is updated now too for ocata
<jge> ikonia: that worked but now I'm getting "517 Protocol version mismatch", not much about it online.. the client querying is running Ubuntu and the other CentOS.. wondering if this is the problem
<ikonia> shouldn't be
<jge> ubuntu has chrony version 1.29 and Centos 1.29.1
<ikonia> I have multiple distros using it with each other
<jge> ikonia: looking at source here https://github.com/SuperQ/chrony/blob/master/client.c
<jge> something to do with a bad header?
<ikonia> jge: not sure, I'll need to look into it, but it works on mine
<jge> ikonia: are you able to query other clients with "chronyc -h ip tracking"
<jge> ?
<ikonia> jge: I can't check it at the moment as I don't have access to those hosts from where I am
<jge> hmm ok.
<jge> I don't know then :(
<jge> it would be nice to have a switch for verbose
<ikonia> I can try it for you later on
<zul> coreycb: cool beans
<coreycb> beisner, hi can you promote python-cryptography 1.0.1-1ubuntu1~cloud1.2 to liberty-proposed please?
<beisner> coreycb, done, re: https://bugs.launchpad.net/horizon/+bug/1601986
<ubottu> Launchpad bug 1601986 in OpenStack Dashboard (Horizon) "RuntimeError: osrandom engine already registered" [Undecided,New]
<coreycb> beisner, ty sir
<zul> coreycb: mind if i sync python-muranoclient over from debian?
<coreycb> zul, fine by me
<jakst> How would I go about doing a SMART scan of a disk if my ubuntu server is hosted on an ESXi hypervisor? In Ubuntu or in ESXi?
<ctjctj> How do I force open-iscsi to start before network mounts?  At this point I have a _netdev in fstab for the disk in question. open-iscsi attaches the device correctly when it runs but upstart/systemd(?) are attempting to mount the disk before open-iscsi starts
<coreycb> beisner, hey these are ready to promote from liberty-proposed->liberty-updates: cinder, heat, manila, nova, openstack-trove, sahara
<ctjctj> I'm looking at an issue that /etc/init/mountall-net.conf will attempt to mount devices that are attached via open-iscsi.  But mountall-net.conf runs before open-iscsi runs.  Is there a fix for this?
<jge> hey all, could I install a version of a package that's meant for, say, Xenial on Trusty?
<nacc> jge: that's not recommended or supported
<ikonia> jge: no
<jge> so I'm better off installing from source if I need a version that's not available in the repos?
<ikonia> jge: what do you actually want
<jge> ikonia: chrony version 2.3
<jge> http://chrony.tuxfamily.org/doc/2.3/manual.html#Installation
<ikonia> jge: why do you want that version ?
<nacc> jge: 2.3 is not available in xenial either, afaict
<ikonia> win 12
<ikonia> oops
<jge> ikonia: I'm getting an error trying to query another chrony client on CentOS, "Read command packet with protocol version 5 (expected) 6", and from the mailing list here it looks like it might be related to the version: https://listengine.tuxfamily.org/chrony.tuxfamily.org/chrony-users/2010/06/msg00005.html
<jge> so I wanted to test if upgrading to the latest release will help
<ikonia> I've got 16.04 and Centos 7 hosts in sync from each other
<jge> what version of chrony on both?
<ikonia> sadly, I can't check as I ended up not going home tonight
<jge> I was able to test earlier from different clients (one 1.29 and the other 2.2), both CentOS, and it worked, so I'm thinking it's the version of Ubuntu..
<ikonia> jge: what version does ubuntu use
<jge> it appearently sends protocol version 5 when the other ends expects 6
<ikonia> what actual chrony version does ubuntu use
<ikonia> (not got a box here to check)
<jge> ikonia: it's on 1.29.1 which is the latest stable
<ikonia> jge: so applying logic, you have a 1.29 box working and 2.2 box working
<ikonia> i don't think a 1.29.1 "won't" work, when a 1.29 box does
<jge> same OS though
<ikonia> jge: so ?
<jge> well, I'm thinking it might be implemented differently.. it's clearly sending a different version of the protocol
<ikonia> so if you think it's a different implementation, upgrading won't do anything
<jge> if it were the same code base then it shouldn't complain
<ikonia> jge: have you actually looked at the config or arguments to see if things can be set
<ikonia> jge: it is the same code base
<ikonia> you've just said that
<ikonia> you have a 1.29 client that works
<ikonia> 1.29.1 is the same codebase
<jge> my idea with upgrading is that the latest release could have better (compatability) with earlier versions as opposed to the opposite
<ikonia> jge: sorry, that's just blind randomness
<jge> maybe downgrade connection protocol, I don't know ..just spitting ideas
<ikonia> jge: have you even done basic research to see if the clients support both versions of the protocol
<ikonia> and if you can force the protocol, and what the default is
<jge> i looked up forcing the protocol but the manual doesn't have anything for that..
<jge> the client obviously does not support one of the protocols
<ikonia> why though
<ikonia> as it's in the code base
<ikonia> logically it's more likley to be a configuration option
<jge> ikonia: https://github.com/mlichvar/chrony/blob/master/NEWS
<jge> check out the security fix under version 1.29.1
<jge> incompatible with previous protocol version..
<ikonia> there you go then
<ikonia> so you need to use the other protocol
<jge> but would that be referring to 1.29 or 1.28?
<ikonia> would what ?
<jge> previous protocol version
<ikonia> so 1.29 seems to support both
<ikonia> 1.29.1 seems to patch one to fix a problem
<ikonia> so the logical approach is to use the one that is supported by both
<ikonia> how to force it is the question
<ikonia> if you look there is a similar change in 1.27
<jge> hm yeah I see it
<jge> ikonia: I don't have chronyd open on the internet, maybe I could just go back to 1.29
<jge> wait a minute, I was looking at another box... the ubuntu box is already on version 1.29
<ctjctj> For anybody that cares about the open-iscsi mount on boot issue I was describing.  When we went to upstart we created a helper tool called "mountall" which processes fstab and mounts drives as they become available.  Once upstart starts the network /etc/init/mountall-net.conf runs and kills the mountall process.  BUT /etc/init.d/open-iscsi start has not yet run so any iscsi targets have not yet been mounted.  Thus the mount
<ctjctj> fails and boot hangs.  The original intention was for the _netdev in /etc/fstab to keep any mount of the iscsi device from happening.  All the other remote devices would then be mounted by commands like "mount -a -t nfs -O _netdev"  Thus /etc/init.d/open-iscsi also does a "mount -a -O _netdev" because it runs after all of NFS/CIFS and such.  Catch 22.
<keithzg> Any built-in way with systemd to have an escalating set of shutdown commands for a service? Specifically, I have a VirtualBox VM set up as a service, and I'd like it to first try VBoxManage controlvm $vmname acpishutdown, and (perhaps after a timeout) try poweroff instead of acpishutdown if the process hasn't halted.
<Curiontice> Hi! is it possible to compile squid into a package such that no shared dependency exist?
 * keithzg has tried to read systemd documentation, but for instance https://www.freedesktop.org/software/systemd/man/systemd.unit.html doesn't even *mention* ExecStop, much less document it.
<ctjctj> keithzg, i believe there is a method.  The easiest that I can think of is to just have two shutdown VM commands.  One does the acpishutdown and waits up to 30 seconds.  Then the second VM shutdown runs and does the poweroff.  Since all VMs that could be shut down with acpishutdown will already be shut down, this only catches the ones still on.
<keithzg> ctjctj: Fair enough, I was thinking perhaps there was some native systemd way of doing this but that certainly sounds like it'd work. I'll try just using `/usr/bin/VBoxManage controlvm Sibrel acpipowerbutton && /bin/sleep 30 && /usr/bin/VBoxManage controlvm Sibrel poweroff`
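keithzg's one-liner could live in a unit file roughly like this (an untested sketch; "Sibrel" is the VM name from the discussion above, and the Type/start line are assumptions about how the service is launched):

```
# Hypothetical /etc/systemd/system/vbox-sibrel.service fragment
[Service]
Type=forking
ExecStart=/usr/bin/VBoxManage startvm Sibrel --type headless
# Ask the guest to shut down cleanly, wait, then hard-poweroff if needed;
# TimeoutStopSec gives systemd's own escalation (SIGTERM/SIGKILL) headroom
ExecStop=/bin/sh -c 'VBoxManage controlvm Sibrel acpipowerbutton; sleep 30; VBoxManage controlvm Sibrel poweroff || true'
TimeoutStopSec=45
```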
<ZJAY>  how would i soft link a path like /Volumes to my main path /media/dumpebut/<somehugedrive> i need it to see the soft link path in a script.
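What ZJAY describes is a plain symlink. A minimal demonstration in /tmp (the real command would target the actual mount, e.g. `sudo ln -s /media/dumpebut/<somehugedrive> /Volumes`):

```shell
# Create a stand-in for the big drive's mount point, then link to it.
# -s = symbolic, -f = replace an existing link, -n = don't follow a dir link
mkdir -p /tmp/media/bigdrive
ln -sfn /tmp/media/bigdrive /tmp/Volumes
readlink /tmp/Volumes    # prints /tmp/media/bigdrive
```

A script can then use /tmp/Volumes (or /Volumes on the real system) transparently.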
#ubuntu-server 2017-01-04
<icedwater> Mis-anthrope: hello
<icedwater> I'm interested to see how this pans out in the end.
<FizzyCoffee> Yes.
<icedwater> FizzyCoffee: that too. How did you make coffee fizz?
<FizzyCoffee> Dry Ice.
<icedwater> Mis-anthrope: you'll probably have to restate your problem here about lightdm being removed because it crashes when you log in.
<ivoks> 'win 21
<zul> coreycb: i got gnocchi
<coreycb> zul, ok
<coreycb> jamespage, hi can you promote the following please from liberty-proposed->liberty-updates? cinder, heat, manila, nova, openstack-trove, sahara
<jamespage> coreycb, okies
<jamespage> coreycb, ok those are going now
<coreycb> jamespage, cool thanks
<jamespage> there are also py-crypto and python-oslo.messaging updates in proposed - are those good to go?
<coreycb> jamespage, checking
<coreycb> jamespage, those should probably wait a bit
<jamespage> ack
<cpaelzer> rbasak: in prep for my next major task it would be kind if you could re-import qemu to the importer tree
<coreycb> jamespage, can you promote 1.8.3-0ubuntu0.15.04.2~cloud3 to kilo-proposed?
<coreycb> jamespage, erm oslo.messaging
<rbasak> cpaelzer: import running.
<rbasak> nacc: FYI, in importing qemu I got:
<rbasak> Importing 2.0.0+dfsg-2ubuntu1.28 to ubuntu/trusty-updates
<rbasak> 01/04/2017 15:07:43 - ERROR:Unable to import orig tarball for 2.0.0+dfsg-2ubuntu1.28
<rbasak> Importing 2.0.0+dfsg-2ubuntu1.29 to ubuntu/trusty-proposed
<rbasak> 01/04/2017 15:08:21 - ERROR:Unable to import orig tarball for 2.0.0+dfsg-2ubuntu1.29
<rbasak> etc.
<rbasak> It seems to be continuing.
<cpaelzer> interesting rbasak
<cpaelzer> that might be related to the pristine tar changes
<cpaelzer> nacc: ^^ ?
<rbasak> cpaelzer: import complete. It's pushed something, at least. I'm not sure about those errors.
<rbasak> nacc, cpaelzer: full output: http://paste.ubuntu.com/23739412/
<rbasak> So it looks like it impacts SRUs only.
<cpaelzer> thanks
<adrian_1908> In apache2, should i use the /var/www/... directory for websites, or is there a reason to prefer something else?
<moonpup> adrian_1908: personal preference... you can create and use any directory / mount point you like.
<coreycb> jamespage, zul: ceilometer hasn't cut any releases for newton yet so I'm thinking about cutting a snapshot from master branch so we can get mod_wsgi and other updates released
<coreycb> jamespage, zul: s/newton/ocata
<jamespage> coreycb, +1
<coreycb> jamespage, ok i'll do that then
<adrian_1908> moonpup: ok, I'll stick to defaults then I think.
<zul> coreycb: cool beans
<coreycb> jamespage, should I use our current snapshot version (ie. 1:7.0.0.0rc2~dev336-0ubuntu1) or switch to this -> 1:7.0.0+git20170104.aa3f491bb-0ubuntu1 ?
<jamespage> coreycb, what will the next version release be?
<coreycb> jamespage, should be 8.0.0
<coreycb> jamespage, last one was 7.0.0
<jamespage> coreycb, 1:7.0.0+git20170104.aa3f491bb-0ubuntu1
<coreycb> jamespage, ok thanks
<nacc> rbasak: yes, that message is when `gbp import-orig` fails
<rbasak> nacc: ah, so that affects pristine tar branches only?
<nacc> rbasak: i believe so, and the 'upstream/' tags
<nacc> rbasak: that's the only use we have for the 'orig' tarballs themselves
<rbasak> OK
<rbasak> Thanks
<nacc> rbasak: i can verify that with the source in a bit, but that's my recollection of the implementation
<nacc> rbasak: would be good to see why that happens in practice (feel free to open a bug)
<adrian_1908> what would you guys recommend for comfortably copying files back-and-forth between client and server (not constantly, just the occasional website, config files etc.)?
<rbasak> nacc: bug 1654022, thanks.
<ubottu> bug 1654022 in usd-importer "Errors importing orig tarballs when importing qemu" [Undecided,New] https://launchpad.net/bugs/1654022
<compdoc> rsync
<rbasak> adrian_1908: git, possibly with git-annex. Otherwise you lose track of what version of what is where.
<adrian_1908> hmm, that's a good point rbasak
<nacc> rbasak: ack, thanks for the bug
<coreycb> zul, do you have an MIR open for python-pyroute2 for neutron?
<zul> coreycb: #1653527
<coreycb> zul, ok you mind revisiting that for mterry's comments?  i uploaded a new neutron that should fix up the autopkgtest issues.
<coreycb> zul, where's pyroute used anyway?
<zul> coreycb: hold on
<zul> coreycb: https://github.com/openstack/neutron/commit/9183da7c96df506cdfa5c83a8d4d22e34609a8f4
<coreycb> zul, I see, it was added since b2
<zul> coreycb: yeah i added it to fix the CI
<coreycb> zul, ok well shouldn't be a blocker for b2 if we backport it the cloud archive while it's reviewed
<zul> coreycb: i think its in the cloud archive now
<reyz> hello guys
<reyz> is Ubuntu Server a good solution for a Home NAS?
<reyz> mostly to serve movies etc a long all my home devices
<batman1> anyone care to give me a hand with deploying autopilot? I've been working on this for about a week now without success. I can't even get landscape to deploy. It's currently bombing with a TLS error shortly after juju launches machine-0
<reyz> any1?
<soop> reyz: freenas
<soop> http://www.freenas.org/
<reyz> soop: ubuntu is not a good solution?
<reyz>  didnt like freenas much
<soop> You could use Ubuntu but why reinvent the wheel? Unless you plan on using it for something else as well ...
<soop> what is your end goal like dlna services etc?
<soop> https://help.ubuntu.com/community/MediaTomb
<soop> or this
<soop> https://www.danbishop.org/2014/04/28/dlna-upnp-servers-on-ubuntu/
<reyz> soop: i mostly use my NAS for accessing movies via samba
<reyz> (since my family needs subtitles for the movies)
<reyz> and i also use it to store files and music
<reyz> thats it
<reyz> soop: also, freenas is not meant to be used with commercial hardware, ZFS is slow AF with non RAD setups
<reyz> RAID*
<bladernr> Hey, I'm curious, why is an apparmor update trying to remove rsyslog?
<bladernr> http://pastebin.ubuntu.com/23742021/
<bladernr> hrmm, nevermind, I need to ask elsewhere, it's due to a PPA :/
<sbeattie> bladernr: ah, I was just going to say that I couldn't reproduce that.
<bladernr> Yeah, someone snuck it into a PPA and that caused it.  I'm going to email the guy who did it to understand why.
#ubuntu-server 2017-01-05
<keithzg> Hrmm, I'm trying to hide a .git folder from a samba share and it doesn't seem to be working, which is kindof baffling me.
<keithzg> I would've assumed, particularly since it's in the root of the share, that
<keithzg> hide files = /.git/
<keithzg> would've been sufficient, but apparently not?
<keithzg> Maybe I'll just try the "hide dot files" option and be done with it.
<keithzg> Wait, *that* didn't work?
 * keithzg reads the documentation further, figures out he actually wants to "veto" the folder, aha
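keithzg's conclusion, as an smb.conf fragment (share name and path hypothetical):

```
# Hypothetical share section. "veto files" makes the entry both invisible
# and inaccessible; "hide files" only sets the DOS hidden attribute,
# which many clients show anyway.
[projects]
   path = /srv/projects
   veto files = /.git/
```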
<xnox> cpaelzer, \o/ libvirt
<adrian_1908> hello. I have apache2 up and running and just installed PHP7 via apt. If I want to use the latter with the former, do I install the `libapache2-mod-php7.0` package, or what's the usual way to do it?
<patdk-lap> adrian_1908, personally I recommend using php-fpm, but it's not nearly as simple and easy as mod-php
<adrian_1908> patdk-lap: yeah I need easy, I'm a total beginner :)
<patdk-lap> well, just install the mod-php then, it should do everything needed by itself
<adrian_1908> Yes, I already did and am busy with it. It activated itself in Apache and appears to have changed the MPM model (I was already wondering about how I'd accomplish that myself). Very convenient.
<adrian_1908> Damn, the `mysql-server` package sure has a lot of dependencies (perl stuff). Didn't expect it to be that large.
<gecko_x2> hi
<gecko_x2> anyone know alot about LXD?
<gecko_x2> i'm just trying LXD on 16.04 first time running on a ZFS root, looking to set up LXD with MACvlan based bridging
<gecko_x2> i installed it using DIR as storage, because the underlying filesystem is already ZFS
<gecko_x2> but does that mean LXD can't use zfs snapshots, instead it plays around with compressed files and rsync?
<gecko_x2> and does that impact performance? i'm referring to the comments section in this article: https://bayton.org/2016/05/lxd-zfs-and-bridged-networking-on-ubuntu-16-04-lts/
<gecko_x2> secondly, the concept of MACvlan is new to me, and i'm looking to learn if it could be used to replace IOMMU/PCI passthrough. Anyone here know a lot about MACvlan/MACvtap?
<cpaelzer> thanks xnox
<lordievader> Good morning.
<fishcooker> anyone with apt module error "FATAL -> Failed to fork"? http://vpaste.net/izPXA don't say apt-get install -f ... it emits the same error messages
<odc> failed to fork? how much free RAM do you have left?
<ws2k3> hello guys im having an issue with ubuntu 12.04 with ipv6. after some time my routing tables are getting full and then ipv6 stops working
<ikonia> why are your routing tables "full"?
<ikonia> it would be a HUGE networking route table to fill a host
<fishcooker> little, odc: http://vpaste.net/ZPYp6 any suggestions?
<ws2k3> ikonia i dont know why they are getting full i think its a bug
<ikonia> ws2k3: are they actually "full" or do they just look full
<ws2k3> ikonia i have around 100 ipv6 hosts in my network and i have a /64
<ikonia> eg: how are you determining they are "full"
<ws2k3> ikonia how can i check that?
<ikonia> ws2k3: hang on
<ikonia> do you know if they are full or not
<ikonia> fishcooker: suggestion on what ?
<lordievader> fishcooker: Try to free up some ram any try again.
<ws2k3> ikonia wc -l /proc/net/ipv6_route shows me 21864
<ws2k3> and on some machines im already getting     wc: /proc/net/ipv6_route: Cannot allocate memory
<ws2k3>     0 /proc/net/ipv6_route
<ikonia> ws2k3: so you're out of memory - is that because of the routes though
<ikonia> ws2k3: what's allocating these routes
<ws2k3> ikonia i dont know exactly
<ikonia> ws2k3: are they 1:1 host routes
<ikonia> or network routes
<ws2k3> ikonia i do know that im getting the canot allocate memory
<ws2k3> ikonia they are just webservers with 1 ipv6 address and 1 ipv6 default gateway
<ikonia> ws2k3: what are the routes to then
<ikonia> can you actually display the routing table ?
<ws2k3> yes i can i think you mean ip -6 route show cache?
<ws2k3> i sent my ip -6 a in a private message ikonia
<ikonia> not cache
<ikonia> ws2k3: based on what you've just pasted your networking setup is broken
<ikonia> you have dead routes on the loopback interface
<ws2k3> ikonia i know they are for a load balacer
<ikonia> you also have conflicting routes
<ikonia> you have the same network routing out of two interfaces
<ikonia> your network setup appears to be screwed up looking at that route table
<ws2k3> i think i need to delete this one 2001:1aa8:1850::/64 dev eth0  proto kernel  metric 256  expires 1906251sec
<ikonia> as I don't know your networking setup, I can't comment on what's valid, what's not valid, but you had dead routes, you have conflicting routes and you have bad routes - those 3 things are not good and suggest you have overall network problems
<ws2k3> ikonia the dead routes should not be an issue cause its just a local loopback address
<ws2k3> and the conflicting route is a different subnet
<ikonia> it is an issue though
<ikonia> ws2k3:
<ikonia> ws2k3> fe80::/64 dev eth0  proto kernel  metric 256
<ikonia> that is NOT a different subnet
<ikonia> 09:51 <ws2k3> fe80::/64 dev eth1  proto kernel  metric 256
<ikonia> look how wide that network is too
<ikonia> (I hope that is acceptable as it's not secret information on a subnet that wide)
<lordievader> fe80 is link local, each ipv6 nic should have that.
<lordievader> Though wikipedia lists it as fe80::/10
<ikonia> right
<ikonia> hence why I'm saying you've got conflict
<ikonia> as it's set to route both that out of two different physical interfaces and look how wide it is
<ikonia> I don't know the network setup on this host so I don't know what's needed/not, however looking at the route table it's not good
<lordievader> Must say I have the fe80::/64 for each interface too.
<ikonia> really ?
<ikonia> that seems very odd
<lordievader> ikonia: You don't?
<ikonia> no
<ikonia> but then the only ipv6 network I'm on is really simple
<ws2k3> by now on allmost all my servers ipv6 is broken
<ikonia> ws2k3: just because of the number of routes
<ikonia> or something else
<ws2k3> ikonia my machines are getting into trouble after i get this wc: /proc/net/ipv6_route: Cannot allocate memory
<ikonia> right - I get that you're out of ram
<ikonia> but are you out of ram due to the number of routes or just out of ram
<ws2k3> ikonia machines have plenty of ram free
<ikonia> ok, so if you look at the route table you pasted there are only a few individual routes
<ikonia> so what's causing the population of that ? the dead routes ?
<ws2k3> ikonia thats why im trying to find out
<lordievader> What do you get when you try to ping hosts?
<ws2k3> lordievader most of them dont reply
<lordievader> And on the hosts themselves?
<ws2k3> lordievader also no reply
<ws2k3> lordievader i assume you mean ping from the host to the internet for example?
<lordievader> Do you see ipv6 traffic from them on your gateway?
<lordievader> For example, yes.
<ws2k3> lordievader i dont have access to the gateway so i cant check
<ws2k3> lordievader but i cant ping the gateway nor any other host
<ws2k3> i have now run sysctl -w net.ipv6.route.max_size=8388608 and then it temporarily works again
<ikonia> ws2k3: hang on
<ikonia> ws2k3: metric 256 !!!
<ikonia> ws2k3: thats 256 hops it has to know to get to that route
<ws2k3> ikonia as far as i know metric is the priority not the amount of hops
<ikonia> maybe thats my mistake then
<ikonia> your right, my apologies
<ikonia> ahhh the metric can include number of hops
<ikonia> so yes, I am right, but thats not the "factual" answer in terms of a "solid number"
<ikonia> sorry about that
<ws2k3> ikonia np im happy with all the help i can get with this strange issue
<lordievader> But, if I understand correctly, the host just adds routes as if they are candy?
<ikonia> it's a large number of routes in /proc for sure, but looking at the route table the individual routes are few
<ws2k3> ikonia and thats exactly the problem
<ikonia> ws2k3: how are the routes getting populated ?
<ws2k3> ikonia i dont know
<ikonia> (the ones that are in the route table, not the 10000000 other ones)
<ws2k3> ikonia i just have one default gateway and one ipv6 address and thats it
<lordievader> Are privacy extensions enabled?
<ws2k3> ikonia it looks like im having this issue https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1065159
<ubottu> Launchpad bug 1065159 in linux (Ubuntu) "ipv6 routing memory leak" [Medium,Expired]
<ikonia> wow - a memory leak
<ikonia> medium "expired"
<ws2k3> ikonia this looks like the same behavior that im having
<lordievader> ws2k3: What kernel are you running?
<ws2k3> lordievader 3.2.0-57-generic #87-Ubuntu SMP
<lordievader> ws2k3: Well you could do the same as in the comments, run a newer kernel and see if the problem persists.
<lordievader> 3.2 is old anyways ;)
<ws2k3> on newer kernels i am not having this issue
<ikonia> ws2k3: parts of it certainly map to that bug in terms of behaviour
<fishcooker> buy some RAM ikonia :p
<fishcooker> noted lordievader... it works
<lordievader> Buy? Download: http://www.downloadmoreram.com/download.html
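ws2k3's `sysctl -w` workaround above only lasts until reboot; to make it persistent, the same setting (value taken from the discussion) goes into a sysctl config file:

```
# Hypothetical /etc/sysctl.d/99-ipv6-route.conf (or appended to
# /etc/sysctl.conf); applied at boot, or immediately via "sysctl --system"
net.ipv6.route.max_size = 8388608
```

This papers over the symptom; the actual fix in the thread is the newer kernel, per bug 1065159.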
<zul> coreycb: neutron is blocked on pyroute2 so I think we might want to drop it for now to get b2 things going and then re-add it
<coreycb> zul, i think we're ok.  we can test b2 in proposed.
<zul> coreycb: ok
<zul> coreycb: you uploaded sahara?
<coreycb> zul, yep and a few others
<zul> coreycb: okie dokie
<xnox> does pm-suspend work in qemu-kvm virtual machines?
<xnox> it did nothing for me =(
<cpaelzer> xnox: my xenial guest goes into state "pmsuspended"
<xnox> cpaelzer, cool. what about trusty? =) with xenial kernel?
 * xnox is weird, I know
<cpaelzer> xnox: umm I can try
<xnox> cpaelzer, please, as i'm on a frankenstein trusty.
<cpaelzer> xnox: T on 3.13.0-105-generic working as well, now rebooting into linux-generic-lts-xenial
<cpaelzer> xnox: the same - working
<cpaelzer> xnox: also wakeup works a la "virsh dompmwakeup testguest-T-Xkernel-pmsuspend"
<xnox> cpaelzer, nice.... now i just need to fix why it doesn't for me =/ (as in what did I break)
<xnox> cpaelzer, you suspend using $ sudo pm-suspend ?
<cpaelzer> yes
<xnox> cool.
<cpaelzer> FYI /var/log/pm-powersave.log http://paste.ubuntu.com/23746485/
<cpaelzer> FYI /var/log/pm-suspend.log http://paste.ubuntu.com/23746488/
<cpaelzer> xnox: ^^
<xnox> for me it returns 1 and doesn't log anything in those files =(
<cpaelzer> xnox: my host is xenial
<xnox> ditto
<cpaelzer> xnox: set -x in /usr/lib/pm-utils/bin/pm-action ?
<xnox> better
<xnox> LOGGING=1 pm-suspend
<xnox> pm-utils does not know how to suspend on this system
<lordievader> Why suspend a vm when you can do a managed save?
<xnox> lordievader, just testing if my userspace changes as to how suspend is initiated breaks things or not. To make sure I don't break real hw.
<lordievader> Ah, I see.
<xnox> cpaelzer, $ cat /sys/power/state
<xnox> has for me only "freeze disk" and no "mem" do you have "mem" in your VMs there?
 * xnox ponders if I need to add more ram.
<xnox> cpaelzer, my Ubuntu Trusty Desktop says pm-is-supported --suspend -> exit code 1, as in not supported, and I have no idea why.
<xnox> cpaelzer, launched VM manually via qemu command line, instead of GUI point and click virt-manager and suspend is supported.
<xnox> i guess virt-manager is goofing with me.
<lordievader> Or the xml doesn't contain the necessary config.
<cpaelzer> xnox: wait I'll check
<cpaelzer> xnox: freeze mem disk
<cpaelzer> xnox: so yes I have mem
<cpaelzer> pm-is-supported --suspend is rc=0
<xnox> cpaelzer, yeah, doing a basic VM launch using qemu -enable-kvm -m 2048 -hda foo.qcow2 results in a suspendable VM, whatever xml virt-manager generated for the foo machine does not hibernate =/
<xnox> does not suspend that is.
<zul> coreycb: ping can you rediff your openstack-charm-testing branch i would like to test out the other stuff as well
<coreycb> zul, this? https://code.launchpad.net/~corey.bryant/openstack-charm-testing/add-new-svcs-to-sparse/+merge/313156
<zul> coreycb: thats the one
<coreycb> zul, ok
<jonah> hi can anyone please help. my server seems to have lost internet connection and can no longer be reached. From the server itself I can't ping google.com either...
<ikonia> jonah: what is the error message you get
<jonah> ikonia: everything worked great, then all of a sudden a few hours ago I couldn't reached my website any more. Tried to ssh into the server and couldn't do that either. So I've plugged a keyboard in and accessed directly and the box won't ping google.com or other sites
<jonah> ikonia: no idea what happened!
<pmatulis> jonah, looks like networking is down
<jonah> pmatulis: really weird issue. all firewalls off etc. when i reboot the server internet/sites load up no problem. then when it has been on for maybe 15 seconds it can't connect all of a sudden again
<pmatulis> jonah, i would concentrate on the fact that you cannot ssh to it, if that's still the case
<jonah> pmatulis: I can ssh in initially when it first boots but as I say 15 seconds later the internet on that box drops and then I can't ssh in any more as there is no access in
<pmatulis> jonah, and what kind of server is this? a cloud instance?
<coreycb> beisner, hello can you please promote all of newton-staging to newton-proposed?
<jonah> pmatulis: i think it is due to resolv.conf only having 127.0.0.1 nameserver
<jonah> when i add another nameserver to resolv.conf it wipes it on reboot...
<pmatulis> jonah, right, that's normal
<coreycb> beisner, also neutron is ready to promote from mitaka-proposed and newton-proposed to -updates
<beisner> hi coreycb, newton staging-to-proposed done
<beisner> coreycb, neutron promoted to updates for mitaka and newton
<coreycb> beisner, thanks
<cmh_> is there a wiki somewhere that shows what debian versions ubuntu releases are based on?
<bekks> cmh_: No, since they aren't based on specific debian releases.
<maswan> or, they're all based on the debian "testing" release. not very useful answer either. :)
<maswan> (or is the sync against unstable? thought it was testing)
<cmh_> since 14.04 it's unstable i think
<cmh_> not 100%
<pmatulis> maybe LTS -> testing , non-LTS -> unstable
<tarpman> https://wiki.ubuntu.com/LTS says Starting with the 14.04 LTS development cycle, automatic full package import is performed from Debian unstable
<tarpman> otherwise I think it's as pmatulis said
<maswan> ok, so, thanks. easy then. all ubuntu releases are based on Debian Sid. :)
<bekks> So its unstable for both LTS and non LTS :)
<cmh_> unstable = sid?
<maswan> cmh_: yeah
<pmatulis> bekks, since 14.04 yeah. i didn't know that
<pmatulis> take a look at /etc/debian_version , for xenial (LTS) it shows 'stretch/sid'
<ninjai> is it possible to install php 5.2.anything on ubuntu server 16.04?
<ninjai> 5.6 is already installed but I need 5.2
<ninjai> I keep getting erros like this:  php-apcu : Conflicts: php-xcache but 3.2.0-2+deb.sury.org~xenial+1 is to be installed
<ninjai> how do I tell apt to only install the one from the xenial repo?
<nacc> ninjai: not officially, no
<nacc> ninjai: also, 5.2 is incredibly old and even unsupported upstream, afaict
<ninjai> While I was expecting someone to say that, I'm well aware but it's what the client wants to support their ancient web server
<nacc> so they're fine with (potentially) CVEs...?
<ninjai> just trying to move it off of a physical freebsd box from 2007 or so, to a linux VM...
<nacc> seems like a terrible choice to me, but not mine to make :)
<ninjai> not mine either
<ninjai> I just need to get it on and make it work
<nacc> ninjai: as to your conflicts, you've added ondrej's ppa, which will conflict with the archive. And installing php5, period, on 16.04, will lead to issues with the archive (afaict), as ondrej has to provide his versions of the deps in the ppa
<ninjai> ok
<ninjai> any way I can find out which version of ubuntu server had the option to install php 5.2?
<nacc> ninjai: even 12.04 (the oldest currently supported) has 5.3
<nacc> ninjai: you could look at the publishing history for php5 to go back further
<ninjai> ok thanks
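ninjai's earlier question about making apt prefer the xenial repo over the PPA is usually answered with apt pinning; a sketch, where the origin host is an assumption that should be verified with `apt-cache policy` on the affected packages:

```
# Hypothetical /etc/apt/preferences.d/prefer-archive:
# rank all packages from the PPA host below the Ubuntu archive
# (archive packages default to priority 500)
Package: *
Pin: origin ppa.launchpad.net
Pin-Priority: 400
```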
<adrian_1908> hello, I'm a bit unsure what good practices are for folders & files inside `/var/www/` regarding ownership and permissions. Could someone help clarify this for me? For static websites I can probably just let root or my user own them, but what about sites like WordPress that need to write content?
<adrian_1908> I've found conflicting advice online.
<adrian_1908> (I'm using apache2 btw)
<ikonia> adrian_1908: it's not a hard set of rules, it's an evaluation of the setup
<ikonia> adrian_1908: for example the wordpress referenc eyou use, actually needs very little write access and it's very controlled of only a few directories
<ikonia> adrian_1908: thats very different than "/var/www"
<ikonia> (just for example)
<adrian_1908> ikonia: yeah, wordpress would be very different and it's the part I'm unsure about. I don't want to create some security risk. My thinking would be to set ownership inside the wordpress dir to `myuser:www-data` and give writable folders 775 permissions.
<ikonia> adrian_1908: it's not just that simple
<ikonia> the whole structure needs to be evaluated,
<ikonia> but your approach of varying it depending on needs is correct
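adrian_1908's proposed scheme, demonstrated in /tmp since chown needs root; on a real server the path would be under /var/www and the ownership step would be e.g. `chown -R myuser:www-data wordpress/` before narrowing write access to only the directories WordPress actually writes to:

```shell
# Stand-in for a WordPress tree; only the uploads dir gets group-write
mkdir -p /tmp/wp-demo/wp-content/uploads
chmod 755 /tmp/wp-demo
chmod 775 /tmp/wp-demo/wp-content/uploads   # group (www-data) may write here
stat -c '%a' /tmp/wp-demo/wp-content/uploads  # prints 775
```

As ikonia says, which directories deserve 775 has to come from evaluating the application, not from a blanket rule.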
#ubuntu-server 2017-01-06
<hypermist> i turned off all programs running (that i started myself) and 827mb of ram is going nowhere; can someone help me find the area / command i should use
<cpaelzer> hypermist: maybe just cache?
<hypermist> found out i think its flask with a mem leak cpaelzer
<cpaelzer> hypermist: ok, usually I'd say check /proc/meminfo and then depending what this points to
<cpaelzer> cache: vmtouch
<cpaelzer> anon: smem -tk -c "pid user command swap vss uss pss rss"
<cpaelzer> kernel stuff: slabcache and such
<cpaelzer> but you found it so, I hope there is a fix/workaround
<cpaelzer> if you want the anon per mapping instead of per app smem -m -tk -c "map count pids swap vss uss rss pss avgrss avgpss"
<hypermist> welp cpaelzer okay so its not flask
<hypermist> i dont think
<cpaelzer> hypermist: then lets think together, where is your mem going
<cpaelzer> hypermist: I'd recommend to sync and drop caches and then check where /proc/meminfo is pointing to
<cpaelzer> $ sync; echo 3 > /proc/sys/vm/drop_caches
<cpaelzer> feel free to pastebin the meminfo if you want to think together about it
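The read-only half of cpaelzer's suggestion looks like this (the cache drop itself needs root, as shown in the comment):

```shell
# Dropping caches first would be, as root:
#   sync; echo 3 > /proc/sys/vm/drop_caches
# Then summarize the main /proc/meminfo fields to see where memory sits:
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|Slab):' /proc/meminfo
```

Large `Cached`/`Slab` numbers are reclaimable under pressure, which is why cpaelzer later says such memory is "not really missing".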
<hypermist> when running that command it didn't output anything
<hypermist> is it meant to ?
<cpaelzer> hypermist: the first is clearing dirty page cache, the second is dropping all clean caches
<cpaelzer> hypermist: only then you can get an idea where memory is going to
<cpaelzer> hypermist: in some hardcore cases you even need to force it to swap to shrink some internal structures, but that is more rare
<hypermist> http://prntscr.com/ds4fra
<cpaelzer> hypermist: free is what nobody should care about, cat /proc/meminfo is a better start
<cpaelzer> and you don't need pictures; just do like cat /proc/meminfo | pastebinit
<hypermist> http://pastebin.com/xiGwHSjp
<cpaelzer> hypermist: almost all that is remaining is in cache - hmm the drop should have freed that
<cpaelzer> but never the less it is memory that can be reclaimed in high memory pressure - so not really missing
<cpaelzer> you could go to force it out by cranking up a test to force it to swap and then stop
<cpaelzer> that usually gets the rest out but is unnecessary stress
<hypermist> its a pain so i sort of want it fixed haha
<hypermist> so then if i have to setup a cron i can do so :D
<cpaelzer> I don't see anything really missing
<cpaelzer> it is correct that Linux tries to keep memory as utilized as possible and only gives up under stress
<cpaelzer> all the caching that is done is useful
<hypermist> so it wont screw up if i say run multiple programs
<cpaelzer> If you are afraid that you can't get all the memory in such stress cases test it
<cpaelzer> I'll create a test command - give me a sec
<hypermist> okay haha
<farhad--> why cost variable doesnt change in my code: http://paste.ubuntu.com/23751145/
<farhad--> i have always problem with this. and i know that i dont know something. some link that get me out of this problem forever.
<farhad--> oh, sorry.
<farhad> sorry to noise here.
<cpaelzer> ?
<cpaelzer> hypermist: stress-ng --vm 1 --vm-bytes 256M --vm-keep --vm-populate --timeout 10s --metrics-brief
<hypermist> do i wanna run that with sudo ?
<cpaelzer> hypermist: you can raise the amount of memory that touches and see how high you can go
<cpaelzer> hypermist: no sudo needed
<cpaelzer> hypermist: at some level this will cause swapping
<hypermist> do i have to install something ?
<cpaelzer> stress-ng is a package
<hypermist> okay installing it
<cpaelzer> hypermist: I'd recommend running dstat -tvin in another terminal
<cpaelzer> hypermist: once you see swapping to occur you know you are around the spot you can go
<cpaelzer> if you do WAY MORE the oom killer might kill something
<cpaelzer> farhad: you sure one of the switch statements is hit?
<cpaelzer> farhad: add log debuggin there to make sure
<hypermist> well i changed vm-bytes to 1024M
<hypermist> and no swapping is occuring
<cpaelzer> hypermist: that means you can go 1G without
<cpaelzer> hypermist: your meminfo suggested you can go somewhere around 1.5-1.7 to hit it
<cpaelzer> and that will then also be what your programs can consume
<farhad> cpaelzer: thank you for the feedback. i got mistake to say my problem here. i solved it in proper channel.
<cpaelzer> ok farhad
<hypermist> cpaelzer, Value 1572864000 is out of range for vm-bytes, allowed: 4096 .. 1073741824
<hypermist>  ;s
<hypermist> when i try do 1500M
<cpaelzer> hypermist: well 1G per thread seems to be the limit then, but you can easily do it still - the number behind --vm is the number how much of those
<cpaelzer> hypermist: so I'd think --vm 2 --vm-bytes 750M should do as well
<hypermist> okay
<hypermist> nope cpaelzer still hasnt made the cache drop
<hypermist> cpaelzer, is it bad that my cache doesnt drop or does it not matter
<BlackDex> hello there, is it possible to set every value you can add with sysctl also in the kernel boot params? for instance if i want to set the kernel.pid_max to a different value?
<cpaelzer> BlackDex: some are exposed by the kernel as kernel argruments, but I don't think all of them - isn't sysctl.conf early enough for you?
<BlackDex> cpaelzer: I'm using maas and juju. Some juju charms allow sysctl changes, but that only happens once the charm is started. I prefer it to be done before that. And currently i have no option of setting this with juju or maas before
<cpaelzer> BlackDex: you can deploy a custom syctl via user-data by Maas which will be handed to cloud-init when instantiating, so after first startup it will have the new sysctl.conf and will read and handle it on every boot
<cpaelzer> BlackDex: would that work for you?
<BlackDex> that means that i have to change/add curtin stuff?
<cpaelzer> BlackDex: If I'm not mistaken that means you could on your maas add a preseed, so everything installed by it will get this added
<cpaelzer> BlackDex: https://maas.ubuntu.com/docs/development/preseeds.html
<BlackDex> hmm oke
<cpaelzer> That just is what came to my mind, I'm not objecting if anybody else comes up with a nice solution
<BlackDex> It's not a bad idea :). I'm just a bit against changing the preseeds delivered by maas
<cpaelzer> BlackDex: I didn't mean to change, but to add custom bits on top
<cpaelzer> BlackDex: otherwise it is hard to maintain correctly IMO
<cpaelzer> but I never did - maybe that is hard/impossible/featurerequest to "add on top"
<BlackDex> or something like, {tag}
<BlackDex> that would be a nice thing
<cpaelzer> yes
<BlackDex> so i can create a special tag, like for kernel-opts that can be matched with a preeseed
<cpaelzer> I like the idea
<BlackDex> well, ill go add a feature request :)
<BlackDex> done :) https://bugs.launchpad.net/maas/+bug/1654515
<ubottu> Launchpad bug 1654515 in MAAS "Feature Request: Custom post-deploy (cloud-init or preseed) scripts per node linked to tags" [Undecided,New]
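cpaelzer's user-data suggestion could look roughly like this cloud-config sketch (the pid_max value is hypothetical, taken only as BlackDex's example tunable):

```
#cloud-config
# Hypothetical user-data: persist the sysctl and apply it on first boot,
# before any juju charm runs
write_files:
  - path: /etc/sysctl.d/60-pid-max.conf
    content: |
      kernel.pid_max = 4194304
runcmd:
  - [ sysctl, --system ]
```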
<hypermist> cpaelzer, does it matter if i dont clear cache ?
<cpaelzer> hypermist: ?
<hypermist> cpaelzer, well after running the command you told me to it never dropped its cache
<cpaelzer> hypermist: but the command succeded?
<hypermist> yes
<cpaelzer> hypermist: can you sudo run this http://paste.ubuntu.com/23751524/
<cpaelzer> hypermist: and paste the whole output it created into a pastebin?
<hypermist> okay
<hypermist> cpaelzer, http://paste.ubuntu.com/23751534/
<cpaelzer> hypermist: you have all you wanted 1921780 of 2038636 free, that is one of the best ratios I've ever seen
<cpaelzer> cache down to 25 MB
<cpaelzer> which is ok
<hypermist> oh okay
<hypermist> oh i see i didn't notice :D
<cpaelzer> to free the last 100M you really have to just shut down :-P
<hypermist> :P
<cpaelzer> I've seen systems wasting more memory just on device driver structures
<cpaelzer> e.g. if you plug a few thousand disks on SAN and init them all
<hypermist> haha
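For reference, the usual way to do what the pastebinned script presumably did (the paste itself has since expired, so this is an assumption): flush dirty pages, drop the clean caches, then compare free memory.

```shell
# Flush dirty pages to disk first, so dropping caches loses nothing.
sync
# 3 = drop the page cache plus dentries and inodes (needs root).
echo 3 | sudo tee /proc/sys/vm/drop_caches
# Compare the free/cache columns before and after.
free -m
```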
<BlackDex> I have a strange issue with LXD and networking. I have several bare-metal servers which are all accessible with an MTU of 9000; i can ping from every bare-metal host to every bare-metal host with a packet size of 8972 (9000).
<BlackDex> I have 13 LXD containers running with some services inside them
<BlackDex> the interfaces are bridged to the physical interface of the host
<BlackDex> within some LXD containers i can ping with 8972 to the host and to other hosts.
<BlackDex> but in others i can't ping with a size larger than 1472 :(
<BlackDex> every lxd interface is located in the same bridge on the same physical interface
<BlackDex> i also can ping from lxd to lxd with a large mtu
<BlackDex> so i'm a bit puzzled
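The test BlackDex is running can be reproduced like this; 8972 comes from the 9000-byte MTU minus 20 bytes of IPv4 header and 8 bytes of ICMP header. The hostname is a placeholder.

```shell
# -M do sets the Don't Fragment bit, so any hop with a smaller MTU
# rejects the packet instead of silently fragmenting it.
# -s sets the ICMP payload size: 9000 - 20 (IP) - 8 (ICMP) = 8972.
ping -c 3 -M do -s 8972 other-host
```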
<ikonia> that sounds interesting and unusual
<ikonia> are the containers all running on same physical host
<BlackDex> yes
<ikonia> are all the bridges mapped to the same physical network device ?
<BlackDex> and from host to host no problem, lxd to same host no problem. lxd to lxd no problem, lxd to other host bad
<BlackDex> ikonia: Yes, brctl shows all on the same bridge
<BlackDex> if i tcpdump on the bridge allocated to the lxd i can see the packets
<ikonia> interesting, the symptoms sound like classic MTU problems, but as you say, you get different responses from different containers that are all using the same physical device, so the mtu under the hood is the same
<BlackDex> also if i tcpdump on the bond or br-bond i see the packets
<BlackDex> yea, and i see the same packet lengths according to tcpdump from the working and non-working lxd's
<lordievader> What is between lxd to other host? Does the full chain support jumbo frames?
<BlackDex> lordievader: Yes, as it does for other hosts and most lxd's but not some
<lordievader> Ah, right.
<BlackDex> even restarted the host, restarted the lxd containers them selfs, no change
<ikonia> BlackDex: out of interest if you do a traceroute for a "good" host and one for a "bad" host, do you see it use the same virtual interfaces ?
<BlackDex> yes it does
<ikonia> very odd behaviour
<BlackDex> wait a second
<ikonia> you're not looking in the wrong place are you, eg: something silly like your host running low on ram (extreme example I know) and it's forcing virtual devices to have "loss"
<BlackDex> no it does :)
<ikonia> as a test if you set the mtu small, say 256 (nice round number) does everything work
<BlackDex> ikonia: for the ping? or the network as a whole?
<BlackDex> because ping works nice on 1472
<ikonia> BlackDex: on the interface
<BlackDex> even flood ping
<BlackDex> i tried that, and that seems to be working normally
<BlackDex> but that isn't the solution i think ;)
<ikonia> no no, I don't think thats a fix, but if that works, it does look like it's matching the symptoms of mtu
<BlackDex> yea it does. Because everything seems to work nice with a lower mtu size within the lxd container it self
<ikonia> what's the MTU on the physical card
<BlackDex> 9000
<ikonia> BlackDex: does any of your internal comms between LXD use the physical NIC
<ikonia> BlackDex: do you see where I'm going.....
<BlackDex> comms?
<ikonia> BlackDex: container 1 -> container 2 yes you use bridged virtual interface, but depending on the IP addressing that may actually have to go via the physical interface
<BlackDex> no, all interfaces are on the same subnet
<BlackDex> all the interfaces of that same subnet are on the same physical interface/bridge
<ikonia> so the odds of it flooding the physical nic as a pass through is slim
<BlackDex> yea, and that doesn't explain why container 1 can ping and container 2 can't
<ikonia> good point
<ikonia> BlackDex: are these live boxes, or can you play around safely
<BlackDex> kinda live, depends on what to do :)
<ikonia> shame,
<ikonia> so my gut is telling me somehow this is capacity
<ikonia> I was wondering if you could shutdown 2 - 3 of the "working" hosts and see if one of the "broken" hosts then starts working
<ikonia> /win 4
<ikonia> oops
<BlackDex> :p
<BlackDex> um
<BlackDex> i think i have just 2 hosts which i can shutdown which both work :)
<BlackDex> so i can check for the broken one
<ikonia> BlackDex: worth a shot if it doesn't cause you too much pain
<ikonia> may help prove if it's capacity or not
<ikonia> my gut is saying the symptoms look capacity, but it doesn't loook like it from the config you're sharing
<BlackDex> nope, that doesn't seem to be the case
<BlackDex> :)
<BlackDex> :(
<BlackDex> i even stop/started the not working
<BlackDex> and after starting the already-working ones, they both still work
<BlackDex> also, no messages in dmesg, syslog or what so ever :(
<lordievader> Eliminating the basics, the mtu setting is applied correctly? Could you show the output of 'ip link show|grep mtu'?
<lordievader> On the host ;)
<BlackDex> lordievader: Those are all set correctly, as some containers do work with large packets, and a large ping from the host also works :)
<BlackDex> lordievader: http://pastebin.com/pqqx5D9F
<BlackDex> note that br-ens255f0 is indeed 1500, so that is correct
<lordievader> Still quite a few nics with mtu = 1500.
<BlackDex> ;)
<ikonia> BlackDex: this is most odd
<BlackDex> indeed
<BlackDex> i'm currently rechecking my switches
<BlackDex> hope i can find something there
<ikonia> I enjoy something a bit different, but this is not offering much info
<ikonia> BlackDex: there is a good idea, are they all going into the same switch ?
<ikonia> actually - ignore that
<ikonia> I've just realised how stupid that question is
<BlackDex> yes, but i figured that there is an mlag/lacp link
<BlackDex> maybe there is something strange over there
<ikonia> but it's not using the physical card at all
<ikonia> so even if there was a switch problem on the wire, it's not hitting the interface, you've got problems between container 1 and container 2
<BlackDex> there are no problems between container and container
<BlackDex> only from container x to host
<ikonia> ahhh, then I misunderstood that then
<BlackDex> and LACP could explain the error
<ikonia> yes, possibly, I don't think so, but it's worth ruling out
<BlackDex> because what if the containers that DO work are going via switch Y, and the ones that don't via switch X, and the host i'm pinging wants to go via Y
<ikonia> when you say container X to host do you mean the host they are on or a host generic on the network
<BlackDex> host in generic
<ikonia> BlackDex: yeah, worth checking then
<BlackDex> other bare-metal system
<ikonia> BlackDex: I didn't quite grasp where your comms were being dropped here
<BlackDex> Eureka!
<ikonia> got something ?
<BlackDex> its working now
<ikonia> what did you do ?
<BlackDex> the switch is a cumulus switch
<BlackDex> and i saw that the peerlink between the switches was just 1500!
<BlackDex> and probably the traffic went from switch x to y via the peerlink
<BlackDex> a 40GB link on 1500 mtu :p
<BlackDex> so i now changed them all to 9216 (because of bridging etc.. also according to cumulus docs)
<BlackDex> now the only interface on the switch with 1500 is the management link
<ikonia> wow - so you're flooding it basically
<BlackDex> well the traffic went like this...       lxd 9000 > lxd-host 9000 > switch1-port 9000 > peerlink 1500 > switch2-port 9000 > other-host 9000
<BlackDex> and because of LACP the other LXD went like this
<lordievader> Hahaha, yeah. That doesn't get you a mtu of 9000 ;)
<BlackDex> lxd 9000 > lxd-host 9000 > switch1-port#2 9000 > switch1-port#3 9000 > other-host 9000
<ikonia> winner
<ikonia> great find
<BlackDex> thank you both for being a sounding board for me ;)
<BlackDex> It helped me to clear everything up
<ikonia> always nice to see something a bit different / interesting
<BlackDex> indeed. Well i learned something again today :)
<BlackDex> which can be useful for other stuff in the daily work
<DammitJim> guys, I am planning upgrading Ubuntu from 14.04 to 16.04
<DammitJim> I tested this on a server that has mysql-server 5.6 installed
<DammitJim> for some reason, the ubuntu upgrade removed mysql-client 5.6 and mysql-server wasn't working.
<DammitJim> I had to purge mysql-server and then install version 5.7
<DammitJim> is this a problem because I didn't originally install just: apt-get install mysql-server without specifying the version?
<BlackDex> mysql-server is normally linked to the stable version
<BlackDex> or at least stable according to Canonical
<BlackDex> how did you install it before/
<BlackDex> ?
<DammitJim> so, it sounds like the latest stable version of mysql-server on 14.04 is 5.6
<DammitJim> apt-get install mysql-server-5.6
<BlackDex> ah, well that could be an issue
<DammitJim> so, what are my alternatives at this point?
<BlackDex> and during the install you are asked to remove old packages
<DammitJim> to upgrade ubuntu to 16.04
<compdoc> it doesnt upgrade the packages too?
<BlackDex> well, i think the best is to check what apt-get install mysql-server does currently
<BlackDex> use apt-get -n for a dry run
<BlackDex> um
<BlackDex> i mean `apt-get --dry-run`
<BlackDex> so that it doesn't do anything at all
<BlackDex> `apt-get --dry-run install mysql-server`
<BlackDex> with any luck, it is linked to the current version
<DammitJim> let me see
<BlackDex> compdoc: if a package uses a specific version number, and that package isn't available anymore in 16.04, it won't upgrade, it will leave it there
<BlackDex> but if you have installed the mysql-client (without version) it could give some issues maybe, shouldn't but could
<DammitJim> mysql-server : Depends: mysql-server-5.5 but it is not going to be installed
<DammitJim> what the heck?
<BlackDex> hehe
<BlackDex> because you have 5.6 installed
<DammitJim> why is it talking about 5.5 when I Have 5.6 installed
<DammitJim> ah, crap!
<DammitJim> so, there are packages for 5.5 and 5.6
<BlackDex> apt-cache policy mysql-server
<DammitJim> and we just happen to use 5.6
<DammitJim> BlackDex, what are we looking for? I don't want to paste all the output of that command
<BlackDex> what version does it show there
<BlackDex> apt-cache showpkg mysql-server | grep mysql-server-5
<DammitJim> 5.5
<BlackDex> 5.5 is what my 14.04 server tells also
<BlackDex> well, what should happen is the following
<DammitJim> so, am I screwed, then?
<BlackDex> if you upgrade to 16.04
<BlackDex> and do not remove any package
<BlackDex> or at least check what it wants to uninstall
<DammitJim> let me check the same command on 16.04
<BlackDex> a do-release-upgrade doesn't remove packages unless you tell it to
<DammitJim> that returns 5.7
<BlackDex> so it will leave your 5.6 installed
<DammitJim> oh really?
<BlackDex> what you can do is not purge the package
<BlackDex> but just uninstall it
<BlackDex> purge will remove the config etc..
<BlackDex> so `apt-get remove mysql-server-5.6`
<BlackDex> after that, do an `apt-get install mysql-server`
<BlackDex> and you should be fine
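The sequence BlackDex suggests, sketched end to end. Check the dry-run output before committing to anything; the package names are the ones from the discussion.

```shell
# Preview what the metapackage would do without changing anything.
apt-get --dry-run install mysql-server
# Confirm which versioned package the metapackage currently points at.
apt-cache policy mysql-server
# 'remove' keeps the /etc/mysql config files; 'purge' would delete them.
sudo apt-get remove mysql-server-5.6
# Install the release default (5.7 on 16.04); it picks up the kept config.
sudo apt-get install mysql-server
```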
<DammitJim> interesting concept
<BlackDex> done that with packages other than mysql and no probs at all
<DammitJim> http://askubuntu.com/questions/760724/16-04-upgrade-broke-mysql-server
<DammitJim> that's what worried me
<DammitJim> I know, I should take those forums with a grain of salt
<BlackDex> well, that can happen, but the answer is correct
<DammitJim> what answer? The one where he says to remove all .cnf files?
<BlackDex> the most important for mysql is the config and the database
<BlackDex> as long as you have both, there will be nothing to worry about
<DammitJim> yeah, the database was still there thank goodness
<BlackDex> you can even install mariadb instead of mysql :)
<BlackDex> maybe a good idea if possible, create a backup of the database files
<BlackDex> if you want to do a copy/paste be sure to shut down the mysql server first. Else you need to do mysqldump :)
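The two backup options just mentioned, sketched; paths and credentials are placeholders.

```shell
# Option 1: hot logical backup while mysqld is running.
mysqldump --all-databases -u root -p > /backup/all-databases.sql

# Option 2: cold file-level copy; the server must be stopped first,
# otherwise the copied datadir can be inconsistent.
sudo service mysql stop
sudo cp -a /var/lib/mysql /backup/mysql-datadir
sudo service mysql start
```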
<compdoc> my 16.04 upgraded mine to 5.7
<BlackDex> compdoc: probably because you had mysql-server installed :)
<BlackDex> and not mysql-server-5.6
<compdoc> yup. and it wouldnt stay running because of some conf changes. but all fine now
<BlackDex> so it just upgraded mysql-server :)
<DammitJim> going to mariadb is a bigger conversation with the rest of the teams
<DammitJim> compdoc, so, you did have some issues?
<compdoc> it was strange. mysql would die in the night. They changed the layout of the /etc/mysql folder and didnt want to use my.cnf, so i just had to make a small change, but it took me a few days to spot it
<rbasak> compdoc: we do use my.cnf, but we have to share the path with MariaDB. Hence the changes.
<rbasak> But 5.7 also obsoleted some old configuration directives. That's the biggest cause of pain on upgrade AFAICT. We have some automated changes on upgrade for the most common things, but we can't cover everything unfortunately.
<compdoc> its been solid since
<rbasak> DammitJim: I think you should be able fix up your scenario after upgrade to 16.04. Take a backup first though just in case.
<rbasak> DammitJim: for MySQL vs. MariaDB, keep in mind that MySQL in Ubuntu is in main, and MariaDB is in universe. Both get good security support. MySQL security updates in Ubuntu come from Canonical's security team. MariaDB security updates in Ubuntu come from Otto, the MariaDB maintainer in Debian and Ubuntu.
<theGoat> i am trying to turn up a syslog-ng listener with TLS on ubuntu with syslog-ng 3.5. everything appears to be compiled correctly and i am getting no errors when running syslog-ng. could there be something blocking it from setting up the listener, and would there be logs somewhere that would tell me why?
#ubuntu-server 2017-01-07
<rc3k2s0> hello, I'm trying to understand the /proc/ exe folder. Does it only appear when a process is running? I don't seem to be able to find it under 4.8.0-22
<Rar9> hi, need help adding pagespeed to nginx. got problems with nginx: [emerg] dlopen() "/etc/nginx/nginx/modules/ngx_http_passenger_module.so" failed (/etc/nginx/nginx/modules//etc/nginx/modules.conf.d/phusion-passenger.conf:1
<tomreyn> Rar9: apparently the configuration file path the phusion passenger module uses to find its configuration is incorrect
<tomreyn> i doubt you have it stored at /etc/nginx/nginx/modules//etc/nginx/modules.conf.d/phusion-passenger.conf - more likely at /etc/nginx/modules.conf.d/phusion-passenger.conf ?
<tomreyn> if you built nginx yourself, make sure the configuration file base path was set correctly as build option.
<tomreyn> (won't guide you on this, that's OT, maybe ask in #nginx)
<Rar9> tomreyn i`m only trying to replace the existing older nginx that comes with plesk and add pagespeed - https://talk.plesk.com/threads/how-to-compile-nginx-with-additional-modules-pagespeed-cache_purge-headers-more-and-others.340640/
<Rar9> but this phusion-passenger is causing trouble
<Rar9> is there anything under step5 that might be wrong?
<tomreyn> Rar9: i dont do Plesk nor nginx compilation support
<Draggin> Hi there!
<Draggin> Could anyone point me to a comprehensive guide on DHCP on Ubuntu Server? I've been reading the manpages, and overview instructions on getting one up and running (and, for all intents and purposes, my server is running, but doesn't seem to be handing out addresses...)
<jakst> I'm trying to mount an exfat formatted usb-disk in a headless ubuntu server 14.04, but haven't had much success. Anybody care to assist? It shows up in dmesg and lsusb, but not in fdisk -l or lsblk
<patdk-lap> jakst, if it doesn't show up in those, then it cannot read the drive
<patdk-lap> what does dmesg say?
<jakst> It works when I mount it in ubuntu 15.10, but not on my 14.04 machine
<patdk-lap> I didn't ask about that
<jakst> https://www.irccloud.com/pastebin/OBrXuiHx/
<jakst> patdk-lap: This is dmesg
<patdk-lap> that says it found a new usb device
<patdk-lap> it doesn't say it found a drive
<jakst> Fair enough
<patdk-lap> no more dmesg info?
<jakst> Not from plugging the drive in
<jakst> I also noticed the usb folder doesn't exist in /lib/modules/3.16.0-77-generic/kernel/drivers. Could that be it?
<patdk-lap> is it in a usb3 port?
<jakst> No, usb2
<patdk-lap> you're missing this
<patdk-lap> usb-storage 1-1.2:1.0: USB Mass Storage device detected
<patdk-lap> it could be
<patdk-lap> you must have installed a strange kernel to not have it
<jakst> patdk-lap: Hmm, I took the official iso from ubuntu.org
<jakst> Haven't tinkered with the kernel
<patdk-lap> dpkg -l | grep ^ii..linux
<patdk-lap> what does that show?
<jakst> https://www.irccloud.com/pastebin/NvddkveC/
<patdk-lap> ya, probably doesn't have any drivers, cause it got loaded as a vm image
<jakst> Okay, so missing drivers is probably my problem then. Seems reasonable. How would I go about fixing that?
<patdk-lap> apt-get install linux-generic-lts-utopic
<patdk-lap> I think that is all that is needed
<patdk-lap> you might also need, apt-get install linux-firmware
<jakst> Hey that worked. Thanks a bunch!
<jakst> Quick and easy with some guidance :)
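The diagnosis and fix above, collected in one place. The `linux-generic-lts-utopic` package name is what patdk-lap suggested for this 14.04 host on the 3.16 kernel; on other releases the flavour name differs.

```shell
# Is the usb-storage module available for the running kernel?
modinfo usb-storage || echo "usb-storage not found for $(uname -r)"
# Which kernel packages are installed? A VM image may ship a stripped flavour.
dpkg -l | grep '^ii..linux'
# Pull in the full generic kernel (and firmware) so USB mass storage works.
sudo apt-get install linux-generic-lts-utopic linux-firmware
```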
<fishcooker> with gzip the compression level affects speed: -3 is always faster than the default -6. but does bzip2 behave the same? because the bzip2 manual says the --fast and --best aliases are primarily for GNU gzip compatibility; in particular, --fast doesn't make things significantly faster.
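The bzip2 manual's point is that -1..-9 select the block size (100 kB to 900 kB), which mainly changes memory use and compression ratio, whereas gzip's levels trade speed for ratio directly. The question is easy to settle empirically on your own data:

```shell
# Generate a throwaway sample file; real data will compress differently.
dd if=/dev/urandom of=/tmp/sample bs=1M count=32
# gzip: -3 should be measurably faster than -6.
time gzip -3 -c /tmp/sample > /dev/null
time gzip -6 -c /tmp/sample > /dev/null
# bzip2: -1 vs -9 changes the block size, so expect similar runtimes.
time bzip2 -1 -c /tmp/sample > /dev/null
time bzip2 -9 -c /tmp/sample > /dev/null
```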
#ubuntu-server 2017-01-08
<guillaume___> hi, i'm trying to create and seed a torrent on my ubuntu server. I manage to create the torrent file, then i manage to add it to the client in order to download it and seed it from the server (that's the logic?), but the download doesn't start and the status is idle. can someone help me please?
<codepython777> i have a new domain and i want to be able to send emails using that domain. Do I need to run my own smtp server? or is there a shorter way to do this?
<cncr04s> set it up with gmail
<cncr04s> think it costs $
<codepython777> cncr04s: how ?
<codepython777> cncr04s: My email traffic is very low <90 emails/day
<cncr04s> https://www.google.com/gmail/about/for-work/
<codepython777> cncr04s: Authentication failed. Please check your username/password and Less Secure Apps access for user@mydomain.com
<codepython777> cncr04s: that takes money
<cncr04s> anything will
<cncr04s> unless you host it on a server at home
<codepython777> cncr04s: is there an easy way to host it ? I've an ubuntu box at home, which has a ddclient setup
<codepython777> so i can access it using name
<cncr04s> postfix and dovecot
<codepython777> cncr04s: is there a pre-configured vm for something like this somewhere i could use?
<cncr04s> i don't know that
<cncr04s> https://www.digitalocean.com/community/tutorials/how-to-set-up-a-postfix-e-mail-server-with-dovecot
<krt> hey im having trouble with getting a favicon on the site krtdev.com i have the favicon.ico in my html folder im using nginx on ubuntu
<soahccc> Hey guys, I have a 3rd party process that is downloading files. Unfortunately it creates folders without giving group the permission to list directories. Is there any way to enforce that from outside aside a cronjob that chmod's over the directory?
<soahccc> I tried the thing with sticky bit / default permission but apparently the process deliberately sets the permissions this way
<soahccc> Isn't the ACL default value supposed to supersede the "standard" permissions? https://gist.github.com/2called-chaos/310768aa10929fed0f1f85405f3157b8 Otherwise what would be the point of per-group defaults?
<patdk-lap> soahccc, did you set umask before you started the program?
<soahccc> patdk-lap: I didn't modify umask but when I create a directory under that user manually it has the correct permissions so I figured it's the process?
<soahccc> patdk-lap: and the umask is 0002 so group should get 7 for directories which it does when I mkdir manually. Or do i have a misunderstanding here?
<soahccc> patdk-lap: Enabling ACLs on the filesystem is a start :D Now it tells me that the effective permission is r--
<soahccc> So I basically got it working except for the fact that newly created folders get a mask of r-- but the default:mask of the parent is rwx
<soahccc> Gosh this makes no sense, I just add a cron job, ACL problem solved =)
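For the record, the default-ACL setup being discussed looks like this, and the effective r-- soahccc saw is the ACL *mask* at work: the mode the downloading process passes to chmod() becomes the mask on the new directory, and every group entry is capped by it. The path and group name below are placeholders.

```shell
# Default ACL: new entries under /srv/downloads inherit g:staff:rwx.
setfacl -d -m g:staff:rwx /srv/downloads
# getfacl shows a 'mask::' line - effective perms = entry & mask, so a
# chmod to e.g. 0744 by the downloader caps the effective group perms at r--.
getfacl /srv/downloads
```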
<krt_> Can someone explain where to add a favicon.ico using nginx?
<tomreyn> soahccc: you could use incron instead. or do the right thing and file a bug report.
<rizonz> will the php version from now on be in /etc/php instead of /etc/phpX ?
<patdk-lap> I thought it was in /etc/php/x.y
<rizonz> yes
<rizonz> but 5 was like /etc/phpX
<bekks> rizonz: Depending on the php devs, it may change again.
<rizonz> bekks: I think the Ubuntu packagers ;)
<patdk-lap> debian
<rizonz> are they still not packaging themself ?
<patdk-lap>  why would they?
<rizonz> because you want to be beter then Debian ?
<patdk-lap> ubuntu is downstream of debian
<rizonz> *better
<rizonz> but not fully
<patdk-lap> well, lets see, upstart was a ubuntu thing, but they dropped that since debian went systemd
<kaffien> how do you go about setting up a l2tp client connection in ubuntu?
<kaffien> i installed strongswan but i'm not seeing any other config options for vpn other than openvpn and import a file.
<patdk-lap> heh?
<patdk-lap> what are you talking about seeing config options
<kaffien> When using network manager to add connection the only options available for VPN  are openvpn and import connection settings.
<marchelly> Hi, I'm doing do-release-upgrade from 12.04 to 14.04 like aptitude update; aptitude upgrade and do-release-upgrade. Everything went fine, but after reboot I'm not able to ssh to the server; it's pingable and only the FTP port is active, and I can connect with an FTP user. I can use rescue mode from the hosting and after chrooting into my working env, lsb_release -a shows 14.04. So now I know that the server boots and even vsftpd starts, but nothing else. How should I debug this? What actions should I take?
<cncr04s> it should have started a ssh on a different port just for this case
<cncr04s> it would have told you what the port was, at least it did for me
<SuperLag> Is there a simple way to only keep 2 kernels on a box?
<marchelly> cncr04s, nmap tells me that only port 21 is open and I'm able to ftp it with a normal user
<cncr04s> then you need physical access
<marchelly> cncr04s, Looks like i'm not able to get it, just rescue mode and chroot
<cncr04s> I don't know then. never do a release upgrade without having access to the physical console.
<cncr04s> or virtual console in the case of vm's
<marchelly> cncr04s, but as the server reboots and becomes pingable and even vsftpd is started, it could be some init script problem. My working distro is not ubuntu so I don't even know how to debug this
<cncr04s> if you don't have some sort of shell or way to manipulate the filesystem, you're pretty much boned.
<marchelly> cncr04s, just from rescue mode and chroot. actually I have access to the file system right now using that method
<cncr04s> well mount the filesystem and look at the logs in /var/log anc check why ssh isnt running
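From the hoster's rescue environment, the inspection cncr04s suggests looks roughly like this. The device name is a placeholder, and init commands won't fully work inside a chroot, so this is only about reading state, not starting services.

```shell
# Mount the installed system and enter it.
mount /dev/sda1 /mnt
chroot /mnt /bin/bash
# 14.04 uses upstart: per-job logs live here when syslog never came up.
ls -la /var/log/upstart/
tail -n 50 /var/log/syslog /var/log/dmesg
# Sanity-check that the ssh job and binary are still present.
ls /etc/init/ssh.conf /usr/sbin/sshd
```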
<marchelly> cncr04s, nothing there, the only files updated in the last boot were under the /var/log/upstart/ directory, but nothing that I can determine as useful there
<cncr04s> syslog would usually say something
<cncr04s> make your crontab run ssh too, might yield something
<marchelly> cncr04s, that's a nice idea but I'm not sure cron is running there :) I can't even understand how vsftpd starts.
<marchelly> cncr04s, there are no fresh syslog logs. even dmesg is old
<marchelly> only upstart folder contains fresh files
<cncr04s> then its broken
<cncr04s> hope you got backups
<marchelly> sure, I have backups. I'm actually about to start a fresh install and then restore from backups
#ubuntu-server 2018-01-01
<Viri> hi
<Viri> ubuntu is garbage
<blackflow> networking looks broken in Artful. Services depending on network.target (WantedBy) fail to start because they can't bind to configured IP address.
<patdk-lap> ya, been having all kinds of issues with networking in artful
<Ussat> hmm... thought it was just me in my VM...
<blackflow> patdk-lap: they should be configured for network-online.target, as I was just told in #systemd. True enough, it now works.
<blackflow> and btw, it's not WantedBy, it's After. My mistake.
<blackflow> (I wrote it wrong here, it's After in the unit files)
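The fix blackflow describes is an ordering change in the unit file; a minimal sketch as a drop-in (the drop-in path is a hypothetical name) might be:

```ini
# /etc/systemd/system/myservice.service.d/network.conf
[Unit]
# network.target only means the network stack has started; waiting for
# network-online.target ensures configured addresses are actually up
# before the service tries to bind to them.
After=network-online.target
Wants=network-online.target
```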
#ubuntu-server 2018-01-02
<teward> this'll sound like an idiotic question but who do I prod with a question about the Ubuntu LXD images which sit on https://cloud-images.ubuntu.com/  ?
<oerheks> teward, one of the team members? https://launchpad.net/cloud-images
<cpaelzer> good morning
<lordievader> Good morning
<cpaelzer> hi lordievader
<cpaelzer> happy new year
<lordievader> Hey cpaelzer
<lordievader> Happy new year to you too
<ahasenack> good morning
<cpaelzer> jamespage: dpdk 17.11 would be ready to be synced
<cpaelzer> jamespage: should I upload an OVS 2.8 with a fix to work with it until you have 2.9 available?
<cpaelzer> jamespage: or is there anything ready to be uploaded together
<cpaelzer> I mean 2.9 isn't out yet AFAIK but if you have some pre-version ready anyway let me know
<cpaelzer> rbasak: would you be able to look at bug 1738412 ?
<ubottu> bug 1738412 in squid3 (Ubuntu) "Init script fails test on reload/restart because of faulty regex" [Undecided,New] https://launchpad.net/bugs/1738412
<cpaelzer> rbasak: this is closely tied to the squid[3] name changes where IIRC you have some context of
<jbicha> any one from Server want to review LP: #1740160 ?
<ubottu> Launchpad bug 1740160 in tickcount (Ubuntu) "Please consider removing tickcount from Ubuntu" [Undecided,New] https://launchpad.net/bugs/1740160
<rbasak> Reviewed
<rbasak> cpaelzer: it seems reasonable. I wonder if https://anonscm.debian.org/cgit/pkg-squid/pkg-squid.git/commit/?id=6ac65f75a971a4a is needed too?
<Odd_Bloke> teward: I'm just catching up on email, so won't have got to it yet, but a cloud-images bug would be the best place to start.
<rbasak> cpaelzer: it's hardly "a very serious issue" though. Mind if I address that in the bug?
<cpaelzer> please go for the bug update
<rbasak> I don't like it when it looks like we're not addressing "very serious issue"s urgently :)
<cpaelzer> I was scared that the older version could even have logs without the ":" and so the fix might be bad
<cpaelzer> by my lack of squids messages
<cpaelzer> to compare them to the rules
<cpaelzer> rbasak: I wondered about the pid changes as well
<cpaelzer> rbasak: they are not obviously tied to but listed at the same bug number
<cpaelzer> rbasak: as I read the debian bug it is not related
<cpaelzer> it just came up while fixing the bug
<cpaelzer> rbasak: see the tail of https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=800341#15
<ubottu> Debian bug 800341 in squid3 "squid3: systemctl reports squid is running when there is a bungled squid.conf and it has exited." [Normal,Fixed]
<rbasak> cpaelzer: thanks
 * ahasenack -> lunch
<cpaelzer> hau rein
<rbasak> nacc: https://paste.ubuntu.com/26307828/ and https://paste.ubuntu.com/26307830/ are my wips.
<rbasak> The second is untested I think.
<rbasak> But the point was to make debugging easier.
<nacc> rbasak: ack
<shadoxx> How do I set up a machine to have its hostname changed on first boot? Right now I have a provisioning script (salt-bootstrap) that restarts the networking services near the end of its run.
<shadoxx> Before the restart, i'm changing the machine's hostname. Network restarts and the machine loses its ip
<mason> shadoxx: https://help.ubuntu.com/community/CloudInit maybe
<shadoxx> Was hoping to avoid that lol
<shadoxx> If that's the best option, I'll try and hack it in
<mason> I suspect it's the standard option, and as such has value.
<blackflow> mason: o/
<mason> blackflow: hey there
<blackflow> ltns :)
<mason> heh
<mason> I'm actually ducking out for dinner presently, but I'll be back on before long.
<blackflow> mason: np, just wanted to say hi
 * mason bows low. Hello. :)
<shadoxx> mason: i figured it out. the hostname is updated, but /etc/hosts isn't updated, so systemd is having issues
<sarnold> heh you're lucky to find out about it so quick then :) systemd's not the only thing that'd find that situation confusing
#ubuntu-server 2018-01-03
<shadoxx> sarnold: it's actually three separate steps to get it to work without cloudinit - update hostname normally, reflect changes in /etc/hosts, run 'service networking restart'
<shadoxx> works like a charm now
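The three steps shadoxx lists, sketched as a script. NEWNAME is a placeholder, and on systemd hosts `hostnamectl set-hostname` covers the first step.

```shell
NEWNAME=web01
# 1. Set the hostname, persistently and for the running kernel.
echo "$NEWNAME" > /etc/hostname
hostname "$NEWNAME"
# 2. Keep /etc/hosts in sync so sudo/systemd can resolve the new name.
sed -i "s/^127\.0\.1\.1.*/127.0.1.1\t$NEWNAME/" /etc/hosts
# 3. Bounce networking so everything picks the new name up.
service networking restart
```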
<sarnold> cool :)
<mason> And, back.
<jbicha> shadoxx: are you aware of hostnamectl ?
<shadoxx> jbicha: i am
<shadoxx> i found an error in my template that may prevent the default hostname change stuff from taking over
<shadoxx> redeploying the template now, and i have my script set to hostname ctl for later
<lordievader> Good morning
<cpaelzer> good morning
<lordievader> Hey cpaelzer happy new year
<cpaelzer> hi lordievader, for you as well
<rbasak> nacc: reminder to take a look at https://bugs.launchpad.net/ubuntu/+source/php-defaults/+bug/1699659/comments/9 please
<ubottu> Launchpad bug 1699659 in php-defaults (Ubuntu Xenial) "phpquery always returns 0" [Undecided,Fix committed]
<Aison> why are systemd link files ignored? I had to create my own 10-network.rules in /etc/udev/rules.d/
<Ussat> So, an FYI regarding this Intel bug/fix coming: https://www.phoronix.com/scan.php?page=article&item=linux-415-x86pti&num=2
<Ussat> just a FYI
<jamespage> coreycb: doing a bit of tidying on aodh/ceilometer/gnocchiclient re ujson and trunk package build fixes
<coreycb> jamespage: ok. that is under MIR right?
<jamespage> coreycb: yeah but doko quite rightly pointed out its pretty much unmaintained upstream
<coreycb> jamespage: oh..
<jamespage> I raised that with the telemetry devs - general agreement to switch back to using json
<coreycb> jamespage: ok good
<jamespage> coreycb: https://review.openstack.org/#/c/530891/
<jamespage> actually - https://review.openstack.org/#/q/topic:bug/1737989+(status:open+OR+status:merged)
<nacc> rbasak: thanks for the poke
<rbasak> nacc: do you have any git-ubuntu MPs that need reviewing right now?
<nacc> rbasak: https://code.launchpad.net/~nacc/usd-importer/+git/usd-importer/+merge/334662 if you can, that's the script fixes
<nacc> rbasak: it's not passing jenkins, but I believe the code is correct
<rbasak> OK
<georgem1> any idea if magnum-ui will be packaged by Ubuntu? https://github.com/openstack/magnum-ui
<nacc> georgem1: is it packaged by debian?
<rbasak> Maybe a question for coreycb? ^
<georgem1> nacc: it doesn't seem to be packaged by debian
<nacc> rbasak: oh good catch, openstack related
<nacc> rbasak: i also have a few older MPs we probably need to talk about for correctness; let me know when a good time would be
<nacc> rbasak: i think also https://code.launchpad.net/~nacc/usd-importer/+git/usd-importer/+merge/334659 could be  reviewed
<nacc> rbasak: and https://code.launchpad.net/~nacc/usd-importer/+git/usd-importer/+merge/334675 (lower priority, affects build only)
<jamespage> nacc: that was a good first response tho
<jamespage> nacc: we've mostly tended magnum from the original debian packaging to avoid having to drop it from Ubuntu
<jamespage> magnum-ui - well that needs a contributor to do the packaging tbh
<rbasak> georgem1: ^
<nacc> jamespage: thanks for the info
<cpaelzer> jamespage: fyi I pushed the upload I did to git also
<cpaelzer> jamespage: so whenever you pick up 2.9 you know about those as well
<cpaelzer> jamespage: some build dep changes due to dpdk changes for example
<nacc> jamespage: re LP: #1740892, being relatively neophyte to corosync/pacemaker, what is the actual error condition? That pacemaker restarts? Or that it fails to successfully restart?
<ubottu> Launchpad bug 1740892 in pacemaker (Ubuntu) "corosync upgrade on 2018-01-02 caused pacemaker to fail" [Undecided,New] https://launchpad.net/bugs/1740892
<cpaelzer> jamespage: FYI https://code.launchpad.net/~paelzer/britney/hints-ubuntu-bump-openvswitch/+merge/335670 the bump to the badtest
<nacc> powersj: can you get me `snap version` and `snap list` from the jenkins host?
<nacc> powersj: i guess technically, from the VM that ran: https://jenkins.ubuntu.com/server/job/git-ubuntu-ci/229/console
<powersj> the VM should just be the daily xenial image from uvt
<powersj> right?
<powersj> nacc: here is the host if you are still interested: https://paste.ubuntu.com/26314305/
<nacc> powersj: i wasn't sure, but that sounds right
<nacc> powersj: yeah i suppose so, since i think the jenkins job is using a VM and then installing the lxd snap there?
<nacc> stgraber: --^ fyi
<powersj> nacc: here is what it would produce if you ran now: https://paste.ubuntu.com/26314319/
<powersj> and we use the lxc/lxd that is in the image
<powersj> so no lxd snap
<nacc> powersj: hrm
<nacc> https://jenkins.ubuntu.com/server/job/git-ubuntu-ci/230/console
<nacc> it's referring to a snapd path
<nacc> (both my jobs that failed are doing this)
<pmatulis> 'snap search' outputs a short list of snaps. why?
<nacc> pmatulis: that might be better asked in #snappy :)
<pmatulis> nacc, ty
<stgraber> nacc, powersj: I don't have the push issue here with the same snapd and core versions
<stgraber> is that on 16.04 + hwe kernel?
<stgraber> trying to figure out whether the 4.10 kernel is making a difference somehow
<powersj> linux-image-4.4.0-104-generic
<powersj> nacc: that looks like during the snapcraft cleanbuild right?
<nacc> powersj: afaict, the snap is already built
<nacc> powersj: sorry, was afk, the error occurs during our `git ubuntu build` test
<nacc> powersj: when we push tarballs from the host to lxd guest in the jenkins VM
<nacc> powersj: "host" being the VM itself, sorry
<Aison> hello
<Aison> how can I make isc-dhcp-server to listen to all available network devices?
<nacc> Aison: is that not the default behavior?
<Aison> nacc, I thought the default behavior is eth0
<nacc> Aison: ah you might be right
<sarnold> I think a dhcp server is very much the kind of thing that you should know which interfaces it is listening on
<Aison> sarnold, yes, on all of this router. I defined now all of them in the isc dhcp settings (almost 70 devices)
<sarnold> Aison: oof :) no wonder you wanted wildcard binds..
<Aison> yes :-)
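A minimal sketch of the "listen everywhere" setup discussed above, assuming Ubuntu's isc-dhcp-server packaging: leaving INTERFACES empty in the defaults file starts dhcpd with no interface argument, and dhcpd then serves every interface that has a matching subnet declared in dhcpd.conf (verify against your release's packaging before relying on it).

```shell
# /etc/default/isc-dhcp-server (sketch, not a definitive config):
# an empty INTERFACES list means dhcpd gets no interface argument, so it
# picks up every interface with a declared subnet on its own -- no need
# to list ~70 devices by hand.
INTERFACES=""
```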
<mason> https://bpaste.net/show/11fcf07f4744
<mason> oh, I'm late late late
#ubuntu-server 2018-01-04
<masber> good afternoon, I have an ubuntu vm and I can't make the network to work... this is the error message I see in journalctl -xe --> Failed to start Raise network interfaces.
<masber> failed with result 'exit-code'.
<masber> the story is I created a new vm and installed ubuntu in it but I make a mistake selecting the primary nic to setup the server ip
<masber> so after installation I went to the /etc/network/interfaces and changed the interface name to the right one
<masber> then I restarted the networking service and I have been getting this error since then
<masber> I am running ubuntu 16.04
<masber> any idea?
<allquixotic> Will the Canonical Livepatch Service be able to implement LPTI on running kernels or will a reboot be required?
<lordievader> Good morning
<cpaelzer> good morning
<lordievader> Hey cpaelzer how are you doing?
<cpaelzer> ignoring my habit to see things worse than they are, actually good :-)
<cpaelzer> how about you lordievader
<cpaelzer> had a good start?
<lordievader> Doing good here, got tea for a change
<lotuspsychje> for the users that might ask about kpti, !kpti has been updated
<trippeh> !kpti
<ubottu> Spectre and Meltdown are security issues that affect most processors, mitigated by a set of Linux kernel patches named KPTI. | General info: https://spectreattack.com/ | Ubuntu (and flavors) info: https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown | An Ubuntu Security Notice will be released when updates are available, subscribe at https://usn.ubuntu.com/usn/
<cpaelzer> thanks lotuspsychje
<cpaelzer> ahasenack: ovfs2-tools is good now
<cpaelzer> ahasenack: do you want me to sponsor it as it is now?
<ahasenack> saw your reply, just replied to that
<ahasenack> running the dep8 tests locally
<cpaelzer> ahasenack: ok - give me (or another uploader) a ping when you think it is ready
<ahasenack> ok
<ahasenack> I can actually upload that one, just not push the upload tag
<cpaelzer> ahasenack: well we can revise the tag if needed
<cpaelzer> ahasenack: so I could push the tag as-is
<cpaelzer> which means you can upload if you are happy after some time
<cpaelzer> and if not we can change the upload tag to whatever is the truth then
<ahasenack> let's wait a bit, I'm getting a silly dep8 test error
<ahasenack> which works if I run it interactively in a vm
<cpaelzer> ok, waiting for you
<ahasenack> basic                FAIL stderr: yes: standard output: Broken pipe
<ahasenack> that's just a "yes | fsck.ocfs2 -f -y -F $DISK 2>&1"
<cpaelzer> ahasenack: I have seen such issues
<cpaelzer> being part of a "when does what die" problem
<cpaelzer> in my case I had an unlimited read from /dev/urandom piped to something else
<cpaelzer> yes is an endless-til-killed stream
<cpaelzer> so it might be the same issue
<cpaelzer> if you can run with a limited amount of "y" try that ahasenack
<ahasenack> what do you mean limited amount of "y"?
<cpaelzer> like 4 times "y" instead of an infinite stream
<rbasak> I thought a pipeline was supposed to return the result of the final command?
<ahasenack> oh
<cpaelzer> rbasak: that is mostly just a warning
<cpaelzer> in my cases it didn't even affect RCs
<cpaelzer> ahasenack: does it for you?
<ahasenack> I'll try after lunch
<cpaelzer> affect RCs?
<rbasak> Oh
<rbasak> Hold on
<rbasak> You need allow-stderr in your Restrictions
<rbasak> IMHO, that's a wart with dep8 tests.
<ahasenack> rbasak: the test passed in the past
<ahasenack> and I don't get this error when I run it in a vm
<rbasak> OK, so that's a race that cpaelzer is describing.
<cpaelzer> yeah if that goes out on stderr then rbasak is right, the allow is needed then
<ahasenack> so yes, allow-stder would work around it
<ahasenack> but I will try a bit more some other tricks before
<rbasak> If you're going to use a yes | pipe, then I think an allow-stderr is required.
<rbasak> And would be the correct fix.
<cpaelzer> OTOH if you can easily do the same with limited number of "y" do that
 * ahasenack -> lunch
<cpaelzer> expect might be the best but also most complex solution
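cpaelzer's "limited amount of y" idea can be sketched like this; `cat` stands in for the real consumer, `fsck.ocfs2 -f -y -F "$DISK"` from the test above, so the sketch runs anywhere.

```shell
# Sketch of cpaelzer's suggestion: send a fixed number of "y" answers
# instead of the endless `yes` stream, so nothing is left to die with
# SIGPIPE (and write "Broken pipe" to stderr) when the consumer exits
# early. `cat` stands in for: fsck.ocfs2 -f -y -F "$DISK" 2>&1
answer_yes() { printf 'y\n%.0s' 1 2 3 4; }   # four "y" lines, then EOF
answer_yes | cat
```

Since `-y` is already passed to fsck.ocfs2, the pipe may even be droppable entirely, which would also sidestep rbasak's allow-stderr point.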
<teward> i think i asked this but didn't get a response, who do I have to bother about the cloud-images.ubuntu.com linux images for lxc/lxd, because there's a small issue with them...
<teward> minor but annoying ultimately :P
<cpaelzer> stgraber: ^^
<cpaelzer> oh sorry
<rbasak> teward: ask the actual issue please :)
<cpaelzer> are lxc/lxd even on cloud-images?
<rbasak> ubuntu:xenial comes from somewhere. No idea where :)
<teward> cpaelzer: the disk images for {INSERT_UBUNTU_RELEASE_HERE} are.  https://paste.ubuntu.com/26319708/ for `lxc remote list` output
<teward> rbasak: the cloud images, when spun up by LXC/LXD, don't get the hostname added to /etc/hosts
<teward> which can in some cases cause issues with `sudo` and such
<teward> Debian's cloud images have no problem with this
<cpaelzer> ok, then they are pushed there
<cpaelzer> still I think the highlight to stgraber ^^ is the right one for you
<rbasak> teward: http://cloudinit.readthedocs.io/en/latest/topics/modules.html#update-etc-hosts
<rbasak> "If this is set to false, cloud-init will not manage /etc/hosts at all. This is the default behavior."
<teward> rbasak: so, then, where does one make that change
<rbasak> lxc profile edit default
<rbasak> config:
<rbasak>   user.user-data: |
<rbasak>     #cloud-config
<rbasak>     manage_etc_hosts: localhost
<teward> *throws the !pastebin factoid at rbasak*
<teward> just saying :P
<teward> i'll update that
 * rbasak throws the manual back at teward :-P
<teward> rbasak: *very* odd though that that only happens for the Ubuntu images
<rbasak> Ubuntu images from ubuntu:xenial etc. uses cloud-init by default
<rbasak> images:debian/sid/amd64 etc. do not.
<teward> ah, makes senses.
<teward> sense*
<rbasak> I'm not sure what images:ubuntu/... does. Presumably they don't use cloud-init. But I normally want cloud-init, so I never use those ones.
<teward> nor I ;P
<teward> well now I've updated all my LXC/LXD systems accordingly heh
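rbasak's profile snippet can also be applied non-interactively. This sketch only builds and prints the user-data; the actual `lxc profile set` call is left commented because it needs a live LXD.

```shell
# Non-interactive version of the `lxc profile edit default` recipe above
# (sketch only: the lxc call is commented out since it needs a live LXD).
userdata=$(cat <<'EOF'
#cloud-config
manage_etc_hosts: localhost
EOF
)
# lxc profile set default user.user-data "$userdata"
echo "$userdata"
```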
<rbasak> smoser: any thoughts on moving the manage_etc_hosts default? When do people _want_ it to be false, except to avoid breaking existing setups?
<Odd_Bloke> teward: rbasak: cpaelzer: ubuntu:* and ubuntu-daily:* come from cloud-images.u.c.; images:* come from stgraber.  The cloud-images project would be the appropriate place on LP to file a bug. :)
<stgraber> though in this case, whether it's a bug is debatable, I've always found it weird that cloud-init doesn't generate the /etc/hosts entry, but based on its documentation, it's clearly deliberate
 * rbasak files bug 1741277
<ubottu> bug 1741277 in cloud-init (Ubuntu) "manage_etc_hosts default is unhelpful" [Undecided,New] https://launchpad.net/bugs/1741277
<teward> stgraber: could we not override that on our side of things?
<teward> because in 99% of cases you are probably going to *expect* it to not cause `sudo` to explode in the lxc/lxd container with an 'unable to find hostname HOSTNAMEGOESHERE' error
<teward> it still works, but...
<stgraber> teward: the easiest would be to have a different default in cloud-init when dealing with a container, though I'd expect the sudo issue to be just as true inside a cloud instance, so not sure why this is lxd-specific
<teward> stgraber: so far i've only noticed it in lxd.
<smoser> rbasak: well, in cases where dns works, a sane cloud that provides dns entry in its dhcp (or otherwise) provided dns servers
<teward> but since i don't have any non-lxd cloud-init-initiated instances... :P
<smoser> in such an environment, the cloud knows the right result for looking up hostname, and if you put an entry in /etc/hosts for localhost, you break what would have worked.
<smoser> the issue is not "lxd specific", its "broken cloud specific".
<smoser> why should the platform not provide an answer for the hostname that it gave the instance from the dns server that it provided to the instance.
<rbasak> Is it sane for the local hostname to result in a round trip around the network via DNS?
<smoser> maybe
<smoser> i honestly think that the sane fix is to stop sudo from doing that nonsense.
<smoser> no one uses sudo like that anymore
<smoser> with one sudo config spread across multiple hosts
<rbasak> I think it's reasonable for a system to expect to always get a result from looking up itself.
<rbasak> On lxd, we're in a position to make that happen.
<smoser> where the hostname of the system is looked up via dns
<rbasak> The question is just about which component should manage that.
<smoser> really, i think the solution for "get rid of that warning from sudo" is to *get rid of that warning from sudo*
<rbasak> It's not just sudo.
<rbasak> Other stuff breaks too.
<smoser> like ?
<rbasak> I don't recall.
<rbasak> I don't see it often, because I consider a system not being able to look up its own name as broken and always fix that first.
<smoser> and anything that *does* depend on it is honestly probably broken.
<smoser> in some way, its overly simplistic solution to "whats my IP address" or something like that.
<rbasak> Perhaps
<rbasak> But it's still broken for a system to not be able to look up its own name
<smoser> yeah.
<TJ-> There was a bug recently affecting sudo causing hangs when the system was offline, when mdns was installed for nsswitch, too
<smoser> i do think we should look at doing this better, and have a solution for bionic that does the best thing.
<smoser> can anyone actually  justify sudo doing a hostname lookup?
<rbasak> I don't object to fixing sudo. I just don't think that resolves the issue from the user's perspective.
<smoser> in a year > 2000 ?
<TJ-> nacc and I were debugging it, bug 1295229
<ubottu> bug 1295229 in nss-mdns (Ubuntu) "With 'hosts: mdns4' in nsswitch.conf, getaddrinfo() returns -5 (EAI_NODATA) when network interface is down" [Undecided,Confirmed] https://launchpad.net/bugs/1295229
<rbasak> The problem with sudo is that the file format specification will continue to permit hostnames even if nobody uses that facility.
<rbasak> Changing that is extremely difficult.
<rbasak> So then it becomes a request to optimise sudo to not do a lookup unless it needs it.
<TJ-> could the solution be in nsswitch instead ?
<rbasak> TJ-: then it wouldn't (easily) be configurable though.
<rbasak> OTOH, the solution in /etc/hosts is fine and is configurable.
<smoser> TJ-: i think that is in the realm of 'myhostname' plugin or something
<smoser> https://www.freedesktop.org/software/systemd/man/nss-myhostname.html
<TJ-> smoser: yeah, thanks for jogging my memory on that one... we found that a recent addition
<rbasak> That's interesting
<smoser> TJ-: https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1730744
<ubottu> Launchpad bug 1730744 in systemd (Ubuntu) "sudo is slow (10 seconds) when hostname is not resolvable" [Undecided,New]
<smoser> that is related
<smoser> i came to the party late
<smoser> is that the bug that was originally being raised?
<rbasak> I suppose the real intention of my bug is "not all platforms running cloud-init make the system hostname resolveable by default"
<rbasak> I'd consider it resolved as soon as that is true.
<TJ-> smoser: in the case that nacc and I investigated, we found the nsswitch.conf "hosts: ..." entries didn't seem to be processed in the order, or according to the rules, as documented
<smoser> well, all this is part of why i leave it untouched.
<TJ-> right, the case was a default 17.10 install too.
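The nss-myhostname module smoser linked is enabled by a one-line edit to the "hosts:" entry in /etc/nsswitch.conf. This sketch just builds and prints what that line could look like; the module order shown is an assumption, check the nss-myhostname manpage for the recommended placement.

```shell
# Sketch: an edited /etc/nsswitch.conf "hosts" line with nss-myhostname
# enabled, so the local hostname resolves even without an /etc/hosts entry
# or DNS. Putting "myhostname" after "dns" (an assumption here) keeps real
# DNS answers authoritative and only falls back for the local name.
hosts_line='hosts: files dns myhostname'
echo "$hosts_line"
```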
<ahasenack> hi, can someone please import ubuntu-fan into the ubuntu server git repo?
<ahasenack> rbasak: cpaelzer ^
<cpaelzer> ahasenack: I'll do so
<ahasenack> thanks
<cpaelzer> running
<cpaelzer> but I think it had a native git
<cpaelzer> check d/control maybe?
<ahasenack> hm
<ahasenack> nothing in there
<ahasenack> not even a single url
<ahasenack> readme points at http://www.ubuntu.com/fan and that's it
<ahasenack> the other url is for iana.org
<ahasenack> that /fan one is a 404, btw
<ahasenack> cpaelzer: beware the launchpad login oauth token prompt, it's easily missed
<nacc> cpaelzer: please also add to whitellist
<cpaelzer> nacc: what would happen if we don't, other than getting out of sync?
<nacc> cpaelzer: it breaks the assumption that all of the existing repos keep up with the publisher
<nacc> cpaelzer: but that's it :)
<cpaelzer> ahasenack: imported
<cpaelzer> nacc: added to whitelist
<nacc> cpaelzer: thx
<cpaelzer> rbasak: I didn't realize what you meant with a fetch being needed along that
<cpaelzer> anything I should do?
<ahasenack> !kpti
<ubottu> Spectre and Meltdown are security issues that affect most processors, mitigated by a set of Linux kernel patches named KPTI. | General info: https://spectreattack.com/ | Ubuntu (and flavors) info: https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown | An Ubuntu Security Notice will be released when updates are available, subscribe at https://usn.ubuntu.com/usn/
<rbasak> cpaelzer: we mean the bastion gets the whitelist straight out of the git tree rather than the snap. So the bastion needs the updated whitelist fetching with git before it'll take effect.
<rbasak> (also the loop restarting I think)
<rbasak> I don't think you need to do anything.
<rbasak> But probably useful to know that.
<cpaelzer> thanks rbasak
<Neo4> I'm going to make user root the owner for apache, putting root in apache2.conf instead of www-data?
<Neo4> can that solve the permission problem?
<Neo4> right now I always need to add www-data to the group of my user
<Neo4> ???
<Neo4> in ubuntu-server guide I've read it is possible
<Neo4> but not recommended, why?
<nacc> Neo4: you want root to own your website?
<nacc> Neo4: then if there is an exploit of apache2, the exploiter has root on your system, potentially
<Neo4> nacc: yes, so that apache2 is always able to write files
<nacc> Neo4: that's not a reasonable thing for a webserver
<Neo4> nacc: do you think it's possible? Could there be an exploit if I install some module?
<Neo4> nacc: ok, I'll practice the chmod and find commands then
<Neo4> nacc: after installing a site I should always set rights 775 and 664
<Neo4> if apache were root the rights would be 755 and 644
<Neo4> nacc: or I should just change owner of files from my user to www-data
<Neo4> but I put root and neo (my current user) in the www-data group, and www-data in the neo and root groups
<Neo4> and rights 775
<teward> NEVER add anything to the `root` group
<teward> NEVER add yourself to the `www-data` group
<teward> NEVER add the webserver www-data user to your own user group
<teward> use actual permissions to control things instead
<Neo4> teward: why?
<teward> or ACLs for 'customized' rules.
<teward> Neo4: permissions and security
<Neo4> teward: if the file has ownership www-data and group www-data, I can't copy this file?
<teward> you *never* want to give `www-data` or non-root system accounts access to `root`
<teward> Neo4: I think you need to get an understanding of how file permissions work
<teward> and how the underlying system permissions structures on Linux work
<Neo4> teward: I know
<teward> BEFORE continuing.  my two cents.
<Neo4> teward: see, www-data creates its own files and the owner will be www-data:www-data
<teward> i know this - i'm very fluent in filesystem permissions.
<teward> the problem is
<teward> you want someone *else* to access the data
<Neo4> I'll connect to my server and can't edit this files, because apache creates it with 755 rules and my user is not in group
<teward> well, the `root` user has god access to everything that doesn't have specialized access permissions
<Neo4> teward: yes, for edit files
<teward> want to know another method?
<Neo4> teward: if my user neo in www-data group I put right 775 or 664 and can do it
<teward> Neo4: alternatively
<teward> you can give your user ownership of the files
<teward> www-data as group access
<teward> and set the setgid bit for all directories within the document root
<teward> thereby giving both you and `www-data` read/write/edit permissions.
<teward> which is the ***proper*** way to do this.
<teward> ... well one of them anyways.
<Neo4> teward: I don't know what is setgid bit
<teward> https://askubuntu.com/questions/767504/permissions-problems-with-var-www-html-and-my-own-home-directory-for-a-website/767534#767534
<teward> steps 1, 2, and 3.
<teward> replace `/var/www/html` with the actual directory for your document root
<teward> and stop messing with who has group access to whichever account
<teward> then go research how filesystem permissions work in Linux
<teward> because if you don't understand the basic permissions structure, IMO you should probably be hiring someone to do this stuff for you instead :P
<teward> (yes i'm salty and grumpy today, i've been staring at code all day so my eyes hurt and i have a massive headache)
<teward> (and four servers explodified so i'm busy rebuilding those... what a horrible day for me >.>)
<Neo4> :)
<Neo4> ok
<Neo4> thanks, will try
<teward> step 3, the setgid bit, just basically says "Any file created in this directory with default permission settings will get the group set to the same group-ownership as the folder it's created in" in a nutshell
<teward> it's far more complex than just that
<teward> but it's the brief explanation
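The three steps teward points at in the askubuntu answer can be sketched against a scratch directory standing in for the real document root (the chown is shown commented because it needs root and a real www-data group; "youruser" is a placeholder).

```shell
# Sketch of steps 1-3 from the askubuntu answer, run against a scratch
# directory standing in for /var/www/html.
DOCROOT=$(mktemp -d)
# chown -R youruser:www-data "$DOCROOT"          # step 1: you own, www-data has group access (needs root)
find "$DOCROOT" -type d -exec chmod 2775 {} \;   # step 2: setgid + rwx for owner and group on directories
touch "$DOCROOT/index.html"
chmod 0664 "$DOCROOT/index.html"                 # step 3: group-writable files
stat -c '%a' "$DOCROOT" "$DOCROOT/index.html"    # prints 2775 then 664; the leading 2 is the setgid bit
```

With the setgid bit set, files created under the tree inherit the directory's group, so both the owner and www-data keep write access without anyone joining anyone else's primary group.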
<teward> now i need coffee
 * genii slides one on over to teward, ASAP
<ahasenack> I'm trying to debug a dns resolution problem in an lxd container (artful),
<ahasenack>  /etc/resolv.conf has nameserver as 127.0.0.53
<ahasenack> that's systemd-resolved
<ahasenack> and it's not working: dig @127.0.0.53 just times out
<ahasenack> but dig @some-external-dns works just fine
<ahasenack> so where do I find out which forwarders systemd-resolved is using?
<ahasenack> tcpdump -i any port 53 shows nada when I use dig against @127.0.0.53, just dig's requests, but no reply
<ahasenack>  /etc/systemd/resolved.conf has everything commented (#)
<teward> ahasenack: did you *specify* any DNS servers in the network config or in the host for the LXD bridge/dhcp to assign?
<ahasenack> it's an lxd deployed via juju into that host, which is deployed via maas
<ahasenack> the host is fine
<ahasenack> the host also has 127.0.0.53 in resolv.conf,
<ahasenack> and its /etc/systemd/resolved.conf is also default
<ahasenack> so I also don't know where the host is getting the upstream dns from
<ahasenack> but host is bionic, netplan all the way
<ahasenack> hm
<ahasenack> juju also used netplan for this xenial container...?
<ahasenack> it just created the netplan config, but netplan is not even installed in this container, so that's a no-op
<ahasenack> but yeah, /etc/network/interfaces has no dns-nameserver config
<ahasenack> just the domain
<ahasenack> and the loopback interface does have a dns-nameserver, and that's 127.0.0.53
<ahasenack> which is what ended up in /etc/resolv.conf
<ahasenack> but why would 127.0.0.53 not even try the root servers
<ahasenack> I think it's in a loop
<sarnold> I don't think systemd-resolved is a recursor, is it? I thought it just forwarded queries to another server
<sarnold> I wouldn't expect it to check roots itself
<teward> sarnold: last i checked it isn't
<teward> but if systemd-resolved has no upstream DNS set for forwarding it just implodes
<teward> same issue I had with dnsmasq ;)
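One place the forwarders ahasenack was hunting for can be found: systemd-resolved keeps the real upstream servers in /run/systemd/resolve/resolv.conf (the 127.0.0.53 in /etc/resolv.conf is only the local stub). Since the real file needs a running resolved, this sketch parses a sample copy; the addresses are invented.

```shell
# Sketch: list the upstream forwarders from resolved's runtime file,
# /run/systemd/resolve/resolv.conf. A sample copy is parsed here so the
# sketch runs anywhere; the nameserver addresses are made up.
sample='# This file is managed by man:systemd-resolved(8).
nameserver 10.0.0.2
nameserver 8.8.8.8'
printf '%s\n' "$sample" | awk '/^nameserver/ {print $2}'
```

On the affected container, `systemd-resolve --status` (the artful-era tool) shows the same information per link.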
<patdk-lap> odd, everyone has released patches now, except ubuntu :(
<dax> to be fair, Debian only *just* released
<mason> https://security-tracker.debian.org/tracker/CVE-2017-5753 doesn't reflect a release yet
<mason> nor https://www.debian.org/security/
<mason> Ah, but https://lists.debian.org/debian-security-announce/2018/msg00000.html
<oerheks> https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/
<oerheks> "everyone has released patches now" ???
<TJ-> patches are great, what we need is built and published binaries :D
#ubuntu-server 2018-01-05
<masber> good afternoon, I am getting an error while restarting the networking service after updating my /etc/network/interfaces file to change the nic configuration https://bpaste.net/show/c3e83d9fe9d5. any idea?
<sarnold> restarting networking has usually lead to useless boxes
<sarnold> does systemctl reload networking    do a better job?
<masber> sarnold, Failed to reload networking.service: Job type reload is not applicable for unit networking.service.
<sarnold> :(
<masber> sarnold, ifconfig ens224 down && ifconfig ens224 up works
<masber> but I still don't understand why it fails if using systemctl...
<masber> I have always used centos, so I guess I am using ubuntu in the wrong way?
<sarnold> masber: I've always used ip directly when wanting to make 'live' changes
<sarnold> I have no idea if the systemd networking service files are supposed to be safe like this or not :(
<masber> I see
<masber> well best thing to know if things are stable is rebooting the server and check again I guess
<sarnold> yeah
<delt> Hello
<delt> installing spamassassin on my mailserver gives me this error: install: fatal: unable to read tcpserver: file does not exist
<delt> or even running dpkg --configure spamassassin
<delt> in fact i get this error when installing/uninstalling any package when spamassassin is installed
<delt> full strace of dpkg --configure spamassassin -- https://pastebin.com/bDvtfyrV
<delt> i extracted the spamassassin .deb file and the only reference to tcpserver in it, is:
<delt> [pts/2][root@vhost0]:/tmp/spamassassin# grep -r tcpserver .
<delt> ./usr/share/perl5/Mail/SpamAssassin/Message/Metadata/Received.pm:       # by tcpserver or a similar daemon that passes rDNS information to qmail-smtpd.
<sarnold> delt: it's odd that install from /usr/local/bin/install is used
<delt> sarnold: just a symlink to /usr/bin/install
<delt> actually /usr/local/* are symlinks to ../*
<delt> it opens "." and then chdir's a few times to /usr/local then fchdir's back to . (file descriptor 3) ...not very informative :/
<delt> doesn't dpkg keep more detailed logs about wtf is going on?
<delt> Setting up spamassassin (3.4.1-3) ...
<delt> install: fatal: unable to read tcpserver: file does not exist
<delt> dpkg: error processing package spamassassin (--configure):
<delt>  subprocess installed post-installation script returned error exit status 111
 * mason sticks like glue to ifup / ifdown.
<nacc> delt: actually, that could be the issue, it may be setting some path based upon install's path? Can you remove that symlink and try
<nacc> delt: it doesn't really make sense to have /usr/local/bin symlink to /usr/bin
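The shadowing sarnold spotted can be reproduced in a scratch directory: an `install` symlink earlier in PATH wins over /usr/bin/install, which is why delt's maintainer script picked up the /usr/local copy.

```shell
# Reproduce what sarnold noticed: an "install" symlink earlier in PATH
# (a scratch dir here, /usr/local/bin on delt's box) is the one a
# maintainer script will find first.
tmp=$(mktemp -d)
ln -s /usr/bin/install "$tmp/install"
PATH="$tmp:$PATH" command -v install   # prints $tmp/install: the symlink wins
```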
<masber> hi, so this is what is happening... I wanted to change the IP address of a nic so I edited /etc/network/interfaces file and changed the IP there, then I put the interface down and up again. What ubuntu did was to add the ip instead of replacing it. Why is that?
<sarnold> because you removed the knowledge of the ip address when you edited the file *before* running the ifdown command
<sarnold> it's far less intelligent than you may expect :)
<masber> it should read the interfaces file and act according to that no?
<masber> hahaha
<sarnold> I'd guess it tried to remove the new address ('succeeded' in the sense that the address wasn't there any more), and then add the new address ..
<masber> yeah weird
<sarnold> you're not the first and probably not the last to trip over this. :)
<masber> yes I know hahaha I find it funny
<cpaelzer> good morning
<jamespage> rbasak: hi! so coreycb and I chatted about your suggestion to do a 'binary only' version of pxc 5.7 for bionic
<jamespage> rbasak: however that's going to push work from packaging -> charm and make upgrades more complex so I'd rather re-align the 5.7 packaging with what we have for 5.6
<ahasenack> rbasak: hi, any idea how I could build ubuntu-fan with git-ubuntu? https://pastebin.ubuntu.com/26325413/
<ahasenack> it's 0.12.8~17.10.1 in artful already
<ahasenack> I filed a bug, got another backtrace when trying to build bionic's version
<ahasenack> cpaelzer: if you could take another quick look at https://code.launchpad.net/~ahasenack/ubuntu/+source/ocfs2-tools/+git/ocfs2-tools/+merge/335651
<ahasenack> the dep8 tests pass now, I followed your suggestion from yesterday
<rbasak> ahasenack: as a workaround, try sticking the orig tarball in the parent directory by hand.
<ahasenack> rbasak: it's a native package, the tarball is not named "orig" if that matters
<ahasenack> there is no orig tarball for the new version I'm building
<ahasenack> unless I create one manually, but that's just my current directory tarred up
<ahasenack> anyway, I'll hack my way around it without g-u for now
<ahasenack> going old-school :)
<teward> rbasak: I think you and I had the same idea with regards to the latest evil things on Ask Ubuntu :p
<teward> assuming that was you who posted about it (Meltdown/Spectre)
<rbasak> teward: sounds like someone didn't check for duplicates before posting :)
<teward> rbasak: sounds like I've been on the phone all day with clients on this issue, got no sleep, need 500 cups of coffee and a raise.
<teward> :p
<teward> either way, yours is the parent, both are 'protected' from newbies, and have our answers together.
<teward> that's gonna be the canonical question we point people at, I think.
<rbasak> Looks good
<teward> the only thing keeping me awake?  I'm on my 5th bottle of Barq's red creme soda.
<teward> and that's just pure sugar
<teward> rbasak: was there an official 17.04 EOL announcement?
<teward> (https://askubuntu.com/questions/992232/what-is-ubuntus-status-on-the-meltdown-and-spectre-vulnerabilities#comment1601291_992617 - why i ask)
<rh10> teward, well, it means one of them is not patched yet?
<teward> rh10: you aren't rbasak.
<teward> i'm well familiar with the problem
<teward> and that wasn't what i was asking
<rbasak> teward: I've not seen an announcement yet.
<teward> that's what i was wondering, rbasak.  Since the commenter there was about "17.04 isn't patched and won't be"
<rh10> teward, well, yep im not a rbasak
<rh10> sorry about that
 * teward yawns
<teward> sorry if I seem irritable, i've had a crap morning .>>
<rh10> np
<rbasak> teward: could be worth asking the security team if they're preparing updates for Zesty. Sounds like something they'd know :)
<albech> I am doing backups on disk and want to replicate the backup across two different storage devices. Does anyone know if it is possible to create a raid 1 across two different storage devices (SANs) Thought it would be smarter than manually copying from one dive to the other after successful backup. Will the raid be able to resync if one of the devices go offline for updates etc?
<sdeziel> albech: have you looked into DRBD?
<albech> nope
<sdeziel> that would be my starting point :)
<albech> looking now ;)
<albech> cheers
<patdk-lap> really depends on your goal
<patdk-lap> drbd is raid1
<patdk-lap> but with it comes all the issues of raid1
<patdk-lap> like, admin deletes a file from the backup server by accident
<patdk-lap> your other backup server will NOT have it also
<patdk-lap> zfs send/recv is more ideal for this, but depends on using zfs though
<albech> patdk-lap: thanks for the input. will look into this as well
<apb1963> There's far too much info out there to know what's current and what's ancient history... and what's ancient history warmed over.
<apb1963> 16.04: Anyone have a link to the current recommended method of setting up a NIC as an access point?
<patdk-lap> find a wifi device that actually supports it first?
<patdk-lap> then install whatever custom hostapd patches you need to support that device in ap mode
<apb1963> Thanks, kind of vague.  Got a link?
<Henster> hi guys please help a noob, i cannot see anything on my screen after installing server 16.04. the screen card is connected to the dvi port since my new screen has no vga ,, i can connect via ssh ...
<patdk-lap> no I don't and yes, cause it's different for every single nic
<apb1963> well, that's interesting because this link seems to explain it pretty well... but I was looking for something "official".  Sadly, ubuntu docs are out of date and... well.. weak in general.
<Henster> i have installed and get this message :  xrandr
<Henster> Can't open display
<apb1963> https://askubuntu.com/questions/180733/how-to-setup-an-access-point-mode-wi-fi-hotspot/180734#180734
<Henster> omg i need to test my eyes, I have hdmi and it's working ..lol
<rbasak> nacc: FYI, I have some validated Sources file fetching code working
<nacc> rbasak: nice, do you want to propose a stacked MP?
<rbasak> nacc: needs some polishing, tests, etc.
<nacc> rbasak: ack, i wanted to add some more tests to my MP anyways
<nacc> that test the code as-is, including the network interactions a bit
<cpaelzer> ahasenack: looking at ocfs again
<cpaelzer> ahasenack: or was that done by someone else?
<dpb1> cpaelzer: he wanted you to
<cpaelzer> on it
<ahasenack> cpaelzer: oh, you are here
<cpaelzer> ahasenack: done
<ahasenack> thanks
<cpaelzer> ahasenack: I'm here again I should say
<ahasenack> :)
<cpaelzer> I had some things waiting for builders and I wanted to check
<cpaelzer> but not yet
<ahasenack> cpaelzer: can you push the upload tag?
<cpaelzer> I can, I'll ping you
<ahasenack> cpaelzer: also, what's the debian equivalent of excuses? Just to check if the failure happens there as well
<ahasenack> but I can file a bug nonetheless
<cpaelzer> ahasenack: with so many updates, please confirm that "dcacab19" is "the right thing"
<cpaelzer> ahasenack: debci
<cpaelzer> I'll fetch a link for you
<ahasenack> dcacab199c3800e14c23342b6a0861d3ec5cc35d
<ahasenack> yep
<cpaelzer> ahasenack: https://ci.debian.net/packages/o/ocfs2-tools/unstable/amd64/
<cpaelzer> all good for them
<cpaelzer> interesting
<cpaelzer> ahasenack: this is one of the packages affected by a bug
<cpaelzer> it refuses to tag for non clean Dir
<cpaelzer> but nothing is unclean
<cpaelzer> we had a bug for this, I should mark ocfs2 as affected as well
<cpaelzer> oh well in this case it really was something of gitignore
<cpaelzer> ahasenack: tag pushed
<cpaelzer> IIRC you can upload right?
<ahasenack> I can
<ahasenack> cpaelzer: I reviewed your postfix mp, btw
<cpaelzer> saw it ahasenack
<cpaelzer> thanks
<ahasenack> ok
<cpaelzer> I also see the incoming MPs but I wont get to it before Monday
<cpaelzer> a chance for the others to review
<ahasenack> np
<cpaelzer> otherwise I'll pick them up then
<Kyoku> if i start using the bionic beaver daily build will i be able to apt upgrade it to the full LTS release when it's out or will i have to do a complete reinstall?
<dax> ubottu: final | Kyoku
<ubottu> Kyoku: If you install a development version of Ubuntu bionic and keep up with package updates, then you will be upgraded to the official release of 18.04 when it comes out. To make sure, type « sudo apt update && sudo apt full-upgrade » in a terminal.
<teward> Kyoku: yes, but I strongly advise you to wait (dailies aren't updated)
<teward> (at least not at the moment)
<Kyoku> thanks teward i just tried installing on vmware and it failed anyway so i'll wait
<teward> Kyoku: yeah I would not be working with the dailies or Bionic unless you have to.
<teward> if you just want Bionic for testing, consider an LXD container of it instead
<teward> which is what I've got running
<teward> ... at least, when I'm testing things, I do.
<Kyoku> i was actually trying to set it up as an LXD host for testing but yes I'll try that too
<teward> Kyoku: i'd say stick to LTS for the LXD host part, because we know that works, while running guests within that.  YMMV, but that's my opinion
<dnegreira> im quite happy with multipass for my virtual machine needs, it also runs bionic
<keithzg> Hmm what does "DNE" mean in the security notices, ex. https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-5754.html ?
<keithzg> (I mean, I get the meaning contextually it just oddly bugs me that I can't figure out the acronym, haha)
<dpb1> does not exist, I belive
<TJ-> correct
<Henster> hey guys teamviewer threw me under the buss can some one please suget a ssl version of a remote desktop that can connect to my server and desktop clients ?
<tomreyn> Henster: there's ssh, and you can tunnel through it.
<Henster> im using tightvnc on my other pc's just wondering if the data is encrypted ?
<oerheks> vnc over ssh is, vnc itself not.
<tomreyn> vnc does not provide transport layer encryption, ssh does. you can combine the two. if you want it faster than vnc, use x2go.
<Henster> x2go , ok nice tx
<nchambers> why not just use ssh?
<Henster> i need to remote desktop in my moms windows pc , she is blonde and its for our business so i need an all-rounder
<Henster> ill use ssh for my servers
<sarnold> twenty years ago I used PC AnyWhere to do windows things
<sarnold> I wonder if it still exists. and works without a modem.
<tomreyn> for windows, the most commonly used protocol for this purpose would be rdp (also works on linux for clients and servers). but i think you need to buy separate licenses for this on windows if you want multiple users watch/interact with the same screen at the same time (that is both the remote user connecting and the local user in front of the screen).
<tomreyn> -> ##windows
<sarnold> hehe yeah, asking a bunch of linux server folks what windows tools to use ..
<sarnold> I know putty can do ssh for windows people but I doubt X11 forwarding gets very far :)
<Henster> ha ha , i need to ask real IT peeps :)
<Henster> I think x2go will work for me ,, i have a server the clients can point to
<tomreyn> If you want to try something with ubuntu bash on windows and X there i bet donofrio in #ubuntu-on-windows would love to assist.
<qman__> you can also do RDP over SSH
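A sketch of the tunnelling oerheks, tomreyn and qman__ describe; hostnames, users and display numbers are placeholders:

```shell
# Forward local port 5901 to the VNC server's display :1 (port 5901);
# everything inside the tunnel is encrypted by ssh.
ssh -L 5901:localhost:5901 user@server.example.com
# ...then point the VNC client at localhost:5901

# The same idea works for RDP (port 3389) via a Linux jump host:
ssh -L 3389:windows-box.lan:3389 user@gateway.example.com
# ...then point the RDP client at localhost:3389
```

The VNC/RDP traffic itself stays unencrypted, but it only ever travels inside the ssh tunnel.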
<nacc> rbasak: are you ok if i cherry-pick parts of your dsc-builder repo? e.g., https://git.launchpad.net/~racb/usd-importer/commit/?id=49b01ff87e66e86f49f8639caef5a37088e13427
<nacc> I think that became the case in the devel branch refactor
<nacc> rbasak: I would like to pull in your dsc_builder changes, as then i can use them to test the dsc changes for ahasenack's bug
<mwynne> Hello. Can anyone tell me if the octavia-common package is available in ubuntu 16.04?
<nacc> !info octavia-common xenial | mwynne
<ubottu> mwynne: Package octavia-common does not exist in xenial
<nacc> mwynne: no such package in ubuntu, period.
<nacc> mwynne: did you mean octave-common?
<mwynne> nacc: No, Octavia, the OpenStack load balancer.
<nacc> mwynne: i see python packages for octavia, but not the one you named
<nacc> !info python3-octaviaclilent | mwynne
<ubottu> mwynne: Package python3-octaviaclilent does not exist in artful
<nacc> !info python3-octaviaclient | mwynne, sorry:
<ubottu> mwynne, sorry:: Package python3-octaviaclient does not exist in artful
<nacc> !info python3-octaviaclient bionic
<ubottu> python3-octaviaclient (source: python-octaviaclient): Octavia client for OpenStack Load Balancing - Python 3.x. In component universe, is optional. Version 1.1.0-1 (bionic), package size 27 kB, installed size 244 kB
<nacc> mwynne: --^ there, it's in 18.04 :)
<nacc> mwynne: i'd check UCA, i guess; maybe coreycb or jamespage would know for sure
#ubuntu-server 2018-01-06
<nacc> rbasak: i threw up a WIP MP based off dsc-builder that was really easy to put together a few basic unit tests for the bug ahasenack hit
<nacc> rbasak: in other words, nice work!
<keithzg> Huh, I think the roundcube packaging is currently at least a bit bogus, it seems to skip jquery.min.js and thus basically nothing works!
<keithzg> Wait, nevermind, Friday-night brain fuzz; turns out I don't actually have it installed from the package on my instance. I'm a bit confused by how it somehow changed then, though, huh.
 * keithzg calls it a night, hopes that speculation patches drop soon!
<Kyoku> is there an equivalent of rc.local in Bionic Beaver? i just noticed there's no rc.local file
<tomreyn> Kyoku: if it doesn't exist yet, just create a root-owned, executable shell script /etc/rc.local and add what you want.
<tomreyn> also /join #ubuntu+1 for bionic support
<tomreyn> also you may need to 'sudo systemctl enable rc-local'
<ikonia> thats assuming it's still got a unit file
<ikonia> it may have been removed
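A minimal sketch of tomreyn's steps, assuming (per ikonia's caveat) the rc-local compat unit is still present on the install:

```shell
# Bionic ships no /etc/rc.local; create a root-owned, executable one.
sudo tee /etc/rc.local >/dev/null <<'EOF'
#!/bin/sh -e
# Commands placed here run once at the end of multi-user boot.
exit 0
EOF
sudo chmod 755 /etc/rc.local

# systemd's rc-local compat unit then picks it up:
sudo systemctl enable rc-local
sudo systemctl start rc-local
```

If `systemctl enable rc-local` reports no such unit, the compat unit really was removed and a small custom .service file is the alternative.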
<xibalba> curl colosandiego.com
<xibalba> sorry wrong window
<albech> 16.04 Getting 'device appeared twice with different sysfs paths' in my syslog - https://paste.ubuntu.com/26332288/ I know that btrfs uses the same uuid on all devices in a raid, but this error seems to cause other problems on my system. Any idea what can be done to fix it?
<pankaj_> I have been googling for "what is mailing list and how to join" but nothing is satisfactory. Please I want to join. How to do. Frustated searching.
<TJ-> pankaj_: for Linux?
<pankaj_> TJ-: Yes.
<TJ-> pankaj_: see my response in ##linux
<rh10> join #docker
<rh10> sorry
<bananapie> linux is reporting a wrong load average, 5 minute average is showing as 88.56, but based on cpu usage, waiting for io, swapping, and actual performance, it should be closer to 0.54
<bananapie> is there any way to reset or fix the load average without a reboot?
<patdk-lap> I don't know what any of those have to do with load avg
<tomreyn> very much actually, if you read 'man uptime'
<tomreyn> but he's loooong gone
<patdk-lap> no
<patdk-lap> cpu usage, io wait, and stuff is how much cpu is used and waiting on drives
<patdk-lap> load is how many programs WANT to run
<patdk-lap> there could be many factors why they are runnable or blocked
<patdk-lap> cpu and disk are only two
<tomreyn> runnable or uninterruptable state
<tomreyn> "A process in a runnable state is either using the CPU or waiting to use the CPU."
<patdk-lap> I constantly have this issue with nfs usage
<tomreyn> so running processes  do count
<patdk-lap> when the nfs server stops responding
<patdk-lap> low cpu, low disk
<patdk-lap> extremely high loadavg
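The distinction patdk-lap is making can be checked directly: load average counts runnable (R) plus uninterruptible (D, e.g. stuck NFS I/O) tasks, not CPU time:

```shell
# The three load averages, straight from the kernel:
awk '{printf "1min=%s 5min=%s 15min=%s\n", $1, $2, $3}' /proc/loadavg

# Which processes are contributing right now: state R (running/runnable)
# or D (uninterruptible sleep, typically blocked on I/O such as a dead NFS mount).
ps -eo state=,pid=,comm= | awk '$1 ~ /^[RD]/'
```

A pile of D-state processes with the same command name (e.g. hung NFS clients) explains a huge load average on an otherwise idle box; there is nothing to "reset", the number drops once those tasks unblock.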
<Kyoku> i'm having problems with DNS lookups in bionic beaver, anyone else experiencing problems with the systemd resolver?
<tomreyn> Kyoku: /join #ubuntu+1
<Kyoku> thanks tomreyn
#ubuntu-server 2018-01-07
<sudormrf> hey guys! so I am trying to set up an internal certificate authority at home. what I would like to do is have a certificate per device that requires it with a SAN for that specific devices host name and IP. that way if I navigate to the host name or IP I don't get the nag warning about certificates (after I import the root to my devices). I've been googling it, but am not coming up with exactly what I need. I see that I can set req_v3
<sudormrf> in the openssl.cnf file and then do the subjectAltName bit, but that seems to insert those SANs for every certificate. Is there no way to make it ask me what names it should use for the certificate? does it have to be hard coded in openssl.cnf every time?
<sudormrf> I could change the cnf file each time I do the CSR, but that seems sort of silly
<sudormrf> seems like there has got to be a better way
<sudormrf> a way that I am missing
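One way around editing openssl.cnf per certificate, assuming OpenSSL 1.1.1 or newer (which added `-addext`); the hostname, IP and filenames below are examples:

```shell
# Self-signed cert with per-host SANs, no config file edits:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout nas.key -out nas.crt \
  -subj "/CN=nas.home.lan" \
  -addext "subjectAltName=DNS:nas.home.lan,IP:192.168.1.10"

# Verify the SANs actually landed in the certificate:
openssl x509 -in nas.crt -noout -ext subjectAltName
```

When generating a CSR instead (`-new` without `-x509`), `-addext` works the same way, but keep in mind the signing CA decides whether requested extensions are copied into the final certificate.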
<wxl> hey folks just a heads up: artful dot one is failing for server, as it is for lubuntu alternate (also d-i), both issues with dependencies, albeit different ones
<tomreyn> i thought there were no point-release installers for non-lts releases. change in policy? that'd be good.
<rbasak> powersj: see wxl's comment above
<ZeroWalker> can i ask questions about linux on a pi, probably basic stuff as i suck at it. Or is this about developing and support about just that?
<JanC> ZeroWalker: this channel is about using ubuntu as a server  :)
<JanC> so assuming you are using Ubuntu as some sort of server on a Raspberry Pi, this is the right channel
<ZeroWalker> goodie ty
<trekkie1701c> New kernel patches seem to work.  Bootable, at least.
#ubuntu-server 2018-12-31
<seekr> teward: (I don't know whether this inquiry is out of scope for this channel; I trust someone will tell me if it is.) I've just moved a site to a new server, and think I'm having a problem with mod_rewrite.  I followed the instructions in an article at https://www.digitalocean.com/community/tutorials/how-to-set-up-mod_rewrite as to "a2enmod rewrite" and "service apache2 restart" (on an Ubuntu server).  The module was not previously activated,
<seekr> judging by the absence of a message to that effect being produced.  However, I'm finding that accesses that used to work (on the previous server) are now failing.  :(
<seekr> I've grown very tired and must now go into nap mode for a while.  |-)  But if anyone has any possible answers, please do let me know
<cryptodan_mobile> seekr: I would check the error logs. Something might be in use that's deprecated on the new server
<seekr> cryptodan_mobile: I was thinking similar thoughts whilst drifting off into sleep mode, from which I've just emerged.  Will investigate.  Thanks.
<seekr> cryptodan_mobile: It was a good idea, but I see nothing in error.log corresponding to such a problem.  The access.log file shows only 404s for attempts to access the root-level pages.
<seekr> Joomla! is supposed to redirect such accesses, I think.
<seekr> I suspect mod_rewrite isn't fully active, and will thus resume my search for advice on enabling it in Apache.
<seekr> There must be a way to do a simple test, which I will now seek.
<seekr> tomreyn: Good morning.  Anybody home?  :)
<tomreyn> seekr: hi, i'm around.
<seekr> great - I have some questions about mod_rewrite in Apache.  Do you have knowledge/experience in that area?
<seekr> I got some info on #httpd last night, but I'm having trouble sorting through it.  I'm looking for a quick-ish way to diagnose and fix my problem,
<seekr> which is that Apache appears to not be doing URL rewriting for the Joomla! CMS.
<tomreyn> generally, please just ask your questions right away, most folks over here are quite pragmatic and prefer this over exchanging greetings (which causes back and forth).
<tomreyn> this said, thanks for the kind greeting ;)
<seekr> tomreyn: Well, I just stated my question in a generic sort of way - it's the most specific I can be at this point, I think.
<seekr> yw
<seekr> tomreyn: I have several menu items on all pages in a navigation area, which use URLs that refer to pages at the root of the site (e.g. /meetings) - but I'm getting 404 errors in the Apache log for those links.
<seekr> Apache should intercept those URLs and replace them with what's needed to load the appropriate pages.
<tomreyn> you're asking about apache httpd. apache is a project incubator, an umbrella for many projects. the webserver just happens to be the most known one, but it's good to specify "httpd" (you did by mentioning the channel name)
<seekr> right
<seekr> I mean the Apache server.  :)
<tomreyn> the rewrite module comes with its own logging engine. by default, it doesn't log.
<tomreyn> consider enabling logging for the rewrite engine, so you'll know whether it's working and what it does / how it handled requests.
<seekr> I did see something in one of the articles about setting a logging level, but I'm not sure where to put the directive - apache2.conf?
<seekr> tomreyn: I'm looking at https://httpd.apache.org/docs/current/mod/mod_rewrite.html
<tomreyn> you can either put this in the module configuration file, those are in /etc/apache/mods-available/
<seekr> It gives the example:  LogLevel alert rewrite:trace3
<tomreyn> in this case it should be rewrite.conf
<tomreyn> this will then apply server-wide
<tomreyn> or you can do it in the virtuaolhost configuration file, which should be located in /etc/apache2/sites-available/
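The LogLevel example above, wired in the Debian/Ubuntu way; the snippet name is made up, and module-qualified LogLevel needs Apache 2.4+:

```shell
# Server-wide mod_rewrite tracing via a conf snippet:
sudo tee /etc/apache2/conf-available/rewrite-debug.conf >/dev/null <<'EOF'
# Verbose rewrite tracing; disable again once the rules behave.
LogLevel alert rewrite:trace3
EOF
sudo a2enconf rewrite-debug
sudo apache2ctl configtest && sudo systemctl reload apache2

# Rewrite decisions then appear in the error log:
sudo tail -f /var/log/apache2/error.log
```

To scope it to one site instead, put the same `LogLevel` line inside that site's `<VirtualHost>` block in /etc/apache2/sites-available/.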
<seekr> Well, I see only symlinks in mods-enabled
<tomreyn> i didn't write "mods-enabled"
<seekr> tomreyn: can it be done in the .htaccess file for the site?
<seekr> yes, I know - you wrote "mods-available."
<tomreyn> i wrote "/etc/apache/mods-available/", which is wrong, it should have been "/etc/apache2/mods-available/"
<seekr> right
<seekr> Are all the *.conf files in that directory processed when the server begins operation?
<tomreyn> https://httpd.apache.org/docs/current/mod/mod_rewrite.html is the right documentation indeed. as with all apache configuration directives, the directives listed there always say which context they can be used in. for example, the "RewriteCond Directive" can be used in the server config, virtual host, directory and .htaccess contexts
<tomreyn> this should answer <seekr> tomreyn: can it be done in the .htaccess file for the site?
<seekr> gotcha
<tomreyn> yes, and when it is reloaded <seekr> Are all the *.conf files in that directory processed when the server begins operation?
<seekr> I guess I'd assumed that the files in mods-enabled get moved from mods-available when specific modules are selected for use - but my assumption may be incorrect.
<tomreyn> .htaccess files are special in this sense since they have to be loaded and parsed every single time a file or location in the same directory or below of them is accessed
<tomreyn> ... so they cause considerable load, which you should try to prevent.
<seekr> .htaccess, though less efficient, perhaps, is probably cleaner, in that it's site specific.  I've not yet figured out how to run multiple sites -- that's one of my next tasks after I get mod_rewrite working.
<tomreyn> please read https://help.ubuntu.com/lts/serverguide/httpd.html to get a general understanding of how ubuntu uses the apache httpd <seekr> I guess I'd assumed that the files in mods-enabled get moved from mods-available when specific modules are selected for use - but my assumption may be incorrect.
<seekr> will do - thanks
<seekr> tomreyn: Does the load come from the server having to load the module each time it's working on the site whose .htaccess contains that directive?
<tomreyn> .htaccess files are directory specific, plus have this property of being changeable during operation of the webserver (no reload needed). only if you depend on this property should you actually use them. otherwise use the same directives in a <directory></directory> scope of the virtualhost configuration file. <seekr> .htaccess, though less efficient, perhaps, is probably cleaner, in that it's site specific.  I've not yet figured out how to run multiple sites -- that's one of my next tasks after I get mod_rewrite working.
<tomreyn> virtual hosts (multiple sites) should go to /etc/apache2/sites-available - read https://help.ubuntu.com/lts/serverguide/httpd.html for a better understanding of the mechanisms involved in this.
<tomreyn> yes <seekr> tomreyn: Does the load come from the server having to load the module each time it's working on the site whose .htaccess contains that directive?
<seekr> tomreyn: I was thinking of the other directives - to show debugging info - that one should be site-specific, not the one that refers to loading mod_rewrite - but if it gets loaded when the server starts due to the symlink we were talking about, there's no need to put the directive to load the module into .htaccess
<seekr> (or into apache2, which you already told me is unnecessary)
<seekr> *apache2.conf
<tomreyn> it's not a huge amount of load if you've only got a few visitors, but you should always try to set your systems up for the best performance, as long as it doesn't mean you're disabling required features or adding a lot of complexity.
<seekr> agreed - sounds completely reasonable
<tomreyn> by site-specific, do you mean in the virtualhost scope or spache 2 httpd scope (the highest/greatest)?
<seekr> tomreyn: I just realised that I'm still thinking like I'm in a shared hosting environment.  I suppose that a fair bit of what's in the .htaccess that comes with Joomla! could get moved to Apache config files.
<tomreyn> yes, this is usually so. web applications' documentation often describes configurations in a way that is suitable for shared hosting, where there are many users with ftp or sftp access to their own web space.
<seekr> I mean that if I run several (web) sites, and I'm only trying to debug one of them, or to alter some property that applies only to a specific site, the appropriate directive should go into the .htaccess at the top level directory for some specific site.
<tomreyn> if you don't do shared hosting (can still host multiple sites, just control is central with you) then it's better to move directives from htaccess files to a different context
<seekr> yeah
<tomreyn> ideally, your goal should be to be able to disable htaccess functionality entirely
<tomreyn> you will not always be able to achive this goal but try to get as close as possible.
<seekr> not a bad goal - as long as what's in those directives doesn't mess up operation of sites based on some other technology (e.g. Wordpress or plain HTML).
<tomreyn> this will also help you identify overlapping directives
<tomreyn> when you run a service, you aim for the most concise and condensed description of the service. you also try to remove dynamics, so that you can reliably tell how a service will behave at a given time based on the configuration you have. if you have .htaccess and shared hosting where users may change those files between apache httpd restarts, you can't really tell very well how things are behaving at a given time, since the user may have changed configurations.
<tomreyn> now think of the user being someone malicious who got write access to a shared webhost. if they can change htaccess files they can do more harm. so you don't want them if you can get around it.
<seekr> I guess that if (as I do in the VM server setting) I have complete control of the Apache server configuration, I can achieve what is now being done by means of .htaccess files within the sections of the config file that describes setting up the right environment for each specific site.
<tomreyn> effectively you (almost) always want to have virtualhosts, even if you just have a website, it should go into a virtualhost configuration file in /etc/apache2/sites-available (with a symlink to that in sites-enabled)
<seekr> I've become quite sensitive to security concerns, having recently gotten my sites on a shared hosting server locked due to one or more attacks, at least one of which added code to a whole lot of PHP files.
<tomreyn> so you'd have a site specific virtualhost context. once you have this, you can just add a directory context into this virtualhost configuration and move the contents from a htaccess file there.
<tomreyn> most of the time what causes your websites to be successfully attacked (they are generally under attack all the time, but most attacks dont succeed) is outdated webapps.
<seekr> tomreyn: yeah - that's what I want to do - even though what I'm doing now is only intended to be temporary, it's a wonderful learning experience - I want to see how to set things up such that, as happens in a shared server setting, the URL used to access the server is used to direct control to files pertaining to some specific web site.
<tomreyn> whole servers are usually successfully attacked due to insecure ssh passwords.
<tomreyn> do you have your own domain name, and can you fully manage dns for it?
<seekr> Well, the one attack of which I became aware, which may not be the one producing what I was told by the server admin was producing attacks on other sites, was based on my attempt to configure a forum on my site.  As soon as I set it up, I began getting lots of subscription requests from what I assume to be a botnet, based in Poland.
<seekr> I have several domains.  I can control one of them - I could move another of them - and a third one would require my making a request to the domain name owner.
<tomreyn> that's normal. but not the cause. subscription requests, also successful subscriptions by spammers to a web forum should not yet allow them to make your server run requests against other web hosts on the internet.
<tomreyn> if someone was able to make your server / shared host run requests against other internet hosts it means they gained some kind of (limited or full) code execution (at least php / whatever the web forum was based on)
<seekr> That's what I'm hoping, because I'd like to run the latest version of the site, since my previous backup goes back six weeks.  I intend to disable the forum module, in any case.
<tomreyn> seekr: if you have full control over a single domain name, including the ability to create subdomains, that's good enough for testing virtual hosting.
<seekr> tomreyn: The clamav output provided by the server administrator showed a ton of infected PHP files, and evidence that the attack that caused that corruption happened several years ago, when I was attempting to run a standalone forum system.
<tomreyn> right, so years ago you were running insecure software or insecure configurations there, allowing the site to get compromised
<seekr> I had actually seen the strange code in lots of PHP files, but had assumed that it had gotten stuck there by the web hosting company for some commercial purpose.  In retrospect, I should have investigated as soon as I noticed that code a couple years ago!
<tomreyn> back to domains. if you can create subdomain1.mydomain.com through subdomain2.mydomain.com and point them to your webserver, you'll have a good foundation to test virtualhosting.
<seekr> tomreyn: I only ran that site for a year or so, but the damage was done - didn't matter that I was no longer running that site, since the infection was throughout my account.
<seekr> yes - that's precisely what I intend to do
<tomreyn> your web host should never touch anything you place in a shared host, unless it's for security reasons, but then they'd tell you. it should be under your exclusive control.
<seekr> So I want to both direct access based on what domain name is being used, if I decide to move one of the domains (which I won't do unless the old host turns out to be too much trouble to go back to), and the second is to set up subdomains, which I've only ever done previously by means of cPanel
<tomreyn> (this said, they might inject ads when delivering content if they're a free webhost, but this should never mean they change your files.)
<seekr> tomreyn: yeah - I should have immediately opened a trouble ticket - was too lazy - "live & learn!"  :-\
<tomreyn> sure, we all make progress this way. ;)
<seekr> slowly but surely  :)
<seekr> "Experience is a hard teacher" (and the moon is a harsh mistress :) )
<tomreyn> so what i'd do about learning how vhosts work is to create three directories, one for each (sub)domain / virtualhost: /var/www/site1 to /var/www/site3, and in these directories just place an index.html file saying "site1" to "site3".
<tomreyn> and then you setup the virtualhosts for those documentroot's in /etc/apache2/sites-available
<seekr> good idea - is apache2.conf the best place to put the virtual host directives?
<tomreyn> then you a2ensite those. and finally you reload the apache2 (httpd) service
<seekr> right
<tomreyn> no, virtualhosts should not go to apache2.conf
<seekr> I'm looking forward to playing with that stuff!
<tomreyn> in fact you should try not to touch apache2.conf. unless you're changing something which is already in there.
<seekr> Sorry - I guess what I meant is things like "<Directory /var/www/>" for each of the subdirs - or is there a cleaner method?
<tomreyn> (but even then you may prefer to override what's in apache2.conf by a configuration in /etc/apache2/conf.d/)
<tomreyn> "you setup the virtualhosts for those documentroot's in /etc/apache2/sites-available"
<tomreyn> directory directives are probably virtualhost specific and should thus go into a virtualhost configuration file in /etc/apache2/sites-available
<seekr> Ah - so the directive I just mentioned just sets the top-level file system path, and the subdirs for the sites are assumed to reside within that one (alongside the default "html" dir).
<seekr> In fact, I guess there should be something somewhere that designates "html" as the presently one-and-only directory hosted by the server.
<tomreyn> if you have a virtualhost with a documentroot of /var/www/mysubdomain then you may want to have a "<Directory /var/www/mysubdomain>" directive in there to apply directives to this / location of the site
<seekr> But that stuff would go into the conf file in sites-available?
<tomreyn> yes
<seekr> okay - I'll poke around in that directory and see what I can find that looks interesting.  :)
<seekr> I've been reading that Ubuntu httpd article, and am finding it quite interesting - thanks again!
<tomreyn> here's a different take on ubuntu / debian style apache httpd configurations and virtual hosting: https://www.digitalocean.com/community/tutorials/how-to-set-up-apache-virtual-hosts-on-ubuntu-16-04
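tomreyn's three-site drill, sketched end to end; the example.com names stand in for the subdomains mentioned above:

```shell
# One docroot, one index.html and one vhost file per test site:
for n in 1 2 3; do
  sudo mkdir -p /var/www/site$n
  echo "site$n" | sudo tee /var/www/site$n/index.html >/dev/null
  sudo tee /etc/apache2/sites-available/site$n.conf >/dev/null <<EOF
<VirtualHost *:80>
    ServerName site$n.example.com
    DocumentRoot /var/www/site$n
</VirtualHost>
EOF
  sudo a2ensite site$n
done
sudo apache2ctl configtest && sudo systemctl reload apache2

# Spot-check name-based selection without touching DNS
# (should return "site2"):
curl -H 'Host: site2.example.com' http://localhost/
```

Per the discussion above, a `<Directory /var/www/site1>` block inside each vhost file is where .htaccess contents would eventually move.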
<seekr> BTW, I found an expert in the #httpd channel last night who seemed somewhat hostile to the Debian style organisation of files talked about in the Ubuntu article.  Seems they don't care to provide support for such systems.
<tomreyn> i'm saying "ubuntu / debian style" since not all linux distros use this configuration file layout, others do it differently. the apache httpd project actually does it differently, too, and some of those folks don't like the debian / ubuntu way at all.
<seekr> Thanks much - yeah 16.04 is what I'm running, for reasons we talked about earlier.
<seekr> tomreyn: yeah - that's what I experienced - seemed like a different tribe or religion  :)
<tomreyn> thumbs is very annoyed by the constant stream of debian / ubuntu folks coming to #httpd expecting that the configuration file layout they see on their file system is 'the standard' and make false assumptions based on it.
<tomreyn> which i can understand, and which is why i'm pointing this out.
<tomreyn> this said, i do like the configuration layout debian + ubuntu use, and think it makes a lot of sense for the most part.
<seekr> yup - that's who I was talking to - he at least pointed me at a few articles, but refused to go into specifics - told me to go away and RTFM, in effect.  :)
<tomreyn> yeah that's who you become when you're constantly confronted with people using the software you wrote in a way you didn't mean it to be used.
<seekr> lol
<tomreyn> it's actually a sad story. but it's nice that he's still helping out. there are other channels with many people in them where you never get a response at all.
<tomreyn> #kvm is one of them
<tomreyn> you may get a response but if you do it's by someone else seeking support there as well
<seekr> KVM is the VM thing that some folks use rather than vbox?
<tomreyn> yes, or the other way around
<tomreyn> there are gazillions more kvm installations than there are vbox installations
<seekr> depends on your perspective, I guess - there was some other thing that KVM required when I looked at it over a decade ago - maybe not really required.  Seemed harder to set up and use than vbox.
<seekr> really!  hmmm
<tomreyn> kvm is used by cloud hosts in huge numbers, vbox is used on some desktops
<tomreyn> but they each have their uses in these environments.
<seekr> tomreyn: I'm finding largely the same thing in the #joomla channel - the fellow who sysops the channel knows a lot and is friendly and helpful when he's around, which hasn't been all that much lately - there are somtimes devs there also, who chatter amongst themselves about arcane topics.
<tomreyn> (but you can also use kvm on a desktop, which i do, and there may also be people using vbox on servers, it's technically possible)
<seekr> ah - the Ubuntu server I'm using may itself be running in a kvm environment, then.
<tomreyn> virt-what would tell
<seekr> what's that?
<tomreyn> a command (and the package providing it) which you can run in a linux shell to tell you which virtualization environment, if any, you're operating in.
<seekr> oh - I'll take a look...
<tomreyn> i guesses. it could guess wrong.
<seekr> "bash: type: virt-what: not found"
<tomreyn> *iT
<tomreyn> weird error message
<tomreyn> which ubuntu is this?
<seekr> I can install it, though.
<seekr> 16.04 LTS
<seekr> I used "type virt-what" to get that message.
<tomreyn> oh, i didn't mean that you should type "type"
<tomreyn> just "virt-what"
<seekr> I know - I just like to look to see if unknown things are installed, rather than just trying to run 'em.  :)
<tomreyn> maybe you should also install command-not-found
<seekr> I installed and used the command, and it says "kvm" (!)
<tomreyn> so you were right
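The commands used in the exchange above, plus an alternative that systemd-based releases already ship (output lines are examples, they depend on the host):

```shell
sudo apt install virt-what   # not installed by default on 16.04
sudo virt-what               # prints the hypervisor, e.g. "kvm"; no output means bare metal

# Alternative that needs no extra package on systemd releases:
systemd-detect-virt          # e.g. "kvm", or "none" on bare metal
```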
<seekr> yup - I am every once in a while  :)
<seekr> "command-not-found - Suggest installation of packages in interactive bash sessions"
<seekr> hmmm - I do generally get such messages, actually.
<tomreyn> then it's probably already installed
<tomreyn> it should be
<seekr> "bash: type: command-not-found: not found"  :)
<tomreyn> and the reason you didn't get the helpful hint on how to install virt-what was just that you started the command with "type", which exists as a command
<seekr> yeah - I did so intentionally
<seekr> % command-not-found
<seekr> command-not-found: command not found
<tomreyn> dpkg -L command-not-found
<seekr> https://termbin.com/nomv
<tomreyn> ...lists the files installed from the given package, and their locations on the file system.
<seekr> okay - I'll take a look - later - want to get back to my Apache config activities, ya know  :)
<tomreyn> yeah, good luck there. i'll take a bit of a break.
<seekr> okay - catch ya laters, then - thanks for all the help!
<tomreyn> actually i'm still around, but so are others. just ask if you have more questions.
<seekr> will do
<TJ-> Anyone ever experimented/tested with dmdedup ?
<tomreyn> maybe ask in #arch ;)
<TJ-> hmmm, why, has that distro adopted it?
<tomreyn> is there anything they have not adopted? :)
<tomreyn> i guess they are more adventurous than the average ubuntu server maintainer
<tomreyn> but i have no facts for you, just wasting time, sorry.
<TJ-> I'm kinda scared of entering #arch :p
<tomreyn> you'll be greeted by dust fairies whispering RTFM in a grumpy sysadmin voice to your ears.
<hadifarnoud> I'm struggling to setup my floating IP in ubuntu 16.6
<hadifarnoud> this is my config: https://gist.github.com/hadifarnoud/2ca1bc8f4f2723fd1eee0c2601058875
<hadifarnoud> cannot ping my IP nor can I see it in ifconfig
<hadifarnoud> can anyone help, I'm lost. no idea what went wrong
<tomreyn> hadifarnoud: chances are the network interface name differs from eth0
<hadifarnoud> tomreyn: it does show up in `ip a` but still no ping
<hadifarnoud> it's frustrating
<tomreyn> ok, well i'm not really into cloud init
<tomreyn> also there's no "ubuntu 16.6"
<tomreyn> 78.47.223.238 does respond to ping for me.
<tomreyn> 138.201.116.62 also
<tomreyn> no luck on 2a01:4f8:1c17:5d80::
<tomreyn> hadifarnoud: ^
<hadifarnoud> it says inet6 2a01:4f8:1c17:5d80::/64 scope global deprecated
<hadifarnoud> in IPv6 I know nothing. no idea if I did it right
<tomreyn> well the gateway is wrong
<tomreyn> actually it can be right, ignore me
<hadifarnoud> is it ok that it says scope global deprecated?
<hadifarnoud> `ip a` says that
<TJ-> hadifarnoud: you've got an illegal IPv6 address there
<TJ-> hadifarnoud: the interface identifier set to all zeros is reserved for subnet-router anycast
<hadifarnoud> TJ- can you help me set correct one?
<hadifarnoud> is this one correct? address 2a01:4f8:1c17:5d80:78:47:223:238
<TJ-> hadifarnoud: try "address 2a01:4f8:1c17:5d80::1/64"  (note the 1)
<hadifarnoud> does this include `2a01:4f8:1c17:5d80:78:47:223:238`
<TJ-> hadifarnoud: 'address' specifies a specific address; If you want to use "2a01:4f8:1c17:5d80:78:47:223:238" then you'd want "address 2a01:4f8:1c17:5d80::78:47:223:238/64" - although I'm not sure how ifupdown handles IPv6 addresses with IPv4 identifiers
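A minimal ifupdown stanza along the lines TJ- describes might look like this (interface name, address, and gateway here are assumptions, not verified against this box; fe80::1 is a common IPv6 gateway on Hetzner-style networks, so check your provider's docs):

```
# /etc/network/interfaces — static IPv6 sketch
iface eth0 inet6 static
    address 2a01:4f8:1c17:5d80::1/64
    gateway fe80::1
```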
<hadifarnoud> I think I set that IPv6 for my email server. probably need to set something that works with that
<hadifarnoud> TJ- ^
<Annoyed> Greetings
<Annoyed> Two questions... I can disable the built in ntp sync daemon with " timedatectl set-ntp off ", install Chrony, start that, and that will sync the system clock along with providing ntp service to the inside network, correct ?
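What Annoyed describes would look roughly like this on 18.04 (a sketch, not verified here; the allowed subnet is an example):

```shell
sudo timedatectl set-ntp off                       # stop systemd-timesyncd syncing the clock
sudo apt install chrony
# serve NTP to the inside network (subnet is an example):
echo 'allow 192.168.1.0/24' | sudo tee -a /etc/chrony/chrony.conf
sudo systemctl restart chrony
chronyc tracking                                   # confirm the system clock is being synced
```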
<jayjo_> I have a gitlab instance that I want users to be able to push/pull from using ssh but I also want to be be able to access this server via ssh for maintenance tasks. Is there a way to do some sort of ssh proxying based on user or some other method to keep the default port 22 usable by both applications ?
<TJ-> jayjo_: are there any ideas here you can adopt? https://stackoverflow.com/questions/33042817/have-sshd-forward-logins-of-git-user-to-a-gitlab-docker-container
<jayjo_> thanks - that looks like an exact solution, using docker as well
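For the archive: the usual pattern (which the linked answer also builds on) is a forced command in the git user's authorized_keys, so git traffic and admin logins share port 22 but only admin accounts get a real shell. The gitlab-shell path and key id below are assumptions for an omnibus-style install:

```
# ~git/.ssh/authorized_keys — one entry per GitLab user key
command="/opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell key-123",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... user@example
```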
#ubuntu-server 2019-01-01
<Annoyed> Greetings.
<Annoyed> I'm looking for a way to tell 18.04 LTS server to use 127.0.0.1 as its system resolver. I've got a running DNS server on this machine, and this server IP is handed to all devices that connect to the LAN, and all works. But when using the machine directly through a shell, it insists on using 127.0.0.53. It overwrites /etc/resolv.conf every reboot, so while I can set it there, the config doesn't stick. Where can I change that so
<Annoyed> it's persistent?
<tomreyn> systemd-resolved.service(8)
<Annoyed> Ok, thanks. let's see what that has to say
<Annoyed> Yes, thanks a lot. I think I can do what I want with that info.
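For anyone finding this later, the gist of what the man page points at is something like this (a sketch; option names per resolved.conf(5) on 18.04):

```shell
# /etc/systemd/resolved.conf
[Resolve]
DNS=127.0.0.1          # forward resolved's queries to the local DNS server
DNSStubListener=no     # stop listening on 127.0.0.53

# then point /etc/resolv.conf at the non-stub file and restart:
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
sudo systemctl restart systemd-resolved
```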
<SJr> Did Ubuntu Server just hard code the timezone of the server to UTC? (not the system clock but the value of /etc/localtime)
<cryptodan_mobile> SJr: not that I know of and why would they?
<SJr> I dunno, I just find it weird that it is set that way. I guess there is no reason I can't change it.
<cryptodan_mobile> What makes you think its hard coded?
<tomreyn> cryptodan_mobile: if SJr is referring to subiquity, i think this may be so, since there is no prompt for a timezone during installation, and i dont see how the users' preference could be determined automatically.
<tomreyn> so i guess it may *default* to UTC, but would not call it 'hardcoded'
<cryptodan_mobile> I've been prompted with timezone selection using ubuntu server 12.04 to 16.04
<cryptodan_mobile> I get this https://assets.digitalocean.com/articles/1404_optional_recommended/choose_timezone.png
<tomreyn> cryptodan_mobile: right, subiquity is the 'new' default server installer. 18.04 has it, maybe also some previous non LTS releases.
<tomreyn> your screenshot is from the debian installer, which is still available as the "alternative installer" in 18.04
<cryptodan_mobile> Why change
<cryptodan_mobile> Everything was so much easier with old installer.
<tomreyn> did you actually try the new one?
<SJr> Thanks, tomreyn.
<tomreyn> yw
<cryptodan_mobile> Nope but not being able to select time zones at install is stupid
<tomreyn> you can subscribe yourself to bug 1726683
<ubottu> bug 1726683 in subiquity "does not allow configuration of timezone" [Undecided,New] https://launchpad.net/bugs/1726683
#ubuntu-server 2019-01-02
<seekr> tomreyn: Hi.  I just wanted to send a quick thanks again for your help, especially for pointing me at the Ubuntu Apache server setup article - between it and another one I found by searching on "test mod_rewrite," I found why rewriting wasn't happening.  It turns out that there was a directive in apache2.conf that was preventing .htaccess file from being processed.  Though I'd like to bring the content of those files back into server config ones
<seekr> eventually, just getting the thing working for now, which it now is, is good enough.  Much obliged!
<tomreyn> seekr: you're welcome. i'm surprised, since "AllowOverride None" (which disables .htaccess parsing) is not set by default.
<tomreyn> you should totally work towards making it that, though, for performance and security reasons https://httpd.apache.org/docs/current/howto/htaccess.html#when https://haydenjames.io/disable-htaccess-apache-performance/
<tomreyn> also, as a side note, if you like php to be at least somewhat 'debuggable' then you should not use mod_php but fpm with either nginx (which IMO is usually faster) or apache httpd.
<tomreyn> it's also a matter of resources you have available, though. if you're generally short of ram, and don't have a lot of requests, mod_php can be the better option
<seekr> tomreyn: Thanks for the further advice.  Actually, "AllowOverride None" was set in the default apache2.conf that came with the new install from the repo.  I came across something that suggested messing up an .htaccess file to try to generate an error, which would indicate the file is being processed (it wasn't).  s/None/All did the trick!  I will check out those other things you suggest when I get a chance.  Thanks again!
<tomreyn> seekr: hmm actually i may be misinformed and "AllowOverride None" is indeed set by default. sorry then.
<tomreyn> s/None/All/ is not what you want there
<tomreyn> i'd keep "AllowOverride None" in apache2 and override this within the virtualhost scope by another AllowOverride directive, which (unless All is needed) I would then limit to those directive types you actually need in .htaccess files. https://httpd.apache.org/docs/current/mod/core.html#allowoverride
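That suggestion as an Apache config sketch (the directory path and directive classes are examples; see the AllowOverride docs linked above):

```
# apache2.conf keeps the restrictive global default:
#   AllowOverride None
# then loosen it only for the vhost's document root:
<Directory /var/www/example>
    AllowOverride AuthConfig FileInfo    # only the classes the .htaccess files need
</Directory>
```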
<cryptodan_mobile> seekr: you are welcome even though I suggested something minuscule
<Ussat> Huh, anyone had any ossues putting Ubuntu 16.04 LTS on a Dell Precision 5820 ? keeps saying no nic detected
<Ussat> issues
<Ussat> google is not telling me anything
<seekr> tomreyn: thanks - will do ASAP
<cryptodan_mobile> Ussat: server or desktop?
<lordcirth> Ussat, seems like a pretty new PC?  Try 18.04, or install the hwe kernel.
<lordcirth> Actually I think Desktop installs hwe by default
<Ussat> Yea...we got it...1804 worked, but client insists MUST be 1604...we will see where this goes
<Ussat> Yea its a workstation/server....1604 shit all over, turns out this thing has 4 X 1TB SSD's in it and they did not bother to tell us how they want it configured...I have email out to the requesting client....sigh...
<Ussat> but I think they will bitch about 1804 because of the software says it requires 1604
<tomreyn> can anyone recommend a good efi-amd64 CLI based live system which comes with (or where it's easy to add when live) mdadm, lvm2, cryptsetup?
<tomreyn> i'm using the d-i server installer now, dropping to a shell. its not ideal, but workable.
<lordcirth> tomreyn, I think parted magic has a boot option to boot without X
<tomreyn> thanks. i forgot free + open source on my requirements list
<tomreyn> when, on a uefi booted system with GPT and ESP, you run update-grub and grub-install /dev/sda , which files would you expect to see in and below /boot/efi/EFI ?
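No one answered in-channel, but for reference: on a stock UEFI 18.04 install with the signed GRUB packages, /boot/efi/EFI typically ends up holding something like the following (the exact set depends on whether shim-signed and grub-efi-amd64-signed are installed):

```
/boot/efi/EFI/ubuntu/shimx64.efi    # shim first-stage loader (secure boot)
/boot/efi/EFI/ubuntu/grubx64.efi    # GRUB's EFI binary
/boot/efi/EFI/ubuntu/grub.cfg       # stub config chaining to /boot/grub/grub.cfg
/boot/efi/EFI/BOOT/BOOTX64.EFI      # fallback boot path (with --removable, or placed by shim's packaging)
```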
#ubuntu-server 2019-01-03
<rsully> Any tips on how to debug a VM that is stuck on cloud-init during boot? It gets through network devices and routes, then hangs
<lordcirth> rsully, anything in syslog?
<rsully> lordcirth how could I check that exactly? ssh isn't started yet
<lordcirth> Ah, so you don't have a tty either? Just a stuck boot?
<rsully> nada, this is what I see from VNC: https://other.f1le.me/view/f289d95e48a629db386f2143436e9afd
<lordcirth> rsully, ah ok. In the past I've added a serial console device to the VM, logging to disk.  Depending on the type of freeze, that might get you more info
<tomreyn> rsully: are you sure it's stuck? since i've seen VMs spill this info after they printed the login prompt
<tomreyn> i.e. it may just be up and running and waiting for you to ssh
<tomreyn> (but yes i guess you should be able to hit enter and get another login prompt)
<rsully> tomreyn no its stuck, ssh does not work, does not respond to enters
<tomreyn> i see
<Gorian> so, no more "minimal" or "vm" install option in the ISO installer - any other options besides the cloud image + cloud init?
<tomreyn> Gorian: mini.iso still exists, and so does the alternative server installer
<Gorian> where/
<Gorian> *?
<tomreyn> ubuntu.com/download
<Gorian> ... that's not helpful at all. I've searched all over the damn place, and I've found cloud images,which seem to require cloud-init. I've found installers for powerpc and arm (alternative installers), and the normal ubuntu server install, which has no minimal options
<Gorian> I've found nothing offering minimal installation, and linking to the default download page certainly doesn't tell where it is either
<Gorian> ubuntu.com/download > "Ubuntu Server"  https://i.imgur.com/cwR5qrA.png "ubuntu 18.04 LTS and Ubuntu 18.10"
<Gorian> ubuntu.com/download > "https://www.ubuntu.com/download/cloud" - disk images that require me to write them directly to the VM disk and then initialize with cloud-init, as far as i can tell from running them
<tomreyn> Gorian: on the first screenshot you posted, does it mention "alternative"?
<Gorian> yes, did you see my comment about the alternative installers being for powerpc and arm and shit?
<Gorian> http://cdimage.ubuntu.com/releases/18.04.1/release/
<Gorian> as far as I can tell, the AMD64 installer is no different from the one linked on the main page
<tomreyn> it is
<tomreyn> it is different
<tomreyn> the default server installer now has "live" in its file name. the alternative one is the old-style debian-installer without "live" in its name
<Gorian> So, when I mentioned that I looked at all of those things, and then specifically mention that I looked at the alternative installers, wouldn't it have been easier to say "the alternative installer for amd64 does what you want" instead of linking the same download page I had already scoured for a half hour?
<Gorian> 🤦
<tomreyn> i told you you may want the alternative installer, first thing i told you.
<tomreyn> the other image i mentioned, mini.iso, has always been next to the netinstall stuff, and it still is: http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/
<tomreyn> this said, i'm not sure either of these installers offer a "minimal installation" choice. they just create small installations by default.
<tomreyn> the smallest you can create is probably one based on debootstrap, though.
<tomreyn> but then, size is not everything. will it suit your needs? i don't know (i don't know your needs).
<Gorian> tomreyn: I'll try it out, thanks.
<Gorian> and, actually, debootstrap isn't a terrible idea 🤔
<tomreyn> Gorian: if you debootstrap, go with the latest, i.e. disco
<Gorian> noted
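A debootstrap-based minimal install, per the suggestion above, is roughly this (run as root with the target filesystem mounted at /mnt; the suite and mirror are examples - tomreyn suggests the newest suite):

```shell
apt install debootstrap
debootstrap bionic /mnt http://archive.ubuntu.com/ubuntu
# then chroot in and add the pieces debootstrap leaves out:
chroot /mnt /bin/bash
apt install linux-image-generic grub-pc    # grub-efi-amd64 on UEFI systems
```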
<Ussat> Just classic......
<Ussat> Lab ordered a Dell w/Ubuntu 16.04 preinstalled, thats OK......they get it configured with 4X 1TB SSD's, thats ok also,  it has NO LVM......WTF ya all smoking ??
<Ussat> sigh
<Ussat> So I figure, OK NP, I will reinstall with the Dell iso....oh wait.......its all scripted and gives you no configure options
<Ussat> it just does its thing
 * Ussat cries
<TJ-> Ussat: reconfigure in-place from initramfs :)
<Ussat> I am waiting to see what the Lab wants it set up as...... if they just want a big 4TB disk, I am just gonna add them all to / and be done with it
<Ussat> I just find it totally asinine they did not use LVM
<Ussat> I am half tempted to call Dell to find out
<Gorian> use ZFS?
<Ussat> I could I guess......
<Ussat> need to see how they want this thing
<Gorian> ah, that's always fun
<Ussat> Oh its ALWAYS fun here
<tomreyn> there are actually applications where you want JBOD, such as big data
<Ussat> Right, this isnt that
<tomreyn> so *maybe* they actually ordered it this way
<tomreyn> ok
<tomreyn> ;)
<Ussat> and if youre doin "big data" 4TB is a drop in the bucket, a VERY big bucket
<tomreyn> true. also you'd probably want a small raid-1 for the OS
<tomreyn> Gorian: fyi https://bugs.launchpad.net/ubuntu/+source/debootstrap/+bug/1810123
<ubottu> Launchpad bug 1810123 in debootstrap (Ubuntu) "Merge debootstrap 1.0.112 from Debian Sid" [Medium,Fix committed]
#ubuntu-server 2019-01-04
<Glorfindel> is there any reason to not add 18.04 repos to a 16.04 install?
<Glorfindel> I'm needing the latest version of some software and it's not availible for 16.04
<teward> Glorfindel: yes, because it'll try and install *everything* from the 18.04 repos where there's newer software versions
<teward> and it *will* torch your system when it tries to do that
<teward> It'd be better to try and nitpick the individual packages, or find backported versions, or find a PPA for them (even though we don't support PPAs)
<Glorfindel> hm alright... (out of curiosity, what do you mean by torch my system?)
<Glorfindel> I'm averaging one dep resolution per day, it's been slow
<teward> Glorfindel: as in it'll break libraries, software versions, dependencies, etc. to a point where it will basically be easier to reinstall from scratch than fix the damage.
<Glorfindel> ahh, I see
<teward> Glorfindel: might be easier to know what's in 18.04 that's not in 16.04 in this case
<Glorfindel> well, libssl.so.1.1 was missing, along with libcrypto, but I did get those added, so now it's letting me know about the next unresolved deps, and there's probably more where they came from
<Glorfindel> I ended up building from source and cp'ing the required files to the server program dir
<teward> Glorfindel: well depending on the software it might be semi-easy to do a backport in a PPA rather than manually recompile as you're doing.  But without knowing the specific software in question :P
<Glorfindel> teward: what does creating a ppa backport all entail? I'm testing the minecraft bedrock alpha
<Glorfindel> right now the main holdup is figuring out what package has what lib
<teward> oooo, yeah that might be problematic.
<Glorfindel> it was working until they updated the deps and basically made ubuntu 18.04 the mandatory "plug and play" release
<Glorfindel> dropped all other distros as well, so maybe I should check into updating my vps :/
<teward> Glorfindel: yeah that kind of makes you need to do an upgrade.  Pretty sure that isn't going to backport cleanly, especially if it needs newer OpenSSL versions (1.1.1 is really new)
<Glorfindel> it's a shame about my uptime though :/
<Glorfindel> getting close to a year
<positivefix> Hi folks! I'm on a fresh 16.04 server install and snapd doesn't seem to work. This is the output of systemctl status snapd.service: https://paste.ubuntu.com/p/KSw4D6W46H/
<positivefix> And the output of journalctl -u snapd: https://paste.ubuntu.com/p/X3V63Ys8nY/
<neildugan> \join #arduino
<Ussat> OK, lets assume I am ssh -X into a Ubuntu 1.004 server that has a GUi, whats the disk utility called so I can launch it over the ssh tunnel ?
<Ussat> 16.04
<Ussat> NM got it
<lotuspsychje> place your details here geard along with your kernel version
<geard> hey guys, i'm having some issues with load testing against NGINX. when I hit the server with JMeter I am getting ~3Mbits of throughput using iperf; when not running JMeter I get ~9Gbits of throughput. Linux HQLB161 4.4.0-141-generic #167-Ubuntu SMP Wed Dec 5 10:40:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
<geard> lotuspsychje: thanks.
<TJ-> geard: could it be the Java overhead, or possibly some process/rlimits restricting JMeter?
<geard> TJ-: the JMeter stuff is running on 6 client machines(physical) the Ubuntu server seems to be the one having the issues as iperf is running from an independent system.
<geard> iperf client that is*
<ironhalik> hello
<ironhalik> I'm trying to set up iscsi with multipath on xenial server as the initiator, and Dells scv2020 compellet storage as the target
<ironhalik> and I'm hitting roadblock after roadblock :/
<ironhalik> right now - I'm logged into all the targets which appear to be working, but I don't see any block devices on them
<ironhalik> all I can see when checking the session, is: Attached SCSI devices: Host Number: 11 State: running - someone has any experience with anything like that?
<ironhalik> mhm weirdly enough, after reinitiating the session a couple of times - iscsiadm discovered a single LUN on one target, but not on the other ones (and all the targets should have the same LUN for multipath)
#ubuntu-server 2019-01-05
<trippeh> sarnold: 44Gbps of low latency live video over TLS on the low power intel quad core now
<trippeh> kernels becoming faster :)
#ubuntu-server 2019-01-06
<tomreyn> that's lovely!
<fatbrain> Hi, is there a way to install 18.04-lts server on mbr instead of gpt?
<tumbleweed> boot via legacy BIOS, not UEFI
<fatbrain> my system doesnt support UEFI
<tomreyn> fatbrain: so it'll be straightforward, i guess
<foo> I migrated from old digital ocean droplet to a new one. Same server specs. Ubuntu 14.04 to 18.04 (fresh install). I'm having a strange issue where some of my processes are hanging and building up (eg. ones that run every minute). I see this when I strace it, wait4(-1, ... until it dies: https://paste.ofcode.org/EHDvp2N6Y9cpbt2yNVAJ2v - can anyone else here make sense of this?
<fatbrain> tumbleweed, tomreyn: problem is that when I install, the installer automatically partitions the disk using GPT instead of MBR. My system refuses to boot using the "legacy bios" 1M grub core.img partition.
<tumbleweed> is that the new installer? I don't know much about that
<tomreyn> fatbrain: try the alternative server installer then. this, however, will also default to gpt partition tables if storages are >= 2 TB
<fatbrain> tumbleweed: probably, haven't installed since 16.04 don't remember what that looked like
<fatbrain> tomreyn: my system storage is 120GB, I'll try the alternative installer.
<tomreyn> also be sure to create msdos/mbr partition tables before you start the installer. it would not replace the existing gpt's
#ubuntu-server 2019-12-30
<tomreyn> can someone point me towards the right way of configuring a serial console which will show all systemd output on uefi booted 18.04.3?
<tomreyn> actually i have it working on the serial, but it's now partially missing on the attached monitor
<tomreyn> here's what i'm using https://paste.ubuntu.com/p/h34YyCH9MG/
<tomreyn> using those separate GRUB_TERMINAL_* (while commenting out GRUB_TERMINAL) doesn't seem to help.
<tomreyn> the problem is that on the HDMI monitor I only see the output until SCSI initialization (which I think is when a graphics mode switch takes place), after that the next output i see there is the login prompt, whereas the serial console keeps showing output all the time from SCSI activation until login prompt
<lotuspsychje> tomreyn: i presume you already looked on https://help.ubuntu.com/community/SerialConsoleHowto ?
<tomreyn> yes, i think this is pre-systemd and outdated
<lotuspsychje> kk
<tomreyn> hmm actually i should try with two console= options on grub
<tomreyn> oh i have that
<lotuspsychje> yeah looks like your paste matches
<tomreyn> thanks, though, lotuspsychje :)
<tomreyn> changing console=tty1 to console=tty0 didn't make a difference.
<tomreyn> hmm maybe i just need / want !bootlog
<tomreyn> !bootlog
<ubottu> To get a more verbose log of the boot process, remove "quiet" and "splash" from the kernel boot parameters and add "debug systemd.log_level=info". For info on editing kernel boot parameters, see https://wiki.ubuntu.com/Kernel/KernelBootParameters
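Putting the pieces of this exchange together, /etc/default/grub would look something like this (serial unit and speed are examples; the kernel logs to every console= device, with the last one listed becoming /dev/console):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT=""                        # drop "quiet splash" for verbose boot
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="console serial"                       # GRUB menu on both outputs
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
# then: sudo update-grub
```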
<ChmEarl> smb, I have a focal buildroot setup and I can try some patches against recent Xen to fix the busted parts
<ChmEarl> smb, ping
<felix_221986> Hi experts, I have an Ubuntu 18.04.3 LTS on an eve-ng machine.
<felix_221986> My problem is every time I restart it, I have to do sudo dhclient ens3 to have IP from my Local network
<felix_221986> fandre@ubuntu-server:~$ cat /etc/network/interfaces
<felix_221986> # ifupdown has been replaced by netplan(5) on this system.  See
<felix_221986> # /etc/netplan for current configuration.
<felix_221986> # To re-enable ifupdown on this system, you can run:
<felix_221986> #    sudo apt install ifupdown
<felix_221986> auto ens3
<felix_221986> iface ens3 inet dhcp
<felix_221986> Am I missing something else?
<teward> netplan is how you configure your ethernet
<teward> not /etc/network/interfaces
<teward> unless you've reinstalled and reenabled ifupdown
<teward> felix_221986: ^
<teward> do you have ifupdown installed again, or are you using Netplan?
<shibboleth> https://techpiezo.com/linux/switch-back-to-ifupdown-etc-network-interfaces-in-ubuntu/
<felix_221986> I have Netplan but is the first time I use it
<felix_221986> Thanks guys, I'll give it a try with this link ;)
#ubuntu-server 2020-01-01
<sdhd-sascha> hey, if so, then - happy new year :-)
<shibboleth> https://ubuntu.com/download/server
<shibboleth> 18.04.3 did away with the old installer?
<shibboleth> i should go with 18.04.2?
<qman__> no, click the link for alternative installer
<tomreyn> happy 2020!
<tomreyn> yesterday, someone (non-paying) reported that they were unable to set up ubuntu advantage on 14.04 starting from http://ubuntu.com - they got stuck at the point where they had the "new UA" installed on their system but it failed connecting to the repository. with a timeout, whereas the user was able to browse https://esm.ubuntu.com/ (other than https://esm.ubuntu.com/ubuntu/pool/ ) fine.
<tomreyn> so, in case anyone canonical-ly feels like reviewing this - it may be generally broken currently.
<CodeMouse92> .ping
<CodeMouse92> Can anyone help? My Ubuntu 16.04 server suddenly won't connect to the internet via ethernet. It WAS working, and then abruptly stopped doing so.
<tomreyn> hmm, year 2020 bug?
<tomreyn> anything on the logs?
<CodeMouse92> Not really. But when it boots up, it takes about 6 minutes waiting for the network card to wake up
<CodeMouse92> It ain't midnight here yet tomreyn
<CodeMouse92> `sudo lshw -class network` shows the network card has the logical name enp4s0, and says link=yes
<tomreyn> i was only joking about "2020 bug" really
<CodeMouse92> The router is even lighting up when the network cable is plugged in
<CodeMouse92> kk
<CodeMouse92>  /etc/network/interfaces has the enp4s0 IN it...
<CodeMouse92> auto enp4s0 \n iface enp4s0 inet dhcp
<tomreyn> and "ip link" and "ip addr" say?
<CodeMouse92> ip link shows enp4s0 as the second device (after lo), and it says <BROADCAST,MULTICAST,UP,LOWER_UP>
<CodeMouse92> state UP
<CodeMouse92> ip addr shows it as well, and it has the same ipv6 after link/ether that ip link showed
<tomreyn> that's the MAC
<CodeMouse92> kk
<tomreyn> are there no inet or inet6 lines for the interface on ip addr?
<CodeMouse92> Also shows inet6 followed by another ipv6 addy
<CodeMouse92> Sorry I can't pastebin, no net on the affected box :P
<tomreyn> thats fine so far. did you expect it to have an ipv4 or ipv6 address?
<tomreyn> and what does the ipv6 (inet6) address start with?
<CodeMouse92> I'll check in just a sec, I'm grepping the system logs. I have an ungodly number of messages about this device, all from today...
<CodeMouse92> kernel: [2644115.104543] IN=enp4s0 OUT= MAC:60:(redacted):00 SRC=104.(redacted):229 DST=192.168.1.11 LEN=60 TOS=0x00 PREC=0x00 TTL=45....
<CodeMouse92> And then finally, this evening, I have DHCPDISCOVER on enp4s0 to 255.255.255.255 port 67 interval 18
<CodeMouse92> And stuff along those lines
<CodeMouse92> tomreyn: Okay, so the inet6 address starts with fe80::62a4...
<tomreyn> those IN OUT lines are usually iptables policy violations.
<tomreyn> fe80:... is like 127.0.0.1
<tomreyn> so you really have no ip address assigned via dhcp
<CodeMouse92> Okay, uhm...how in the hayseed do I fix this?
<CodeMouse92> It was weird, too, because it just aborted smack in the middle of the day, no warning
<CodeMouse92> s/aborted/stopped working/
<tomreyn> is this in a data center, or a home / office building?
<CodeMouse92> Home
<tomreyn> did you reboot the router, yet?
<CodeMouse92> I have router access. Actually, using the same network connection :P
<CodeMouse92> Yes
<tomreyn> did you reboot the server, yet?
<CodeMouse92> Twice
<tomreyn> i'm assuming the router does dhcp, right?
<CodeMouse92> It's supposed to
<tomreyn> sudo dhclient enp4s0
<CodeMouse92> It's thinking
<tomreyn> add -d
<tomreyn> ctrl-c and add -d
<CodeMouse92> okay
<CodeMouse92> Yeah, there's that DHCPDISCOVER on enp4s0 to 255.255.255.255 port 67 interval (whatever)
<CodeMouse92> Listening on LPF/enp4s0/60:a4:4c:5b:ac:91
<CodeMouse92> Sending on LPF/enp4s0/60:a4:4c:5b:ac:91
<CodeMouse92> Sending on Socket/fallback
<CodeMouse92> And then those DHCPDISCOVER messages, over and over and over
<tomreyn> run    tail -f /var/log/syslog     in a separate tty or terminal window
<CodeMouse92> Okay, it's spitting
<CodeMouse92> "DHCPDISCOVER on..." message
<CodeMouse92> And then...
<tomreyn> do you see more of these IN=... OUT= ... messages there?
<tomreyn> on the syslog tail, that is
<CodeMouse92> UFW AUDIT IN= OUT=lo SRC=127.0.0.1 DST=127.0.0.1
<tomreyn> so dhclient failed, or is still trying?
<CodeMouse92> Still trying
<tomreyn> can you ctrl-c dhclient, disable ufw, try again?
<CodeMouse92> Sure
<tomreyn> before you restart dhclient press enter on the log tail a couple times
<CodeMouse92> Well, it already spewed
<tomreyn> just so you'll know where the new logs dhclient causes start
<tomreyn> what is it spewing on syslog?
<CodeMouse92> It will do one DHCPDISCOVER, and then a huge mess of IN=lo OUT= MAC=00:00....:08:00 SRC=127.0.0.1 DEST=127.0.0.1
<CodeMouse92> Like, two dozen per
<tomreyn> so ufw is still effective
<CodeMouse92> I have fail2ban involved
<CodeMouse92> Is that going to mess?
<tomreyn> maybe
<tomreyn> stop it, too
<tomreyn> and check remaining rules with iptables -L
<tomreyn> --flush if there are leftovers
<CodeMouse92> I still have some DOCKER-... rules and some UFW-... rules
<CodeMouse92> But they all say 0 references
<tomreyn> is this a docker guest, or host?
<CodeMouse92> Guest
<CodeMouse92> Er
<CodeMouse92> NO
<CodeMouse92> DOcker host. I'm running docker within it.
<CodeMouse92> I went ahead and did `systemctl stop docker` too
<tomreyn> ok, so are the docker rules gone now?
<CodeMouse92> No
<CodeMouse92> Nor are the UFW rules
<CodeMouse92> Weirdly
<CodeMouse92> But...trying to dhcp again, fwiw, I don't get the in/out lines anymore
<CodeMouse92> What's further, it just mumbled something about link up....but it's still doing DHCPDISCOVER
<tomreyn> good. but still no ip address is assigned?
<CodeMouse92> No, it's still milling about
<tomreyn> and iptables -L is now empty?
<CodeMouse92> It is now
<CodeMouse92> *not
<CodeMouse92> NOT
<tomreyn> <tomreyn> --flush if there are leftovers
<CodeMouse92> I already did. Three times
<tomreyn> so whats left?
<CodeMouse92> sudo iptables --flush
<CodeMouse92> The same. DOCKER and ufw rules
<CodeMouse92> 0 references
<tomreyn> do you have ACCEPTs for dpt:domain and bootps for udp and tcp in INPUT?
<CodeMouse92> How would I check?
<tomreyn> iptables -L
<CodeMouse92> Chain INPUT (policy ACCEPT)
<CodeMouse92> target        prot opt source        destination
<CodeMouse92> Chain FORWARD (policy ACCEPT)
<CodeMouse92> target        prot opt source        destination
<CodeMouse92> Chain OUTPUT (policy ACCEPT)
<CodeMouse92> target        prot opt source        destination
<CodeMouse92> And then...
<CodeMouse92> Chain DOCKER (0 references)
<CodeMouse92> target        prot opt source        destination
<CodeMouse92> That identical format is repeated for...
<CodeMouse92> DOCKER-ISOLATION-STAGE-1, DOCKER-ISOLATION-STAGE-2, DOCKER-USER
<CodeMouse92> And over a dozen "ufw-..." rules
<tomreyn> but 0 references for all of those, right?
<CodeMouse92> Yes
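(The leftover chains here are expected: `iptables -F` only flushes rules, it doesn't delete the chains themselves. A sketch of the fuller cleanup, to be run with care on a box you can reach locally:)

```shell
sudo iptables -F               # flush rules in all chains
sudo iptables -X               # delete empty user-defined chains (DOCKER-*, ufw-*)
sudo iptables -P INPUT ACCEPT  # make sure default policies aren't dropping traffic
```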
<tomreyn> hmm, do you have another NIC and ethernet wire? i'd try cross testing those next.
<tomreyn> do you have other computers on the same network which can successfully get an ip address via dhcp now?
<CodeMouse92> I'm on one of them ;)
<CodeMouse92> I have another ethernet wire, but no other network card
<tomreyn> and this computer you're IRC'ing from got its ip address assigned after the server lost connectivity?
<CodeMouse92> Yes
<CodeMouse92> BTW, the local network addresses are hardcoded. For example, this is always 192.168.1.4, and the server is always 192.168.1.11
<tomreyn> does the router list connected devices, and devices it has handed ip addresses out to via dhcp?
<CodeMouse92> It should
<CodeMouse92> It's weird, it shows Ethernet 1 (where the server connects) as being up and connected
<CodeMouse92> The router says Connected DHCP 98.144.168.105 for the whole network
<tomreyn> are those ip addresses hard coded on the systems or do you mean that the router will always assign the same ip address via dhcp?
<CodeMouse92> Yeah
<tomreyn> "Connected DHCP 98.144.168.105" would refer to the router's WAN connection
<CodeMouse92> Except it shows that specifically under Ethernet
<tomreyn> so towards the internet
<tomreyn> well, silly router, i guess
<CodeMouse92> Okay, yeah, there's something spooky here
<tomreyn> the ip address is of charter communications, which i assume will be your ISP
<CodeMouse92> On a whim, I just fired up a different machine
<CodeMouse92> One that is connected via ethernet, NOT via wifi like this machine
<CodeMouse92> NO NET
<CodeMouse92> Same issue
<tomreyn> so i guess your router's dhcp server is broken. if this device is serviced by your ISP, you could get support from them.
<CodeMouse92> Crap.
<CodeMouse92> Okay, well, at least it ain't UBuntu
<tomreyn> i need to leave for now. good luck!
<CodeMouse92> I've reenabled UFW and Docker
<CodeMouse92> and Fail2ban
<CodeMouse92> Thanks!
<tomreyn> yw
#ubuntu-server 2020-01-02
<jiffe> ubuntu server installer doesn't support booting off zfs yet?
<tomreyn> jiffe: i think there's only experimental support for zfs on / in desktop installers so far.
<tomreyn> "We announced 6 months ago that support for deploying Ubuntu root on ZFS with MAAS was available as an experimental feature." "We want to support ZFS on root as an experimental installer option, initially for desktop, but keeping the layout extensible for server later on." https://ubuntu.com/blog/enhancing-our-zfs-support-on-ubuntu-19-10-an-introduction
<jiffe> ah, well that kind of sucks
<jiffe> seems like zfs would be more useful on a server environment
<jiffe> maybe desktop is a better testbed
<jiffe> so doesn't sound like I'd be able to manually install server on zfs by any means either, I'd still need to use desktop live version
<ducasse> jiffe: there used to be a guide on github somewhere, you could still use that, i guess
<ducasse> jiffe: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
<jiffe> ducasse: yeah I saw that, that requires the desktop live version
<ducasse> yeah, but you can just get rid of the desktop stuff and install the -server metapackage
<jiffe> worth a shot, eh?
<aberrant> hi all
<aberrant> I have a question regarding apt and packagekit.
<aberrant> I run a headless server, and packagekit logs TERM events every day
<aberrant> so I decided to disable and mask it
<aberrant> now apt is reporting errors on update
<aberrant> is there a way to resolve this so I don't get any errors or events?
<tomreyn> what do you mean by "TERM events"?
<tomreyn> aberrant: ^
<tomreyn> and which ubuntu server version is this?
<aberrant> tomreyn: sorry for the delay
<aberrant> tomreyn: daemon.warning: Dec 29 02:17:17 elemental systemd[1]:  packagekit.service: Main process exited, code=killed, status=15/TERM
<aberrant> tomreyn: this is 19.10
<aberrant> I get multiple messages per day
<aberrant> troubling especially due to https://blogs.gnome.org/hughsie/2019/02/14/packagekit-is-dead-long-live-well-something-else/
<aberrant> if I remove packagekit I also remove ubuntu-server, which I probably don't want to remove.
<chapman_r> Good morning and Happy New Year all.  I'm using Ubuntu 18.04.3 LTS and trying to set up Postfix to use an Exchange server as a relay.  I have read and followed numerous guides and instructions but I cannot get it to authenticate properly using PLAIN, LOGIN or NTLM (the Exchange server supports LOGIN/NTLM).  Any help with this would be most
<chapman_r> appreciated. Thanks in advance.
<JanC> did you check the logs in both servers?
<chapman_r> JanC, I do not have access to the logs on the Exchange server.  If I remove NTLM as a mech I get an error that no mechs can be found, so it seems that postfix defaults to using NTLM.  I can manually authenticate using "Auth LOGIN" with a base64 encoded username/password and I can send an email, but when postfix relays an email it uses NTLM and it
<chapman_r> will fail. I get the following error message: SASL authentication failed; server mail.domain.com[x.x.x.x] said: 535 5.7.3 Authentication unsuccessful
<chapman_r> JanC, it seems that postfix doesn't/cannot use LOGIN which works for me doing it manually
<tomreyn> LOGIN was an attempt to have secure authentication over insecure transports / links. nowadays almost everything is TLS, or should be, and PLAIN is the better approach for that, so you probably don't want LOGIN.
 * tomreyn can't comment on NTLM though.
<chapman_r> My tests are using "telnet mail.domain.com 25" and base64 encoded username/password.  could this be an issue with TLS encrypting with something other than base64?
<chapman_r> tomreyn, see above
<chapman_r> I'm sorry "encoding not encrypting"
<chapman_r> tomreyn, Also when I try using PLAIN, postfix complains about not having any Mechs (Mechanisms)
<chapman_r> when encoding the username and password I use "echo "username" | base64; echo "password" | base64"
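One pitfall with the encoding command above: `echo` appends a trailing newline, which gets baked into the base64 token and can make AUTH fail even with correct credentials (`printf '%s'` or `echo -n` avoids this). A minimal Python sketch of the tokens AUTH LOGIN and AUTH PLAIN expect; the username/password values are placeholders:

```python
import base64

def login_tokens(username, password):
    # AUTH LOGIN: username and password are sent as two separate
    # base64-encoded lines. No trailing newline may be encoded.
    return (base64.b64encode(username.encode()).decode(),
            base64.b64encode(password.encode()).decode())

def plain_token(username, password, authzid=""):
    # AUTH PLAIN (RFC 4616): one base64 blob of
    # authzid NUL authcid NUL password.
    raw = f"{authzid}\0{username}\0{password}".encode()
    return base64.b64encode(raw).decode()

print(login_tokens("username", "password"))  # ('dXNlcm5hbWU=', 'cGFzc3dvcmQ=')
print(plain_token("username", "password"))   # AHVzZXJuYW1lAHBhc3N3b3Jk
```

The shell equivalent of the first helper is `printf '%s' 'username' | base64`; note that `echo 'username' | base64` yields a different, newline-tainted token.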
<tomreyn> chapman_r: i was merely commenting on whether it makes sense to use LOGIN rather than PLAIN, in general, nowadays. I don't expect LOGIN to be no longer supported, nor to break when used in combination with TLS.
<chapman_r> tomreyn, gotcha, thanks for the comment.  I'm not very familiar with TLS and it's encoding/encryption methods.
<tomreyn> actually i mixed this up, sorry, LOGIN is just base64-encoded username + password, just like PLAIN. what i had in mind in terms of '(attempting to) securely authenticating without TLS' is CRAM-MD5, you should not need to use this anymore.
<tomreyn> chapman_r: LOGIN is defined in an expired RFC draft, never became official, and should be considered obsolete, but it is still often implemented, and also really similar to PLAIN.
<chapman_r> tomreyn, I haven't tried CRAM-MD5 yet.  Yes the documentation seems to suggest not using PLAIN or LOGIN anymore.  The exchange server only supports "LOGIN or NTLM"
<tomreyn> PLAIN would be ideal over encrypted transport from postfix's point of view, i guess. i suggest you ask in #postfix or #dovecot (if you're using their sasl) about how to make the two mail server variants cooperate properly.
<chapman_r> tomreyn, Oh I hadn't thought about that.  I'll try #postfix and see if anyone can help.  Thanks again for your help I really appreciate it.
<tomreyn> https://doc.dovecot.org/configuration_manual/authentication/authentication_mechanisms/ supports NTLM via samba.
<chapman_r> tomreyn, thanks I'll give it a read.
<JanC> it probably also depends on how the Exchange server is configured...
<chapman_r> JanC, Yes, and trying to get that information is next to impossible. I have had tickets in for weeks trying to get someone from the Exchange group to help, but no one has responded.  Gah! lol
<tomreyn> in the ms exchange domain, never-touch-a-running-system-if-you-somehow-managed-to-get-it-to-run is not just a recommendation but a punishable law, and deeper understanding of the system's configuration is considered somewhere between wizardry and godlikeness. and those higher beings may not wish to talk to a commoner linux admin.
<chapman_r> tomreyn, OMG this is so true lol
<JanC> if they actually know how their system works, they can also tell you what it's configured to expect...
<teward> Exchange is pain regardless :p
<JanC> I wonder if IIS still includes a mailserver?
<JanC> that actually worked more or less like a normal mail server
<teward> JanC: Last I checked, IIS doesn't bundle a mail server with itself
<teward> but Exchange bundles IIS
<teward> because that's how most of the Exchange protocol stuff communicates (HTTPS for web services and crap)
<JanC> IIS used to come with SMTP & POP3 servers at least
<teward> i'd have to check what a base IIS comes with, but it's still annoying with Exchange defaulting to NTLM auth.
<teward> TYPICALLY someone would set up a connector that lets you use STARTTLS+PLAIN or encrypted, at least in most environments I've seen that, except from mail gateways delivering in which case they're just plain whitelisted as individual send/receive connectors
<JanC> (as well as HTTP, FTP, gopher, etc.)
<JanC> NNTP too IIRC
<JanC> Exchange didn't even exist in the old Windows server days  :)
<JanC> see e.g. http://www.it-notebook.org/uncategorized/article/email_server_win_2003.htm
#ubuntu-server 2020-01-03
<orenii> Hey everyone, I've got some issue with Ubuntu Server 18.04 LTS running on a Windows Server 2012 R2 (on Hyper-V). I indeed set a static ipv4 in `/etc/netplan/01-netcfg.yaml` and can ping my LAN network but I can't ping `8.8.8.8`.
<supaman> orenii: sounds more like a problem with the network configuration in the hyper-v
<mybalzitch> do you have a default route set?
<orenii> supaman, I have other linux vms running (debian 9) and they work perfectly (static IP set)
<orenii> mybalzitch, ```
<orenii> mybalzitch, `
<orenii> Kernel IP routing table
<orenii> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
<orenii> 0.0.0.0         192.168.8.200   0.0.0.0         UG    0      0        0 eth0
<orenii> 192.168.8.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
<orenii> `
<orenii> (sorry for the duplicate message, thought I could edit it)
<supaman> please use a paste service for pastes like that
<orenii> Yes sorry, you're right. But well, I found my mistake because I didn't check it twice -> gateway is `192.168.8.200` which is the DNS server whereas the gateway should be `192.168.8.254` and what's funny is that I did the mistake twice (I reinstalled the server)
<supaman> ah, good :-)
<orenii> Now it's just a DNS issue, it should be easily fixed I guess, thanks a lot guys
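For reference, a minimal netplan sketch of the corrected setup above (the interface name `eth0` and the host address `192.168.8.10` are assumptions; the point is that the gateway and the DNS server are two different machines here):

```yaml
# /etc/netplan/01-netcfg.yaml (sketch)
network:
  version: 2
  ethernets:
    eth0:
      addresses: [192.168.8.10/24]
      gateway4: 192.168.8.254        # the router, NOT the DNS server
      nameservers:
        addresses: [192.168.8.200]   # DNS lives on a separate host
```

Apply with `sudo netplan apply`, then re-check the default route with `route -n` or `ip route`.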
<jiffe> 19.10 server installer seems to have issues, it's crashed on me 3 times in different places now
<supaman> virtual or bare metal?
<jiffe> bare metal
<supaman> faulty memory?
<jiffe> I can test but I don't think so, it's been running fine up to this point and I've been doing a lot of memory intensive work
<supaman> or a broken iso file ... checked the md5 sum of the download file?
<jiffe> memory is good, md5 checks out, I'm going to reflash the usb stick
<compdoc> jiffe, use memtest86+, and check the SMART info in the hard drive
<aberrant> morning
<aberrant> is anyone else seeing frequent clock drift messages in their syslogs?
<teward> aberrant: not on my systems, no.  Sounds like your system clock keeps drifting and needs reset by NTP / chrony / time sync
<aberrant> teward: yeah. Why do you think that is?
<aberrant> teward: it started happening after a reboot yesterday
<aberrant> (after I did a purge of a bunch of software)
<teward> not sure.  normally drift is caused by hardware clocks constantly drifting, I don't know why you'd suddenly start seeing a LOT of those messages at once
<teward> and clock drift can come from a number of different sources of issues
<compdoc> those batteries for the cos clock only last a few years. if the battery measures below 3v, then it needs to be replaced. there are commands to test reading and writing to the motherboard's cmos clock.
<compdoc> *for the cmos clock
<aberrant> compdoc: this is a brand-new mobo. Let's see if I can find some of those diagnostics
<aberrant> seth@elemental:~$ cat /proc/driver/rtc | grep batt
<aberrant> batt_status     : okay
<compdoc> theres also a bios setting in some boards that prevent writing
<compdoc> anyway, BF4 is calling me. there's a bullet with my name on it
<aberrant> could someone paste /proc/driver/rtc from their system?
<compdoc> https://pastebin.com/VsuwjfQb
<aberrant> thanks
<aberrant> mine looks identical
<aberrant> ok, shutting down to see if it's a battery issue. brb
#ubuntu-server 2020-01-04
<jiffe> ok so I ran memtest86 and that passed with no errors, smartctl indicates no disk errors, so I'm not sure why ubuntu server install keeps failing
<jiffe> this has been running ubuntu 16.04 for a couple years now
<jiffe> only thing different is I'm using a different disk as I'm not prepared to blow away the original yet
<tomreyn> failing how?
<tomreyn> and you're installing 16.04 or something else?
<tomreyn> jiffe: ^
<jiffe> I finally got it to work, I had to zero out the disk I was installing to first
#ubuntu-server 2020-01-05
<Skyrider> Can I ask a ufw related question here?
<Skyrider> Might as well.. Used ufw to block an ip range x.x.0.0/16, yet it doesn't appear to work. All the other ufw rules are working, just not this ranged deny I created.
<tomreyn> Skyrider: i would not recommend ufw for a server firewall. rather use iptables directly or some framework around it such as shorewall.
<JanC> well, ufw is a framework around it
<tomreyn> yes, but... not a complete or really good one
<JanC> it should be able to do what most people need for a simple server, no?
<tomreyn> yes, as long as they don't use the GUI for managing it.
<tomreyn> thats my personal POV, anyways
<JanC> ufw itself doesn't have a GUI
<tomreyn> gufw is a separate package, but i think it's preinstalled.
<tomreyn> ...on desktops
<JanC> I doubt it is
<JanC> it never supported ufw correctly, and hasn't been updated in a decade probably?
<tomreyn> hmm its in universe, probably not then, right
<Skyrider> Using ufw as I prefer its simplicity.
<tomreyn> so let's say ufw can be fine, just dont use gufw
<JanC> Skyrider: I assume you didn't forget to reload the firewall after adding that rule?
<JanC> also "doesn't appear to work" is rather vague
<Skyrider> all rules added through ufw should be instantly loaded.
<Skyrider> As for gufw, don't see a point in that seeing I use a headless server.
<Skyrider> And "doesn't work", I blocked an ip range and had to block the IP range in nginx as well. The blocked IP's keeps showing up in nginx's logs, while it shouldn't be logged at all as ufw should deal with it.
<Skyrider> Maybe ufw ip range deny/reject is borked?
<JanC> there is no other rule overriding it?
<Skyrider> Guess that's a fair point I haven't considered. allow port 80 I suppose.
<Skyrider> But shouldn't deny/reject override allow?
<Skyrider> It is listed in iptables: -A ufw-user-input -s 159.138.0.0/16 -j REJECT --reject-with icmp-port-unreachable
<Skyrider> As for ufw, was last updated 2018-12-14
<JanC> rule ordering?
<JanC> you'd need to have the deny for that range before the one to allow port 80
<JanC> as the first one that matches will be applied
<JanC> Skyrider: ^^^
<Skyrider> Thanks JanC, but I double checked. All rejects in ufw are set to top.
<Skyrider> [10] Anywhere                   REJECT IN   159.138.0.0/16
<Skyrider> [11] 80/tcp                     ALLOW IN    Anywhere
<Skyrider> 1 to 9 are also rejects.
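A quick way to sanity-check that a client address really falls inside the range a rule targets (first-match-wins ordering only matters once the match itself is right); this sketch uses Python's stdlib `ipaddress` module and placeholder addresses:

```python
import ipaddress

def rule_matches(ip, cidr):
    # The same membership test ufw/iptables perform for `-s 159.138.0.0/16`.
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

print(rule_matches("159.138.12.34", "159.138.0.0/16"))  # True
print(rule_matches("160.1.2.3", "159.138.0.0/16"))      # False
```

If the range checks out and the reject still sits above the allow rules, one thing to keep in mind is that ufw's default chains accept ESTABLISHED/RELATED traffic before user rules are consulted, so connections opened before the rule was added can keep appearing in nginx's logs until they close.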
