#ubuntu-server 2006-03-13
<Drac[Server] > Can anyone in here tell me how to enable USB keyboard support in the console?
<Drac[Server] > I need USB keyboard support.
<Drac[Server] > Nevermind.
<Drac[Server] > Fixed it.
<nictuku> nwu is very fast now
<nictuku> the upload of the current packages and update candidates list is now smart =] 
<mkrufky> infinity: you around?
<mkrufky> I have most mssql functionality working through the use of php5-sybase .... but it doesn't let me use stored procedures... I've googled it, and everything I found says that it should work with php 4.1 and later...  I need access to the procedures, "mssql_init(), mssql_bind(), and mssql_execute()"  ... what else do I need to do to get this functionality?
<infinity> mkrufky: You're missing that what you're googling for is php-mssql, which is a different extension from php-sybase_ct (which is what we ship)
<mkrufky> yes, exactly
<mkrufky> so, are you telling me that there is no way to do this under ubuntu?
<infinity> mkrufky: php-sybase doesn't support those three functions.
<infinity> Not without rebuilding PHP, I suspect.
<mkrufky> yes.  I realize that... That is exactly my problem.  Is there anything I can do?
<mkrufky> ok, i am willing to rebuild it, if necessary
<mkrufky> infinity: is that the final answer?  or is there anything else you can tell me?
<infinity> apt-get install build-essential fakeroot; apt-get build-dep php5; apt-get source php5; cd php5-5.1.2; (edit debian/rules, debian/modulelist, and debian/control to build the "mssql" extension instead of the "sybase_ct" extension); dpkg-buildpackage -rfakeroot -uc -us
<mkrufky> ah, awesome... will do right now
<infinity> For dapper+1, I intend to ship both extensions, but that's not going to happen for dapper, now that we're WAY past feature freeze.
<mkrufky> i understand
<mkrufky> infinity: above instructions are aimed at dapper?  or they should work for breezy as well?
<infinity> Should work on breezy too, except for s/5.1.2/5.0.5/ where I mentioned a version. :)
<mkrufky> aha... ok
<mkrufky> thanks
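infinity's one-liner from above, unpacked into steps (the version is dapper's; on breezy substitute 5.0.5, and the debian/* edits are only sketched here, not exact):

```shell
# Rebuild php5 so it ships the "mssql" extension instead of "sybase_ct".
apt-get install build-essential fakeroot devscripts
apt-get build-dep php5
apt-get source php5
cd php5-5.1.2
# edit debian/rules, debian/modulelist and debian/control, swapping
# the sybase_ct extension for mssql
dch -i                                 # record the change in debian/changelog
dpkg-buildpackage -rfakeroot -uc -us   # build unsigned binary packages
```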
<mkrufky> ok i will have to pastebin this error:
<mkrufky> http://pastebin.com/591642
<mkrufky> i think dpkg-source is probably part of build-essential ... strange that build-essential won't install
<infinity> Okay, you'll need to manually install "libsnmp5-dev", but your GCC problems, I don't know.
<mkrufky> yikes
<infinity> Do you have packages installed from multiple releases, or from backports, or other sources?
<mkrufky> i had an alternate root partition set up (using an alternate grub menu entry), where i installed a few dapper packages.....  now I'm back in breezy, but both roots are using the same /var tree
<mkrufky> so, on THIS root partition, all 100% breezy, but i may have borked my apt database
<infinity> apt-get -o Debug::pkgProblemResolver="true" install build-essential
<infinity> See if the output is enlightening.
<infinity> Oh, wait.  Both were using the same /var?
<infinity> Yeah, you completely hosed the dpkg database, then.
<mkrufky> yes... i know that was a bad idea
<mkrufky> yup -- you're right
<infinity> Your system has no idea what packages it has installed, AT ALL.
<mkrufky> hehehe
<mkrufky> good thing this is still a test box
<infinity> If it's a test box, I highly recommend reinstalling it, then. :)
<infinity> You won't get much done with apt/dpkg in that state.
<mkrufky> ok.... so, i think i will wipe it out tomorrow AM, then re-install breezy, then do the above after
<mkrufky> yes... sharing /var was a bad move -- i admit it
<mkrufky> hopefully, after a clean install this should work..... but if not, will you be around here tomorrow ~ this time?
<infinity> Vaguely this time, yeah.
<mkrufky> okay....... hopefully all will be well and i won't need to ask any more questions, but if not, i'll be sure to look for you again
<mkrufky> thanks
<allee> hi, anyone have an idea why preseeding: pkgsel/install-pattern  string ~t^kubuntu-standard$|~n^openssh-server$|~n^rsync$|~n^gmetad$
<allee> is ignored?
<allee> aptitude install '~t^kubuntu-standard$|~n^openssh-server$|~n^rsync$|~n^gmetad$' does not complain :(
#ubuntu-server 2006-03-14
<allee> mhmmm, my laptop does not know pkgsel/install-pattern but uses: base-config/package-selection ... let's try ...
<allee> yeah, that was it!
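For the record, what worked on allee's breezy-era installer is the base-config question rather than the pkgsel one; a preseed fragment along these lines (the pattern is copied from the log, the owner field is my assumption):

```
# worked on breezy (base-config reads the selection):
base-config base-config/package-selection string ~t^kubuntu-standard$|~n^openssh-server$|~n^rsync$|~n^gmetad$

# the pkgsel form tried first (used by later installers):
# d-i pkgsel/install-pattern string ~t^kubuntu-standard$|~n^openssh-server$|~n^rsync$|~n^gmetad$
```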
<Psi-Jack> Hey
<Psi-Jack> Anyone here know where apache's suexec docroot is set to?
<infinity>  /var/www
<Psi-Jack> Hmm..
<Psi-Jack> You SURE?
<infinity> Last time I checked.
<infinity> --with-suexec-docroot=/var/www  <--- apache2
<infinity> --suexec-docroot=/var/www       <--- apache1.3
<Psi-Jack> Hmmm.
<infinity> (PS: I'm the apache maintainer)
<Psi-Jack> Odd. I tried using it in /var/www, and I got 500 errors.
<Psi-Jack> Hmmm.
<Psi-Jack> I see.. Odd.. heh.
<Psi-Jack> [2006-03-09 00:02:49] : target uid/gid (2000/2000) mismatch with directory (0/0) or program (0/0)
<Psi-Jack> Heh
<Psi-Jack> Least it's progress somewhere.
<infinity> suEXEC is a bit picky about only changing uid/gid to the owner of the script being run, etc.
<Psi-Jack> Well, this, is complaining about the directory itself.
<Psi-Jack> Because the directory is root:root owned. heh
<infinity> Well, generally you'd want to set up subdirectories to do this from. :)
<Psi-Jack> Well, yeah. That's what I'm doing.
<infinity>  /var/www/www.site1.com/suexec, /var/www/www.site2.org/suexec
<infinity> And set ownership on those directories to the users who are being suEXECed.
<Psi-Jack> Heh, yeah.
<maswan> Actually suEXEC is very picky, about a lot of things. :)
<Psi-Jack> I'm trying to find the most /logical/ means to run a community-hosted site, where people can have their own websites.
<Psi-Jack> I don't think suexecusergroup works in <directory> entries, does it? heh
<Psi-Jack> Nope, it doesn't.
<Psi-Jack> Hmm.
<Psi-Jack> Seems like the only truly "secure" way I'd be able to do it is by using username.members.domain.tld for each user. That's just.. Ouch. :)
<infinity> What's wrong with that?
<infinity> mod_vhost will do it automatically for you, so you don't have to do an entry for each user.
<Psi-Jack> Well, there's no automagic way to handle that, really.
<Psi-Jack> Hmm...
<infinity> And a wildcard DNS entry takes care of the DNS problem.
<infinity> Then you're done.
<Psi-Jack> How would mod_vhost determine the username to suexec to?
<infinity> mod_vhost does vhost templating, so you can extract the username from the Host request, and use that to define the DocumentRoot and suEXEC parameters, logfiles, etc.
<Psi-Jack> Ahhh. i see.. Makes sense. That is usable.
<infinity> And was pretty much specifically designed for this use case. :)
<Psi-Jack> Heh he.
<Psi-Jack> Well, it's going to be quite useful for me. :)
<infinity> (For ISPs with massive numbers of "hosted users", to be precise)
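A minimal sketch of the setup infinity describes (hostname scheme and paths are illustrative, not from the log):

```apache
# mod_vhost_alias templating: username.members.domain.tld -> per-user docroot
UseCanonicalName Off
VirtualDocumentRoot /var/www/members/%1/public_html
# %1 is the first dot-separated part of the server name, i.e. the username.
# Caveat: SuexecUserGroup takes a fixed user/group per vhost and cannot be
# templated from the Host header, so per-user suEXEC still needs CGI via
# userdir-style handling or a separate <VirtualHost> per user.
```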
<Psi-Jack> And I got the idea, now, to use nss-mysql to help with it all, too.
<Psi-Jack> But, yeah, definitely much better than my alternative thoughts on how to do it. LOL
<Psi-Jack> Wee!
<Psi-Jack> Anyone know the difference between libnss-mysql and libnss-mysql-bg?
<Psi-Jack> Hey, anyone know if 16 chars is the /max/ username/groupname Linux can actually properly support or could it go up to 25?
<allee> Sun Galaxy X4100: mpt* drivers in dapper.  One or two plain disks are detected fine during installation. But with the 2 disks configured as RAID1 in the LSI controller, the kernel fails to detect the disk
<allee> scsi0 : ioc0: LSISAS1064, FwRev=01040000h, Ports=1, MaxQ=511, IRQ=169
<allee> Is this a limitation of the driver or a bug?
<allee> SLES9 has no problem with the RAID1 of the LSI controller.  And uses it as sda.
<allee> SLES9 uses: Fusion MPT SAS Host driver 3.02.62sus
<allee> ubuntu uses:
<allee> Fusion MPT SAS Host driver 3.03.04
<fabbione> allee: you are on the wrong channel
<fabbione> either open a bug on malone or talk with kernel maintainer on #ubuntu-kernel
<allee> fabbione: 'k I'll do both
#ubuntu-server 2006-03-15
<Psi-Jack> infinity: You around?
<infinity> Vaguely.  Doing release wrangling stuff right now, not much time to be helpful today.
<Psi-Jack> Ahh. alrighty. :)
<Psi-Jack> Looks like mod_vhost_alias goes in left->right order for %N anyway, after looking at it long enough.
<Psi-Jack> Ahh, and VirtualDocumentRoot /can/ be used inside a virtualhost section, excellent. :)
<Psi-Jack> infinity: Hmm. The only thing I got a problem with, is.. How do I get <username>.members.domain.tld from the address for the vhost, into SuexecUserGroup to use as its user/group?
<Psi-Jack> infinity: Or, what is the userdir set for apache's suexec? default, public_html?
<infinity> Yeah.
<Psi-Jack> Grrr
<Psi-Jack> This... is fracking annoying.
<Psi-Jack> I can't get .php to suexec even in UserDir methods, unless it's chmod'd executable and run as cgi-scripts handler type. AddType/Action isn't suexecing. :/
<infinity> Well, yes.  suexec is for CGI.
<infinity> Apache doesn't exec PHP at all.
<infinity> You'd have the same problem with mod_perl, mod_python, etc.
<Psi-Jack> Well, I'm not using mod_php for .php scripts in this user homepages vhost.
<Psi-Jack> I tried to figure out how to get <username>.members.domain.tld to SuexecUserGroup, but, I can't figure out how to get the <username> for the SuexecUserGroup. heh
<infinity> No, but you're using mod_actions, which isn't quite CGI either.
<Psi-Jack> The only last-hope alternative I got, is suphp
<Psi-Jack> But, in Breezy, that's php4. heh
<infinity> Rebuilding it isn't rocket science, mind you.
<Psi-Jack> Heh, I'm beginning to think that's about the best way to do it.
<Psi-Jack> Kk. How do I rebuild it into a new package? I know it's really simple, I just don't do it often.
<Psi-Jack> fakeroot debian/rules binary is one way I think.
<infinity> apt-get build-dep [packagename] ; apt-get source [packagename] ; cd package-version; (edit crap you want to edit); dch -i (make a changelog entry); dpkg-buildpackage -rfakeroot
<infinity> Ish.
<infinity> (dch is in the "devscripts" package)
<Psi-Jack> Think I got it. ;)
<lapo> hi
<Pygi> Hi lapo
<Pygi> how may we help you? :)
<lapo> Pygi, upgrading dovecot version for dapper for example :-)
<lapo> updating
<Pygi> what's the issue lapo? :)
<lapo> bug #33457
<Drac[Server] > I'm shutting the server down temporarily. I'll be back.
<morrow> Hi
<mkrufky> ok, i'd like to announce that I (finally) got stored procedures working for m$sql under php5 / breezy, after patching the php code and recompiling from scratch.....  I already understand that this functionality will not be officially added to Ubuntu until after dapper+1 , so, I'd like to upload these packages somewhere, in case someone else can benefit.  Is there a place where I can do that, or should I use my own server?
<Pygi> perphaps in backports?
<mkrufky> yes, might be a good idea... I'm giving it some good testing right now... Later on i'll investigate into how the motu stuff works
<Pygi> k, great job btw. :)
<mkrufky> thanks... i really owe the credit to infinity -- he told me how to do it
<Pygi> hehe ;)
<Psi-Jack> Anyone know how to get pure-ftpd to use NSS for uid/gid lookup of files?
<Pygi> Will we finally be releasing flight CD's for -server or we'll stick with daily builds?
<infinity> mkrufky: You had to patch the PHP source?  <raise eyebrow>
#ubuntu-server 2006-03-16
<mkrufky> hey
<mkrufky> infinity: sorry for the delay
<mkrufky> infinity: yes, I had to patch the php5 source -- there is a bug in the php5.0.x code that breaks stored procedures for certain variable types
<mkrufky> infinity: unfortunately, i must leave now... sorry that we missed each other again
<mkrufky> infinity: I tested it -- all works very well... I'll try to meet you in here on Monday, and I'll show you the patch that fixed the problem
<mkrufky> infinity: if you need to contact me before then, feel free to email me -- {my nick} at linuxtv dot org
<mkrufky> btw, the patch that I applied was backported from php 5.1.x .....  so, we shouldn't need it for dapper ++
<mkrufky> sorry, really gotta go... bye all
<Pygi> bye
<mkrufky> Pygi: bye
<KotZer> I'm preparing an old-pc-server (P II) with ubuntu .. it will have two hard disks: 1. 4 GB, 2. 2 GB.. can you suggest a partitioning scheme?
<KotZer> I'm thinking of mounting /var on the 2 GB disk
<spike> KotZer: it really depends on what the pc will do
<spike> hi fabbione
<spike> KotZer: ie, it's not very useful to go with a very complex partitioning scheme on a workstation
<spike> KotZer: anyway, what you can do is install with LVM. so if you decide to change the partition scheme afterward you'll be able to do it easily
<KotZer> ok thnx... it will run only an apache-php-lamp application and probably an ftp server
<KotZer> isnt lvm installed by default ?
<spike> KotZer: you can choose to install onto LVM at install time, it's not the default
<KotZer> *apache-php-mysql i mean
<KotZer> oh ok
<Pygi> hi Matt
<Pygi> please see my mail
<Pygi> thanks
<Pygi> Matt, wake up
<Pygi> it's urgent
<mgalvin> Pygi: i am in many other channels, you need to ping my nick to get my attention... I read your email... since flight 5 has not even been announced yet, anyone using it should know that... at any rate, it is all still alpha software and the web page already states that
<Pygi> mgalvin: yes, BUT IT'S URGENT
<Pygi> people think it's real flight 5
<Pygi> it's already announced on digg that it's official
<Pygi> think what will happen if it hits slashdot
<Pygi> problems for us
<Pygi> just please go change it ;)
<mgalvin> we cannot control the press, i will try to contact digg... i will make a quick note on that page
<Pygi> mgalvin: true, but please change the page to what I told you in the mail
<Pygi> ;)
<Pygi> I saw that you can change it, so...
<Pygi> better to prevent it than to let it spread :-/
<mgalvin> yup, i will make the change in a sec
<Pygi> k, thank you very much
<Pygi> :)
#ubuntu-server 2006-03-18
<Pygi> fabbione: ping
<nictuku> nwu now has a GTK interface for administration
<Fawzib> hello
<Fawzib> anyone knows how do I start the server-expert install in dapper? before I could type 'boot: server-expert' how can I do it with the new menu?
<allee> 'ifup -a' seems to go to the background. At least DHCP is not yet done when the cluster client tries to mount NFS shares.  :(  Is there a 'wait until the network IF is up' script or switch?
<hunger>  /join #kubuntu-devel
<allee> hunger: kubuntu specific?
* allee added init script that waits for default route to be defined
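allee's init-script workaround, sketched (the timeout and tooling are my choices; a 2006 box would likely use `route -n` rather than `ip route`):

```shell
#!/bin/sh
# Wait for a default route before letting NFS mounts proceed.
# The 60-second ceiling is arbitrary.
tries=0
until ip route 2>/dev/null | grep -q '^default '; do
    tries=$((tries + 1))
    if [ "$tries" -ge 60 ]; then
        echo "no default route after 60s, giving up" >&2
        exit 1
    fi
    sleep 1
done
```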
<mkrufky> I have an .htaccess and .htpasswd set up, and I'm pretty sure I did it correctly..  It seems as if apache is completely ignoring them.  It was working fine in a prior installation.... Can someone help me idiot-proof my setup?
<mkrufky> .... is it supposed to be " AllowOverride All"  or "AllowOverride AuthConfig" ??   what's the difference between the two?
<mkrufky> ...ok well i found the answer to THAT question... still not working though :-(
<mkrufky> no matter what I do, it is behaving as if "AllowOverride None" is still set .... and yes, I have restarted apache after setting it
<mkrufky> anybody have any idea why my .htaccess files are being ignored?
* mkrufky doesn't expect a response, but it would be nice...
<mkrufky> ;-) just say the word and i'll provide some more details
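For anyone hitting the same wall: .htaccess auth is only honoured if the enclosing <Directory> permits the override. A minimal sketch (paths are illustrative):

```apache
# In the site/vhost config:
<Directory /var/www/protected>
    AllowOverride AuthConfig    # "All" also works; AuthConfig is narrower
</Directory>

# In /var/www/protected/.htaccess:
# AuthType Basic
# AuthName "Members only"
# AuthUserFile /etc/apache2/.htpasswd
# Require valid-user
```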
<mkrufky> infinity: i have that php5.0.5 mssql stored procedure bugfix patch for you... let me know if you wanna see it
#ubuntu-server 2006-03-19
<infinity> mkrufky-gone: Mail me about it, please.  adconrad@ubuntu.com
<nictuku> The prototype for nwu is done. We need more user testing before making an "announced release".
#ubuntu-server 2007-03-12
<CrummyGummy> Hi all, any idea why this keeps on showing up in my logs?
<CrummyGummy> SysRq : HELP : loglevel0-8 reBoot Crashdump tErm Full kIll saK showMem Nice powerOff showPc unRaw Sync showTasks Unmount
<CrummyGummy> Nobody is logged on at the time.
<mralphabet> nobody that you know about!
* mralphabet is just kidding!
#ubuntu-server 2007-03-13
<supervillain> Hello, how do I check the Release.gpg file validity manually?
<supervillain> hello, is there a way to check each .deb file locally from the internet?
<supervillain> I want to check its md5 and signature validity
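A hedged sketch of the manual checks (the keyring path is the one the ubuntu-keyring package normally installs; debsums is a separate package):

```shell
# Release.gpg is a detached signature over the Release file:
gpg --keyring /usr/share/keyrings/ubuntu-archive-keyring.gpg \
    --verify Release.gpg Release
# Release carries checksums of the Packages indices, which in turn carry
# the MD5Sum of every .deb they list:
grep -A2 '^MD5Sum:' Release
# For already-installed packages, debsums compares files on disk against
# the md5sums shipped in each .deb:
debsums coreutils
```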
* CrummyGummy slaps mralphabet for putting horrible thoughts in his mind...
<[miles] > morning guys
<[miles] > does anyone know how to select a different kernel when installing? I want to run Ubuntu Server 6.06 LTS as a guest in VirtualBox, but it does not work with the SMP kernel
<lionel> [miles] : iirc all kernels have smp enabled since 6.06
<[miles] > hi lionel 
<[miles] > mmm ok
<lionel> hi [miles] 
<[miles] > you know of vbox?
<lionel> How are you ?
<[miles] > I'm fine mate, just banging on with all this PHP coding :-\
<lionel> Yes, I know, but i never give it a try
<[miles] > it's good...
<[miles] > fast...
<[miles] > I have a W2K guest, and it runs real smooth, and much faster and smoother than vmware 
<lionel> I use VMware here
<[miles] > lionel, give vbox a whirl
<lionel> The good thing with vmware is that it is easy to install and you do not have bad surprises
<[miles] > jaja
<[miles] > yeah, if your host box ain't 64 bit
<[miles] > then it's a pain
<[miles] > believe me
<lionel> I have a 64 bit box, but install it in 32bits ;)
<lionel> I think there is a Debian ITP for VirtualBox
<lionel> maybe for Feisty+1 :)
<[miles] > no idea
<[miles] > you install 32bit Ubuntu yeah?
<[miles] > actually I might do the same...
<[miles] > I'm sick of buggy flash and no acroread etc
<lionel> for the VMware box, yeah, I installed 32bit
<lionel> for other server, I use 64bit systems
<lionel> but all the 64bit desktops system here have 32bit Ubuntu
<[miles] > ok
<[miles] > I think I will reinstall my work desktop, and swap to Kubuntu also
<lionel> damned :)
<[miles] > ?
<lionel> KDE is evil (just kidding!)
<[miles] > jeje
<[miles] > gnome sucks
<gubluntu> How do i get my server to consult localhost running bind9 for an answer on an A record query? (i.e. i am behind a firewall and my postfix can't deliver mail to itself using an external address, so i need it to return an internal one locally)
<gubluntu> ...oh, just read the topic.. sorry
<eamonn> hi, just want to mention that the latest update to the kernel on feisty-server doesn't boot on my test machine. I had to escape out on grub and choose 2.6.20-9-server (instead of 10). What logs do you need?
<eamonn> I'm not sure I created any, actually. It wasn't able to mount root, which is on RAID5. I ended up stuck in busybox.
<eamonn> The problem I had looks very similar to https://bugs.launchpad.net/ubuntu/feisty/+source/mdadm/+bug/75681. I have three SATA drives, 320GB each, configured as RAID5. However /boot and swap are on small, non-raid partitions (on the first and second drives, respectively).
#ubuntu-server 2007-03-14
<aedes> what is considered development discussion?
<kakkoi> how do i install LAMP on ubuntu 6.10 desktop edition?
<kakkoi> thank you in advance
<AMSmith42> It seems that because Ubuntu Server's apt-get specifies port 80 in its requests, my SonicWALL thinks it is trying to access a proxy and disconnects it.
<AMSmith42> Kind of a hassle.
<mralphabet> oO
<AMSmith42> hehehe
<mralphabet> I think that's a poor logic design on the sonicwall heh
<AMSmith42> Perhaps, but it is there, nonetheless.
<mralphabet> I mean, it's port 80 traffic . . . the firewall either needs to pass it or use the built in / network proxy server which should get the file anyway
<mralphabet> still, interesting
<AMSmith42> I can switch the proxy-filter on and off, but I don't want to leave it off because I use it for a reason.
<mralphabet> sure
<AMSmith42> I use Ubuntu desktops and apt-get doesn't have any trouble.
<AMSmith42> It is only when Ubuntu server specifies x.x.x.x 80 that I get disconnected.
<AMSmith42> Well, the apt-get in server, anyway.
<mralphabet> oh, that's interesting
<AMSmith42> I had other words for it until I figured out what was going on.
<AMSmith42> :P
<mralphabet> because it specifies the port it may make a difference
<AMSmith42> Right.
<AMSmith42> It triggers the SonicWALL's filter.
<mralphabet> as opposed to it just being normal traffic
<mralphabet> right
<mralphabet> that's certainly an odd one to track down
<AMSmith42> Yes, I couldn't think of where I could change that either.
<mralphabet> nor can i
* mralphabet does a grep -r 80 *
<Linuturk> I'm thinking of upgrading my Dapper server to Edgy. I am currently running a LAMP install, and a cacti install. what are the caveats of dist-upgrading a server?
<Linuturk> i know this isn't a support channel, but Ubuntu is waaay too busy for a server question. 
<AMSmith42> I'm pretty lost too, or I'd help you out.
<Linuturk> what is your issue AMSmith42? maybe I can help
<AMSmith42> I was just discussing earlier how Ubuntu server sometimes specifies port 80 when it does an apt-get (that is, some packages come from x.x.x.x 80) and that triggers my SonicWALL's proxy filter so I get disconnected after 90% installation.
<AMSmith42> I can turn the proxy filter off, but that isn't ideal.
<Linuturk> isn't there a way to make an exception on your proxy
<Linuturk> some sort of rule table
<Linuturk> ?
<AMSmith42> I don't think that I can specify an IP and a *port* for the firewall to disregard.
<AMSmith42> Only an IP. I have to check into it further.
<Linuturk> I don't think I've heard of a firewall WITHOUT port forwarding by ip
<AMSmith42> Port forwarding does that? I thought I'd have to set up a rule in the filter.
<Linuturk> well, I'm slightly confused
<Linuturk> are we talking a firewall
<Linuturk> or a proxy server
<Linuturk> or, a combo of the two?
<Linuturk> if it is a combo, see the device's manual before believing anything I say
<Linuturk> that is specialized stuff
<AMSmith42> The SonicWALL is a firewall and filter. It has a setting to disallow access to proxy servers.
<AMSmith42> When apt-get tries to get a package and specifies port 80 the SonicWALL thinks that the server is trying to access a proxy. 
<Linuturk> what does the proxy server do with normal port 80 traffic?
<Linuturk> web browsing?
<Linuturk> etc?
<Linuturk> my experience with a Squid is the proxy server downloads the packages first, then serves them up to the client
<Linuturk> it is a long delay
<Linuturk> but once it is downloaded
<Linuturk> it is almost instantly available to any other machine on the network
<AMSmith42> Web traffic from a browser is allowed.
<AMSmith42> The sonicwall doesn't bother apt-get if it doesn't put 80 in the url.
<AMSmith42> Only when apt-get uses <remote server> 80 does the sonicwall get triggered.
<AMSmith42> I'm not using a local proxy server.
<AMSmith42> The SonicWALL is set up to block access to internet proxies so that students can't get around the sonicwall filter.
<AMSmith42> content filter, that is.
* xenalise is away: Shopping! :D
<AMSmith42> Sweet...shopping
* xen[a] lise is back.
#ubuntu-server 2007-03-15
<N6REJ> would someone mind clearing up a tiny confusion I have with NFS vs Samba.  Do I need both or is simply NFS enough?
<rambo3> wake up
<Tigerman> Silly question that might be asked in the wrong chan coming up.. I've used RedHat with kerio mailserver, is there a similar mailserver program for ubuntu?
<mralphabet> kerio is redhat specific?
<mralphabet> as in, it only works on redhat?
<Tigerman> seems so
<mralphabet> I'm not sure I believe that
<mralphabet> it works on suse
<mralphabet> Supported operating systems
<mralphabet> Kerio MailServer runs on Windows 2000/XP/2003, server and desktop editions, Red Hat Linux, SUSE Linux and Mac OS X. Kerio Outlook Connector for Microsoft Outlook runs on Windows 2000/XP/2003.
<Tigerman> well, some people suggest using alien on the kerio rpm, but well
<Tigerman> they want money for kerio now as well :p
<mralphabet> those are the OSes that kerio supports out of the box; it does not say "will not work on anything other than these"
<mralphabet> ahh, so you are looking for a free option
<Tigerman> preferably
<Tigerman> i think kerio was free when i used it, actually
<mralphabet> so what you really want is a free option to kerio, regardless of what it runs on
<mralphabet> because if it runs on one linux os, it will run on the next
<Tigerman> well, if i can get the source and compile it on ubuntu..
<mralphabet> binary based distrobutions are fine too, you don't *have* to compile anything
<Tigerman> I'd prefer binary based, yeah
<Tigerman> i hate starting to edit things and having to install things manually.. I'm lazy :p
<mralphabet> rpm based distrobutions are slightly more difficult, but still not anything to kill yourself over
* mralphabet is spelling distrobutions wrong ;(
* Tigerman dont care about the spelling as long as they work
<mralphabet> anyway, I use sendmail
<mralphabet> ;)
<Tigerman> mm.. lots of people do.. i dont like sendmail
<mralphabet> so i don't have a recommendation for you ;(
<Tigerman> communigate pro someone just recommended me
<AMSmith42> Is there a de facto webmail server?
#ubuntu-server 2007-03-16
<Godsey> what is the correct entry for /etc/hostname if the host is just domain.com?
<ivoks> that can't be hostname
<ivoks> computer.domain.com is ok, but domain.com is a no go
<mralphabet> Godsey: you need a fully qualified domain name in /etc/hostname
<mralphabet> like ivoks said, someserver.somedomain.sometld
<r00tintheb0x> hello all
<AzMoo> Hi, I've installed a LAMP server but I need to customize my PHP configuration. I need to add several things, such as mcrypt and PDFLib. Are there packages to do this?
<AzMoo> Oops, this is the dev channel. Sorry :)
<Godsey> ivoks, mralphabet turns out just the domain.com is ok
<Godsey> as domain.com is the fqdn
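For the record, the Debian/Ubuntu convention is the short name in /etc/hostname, with the FQDN supplied through /etc/hosts; a quick way to check both (output is machine-specific):

```shell
cat /etc/hostname      # conventionally the short host name
hostname --fqdn        # the fully qualified name, resolved via /etc/hosts
```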
#ubuntu-server 2007-03-17
<davascript_home> is xtradius in apt
<GHost__> hello
<ivoks> hi
<GHost__> i need to install an ubuntu-server to my parallels virtual machine
<GHost__> but after the boot it cannot find the cddrive
<GHost__> it's a normal intel chipset emulation
<GHost__> any idea?
<ivoks> i've never used parallels
<ivoks> on KVM it works without problems
<GHost__> i think it's libata
<GHost__> the normal cdimage works great
<GHost__> is there any diff in the kernel on the server and the desktop image?
<ivoks> it's minimal
<ivoks> different scheduler and no preempt on server
<GHost__> hmm
<ivoks> it has some other changes, but only those two are relevant
<GHost__> not in this case
<GHost__> i think
<ivoks> so, -desktop-6.06 works ok, and -server-6.06 not?
<GHost__> nop its 704 
<ivoks> 7.04?
<ivoks> that's not released yet :)
<ivoks> if you found a bug, report it
<GHost__> feisty-server-i386.iso
<GHost__> i know
<GHost__> if i found it i will
<GHost__> but first i need to find it
<GHost__> i have only an sda device
<GHost__> and sg0 
#ubuntu-server 2007-03-18
<arrummzen_> Why does '/etc/init.d/apache2 restart' hang?
<arrummzen_> It stops the Apache2 server, but it doesn't start it again... it just sits there...
<arrummzen_> well, after ~5 seconds it finishes waiting and starts the server again... but why the wait?
<mralphabet> it's waiting for apache to completely shut down
<mralphabet> is this an apache server with lots of threads?
<mralphabet> or child processes?
<blanky> hey guys
<blanky> how come sudo /etc/init.d/apache2 restart doesn't return any output
<blanky> when I go to my site and view php files they're just text files
<shawarma> blanky: You probably didn't install a php interpreter (libapache2-mod-php5)
<blanky> shawarma: I did, actually what happened was that my friend had compiled apache from source
<shawarma> Er.. ok.
<blanky> hey guys, sudo /etc/init.d/apache2 start doesn't return anything and it's not running from the looks of ps aux | grep httpd
<blanky> what should I do
<aedes> has anyone gotten lvm and device-mapper working?
<fabbione> aedes: i use lvm on almost all my machines
<aedes> fabbione: have you tried 7.04 server?
<aedes> it seems to be working with 6.10, but in 7.04 the mapper seemed not to be setup right
<fabbione> aedes: yes feisty server and desktop. both works fine
<fabbione> what problem do you have?
<fabbione> be more specific
<aedes> fabbione: well the directory /dev/mapper wasn't created in feisty
<fabbione> aedes: lvm2 is installed only if at install time you create a volume on lvm.
<fabbione> aedes: if you didn't do so, just apt-get install lvm2
<aedes> yeah, I manually installed it
<fabbione> and you should get it then
<aedes> hmm, well I've got 6.10 on there now and lvm seems to be working fine.  An even bigger issue with 7.04 was when I upgraded the kernel it wouldn't boot, and complained about not finding any array to assemble in the mdadm.conf (the root fs is not on raid)
<fabbione> aedes: known bug. the mdadm maintainer is working on it
<aedes> ahh, so that's just with the newest kernel?
<fabbione> it's not a kernel problem.
<fabbione> it's an mdadm script error
<aedes> so that's in the initrd?
<aedes> because when it says that, it's on the 3rd line of the boot
<fabbione> yes in the initramfs
<fabbione> it doesn't matter when it says it
<fabbione> we make the kernel shut up a lot at boot
<fabbione> and print only real errors
<aedes> hmm, yeah I just noticed that ... I liked the more verbose printing
<fabbione> remove quiet from the boot options
<aedes> ahh, nice
<aedes> so what would you recommend edgy or feisty?
<fabbione> for servers? dapper
<aedes> hmm, why is that? the LTS?
<fabbione> yes the LTS
<fabbione> because edgy had 6 weeks less of development and stabilization. feisty is not out yet
<fabbione> so for server stick with dapper
<aedes> hmm, what kind of sacrifices are made by choosing dapper? (less new functionality?)
<fabbione> almost none and i would look at the benefits of an LTS
<fabbione> not having to upgrade a server every 18 months rocks
<aedes> I'm not sure what LTS provides, I'm not sure if I'll be using it.  This is a personal server
<fabbione> well then read about it
<fabbione> these are all FAQ a bit offtopic for this channel
<aedes> yeah, thanks.  You've answered my original questions
<AlleAuei> buongiorno [good morning]
<AlleAuei> qualcuno potrebbe aiutarmi? [could someone help me?]
<[miles] > english please
<AlleAuei> ok
<AlleAuei> can you help me with ubuntu?
<[miles] > maybe, ask
<AlleAuei> well i have a problem with installation of ubuntu
<AlleAuei> sorry for my english :)
<[miles] > AlleAuei: no problems... I don't speak Italian, however I do speak spanish
<[miles] > if thats easier for you
<AlleAuei> thanks for all, but i found an italian :)
<[miles] > ok, I don't know any italians here in this channel, just french
<AlleAuei> :)
<Chris_Tan> hello
#ubuntu-server 2008-03-10
<owh> My cron.daily doesn't appear to be running. An entry shows in the log, showing that the crontab entry runs, but the scripts in /etc/cron.daily don't appear to run. Initially I thought it related to one of my own cron.daily scripts, but removing that has made no difference. run-parts --test shows all the scripts you expect. Any ideas?
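owh's run-parts --test output already rules this out for his case, but for anyone else chasing the same symptom: run-parts only executes files whose names match its dot-free naming rule, and --test applies the same filter. A quick demonstration in a temp directory:

```shell
# run-parts skips names containing dots (e.g. backup.sh), so a suffixed
# script dropped into /etc/cron.daily silently never runs.
d=$(mktemp -d)
printf '#!/bin/sh\necho hi\n' > "$d/goodscript"
printf '#!/bin/sh\necho hi\n' > "$d/bad.sh"
chmod +x "$d"/*
run-parts --test "$d"    # lists only the dot-free script
```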
<someone> does ubuntu run hald by default /
<owh> Hiya, just had a bad experience upgrading from Feisty to Gutsy and before I send an email to u-d-d about it I was wondering if anyone might have a look at it before I hit send. The email is titled: Fixing Bug #1 and while it mentions some of the issues, it's not specifically about those: http://paste.ubuntu-nl.org/59112/
<owh> Feel free to be brutally honest in your response. I'm looking for alternative perspectives.
<kgoetz> owh: i know people bringing up bug1 every time they have a problem raises my blood pressure, don't know about other people though
<kgoetz> not many places do support upgrades (eg) - microsoft sort of do, redhat dont at all.
<pwnguin> yea
<pwnguin> first rule of getting stuff done is to not make em angry
<owh> It started with a raised blood pressure, but I felt that it wouldn't be constructive. I toyed with a rant of all the broken things, but thought that was also not constructive, so I tried to find ways of indicating how we might improve things. I'm not sure if my email does that the way I think/hope it does.
<kgoetz> fwiw, you're the second person i've seen in 4 hours complaining about feisty -> hardy upgrades
<pwnguin> and citing bug one is basically saying "YOU THINK THIS IS OKAY?!? WELL THEN EVERYONE WILL JUST STICK WITH MS!"
<owh> kgoetz: This is Feisty -> Gutsy
<kgoetz> owh: er, yeah sorry. i meant that :$
<owh> pwnguin: I did say that as my subject, because I think that's valid. What I was attempting to do with this email is get beyond that. I'm not sure I succeeded.
<kgoetz> i expect you would be told to take it to the discussion list too (but not being on it or u-d i cant comment)
<pwnguin> owh: you use the phrase We a lot
<kgoetz> s/comment/comment for sure
<pwnguin> have you been testing upgrades from gutsy to hardy?
<owh> The reason I'm raising it here is because some of the people here know me and I figured they would smack me if I needed it.
<owh> pwnguin: No, I've not been working on gutsy -> hardy upgrades.
<pwnguin> the surest way you have to make that work is to probably start reporting bugs now ;)
<pwnguin> make an image, install it, upgrade and see what all broke in your opinion
<pwnguin> its essentially too late to fix feisty -> gutsy upgrades
<kgoetz> pwnguin: which tbh i find a rather annoying piece of ubuntu policy
<pwnguin> funny thing to hear in a server channel
<pwnguin> its not policy
<owh> Not only that, it's not really sustainable. It means that upgrades won't ever get better.
<kgoetz> whats a funny thing to hear in a server channel?
<owh> It means that lessons learnt since release never get incorporated.
<pwnguin> owh: you mean like uuid disks?
<owh> So, you're no better off being conservative than doing an upgrade on day 1.
<owh> pwnguin: You mean the fstab fix?
<pwnguin> i mean the system by which the kernel establishes a uuid for each disk, independent of the device detection order, and is used, among other places, in fstab
<owh> pwnguin: I'm not sure what you are getting at.
<owh> Aren't we all aiming for a better user experience than anywhere else? MS and RedHat included?
<pwnguin> that system was put in place because upgrading the kernel can cause detection order reversals
<owh> So, are you giving me an example of a fix that was made after release to make an upgrade work better?
<pwnguin> that'd be an example of a fix that was "a lesson learned" and the upgrade getting better
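The fix pwnguin describes looks like this in /etc/fstab (illustrative entry only; the UUID here is made up, and real ones can be read from /dev/disk/by-uuid/ or `blkid`):

```
# Mount root by filesystem UUID instead of device name, so a kernel
# upgrade that reorders disk detection (sda <-> sdb) cannot attach
# this entry to the wrong disk.
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /  ext3  defaults,errors=remount-ro  0  1
```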
<pwnguin> more importantly
<pwnguin> of that list of bugs, theres about 3 legitimate upgrade complaints
<owh> I'll bite. Why should I as a user upgrading from the previous stable release to the current stable release, six months later, still have to deal with the same problems as someone doing that upgrade on the first day of the release?
<owh> pwnguin: You're referring to the list of bugs in the email?
<pwnguin> in the pastebin, yes
<owh> pwnguin: They were included in the draft to give some sense of the issues experienced from an end-user experience, they weren't intended as bug reports as such.
<pwnguin> interesting
<owh> What?
<pwnguin> im trying to figure out why the icons changing is a negative end user experience
<owh> pwnguin: Because their function didn't change and I didn't change theme. The icons changed for no benefit to me as an end-user.
<owh> s/icons changed/icons changed appearance/
<pwnguin> so you've expressed no preference for them
<owh> That's an interesting view.
<owh> Hmm.
<owh> The best way to respond, I think, is: as an end user I used the icons that came with the selected theme.
<owh> But I think I see what you're saying - I didn't choose any specifically, so why should it matter if they change?
<pwnguin> indeed
<owh> I suppose my threshold for change is lower than others.
<owh> That's interesting in its own right :)
<pwnguin> most server people hate change ;)
<owh> That's because people start yelling at you if things change :)
<kgoetz> afk. heading home
<pwnguin> a few of these bugs seem like straight up bugs. they'd be the same if you reinstalled or installed for the first time.
<kgoetz> back in a fe hours probably
<kgoetz> *few
<owh> Thanks for your comments kgoetz.
<pwnguin> i agree that config file diffs suck
<owh> I have a file full of them, and I don't even know where to begin. If I could choose "Replace all and keep a backup" then I would.
<owh> If that option came with a log, that would be even better.
<pwnguin> hmm
<owh> At least I'll have whatever the package maintainer thinks is a good config and I can check and fiddle later.
<pwnguin> is the GNOME frontend to apt default yet?
<owh> Mind you, sometimes the diff picks up spacing changes :(
<owh> You mean synaptic?
<owh> I'm not sure what you're referring to.
<pwnguin> no, i mean like the user query stuff
<pwnguin> like "run avahi service?"
<pwnguin> dpkg-configure i think
<owh> I don't understand the question.
<pwnguin> when you upgrade a package, it asks you questions
<pwnguin> or even just install it
<owh> During the upgrade I was given some dialogs that showed a diff, PHP.ini showed an integrated debconf window, some comments came on the terminal view, nothing was really consistent.
<pwnguin> that system has a normal terminal interface, but ive been using the GNOME frontend for ages
<owh> The dialogs were just simple "Keep" "Replace" boxes with a diff in the window.
<owh> Are you talking about the different ways to interact with the user, that is, via text-only, text dialog, gnome, etc?
<pwnguin> yes, i think so
<pwnguin> debconf frontends
<owh> Yes, we're talking about the same thing.
<owh> One mo, answering the door.
<owh> As I said: "During the upgrade I was given some dialogs that showed a diff, PHP.ini showed an integrated debconf window, some comments came on the terminal view, nothing was really consistent."
<owh> The PHP.ini side-by-side view was a ripper in uselessness.
<owh> Example output line: ; such as dynamic images ; such as dynamic images
<owh> tidy.clean_output = Off tidy.clean_output = Off
<owh> That's two config lines "side-by-side"
<owh> Very helpful.
<pwnguin> when i get diffs, they show up in a GNOME window with scrollbar and standard +- notation
<owh> What I was attempting to do with my email is get beyond specific "bugs" and talk about methodology, but I'm guessing that got lost in translation.
<owh> Well, there were those as well, but it wasn't consistent.
<pwnguin> no use talking about methodology if you don't practice it
<owh> But does that mean you're saying that I need to lodge a bug report for each instance? That seems hardly productive for anyone.
<pwnguin> not quite what im saying
<owh> Sorry, what were you trying to tell me?
<pwnguin> im saying you're walking into u-d-d and demanding changes in what other people do
<owh> I suppose I'm trying to poke someone in the eye, yes.
<owh> Are you saying that I'm not practicing what I'm preaching?
<pwnguin> im saying nobody's going to want to practice your suggested changes if even you don't help with them
<owh> I was trying to be constructive about it and attempting to start a discussion.
<owh> Fair point.
<owh> How do I best help then?
<pwnguin> cats, mice and bells
<owh> ROTFL
<pwnguin> number one would be to identify which of those complaints are upgrade bugs versus regressions
<owh> So, my first step would be to write down a list of what broke or what needed attention?
<owh> Then I suppose use LP to see what gives.
<pwnguin> more importantly, i recall a session at the UDS for hardy
<pwnguin> about automating upgrade testing
<owh> I'm all ears.
<pwnguin> looking for it
<owh> Yeah, ditto, found https://blueprints.launchpad.net/sprints/uds-boston-2007/+roadmap
<owh> https://blueprints.launchpad.net/ubuntu/+spec/automated-installation-testing
<owh> Well the priority is high, it's been approved, but there does not appear to be any content - lovely.
<owh> Is that what you were referring to?
<pwnguin> im not sure
<pwnguin> i may be remembering incorrectly
<owh> I think we're agreeing that my email won't help.
<pwnguin> i think if you aimed to be a bit more constructive
<pwnguin> suggest a UDS session
<owh> Suggestions?
<pwnguin> narrow down the examples to those that are upgrade bugs, and maybe avoid a condescending "we" if you're not an Ubuntu Developer
<owh> I'm not a formal Ubuntu Developer, but I am a contributor, have provided bug fixes and attend meetings. Does that not qualify?
<pwnguin> which meetings?
<owh> ubuntu-server meetings.
<pwnguin> physically?
<owh> I'm in Australia, run a one-man company, no not physically.
<pwnguin> well, if "we" means you're asking to help fix it, go ahead and use "we"
<owh> You bet, I'm not dumping this from on high - or low - I intend to assist the process.
<owh> But you raise an interesting point which I'll consider.
<pwnguin> i mean, there are upgrade bugs
<pwnguin> i'd like them gone
<owh> You and me both :)
<owh> Ok, let me revert my workstation via format/restore from backup, document the bugs and see what gives. How is that as a way forward?
<pwnguin> well that would give you a bit stronger position in the email
<owh> So, are you saying, keep the email roughly as it stands and attach a proper bug list, or are you saying, start again with a different email?
<pwnguin> roughly as it stands. maybe drop the "why should i even bother?" line
<owh> Yeah, it would have gone through another filter - the original draft had venom dripping from it - it helped me, but not Ubuntu :)
<pwnguin> so what's the recommended upgrade procedure for fedora? ;)
<owh> That's not even funny - well, after I recovered from choking from laughing that hard.
<owh> It's interesting though.
<pwnguin> last i knew it was download the latest iso and reformat
<owh> What is the Ubuntu recommended upgrade procedure for ubuntu-server and ubuntu-desktop. Is it the same?
<owh> On my deployment servers I build a new VM image and deploy that.
<owh> s/deployment/production/
<pwnguin> on desktop, they try to beat people into the update manager tool
<pwnguin> not sure how that works with server
<owh> On the desktop it didn't work for me today - I'm looking forward to the LTS -> LTS upgrade *not*
<owh> Fundamentally, config change management is IMHO borked.
<Kamping_Kaiser> still at it blokes? *grin*  ( i'm kgoetz )
<owh> Yup.
<owh> I mean, on the desktop, if you select a number of NTP servers, that might change the config file, but during the upgrade that should resolve itself.
<pwnguin> well, almost every app fails to provide a config migration system
<owh> So is that the real problem?
<owh> I mean, if we identify that as an actual problem, we can go about doing something about it.
<owh> s/an/the/
<pwnguin> well, i mean, that's what the diffs are for
<owh> Yeah, but they still show white-space differences and show diffs for files I never touched.
<owh> I suppose that's a bug :)
<pwnguin> does it?
<pwnguin> i thought there was a way it would handle untouched config files
<owh> It did for me. Lemmie see if I can dig up an example.
<pwnguin> gentoo actually has a system aside from debconf for etc files
<owh> The first four lines from a /etc/pam.d/gdm diff:
<owh> -auth	requisite	pam_nologin.so
<owh> -auth	required	pam_env.so
<owh> +auth    requisite       pam_nologin.so
<owh> +auth    required        pam_env.so readenv=1
<owh> Line 1 and 3 are the same.
<pwnguin> they are not
<pwnguin> but they are trivially the same
<pwnguin> one uses tabs, the other uses spaces
<owh> I'm sure it matters for the config parser :)
<pwnguin> if it worked before
<pwnguin> it works now
<pwnguin> it matters to diff, but not your average config parser
<pwnguin> this is an argument for xml ;)
<owh> No, my point is, that if I have to make an assessment to see if this change matters, I have to wade through crap to find real changes.
<pwnguin> right; xml wouldn't give you that sort of crap
<pwnguin> oh, i guess there's also that readenv=1
<owh> Yes, so that means that the change is really only "readenv=1", but I cannot glean that by a quick glance. An end user with no programming experience has no chance and I don't recall actually ever changing this file at all.
<owh> Mind you, there's a whole different conversation about patches that include config files changed between different maintainers - yuk.
<pwnguin> ideally, there'd be an md5 or some other hash to compare with so it knows when you've changed it
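dpkg does in fact record such hashes: `dpkg-query -W -f='${Conffiles}\n' <package>` lists every conffile alongside the md5sum of the version the package shipped. A minimal sketch of the comparison step itself, with the recorded hash supplied by hand:

```shell
#!/bin/sh
# Report whether a config file still matches the md5sum recorded for the
# shipped version (hash obtained e.g. from
#   dpkg-query -W -f='${Conffiles}\n' <package>
# ). Exit status 0 means unmodified, 1 means locally changed.
unmodified() {
    file=$1 shipped_md5=$2
    actual=$(md5sum "$file" | cut -d' ' -f1)
    [ "$actual" = "$shipped_md5" ]
}
```

An upgrade tool with this check could overwrite unmodified conffiles silently and only prompt on real local edits.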
<Kamping_Kaiser> it seems that using a diff system that can ignore whitespace would save a lot of issues
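GNU diff can already do this: `-w` ignores whitespace entirely (and `-b` ignores changes in the amount of it). Replaying owh's pam.d fragment from above shows only the real change surviving:

```shell
#!/bin/sh
# Recreate owh's /etc/pam.d/gdm fragment: the old file uses tabs, the
# new one spaces, and the only real change is "readenv=1" on pam_env.
old=$(mktemp); new=$(mktemp)
printf 'auth\trequisite\tpam_nologin.so\nauth\trequired\tpam_env.so\n' > "$old"
printf 'auth    requisite       pam_nologin.so\nauth    required        pam_env.so readenv=1\n' > "$new"
# -w treats lines that differ only in whitespace as identical, so the
# pam_nologin line disappears from the report entirely.
diff -w "$old" "$new" || true
rm -f "$old" "$new"
```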
<owh> But that is only one example. Here's another one. In /etc/default/acpi-support, I added mysql, samba and vmware-server to the STOP_SERVICES so my machine would sleep, but then the maintainer has a config that does something with SPINDOWN_TIME=12, here's a pastebin: http://paste.ubuntu-nl.org/59114/
<owh> So, now I have a local change that matters and a maintainer change that may matter. What do I choose?
<pwnguin> 3 way merge ;)
<owh> Sure, but now my grandma is doing the installation - what does she choose :)
<kraut> moin
<pwnguin> if your grandma can edit conf files, she can read a diff :P
<Kamping_Kaiser> owh, why did you grandma edit /etc/default/acpi-support ?
<henkjan> hmm, ipv6 autoconfiguration is broken after enabling ipv6 in ufw
<owh> Because her grandson came to fix her laptop that didn't sleep.
<owh> Here's another, I ticked the box in System Administration to disable Avahi. The config file reflects that, but the new config file has it enabled again.
<pwnguin> well the grandma solution is to make sleep work correctly without the file hack, and blind overwrite the config file
<owh> Ah, but the sleep fix comes as a bug fix to vmware-server :)
<owh> And to mysql for that matter :)
<pwnguin> owh: legitimate problem, and you can't propose a policy / procedure fix to that until you know why it happened
<pwnguin> well, grandma's getting quite technical
<pwnguin> but that's not the point
<owh> She's like that :)
<owh> No.
<pwnguin> the kernel suspend and hibernate stuff is braindead
<owh> Oh yeah.
<pwnguin> it'll just get worse and worse as we get more cores
<owh> So, my proposed "global" fix for all this crap is this. Replace all config files with default installation settings after backing up the existing config file. Log the replacement in an installation log file. In most cases, that will do the right thing. In those cases where it doesn't there is a path back.
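The mechanics of that proposal are small enough to sketch (a hypothetical helper, not an existing Ubuntu mechanism; paths and log format are illustrative):

```shell
#!/bin/sh
# Sketch of "replace every config with the package default, keep a
# backup, and log the replacement". The backup is the "path back" owh
# wants for the cases where the default is wrong.
replace_with_default() {
    conf=$1 default=$2 log=$3
    if [ -f "$conf" ]; then
        cp -p "$conf" "$conf.pre-upgrade"            # keep the old config
        printf '%s: replaced %s (backup in %s.pre-upgrade)\n' \
            "$(date -u)" "$conf" "$conf" >> "$log"   # log the replacement
    fi
    cp "$default" "$conf"                            # maintainer's default wins
}
```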
<owh> I suppose generic machine (perhaps XML) config files is another way to go, but you'd need to track which of those were modified by whom, too hard.
<Kamping_Kaiser> you replace every config file on my server/s and i'll kill you ...
<owh> Well, have you got a better idea?
<Kamping_Kaiser> yes. dont fsck up thousands of servers so a few grandmas dont need to press Y
<owh> No, I think you're missing my point.
<owh> The challenge is to come up with a mechanism that deals with local change while incorporating new configurations.
<owh> I mean, for example change the default pam authentication module across Ubuntu, something which most people wouldn't fiddle with for example.
<Kamping_Kaiser> debconf is meant to handle lots of those problems.
<owh> Ah, but then its use seems incomplete at best and faulty at worst.
<owh> I agree that it should work, but it doesn't appear to.
<Kamping_Kaiser> i havent read enough about how it works on a technical level
<pwnguin> the thing is, ubuntu sets the debconf to default
<pwnguin> so you never get asked questions like "enable avahi?"
<pwnguin> it just assumes the answer
<pwnguin> it's a tricky balance
<pwnguin> surely you've upgraded debian once?
<owh> So, then the mechanism that changes that when I tick the box to say "Disable Avahi" isn't updating debconf?
<pwnguin> _highly_ annoying to sit there for two hours playing "wait for the dialog"
<owh> I've upgraded Debian many, many times, why do you think I came to Ubuntu :)
<Kamping_Kaiser> --priority=high
<Kamping_Kaiser> or --priority=low
<owh> Yes, but asking a question 15 minutes into a 2 hour install is a recipe for sitting at the keyboard for another 1 hour 45 minutes afterwards.
<owh> Making the installation last 3 hours 45, rather than 2 hours.
<owh> Or in my case, 10 hours instead of 2.5 hours.
<Kamping_Kaiser> you can complain about too much info, or too little, not both at once ;)
<owh> (Only to find bugs that have been open since early December specifically relating to this upgrade :)
<owh> You have to realise that this is about manageability. I as a human am impatient and can only manage a few inane questions at a time.
<owh> I had a full days work ahead of me this morning, I've done none of it and the system restore will likely take another three to six hours, so if I'm lucky I'll only have lost 24 hours. Likely in reality it will be much more.
<pwnguin> isn't this why professionals have a hot spare?
<owh> Fundamentally that was what my email was about. It should not be like that. Our user experience should be better.
<owh> pwnguin: You mean on a laptop, sure, as soon as IBM supports a normal size drive in the PATA socket :(
<owh> Meanwhile, pwnguin, your suggestions were helpful and I'll spend some time incorporating them after I've restored my machine. Thanks for your time.
<pwnguin> honestly, i think the level of effort needed to make upgrades painless is something that will probably need money backing
<owh> Well, that in itself is a useful observation.
<soren> We invite people to help us test upgrades before the release. After release, if upgrade issues arise, it's perfectly likely that sysadmins will have devised workarounds for these issues. If we go ahead and release a new gutsy iso, those workarounds are likely to break. I am not very much in favour of doing anything at all that might break it for happy, current users of gutsy. Troublesome upgrades are bad, but breaking existing setups is inexcusable.
<pwnguin> your average college CS student isn't going to bother fixing the upgrade process when they can just as easily work around it and move on with life
<owh> soren: So how is one supposed to "prepare" for an upgrade?
<soren> Test it.
<pwnguin> heh
<pwnguin> in -server no less, this should be expected behaviour ^_^
<owh> soren: That assumes a whole level of infrastructure that isn't often there at all. Even in server environments - albeit single server environments.
<soren> owh: There's no way we can take everything into account. That's why we rely on the likes of *you* to test stuff so that we can fix it in time *before* release.
<soren> It's the curse of providing a flexible, useful system.
 * pwnguin wonders whether paying canonical the big bucks makes upgrades easier
<owh> So, the best way to do that is to build a virtual server, then upgrade that - of course that won't actually help in the case where the virtual server itself doesn't upgrade.
<owh> But it will help.
<soren> If all we provided was an OS where all you could change was 5-6 checkboxes and perhaps a hostname, upgrades would be easy. Fortunately, that's not the case.
<Kamping_Kaiser> pwnguin, probably paying a few CS students to do testing would be more effective
<soren> owh: Virtual servers are certainly handy for testing, yes.
<owh> Transferring that to my workstation, I suppose I could have built a virtual copy of my workstation and attempted to upgrade that.
<owh> As I said earlier, in production I'd build a new image from scratch and play with that, then once happy, deploy it, but it's pretty hard to do that for a workstation.
<soren> It's simple, really. We just need more *real* people, testing this on *real* setups, and telling us about the problems (or better yet: provide patches).
 * owh nods.
<soren> Upgrades should work.
<owh> ROTFL
<owh> Sorry.
<soren> And in the general case, they do.
<soren> !
<owh> Yes, it's the edge cases where it breaks.
<soren> Indeed. And if you have one of those edge cases, please do yourself and everyone else with similar setups a favour and test things before the release, so that we can fix it.
<nijaba> And as soren says, it is NOW that we need testers where the edges are when upgrading from dapper|gutsy -> hardy
<soren> As you have observed yourself, once the release is out, we're unlikely to touch it.
<owh> Yes.
<soren> ..so you might as well upgrade *before* the release and help us fix issues. There's *very* little point in waiting for months and months.
<owh> I have come to that realisation, though I confess that I had enough of "breaking" machines with Debian to want to move to something better - it seems that this might not be possible - just yet.
<soren> I'm not saying you should fire up the crack pipe and upgrade to hardy the day after gutsy releases like I do. Right around now would be a reasonable time, though.
<owh> That's very funny and I thank you for your humour.
<soren> Which part?
<pwnguin> crack pipe?
<soren> You upgrading now instead of later?
<owh> Living on the edge by installing Hardy the day after Gutsy releases.
<owh> I suppose I should start with making my Gutsy installation work first, then break it again in Hardy :)
<soren> That's what I do. I'd like a "crack" alias for whatever's the current development release. That'd save me the trouble of fiddling with sources.list once every 6 months.
<pwnguin> heh
<owh> You mean sid :)
<pwnguin> more like experimental
<soren> Too much confusion :)
<pwnguin> but upgrades arent even always stable entry windows till alpha1
<soren> Right.
<pwnguin> anyways, it'd be nice at least to see some more automated tests on packages
<soren> We do a lot of that already.
<soren> It's just not very good at exposing all the corner cases.
<soren> At all.
<pwnguin> where are the reports hiding at, exactly?
<soren> pwnguin: I'm not sure.
<owh> soren: To deal with corner cases perhaps we should find a way to "capture the essence" of a system, that is, the installed packages, their configurations and then automatically upgrade an image of such a machine.
<owh> Something like dpkg --get-selections.
<pwnguin> i say just keep a set of images around
<owh> Together with a copy of /etc
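A minimal version of that capture, assuming a Debian-style system (the selections list can later be replayed on a fresh install with `dpkg --set-selections < selections.txt` followed by `apt-get dselect-upgrade`); the function form and paths are illustrative:

```shell
#!/bin/sh
# Capture the "essence" of a machine before an upgrade experiment:
# the installed-package list plus a snapshot of its configuration.
capture_essence() {
    dest=$1 etcdir=${2:-/etc}
    mkdir -p "$dest"
    # Installed-package list (skipped when dpkg isn't available).
    if command -v dpkg >/dev/null 2>&1; then
        dpkg --get-selections > "$dest/selections.txt"
    fi
    # Snapshot of the configuration directory.
    tar czf "$dest/etc-backup.tar.gz" \
        -C "$(dirname "$etcdir")" "$(basename "$etcdir")"
}
```

Feeding that pair into a scratch VM gets close to owh's "upgrade an image of such a machine" idea without copying the whole disk.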
<soren> owh: I don't think that'll do much good.
<soren> owh: In fact, I think we do that already (install a random set of packages, test upgrade, rinse, repeat).
<owh> But my machine isn't random. I selected a whole set of packages to get something done. For some reason - or many - the upgrade is painful.
<owh> Capturing that isn't likely to be trivial, but I think it may be doable.
<soren> owh: You claim to have never touched a configuration file?
<owh> No, I claim that in many cases I've not touched a configuration file, but the upgrade process is still offering it to me. In a few cases I specifically edited a config file. In those cases sometimes a three way merge is needed. The vast majority of cases I did not touch the config.
<owh> I captured most if not all of the prompts during the upgrade.
<soren> If you haven't touched a config file and you're prompted about changes to it, that's a bug.
<soren> No doubt.
<owh> There were many.
<soren> Please file bug reports.
<soren> Otherwise, we'll never fix it :)
<owh> Fair enough.
<soren> pwnguin: http://people.ubuntu.com/~mvo/automatic-upgrade-testing/
<owh> That's a project for the morning.
<pwnguin> i knew there was one
<pwnguin> i couldn't find a blueprint for it or a UDS meeting
<owh> Gotta run. Later all. Thanks for the input.
<soren> https://blueprints.edge.launchpad.net/ubuntu/+spec/auto-dist-upgrade-testing
<pwnguin> ah, i was looking at the wrong release
<NineTeen67Comet> Help .. this morning at 6:03 the outside world can no longer see my web server. I've checked with my dns server (afraid.org), rechecked my router (Linksys WRT54GS), re-checked my server (apache2) and I'm out of ideas on where to track the loss of connection (I can get out from my server for aptitude updates, pings etc) ..
<nijaba> NineTeen67Comet: what's the URL of your web site?
<NineTeen67Comet> If I use the IP address (outside) it will access my main sites (default VirtualHost) site, but all the links are broken since they are URL links.
<NineTeen67Comet> http://www.openlug.com (one of 11 on the server)
<NineTeen67Comet> Outside IP is 122.145.71.123
<nijaba> hmmm...  it resolves to 122.145.71.122
<NineTeen67Comet> That's what I see too .. but when I use a "what's my ip" service it is 122.145.71.123 .. even afraid.org has this correct ip address...
<NineTeen67Comet> Who/what/where and how is that incorrect? .. sigh and I thought today was going to be a home work day .. lol
<NineTeen67Comet> Could it be GoDaddy (my registrar) being told to send my URL addresses to another dns server?
<nijaba> NineTeen67Comet: check your DNS entry for www and ensure that you are pointing to the right address
<NineTeen67Comet> Looks like it .. I've got 6 sites with afraid.org and they are all grouped with this IP address (even www, ftp., irc. and the URL w/out www) .. my subdomains too ..
<Jeeves_> Hi all!
<Jeeves_> Is anyone here interested in a installtest for Hardy on a Sunfire T1000 and a Sunfire X4500?
<Jeeves_> I'm going to produce these this week, I think
<Kamping_Kaiser> Jeeves_, i'm interested in trying a T1000 this week.
 * Kamping_Kaiser should have done it last week, so this week works for me
<Jeeves_> Kamping_Kaiser: I'll let you know when I get one
<Kamping_Kaiser> Jeeves_, cool. you likely to get one in the next week?
<Jeeves_> Kamping_Kaiser: It should ship tomorrow
<Jeeves_> I think it'll be thursday
<Kamping_Kaiser> 60day trial?
<Jeeves_> Yes
<Jeeves_> I'm going to run the Hardy release on two T1000's and a X4500
<Kamping_Kaiser> we are closing in on day 20 of ours
<Jeeves_> (We run nl.archive)
<Kamping_Kaiser> so putting ubuntu on this week would be handy
<Jeeves_> Kamping_Kaiser: I'll try to get it done
<Kamping_Kaiser> Jeeves_, whats your timezone?
<Jeeves_> https://weblog.bit.nl/files/2008/03/ubuntu-setup.png
<Jeeves_> Kamping_Kaiser: .nl (ehm, GMT+0100?)
<Kamping_Kaiser> hm.
<Kamping_Kaiser> thats 8/9 hours different to me, which is perfectly awkward for going through it together
<Jeeves_> :)
<Jeeves_> Ehm
<Jeeves_> So. It would be ok if I were really early, right?
 * Jeeves_ isn't very great with timezones :)
<Kamping_Kaiser> i'm +10.30 am i think (usually +9.30)
<Kamping_Kaiser> i need to work out the netinstall infrastructure too
<Jeeves_> Kamping_Kaiser: I'm working very early thursday
<Kamping_Kaiser> its been quite a while since i netinstalled a sun, and we dont have the infrastructure to do it at work :(
<Jeeves_> So if I'm awake at 05:00 AM, that would be ok for you, right?
<Kamping_Kaiser> Jeeves_, yes that would. i can stay at work until ~8.30pm my time, which is roughly 12:00 mid day your time (i think)
<jdstrand> henkjan: what version of ufw?
<Kamping_Kaiser> http://www.timeanddate.com/worldclock/city.html?n=5
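For what it's worth, the offset guessing above can be delegated to the tz database; a small sketch, assuming the zones involved are Europe/Amsterdam (Jeeves_) and Australia/Adelaide (Kamping_Kaiser):

```shell
#!/bin/sh
# Show the same instant in both participants' timezones. The tz database
# handles DST, so the "+9.30 or +10.30?" guesswork goes away.
when=${1:-now}
TZ=Europe/Amsterdam   date -d "$when" '+%Y-%m-%d %H:%M %Z  (Amsterdam)'
TZ=Australia/Adelaide date -d "$when" '+%Y-%m-%d %H:%M %Z  (Adelaide)'
```

(`date -d` is GNU date; on other systems the idea is the same with a different flag.)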
<henkjan> jdstrand: 0.13
<henkjan> jdstrand: should be some default icmp handling
<Jeeves_> Kamping_Kaiser: Coolio. We can play then :)
<Kamping_Kaiser> Jeeves_, :)
<Kamping_Kaiser> Jeeves_, do you have existing netinstall infrastructure setup? or do you have a "best" reference i should look at?
<jdstrand> henkjan: can you file a bug report with what used to work, what doesn't and what it will take to get it to work again?
<Jeeves_> Kamping_Kaiser: We have a pxe-booting infra
<jdstrand> henkjan: I hope to upload a new version today, and if the changes are small, I can get them in
<Jeeves_> No hardy support yet
<Jeeves_> And no sparc-support either
<henkjan> jdstrand: i'm looking at which icmpv6-types are needed for proper ipv6 autoconfiguration
<jdstrand> henkjan: before filing though, try 0.14
<Jeeves_> Kamping_Kaiser: And I hope we have it by thursday
<jdstrand> henkjan: (it's in the archive already)
<Kamping_Kaiser> Jeeves_, ah ok. i had a PITA of a time to get my sunblades installed. i'll try to get working sparc infra at work pre thursday
<jdstrand> henkjan: and IIRC, you are on gutsy? 0.14 should work much better on gutsy
<jdstrand> henkjan: but if you still have a problem, please file a bug report and I'll get it fixed
<henkjan> jdstrand: okay, i'll first check with the 0.14 version
<jdstrand> henkjan: great! please let me know how 0.14 works out for you (good or bad)
<henkjan> hmm, unpacking http://nl.archive.ubuntu.com/ubuntu/pool/main/u/ufw/ufw_0.14.tar.gz gives a ufw-0.13 directory
<zul> morning
<jdstrand> henkjan: check the changelog
<soren> henkjan: Use "apt-get source ufw" instead
<soren> henkjan: Or even better: Use bzr (but apt-get source will tell you this, IIRC)
<jdstrand> henkjan: yes, apt-get source works fine
<jdstrand> soren: that is odd though-- the tarball is 0.13.
<henkjan> i did an apt-get source on a hardy box, and that gives me a ~/ufw-0.14/
 * jdstrand nods
<soren> jdstrand: apt-get source renames it for you :)
<soren> jdstrand: Well... dpkg-source does, actually.
<henkjan> ah, okay
<jdstrand> soren: I was just curious why it was that at all
<jdstrand> soren: I did a bzr export
<jdstrand> maybe I need to look at that command some more
<soren> jdstrand: Oh? You don't need to do that.
<jdstrand> soren: do tell
<jdstrand> I am in 'trunk'
<soren> jdstrand: Just pass -I to dpkg-buildpackage, and it'll leave out .bzr
<soren> (I assume that's why you do the export)
<jdstrand> yes
<soren> Ok. Well, just pass -I to dpkg-buildpackage instead and you should be all set.
<soren> (if it's not a native package, you probably want -i instead, just FYI)
<fujin> I'm looking for a better CLi email client than Mutt, Anyone suggest one?
<fujin> too many nubs in #ubuntu
<_ruben> hehe
<zul> pine?
<henkjan> whats wrong with mutt?
<faulkes-> henkjan: try asking it to talk directly to exchange when sending mail and requiring authentication ;)
<faulkes-> otherwise, it's my favourite
<Jeeves_> Kamping_Kaiser: I just heard that the T1000's are in
<Kamping_Kaiser> Jeeves_, niiice :)
<Kamping_Kaiser> Jeeves_, i'll be taking a SunBlade150 to work to act as a test unit for doing netinstalls
<Kamping_Kaiser> hopefully that means i'll be able to do $stuff with the blade in a few days (unless we work ourselves out quick)
<Jeeves_> Kamping_Kaiser: I'll try to boot it tomorrow afternoon
<Kamping_Kaiser> Jeeves_, ok. thats tomorow night for me, so i'll probably be here :)
<Jeeves_> :)
<youngmusic> Very strange thing happening here: I got a server that reboots about every 5 minutes. There's nothing in the logs, and it even happens in single user mode. It's not a clean reboot either, just a power-off and the machine restarting. I begin to suspect a hardware problem, but does anyone know of other possibilities?
<faulkes-> depends if it's happening at regular 5min to the second intervals
<faulkes-> however, just "about" every 5 min, could be a heat issue, perhaps a dead fan for the cpu's
<youngmusic> no, not that strict. It can be 2 minutes or 10 minutes also
<youngmusic> yes, good idea. I'll check that out
<Jeeves_> youngmusic: What kind of machine is it?
<youngmusic> it was a bit dusty in there, but all fans are working. The cpu has passive cooling. It's also an airconditioned server room, so temperature should be safe
<Jeeves_> youngmusic: Does it have ilom or something?
<youngmusic> Jeeves_:  It's a dell poweredge, about 5 years old
<youngmusic> ilom?
<Jeeves_> Yes, or some kind of logging
<Jeeves_> check the BIOS for SEL or BMC stuff
<youngmusic> there's syslog
<youngmusic> messagebus
<youngmusic> (oops, I just realize that particular server still runs on fedora, not ubuntu. I hope that's not a big problem?)
<youngmusic> (being on the wrong forum, that is)
<Kamping_Kaiser> youngmusic, you'll need to ask #fedora
<Jeeves_> youngmusic: I mean hardware logging
<Jeeves_> Not linux-logging
<youngmusic> ah, no i don't think it has something like that
<Jeeves_> ok
<faulkes-> is it managed via drac or?
<faulkes-> or any of dell's management tools?
<youngmusic> seems there was a utility partition, but it's erased. (the server was installed before my time)
<youngmusic> Jeeves_:  in the bios itself, there is no logging.
<youngmusic> i'm gonna try and find a diagnostic disk
<coffeedude> dendrobates: morning.
<dendrobates> coffeedude: morning
<kris_ph> hello.. how do I remove nagios and its configuration.. everything.. since I want to reinstall it
<Kamping_Kaiser> kris_ph, apt-get --purge remove nagios
<Kamping_Kaiser> plus any relevent packages
<kris_ph> I got this error: Package nagios is not installed, so not removed
<kris_ph> but it is installed..
<kris_ph> I followed the steps/guide by ubuntu geek
<Kamping_Kaiser> perhaps it's called something else
<Jeeves_> dpkg -l | grep nagios ?
<zul> hi mathiaz
<kris_ph> dpkg -l | grep nagios <<< but no output.. it simply goes back to the terminal line..
<kris_ph> I followed this guide to install it.. http://www.ubuntugeek.com/nagios-network-monitoring-system-setup-in-ubuntu.html
<zul> mathiaz: survivied the snowfall?
<nijaba> kris_ph: you compiled nagios and installed it yourself.  Normal that it is not seen as a package
<kris_ph> nijaba: yeah..
<nijaba> kris_ph: to remove, you now have to do it by hand...
<kris_ph> nijaba: okay.. what will I do then?
<kris_ph> nijaba: this must be a mess.. I read somewhere in the forum (ubuntu)..pointing me to ubuntugeek.
<nijaba> kris_ph: look at the install part of your make file and check where the bits and pieces have been installed, then write to the ubuntugeek writer, asking that a warning be placed on this howto
<kris_ph> nijaba: I just discovered this: https://help.ubuntu.com/community/Nagios2 which I want to follow
<nijaba> kris_ph: look at the first comment on the ubuntugeek how to...  the reply is "because I want to have the latest version".
<kris_ph> nijaba: is there a way to do it?
<nijaba> kris_ph: nothing simple, no.
<kris_ph> nijaba: uhuh... any recommendation to undo this? I have a file manager installed on my box.. I could easily view files graphically...
<nijaba> kris_ph: as I said, look at the install part of the make file you invoke to install it in the first place
<nijaba> kris_ph: that is the only way to tell where bits an pieces have gone
<kris_ph> nijaba: how to see it?
<kris_ph> nijaba: where could I find it?
<nijaba> kris_ph: in the directory where you uncompressed the tarball you downloaded
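nijaba's suggestion, reading the install target of the Makefile to see where files went, can be sketched like this; the sample Makefile and its paths are invented so the snippet is self-contained, and in real use you would run only the awk line inside the directory where you unpacked the tarball:

```shell
# Sketch: show what a source-built package's "install" target does, so you
# know which files to remove by hand.  The sample Makefile below (and its
# paths) is made up purely to keep this runnable anywhere.
cd "$(mktemp -d)"
printf 'all:\n\techo build\n\ninstall:\n\tcp nagios /usr/local/bin/nagios\n\tcp -r etc /usr/local/nagios/etc\n' > Makefile

# Print lines from an "install*" target down to the next target.
awk '/^install/ {p=1; print; next} /^[^ \t#]/ {p=0} p' Makefile
```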
<kris_ph> nijaba: ows.. can i simply delete those subfolders under nagios?
<nijaba> kris_ph: I am posting the following comment on this blog: "Could you please put a HUGE warning at the top of this how to to warn people that installing a package using this method is not the recommended way to do things and should be reserved to experts that will not have trouble uninstalling it by hand?  Thanks a lot."
<nijaba> kris_ph: unfortunately I will not be able to walk you through this...
<kris_ph> nijaba: are you one of the developers for ubuntu?
<kris_ph> nijaba: how about following the guide located at https://help.ubuntu.com/community/Nagios2 ? would it overwrite it?
<nijaba> kris_ph: I am doing the product management, but others on this channel are the dev.
<kris_ph> nijaba: thank you.. I must believe you..
<nijaba> kris_ph: you can try, but that may lead to inconsistencies
<faulkes-> heya nijaba
<nijaba> hello faulkes-
<kris_ph> nijaba: how to remove this one.. ln -s /etc/init.d/nagios /etc/rcS.d/S99nagios?
<kris_ph> hello faulkes. M glad you're back.. I'm sure you can help me with this..
<nijaba> kris_ph: "sudo rm /etc/rcS.d/S99nagios" should do the trick.  You can also use update-rc.d
<kris_ph> nijaba: thanks.. do I need to run update-rc.d too ?
<faulkes-> hrmm, now to see about remotely installing -server via drac onto this 1955 blade
<faulkes-> should be lots of fun I imagine
<nijaba> kris_ph: no you do not, but using it can help you remove all references to nagios in the init.d scripts
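A sketch of the cleanup nijaba describes: listing every SysV rc symlink that still points at the hand-installed init script before deleting it. The temp tree below is a stand-in for the real filesystem so the snippet runs anywhere; on the actual server you would run the find against /etc/rc*.d directly.

```shell
# Sketch: before "sudo rm /etc/rcS.d/S99nagios", list every rc symlink still
# pointing at the hand-installed init script.  The temp tree stands in for /
# so the snippet is self-contained.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/etc/init.d" "$ROOT/etc/rcS.d" "$ROOT/etc/rc2.d"
ln -s ../init.d/nagios "$ROOT/etc/rcS.d/S99nagios"

# Any symlink in any runlevel directory whose target mentions nagios:
find "$ROOT"/etc/rc*.d -maxdepth 1 -lname '*nagios*'

# update-rc.d can drop them all at once (-f forces removal even while
# /etc/init.d/nagios still exists):
#   sudo update-rc.d -f nagios remove
```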
<kris_ph> nijaba: thank you.. so..all I need now is to stop the service, right? and install it according to the ubuntu guide
<faulkes-> zul: is there a ebox-mail component in the works?
<zul> faulkes-: not that I know of, the ebox guys were still working with it and it doesnt use dovecot or postfix
 * faulkes- nods
<faulkes-> question was posted in the forum thread, I'll reply to it
<mathiaz> kirkland: wrt bug 39157, rather than giving a detailed explanation on how-to create a debdiff, I'd suggest to give links to documentation on wiki.ubuntu.com (eg: https://wiki.ubuntu.com/PackagingGuide/Recipes/Debdiff)
<kirkland> mathiaz: good idea
<kirkland> mathiaz: done.
<zul> mathiaz: im doing the samba upload later today
<mathiaz> kirkland: most of the packaging questions should have an answer in the packaging guide - https://wiki.ubuntu.com/PackagingGuide/
<kirkland> mathiaz: i see
<sommer> mathiaz: hello, I ran into an issue with slapd and apparmor this weekend
<mathiaz> kirkland: it used to be a docbook guide maintained by the doc team
<mathiaz> kirkland: but now it has been moved to the wiki so that anyone can edit it.
<sommer> mathiaz: slapadd/slapcat can't save or import when in enforce mode
<mathiaz> kirkland: dholbach has been responsible for the migration and would welcome any comments on that
<mathiaz> sommer: do you see any error messages ?
<faulkes-> mathiaz: for x64 intel -server, which iso would I want to grab?
<mathiaz> sommer: I mean audit messages
<sommer> yep, it has to do with the ldif file
<faulkes-> gonna see about testing a5 on a dell 1955 blade
<sommer> if you add the ldif file to usr.sbin.slapd it will work
<mathiaz> faulkes-: you can try -amd64
 * faulkes- nods
<mathiaz> faulkes-: -i386 should also work
<faulkes-> kk
<faulkes-> yeah, but 4 x64 cpu's and 4gb of ram, I'll go with x64 first try
<sommer> is there a way for the profile to recognize *.ldif?
<mathiaz> sommer: meh.. weird
<zul> damn 'puter
<mathiaz> sommer: slapadd and slapcat should run with slapd stopped
<sommer> mathiaz: correct
<mathiaz> sommer: as they modify the backend db directly
<mathiaz> zul: have you seen the new upload from steve this weekend ?
<mathiaz> zul: they're also preparing an upload for 3.0.28a that has been released over the weekend
<zul> mathiaz: yep Im using it as a pase
<zul> base even
<zul> hmmm...maybe we should wait then
<mathiaz> zul: it may be worth sending them all the relevant patches and making sure they can be part of 3.0.28a
<mathiaz> zul: also I'm not sure which ones are relevant to Debian.
<zul> mathiaz: I can do that
<zul> I think the winbind files one and the documentation fixes
<mathiaz> zul: for the documentation fixes, I'm not sure that it will get accepted. steve grumbled when I mentioned the bug last week
<mathiaz> zul: but it's worth a try
<zul> Ill go through the svn see what they have
<mathiaz> zul: you may also check with steve for a FFexception
<zul> sure
<sommer> mathiaz: stopped or started when using slapadd/slapcat the apparmor error still happens
<sommer> mathiaz: here's the error message from slapcat: http://paste.ubuntu-nl.org/59142/
<zul> mathiaz: I have to bug slangasek about some ebox stuff anyways so I'll bring it up with him anyways
<dantalizing> mathiaz: regarding the openvz info you asked me to document, was there specific info you wanted?  i put an initial page up here: http://wiki.ubuntu.com/OpenVZ
<ubotu> Launchpad bug 39157 in samba "Samba initscript does not conform to the LSB Spec." [Low,Confirmed] https://launchpad.net/bugs/39157
<faulkes-> oki's, went for a6 to test out, we'll see how that goes
<kirkland> the map where you select your timezone on installation, is that provided by tzdata?
<iclebyte> having trouble getting my head around ubuntu's method of using virtual hosts.. i've created 2 entries in /etc/apache2/sites-available one called iclebyte.ubuntu1 and one called dotproject.ubuntu1 both have been enabled and the symlinks are listed in sites-enabled however only the iclebyte.ubuntu1 vhost works. the dotproject.ubuntu1 vhosts points to the default pages...
<iclebyte> any ideas?.. i'm not getting anything in the logs thats helpful
<\sh> iclebyte: you need to enable them in sites-enabled via a symlink from sites-available
<iclebyte> thats what i said i've done..
<iclebyte> " the symlinks are listed in sites-enabled"
<iclebyte> but only the first vhost is working..
<\sh> iclebyte: can you pastebin both vhost configs pls?
<iclebyte> yup 1 sec
<mathiaz> sommer: you've got this error message even when slapd is not running ?
<iclebyte> http://pastebin.com/d761bca7d
<sommer> mathiaz: correct
<\sh> iclebyte: where is your namevirtualhost directive?
<mathiaz> zul: great - although I don't think we'll need a FFe for 3.0.28a as this is a bug-fix only release. But mention it just to be sure.
<mathiaz> sommer: that is ... strange
<iclebyte> \sh, I read on a forum somewhere that it should only be in the 'default' vhost file..
<\sh> iclebyte: hehe...
<\sh> iclebyte: if the default file is not enabled, you don't have one
<mathiaz> jdstrand: have you tested slapcat/slapadd when generating the slapd profile ? See the backlog for sommer's problem
<\sh> iclebyte: you need to add somewhere NameVirtualHost *
<iclebyte> the default host is enabled ....
<sommer> mathiaz: I thought so, if you add the ldif file to /etc/apparmor.d/usr.sbin.slapd with "w" or "r" depending it works fine (after reloading the profile)
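sommer's workaround, spelled out as a profile excerpt. This is an illustration, not the shipped profile: the ldif path and the rw mode are example values, and the rest of the profile body is elided.

```
# /etc/apparmor.d/usr.sbin.slapd -- excerpt (illustrative).  The added rule
# lets slapcat/slapadd touch the ldif file; use "r" for slapcat reads, "w"
# (or "rw") if slapadd must write.  The path is an example.
/usr/sbin/slapd {
  ...
  /var/backups/*.ldif rw,
}
```

After editing, the profile has to be reloaded, e.g. with `sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.slapd`.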
<\sh> iclebyte: hmm...what says error log?
<mathiaz> dantalizing: great - that's a good starting point. Could you document a little bit more what is available in hardy ?
<\sh> iclebyte: and btw...your entry "serveralias"..you don't need it with the same name as ServerName...
<dantalizing> mathiaz: will do
<iclebyte> do i need the serveralias entry atall?
<iclebyte> i seem to have fixed it now ... some how.. thanks for your time anyway! =)
<mathiaz> dantalizing: could you also list the how-to pages on help.ubuntu.com ?
<mathiaz> dantalizing: a first task would be to update the how-to pages to make them relevant for hardy.
<\sh> iclebyte: nope....you need it only for e.g. "ServerName www.yourdomain.tld" and you want to point "yourdomain.tld" to the very same webspace..
<dantalizing> mathiaz: k
<iclebyte> oh right.. that doesnt really apply on my virtual machines then =P. cheers for your help though dude.
<mathiaz> dantalizing: if the kernel has openvz enabled, it should be much easier to get started with openvz in hardy
<\sh> iclebyte: you're welcome
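Putting \sh's pointers together: a minimal name-based vhost setup of that era. The hostnames match the ones iclebyte mentions, but the DocumentRoots are invented, and NameVirtualHost is Apache 2.2-era syntax (later removed in 2.4).

```
# Somewhere loaded before the vhosts, e.g. /etc/apache2/ports.conf:
NameVirtualHost *:80

# /etc/apache2/sites-available/iclebyte.ubuntu1
<VirtualHost *:80>
    ServerName iclebyte.ubuntu1
    DocumentRoot /var/www/iclebyte
</VirtualHost>

# /etc/apache2/sites-available/dotproject.ubuntu1
<VirtualHost *:80>
    ServerName dotproject.ubuntu1
    DocumentRoot /var/www/dotproject
</VirtualHost>
```

Each site is then enabled with `sudo a2ensite <name>` (which creates the sites-enabled symlink) followed by an apache2 reload; without a matching NameVirtualHost directive, only the first vhost ever matches, which is the symptom iclebyte saw.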
<dantalizing> mathiaz: yes
<dantalizing> mathiaz: what i dont understand is what is "a separate flavour (of the kernel) via the debian/binary-custom.d/ infrastructure"
<dantalizing> this is where ben told the openvz devs the patches would be applied....if i'm using the right terminology
<dantalizing> i've downloaded and built the kernel with the patches, so i know its in, i just dont know how that translates to a universe package, or where to see the end result
<mathiaz> dantalizing: which kernel did you download ?
<dantalizing> from git, ubuntu/ubuntu-hardy
<mathiaz> dantalizing: looking at the ubuntu-hardy git tree, I don't see any openvz flavour in debian/binary-custom.d/
<dantalizing> mathiaz: it was just loaded friday, could that be the reason?
<mathiaz> dantalizing: well - I'm looking at gitweb on kernel.ubuntu.com
<mathiaz> dantalizing: I haven't done a git checkout of the kernel tree
<mathiaz> dantalizing: did you see an openvz flavour in debian/binary-custom.d/ ?
<zul> yeah I dont see it anywhere either
<dantalizing> mathiaz: lemme check my laptop at home
<mathiaz> dantalizing: There is a xen directory in there - I'd expect an openvz there too.
<dantalizing> like i said, i didnt know what to look for so i did a clone and just grepped for some of the patch code
<dantalizing> mathiaz: its quite possible i misunderstood the exchange, shall i fwd you the email i'm looking at?
<mathiaz> dantalizing: hum. Reading through the changelogs, I don't see any references to openvz
<mathiaz> dantalizing: sure
<dantalizing> mathiaz: done
<mathiaz> dantalizing: ok - read through it.
<mathiaz> dantalizing: the patches haven't been loaded in the kernel yet.
<dantalizing> ic
<mathiaz> dantalizing: you may drop by #ubuntu-kernel to ask what's going on on the openvz front.
<dantalizing> ok
<mok0> ping soren
<nijaba> mathiaz: I thought that an OpenVZ kernel was going to be put in universe, not that we would have it in our kernel
<mathiaz> nijaba: that is the case
<mathiaz> nijaba: there will be an -openvz the same way as -xen flavour
<mathiaz> nijaba: but the source is included in our kernel tree
<dantalizing> where do i find debian/binary-custom.d/ ? a subdirectory of the kernel source?
<nijaba> mathiaz: ok, I did not get that part, sorry...
<mathiaz> dantalizing: yes
<mathiaz> dantalizing: at the top of the tree, there is a debian/ directory
<dantalizing> i knew that....er... not
<dantalizing> mathiaz: ok, i'll not assume we have a -openvz flavor yet with the documentation
<mathiaz> dantalizing: well - it seems that ben said he would accept the openvz patches.
<mathiaz> dantalizing: so I'll try to figure out what's happening on that front
<dantalizing> mathiaz: you still want me to ask in #ubuntu-kernel, or wait?
<mathiaz> dantalizing: I'd ask in #ubuntu-kernel.
<mathiaz> dantalizing: if openvz will be available in the next upload, I'd wait a bit for updating the documentation
<mathiaz> dantalizing: once there is a -openvz flavour in hardy universe, it will be easier to get things tested and update the documentation.
<dantalizing> mathiaz: ok
<jdstrand> mathiaz, sommer: I am aware of the slap* apparmor stuff and need to think about the fix more. I can say that redirecting slapcat's stdout works fine
<sommer> jdstrand: cool, thanks
<zul> sommer: there should be a whole bunch of ebox fixes this week hopefully as well
<sommer> sweet, I wrote up the ebox section to the docs, but haven't committed it yest
<sommer> yet even... probably will this evening after another proof read
 * faulkes- watches alpha6 install via dsl to the colo via drac
<faulkes-> interesting little process
<faulkes-> and given my new employer seems cool with me testing out ubuntu alphas on our hardware so I can report back to ServerTesting, kinda cool
<sommer> I want a colo drac :-)
<sommer> sounds like a new energy drink... heh
<mruiz> hi all
<sommer> hey
<faulkes-> heh
<faulkes-> if it's gonna be a drink, it's going to have alcohol in it
<brewmaster> is there a netinstall for kubuntu similar to debian?
<brewmaster> i'm trying to install on a friend's comp but I think his CD rom is bugging out
<brewmaster> hangs at random %'s during the final step of the install
<Centaur5> brewmaster: https://help.ubuntu.com/community/Installation/QuickNetboot
<brewmaster> Centaur5, thanks, I'll give it a try
<brewmaster> "There should be no other DHCP servers running in the subnet"
<brewmaster> what about my router?
<Centaur5> Then you'll have problems if the client gets an IP from your router instead of the server.
<sdrowkcab> hello
<zul> mathiaz: im going to upload samba since there is no eta for 2.0.28a
<sdrowkcab> what hapens when you install a LAMP server?
<nxvl> hi all!
<nxvl> what are the efforts focused on in this part of the development cycle?
<sdrowkcab> When I install a LAMP server will it install the standard ubuntu server plus apache etc.?
<sommer> sdrowkcab: yeppers
<zul> nxvl: bug fixing
<sdrowkcab> So it just saves the hassle of doing it manually?
<nxvl> zul: only bug fixing?
<zul> nxvl: pretty much..
<mathiaz> nxvl: there is a lot of stuff that needs attention
<sommer> sdrowkcab: yep, but if you don't want mysql for instance you'd probably want to install the packages by themselves
<sdrowkcab> I want to set up a game server plus web server so I think I should go for LAMP. right?
<nxvl> mathiaz: a lot of stuff as in a lot of bugs?
<sommer> sdrowkcab: sure, there's a ton of info regarding the LAMP stack if you're just starting out
<mathiaz> nxvl: yes
<nxvl> mathiaz: ok, i will take a look at some bugs :D
<sdrowkcab> I already have ubuntu and xubuntu I just wanted to know the difference between LAMP and non-LAMP installation
<sdrowkcab> thanks
<sdrowkcab> btw Is ubuntu server better than freeBSD?
<sommer> oh ya, bsd isn't even linux :-)
<mathiaz> nxvl:  I've come accross the tiny-erp package
<mathiaz> nxvl: https://bugs.launchpad.net/ubuntu/+source/tinyerp-server/
<sommer> heh... I think it's more what you want to do with it that really matters
<mathiaz> nxvl: there seems to be a couple of bugs that are easy to fix
<nxvl> btw, are there any efforts already on writing a centralized administration console for common services?
<nxvl> mathiaz: thnx :D
<sommer> nxvl: I'm not sure if this is exactly what you're looking for, but it's an interesting project: https://hosted.fedoraproject.org/func/
<mathiaz> nxvl: no. That wouldn't be appropriate for the current release cycle.
<mathiaz> nxvl: but it's something that could be discussed at the next UDS
<mathiaz> nxvl: you should register a blueprint if you're interested in that.
<nxvl> mathiaz: thats what i meant, i want to go to the UDS with something started
<mathiaz> nxvl: well - if you're interested in this topic, starting to do some background research and drafting a spec is a good step to take
<nxvl> zul: btw, did you take a look at the smb.conf(5) patch i uploaded?
<zul> nxvl: yes its in there
<nxvl> mathiaz: yep, i will (want to) start some research, drafting and some development before UDS
 * faulkes- prepares ServerTesting template
<nxvl>  /j ubuntu-driver
<nxvl> sorry about that :P
<soneil> nxvl: good luck .. something I'd love to see, but always seems to meet opposition (the old "real admins use nipple clamps" argument )
<nxvl> soneil: yes, the problem with server development is that the "users" are people who actually know what they are doing and want total control of their systems
<nxvl> soneil: but i have some ideas that maybe can work
<sommer> mathiaz, jdstrand, lamont: not sure who should handle this, but I just noticed that the bind9 apparmor profile is named apparmor-profile...
<lamont> sommer: sigh.  what should it be named>?
<jdstrand> sommer: in the packaging
<sommer> probably usr.sbin.named
<jdstrand> sommer: when installed its usr.sbin.named isn't it?
<sommer> not from the package I just installed
<lamont> -4 changed it
 * jdstrand goes checks on that
<lamont> -6 will have the fix
<jdstrand> lamont: oh-- you changed it? ok then
<jdstrand> lamont: yes, it the absolute path to the binary it protects, with '/' replaced with '/'
<lamont> with .
<jdstrand> haha
<jdstrand> '/' with '.'
<sommer> heh... thanks lamont
<jdstrand> so yes, usr.sbin.named
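The naming rule jdstrand spells out can be sketched as a one-liner:

```shell
# AppArmor profile files are named after the absolute path of the confined
# binary, with the leading "/" dropped and every other "/" turned into ".".
profile_name() {
    echo "$1" | sed -e 's,^/,,' -e 's,/,.,g'
}

profile_name /usr/sbin/named   # -> usr.sbin.named
profile_name /usr/sbin/slapd   # -> usr.sbin.slapd
```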
<lamont> sommer: is there a bug open for this?
<sommer> lamont: don't think so I just discovered it going over the docs
<sommer> I can file one if you'd like
<lamont> the sad? part is that I _just_ uploaded -5
<lamont> sommer: sure.
<sommer> ok, will do
<lamont> that'll give me something to request the sync against. :-)
 * lamont waits for a bug number so that he can commit and upload
<sommer> lamont: Bug #200739 filed
<ubotu> Launchpad bug 200739 in bind "bind9 apparmor profile is named apparmor-profile" [Undecided,New] https://launchpad.net/bugs/200739
 * lamont migrates the bug to bind9 where it belongs
<sommer> woops
<lamont> -rw-r--r-- root/root       587 2008-03-10 14:36 ./etc/apparmor.d/usr.sbin.named
<sommer> lamont: should the bind user have write access to /etc/bind ?
<jdstrand> sommer: it needs it for dynamic updates
<lamont> sommer: not generally, no
<sommer> jdstrand: ya, I noticed that
<lamont> 2755 root:bind
<jdstrand> sommer: but the DAC (ie unix permissions) are still enforced
<jdstrand> sommer: so even though the apparmor doesn't stop it, the DAC will
<jdstrand> s/apparmor/apparmor profile/
<sommer> so if I have a master - slave scenario /etc/bind should be 2755 bind:bind ?
 * lamont can hardly wait for hardy, so he can turn on selinux instead of apparmor
<jdstrand> 2775
<lamont> I'd use /var/lib or /var/cache for secondary stuff
<lamont> and/or updates
<lamont>  /etc shouldn't be writable by the bind user
<lamont> sadly, the jnl files go in the same place as the zone data, which hurts
<jdstrand> lamont: ideally no, but journal files get written wherever the zone files are
<jdstrand> oh-- yes
<jdstrand> you knew that ;)
<lamont> jdstrand: and that's why the package delivers an empty /var/lib/bind
<lamont> .jnl files don't belong in /etc, so let's move the zone data to fix  that... :(
<sommer> lamont: gotcha, can you take a look at: http://doc.ubuntu.com/ubuntu/serverguide/C/dns-configuration.html
<sommer> if you only add the config for a slave the zone files want to go into /etc/bind
<sommer> should that be documented different?
<lamont> slave zone goes in /var/cache/bind
<sommer> lamont: do you have to configure that?
<lamont> in fact, slave zones can go in the config without a leading path name, since named's working directory is /var/cache/bind
<lamont> so just drop the /etc/bind/ from the paths for secondary
<sommer> gotcha
<lamont> for dynamic, you'll need to have /var/lib/bind/ in front of the zone data.
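lamont's layout advice, collected into a named.conf.local sketch. The zone names, the master's IP, and the ddns-key name are all invented for illustration:

```
// /etc/bind/named.conf.local -- illustrative zone names throughout.

// Master zone with static data: the zone file may live in /etc/bind.
zone "example.com" {
    type master;
    file "/etc/bind/db.example.com";
};

// Slave zone: no leading path, so the file lands in named's working
// directory, /var/cache/bind (putting it in /etc would violate the FHS).
zone "example.org" {
    type slave;
    file "db.example.org";
    masters { 192.0.2.1; };
};

// Dynamically updated zone: the .jnl journal goes next to the zone file,
// so keep it in /var/lib/bind, which the package ships empty for this.
zone "dyn.example.com" {
    type master;
    file "/var/lib/bind/db.dyn.example.com";
    allow-update { key "ddns-key"; };
};
```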
<jdstrand> lamont: have you uploaded -6 yet?
 * lamont points at "Configuration Schema:" in /usr/share/doc/bind9/README.Debian.gz
<lamont> to debian, yes.  it'll be in the dinstall run
<lamont> tomorrow.
<lamont> somewhere in there I should subscribe ubuntu-archive and make it a sync request
 * lamont goes to do that
<jdstrand> lamont: you are too fast
<sommer> lamont: thanks I'll look through that :-)
<jdstrand> lamont: this talk of /var/lib/bind made me check the apparmor profile
<lamont> jdstrand: the commit was waiting for the bug number... :-)
<lamont> jdstrand: so is it broken?
<jdstrand> it should have '/var/lib/bind/* rw,'
 * lamont ponders the meaning of 'DAC'
<jdstrand> discretionary access controls
<jdstrand> as opposed to MAC
<jdstrand> sorry for the jargon
<lamont> np.  I expect that directory perms are MAC though, no?
<lamont> anyway, not sure what, if anything, a proper install should need to write in /etc.
<jdstrand> apparmor provides the MAC, and it is supplemental to the unix permissions
<lamont> ah, ok
<jdstrand> so if the unix perms so 'no', apparmor will not override that
<jdstrand> (think the most restrictive wins in this case)
 * jdstrand can't type
<jdstrand> say 'no'
<lamont> right.
<jdstrand> lamont: I only did the /etc/bind as someone will undoubtedly just use /etc/bind for their zone and dynamic updates
<lamont> so I'll ignore that for the moment, and let you file a bug on bind9 with a patch to the apparmor-profile (vs -6) and then we'll catch it after bind9 gets a chance to age for a couple of days?
<lamont> there are lots of places that use /etc/bind in ways it was not meant to be used... like say, for secondary zones
<jdstrand> lamont: no problem-- I'd like to test it some more anyway
<lamont> (putting secondary zones in /etc violates FHS)
 * jdstrand nods
<lamont> "Every zone file contains 3 resource records (RRs): an SOA RR, an NS RR and a PTR RR."
<lamont> Bzzzzt!
<lamont> every zone contains an SOA RR, one or more NS RRs and probably other data.
<sommer> talking about the docs?
<lamont> (reverse zones tend to have PTR RRs to be useful, forward zones tend to have A and/or AAAA RRs as well as other stuff)
<lamont> yes
<lamont> overview, final para
<sommer> yeah, that line was in the original doc, which wasn't very good
<sommer> thanks, I'll reword that
 * lamont has come to hate the title "caching nameserver", despite its ubiquitous use.
<lamont> all nameservers cache
<lamont> some nameservers are authoritative
<sommer> heh, why sounds like it will give you money :-)
<lamont> some nameservers will accept recursive queries.
<lamont> some do all 3
<sommer> should that section be renamed?
<lamont> dunno
<lamont> they're the terms everyone expects...
<sommer> ya, I referred to the DNS howto for a lot of that info
<lamont> "caching only nameserver" is a common term, which means a nameserver that is only authoritative for 0.0.127.in-addr.arpa, localhost, and friends.
<sommer> I'll mention that you can do all three types on one server
<lamont> yeah.  my nameserver, for example, is primary for a bunch of zones, and secondary for a few others
<sommer> cool, thanks for the feedback lamont
<sommer> I appreciate it
<lamont> (and allows recursion from the home network, too)
<soren> mok0: You pang me (much) earlier?
<mok0> soren, still there?
<soren> mok0: I was *just* about to leave.
<mok0> soren: I just wanted to discuss with you my bug 200648
<ubotu> Launchpad bug 200648 in kvm "guest machines falls behind time" [Undecided,New] https://launchpad.net/bugs/200648
<soren> mok0: Waaay too tired. Sorry :)
<mok0> soren: np
 * soren heads to bed
<mok0> see you later!
<owh> After supplying the patch for the ntp bug with the kind assistance from kirkland, I'm looking at the qa-hardy-server list: https://bugs.launchpad.net/ubuntu/+bugs?field.tag=qa-hardy-server and all that is left is a long running bug about cups-pdf. Bug #147551, posted 1 Oct, 2007. I'm not sure it even has a good description, let alone a handle on what's causing it.
<owh> Hmm, the bot didn't pick up the bug: https://bugs.launchpad.net/ubuntu/+source/cupsys/+bug/147551
<owh> Any ideas?
<mok0> owh: looks complex
<owh> They always do, but I've found that most of the time, the fix is a single line - the trick being to find that line :)
<owh> More than anything, there appears to be a lot of noise in the bug.
<mok0> owh: there's a fix in a comment from 2007-12-06
<mok0> owh: yeah, hard to see if it's just a configuration error
<mok0> owh: can you reproduce it?
<owh> No, I'm not seeing anything like that. And I don't see aa-complain in my packages list either. Not sure where it's from yet.
<owh> Soh, I read: sudo aa-complain cupsd as sudo apt-get install aa-complain - not sure what my brain is up to this morning :)
<owh> s/Soh/Doh/
<owh> Ah, a bit of RTFM with aa-complain, it doesn't fix it, just hides the error in the syslog.
<mok0> hm
<sommer> owh: aa-complain is part of apparmor
<owh> Yeah, my brain wasn't reading what the screen said :)
<owh> I'm reading https://bugs.launchpad.net/ubuntu/+source/cupsys/+bug/152537 which appears to be the same bug.
<mathiaz> owh: it's a long bug and boils down to non-standard configuration for users.
<mathiaz> owh: most of them have non-standard home directories, or symlinks to somewhere else.
<owh> So, is this a real bug, or a support request?
<mathiaz> owh: hum... good question
<mok0> mathiaz: there are loads of bugs in Ubuntu related to having your home directory on an automounted nfs share
<mathiaz> mok0: bugs related to AppArmor or bug in general ?
<owh> mok0: Perhaps that's food for another tag?
<mok0> mathiaz: various general bugs
<mok0> owh: could be
<mathiaz> bug 147551 is really about cups-pdf, apparmor profile and non standard configuration
<owh> The second bug I showed seems to start in the same place, but travels to a different destination.
<owh> I'm beginning to think that it's a support request.
<mok0> owh: yeah
<owh> Not that there's anything wrong with that, but we cannot come up with a patch for it :)
<mok0> owh: it should be moved to questions
<mathiaz> owh: well - there is a solution in the bug
<kirkland> owh: mathiaz and I were talking earlier today about getting more of the scripts in /etc/init.d LSB-compliant
<mathiaz> owh: which is to look at the apparmor messages and update the profile for cups-pdf
<kirkland> owh: seems that many of those scripts are missing a "status"  action item
<mathiaz> owh: there are also some issues in gutsy if the directory is on an nfs share
<mathiaz> owh: kirkland: right
<mathiaz> that could be another task to look at
<owh> mathiaz: Specifically related to this bug, or in addition to this bug?
<mathiaz> I've added an item to the ServerTeam Roadmap
<kirkland> owh: just an item in general, some relatively low hanging fruit, lots of little shell scripting patches
<mathiaz> owh: it's a general tip when debuging apparmor problems
<owh> mathiaz: You mean, if you have an nfs share directory and you're having apparmor issues - which this cups-pdf bug report looks like, then you're saying, that's a debugging tip?
<owh> kirkland: Yes, but what output should the "status" call make?
<kirkland> owh: it should tell you whether or not the given service is running
<owh> kirkland: Is there a specified format / exit code for that?
<kirkland> so something like:
<kirkland> root@t61p:~/bin# /etc/init.d/ntp status
<kirkland>  * NTP server is running.
<kirkland> root@t61p:~/bin# /etc/init.d/apache2 status
<kirkland>  * Usage: /etc/init.d/apache2 {start|stop|restart|reload|force-reload|start-htcacheclean|stop-htcacheclean}
<mathiaz> owh: there is a specific format for the output of the status command IIRC
<kirkland> NTP has a working "status" section, but Apache does not
<mathiaz> owh: the lsb standard documentation has it
<kirkland> mathiaz: owh: yes, definitely in the LSB spec
<kirkland> i can dig that up if you want me to...
<mathiaz> however, most of the init script in debian have their own status output :/
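A minimal sketch of the missing "status" action kirkland describes. The pidfile path is an example, and the exit codes (0 = running, 1 = dead but pid file exists, 3 = not running) follow the LSB init-script spec:

```shell
# Minimal LSB-style "status" action for an /etc/init.d script.
# Exit codes per the LSB spec: 0 = running, 1 = dead but pid file
# exists, 3 = not running.  The pidfile path passed in is an example.
status_of() {
    pidfile="$1"; name="$2"
    if [ ! -f "$pidfile" ]; then
        echo " * $name is not running."
        return 3
    fi
    if kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo " * $name is running."
        return 0
    fi
    echo " * $name is dead but its pid file exists."
    return 1
}

# Demo against the current shell, which is certainly running:
PIDFILE="$(mktemp)"; echo $$ > "$PIDFILE"
status_of "$PIDFILE" "demo"    # prints " * demo is running."
```

Inside an init script this would be wired up as an extra case branch, e.g. `status) status_of /var/run/foo.pid foo ;;`.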
<owh> Cool, in my spare life, I'll have a look at that, but meanwhile what do we do about this outstanding qa-hardy-server bug - I know I can remove the tag, that will make it go away :)
<kirkland> mathiaz: I'll prepare a report of all of the init scripts in a hardy-server install, listing which ones are missing a "status" section by Wednesday's meeting
<owh> kirkland: I'll give you a hand when I have a mo.
<mathiaz> kirkland: great - it should be that hard to automate
<kirkland> mathiaz: nope, just an xargs ;-)
<owh> mathiaz: You mean should not right :)
<mathiaz> kirkland: I've listed in the Roadmap a link to the wiki page in w.debian.org where they track this effort
<kirkland> mathiaz: ah, nice
<owh> kirkland: Uh, that's only for the scripts you have installed, not for all of them.
<owh> kirkland: Unless you're doing something else funky.
<kirkland> owh: well, i say we target the ones in a Hardy Server install
<mathiaz> kirkland: implementing the status command is the easiest task.
<kirkland> mathiaz: yes, I agree
<kirkland> mathiaz: it's also the most useful
<mathiaz> kirkland: defining proper dependencies is harder
<owh> kirkland: That's only true if you install *all* the tasks I suppose.
<mathiaz> kirkland: I'd start by making a list in main
<kirkland> mathiaz: it's one of the things i've heard many RH admins complain about when making the switch to Ubuntu
<mathiaz> kirkland: a list from packages in main
<mathiaz> kirkland: agreed
<owh> Anyone got any issues with me removing the qa tag from https://bugs.launchpad.net/ubuntu/+source/cupsys/+bug/147551, or is there a better way?
<mathiaz> owh: well - I'm convinced we should remove it.
<owh> Anyone else?
<mathiaz> owh: heu... *not* convinced
<owh> Huh?
<kirkland> subtle difference mathiaz ;-)
<owh> I'll say :)
<mathiaz> owh: it's an annoying bug that keeps coming up over and over
<mathiaz> owh: I'd like to discuss it with pitti
<mathiaz> owh: it's related to cups so it's up to him.
<mathiaz> owh: the reason it's tagged with qa-hardy-server is that it's apparmor related and we're "looking" after apparmor
<owh> Fair enough. Is that a self-appointed action item for Wednesday :)
<mathiaz> owh: but we won't have a proper solution on the apparmor side to handle these situations
<ubotu> Launchpad bug 147551 in cupsys "cups-pdf fails to generate file when user does not print to default ~/PDF (apparmor vs.cups-pdf inconsistency)" [Wishlist,Confirmed] https://launchpad.net/bugs/147551
 * mathiaz waves at ubotu
<owh> ROTFL
<owh> I wonder.
<kirkland> mathiaz: what's the best way to get a comprehensive list of packages in main?
<owh> I have found lots of printed pdf stuff from cups-pdf in /var/spool/cups-pdf. I wonder if the "non printed jobs" are all in there?
<owh> kirkland: There was an email about that, lemmie have a looksee.
<owh> As in, a snippet that gave you the answer.
<kirkland> owh: lemme have it!
<mathiaz> kirkland: you can grep in a Packages.bz2 file from the apt repository
<owh> I'm looking, I'm looking.
<mathiaz> kirkland: http://archive.ubuntu.com/ubuntu/dists/hardy/main/binary-amd64/
<owh> mathiaz: Pah, that's cheating :)
<kirkland> mathiaz: perfect, thx
<mathiaz> kirkland: you could also have a look at madison and rmadison
<mathiaz> kirkland: you need a package name, but it'll give you the component for the package
<owh> Doh, I just thought of a better way, one mo.
<kirkland> mathiaz: so I'll create a VM, install all 5914 packages in main, then `ls /etc/init.d/ | xargs -I {} /etc/init.d/{} status > /tmp/out`
<kirkland> :-P
<owh> I'm having a little fight with the packages search engine, but I suspect it will submit to my will and give us a list :)
<mathiaz> kirkland: well - considering that you have a local mirror, you could also go through all the packages, list their content and check if they have a init script
<mathiaz> kirkland: dpkc -c .deb
<owh> Oh, lovely, the search engine is giving responses, but there is no standard name for the init.d script :)
<kirkland> mathiaz: that's just what I was doing
<mathiaz> kirkland: dpkg -c .deb
<kirkland> installing 5900 packages in a VM was a joke
<owh> I think we got that :)
<mathiaz> kirkland: that way, you could even have a list of universe and another one for main
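The dpkg -c scan mathiaz describes can be sketched as a small filter (the function name and the mirror loop are made up for illustration):

```shell
# dpkg -c emits a tar-style listing whose last field is the path;
# keep only entries under ./etc/init.d/.
has_init_script() {
    awk '{print $NF}' | grep '^\./etc/init\.d/.'
}

# Over a local mirror one might loop (sketch):
#   for deb in pool/main/*/*/*.deb; do
#       dpkg -c "$deb" | has_init_script >/dev/null && echo "$deb"
#   done
```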
#ubuntu-server 2008-03-11
<owh> kirkland: Hah, found it, in the U-D mailing list: This gives you multi-verse/universe for example: aptitude search '~i~s.*verse' - Thanks to Tollef
<owh> There is also a grep-status package that can show you: grep-status -nsPackage -eFSection 'universe|multiverse' -a -FStatus ' installed' | sort
<owh> (The discussion was about Security/Support status of packages)
<kirkland> mathiaz: owh: okay, I gotta run now, but I'll hack up a script that will give us a report on Main and Universe later tonight
<owh> Cool.
<alstone> hello
<alstone> I am in need of php help mainly the imap_open command
<sommer> alstone: what's your question... I'm not sure if this is the correct channel though
<alstone> sorry for the delay @work
<alstone> it seems that imap_open doesn't work how can I chk this?
<owh> alstone: As sommer suggested, this may not be the correct channel for your question, but phpinfo() should tell you at least if imap is installed. As for "Doesn't work", you need to specify.
<sommer> $imap = imap_open("{" . $server . ":143}", $username, $password); if ($imap) { print $imap; }
<sommer> alstone: I know that the php5-imap package works... used it today :-)
<owh> Anyone got the hardy .iso URL handy?
<alstone> ok so maybe I need to install that package
<sommer> alstone: if you want to use the imap functions, you betcha
<sommer> owh: I have a hardy iso :-)
<owh> Ok, come over for a cuppa, bring the CD and I'll install it.
<alstone> yeah if it isn't installed by default then it ain't there...  |-P
<sommer> I can email it to you... heh
<owh> sommer: I'm thinking that it will bounce :)
<sommer> heheh... probably
<owh> alstone: Well, that's easy fixed: aptitude install php5-imap
<owh> In case anyone cares: http://cdimage.ubuntu.com/releases/hardy/alpha-6/
<owh> sommer: I'm guessing you'll be late for a cuppa today then?
<owh> :)
<sommer> cuppa?  cuppa joe?
<owh> I'm about to put the kettle on and make a coffee :)
<sommer> heh... cool
<alstone> hmmm did that but I am still getting the same error
<sommer> alstone: are you using apache?  if so you'll need to restart it after installing php5-imap
<owh> alstone: Which particular error are you getting?
<alstone> ah ok
<alstone> it is a plugin for a blog
<owh> Crap, this .iso is coming down at 5.6k, it will take 24 hours. Anyone got somewhere I can download the hardy alpha-6 iso at 1Mbit?
<alstone> says that php-imap isn't there
<owh> alstone: So, the error you're seeing isn't a php error it's a blog error from some php application?
 * owh has visions of swamp land ahead.
<alstone> yeah saying that it can't find php-imap
<owh> First things first, can you make a simple .php script with this content: <?php phpinfo(); ?>
<alstone> no I am getting a different error
<owh> I'm trying to determine if PHP thinks IMAP is installed, not if your application thinks it is.
<alstone> that gave me a syntax error
<alstone> owh that is
<owh> alstone: A PHP syntax error?
<ScottK2> lamont: Would you please look at Bug #200308.  I'm almost certain it's a configuration error on the part of the reporter, but since it's with 2.5.1, we probably ought to be sure.
<alstone> sorry yeah "syntax error near unexpected token `('"
<alstone> but no worries now I get a can't connect error so I need to fix that
<lmr_> all: Hey guys, I'd like to help out the server team on the test area, especially test automation and test infrastructure. I'm reading the material on the wiki, and I was wondering if you guys use autotest http://test.kernel.org/autotest to automate your testbucket
<owh> lmr_: We were having a similar discussion yesterday, the only resource that I think currently exists is this one: http://people.ubuntu.com/~mvo/automatic-upgrade-testing/
<owh> One of the other things that came out of the discussion was that most automated tests were passing fine, it was the edge cases that broke down - the suggestion being more eyeballs to do manual testing.
<mathiaz> lmr_: you should talk to kirkland about autotest
<mathiaz> lmr_: I think he's working on this
<lmr_> kirkland: Hey dude, good evening, you there? :)
<mathiaz> lmr_: as for testing effort, one area where we would welcome help is the upgrade testing from dapper to hardy
<lmr_> mathiaz: Since they're LTS releases
<mathiaz> lmr_: he is away for now - he may be back later tonight
<owh> mathiaz: Are we supporting that straight through?
<owh> mathiaz: As in, Dapper -> Hardy direct?
<mathiaz> owh: dapper -> hardy ? yes
<owh> Excellent. My clients will be happy :)
<mathiaz> owh: we'll support gutsy->hardy *and* dapper->hardy
<mathiaz> So we need more testing on the dapper->hardy upgrade
<lmr_> mathiaz: I see
<owh> Well, as soon as I have this hardy server cd down I'll start :)
<mathiaz> I'd like to test as much as we can on the server side
<mathiaz> things like making sure that mysql, postgresql, openldap, postfix upgrade correctly
<mathiaz> lmr_: so if you have experience in automating tests, you're more than welcome!
<lmr_> mathiaz: Our distro fvtr includes a lot of ltp tests, filesystem stress, memory stress, java support tests
<mathiaz> lmr_: fvtr ?
<lmr_> mathiaz: functional verification test regression
<lmr_> mathiaz: We're putting a lot of work on automating that, including reporting
<mathiaz> lmr_: yes - reporting is the hardest part
<lmr_> mathiaz: And of course making sure all the resultant code is maintainable and good enough
<lmr_> mathiaz: Most of this work is already upstream
<mathiaz> lmr_: I've used different testing frameworks - they lack good reporting
<mathiaz> lmr_: which upstream are you referring to ?
<lmr_> mathiaz: autotest upstream
<lmr_> mathiaz: autotest is filling quite a lot of gaps on open source testing infrastructure
<mathiaz> lmr_: ok - autotest seems to focus on benchmarks
<mathiaz> lmr_: and track performance regressions
<lmr_> mathiaz: Not really, it could be used to do any functional or performance (benchmark) testing
<lmr_> mathiaz: My team uses it mostly for functional testing
<sommer> might be a dumb question, but can you give an example of functional testing?
<mathiaz> lmr_: I've briefly looked at the src code, and most of the tests that were integrated were performance tests
<mathiaz> sommer: does mysql server run ? YES-NO
<sommer> aaahhh... right I'm with ya
<mathiaz> sommer: that would be a functional test in the case of a dapper->hardy upgrade
<lmr_> sommer: Functional testing: we run filesystem stress tests to see if the filesystems won't break/corrupt
<mathiaz> sommer: making sure that after the upgrade the mysql server is still running
<lmr_> sommer: You might know that, functional is making sure the functionality is ok
<sommer> cool, that would be good to automate :-)
<mathiaz> yeah - there are multiple categories when it comes to tests type
<lmr_> mathiaz: I agree that we have many benchmark tests on autotest right now
<lmr_> mathiaz: But there are some tests that we are developing, still internal, that we'll release upstream if possible
<lmr_> mathiaz: autotest is being extensively used by google now, and we also benefit from it. Actually autotest is an offspring of an internal project we have
<mathiaz> lmr_: great - you should definitely get in touch with kirkland then
<ubotu> Launchpad bug 200308 in postfix "Upgrade doesnt work if using mysql dict types" [Undecided,New] https://launchpad.net/bugs/200308
<ScottK2> lamont: Also, would you please give back postfix 2.5.1-1~dapper1 on all archs except i386?
<owh> What's the delay with ubuto?
<owh> s/o/u/
<sommer> he's angry at life... I think
 * mathiaz would bring a cup of coffee to ubotu 
 * owh has coffee to share, but getting it there might be an issue :)
 * jjesse thinks you just pour it through the tubes that connect everything in the interweb
<owh> Thanks, now I need to clean my keyboard :(
 * mathiaz unplugs his isp cable and looks through the pipe
<owh> mathiaz: Be careful, the light isn't good for your eyes!
<ScottK2> leonel: Are you around?
<ScottK2> leonel: When you get to the scrollback...  Theres a new clamav release candidate.  I was wondering if you could look through it and see if there are any security fixes in it we need for earlier releases.
<kgoetz> shouldnt the new tzdata target 'gutsy' not 'gutsy-proposed'?
<leonel> hey ScottK2  I did a fast review when it came out and didn't find any security bugs, just bugfixes. I'll do a deeper check
<ScottK2> leonel: Thanks
<alstone> exit
<faulkes-> alrighty then, now onto 7.10 via drac
<faulkes-> owh: I'm slowly getting around to your .diff, today has been insane and it's still not done
<owh> faulkes-: Remind me, which diff are we talking about?
<faulkes-> server-survey
<owh> Ah, cool.
<owh> How did the exam go?
<faulkes-> some of the stuff you mentioned in your email has been fixed and in my branch
<faulkes-> it went, more or less, probably got an A
<owh> :)
<owh> Most of the changes were removing <br> at the end of the questions.
<owh> One had a <p> wrapped around it.
<faulkes-> yeah
<faulkes-> I didn't see <br>'s but I did see some <p>'s still in there now, no worries, I'll rip it apart after a good night's sleep
<owh> mathiaz: https://wiki.ubuntu.com/OnnoBenschop/ubuntu-server/init.d-status
<mathiaz> owh: awesome
<owh> I made it scan /pool/main/ so I'm not sure if that shows what we intended.
<owh> I was trying to think of a way to run the init.d script in some environment to make it show what "status" would actually output, but I couldn't think of how to deal with all the tests within each script - I suppose installing each matched package would do that.
<owh> Oh, I emailed kirkland so when he wakes up he doesn't repeat my effort :)
<mathiaz> owh: yes - sounds like a perfect job for kvm
<mathiaz> owh: if you scan pool/main/ on the ubuntu-server iso, you'll only get the packages from ubuntu-server
<mathiaz> owh: there are more packages in main than the ones on ubuntu-server
<owh> Yeah, wasn't that what we wanted?
<mathiaz> owh: I was looking for all of main
<owh> Well if there is a full package mirror, then just run the script on the bottom of the page. I don't have such a mirror.
<mathiaz> owh: kirkland has a full mirror
<owh> I'll put a caveat on the wiki page so it's obvious.
<mathiaz> owh: so your script should be useful to him
<owh> mathiaz: All good then :)
<Jeeves_> Morning
<kgoetz> Jeeves_: hello :)
<Jeeves_> How's everybody doing?
<kgoetz> giving solaris a poke before it gets nuked later in the week
<kgoetz> and going crazy with minicom/bourne shell/compaq keyboard combo :)
<twb> That sounds... masochistic.
<kgoetz> thats a good word for it
<owh> Just in case you're wondering if you're playing with the right distro: http://www.google.com/trends?q=debian%2Cubuntu%2Credhat&ctab=0&geo=all&date=all&sort=0
<_ruben> hehe
<nijaba> morning
<Jeeves_> Morning!
<steve176> Hi. How should I set environment variables on ubuntu 7.10 server? I've tried /etc/environment /etc/profile and /etc/bash.bashrc. They are visible from the terminal, but when I log in via ssh they aren't visible to my shell scripts
<steve176> Same problem as this guy http://ubuntuforums.org/showthread.php?t=586993
<_ruben> strange .. using /etc/profile for that purpose worked just fine for me the other day (including ssh)
<steve176> _ruben: it's been driving me nuts!
<steve176> _ruben: For example - if I type echo ${JAVA_HOME} I get /usr/lib/jvm/java-6-sun, but if I try and run catalina.sh (tomcat startup script) it fails because JAVA_HOME is not defined. When I hardcode JAVA_HOME in the startup script tomcat starts up perfectly
<steve176> _ruben: This is a clean ubuntu 7.10 install done just a few days ago.
<_ruben> it could be that catalina.sh is started in a non-login sub-shell which would explain the lack of that env variable
<_ruben> been ages since i worked with tomcat though (and even that was on deprecated versions by then)
<_ruben> doesnt that script use 'su' to run stuff as a particular user? i vaguely recall smth like that
<steve176> _ruben: my /etc/init.d/tomcat script does (sudo -u tomcat), but the one in /opt/tomcat/bin doesn't
<_ruben> is this the official ubuntu tomcat packages or from source or smth? .. if it's an official package you might need to file a bug report
<steve176> _ruben: I think I have the answer
<steve176> _ruben: I didn't know that you need to export NAME=VALUE for /etc/profile
<_ruben> ah
<steve176> _ruben: You don't need to for /etc/environment, however this seems to be ignored by ssh and only /etc/profile is used
<_ruben> never used /etc/environment myself
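steve176's finding is easy to demonstrate: a variable assigned in /etc/profile without `export` is visible in that shell but never inherited by child processes such as catalina.sh. A minimal sketch:

```shell
unset JAVA_HOME                           # start clean (drops any export flag)
JAVA_HOME=/usr/lib/jvm/java-6-sun         # assigned, but NOT exported
sh -c 'echo "child sees: [$JAVA_HOME]"'   # brackets come out empty
export JAVA_HOME                          # now marked for the environment
sh -c 'echo "child sees: [$JAVA_HOME]"'   # child prints the path
```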
<kraut> moin
<soul1> s
<pschulz01> Jeeves_: ping
<Jeeves_> DROPPED!
<Kamping_Kaiser> !!
<Jeeves_> ICMP is for hackers ;)
<Jeeves_> pschulz01: What up?
<pschulz01> haha..
<pschulz01> Jeeves_: I was hoping for an introduction from Kamping_Kaiser.. re. T1 testing :-)
<Jeeves_> Ah yes
<Jeeves_> I've got two
<pschulz01> What do you think will be the tricky bit?
<pschulz01> (Getting Ubuntu onto it.)
<Jeeves_> pschulz01: No clue. I've never seen a SPARC before :)
<pschulz01> Oh. ;-)
<Kamping_Kaiser> :o
<Jeeves_> I think I'll connect it to ALOM
<Jeeves_> And than let you know the username/password/ip
<Jeeves_> :)
<Jeeves_> I've got this afternoon free-ish
<Jeeves_> (my afternoon that is)
<pschulz01> I have other Ubuntu server(s) that I can connect it to. (DHCP)
<infinity> Someone has free T1s?
 * infinity perks.
<Jeeves_> infinity: www.sun.com/tryandbuy
<pschulz01> infinity: These are not the T1's you are looking for.
<infinity> Jeeves_: Oh, that's not quite free. ;)
 * pschulz01 waves hand
<Kamping_Kaiser> pschulz01, in case i didn't mention it, you can get a support ticket or 3 from canonical as part of try and buy, but i doubt they support alpha releases
<pschulz01> Kamping_Kaiser: No you didn't. :-)
<Kamping_Kaiser> pschulz01, oops :|
<pschulz01> Kamping_Kaiser: Will we need it?
<Jeeves_> Kamping_Kaiser: I'm in contact with a product manager of Sun NL
<Kamping_Kaiser> pschulz01, no </famous last words>
<Jeeves_> He's very curious if/how stuff worsk
<Jeeves_> s/sk/ks
<Kamping_Kaiser> Jeeves_, cool
<pschulz01> Kamping_Kaiser: Jeeves_ Do the servers support DHCP, or it is BOOTP?
<pschulz01> s/it si/is it/
<Jeeves_> pschulz01: Yes.
<Jeeves_> (It's either one of them :))
<pschulz01> ?
<pschulz01> Ok :-) As long a it's a standard :-)
<infinity> bootp/tftp if you want to remote boot them (but dhcp3 in Ubuntu will do bootp as well)
<pschulz01> Kamping_Kaiser: So we need.. DHCP/BOOTP with a kernel, initrd and root FS (maybe).
<pschulz01> infinity: Yup..
<pschulz01> Kamping_Kaiser: Got all this ready to go..
<Kamping_Kaiser> pschulz01, need a tftpd too
<pschulz01> Kamping_Kaiser: That's there.. on 'smoke'
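For reference, a netboot host entry of the kind infinity mentions might look like this in dhcp3's /etc/dhcp3/dhcpd.conf (the MAC address, IPs, and filename are all invented):

```
host sparc-t1 {
    hardware ethernet 00:14:4f:aa:bb:cc;
    fixed-address 192.168.1.50;
    next-server 192.168.1.1;
    filename "boot.img";
}
```

next-server names the tftpd host and filename the image it serves; dhcp3-server can answer BOOTP requests from the same static host stanza.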
<Jeeves_> I'll connect it in an hour or two
<Jeeves_> We'll see how it does
<pschulz01> Jeeves_: :-) Catch you on the 'flip side'
<zul> yay ebox is in universe
<nijaba> \o/ for zull
<Jeeves_> Is ebox 'fixed' already?
<nijaba> zul, even
<Jeeves_> I tried it two weeks ago
<Jeeves_> And it had some bug
<Jeeves_> (no, I didn't file them, sorry)
<zul> Jeeves_: well then Im not sure, there has been a couple of bugs fixed
<zul> faulkes-: ping
<faulkes-> zul: yo
<zul> faulkes-: ebox has been uploaded to universe so we dont need any testing from my ppa, can you tell the forum people to redirect their bug reports (if any) to launchpad?
 * faulkes- nods
<zul> faulkes-: thanks
<faulkes-> you have a LP link handy for reporting?
<zul> https://launchpad.net/ubuntu/+source/ebox
<faulkes-> done
<faulkes-> hmmm, anyone got suggestions for load-balancing / failover with gutsy -server?
<zul> heartbeat
<faulkes-> zul: thanks
<henkjan> jdstrand: i filed a bug against ufw and ipv6
<jdstrand> henkjan: ok thanks
<jdstrand> henkjan: I should be able to start looking at it soon
<mruiz> hi all
<mathiaz> hi mruiz
<faulkes-> mathiaz: submitted the 1955 stuff to testing
<soren> mok0: re bug 200648..
<soren> mok0: You're running gutsy on the host? What about kvm kernel modules?
<mok0> soren: kernel modules: version 59
<ubotu> Launchpad bug 200648 in kvm "guest machines falls behind time" [Undecided,New] https://launchpad.net/bugs/200648
<mok0> soren: sorry for the wait, I was away
<soren> mok0: It would be lovely if you could try with something more recent.
<soren> mok0: No worries.
<mok0> soren: on gutsy?
<soren> Yes.
<mok0> what version?
<soren> Well, I'd prefer you tested on hardy, but failing that, a backported kvm-62 would be ok.
<mok0> I can test it on hardy too
<mok0> soren: in fact, I prefer to do tests on another machine than our production one
<mok0> soren: and my workstation is a daily upgraded hardy
<mok0> soren: you mean you have fixed the issue?
<soren> mok0: I don't have timing issues.
<mok0> soren: aha. Are your guests doing anything ;-)
<soren> mok0: Not right now, no :)
<soren> mok0: How much are they lagging?
<mok0> soren: perhaps up to 10 minutes per hour
<mok0> soren: of course, I cannot rule out it's an installation issue
<mok0> soren: we are trying to fix it with a cron script that runs ntpdate
<soren> mok0: And it only happens when the vm is busy?
<soren> mok0: Don't.
<soren> Really.
<mok0> soren: ok, why?
<soren> Because ntpdate jumps in time, which is a bad, bad idea.
<mok0> I don't know exactly why or when it happens. We haven't analysed it in detail
<soren> You want throttling, not warping.
<mok0> hm...
<mok0> soren: I will see if I can analyse the situation better for you
<mok0> what parameters would be relevant? Load on the guest/host?
<soren> Both, yes.
<Jeeves_> http://gallery.bit.nl/main.php?g2_itemId=29440&g2_navId=x0f18a14b
<mok0> soren, btw, would it be possible for kvm to set a user-definable process name (instead of just "kvm")? It is useful when you have several guest machines, that you can see who is who in a ps ax listing...
<soren> mok0: You're running kvm from the commandline rather than through libvirt? (you've probably told me several times, but I forget, sorry)
<mok0> Yes, the are started from a script called by rc.local
<mok0> s/the/they
<soren> Oh, I see.
<soren> mok0: Well, no, I don't think kvm does that.
<mok0> soren: perhaps we need to use the hacker tool... xname or something :-)
<mok0> Xhide it is
<soren> mok0: Heh.. Well, you could :)
<mok0> soren: don't even need to compile it... I've found it on our systems once in a while :-)
<soren> Ok.. :)
<mok0> A few people here use Joomla, and an earlier version required php safe mode to be switched off ... we were attacked several times through that
<mok0> I wish we could get rid of Joomla, though
<soren> I hate safe mode.
<mok0> Actually, not Joomla itself, but plugins that people install
<mok0> soren: I have safe_mode = Off :-)
<mok0> s/have/hate/
<soren> It's  the wrong solution to a ridiculous problem.
<mok0> soren: yeah
<mok0> soren: but apparently PHP is unsafe without it
<soren> php over fastcgi to separate process running as separate users being the correct solution.
<mok0> soren: right... but that needs to be implemented by the developers
<soren> No.
<soren> It's already working (and has been for quite a while)
<mok0> soren: I mean, it has to be implemented by the Joomla developers
<soren> They *require* libapache2-mod-php5?
<soren> Insanity!
<mok0> soren: I am not sure
<soren> mok0: Have I mentioned #ubuntu-virt, by the way?
<mok0> soren: I haven't heard about it
<soren> virtualisation channel. (unsurprisingly)
<mok0> I see... a dozen people there, or so
<mok0> I'll come along and hang out there
<Theo_> how do you change the root password from a non-root acount?
<mathiaz> !root | Theo_
<Theo_> excuse me?
<Theo_> what do I do?
<mathiaz> Theo_: https://help.ubuntu.com/community/RootSudo
<mathiaz> Theo_: !root should have given you an automated answer for our bot - but the bot seems very slow.
<mathiaz> Theo_: anyway the answer would be a link to the page above
<Theo_> oh
<Theo_> im reading it
<Theo_> thanks
<Theo_> so I have to enable it before I can login as root?
<_ProbeDot[a]> Is there a way to connect applications that are not started with the same UID?
<mathiaz> Theo_: well - the page tries to explain that there isn't any root account enabled in ubuntu and you should use sudo instead
<Theo_> ah
<Theo_> what about a root password that controls everything?
<Theo_> Fuck, I re-installed my ubuntu server when I couldn't login with the user root
<_ProbeDot[a]> I'm running most of jack with my UID, but one client is supposed to be started as a service
<Theo_> I put a usb drive in my server. How do I access the files?
<ubotu> Theo_: Do not try to guess the root password, that is impossible. Instead, realise the truth... there is no root password. Then you will see that it is 'sudo' that grants you access and not the root password. Look at https://help.ubuntu.com/community/RootSudo
<Theo_> thank you
<_ruben> holy crap ... it took ubotu 19 minutes to respond to the trigger :p
<jdstrand> henkjan: hi, looking at your comments for the ipv6 stateless autoconfiguration
<jdstrand> in ufw
<jdstrand> henkjan: is ipv6 stateless autoconfiguration something that should be expected to work anywhere, or just in certain environments
<jdstrand> ?
<onesandzeros> hello all.  If any of you are familiar with gentoo, you'll know of a way to prevent upgrades of a given package (/etc/portage/package.mask)  Does something similar exist in ubuntu?
<jdstrand> onesandzeros: apt-pinning
<jdstrand> onesandzeros: you set it up in /etc/apt/preferences
<jdstrand> google for apt pinning /etc/apt/preferences
<onesandzeros> jdstrand: will do, thanks
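For the record, a pinning stanza in /etc/apt/preferences looks roughly like this (the package name and version are illustrative); a Pin-Priority above 1000 holds the pinned version even against newer candidates:

```
Package: mysql-server
Pin: version 5.0.*
Pin-Priority: 1001
```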
<sommer> gar... connection issues
<zul> mathiaz: ping did you see that email to ubuntu-server?
<mathiaz> zul: about samba ? yes
<zul> im doing a test build of it to see what needs to be done
<mathiaz> zul: I think the plan was to wait for Debian to upload 3.0.28a
<mathiaz> zul: I've just noticed that the archive freeze for beta starts in two days
<mathiaz> zul: so it may be worth getting samba before beta
<zul> mathiaz: it is but if debian doesnt get it in time then we should do our own as well (also a part of the plan)
<mathiaz> zul: Once you've checked what needs to be changed, you may wanna ask steve what he thinks about 3.0.28a for beta
<zul> sure
<mathiaz> zul: also - there seems to be some issue with conffile
<mathiaz> zul: steve moved to ucf to handle the smb.conf file
<zul> mathiaz: erm...ucf?
<mathiaz> kirkland: ready for some merge fun ?
<kirkland> mathiaz: let's do it
 * faulkes- yawns
<faulkes-> I need a nap
<sommer> I want one
<faulkes-> you always want what I have, it's like having a little brother ;)
<faulkes-> first it's my blade server, now it's my nap
<sommer> heh... you have all the best stuff
<faulkes-> now, do I build heartbeat based fw / load-balancers in a fail-over config or do I vendor shop?
 * faulkes- considers
<faulkes-> anyone here use heartbeat in a production env?
<mojo> How can I safely remove the video card from a server that i want to be headless?  ssh and webmin are working fine on the box, but i am afraid of problems if the kernel tries to initiate the video card after it's been pulled.
<faulkes-> leave the card, remove the monitor
<faulkes-> iirc most systems will fail POST if you remove the video card
<mojo> faulkes-, yeah, did that but i want to recover the card for other systems i need to fix/test/build
<mojo> faulkes-, this is in my home, on shoestring budget
<faulkes-> I suggest you get an additional shoestring then ;)
<mojo> lol
<faulkes-> I'm pretty sure if you take the video card out, it'll die at POST
<mojo> faulkes-, fwiw i just became "in between jobs" so that's not gonna be easy
<faulkes-> gotta spend money to make money ;)
<mojo> faulkes-, well, i guess i can experiment with that then.
<owh> kirkland: Did you like my script?
<kirkland> owh: very nice!
<kirkland> owh: you beat me to it ;-)
<owh> Did it work for you?
<kirkland> owh: i'm actually working off of the output you have there
<owh> kirkland: It was suggested by mathiaz that you run it across your mirror, rather than the alpha-6 cd.
<kirkland> owh: ah, i'll do that
<owh> kirkland: Mind you, the script doesn't take into account where the code is from, main/universe, but I suspect the mirror does?
<kirkland> owh: yeah, separate dirs
<owh> kirkland: Just run it from the level where you want it to find packages.
<kirkland> owh: i'll just run it twice, one for each level
<owh> kirkland: It does a find -type f to find all the files.
<owh> kirkland: Also, it assumes no spaces in any paths.
<owh> kirkland: Final warning, it assumes all files in the tree that it finds are packages.
<kirkland> owh: shouldn't be any spaces
<kirkland> owh: i'll enhance it to only look at *.deb$
<owh> kirkland: If you can make your mirror read-only, I'd feel much more happy :)
<kirkland> :-P
<kirkland> owh: I'll run as a non-priv user
<owh> kirkland: I don't like scripts with "rm -rf" snippets :)
<kirkland> owh: I was working on patches for the following: http://pastebin.com/d204bce95
<kirkland> owh: those are services I use on a daily basis, that I've found myself griping about a lack of "status"
<owh> kirkland: Which reminds me, when I unpacked the "nessus" package on the Feisty Desktop CD, it had a really weird directory structure and borked stuff, but it didn't happen on the Hardy CD, haven't had a moment to look into it yet.
<kirkland> owh: I'd say we're only really interested in Hardy here
<kirkland> but, duly noted
<owh> kirkland: Yeah, I was just going to have a look-see when I had a moment.
<owh> kirkland: That list seems good as a first go. I'd probably prioritise on those that result from a tasksel installation.
<kirkland> tasksel?
<owh> kirkland: When you run the installer it asks you: "What do you want this server to do?", That functionality is provided by tasksel.
<owh> kirkland: The options are things like "DNS/Firewall/Mail/Samba/LAMP" etc.
<kirkland> owh: gotcha
<owh> kirkland: You can just run it on a console right now to see :)
<kirkland> owh: you wanna grab the rest of those?
<kirkland> the tasksel ones?
<owh> kirkland: You mean, from tasksel?
<owh> kirkland: I'll have a look-see.
 * owh will have to first wake up and have some breakfast and coffee before any heavy lifting commences :)
<kirkland> owh: sure, that's fine
<kirkland> owh: stuff like bind
<kirkland> that is very important, but not on that little list
<kirkland> "that list" = "my list"
<owh> kirkland: I'm just creating a one-liner, gimmie a mo, my brain isn't awake yet.
<owh> kirkland: tasksel --list-tasks | while read a task b ; do tasksel --task-packages $task ; done | sort | uniq
<owh> kirkland: Be warned, it contains all the packages that will get installed as a result of all the tasks.
<owh> kirkland: Perhaps filtering out any desktop options, one mo.
<owh> kirkland: Better still, just the server options:  tasksel --list-tasks | grep server | while read a task b ; do tasksel --task-packages $task ; done | sort | uniq
<owh> kirkland: Hmm, that doesn't give us init scripts though, but if you make the mirror listing, I can merge the two.
<owh> kirkland: Lemmie have a coffee and I'll update the script I gave you to also output the package name.
<kirkland> owh: k, it'll be a few minutes, i'm on something else atm
<kirkland> deal
<owh> :)
 * owh wanders off to wake up.
<owh> kirkland: While I'm waiting for the coffee to brew, I've just added a column to the output that displays the package name.
<kirkland> owh: very good
<owh> kirkland: https://wiki.ubuntu.com/OnnoBenschop/ubuntu-server/init%2ed-status#preview
<owh> kirkland: When you get the new output, can you grep it into two piles and update the wiki page?
<owh> kirkland: s/get/generate/ :)
<owh> kirkland: I think I'm awake now :) Anything I can do?
<mathiaz> ScottK: so - I've got a merge of cyrus-sasl2 ready to be uploaded
<mathiaz> ScottK: do you think it needs a FFe because of the new saslfinger utility ?
<ScottK> mathiaz: I'd run it by slangasek as it's Main.  Since it's added in the Debian revision, I'm not sure.
<ScottK2> mathiaz: Are you ready with cyrus-sasl2-heimdal too?
<mathiaz> ScottK2: nope - didn't start
<ScottK2> mathiaz: OK.  Well that one should be easy (IIRC)
<owh> I've just done a straight boot from CD install of hardy alpha 6, selected no tasks, on a machine with no network interface and tasksel returns nothing at all. The /usr/bin/tasksel script is there, but it returns nada.
<owh> I'm guessing that this isn't normal behaviour :)
<kris_ph> Hello... is there anybody here who could direct me to where to find CGI directory?
<zul> /usr/lib/cgi-bin
<zul> mathiaz: yeehaw it builds
<kris_ph> zul thanks...it exists but no contents in that directory
<owh> Ok, I have found a bug in hardy-alpha6. I'm not sure if we care about it. If I install from CD *without* any network, tasksel doesn't work, it returns nothing - but if I install it from CD with a network, all is well. Should I log it as a bug, or is the edge case of installing a server without a network really not an issue - in my case I didn't want the server to start downloading lots of stuff from the net because all I wante
<ScottK2> owh: I'd say file a bug.  At the very least it'll be a reminder for the release notes if that's intended behaviour.
<owh> Cool, consider it done.
#ubuntu-server 2008-03-12
<owh> How do I tell LP that it's a bug report about hardy?
<kgoetz> owh - https://bugs.launchpad.net/ubuntu/hardy/+bugs
<owh> kgoetz: Yes, but how do I add my bug report, Bug #201237 to that list - do I just add a "hardy" tag?
<kgoetz> owh: if you click 'report bug' it should report a bug on hardy, aiui
<owh> kgoetz: Hmm, that seems to indicate that I need to delete the bug report I just made and do it again. That doesn't seem right.
<ScottK2> owh: Let me look at the bug.
<owh> https://bugs.launchpad.net/ubuntu/+source/tasksel/+bug/201237
<owh> ubotu seems asleep at the wheel again :)
<ScottK2> Yeah
<ScottK2> The only thing I see is to nominate it for a release.
<ScottK2> Dunno if that's the right way though
<ScottK2> owh: I'd suggest ask in #ubuntu-bugs
<owh> Cool, will do.
<ubotu> Launchpad bug 201237 in tasksel "tasksel does not work if installed on a machine with no network" [Undecided,New] https://launchpad.net/bugs/201237
<owh> ROTFL
<owh> FYI the suggestion in #ubuntu-bugs was to indeed nominate it for hardy, then it may or may not actually be approved by the maintainer for such inclusion.
<ScottK2> OK
<owh> I need some help. I'm still trying to track down why a cron.daily job isn't running. The script runs fine from a command-line. I'm wondering how I can "simulate" the cron environment so I can see what it's doing. There was a suggestion to install mail on this server to attempt to catch any output that might be generated, but I'm reluctant to do that on this production server. Any suggestions / comments / reading material?
<andguent> I would second the email notify setup, you never know what else isn't working and is trying to notify you via mail
<kgoetz> i'll third it - why wouldnt you want mail on a production system?
<infinity> There will be some minimal logging to syslog.
<kgoetz> the only other thing i can think of is redirect syslog to another system and bump up crons reporting
<andguent> another option is to cat /etc/crontab, look for the daily line, and run that
<infinity> You could also wrap your cronjob in something that logs to a logfile.
<andguent> it might rotate all of your logs though :)
<infinity> owh: What does the script do?
<kgoetz> first thing i'd check with not-running cronjobs is crons $PATH
<infinity> And, yes, I was going there. ;)
<owh> infinity: It backs up vmware images.
<infinity> cron's PATH is, by default, rather lacking.
<infinity> You should set PATH either in your cronjob, or the script being run by said cronjob.
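A minimal sketch of infinity's suggestion: set PATH explicitly at the top of the crontab (or of the script itself) so cron's narrow default doesn't silently break the job. The paths, schedule, script name, and logfile here are all hypothetical.

```shell
# Sketch only: system crontab fragment (paths and job name invented).
# cron's default PATH is typically just /usr/bin:/bin, so anything living
# in /usr/sbin or /usr/local/bin won't be found unless PATH is widened.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Redirect output to a logfile so failures are visible even without local mail.
25 6 * * * root /usr/local/bin/backup-vmware.sh >>/var/log/backup-vmware.log 2>&1
```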
<owh> That's a point that I had not considered.
<owh> Also, the mail side of things, under Debian I can setup local mail only, is that available under Feisty-server, I'm unsure.
<infinity> Yup.
<infinity> Both postfix and exim4 should give an option to be local-only.
<owh> Well, that simplifies things.
<infinity> postfix might even default to it these days.  Not sure, as I'm a disciple of Exim.
<owh> Ok, given the "overwhelming" suggestion to install mail, I might do that first.
<infinity> (Exim-lover though I am, though, I'd recommend postfix from the "it's the Ubuntu default, and you're more likely to get IRC support for it" perspective)
<owh> Yeah, exim here too, but I'll go with the Ubuntu default.
<kgoetz> i'm not familiar with postfix's debconf, but exims is good for that sort of 'just need basic mail' stuff
<infinity> If you know exim well enough, do Exim.  We support both (ie: both are in main)
<infinity> exim4-daemon-light, select "local only", profit.
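For reference, the sequence infinity is compressing into one line would look roughly like this; the exact debconf prompt wording may differ between releases.

```shell
# Sketch: a local-delivery-only MTA via exim4 (debconf option text approximate)
sudo apt-get install exim4-daemon-light
sudo dpkg-reconfigure exim4-config
# ...then pick "local delivery only; not on a network" at the
# mail-configuration-type prompt
```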
<andguent> i personally love ssmtp, its so much less of a headache to configure
<anthony> Any particular reason you like exim btw?  (/me doesn't know about either)
<owh> I'll do a quick test before I do it for real :)
<andguent> don't know about any local only settings though
<infinity> ssmtp has the distinct disadvantage of not delivering local mail. :)
<infinity> Nice for getting mail OFF the host, though.
<owh> I'm expecting that tasksel will do the right thing.
<owh> Ok, thanks for knocking me into place, I'll investigate some more :)
<kgoetz> infinity: which is really all you need in a server :)
<infinity> anthony: I've been using exim for a decade, I know the spec inside out and backwards, and I've contributed hundreds, if not thousands, of lines of code to the project.
<infinity> anthony: In other words, I'm terribly biased.
<kgoetz> hehe
<andguent> me on the other hand, have never contributed code to either :) I just know that 5-6 lines of conf files and i'm done :)
<anthony> infinity: That's a pretty good reason.  :)
<owh> infinity: You would have been a handy person to know some years ago when two instances of exim refused to send email between each other for some unfathomable reason. Network sniffing indicated that one instance would just give up. Never did find out why, but upgrading one side "fixed" it :)
<faulkes-> bah, you love sendmail.cf, admit it
<owh> ROTFL
<infinity> I think postfix likely wins in the "ease of use" category, but I still feel like I can do a lot more with exim's config files, when I want to do something "weird".
<owh> faulkes-: My introduction to sendmail.cf was on an Apple AU/X server :)
<owh> For those wondering, Apple made Unix before OS/X :)
<infinity> faulkes-: sendmail is why I love exim, to be honest.  Exim is nowhere near as confusing, but manages to be just as powerfully configurable.
<owh> s/Unix/Unix servers/
<infinity> owh: s/made/licensed/
 * faulkes- lalalalala m4 I can't hear you
<owh> infinity: Yeah, a bit of that too :)
<faulkes-> heh, you had to love the bat book
<infinity> owh: It's okay, your secret's safe with me.  My first at-home UNIX box was an Amiga.
 * faulkes- remembers doing some terribly awful things to sendmail.cf
<infinity> (First UNIX machine I played with that I didn't own was OSF)
<owh> infinity: Now that's personal torture :)
 * infinity misses DEC.
 * kgoetz gives infinity an Alphaserver 5/300
 * owh misses the Alpha on the desktop and the VAX heard in the back office :)
<kgoetz> *3000
 * kgoetz missed the whole era :@
<infinity> I have a DEC3k/900 lying around here.
 * owh started with 3.5kb RAM on a 6502 based Vic20 :)
 * kgoetz goes to sulk
<infinity> Never got around to porting the TURBOchannel drivers from Linux/MIPS to Linux/Alpha, so the thing still boots NetBSD/Tru64.
<owh> infinity: It seems that you have waaaay too much time on your hands :)
<infinity> If only that were true.
 * owh misses a serial link between the Vic20 and the Apple ][ joystick port to transfer images from floppy to the Vic20 tape drive :)
<infinity> If I had more spare time, the HPPA port would be in better shape, we'd have an Ubuntu/Alpha port "just cause", and I'd still be working on m68k glibc/kernel stuff.
<owh> infinity: This morning I was lamenting all the things I want to do, but just don't have enough hours in the day for, so I know what you're talking about.
<kgoetz> infinity: are you just a serial porter?
<infinity> kgoetz: Well, no.  I work for Canonical on the buildd infrastructure.  But, yeah, I used to do a lot of porting before this.
<infinity> kgoetz: No time anymore, really.
<owh> The irony is that while I was lamenting that, I remembered a TV show last night about the Royal Family and the strange things that people get up to. While I might not get to do all that I want, at least what I'm doing is improving the community around me, so it made me feel a tad more useful than the accountant who plays police officer for Liz when she visits Scotland one week in the year.
<owh> infinity: So, think of all the good you're doing :)
<owh> How do I stop an installer from apt-getting stuff during installation?
<kgoetz> remove the network cable
<owh> Hmm, yes, I suppose I can disable the interface in VMware.
<owh> Sigh, I thought there might be a more elegant way :)
<kgoetz> log into the installer console and try to ifdown the nics?
<owh> That might work after it's configured them.
<owh> Excellent suggestion.
<andguent> i'm a few minutes late, but you could also repoint its dns server to something bogus....
<owh> Fair idea, bit of extra work beyond Alt-F2, Enter, ifconfig eth0 down
<owh> But valid :)
<kgoetz> owh: or even shorter: ifdown eth0 (if the installer supports it)
<owh> kgoetz: Nope :)
<kgoetz> aww
<owh> Tried that :)
<kgoetz> :)
<kgoetz> would i be correct in guessing these dirs are misnamed? http://archive.ubuntu.com/ubuntu/dists/hardy/main/installer-sparc/
<owh> Can someone please explain why if I've just logged into a machine, either via a login prompt or by unlocking the screensaver, I still am required to enter the same password in sudo? I'm unsure why this might be considered "safer". I'm not saying, never prompt for a sudo password, having it expire in the same way as it does between sudo calls is fine, but why?
<anthony> Anyone have opinions on ebox?
<owh> anthony: It will be the server gui of choice RSN.
<anthony> owh: And does that translate to "it's decent" or "it's the best of server GUIs, but all server GUIs are terrible"?
<owh> :)
<owh> anthony: Personally, I'm leaning toward the latter, but that is based on looking at it, not using it. I don't feel that I'm a fair observer.
<owh> anthony: I'm sure that there are other opinions in this room.
<anthony> owh: Fair enough.  I _can_ handle cli configuration, but might give it a look anyway, since if it works that would be very convenient at times.
<owh> According to the debian manual aptitude can install tasks, but the man page doesn't divulge how that might be achieved. With apt-get, you can specify with a trailing caret (^), this doesn't work with aptitude. How do I do this?
 * owh cannot use tasksel because it hangs at 100% due to a bug.
<andguent> owh: what are you trying to install and how?
<owh> The task mail-server
<owh> :)
<owh> I wanted to just run tasksel, but the box is running Feisty and it stops at 100% - this is on my VM instance, not production. So, then the work-around is to install the task from the commandline: apt-get install mail-server^, but my preference is for using aptitude.
<owh> I'm not gaim to try this on the production server just yet :)
<owh> s/gaim/game/ :)
<andguent> ok.. what happens when you do a 'apt-cache search mail-server'
<owh> I get procmail and gup back.
<owh> There are no packages with that name, the '^' in the apt-get command takes care of that.
<andguent> i'm not sure what documentation is throwing an extra ^ in the line, but apt-get doesn't use them in any special way
<owh> No, the way to install a task with apt-get is to supply the task name with a '^' at the end.
<andguent> if you are still looking to get tasksel installed, you can try 'apt-get install tasksel' -- or maybe i'm just lost and not helping much :)
<owh> No, tasksel is installed, but a bug in debconf stops it from working properly. There's a bug about it.
<owh> This is leaning towards, suck it up and use apt-get rather than aptitude :)
<andguent> i've only used apt-get, ignorant and happy about it :)
<owh> ROTFL :)
<owh> I'm just going to emulate a different install and see what gives.
<owh> Well, well, well. aptitude install postfix worked just fine and dandy, no dovecot, and other stuff :)
<andguent> mmmmmkay, glad you figured it out, i think i need to figure out if this screen is causing my headache or not :)
<owh> andguent: What is the refresh rate set to?
<andguent> 60hz :) I usually can handle that fine.... its TFT too
<infinity> owh: Tasks in aptitude are something like ~t^taskname, I think.  It's been a while.
<infinity> owh: apt-get's interface works for me, and it's what I use on the buildds.
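For the record, the two task-install syntaxes discussed above, side by side. The aptitude pattern follows infinity's from-memory ~t form and may vary between aptitude versions.

```shell
# apt-get: a trailing caret marks a task rather than a package name
apt-get install mail-server^

# aptitude: tasks are matched with the ~t search term (pattern form approximate)
aptitude install '~t^mail-server$'
```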
<owh> infinity: Tah, I ended up just installing postfix with no mail-server task :)
<lamont> infinity: there's another interface besides apt-get ??? :-)
<lamont> well, I mean other than dselect
<owh> lamont: Now that there is autoremove, not really :)
<owh> kirkland: ping
 * lamont was introduced to the aptitude version of easter eggs the other day... most cute.
<owh> lamont: I suppose it beats the flight simulator in Excel :)
<andguent> apt-get moo?
<owh> andguent: And others.
<lamont> andguent: that's the apt-get version... aptitude has a similar one
<andguent> somehow, that just doesn't surprise me
<owh> infinity: On the buildds, can you run "extra" scripts and get reports?
<owh> infinity: kirkland and I are trying to get init.d scripts to output a meaningful status. I've created some scripts to figure out what has an init.d, and I'm grepping those for "status)", but I wondered if the packages all get installed during build testing and if they are subjected to any standard tests.
<owh> infinity: Likely it isn't possible, but I just wondered.
<kgoetz> owh: packages are checked by lintian/linda for validity, or do you mean something past that?
<owh> kgoetz: We're trying to determine first of all which packages have an init.d script, then which of those supports "status", then which of those is LSB compliant. Fix bugs/Add code accordingly.
<owh> I just wondered if the buildds would accommodate that, rather than installing each package and testing.
<owh> At present I have all the steps until "installing the package" and running the init.d script with "status".
<kgoetz> owh: you dont have to install - you could download the packages and dpkg -x them (i did lots of looking into those same sorts of questions)
<kgoetz> brb though, say 5 min
<owh> kgoetz: Cool.
<kgoetz> back
<kgoetz> determining if something has an init script should be relatively easy. not sure about status (not sure what this means) or lsb compliance, because i dont know whats involved
<owh> kgoetz: Yes, I've written the scripts and have all that detail, now I need to determine what it might output.
<owh> kgoetz: https://wiki.ubuntu.com/OnnoBenschop/ubuntu-server/init.d-status
<kgoetz> what is status?
<kgoetz> is it an upstart thing?
<owh> kgoetz: It tells you if the package is running etc.
<kgoetz> owh: any reason to `find -type f` instead of `find -name \*.deb`?
<owh> kgoetz: Yes, there are udebs in there too.
<owh> kgoetz: Besides the package dirs should only contain packages :)
<kgoetz> they shouldnt contain spaces either :) (which i think are fair assumptions, fwiw)
<owh> :)
 * owh wrote a nice big disclaimer at the top :)
 * kgoetz saw, and was a bit confused by it
<owh> kgoetz: It really wasn't intended for anything other than initially getting a handle on the status. If we're going to do this for real, I suspect I'll have a little think about it. Mind you, dpkg -x should handle it, the spaces are simple enough.
<owh> faulkes-: Does your graphing magic handle IRC logs?
<kgoetz> owh: quote the variables and it shouldnt be a problem
<owh> kgoetz: Yup.
<owh> kgoetz: I didn't because I got beat up by echo -e and trying to output a \t.
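The approach owh and kgoetz are sketching (unpack each .deb with dpkg -x, then inspect the init scripts) could look roughly like this. The function name and directory layout are hypothetical, and variables are quoted per kgoetz's advice, so packages with odd names don't trip it up.

```shell
#!/bin/sh
# Hypothetical helper: given a directory where each package was unpacked with
#   dpkg -x package.deb "$unpack_dir/packagename"
# print the names of packages whose init.d script offers a "status" action.
list_status_capable() {
    unpack_dir=$1
    for pkg in "$unpack_dir"/*/; do
        for script in "$pkg"etc/init.d/*; do
            [ -f "$script" ] || continue
            # crude check: look for a "status)" case label in the script
            if grep -q 'status)' "$script"; then
                basename "$pkg"
            fi
        done
    done
}
```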
<ScottK2> Why do we care about status in init scripts?
<owh> Well, when we finally get ebox to do its thing, it might be useful :)
<owh> I'm sure that there is a high-brow LSB answer too somewhere :)
<owh> s/high-brow/official/
<ScottK2> I'll try it a different way then ...
<ScottK2> owh: What caused you to care enough to write the script?
<owh> Ah, there was some discussion about it. Lemmie have a gander in my logs. One mo.
 * kgoetz thought upstart meant no init scripts
<kgoetz> *init.d
<owh> ScottK2: Yesterday: (08:34:04) kirkland: owh: mathiaz and I were talking earlier today about getting more of the scripts in /etc/init.d LSB-compliant
<owh> ScottK2: So blame mathiaz and kirkland :)
<owh> Really, getting a status back is not a bad idea. You can then use that status in other places. If there is a standard way of doing it, makes life easier.
<owh> Does that answer the question, or did I just head off on a wild goose chase?
<owh> kirkland started volunteering and I had a few hours to put some scripts together.
<ScottK2> Makes me wonder what the heck status is supposed to provide, but not so much I'm going to go hunt it down.
<owh> kirkland told me he had a link, I'll get him to dig it up for us for tomorrow's meeting :)
<owh> ScottK2: http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
<owh> Current version is 3.2.0, but the specs for this appear to be the same.
<ScottK2> That makes sense I guess.
<owh> The nice thing about it is that you can then use the status in other places, a web gui, a cron job, etc.
<owh> kgoetz: Is it a reasonable assumption in my script that a package with an init.d script ships it inside the /etc/init.d directory, rather than creating it from a post-installation script?
<kgoetz> owh: yes. as a rule
<owh> That is, if I unpack it with dpkg -x, I'll have an etc/init.d directory.
<owh> Cool
<owh> So far so good then :)
<kgoetz> what i'm not sure of is if it will have a directory structure, or if you should grep the install scripts for a mv blah /etc/init.d
<owh> kgoetz: Well thus far I've had etc/init.d directories, but a grep for etc/init.d might be useful.
<owh> kgoetz: Lots of false positives, example scripts, help documents, procps (which has 'cp' in the name :), but nothing that indicates any other means of creating an init.d script.
<owh> s/Lots/Found lots/
<kgoetz> owh: what did you grep for?
<owh> grep -ra '/etc/init.d' ${temp_dir}/* | grep -E 'mv|cp'
<owh> The only one that jumped out at me in any way was python-twisted-core, but that appeared to be sample code only.
<owh> I suppose we could install *all* applications and see what falls out the bottom :)
 * owh is kidding.
<owh> kgoetz: Hmm, I suppose for a package to be compliant it needs to register all its files. The ones installed are plain, but the ones we're looking for would perhaps be classed as config files.
<kgoetz> owh: what do you mean by 'registered'?
<owh> In the dpkg list, there's a file for each package. I cannot recall what its called.
<owh> /var/lib/dpkg/info/*.list
<owh> That directory appears to be full of init.d references :)
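owh's observation can be turned into a small query against dpkg's per-package file lists; a sketch (the function name is invented), written so it can also be pointed at a copy of the info directory:

```shell
#!/bin/sh
# Hypothetical helper: list installed packages whose dpkg file list records
# something under /etc/init.d/ (defaults to the real dpkg info directory).
pkgs_with_initd() {
    info_dir=${1:-/var/lib/dpkg/info}
    grep -l '^/etc/init\.d/' "$info_dir"/*.list 2>/dev/null |
        sed -e 's|.*/||' -e 's|\.list$||' |
        sort -u
}
```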
<kgoetz> yep.
<kgoetz> have you thought about grabbing source packagefs instead of binary?
<nxvl> how do i write new blueprints?
<owh> kgoetz: Well, at present I'm just using the .iso
<nxvl> i don't find the option
<kgoetz> owh: ah right
<owh> kgoetz: I figured I'd just start at one end :)
<kgoetz> nxvl: https://blueprints.launchpad.net/ubuntu/ 'register a blueprint'
<kgoetz> ?
<kgoetz> owh: yep
<nxvl> kgoetz: thnx
<owh> The comments we're making here aren't going to waste, so keep poking me in the eye! (I'm adding them to a list to add to the wiki page.)
<kgoetz> i'm stopping- i'm heading off :)
<owh> Thanks for your comments!
<kgoetz> afk. heading home
<kgoetz> no worries :)
<nxvl> kgoetz: thnx
<Jeeves_> Morning!
<_ruben> mornin Jeeves_ .. having fun with your sun servers? ;)
<Jeeves_> _ruben: Not yet :)
<Jeeves_> Connected them with alom
<Jeeves_> But need to fix some stuff first
 * Jeeves_ is having arguments with the Cisco they're connected to :)
<_ruben> hehe
<_ruben> looked at the pics the other day .. lookin sweet ;)
<_ruben> quite a shitload of disks ;)
<Jeeves_> Indeed :)
<_ruben> was kinda curious about the cooling though .. they look rather tightly stacked together
<Jeeves_> I just requested a separate vlan and IP-space, so we should be able to play with them this afternoon
<Jeeves_> _ruben: What about it?
<_ruben> well .. there's not much room for airflow i think?
<Jeeves_> _ruben: There's a lot of heat coming from the rear. So the airflow manages to get there :)
<amer> hi , how can i list the runlevel processes?, please any help
<Jeeves_> 09:37 < bitrot> Config on router jun1.kelvin changed by user 'daniel' with comment add Ubuntu vlan
<Jeeves_> Noc is doing their thing ;)
<_ruben> heh .. irc feedback of noc operations? how sweet :)
<nijaba> morning
<Jeeves_> _ruben: snmptraps
<_ruben> Jeeves_: ah .. something ive been meaning to investigate as well .. am in the process of replacing our crappy d-link switches with 3com ones at one of our hosting environments
<_ruben> there's only very little info to be found on the details of the various traps
<Jeeves_> _ruben: Replacing 'crappy' switches with 3com?
<Jeeves_> Those are crappy as well! :)
<_ruben> Jeeves_: cisco had my preference, but would've cost 2-3 times more .. and compared to the d-link ones, the 3com ones are rather spectacular .. and who knows, maybe they'll be replaced with cisco's eventually .. if these turn out to not be adequate either ;)
<henkjan> jdstrand: stateless ipv6 autoconfiguration is something commonly used in a desktop environment. in any case its more commonly used than dhcpv6
<jdstrand> henkjan: I read up on it and committed it.  it should hit sometime today
<henkjan> jdstrand: okay
<henkjan> thnx for quick fixing :)
<jdstrand> np ;)
<henkjan> jdstrand: would you like to have access to an ipv6 enabled host for testing?
<henkjan> i can create a xen domU for testing purposes
 * jdstrand ponders that
<thewoland> Hi, I would like to install a server version of Ubuntu on a tiny PC
<thewoland> The PC is built on VIA Eden running at 200MHz and booting from a CF card.
<thewoland> Any suggestion on where to find a good manual for this task?
<zul> morning
<jdstrand> henkjan: I appreciate the offer, but I think at this time it is not required
<jdstrand> henkjan: not a lot are using ipv6 right now, and I think we are ok.  Perhaps we can revisit this in the future?
<henkjan> jdstrand: okay
<tzn> hello everyone
<tzn> got a question about rdac module
<tzn> im currently using some DSxxxx hw with centos systems
<tzn> what is current support for rdac in ubuntu?
<Jeeves_> http://ipv6.google.com/
<Jeeves_> 13:03 < jdstrand> henkjan: not a lot are using ipv6 right now, and I think we are ok.  Perhaps we can revisit this in the future?
<Jeeves_> jdstrand: Too bad you feel that way
<jdstrand> Jeeves_: please explain.
<jdstrand> ipv6 is fully supported in ufw
<Kamping_Kaiser> Jeeves_, hey mate
<jdstrand> I just didn't want henkjan to have to go through all the administrative overhead of giving me access to one of his machines
<jdstrand> when I can test ipv6 here locally
<Kamping_Kaiser> i wont be able to installfest tomorrow :( one of the test team decided he wanted to play with solaris
<jdstrand> Jeeves_: ^^
<Jeeves_> jdstrand: Cool that it's supported! Did I misinterpret the 'not a lot are using ipv6 right now, and I think we are ok'?
<jdstrand> Jeeves_: I think so
<jdstrand> Jeeves_: I was responding to henkjan asking if I wanted access to an ipv6 enabled machine of his
<Jeeves_> I thought you made the often made chicken-egg mistake. My bad.
<jdstrand> np
<Jeeves_> Kamping_Kaiser: I'm about to configure alom here and there
<Jeeves_> And I've got two of them :)
<Jeeves_> It pings!
<Jeeves_> Anyone knows the default T1000 ALOM pass?
<Jeeves_> Booting T1000 ...
<sommer> noooooo... he'll go after John Conner!
<_ruben> haha
<sommer> here all week
<faulkes-> sommer? sounds a little close to conner
<faulkes-> and as we all know, john does use computers
<faulkes-> something you aren't telling us?
<sommer> it's not me it's skynet...
 * _ruben hides
<Jeeves_> Hmm
<Jeeves_> I don't have keyboard on the console
<zul> hasta la vista baby?
<shirish> guys, there was some discussion about a Ubuntu Small Business Server edition, is that still somewhere or no?
<Jeeves_> shirish: ebox? :)
<shirish> Jeeves_: dunno what ebox is
<dexem> shirish: http://www.ebox-platform.com
 * shirish thinks it might be something to do with server & enlightenment wm
<shirish> dexem: thanx, will check it out.
<Jeeves_> shirish: Webinterface for SBS-ish stuff
<shirish> btw Ubuntu Server doesn't have its own homepage? Everything on the wiki only?
<shirish> also does somebody have info. about the specific kernel which ships with the server image, any mail/link explaining the difference between the desktop kernel & server kernels would be great :)
<shirish> found it, just sharing it with u guys as well, http://www.serverwatch.com/tutorials/article.php/3715071 seems pretty nice :)
<zul> mathiaz: for 3.0.28a debian dropped a bunch of patches so we dropped them as well
<mathiaz> zul: cool !
<zul> http://pastebin.ca/939747
<kirkland> mathiaz: hey, https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/201287
<ubotu> Launchpad bug 201287 in apache2 "apache2 init script support for 'status'" [Undecided,New]
<mathiaz> kirkland: exzzellent...
<kirkland> mathiaz: so I'm working on cron now
<mathiaz> kirkland: now could you also check if all the other bits required for lsb compliant scripts are available ?
<kirkland> mathiaz: apache was nice, in that it already had a built in "pidof" function and call and dependencies
<kirkland> mathiaz: cron does not
<kirkland> mathiaz: pidof is part of sysvutils
<mathiaz> kirkland: I think that pidof is actually a function provided by the lsb init functions
<kirkland> mathiaz: well, the command line utility is in sysvutils
<kirkland> perhaps there's a function....
<kirkland> mathiaz: so i didn't want to add sysvutils as a Depends
<kirkland> mathiaz: but I was thinking of adding it to Recommends
<kirkland> and making `/etc/init.d/cron status` fail gracefully if pidof not found
<kirkland> mathiaz: http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
<kirkland> that's what's required by lsb
<kirkland> mathiaz: http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptfunc.html
<kirkland> lsb also requires a pidofproc function
<mathiaz> kirkland: there is a pidofproc function
<kirkland> k
<mathiaz> kirkland: available in /lib/lsb/init-functions
<kirkland> let me try that
<kirkland> mathiaz: LSB has some tests
<kirkland> mathiaz: pidofproc works like a charm!
 * kirkland high fives mathiaz
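A rough sketch of the kind of status) stanza kirkland is describing, built on the LSB helpers in /lib/lsb/init-functions. The daemon path and pidfile are illustrative only, taken loosely from the cron case; this is a fragment, not a complete init script.

```shell
# Fragment only; assumes lsb-base is installed so the helpers exist.
. /lib/lsb/init-functions

case "$1" in
    status)
        # pidofproc returns 0 when the daemon is running
        # (LSB 3.1 init-script functions)
        if pidofproc -p /var/run/crond.pid /usr/sbin/cron >/dev/null; then
            log_success_msg "cron is running"
            exit 0
        else
            log_failure_msg "cron is not running"
            exit 3    # LSB: program is not running
        fi
        ;;
esac
```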
<Jeeves_> zpool1                 8.4T    39K   8.4T     1%    /zpool1
<Jeeves_> Hmmm
<soren> kirkland: "pidofproc"?
<kirkland> soren: http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptfunc.html
<kirkland> soren: /lib/lsb/init-functions
<soren> kirkland: That's a rather unfortunate description.. "Only process identifiers of running processes should be returned"
<kirkland> soren: unfortunate why?
<soren> kirkland: So only if they're in actual running state in which case it'd be a no-op on a non-smp system.
<soren> or..
<soren> on processes which are in the process table, which is redundant since that's the only way they could have a pid.
<soren> s/^on/only/
<zul> yay: https://bugs.edge.launchpad.net/ubuntu/+source/samba/+bug/201442
<ubotu> Launchpad bug 201442 in samba "ffe for samba 3.0.28 to 3.0.28a" [Undecided,New]
<kirkland> soren: seems okay for me for daemons, which is the target here
<kirkland> soren: are you quibbling with the wording of the spec, or the actual use of the function?
 * kirkland still can't tell when soren is joking ;-)
<soren> kirkland: The wording of the spec.
<kirkland> soren_not_joking: thanks, that will keep things clear... just keep /nick up to date at all times :-)
<soren_joking> I will.
<_ruben> Jeeves_: that could hold a lot of pr0n^H^H^Hrepository files
<Jeeves_> :P
<kirkland> soren: right, so running processes means to me processes in the process table
<kirkland> soren: which may or may not be active/sleeping
<soren> Right. Which other kinds of processes have pid's?
<soren> None.
<soren> So it's redundant (and thus confusing)
<kirkland> soren: ps -ef | grep defunct
<soren> Specifications should be exact and to the point.
<_ruben> Jeeves_: what kind of raid setup is that btw?
<Jeeves_> _ruben: No clue.
<Jeeves_> Trying to rebuild the whole stuff using zfs
<soren> kirkland: So all /runnable/ processes?
<_ruben> Jeeves_: hehe
<Jeeves_> I'm new to ZFS
<_ruben> Jeeves_: but i do hope there's hardware raid involved? :)
<soren> kirkland: "runnable", perhaps.
 * soren is not sure
<Jeeves_> _ruben: zfs is software-raid, AFAIK
<Jeeves_> _ruben: But it's built for being a fileserver
<soren> kirkland: Where do I send patches for that spec? :)
<kirkland> soren: i know a couple of the IBM guys who work on it
<kirkland> let me check
<kirkland> soren: it's Linux Foundation, now, so perhaps in a month in Austin
<_ruben> Jeeves_: seems zfs has raid0/raid1 and 'custom versions' of raid5 and raid6 .. looks nice on first sight
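For the curious, the 'custom versions' of raid5/raid6 that _ruben mentions are ZFS's raidz and raidz2. A hypothetical layout on a Solaris box (device names invented) would be:

```shell
# Sketch: single-parity raidz pool plus a filesystem, device names hypothetical
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0
zpool status tank
zfs create tank/repository
df -h /tank/repository
```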
<soren> kirkland: I'll print out 500 copies of a patch and distribute it all over the place and see what happens :)
<kirkland> :-)
<kirkland> okay, so the apache2 package doesn't have a -1ubuntu in its name
<kirkland> which means that it's just a straight up debian package?
<soren> Yup.
<mathiaz> kirkland: yes
 * mathiaz thinks about all the trees in Denmark 
<kirkland> mathiaz: looking at the changelog, i don't even see a -1ubuntu version in its history... does that mean ubuntu patching it is unprecedented (and therefore highly unlikely)?
<mathiaz> kirkland: it was - just that when a package is synced, the changelog is dropped
<mathiaz> kirkland: otherwise we'd have to keep a merge
<kirkland> mathiaz: i see....
<kirkland> mathiaz: how, then, do I get around this error: "dpkg-source: error: Version number suggests Ubuntu changes, but Maintainer: does not have Ubuntu address"
<mathiaz> kirkland: did you set the Maintainer field to core-dev ?
<nxvl> mathiaz: did you see the blueprint i started yesterdays
<nxvl> mathiaz: i subscribed yourself to it
<kirkland> mathiaz: nope, I suppose that has to be done, since we've modified the package, then
<mathiaz> nxvl: I've noticed - still have to read through your proposal
<mathiaz> nxvl: did you look at the previous specs and the pages in wiki.ubuntu.com about previous attempts ?
 * zul thinks about all the bacon in denmark
<mathiaz> kirkland: yes - https://wiki.ubuntu.com/DebianMaintainerField
<nxvl> mathiaz: i based the wiki page on ufw one
<mathiaz> nxvl: I meant about the content
<nxvl> mathiaz: i'm still not finished with the specs, but the base idea is already described
<mathiaz> nxvl: there are a couple of spec written up for simplifing the administration of an ubuntu server
<nxvl> mathiaz: yes, i take a look at some of them
<nxvl> mathiaz: but nothing so similar as i want to
<nxvl> mathiaz: also, the tool i want can be used in the future for complete desktop and/or server administration, as it is modular
<Jeeves_> Who is my date for tomorrow morning, for playing with the T1000's ?
<Jeeves_> henkjan: Says it is pschulz01
<kraut> are you right or left handed?
<kirkland> mathiaz: see http://launchpadlibrarian.net/12607821/apache2_init_status.diff
<kirkland> mathiaz: that look better?
<mathiaz> kirkland: yes - what about the comment on pidof apache2 in pidof_apache2 ?
<mathiaz> kirkland: # if pidof is null for some reasons the script exits automagically
<mathiaz> kirkland: # classified as good/unknown feature
<kirkland> mathiaz: :-)  yeah, funny, huh?  i did hit that bug
<kirkland> mathiaz: and I found that putting the || true inside the shell call fixed it for me
<kirkland> mathiaz: hence my comment
 * kirkland notes that those comments were unchanged by kirkland
<mathiaz> kirkland: yes - but the apache maintainer added a "# classified as good/unknown feature"
<mathiaz> kirkland: if you've fixed a good feature, it ain't no good ;)
<kirkland> mathiaz: i took that as a poor attempt at comedy
<kirkland> mathiaz: calling a "bug" a "feature"
<mathiaz> kirkland: under which condition did you hit the bug ?
<kirkland> mathiaz: in the case where there are no apache2's running
<kirkland> that pidof command just bombs
<kirkland> and the script promptly exits
<kirkland> silently
<kirkland> no information
<mathiaz> kirkland: right - and now it would just say that apache2 is not running
<kirkland> it is definitely NOT a good feature
<mathiaz> kirkland: agreed
<Theo__> hello
<kirkland> mathiaz: i think the outermost || true could be removed
<kirkland> mathiaz: as well as the smart-alec comment
<kirkland> mathiaz: but I went with the bare minimum needed for my bug fix
<kirkland> mathiaz: alternatively, i could have used pidofproc
<kirkland> mathiaz: but it seemed silly, since this script had one specifically tailored toward apache2 built in
<kirkland> figured i better use that
<mathiaz> kirkland: yes - so we could get rid of the second true
<mathiaz> kirkland: and just use PIDS=`pidof apache2 || true`
<kirkland> mathiaz: i agree
<kirkland> mathiaz: it's superfluous
<mathiaz> kirkland: could you also fix the echo statement ?
<mathiaz> kirkland: and use the lsb function instead
<kirkland> sure
<kirkland> mathiaz: what echo do you speak of?
<kirkland> mathiaz: I did change it to this:                 if pidof_apache >/dev/null; then
<kirkland> that takes care of it spitting process id's at you
<mathiaz> kirkland: there is an echo "ERROR:..." in pidof_apache
<kirkland> mathiaz: ah, I see
<kirkland> mathiaz: log_failure_msg ?
<mathiaz> kirkland: seems like the best option
<kirkland> mathiaz: http://pastebin.com/d2311aa2c
<kirkland> mathiaz: look better?
<mathiaz> kirkland: meh - I don't think we should do that
<mathiaz> kirkland: log_failure_msg echoes to stdout
<kirkland> mathiaz: crap, i just posted to launchpad
<mathiaz> kirkland: so we may end up with PIDS with the error message
<mathiaz> kirkland: -> PID=$(pidof_apache)
<mathiaz> kirkland: actually - no
<kirkland> mathiaz:         exit 2
<mathiaz> kirkland: yes :)
<kirkland> mathiaz: it exits immediately
<kirkland> mathiaz: so you're happier with it now?
<mathiaz> kirkland: one more comment - could you also fix the echo "MPM ..." at the beginning of the script
<mathiaz> kirkland: and make it use an lsb function
<kirkland> mathiaz: log_daemon_msg ?
<kirkland> log_end_msg
<dfullo> I am trying to use ldap authentication on 7.10 server
<dfullo> when I make the changes in nsswitch.conf and reboot the server it hangs
<dfullo> passwd:  files ldap
<dfullo> group: files ldap
<kirkland> mathiaz: I'd also need to move the . /lib/lsb/init-functions
<kirkland>  up
<dfullo> shadow: files ldap
<mathiaz> kirkland: log_failure_msg seems to be more appropriate
<dantalizing> mathiaz: have you had a chance to check on status of the -openvz flavor for hardy?
<mathiaz> kirkland: log_daemon_msg is debian/ubuntu specific IIRC
<mathiaz> kirkland: it's not part of the LSB standard
<kirkland> mathiaz: oky
<mathiaz> dantalizing: nope
<dantalizing> mathiaz: k.  i asked in #ubuntu-kernel and got no response, just fyi
<kirkland> mathiaz: http://pastebin.com/dac660c4
<mathiaz> dantalizing: gitweb doesn't show anything new on that front
<dantalizing> mathiaz: yah thats why i'm asking...i check during my lunchtime ... :(
<mathiaz> dantalizing: you may wanna ping BenC directly as he is the one that said openvz should land in hardy
<dantalizing> mathiaz: didnt wanna breach any existing protocol and hurt the chances by being a pest... wasnt sure the best way to follow up
<dantalizing> mathiaz: i'll try though
<mathiaz> kirkland: ok - much better.
<mathiaz> kirkland: so to keep improving things, the exit code for the MPM not here should not be 0
<mathiaz> kirkland: according to http://refspecs.freestandards.org/LSB_3.2.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
<mathiaz> kirkland: it should be 6
<kirkland> mathiaz: done.
<mathiaz> kirkland: do you have a debdiff somewhere ?
<kirkland> with the exit 6?
<mathiaz> kirkland: yes
<mathiaz> kirkland: or the whole diff
<kirkland> mathiaz: one second...
<mathiaz> dantalizing: it's okay to ping someone every once in a while
<mathiaz> dantalizing: as long as you don't harass them on an hourly basis
<kirkland> mathiaz: http://pastebin.com/d5bdc341f
<mathiaz> kirkland: could you also document the exit 6 in the changelog
<mathiaz> kirkland: something like "exit code compliant with the LSB policy."
 * dantalizing deletes his new ircpingbot
<kirkland> mathiaz: http://pastebin.com/d73b8e639
<mathiaz> kirkland: so to keep on LSB compliant exit codes, there are two exit 1 that are wrong
<mathiaz> kirkland: for reload) and *)
<mathiaz> kirkland: reload should exit 6 when there is a configuration problem with apache
<mathiaz> kirkland: and *) should exit 3 when an action is not implemented
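The exit codes mathiaz lists come from the LSB init-script actions spec (0 success, 3 unimplemented action or "not running" for status, 6 not configured). A sketch of a dispatch that applies them — config_ok and the apache2 name are placeholders:

```shell
#!/bin/sh
# LSB exit-code sketch for an init script's action dispatch.
config_ok() {
    # Placeholder for a real config check, e.g. "apache2ctl configtest".
    return 0
}

dispatch() {
    case "$1" in
        start|stop|restart)
            : ;;                # real start/stop logic goes here
        reload)
            config_ok || exit 6 # LSB: program is not configured
            ;;
        status)
            pidof apache2 >/dev/null 2>&1 && exit 0
            exit 3              # LSB: program is not running
            ;;
        *)
            echo "Usage: $0 {start|stop|restart|reload|status}" >&2
            exit 3              # LSB: unimplemented feature
            ;;
    esac
}
```

In a real script the `dispatch "$1"` call is the script body, so those exits become the script's exit status.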
<kirkland> mathiaz: okay, i'll make a sweep of all exit codes
<kirkland> mathiaz: this patch was originally just supposed to add a status) section...  now we're well into full LSB compliance work
<mathiaz> kirkland: I wouldn't touch exit 2 in pidof_apache
<kirkland> mathiaz: is that the new goal?
<mathiaz> kirkland: hum... :D
<mathiaz> kirkland: not full LSB compliance
<mathiaz> kirkland: just fixing stuff here and there
<mathiaz> kirkland: after that I'll be happy :)
<kirkland> mathiaz: sure you will .... :-)
<kirkland> *mathiaz like the iterative process
<kirkland> mathiaz: http://pastebin.com/d4caee1f5
<mathiaz> kirkland: great - I'm happy now
<mathiaz> kirkland: can you attach the debdiff to the bug and I'll sponsor your upload
<kirkland> sure thing, will do now
<kirkland> mathiaz: debdiff uploaded to https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/201287
<ubotu> Launchpad bug 201287 in apache2 "apache2 init script support for 'status'" [Undecided,New]
<kirkland> mathiaz: here are the ones I was going to tackle immediately...
<kirkland> mathiaz: http://pastebin.com/d5c09ebc2
<kirkland> i think I can do most of those today
<kirkland> owh has a better list, and I'm running his script on my mirror for all of main and universe
<mathiaz> kirkland: seems great - I'd suggest that you check in the Debian bug tracker to see if there are patches for the status action
<mathiaz> kirkland: and also in LP
<kirkland> mathiaz: yep, i've been checking LP
<kirkland> mathiaz: haven't checked Debian
<kirkland> mathiaz: what do I need to do to push this apache patch to Debian?
<mathiaz> kirkland: So as a first step I'd start to file bugs in LP to implement the status action
<mathiaz> kirkland: then check if there is a bug in debian
<mathiaz> kirkland: if so - link it to LP
<kirkland> mathiaz: it to LP, or LP to it?
<mathiaz> kirkland: you link it in LP
<kirkland> mathiaz: gotcha
<kirkland> mathiaz: what about pushing the apache patch to debian?
<mathiaz> kirkland: yes - always a good idea to do it.
<mathiaz> kirkland: usually I do that once I've uploaded the package to ubuntu
<mathiaz> kirkland: with the submittodebian script
<kirkland> mathiaz: okay, so you'll take care of this one then, since you're the uploader?
<mathiaz> kirkland: I'd rather you do it - so that you can get used to the process
<kirkland> mathiaz: fair enough
 * kirkland is happy to learn
<mathiaz> kirkland: actually - submittodebian is more for merges as it requires the ubuntu and the debian dsc files
<mathiaz> kirkland: in our current case we only have the ubuntu change.
<mathiaz> kirkland: ok - so could you use the sponsorship process for your apache2 diff ?
<mathiaz> kirkland: https://wiki.ubuntu.com/MOTU/Sponsorship/SponsorsQueue
<kirkland> mathiaz: sure, let me go read that
<kirkland> mathiaz: ubuntu-main-sponsors right?
<mathiaz> kirkland: yes - apache2 is in main
<kirkland> mathiaz: okay, i renamed the patch to .debdiff, set the status to confirmed, and subscribed ubuntu-main-sponsors
<mathiaz> kirkland: excellent.
<kirkland> mathiaz: i think that's all i was missing from the SponsorQueue page
<mathiaz> kirkland: it was just to introduce you to the sponsorship program
<mathiaz> kirkland: now I can get your debdiff from there
<kirkland> mathiaz: sure, no problem
<kirkland> that's good
<mathiaz> kirkland: ok - so I've uploaded your cyrus-sasl2 merge
<kirkland> mathiaz: 'your' cyrus-sasl2 merge :-P
<kirkland> mathiaz: i'm not to be blamed for that mess :-)
<kirkland> j/k
<mathiaz> kirkland: you should have received an email from LP
<mathiaz> kirkland: now the next thing to do is to forward patches to debian
<kirkland> mathiaz: okay....
<kirkland> mathiaz: wiki page?
<spiekey> hi
<spiekey> does someone know a firewall testing tool to test udp and tcp ports?
<spiekey> like ftester
<mathiaz> kirkland: https://wiki.ubuntu.com/Bugs/ReportingToDebian
<kirkland> mathiaz: cool, querybts shows no relevant bugs
<kirkland> mathiaz: okay, so i run submittodebian
<kirkland> mathiaz: take out the maintainer field changes
<kirkland> mathiaz: leave everything else
<mathiaz> kirkland: well - I tend to actually break the diff into functional patches
<mathiaz> kirkland: so that the debian maintainer doesn't have to go through a giant patch with multiple bugs fixed in it
<kirkland> mathiaz: this isn't exactly a "giant patch" though
<mathiaz> kirkland: right - it was more a general comment
<kirkland> mathiaz: k
<mathiaz> kirkland: let me check what I'd do
<kirkland> mathiaz: normal severity (6)
<mathiaz> kirkland: so I wouldn't include the postinst diff
<mathiaz> kirkland: it's ubuntu specific
<mathiaz> kirkland: the change to rules make sense for debian
<mathiaz> kirkland: for the control file I'd open a bug mentioning the reason to do this
<kirkland> mathiaz: there is no postinst in my diff
<mathiaz> kirkland: so I'd file two bug reports - one with the clean target diff
<mathiaz> kirkland: and one for the control file
<mathiaz> kirkland: ok - you're probably not using the one I've uploaded
<kirkland> mathiaz: are we talking about apache2 here?
<mathiaz> kirkland: no - cyrus-sasl2 !
<kirkland> mathiaz: jebus
<mathiaz> kirkland: I was referring to cyrus-sasl2 because I've just uploaded the merge
<kirkland> mathiaz: i think you should submit the cyrus-sasl2 ones, since you have the updated sources.  i'll do the apache2 one, for the experience, and because I have all the info
<mathiaz> kirkland: ok.
<kirkland> mathiaz: apache2 patches sent to debian
<chris____> hello, does anyone here have experience with BIND and DNS?
<chris____> I've set myself up a bind server and it works fine locally, its just when I do an nslookup on the domain from my local computer, it doesn't return anything
<mathiaz> soren: how can I setup a bridged network for libvirt/kvm ?
<sommer> chris____: have you put the dns server into /etc/resolv.conf?
<chris____> sommer: yes it is the first entry in my /etc/resolv.conf
<sommer> is port 53 open on the server?
<chris____> sommer: yes, nmap localhost shows port 53 is open
<sommer> chris____: you might try restarting bind9 and checking /var/log/syslog for errors
<chris____> ill check it out right now, thanks
<sommer> does dig -x domain.name work on the dns server?  another check anyway
<sommer> dig is part of the dnsutils package... if you don't have it installed
<chris____> here is the output from syslog, btw my domain name is my325i.com and the IP is 69.93.237.13
<chris____> Mar 12 15:27:01 chris named[19418]: zone 237.93.69.in-addr.arpa/IN: loaded serial 2008031000
<chris____> Mar 12 15:27:01 chris named[19418]: zone my325i.com/IN: NS 'ns1.my325i.com' has no address records (A or AAAA)
<chris____> Mar 12 15:27:01 chris named[19418]: zone my325i.com/IN: loaded serial 2008031200
<chris____> Mar 12 15:27:01 chris named[19418]: zone localhost/IN: loaded serial 1
<chris____> Mar 12 15:27:01 chris named[19418]: running
<chris____> sorry for pasting in here, I should have pasted it in a text file
<sommer> chris____: try creating an A record for ns1
<sommer> "zone my325i.com/IN: NS 'ns1.my325i.com' has no address records (A or AAAA)"
<chris____> okay, in the db.my325i.com file ? Here's my dig output http://69.93.237.13/dig.txt
<chris____> I added it in the db.my325i.com file and I got a new error, heres the output from /var/log/syslog http://69.93.237.13/syslog.txt, the entry I tried was "@ IN A ns1.my325i.com"
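For reference: that record is backwards — an A record maps a hostname to an IPv4 address, so `@ IN A ns1.my325i.com` cannot be valid. A sketch of the entries that would clear the "no address records (A or AAAA)" warning, using the name and IP chris posted (SOA and other records omitted; remember to bump the serial after editing):

```
; db.my325i.com fragment (sketch)
@    IN  NS  ns1.my325i.com.
ns1  IN  A   69.93.237.13    ; A record for the NS host itself
```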
<nijaba> :( ubotu left us
<chris____> haha hes back
<chris____> I have to go, but I will be back on later probably asking the same question, thanks for helping me troubleshoot
<sommer> doh
<soren> mathiaz: You need to set up your bridge first manually.
<soren> mathiaz: After that, it'll show up in virt-manager.
<mathiaz> soren: ah - so I should not try to get an xml file for libvirt
<soren> Nah, don't bother.
<kirkland> mathiaz: maintainer field change added to changelog
<mathiaz> soren: I'm following this wiki page for setting a bridge to use with kvm: https://wiki.ubuntu.com/KvmWithBridge
<mathiaz> soren: there is a reference to a tap0 device, but I don't have it on my hardy server
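A sketch of the manual bridge soren refers to, assuming the bridge-utils package is installed and the box has a single NIC named eth0; the tap devices mentioned on the wiki page are created per-guest later, so their absence up front is expected:

```
# /etc/network/interfaces fragment (sketch) -- replaces the eth0 stanza.
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
# After "sudo /etc/init.d/networking restart", br0 is the interface
# libvirt/kvm guests attach to.
```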
<chris____> hello, I'm back, I was having trouble with DNS on my server. I can't seem to get it to work properly. I posted up my "dig -x my325i.com" (my325i.com is my domain) at http://69.93.237.13/dig.txt, I also posted up a snippet from my /var/log/syslog file at http://69.93.237.13/syslog.txt
<nxvl> mathiaz: server meeting starts in 45 minutes, didn't it?
<chris____> if anyone could help, it would be greatly appreciated
<chris____> by the way, the syslog snippet I posted was after adding "@ IN A ns1.my325i.com." to my db file
<mathiaz> nxvl: the server meeting will start in 45 minutes - at 21:00 UTC
<twb> Is there an equivalent to snapshot.debian.net for Ubuntu?
<kirkland> mathiaz: see if this one is better on the first time through....  https://bugs.launchpad.net/ubuntu/+source/cron/+bug/201504
<ubotu> Launchpad bug 201504 in cron "cron init script support for 'status'" [Undecided,Confirmed]
<nxvl> mathiaz: thnx, i still have problem with UTC time
<nxvl> and i can't add it to my gnome clock
<mathiaz> nxvl: date -u is your friend
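mathiaz's suggestion, spelled out:

```shell
# Print the current time in UTC, e.g. to check a meeting time.
date -u
# Or just the clock with its zone:
date -u '+%H:%M %Z'
```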
<ScottK2> kirkland: Is there really a point in filing these bugs right now?  This is really a Hardy +1 issue.
<kirkland> ScottK2: in almost every case, they are trivial to fix
<kirkland> ScottK2: it is an LSB-compliance issue
<kirkland> ScottK2: and it's something that linux admins rely upon in other linux distros--i think it's important for the ubuntu server to provide such information
<kirkland> ScottK2: we're concentrating (for now) on the core services in ubuntu-server which have the bug
<kirkland> ScottK2: you can see my patches so far for apache, and cron
<kirkland> ScottK2: for the many, many other init scripts (non ubuntu-server), sure, Hardy+1 is great
<kirkland> owh: howdy
<kirkland> owh: sorry i missed your pings last night ;-)
<owh> It happens when you're asleep :)
<owh> Did you see the updated stuff on the wiki page?
<kirkland> owh: i did
<owh> Did it help?
<kirkland> owh: and I have made a few changes to your script
<owh> Bring it on!
<owh> :)
<kirkland> owh: yeah, but there's something else we're going to have to work around to run it against a full mirror
<owh> ?
<kirkland> owh: so I'm running it against a local copy of something like http://archive.ubuntu.com/ubuntu/pool/main/
<owh> kirkland: Wasn't that the intent? Or are you talking about something else?
<kirkland> owh: now have a look at final subdirectory
<kirkland> http://archive.ubuntu.com/ubuntu/pool/main/a/abiword/
<kirkland> so I can filter that down some with find -type f -name *_i386.deb
<kirkland> to get rid of the duplicate arches
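One nit on the command as pasted: the glob needs quoting, or the shell may expand `*_i386.deb` against the current directory before find ever runs. A sketch (MIRROR is a hypothetical local mirror path):

```shell
# List only the i386 .debs under a mirror tree, skipping the duplicate
# architectures. The pattern is quoted so find, not the shell, expands it.
MIRROR=${MIRROR:-/srv/mirror/ubuntu/pool/main}
if [ -d "$MIRROR" ]; then
    find "$MIRROR" -type f -name '*_i386.deb'
fi
```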
<owh> kirkland: Yes, that will need to be filtered :)
<owh> What about udeb?
<owh> The .iso was full of them.
<kirkland> owh: but the versions present a problem...  we only want the latest
<owh> I didn't see that, one mo.
<owh> kirkland: Yes, that's right. I can deal with that with a sort and a tail.
<kirkland> owh: almost; maybe a fancy sort
<owh> kirkland: That relies on the naming of the latest version sorting consistently alphabetically, though.
<kirkland> owh: because those dirs have multiple packages
<kirkland> owh: abiword-common, abiword-gnome, etc.
<kirkland> i think sort can split up parts of the string
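sort can indeed split fields (-t/-k), but plain lexical order still misorders versions (1.10 sorts before 1.9). Two hedged options: GNU sort's -V flag (present in newer coreutils) or dpkg's own comparator:

```shell
# Pick the newest of several version strings. sort -V understands dotted
# version numbers; for full Debian semantics (epochs, ~, etc.) use
# "dpkg --compare-versions" instead (Debian/Ubuntu systems only).
newest_version() {
    printf '%s\n' "$@" | sort -V | tail -n 1
}
newest_version 2.4.6-1 2.4.10-1 2.4.9-2   # -> 2.4.10-1
# dpkg --compare-versions 2.4.10-1 gt 2.4.9-2 && echo newer
```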
<owh> kirkland: So, perhaps we'd need to start at the other end of the problem, that is, start from a package list.
<kirkland> owh: right, so I have some perl that might help
<owh> kirkland: It depends what we're doing this for. Is it a one-off, or is it going to be an ongoing compliance check?
<owh> kirkland: If it's one off, we can construct a list, if not we need to be able to handle what comes our way.
<Erick> i need some help getting apache to work on ubuntu
<kirkland> owh: a one-off, I hope
<kirkland> owh: LSB has compliance tests
<kirkland> owh: http://www.linux-foundation.org/en/Testing
<owh> kirkland: How big is a full listing of a mirror?
<kirkland> owh: one minute, i'll pastebin it for you
<kirkland> owh: this is for main and universe
 * owh gets ready to copy/paste :)
<kirkland> owh: a listing of all /etc/init.d/* scripts, sorted by the package that owns it
<Erick> is anyone able to help me?
<owh> Erick: You need to tell us what is wrong.
<Erick> owh:  http://www.pastebin.ca/940179
<nijaba> Erick: ask your question, don't ask to ask
<kirkland> Erick: try starting it more like "sudo /etc/init.d/apache2 start"
<owh> I suspect that Erick has another copy of a web-server running.
<Erick> owh, no
<csguest> could somebody take a look at: https://bugs.launchpad.net/ubuntu/+source/xen-3.2/+bug/199533
<ubotu> Launchpad bug 199533 in xen-3.2 "Device 0 (vif) could not be connected. Hotplug scripts not working." [Undecided,Confirmed]
<csguest> it is confirmed and we have proposed fixes
<Erick> kirkland, when i start it like that it tells me it is already running
<kirkland> Erick: "sudo /etc/init.d/apache2 restart"
<owh> kirkland: So, this listing is output, or is it a dir listing of the mirror?
<Erick> kirkland, still cant connect to my computer
<kirkland> owh: perl script is here http://pastebin.com/d39fab86e
<kirkland> owh: uses a find under the covers
<kirkland> owh: uses a dpkg -c to list the contents of the package (such that I don't have to explode it like you do)
<kirkland> owh: currently, only looks for ./etc/init.d/
<kirkland> owh: needs your code to bust it open, and search for "status)"
<nijaba> Erick: can you ping your server?
<owh> kirkland: Did you see the discussion that kgoetz and I had about that yesterday? It may well be that there are other ways to create init.d scripts :(
<kirkland> Erick: have you customized the apache2 configuration files already?  or are you using the stock config
<Erick> nijaba, yes i can ping it
<kirkland> owh: hmm, i don't remember seeing that
<Erick> kirkland, ive configured it a different site, but have not done anything with apachee.conf
<owh> kirkland: I summarised it on the wiki: https://wiki.ubuntu.com/OnnoBenschop/ubuntu-server/init%2ed-status#head-0325a23d7857a153c12a28aedfe10ccf6c6fa10c
<kirkland> owh: here's the first part http://pastebin.com/d6ba4dded
<nijaba> Erick: can you paste your apache.conf on pastebin.ca?
<Erick> sure
<kirkland> nijaba: i doubt its his apache.conf...  it's probably one of the sites-enabled
<nijaba> Erick: sorry I meant your config for the other site
<kirkland> Erick: can you restore the default configurations and see if apache2 will start?
<Erick> whoa, you lost me nijaba
<Erick> kirkland, i dont know how
<nijaba> Erick: you said that you did not touch apache.conf
<Erick> yeah
<Erick> i didnt
<Erick> well, a little bit with the servername
<Erick> but thats all
<nijaba> Erick: so I am wondering how you configured the "other site"
<Erick> i told apache to use that one instead of the old one
<kirkland> Erick: list the contents of "ls /etc/apache2/sites-enabled/"
<Erick> kirkland, no such file or directory
<kirkland> Erick: what version of Ubuntu are we talking about?
<Erick> gusty
<kirkland> Erick: ls /etc/apache2/sites-available/
<Erick> ok, got it
<Erick> the site is: ericksite
<Erick> i configured it to use that one
<Erick> when i go to http://localhost/ i get that page, too
<kirkland> Erick: using a2ensite?
<Erick> kirkland, yes
<Erick> then i restarted apache
<kirkland> hmm, i have doubts about your installation if you're lacking an /etc/apache2/sites-enabled/ directory, but whatever... if http://localhost/ is working, it's a problem within your config file
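For reference, the core of what a2ensite does on Debian/Ubuntu is a symlink from sites-enabled into sites-available (the real tool adds error checking on top); a sketch with the directory parameterised so it stands alone:

```shell
# enable_site <apache-conf-dir> <site-name>: the essential effect of
# "sudo a2ensite <site-name>", minus a2ensite's sanity checks.
enable_site() {
    mkdir -p "$1/sites-enabled"
    ln -sf "../sites-available/$2" "$1/sites-enabled/$2"
}
# enable_site /etc/apache2 ericksite
# ...then: sudo /etc/init.d/apache2 reload
```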
<soren> twb: What are you looking for specifically?
<kirkland> Erick: post your ericksite config to a pastebin
<kirkland> stripping anything sensitive, of course
<Erick> kirkland, i did get the sites enabled
<Erick> that was it
<Erick> ericksite
<kirkland> Erick: cat ericksite
<Erick> cat?
<kirkland> Erick: man cat
<Erick> what do you mean?
<kirkland> Erick: I'm giving you commands to run
<Erick> ok
<Erick> lol
<nealmcb> server team meeting now in #ubuntu-meeting
 * owh heads over to the meeting.
<kirkland> Erick: man is how you read documentation from a command line in linux
<kirkland> Erick: "man cat" will tell you that cat is the command used to print a file to the screen
<Erick> ok
<Erick> i printed it
<kirkland> Erick: copy and paste that to a pastebin if you want help with your configuration
<deshantm> is there someone that could pass on a bug to the ubuntu Xen team or Chuck?  #199533
<deshantm> it is confirmed and we have proposed solutions
<Erick> kirkland:  http://www.pastebin.ca/940241
<Erick> whoa, wait
<kirkland> Erick: I don't need to see the documentation to "cat" ...
<Erick> cat ericksite doesnt work
<twb> soren: the ability to grab debs that were in an Ubuntu repo, but are no longer (because they've been superseded.)
<kirkland> Erick: probably because you aren't in the right directory?
<soren> twb: They're all on launchpad. Why do you need them, though?
<twb> soren: for example, right now I wanted the newest version of coreutils that didn't cause LP 191884.
<ubotu> Launchpad bug 191884 in coreutils "wrong id behaviour on a system with LDAP" [Undecided,New] https://launchpad.net/bugs/191884
<soren> twb: Ok, launchpad it is, then.
<twb> I don't understand what you mean by `they're all on launchpad'
<kirkland> Erick: cat /etc/apache2/sites-available/ericksite
<kirkland> Erick: or, cd /etc/apache2/sites-available; cat ericksite
<Erick> kirkland:  http://www.pastebin.ca/940254
<Erick> kirkland, i need to go
<kirkland> Erick: the problem is almost certainly that you have multiple web servers running
<kirkland> Erick: look hard for another server running
<Erick> i dont though
<Erick> never messed with servers before
<Erick> the system was only installed 2 weeks ago
<kirkland> and try the /etc/init.d/apache2 restart thing again
<owh> Erick: That looks a lot like the default site. I think you have both default and ericsite enabled on the same port.
<Erick> but i need to go, thanks for the help
<kirkland> Erick: sorry, good luck
<kirkland> Erick: owh: could be... you should set each site to listen on a unique Listen port or ServerName
<owh> kirkland: I think Erick copied default to ericsite, edited it a bit and enabled it.
<kirkland> owh: right, he doesn't have a proper sites-enabled directory
<kirkland> owh: and he should be using a2ensite
<owh> kirkland: I'm putting that down to a typo on entering the ls command.
<kirkland> owh: http://pastebin.com/d4fca412e
<kirkland> owh: complete list
<kirkland> owh: that's main only, actually
<owh> kirkland: That's not looking too scary.
<kirkland> owh: i'm running universe now
<kirkland> gonna take 5...
<owh> Sure.
<owh> kirkland: You going to be around for a bit after the meeting?
<kirkland> owh: yep
<owh> k
<soren> twb: Which package do you want, I'll show you how to find them.
<twb> OK, for the sake of the exercise let's say coreutils=5.97-5.6ubuntu1
<soren> https://launchpad.net/distros/ubuntu/+source/coreutils/
<soren> twb: Find the release you want and click the version number of it.
<soren> That brings you to https://edge.launchpad.net/ubuntu/+source/coreutils/5.97-5.6ubuntu1
<soren> Go down to the builds sections. You want amd64 or i386?
<twb> Yes.
<soren> Which one? :)
<twb> Preferably as in an apt repo, like snapshots.d.n
<twb> s/as//
<soren> snapshots.d.n is not an apt repo.
<twb> It's a whole bunch of apt repos
<twb> One for each package
<soren> Ah.
<twb> Well really it's one repo, but with /package instead of /main
<soren> WEll, do you want the package or not? :)
<twb> One moment, I'm still taking notes
<soren> Ok, for the sake of the excercise, let's choose amd64. Click hardy amd64, which brings you to https://edge.launchpad.net/ubuntu/+source/coreutils/5.97-5.6ubuntu1/+build/472902.
<soren> On the left, there's a box called "Resulting binaries". Click the binary package you want..
<soren> ..and there it is ready for your downloading pleasure.
<soren> The url for that last page is:
<twb> OK.
<soren> https://edge.launchpad.net/ubuntu/hardy/amd64/coreutils/5.97-5.6ubuntu1
<twb> That's better than nothing, but it's very tedious.
<soren> Not if you just construct the url if you know the binary, arch, and version.
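soren's "just construct the url" can be scripted — a sketch, assuming the edge.launchpad.net URL shape shown above stays stable:

```shell
# Build the Launchpad download page URL for a binary package:
# lp_binary_url <release> <arch> <package> <version>
lp_binary_url() {
    echo "https://edge.launchpad.net/ubuntu/$1/$2/$3/$4"
}
lp_binary_url hardy amd64 coreutils 5.97-5.6ubuntu1
# -> https://edge.launchpad.net/ubuntu/hardy/amd64/coreutils/5.97-5.6ubuntu1
```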
<twb> Often I don't know exactly which version is the one that introduced a regression
<soren> If you often need older binaries, that's a symptom that something is very wrong somewhere.
<twb> So I want to add all of them to my sources.list and downgrade them one version at a time
<soren> "all of them"?
<twb> soren: yes, I would never downgrade on a normal system
<twb> "all of them" as in all versions
<twb> This is the snapshots.d.n workflow
<soren> What would you need it for?
<twb> Finding what release introduced a regression
<twb> Or finding what is the latest version of a library that works with your stupid in-house code.
<soren> *shrug* File a bug against soyuz.
<soren> Again: "If you often need older binaries, that's a symptom that something is very wrong somewhere."
<twb> Granted.
<twb> If hardware and software worked reliably, I'd not have a job
<twb> :-)
<soren> I'm sorry to hear that. Do you work in a support centre?
<soren> :p
<twb> I'm a sysadmin
<kraut> i'm your worst nightmare
<soren> Hah!
 * zul is an EX-sysadmin..wohoo
<nxvl> sorry, i needed to go out for a while, what are we talking about now?
<nxvl> wrong channel
<nxvl> :P
<ivoks> ScottK: this is something we could do for hardy+1
<nxvl> mathiaz: did you take a look at the blueprint?
<nxvl> i still need to work harder on the wiki page, but the base idea is there
<ScottK2> ivoks: Yes
<ScottK2> ivoks: You still need you new tasksel revision uploaded, right?
<ivoks> ScottK2: right... patch is uploaded
<mathiaz> nxvl: not yet
<ScottK2> ivoks: So you do not need me to do it?
<ivoks> ScottK2: umm...? i'm not core dev, so i can't apply it
<ScottK2> ivoks: So you meant you'd uploaded it to LP.  Got it.
<ivoks> ScottK2: i've just added new version to lp
<ivoks> right
<ScottK2> mathiaz: Why do you think the tasksel needs an FFe?  It looks like bug fixing a previous broken upload to me?
<kirkland> mathiaz: so we now have a better list for you
 * owh runs for cover.
<kirkland> mathiaz: http://ubuntu.pastebin.com/m627c07fb
<kirkland> mathiaz: owh installed a vm with the latest hardy-server snapshot
<kirkland> ticking all the servers offered by the installer
<mathiaz> ScottK2: are you referring to https://bugs.launchpad.net/ubuntu/+source/dovecot/+bug/164837 ?
<ubotu> Launchpad bug 164837 in dovecot "Dovecot SASL for postfix" [Low,In progress]
<ScottK2> mathiaz: Yes.
<owh> mathiaz: Then I installed all 7 server tasks.
<mathiaz> ScottK2: the debdiff adds a new postinstall script
<ScottK2> Yeah.  I'm asking slangesek about it now.
<kirkland> mathiaz: of which 4 (apparmor, postfix, postgresql, cupsys) have status sections
<kirkland> mathiaz: plus we already have patches for 2 more (apache, cron)
<mathiaz> ScottK2: ok.
<kirkland> mathiaz: waaay too many for him and me to hack out overnight, as ScottK2 suspected
<mathiaz> kirkland: hum - mysql has a status action IIRC
<kirkland> mathiaz: ooh, good point, bad grep on our part
<kirkland>   'status')
<kirkland> quoted, tricky.....
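A pattern that catches the quoted form kirkland hit — a sketch; it still misses compound case labels like `stop|status)`:

```shell
# Check whether an init script defines a "status" case label, with or
# without surrounding quotes and leading whitespace.
has_status_action() {
    grep -Eq "^[[:space:]]*['\"]?status['\"]?\)" "$1"
}
```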
<ScottK2> So now ivoks has run off
<ScottK2> mathiaz: Did you see slangasek's comments on ubuntu-devel?  I think we need to think about Dovecot again
<kirkland> mathiaz: so mysql was the only additional one on that list
<ScottK2> mathiaz: At a glance, I don't see anything in the Dovecot part of the tasksel change that's clearly wrong for putting straight in the Dovecot package.
<ScottK2> That'd make this whole thing much simpler
<kirkland> mathiaz: at this point, should we just pick a few of the most important?  (eg openssh-server) and propose those?
<ScottK2> I see you discussing it now ...
<Erickj92> kirkland, would having the apache server going on another computer on the same network effect it on the machine im trying to get things working on?
<kirkland> Erickj92: nope
<Erickj92> darn
<Erickj92> is there any other possible thing?
<Erickj92> for the record, i cant ping that machine
<kirkland> Erickj92: well that would be a problem
<faulkes-> if you can't ping the machine, then a more fundamental problem exists than apache
<faulkes-> check that the interface is up and configured
<Kamping_Kaiser> unless you drop icmp
<Erickj92> faulkes-, how?
<kirkland> Erickj92: can that machine ping out?
<Erickj92> ping out?
<Erickj92> oh
<faulkes-> well, default install shouldn't affect it (if this is a default install)
<Erickj92> yeah
<Erickj92> it can ping out
<faulkes-> then it has connectivity
<faulkes-> so I would say it looks like an iptables issue
<Erickj92> i just bought this new static IP from my ISP
<Erickj92> is there somthing in my router settings that could be doing it?
<faulkes-> if you are dealing solely with the local lan, probably not, but that isn't a guarantee; every network is different
<Erickj92> im so confused
<faulkes-> but if you can't ping your machine but it can ping out, it sounds like there is something firewalling it (iptables)
<Erickj92> from where/
<Erickj92> ??
<Erickj92> like a software firewall, or hardware
<Erickj92> faulkes-, when i go to www.whatismyip.org, it tells me something different than what the ISP told me
<Erickj92> that i had
<Erickj92> what is the command to stop apache
<Erickj92> ?
<iclebyte> is there a package for BASE?.. i cant seem to find one.
<soren> "BASE"?
<iclebyte> nope.. there is a package called 'acidbase' but it wants to use postgresql and i'm trying to use it in conjunction with the snort-mysql package
<faulkes-> hrmmm, I'm going to have to go over todays meeting log
 * faulkes- was running back and forth
<faulkes-> laptop install, server install, meeting
<faulkes-> not enough eyes and hands
<soren> ScottK2: dovecot.conf is a conffile? Sure?
<ScottK2> soren: Everything in /etc is a conffile since debhelper 3 or 4.
<soren> Er.. No.
<ScottK2> No?
<soren> Anything shipped in /etc is a conffile.
<soren> "shipped" being the keyword.
<ScottK2> Ah.
<soren> dovecot.conf is built during postinst from a template, iirc.
<ScottK2> I'd say that's parsing things pretty finely.
 * ScottK2 goes and reads the policy
<soren> No.
<soren> It's really not.
<soren> It's done that way to avoid it being a conffile.
<soren> Policy forbids you from doing automated editing of conffiles.
<soren> ..so if you want to edit a file in /etc e.g. during postinst, you need to make sure it's not a conffile.
<soren> "conffile" is not just a clever name for files shipped in /etc.
<soren> It's a status that has very specific meaning to dpkg.
<soren> smb.conf is not a conffile either.
<mathiaz> soren: yes - however dovecot.conf is managed via ucf.
<mathiaz> soren: would this matter ?
<soren> To what? I haven't followed the discussion. I just saw ScottK's comment in a bug claiming dovecot.conf was a conffile and it's not.
<soren> The short version is:
<mathiaz> soren: whether a maintainer script can modify a file managed by ucf, but not a conffile
<soren> You can choose to mark your configuration files as conffiles, never edit them programmatically, and let dpkg handle upgrades and all that jazz..
<soren> or..
<soren> handle upgrades gracefully yourself.
<mathiaz> soren: let me rephrase
<soren> Gracefully means: Don't clobber users' changes, don't ask the user to confirm changes that *you* made from a script, etc.
<mathiaz> soren: whether a maintainer script *from another package* can modify a file managed by ucf, but not a conffile
<soren> mathiaz: It's not strictly forbidden, afair, but frowned upon.
<soren> If you need that sort of thing, the package that "owns" the configuration file should provide a script or something to do the relevant changes.
<mathiaz> soren: well - in the case of dovecot, the proposal is to enable sasl by default
<soren> http://www.debian.org/doc/debian-policy/ch-files.html#s10.7.4 for reference.
<soren> mathiaz: Which package will need to edit which package's configuration?
<mathiaz> soren: looking at https://bugs.launchpad.net/ubuntu/+source/dovecot/+bug/164837
<ubotu> Launchpad bug 164837 in tasksel "Dovecot SASL for postfix" [Undecided,Invalid]
<mathiaz> soren: and the last patch from ivoks :http://launchpadlibrarian.net/12606104/tasksel.diff
<mathiaz> soren: there are two things that need to be done: configure postfix to use sasl (via postconf) and modify dovecot to run a sasl server
<mathiaz> soren: the latter requires a modification of dovecot configuration file
<soren> That, in itself, is not a problem.
<mathiaz> soren: the former calls postconf to set up postfix correctly, which is allowed in the policy
<mathiaz> soren: right - but it's frowned upon.
<soren> To maintain sanity, the code to actually enable sasl in dovecot should be in the dovecot package.
<soren> ...and the tasksel postinst can call that (or whatever).
<mathiaz> soren: well it seems that we could actually ship dovecot with sasl enabled by default
<mathiaz> soren: which would solve our problems
<soren> mathiaz: Well, no, it's fine to edit another package's configuration. You just need to have the mechanics straight.
<soren> mathiaz: That would be lovely.
<soren> I've never liked the idea of the tasksel postinst.
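The "ship dovecot with sasl enabled" outcome maps onto well-known settings on both sides — a sketch, hedged since the exact stanza names depend on the dovecot and postfix versions shipped:

```
# dovecot.conf (dovecot 1.x syntax): expose an auth socket inside
# postfix's chroot for SMTP AUTH
auth default {
  socket listen {
    client {
      path = /var/spool/postfix/private/auth
      mode = 0660
      user = postfix
      group = postfix
    }
  }
}

# postfix main.cf (set via "postconf -e", as the tasksel patch does):
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
```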
#ubuntu-server 2008-03-13
<soren> The packages should figure it out somehow. I have an idea for this for intrepid.
<_CitizenKane_> Hello everyone, I was wondering if there is a good tool to analyze things such as httpd access and error logs, php error log, etc. ?
<_CitizenKane_> Ideally, it would also be web based
<zul> ivoks: when you are done can you send me your debdiff for bacula
<ivoks> zul: sure
<kgoetz> right... sudo crashing is probably a bad thing
<sommer> kgoetz: did you do a hardy apt-get upgrade?
<kgoetz> sommer: yeah (i do expect it to be a known issue)
<kgoetz> few other things didnt work in the dist-upgrade itself, but i expect thats unrelated
<sommer> I totally just ran into the same issue... :-)
<sommer> I thought I'd be clever and reboot, now the darn thing won't come back :\
<kgoetz> :|
<kgoetz> "interesting" *mentally notes he should be using apt-bugs*
<kgoetz> i notice malloc's happening during the dist-upgrade
<sommer> I didn't do a dist-upgrade... for that you should use do-release-upgrade
<ivoks> sommer: remote server? :)
<kgoetz> i dist-upgraded hardy -> hardy.
<kgoetz> grr. do-release-upgrade, no man page
<kgoetz> i cant open new terminals on the system in screen. interesting
<kgoetz> !pastebin
<ubotu> pastebin is a service to post multiple-lined texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu-nl.org (make sure you give us the URL for your paste - see also the channel topic)
<kgoetz> FYI: the mallocs/failed update: http://paste.ubuntu-nl.org/59440/
<kgoetz> and an example explosion: http://paste.ubuntu-nl.org/59441/
<Jeeves_> root@unknown # zpool status  | grep ONLINE | grep -v raidz | grep -v state | grep -v pool | wc -l 46
<sommer> kgoetz: I received the same sudo error
<kgoetz> sommer: i ran sudo 3 times, it errored slightly differently each time (probably just different memory addresses).
<ivoks> wow, this looks bad
<ivoks> don't reboot that machine
<sommer> ivoks: you could argue that it's not good :-)
<sommer> I'm getting an invalid uuid error... probably because the apt-get upgrade process didn't finish
<ivoks> does 'update-grub' work?
 * kgoetz doesnt have a root term open either :|
<sommer> it's in an initramfs
<sommer> I'll try recovery mode
<ScottK2> I just had a build die on the buildd's on some kind of UUID error.  I wonder if it's related.
<ivoks> ScottK2: dovecot.conf is a conffile?
<ivoks> ScottK2: check out /var/lib/dpkg/info/dovecot-common.conffiles
<kgoetz> sommer: has it come back yet?
<ScottK2> ivoks: soren corrected me a bit on that.
<ScottK2> ivoks: Since it's manufactured and not shipped, it's not.  I missed that subtlety.
<sommer> kgoetz: nope... not finding partitions by uuid, even in recovery
<ivoks> right
<ScottK2> ivoks: slangasek still thought it was poor form if actually legal.
<sommer> kgoetz: trying boot disk (sigh)
<kgoetz> sommer: gl
<ivoks> i agree it's poor, but it is legal
 * kgoetz cant check the server, pschulz01 has unplugged it
<ivoks> and it's the best way to do it; forcing custom changes on everybody sounds... well, arrogant
<ivoks> doing it through ubuntu's mail-server task sounds like a product
<ScottK2> ivoks: Won't current users get the diff thrown at them?
<ScottK2> ivoks: Did you read slangasek and mathiaz's conversation on #ubuntu-devel?
<ivoks> ScottK2: no, when was it?
 * ScottK2 looks
<ScottK2> ivoks: It starts with me asking slangasek about it at 22:31 UTC on Wed.
<ivoks> ok, i'll read it
<kgoetz> sommer: pschulz01: says he hasnt unplugged it, so it must have disconnected for some other reason :| (i'm getting connection refused).
<kgoetz> we're turning it off and waiting for news
<ivoks> meh... metapackage was my first proposal :/
<sommer> kgoetz: weird stuff...
<sommer> I've mounted my root partition using a recovery cd, but I don't have a /sbin
<pschulz01> sommer: This sound scary
<sommer> doh... nm it's target/sbin
<kgoetz> sommer: uh... say what? thats ... bizarre?
<pschulz01> sounds
<sommer> kgoetz: did you have ebox installed?
<sommer> I think mine blew up right after upgrading ebox
<kgoetz> sommer: i didnt install it, so unless it was a dependency/automagic no
<sommer> kgoetz: ah... probably unrelated then
<sommer> sigh... the emails have started
<sommer> the libc error is only hardy right?
<kgoetz> heh. get ready for a 5 day storm
<kgoetz> mine was on hardy.
 * sommer updating an edgy test machine
<ajmitch> heh, I see what you mean - that's an unhappy person on devel-discuss
 * kgoetz senses another 'owch' level of breakage hitting
<ajmitch> "it's hardy, you get what you pay for"
<kgoetz> yep
<pschulz01> ajmitch: "If you break it, you get to keep both bits"
 * ajmitch should look on the forums & see how many people have put out death threats already
<sommer> edgy updated fine, trying feisty... fyi
<sommer> feisty fine as well
<sommer> probably only hardy was affected
<ivoks> zul: debdiff is in your inbox
<owh> Can someone please remind me how I get the full path to an executable in a process list?
<sommer> ps -ef or ps aux
<owh> sommer: Yeah, but there are processes in my hardy process list, like say: imap-login and [kthreadd], that have no path.
<owh> I'm sure I'm missing something :)
<owh> In case you're wondering, I'm trying to map running processes to installed packages so I can find which of my init.d scripts belong to those packages, so I can add them to the list to figure out if they need a status option.
<owh> I'd rather not manually map 68 processes, but I may have to.
<sommer> ah... I'd think there's a ps option somewhere, if not there should be some info in /proc
<owh> Lemmie see what drops out the bottom if I omit the ones that obviously do have a path.
<owh> That halves it, that's a lot better :)
<owh> Out of ignorance: Are the processes inside square brackets executables, or internal to the kernel or something else - rather, might they be associated with packages or not?
<owh> Examples: [aio/0] [ata/0] [ata_aux] [kacpid] [kacpi_notify] [kblockd/0] [khelper] [kjournald] [kpsmoused] [kseriod]
<sommer> I believe those are kernel processes
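Editor's note: sommer is right, and the question has a quick empirical answer. Kernel threads (the bracketed `[kthreadd]`-style entries) have no backing executable on disk, so `/proc/PID/exe` resolves to nothing for them, which makes them easy to filter out. A minimal sketch (entries you cannot read, such as other users' processes when not root, also drop out):

```shell
# Print the full executable path for every process that has one; kernel
# threads have no backing file, so readlink fails for them and they
# silently drop out of the list.
for pid in /proc/[0-9]*; do
    exe=$(readlink "$pid/exe" 2>/dev/null) || continue
    echo "${pid##*/} $exe"
done
```

The same trick works per-process: `readlink /proc/$PID/exe` gives the path even when `ps` shows only a bare name like `imap-login`.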
<owh> So for this exercise I can probably ignore them?
<sommer> I would think so, but you may want to get a second opinion from someone more knowledgeable about packaging :-)
<owh> Chicken :)
<owh> I'll start without and work my way down.
<sommer> hheheh... I'd just hate for you to waste a bunch of time on my say so
<owh> It's ok, I'll find a way to 'help' you in return :)
<owh> Thanks for the disclaimer tho.
<sommer> np, I do what I can... I do what I can
<sommer> anyone know why on hardy I need to use sudo to create self signed cert?  without sudo I get: unable to write 'random state'. from openssl
<sommer> aside from there being a permissions issue?
<owh> Crap, well that isn't going to work that well. Can someone please check my reasoning?
<owh> I'm trying to map running processes to packages to init.d scripts to find if they implement status.
<owh> Example, running process, apache2
<owh> Obviously there is an init.d script.
<owh> The apache2 executable comes from apache2-mpm-prefork
<owh> The init.d script comes from apache2.2-common
<owh> So, there isn't a map between the apache2 process and the init.d script that controls it.
<owh> Of course, I might just have chosen a really bad random example, but I'm thinking not.
<owh> Anyone disagree with any of this?
<sommer> owh: not exactly sure what you mean by "map"?
<sommer> but your statements seem logical to me
<owh> Find a relationship between, ownership.
<owh> As in, this process comes from this package which also has this init.d script.
<sommer> that's what I thought, I think it will depend package by package
<owh> Hence my utterance earlier :)
<sommer> more so for applications that are split over more than one package... like apache2
<sommer> so ya, seems like you're on the right track to me :-)
<owh> I suspect that it will be true for mysql, postgres, php, ldap, etc :(
<owh> I didn't really want to get into rdeps :)
<owh> Perhaps it won't be all bad, perhaps some/many of the processes come from the same package. It looks like I might have to do this manually.
<owh> That's not really something that we can run across all of the distro though at any time.
<sommer> there's always a silver lining... heh
<owh> Is there any chance that a process that is running that comes from a package, in our case, /usr/sbin/apache2 comes from apache2-mpm-prefork, doesn't depend on a package that provides the init.d, or is that a big assumption that does not hold true?
<sommer> I would hazard that if there is it's a bug
<sommer> should be safe to assume
<owh> So, I should be able to map a process to its package, then find the depends that have an init.d and make the link, at least identify the init.d (or more if there are others)
<owh> I mean for most it might well be the same package, but for apache it obviously isn't.
<owh> Apache continues to be fun. It depends on apache2.2-common which provides /etc/init.d/apache2, but it also depends on libc6 which provides /etc/init.d/glibc.sh
<owh> ROTFL
<owh> Hmm, that didn't work either.
<owh> I'll leave you all alone for a bit :|
<sommer> owh: go ahead and update... what's the worst that could happen? :-)
 * sommer past bed time... sigh
<owh> Update?
<sommer> hardy libc
<owh> Nah, I'm not game :)
<sommer> heh... don't blame ya
 * owh saw the message to the list and the smack down :)
 * owh is writing lovely code: cut -b76- hardy.process.list  | grep ^/ | cut -d' ' -f1 | sort | uniq | while read process ; do dpkg -S $process ; done | grep -v 'diversion by' | cut -d: -f1 | sort | uniq | while read
<owh> :)
<sommer> owh: party!
<owh> It's a Piiiiiiipppppppeeeee
<owh> Whoot, I think I have some lift off: cut -b76- hardy.process.list  | grep ^/ | cut -d' ' -f1 | sort | uniq | while read process ; do dpkg -S $process ; done | grep -v 'diversion by' | cut -d: -f1 | sort | uniq | while read package ; do init=`dpkg -L $package|grep /etc/init.d` ; if test "$init" ; then echo $process -\> $package -\> $init ; fi; done
<owh> I suppose it should be prefaced with ps -af |
<owh> and the hardy.process.list removed
<owh> Making it:
<kgoetz> you should probably quite the grep
<kgoetz> *quote
<owh> ps -Af | cut -b76- | grep ^/ | cut -d' ' -f1 | sort | uniq | while read process ; do dpkg -S $process ; done | grep -v 'diversion by' | cut -d: -f1 | sort | uniq | while read package ; do init=`dpkg -L $package|grep /etc/init.d` ; if test "$init" ; then echo $process -\> $package -\> $init ; fi; done
<owh> :(
<owh> That's wrong
<kgoetz> whats it meant to do?
<owh> Hold on, I broke some logic.
<owh> kgoetz: It's in the process of giving me the init.d scripts that belong to running processes :)
<owh> I'm trying to make it too pretty :)
<kgoetz> owh: i see... looking in /etc/init.d not complex enough? :P
<owh> Well, there's 68 in there and we only really want the ones that have a daemon running.
<owh> ps -ealf | cut -b76- | grep ^/ | cut -d' ' -f1 | sort | uniq | while read process ; do dpkg -S $process ; done | grep -v 'diversion by' | sort | uniq | sed 's/://' | while read package process ; do init=`dpkg -L $package|grep /etc/init.d` ; if test "$init" ; then echo $process -\> $package -\> $init ; fi; done
<owh> That's better.
<owh> I love machine readable output, especially the kind that isn't.
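Editor's note: owh's final one-liner can be unrolled into something easier to audit. This is a hedged restatement rather than his exact command: the fragile `cut -b76-` field offset is replaced with `ps -eo args=`, and the package name is taken from everything before the colon in `dpkg -S` output.

```shell
# Map running daemons to the package that ships them, then to any
# /etc/init.d scripts that package provides (a sketch of owh's pipeline).
ps -eo args= | awk '$1 ~ /^\// {print $1}' | sort -u |
while read -r exe; do
    dpkg -S "$exe" 2>/dev/null
done | grep -v 'diversion by' | sed 's/:.*//' | sort -u |
while read -r package; do
    init=$(dpkg -L "$package" 2>/dev/null | grep '^/etc/init\.d/')
    if [ -n "$init" ]; then
        printf '%s -> %s\n' "$package" "$init"
    fi
done
```

As the channel discussion shows, this still misses split packages (apache2's binary and init script live in different packages), so the output is a starting point, not a complete map.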
<owh> If you're wondering: apt-cache depends samba
<owh> ScottK2 Well said on udd
<ScottK2> Thanks
<kraut> moin
<henkjan> hmm, today no dist-upgrading for me
<ivoks> :)
<ivoks> i just recovered from it
<mok0> This morning's libc6 upgrade is corrupt!
<soren> It is.
<mok0> My friend just smashed his hardy system
<soren> An update is in the works.
<soren> We need to write up some instructions for recovering, though.
<owh> Just as a matter of interest, what's the process of notification for such instructions?
<owh> I mean, if I'm a sysadmin, buried in the bowels of a server that is borked because something like this happened, how would I go about finding a solution?
<owh> Is there an RSS feed for stuff like this for example?
<owh> Or a mailing list?
<henkjan> http://ubuntuforums.org/showthread.php?t=722886 has some instructions for recovering
<owh> Another way to think of this might be to say: "Imagine this big organisation Ubuntu that has server farms scattered around the planet. (And workstations for that matter.) The people in this room are the back-line support for these devices. How would we notify our remote server teams (people whom we call system administrators) of outages or faults."
<AnRkey> are there any nfs heros here?
<AnRkey> i can't mount my nfs share with write permissions for love nor pizza
<_ruben> AnRkey: sounds like a problem i had in the past .. cant remember if and if so, how, i fixed it tho :(
<AnRkey> _ruben, common! i have to know :D
<AnRkey> it's driving me nutes
<AnRkey> nuts
<AnRkey> _ruben, how did you mount the share in your fstab?
<mok0> soren: on a live system: for f in ` dpkg -L`; do cp $f /media/blahblah; done
<mok0> s/-L/-L libc6/
<soren> Or just dpkg --root /media/blahblah -i /tmp/dpkg*.deb
<soren> Er..
<soren> Or just dpkg --root /media/blahblah -i /tmp/libc6*.deb
<mok0> soren: Ah, another dpkg switch I didn't know about :-)
<soren> :)
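Editor's note: soren's `--root` trick is worth spelling out, since it is the clean version of mok0's copy loop. The sketch below targets a scratch tree so it can be tried safely; on a real rescue CD, ROOT would be the mounted broken system (e.g. /media/blahblah) and the .deb a known-good libc6 package — both are assumptions here.

```shell
# dpkg --root installs into an alternate filesystem tree instead of /.
# Build a throwaway tree with a minimal dpkg database to demonstrate.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/var/lib/dpkg/info" "$ROOT/var/lib/dpkg/updates"
touch "$ROOT/var/lib/dpkg/status"
if command -v dpkg >/dev/null 2>&1; then
    # dpkg --root="$ROOT" -i /tmp/libc6_*.deb   # the actual recovery step
    dpkg --root="$ROOT" --audit                 # sanity-check the scratch db
fi
```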
<ProbeDot> i need to connect to it via internet, i have 50 xp stations connected to a 2003 server using roaming profiles (desktop, my documents, registry, etc are stored on the server), the internet gateway is the centos server, so i have a local network with local ip's and the internet network with the internet ip,
<ProbeDot>  the problem is that i need a laptop using an internet connection to connect to the local area network via internet, i need all windows networking features available, somehow, that laptop must end up connected to the lan with a lan ip, i need to connect a laptop in austria to a local area network in france
<ProbeDot> so that the laptop accesses all network resources of the lan in france, i want the laptop to act as a workstation in the lan, both laptop and lan have internet access, lan has one public ip, laptop same
<_ruben> AnRkey: wouldnt know .. my recent attempts involved manual mounting using `mount host:/exported/path /mounted/path` .. since i needed to move stuff between two servers i just copied from one to the other .. and deleted on the first one afterwards :p
<AnRkey> this sounds crazy, but i need it running so i am using samba for now
<AnRkey> it works and well.... it works :D
<AnRkey> i will mess around with nfs again later
<AnRkey> _ruben, thanks for the chat :D
<AnRkey> _ruben, where are you? my sister in law wants to know
<_ruben> AnRkey: at work .. netherlands .. ;)
<sureshot> hey anyone know where extensions (modules) for php are located?
<_ruben> hrm .. wonder what would be the appropriate way to compile/install/etc kernel modules that are available as <whatever>-source package only .. iscsitarget-source to be specific
<_ruben> ah .. module-assistant
<_ruben> bah .. iscsitarget-module doesnt wanna compile .. then again, im trying on desktop edition, not server (just as a test)
<sommer> morning
<_ruben> mornin sommer
 * _ruben slaps Jeeves_ ... nl archive is b0rked :p
<Kamping_Kaiser> hehe
<_ruben> responds to ping and telnet on port 80 .. but doesnt return anything
<henkjan> _ruben: we are investigating
<_ruben> ah ok :)
<henkjan> _ruben: nl.arhive is working again
<zul> hey sommer
<mok0> is the libc6 thing fixed in the archives?
<_ruben> henkjan: lets test from here
<_ruben> henkjan: nice .. what was the prob?
<henkjan> _ruben: no idea yet. Jeeves_ went down to the colo
<henkjan> server is rebooted
<_ruben> ah
<_ruben> nasty
<henkjan> Mar 13 12:27:00 ftp kernel: [36632.422033] Out of socket memory
<henkjan> Mar 13 12:27:03 ftp kernel: [36635.853495] TCP: too many of orphaned sockets
<jdstrand> hi faulkes-!
<jdstrand> I was asked in the #ubuntu-server meeting yesterday to coordinate with you on requesting that people test the new mysql updates in -proposed
<jdstrand> faulkes-: this is mainly for dapper - feisty
<jdstrand> faulkes-: see bug #201009 for details
<ubotu> Launchpad bug 201009 in mysql-dfsg-5.0 "[mysql-dfsg-5.0] fix for several open vulnerabilities in -proposed" [High,Fix committed] https://launchpad.net/bugs/201009
<jdstrand> faulkes-: is there an appropriate place in the forums that this can be arranged?
<jdstrand> s/arranged/requested/
<_ruben> henkjan: scary
<Jeeves_> _ruben: Sorry about that
<Jeeves_> It seems there were NFS issues
<Kamping_Kaiser> Jeeves_, bugger.
<Kamping_Kaiser> Jeeves_, i was going to ask how you're going... buried in 'oh noes' atm probably
<Jeeves_> Kamping_Kaiser: Yeah, I was rsyncing the iso's to the thumper
<Jeeves_> But maybe I forgot a lockd  ...
<Kamping_Kaiser> hehe.
<Kamping_Kaiser> i almost did that badly today. set a server to an in use IP (read: conflict) then dist-upgraded my hardy and copped the glibc bug, then couldnt change the ip anymore
<_ruben> Jeeves_: shit happens (to the best of us)
<Jeeves_> Jups
<_ruben> sweet .. my initial iscsi test succeeded .. win2k3r2 virtual server as initiator and ubuntu as target
<faulkes-> on what hardware?
<_ruben> faulkes-: dell dimension 5150 .. as in: one of the workstations under my desk ;) .. more a proof of concept than an actual (read: useful) test ;)
<_ruben> the target that is
<_ruben> the next iscsi experiment will be on a machine with 8 sata disks and a raidcore/broadcom raid controller (fakeraid from what ive read, so software raid it'll be)
<_ruben> tho that hardware isnt avail yet
<faulkes-> ah
<faulkes-> just looking at iscsi stuff right now, have a promise iscsi box nobody here has ever touched I'm going to hook up to dell 1955 blades
<faulkes-> should be interesting to play with
<_ruben> 1955s .. we're looking into those as well .. for longer term plans as vmware esx hosts mainly
<_ruben> we'll be getting some dell san stuff (borrowed from dell) to play with in a month or so .. need to finish off some other projects first
<_ruben> really looking forward to dive into the virtualization and san stuff
<zul> mathiaz: I uploaded samba-3.0.28a this morning
<mathiaz> zul: awesome
<jdstrand> mathiaz, soren, nijaba: just used ubuntu-vm-builder for a gutsy guest
<soren> amd64 or i386?
<jdstrand> and got grub 'Error 15' on boot
<jdstrand> amd64
<jdstrand> would you like to see the invocation?
<soren> You need to pass --components main,restricted for the kernel install to function.
<jdstrand> sudo ubuntu-vm-builder -a amd64 -d ./gutsy_server64 --domain strandboge.com --hostname gutsy-server64 --mirror http://192.168.122.1/ubuntu --addpkg openssh-server -s gutsy --vm kvm --user james --part ./partfile --kernel-flavour server --components main,restricted,universe,multiverse --ip=192.168.122.5 --libvirt qemu:///system
<soren> Oh.
<jdstrand> I did
<soren> Is your mirror up-to-date?
<soren> I mean *really* up-to-date?
<faulkes-> hrmm, that reminds me, I need to turn VT on in the bios's
<soren> Oh, it's gutsy.
<soren> Never mind.
<jdstrand> as of 07:00 UTC
<jdstrand> the host is hardy
<jdstrand> the guest is gutsy
<soren> --ip=192.168.122.5 works? Wow. NEver tried that :)
<jdstrand> soren: I can't speak to that
<soren> I've always separated them by spaces.
<nijaba> soren: I put that part in and I did test it ;)
<soren> With '='? Ok, cool.
<nijaba> soren: ah, no, always using space
<soren> jdstrand: I'd much rather see the output of the build process :)
<jdstrand> let me try without the '='-- I had some strange stuff that I was going to report separately
<jdstrand> soren: interesting--
<jdstrand> Searching for GRUB installation directory ... found: /boot/grub
<jdstrand> findfs: Unable to resolve 'UUID=3812df5a-550f-48e9-8046-12d584257ec2'
<jdstrand> Cannot determine root device.  Assuming /dev/hda1
<jdstrand> This error is probably caused by an invalid /etc/fstab
<jdstrand> but I'll paste the whole thing momentarily
<soren> Don't worry about that one.
<jdstrand> soren: it's booting
<jdstrand> guess it *really* didn't like the '='
<jdstrand> soren: had this http://paste.ubuntu-nl.org/59478/ at the end, which was disconcerting
<jdstrand> soren: I am sure the openssh part was due to /dev/pts
<jdstrand> well, *sure* is strong
<jdstrand> I bet it is because of it "Can not write log, openpty() failed (/dev/pts not mounted?)"
<mathiaz> jdstrand: I've seen that error too - it's because openssh cannot be started
<jdstrand> mathiaz: yes
<jdstrand> mathiaz: I was just saying it may not have been able to start bec of that
<jdstrand> the last two lines made me think something went wrong too
<mathiaz> jdstrand: oh - I think it was not able to start because there is already an ssh daemon running on the host
<jdstrand> just fyi
<jdstrand> mathiaz: oh-- that makes sense
<jdstrand> mathiaz: I was thinking that it was virtualized already, but of course it wasn't yet
<mathiaz> jdstrand: I briefly mentioned it to soren in a meeting - apparently there is a trick to make it work anyway
<mathiaz> jdstrand: You'd run in the same problem when installing apache2 which couldn't be started as it cannot bind to port 80
<jdstrand> it's more just a usability thing for others, it clearly works well :)
<jdstrand> mathiaz: yes I understand now
<mathiaz> jdstrand: forget about the last comment - it could bind to port 80 as the install is run as root
<jdstrand> mathiaz: not sure about that
<soren> mathiaz: Unless you have a web server running already..
<jdstrand> mathiaz: when I did testing in chroots of server daemons, I could only do one at a time
 * jdstrand assumed mathiaz meant that it would be the same for any service that was already running on the guest
<jdstrand> s/guest/host
<Probedot_> i need to configure apache to allow access to http://url.com/dummy/download , But to block access to http://url.com/dummy on port 80, can you do it?
<Probedot_>  (i do need access to dummy on 443, i have a site where the root folder is a login to a console i need blocked, but a subfolder of that folder is open to clients to access files from. I dont want people to access the console but still have access to the downloads. with virtual hosts, is that how i set up ports? like if i want it to work via 443, do i use *:443
<jdstrand> soren: where are the options for dnsmasq configured? I checked /etc/libvirt/* and /etc/default/libvirt-bin
<jdstrand> soren: I'd like to use a different address range
<jdstrand> soren: well, really just 'smaller'
 * Probedot_ waits
<jdstrand> so I can have static addresses not get in the way
<soren> jdstrand: That's in your libvirt network xml.
<soren> jdstrand: Use virsh to dump/redefine.
<mathiaz> jdstrand: try a net-dumpxml from virsh to see how to set the range
<jdstrand> ah /etc/libvirt/qemu/networks
<jdstrand> got it
 * jdstrand didn't dig deep enough
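Editor's note: the dump/redefine dance soren describes looks roughly like this. The network name "default" and the addresses are assumptions, and the whole thing is guarded so the sketch is a no-op on hosts without libvirt installed.

```shell
# Dump the libvirt network definition, shrink the dnsmasq DHCP range so
# static guest addresses stay clear of it, then redefine the network.
if command -v virsh >/dev/null 2>&1; then
    virsh net-dumpxml default > /tmp/default-net.xml
    # edit the <range start='...' end='.../> element, e.g.:
    sed -i "s/start='192.168.122.2'/start='192.168.122.100'/" /tmp/default-net.xml
    virsh net-destroy default      # stop the running network
    virsh net-undefine default
    virsh net-define /tmp/default-net.xml
    virsh net-start default
fi
```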
<jdstrand> soren: sick of me and virtualization yet? ;)
<jdstrand> (I know you are)
<mathiaz> soren: re bug 194237 - why do you want to install usplash ?
<ubotu> Launchpad bug 194237 in ubuntu-vm-builder "don't install usplash by default" [Undecided,New] https://launchpad.net/bugs/194237
<mathiaz> soren: it's not installed by default on a server
<soren> It's shiny.
<soren> I thought I made that quite clear :)
<soren> jdstrand: Not at all!
<_ruben> hehe
<mathiaz> soren: well... I'd expect a better argument than it's shiny.
<jdstrand> :)
 * jdstrand has *always* appreciated the shiny argument
<mathiaz> soren: as I mentioned in the bug, there are some artifacts left on the console when using vnc.
<mathiaz> soren: that is the reason why I've filed the bug
<mathiaz> soren: it's annoying to have orange pixels in the middle of the screen. it's not... shiny ;)
<jdstrand> soren: I'd also mention that whatever loads kvm_intel is not using log_action_ for its output. as a result, that line is not in alignment with the other boot messages
<jdstrand> log_action_ is usplash firendly
<jdstrand> friendly
<jdstrand> other log_ functions are not
<Probedot_> i need to configure apache to allow access to http://url.com/dummy/download , But to block access to http://url.com/dummy on port 80, can you do it?
<Probedot_>  (i do need access to dummy on 443, i have a site where the root folder is a login to a console i need blocked, but a subfolder of that folder is open to clients to access files from. I dont want people to access the console but still have access to the downloads. with virtual hosts, is that how i set up ports? like if i want it to work via 443, do i use *:443
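Editor's note: this question went unanswered in-channel. One possible Apache 2.2-style answer, sketched here with made-up ServerName and vhost layout: deny /dummy on the port-80 vhost while a more specific Location re-opens /dummy/download, and leave the *:443 vhost untouched. The fragment is staged in a local file; on a real system it would go under /etc/apache2/sites-available/ followed by a2ensite and an apache2ctl graceful.

```shell
# Stage a vhost fragment: later <Location> sections override earlier
# ones, so /dummy/download stays reachable while /dummy is denied.
cat > dummy-acl.conf <<'EOF'
<VirtualHost *:80>
    ServerName url.com
    <Location /dummy>
        Order allow,deny
        Deny from all
    </Location>
    <Location /dummy/download>
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>
EOF
```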
<mathiaz> !servergui
<ubotu> Sorry, I don't know anything about servergui - try searching on http://ubotu.ubuntu-nl.org/factoids.cgi
<soren> mathiaz: Well, that's the bug we should be fixing then.
<Jeeves_> Anyone from canonical here?
<Jeeves_> If so. Can you gently push someone from the mirror-team towards #ubuntu-server? :)
<Jeeves_> I'm not allowed to paste NSFW techpron here, right? :)
<Jeeves_> Nobody here highlighting on NSFW? :)
<nijaba> soren, mathiaz: any reco on how I should be implementing the templates in u-v-b?
<soren> Jeeves_: What do you need mirror gusy for?
<soren> guys
<Jeeves_> soren: Already talking to elmo. Thanks though
<nealmcb> !servergui
<ubotu> Ubuntu server does not install a desktop environment or X11 by default in order to enhance security, efficiency and performance.  !eBox provides a GUI system management option via a web interface.  See https://help.ubuntu.com/community/ServerGUI for more background and options.
<nealmcb> mathiaz: there you go!
<jdstrand> mathiaz: when automating vm creation, what tool are you using?
<mathiaz> jdstrand: custom scripts build around ubuntu-vm-builder :)
<mathiaz> jdstrand: These are just shell scripts that call ubuntu-vm-builder with the right paths
<jdstrand> mathiaz: are you assigning ip addresses or just using dhcp?
<mathiaz> jdstrand: well - I'm hoping to use dhcp in fact
<mathiaz> jdstrand: dnsmasq has this wonderful feature to provide dns names for served ips
<mathiaz> jdstrand: so I plan to integrate this in my environment so that I don't need to know which ip address I need to connect to
<mathiaz> jdstrand: I aim at the following workflow: ubuntu-vm-builder host1; wait for 5 minutes ; ssh host1
<jdstrand> mathiaz: where are the names exposed? (I am thinking of the traditional dynamic dns updates where you still end up having to know the hostname)
<jdstrand> mathiaz: oh I see-- yes, that would rock
<jdstrand> mathiaz: keep me posted :)
<mathiaz> jdstrand: I've already got that setup with vmware-server
<mathiaz> jdstrand: I've modified vmware-server to use dnsmasq instead of dhcpd
<nealmcb> ScottK, soren - good straightforward thoughtful comments on the u-d-d list - thanks
 * soren hugs ScottK2
<AnRkey> i want to import about 140 users into ubuntu
<AnRkey> does anyone have any idea's for an import solution?
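Editor's note: AnRkey's bulk-import question also went unanswered. One stock answer is newusers(8) from the shadow suite, which batch-creates accounts from a passwd-style file (name:password:uid:gid:gecos:home:shell, with uid/gid left empty to auto-assign). The usernames and passwords below are made-up examples, and the actual import step is commented out because it needs root.

```shell
# Build a passwd-style input file for newusers(8); one line per account.
cat > users.txt <<'EOF'
alice:Secret1:::Alice Example:/home/alice:/bin/bash
bob:Secret2:::Bob Example:/home/bob:/bin/bash
EOF
# sudo newusers users.txt    # apply (needs root; hashes the passwords)
```

For ~140 users, the input file can be generated from whatever spreadsheet or directory export the accounts currently live in.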
<nealmcb> what does "break=bottom" do on the kernel boot line, anyway?
<soren> It breaks out of the initramfs at a specific point.
<soren> Most interestingly: after the root device has been mounted.
 * ScottK2 hugs back.
<nealmcb> soren: thanks.  I don't see it in Documentation/kernel-parameters.txt - is it grub-specific or ubuntu-specific or something?
<soren> Neither :)
<soren> It's from Debian.
<soren> See /usr/share/initramsf-tools/init
<soren> Er..
<soren> /usr/share/initramfs-tools/init
<soren> Look for "maybe_break bottom"
<soren> That's where it'll bail out if you pass break=bottom.
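Editor's note: the mechanism soren points at is small enough to paraphrase. This is a simplified sketch of the hook in /usr/share/initramfs-tools/init, not the verbatim script.

```shell
# maybe_break: drop to a shell inside the initramfs when the kernel
# command line carried break=<name> for this checkpoint (simplified
# paraphrase of initramfs-tools' init script).
maybe_break() {
    if [ "${break:-}" = "$1" ]; then
        echo "Spawning shell within the initramfs"
        PS1='(initramfs) ' /bin/sh -i
    fi
}
maybe_break top        # break=top: before any setup has run
# ... modules loaded, root device found and mounted ...
maybe_break bottom     # break=bottom: after the root device is mounted
```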
<sommer> mathiaz: thanks for adding links to the roadmap doc review section... I should have thought about that
 * jdstrand -> lunch
<zul> ivoks: around?
<ivoks> zul: yes
<zul> ivoks: I had a go through it looks ok. I added some stuff to it, changelog is here. http://pastebin.ca/941320
<zul> Im going to ask for a FFE exception and merge the version from debian as well
<ivoks> i noticed one thing though... and i don't have an idea how to resolve it
<ivoks> if someone has bacula-dir-mysql|pgsql before and now updates this new version
<ivoks> postinst will ask for setting up new database
<zul> so if bacula-version is older than blah then exit fi or something like that?
<ivoks> ok, that was obvious :D
<soren> In postinst: if [ "$1" = configure ]; then if [ -z "$2" ]; then echo "Ooh, a fresh install."; else echo "Oh, an upgrade from $2"; fi; fi.
<zul> soren: that works :)
<soren> Or even better:
<ivoks> zul: s/contorl/control :)
<ivoks> in changelog
<zul> yeah havent uploaded it yet :)
<soren> In postinst: if [ "$1" = configure ]; then if dpkg --compare-versions 1.2.3-4ubuntu5 -lt "$2"; then echo "We're upgrading past version 1.2.3-4ubuntu5. Let's do some cool stuff."; fi; fi
<zul> if dpkg --compare-versions "$2" lt 2.2.8-5ubuntu1 \ then do your bits else what soren said fi
<soren> Er.. Yeah, I should have used -gt, probably.
<soren> Er.. "gt", not "-gt".
<zul> soren: great minds think alike
<soren> So what zul said :)
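Editor's note: pulling soren's and zul's fragments together, a hypothetical postinst for the bacula case (version threshold taken from zul's line; the echo bodies stand in for real database work) might read:

```shell
#!/bin/sh
# Hypothetical postinst fragment: "$1" is the dpkg action and "$2" the
# previously configured version (empty on a fresh install), so database
# setup can be skipped when merely upgrading an existing installation.
set -e
if [ "$1" = "configure" ]; then
    if [ -z "$2" ]; then
        echo "Fresh install: create the bacula database."
    elif dpkg --compare-versions "$2" lt 2.2.8-5ubuntu1; then
        echo "Upgrade from $2 (pre-2.2.8-5ubuntu1): run migrations."
    else
        echo "Upgrade from $2: database already current, nothing to do."
    fi
fi
```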
<ivoks> horrible debdiff
<zul> ivoks: yeah it was fun...I just split the files up to make it easier
<ivoks> :)
<Theo__> how do i get libxerces26?
<Theo__> I typed "apt-get install libxerces26" and it did not work
<ivoks> Theo__: try 27 or 28
<Theo__> ok
<Theo__> Thank you
<Theo__> it worked
<jdstrand> oh that's fun
<jdstrand> install dapper with ubuntu-vm-builder and boot it up and it goes into memtest86
<zul> jdstrand: whhaaaa?
<jdstrand> looking at grub boot menu, there is no kernel entry
<mathiaz> jdstrand: you should check the build log and see which version of the kernel was asked to be installed
<jdstrand> mathiaz: I was just now
<mathiaz> jdstrand: I ran into a similar problem when trying to install hardy on amd64 - it tried to install linux-virtual which doesn't exist on amd64
<jdstrand> I was trying to remember if -server is on dapper
<ScottK2> No.  It's not.
<mathiaz> jdstrand: I think so
<ScottK2> OK.  Maybe I misremember, but I think not.
<ScottK2> IIRC when I had dapper servers I just installed from the alternate ISO.
<mathiaz> http://packages.ubuntu.com/dapper/linux-server
<soren> jdstrand: I fixed that, too.
<ScottK2> Fair enough.
<soren> jdstrand: You can do:
<soren> jdstrand: --flavour amd64-server
<soren> jdstrand: if you're still on amd64, that is. If not, it's something else, which I haven't seen before.
<jdstrand> soren: this is for an amd64 vm
<mathiaz> jdstrand: well - linux-server on dapper exists only on i386
<jdstrand> soren: is it only dapper or all that I should redo?
<soren> jdstrand: Right. It's called amd64-server
<jdstrand> ok cool
<soren> jdstrand: dapper and edgy at least.
<ivoks> is mail-server-task good name for binary meta package?
<soren> Not sure about feisty.
<soren> ivoks: No.
<soren> :)
<ivoks> :)
<jdstrand> edgy worked...
<soren> Because it's not a task.
<ivoks> then just mail-server
<mathiaz> jdstrand: linux-server was added to edgy
<ivoks> one could think that's just an MTA
<ivoks> ubuntu-mail-server?
<mathiaz> ivoks: if you're talking about sasl, I'd create a package named postfix-sasl-enable or something like that
<mathiaz> ivoks: I'm still not sure how we should do this.
<ivoks> mathiaz: that name would be misleading cause we include more than just sasl configuration
<mathiaz> ivoks: actually no.
<mathiaz> ivoks: the plan would be to move the dovecot stuff directly in the default dovecot configuration file
<mathiaz> ivoks: which leaves only the postconf calls in the postinst script
<ivoks> mathiaz: even the maildir and sasl setup in dovecot?
<mathiaz> ivoks: we could enable sasl by default in dovecot
<ivoks> mathiaz: the changes we propose are strictly for postfix
<Theo__> Does anyone know how to setup a MTA server?
<ivoks> so, if someone uses exim, dovecot wouldn't even start
<jdstrand> ok, that is much more fun-- it's booting :)
<ScottK2> Theo__: Yes.
<Theo__> I started the server but cannot connect
<mathiaz> ivoks: why ?
<ivoks> dovecot creates socket in /var/spool/postfix
<mathiaz> ivoks: well - that's not so wrong
<mathiaz> ivoks: I still wonder if we should move this to a different path
<mathiaz> ivoks: and then set postfix to use this new path
<ivoks> postfix is jailed
<ivoks> so it must be inside /var/spool/postfix
<ivoks> or... we can bind mount
<lamont> having dovecot allow multiple paths to the file would fix things...
<ivoks> lamont: iirc, it actually allows that
<mathiaz> I was more worried about the effect of having dovecot putting something in /var/spool/postfix
<mathiaz> if that works with lamont - it's all good
 * lamont has no objections to packages delivering things into the chroot to let them work.  well, assuming that they aren't utter crack.
<lamont> and if the admin installs dovecot onto a box with postfix, then he probably means to have it work.
<lamont> so, yeah, +1
<ScottK2> Theo__: What did you install, what Ubuntu release are you using, and more exactly what's the trouble?
<ivoks> no, dovecot doesn't support multiple SASL clients - so, only one socket
<Theo__> I have ubuntu server edition, libxerces26 and libstdc++6
<Theo__> . When I start the server there are no errors. I cannot connect to it through lan from another computer
<ScottK2> Personally I think it's quite reasonable to have the default dovecot configuration be the one that works with postfix.  It's our primary MTA.
<mathiaz> ivoks: well - we could make sure that dovecot is integrated with postfix
<mathiaz> ivoks: I agree with ScottK2 ^^
<ivoks> ScottK2: then dovecot should depend on postfix and the other way around
<mathiaz> ivoks: not needed
<ivoks> cause postfix won't start if it can't connect to dovecot's socket
<mathiaz> ivoks: postfix isn't configured by default to use sasl
<ivoks> i see where you're heading
<ScottK2> ivoks: I think depends postfix|mail-transport-agent is quite reasonable.
<mathiaz> ivoks: that the role of the new package - postfix-sasl-configure
<ivoks> ok
<ScottK2> Dovecot does need an MTA, but the MTA doesn't necessarily need Dovecot.
<mathiaz> ScottK2: dovecot really needs an MTA installed ?
<Theo__> what
<ScottK2> mathiaz: Does it?
<ScottK2> mathiaz: Sorry, speculating again.
<ivoks> it does not
<ScottK2> OK.  Nevermind about that bit then.
<Theo__> MTA=Multi Theft Auto?
<ivoks> ok, so postfix by default doesn't support sasl
<mathiaz> !mta | Theo__
<ubotu> Theo__: A Mail Transfer Agent (MTA) is the server software that sends and queues mail. The default MTA (and !MDA) on Ubuntu is !postfix ("exim" is also officially supported). See also !MailServer and !MUA
<ScottK2> Theo__: Mail Transport Agent
<ivoks> and dovecot by default provides sasl socket?
<mathiaz> ivoks: yes.
<ivoks> one important thing
<ivoks> if we pre-setup dovecot for sasl socket for postfix (/var/spool/postfix)
<mathiaz> ivoks: how would dovecot react if the path to the sasl socket doesn't exist ?
<ivoks> and postfix isn't installed
<ivoks> dovecot will fail to start
<ScottK2> That would be bad.
<ivoks> but...
<mathiaz> ivoks: that means dovecot needs to create /var/spool/postfix or whatever.
<ivoks> mathiaz: or that... right
<ivoks> or... bind directories
<ivoks> dovecot starts socket on /var/run/dovecot/auth
<mathiaz> that's the bit I'm worried about
<ivoks> and postfix init script binds it to /var/spool/postfix/private/dovecot-auth
<ivoks> but that's so... hacky :/
<ivoks> it would be much easier if dovecot would start tcp socket :/
<mathiaz> ivoks: so another option is to look into the dovecot source code and make it exit properly if the path to the socket doesn't exist
<mathiaz> ivoks: if /var/spool/postfix/private/ doesn't exist, don't start the sasl daemon
<ivoks> it's stupid that it doesn't act like that already
<ivoks> cause it only exports auth socket
<ivoks> it's not like it uses it
<ivoks> is it wise to leave an unused socket which can be used for probing usernames?
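For context, the pre-configured socket being debated looks roughly like this in dovecot 1.x syntax, with the matching postfix side. The paths follow the discussion above; treat this as a sketch, not the shipped default:

```
# dovecot.conf (dovecot 1.x) - export an auth socket inside the postfix chroot
auth default {
  socket listen {
    client {
      path = /var/spool/postfix/private/dovecot-auth
      user = postfix
      group = postfix
      mode = 0660
    }
  }
}

# postfix main.cf - point SASL at that socket (path is relative to the chroot)
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/dovecot-auth
smtpd_sasl_auth_enable = yes
```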
<sommer> !servergui
<ubotu> Ubuntu server does not install a desktop environment or X11 by default in order to enhance security, efficiency and performance.  !eBox provides a GUI system management option via a web interface.  See https://help.ubuntu.com/community/ServerGUI for more background and options.
<sommer> heh
<nealmcb> :-)
<ivoks> hm...
 * jdstrand thinks it is just *awesome* that he can use multi-cpu in kvm
<AnRkey> i want to import about 140 users into ubuntu
<AnRkey> does anyone have any ideas for an import solution?
<AnRkey> i want to make the username and passwords the same
<nealmcb> AnRkey what about home directories and stuff - just make new ones, or migrate from somewhere?  uids?
<nealmcb> and what do you have for them now - passwd and shadow files from some other system?
<AnRkey> make new ones
<AnRkey> i was going to make a small script with adduser
 * nealmcb nods
<nealmcb> so you know all their passwords?
<AnRkey> can't figure out how to add the password to the adduser script
<AnRkey> well i want to make the login and password the same
<AnRkey> it's a small internal network
<AnRkey> it's just for a mail server
<nealmcb> ok - so not really an import, exported from somewhere else - you just want to make them up from scratch
<AnRkey> well i have the list from the old server
<AnRkey> the old server was setup in the same way
<AnRkey> login and pass were also the same
<AnRkey> we want to make the migration to the new server seamless
<ivoks> don't use adduser then
<ivoks> use useradd and set up password with chpasswd
<AnRkey> thanks ivoks, looking into that
<nealmcb> ivoks: cool - that is what I was searching for - chpasswd allows AnRkey to use the old passwords from the old shadow file also
<AnRkey> nealmcb, use old shadow file?
 * nealmcb hates the idea of 140 users all knowing that their peers password is the same as the userid
<AnRkey> nealmcb, that's just it, they don't know
<AnRkey> :D
<AnRkey> most of these people are farmers and have no idea what copy and paste is
<nealmcb> the -e and -m options of chpasswd look like they'll take the sort of stuff you can get out of /etc/shadow
<nealmcb> so you can just say that their password hasn't changed
<AnRkey> shweet, thanks nealmcb ivoks
<nealmcb> :-)
<ivoks> np
<AnRkey> looks like i might get some sleep tonight
<AnRkey> :P
<nealmcb> huh - I guess I misread the chpasswd man page on -m:  "Use MD5 encryption instead of DES when the supplied passwords are not encrypted"  - the "not" is surprising.   I guess it knows md5 vs des from the way it is encrypted?
<AnRkey> yeah that does not look like the correct switch
<AnRkey> i have the old shadow file so it looks like i'll be fine
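The useradd + chpasswd approach ivoks suggests can be sketched as a small loop. The usernames here (alice, bob, carol) are made up; account creation and password setting need root, so only the generation of the `login:password` pairs that chpasswd reads on stdin is shown live, with the password equal to the username as in AnRkey's setup:

```shell
# Sketch of the useradd + chpasswd approach (names are made up).
# The root-only half would be, per user:
#   useradd -m -s /bin/bash "$user"   # create the account
# then pipe the pairs below into `sudo chpasswd` (plain-text passwords),
# or into `sudo chpasswd -e` to reuse the hashes from the old shadow file.
printf '%s\n' alice bob carol |
while read -r user; do
    printf '%s:%s\n' "$user" "$user"   # login:password pair, password == login
done
```

With `chpasswd -e` the second field of each pair is the already-encrypted hash taken from the old server's /etc/shadow, which is what makes the migration seamless for the users.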
<nealmcb> soren, sommer: I added the "break" option to https://help.ubuntu.com/community/BootOptions - the mysteries of initramfs are slowly unveiling themselves :-)
<sommer> nealmcb: that's good stuff, I know like 2.5 boot options :-)
<sommer> anyone have a favorite sql reference or website?
<mathiaz> sommer: I've found the Mysql User Manual good at that usually
<kirkland> sommer: not a website, but I bought this book in 1999, and learned everything I've needed to know about SQL from it.  I still pull it off the shelf occasionally:  http://www.amazon.com/Special-Using-SQL/dp/0789719746/ref=sr_1_3?ie=UTF8&s=books&qid=1205436337&sr=8-3
<sommer> mathiaz: yep, I've linked to that already, looking for more references for the database section
<sommer> kirkland: cool, thanks I'll take a look
<mathiaz> sommer: and you can even find it in ubuntu - a simple apt-get install mysql-doc will make you happy
<sommer> oooh, that's a good point
<kirkland> sommer: also, i'm more of a postgres junkie, so i tend to use http://www.postgresql.org/docs/ for db specific questions
<sommer> kirkland: ah, I'll link that as well
<mathiaz> sommer: and may be mention sqlite also
<kirkland> sommer: you can go straight to the appropriate manual, if you want
<mathiaz> sommer: these are the main databases
<sommer> ya, I was thinking about an sqlite section... I'm a little hesitant to add a bunch of new content so close to SF
<mathiaz> !servergui
<ubotu> Ubuntu server does not install a desktop environment or X11 by default in order to enhance security, efficiency and performance.  !eBox provides a GUI system management option via a web interface.  See https://help.ubuntu.com/community/ServerGUI for more background and options.
<sommer> Matt East was worried about translation time
<sommer> if not for hardy it'll definitely be there for ibex :-)
<jdstrand> faulkes-: ping re mysql testing
<nealmcb> sommer: Is the virt section draft for the server guide somewhere where I can point folks at it?
<sommer> nealmcb: you bet: http://doc.ubuntu.com/ubuntu/serverguide/C/virtualization.html
<sommer> I committed an update last night so that's out of date, until the update process runs
<sommer> not sure when that is exactly though
<nealmcb> sommer: cool - I should have known that - got confused by the versions again....
<sommer> np... mathiaz also added links to the roadmap :-)
<nealmcb> sommer: is the vmware line in http://doc.ubuntu.com/ubuntu/serverguide/C/jeos.html et al. still necessary, or is it running well in kvm also?  Or perhaps the following sections should be renamed to "vmware-xxx", since jeos seems a more generic concept?
<sommer> nealmcb: that's a good question... I like where your head's at!
<sommer> personally I haven't tested jeos with kvm, but from what others have said I believe it works fine
<sommer> nijaba or soren can probably shed more light on that one
<sommer> my plan is to do some testing this evening... so I'll probably know more then :-)
<nijaba> sommer: I'm in a meeting right now.  should be avail in 30 min or so...
<sommer> cool
 * nealmcb is also on a phone call now - off in 40 minutes or so
<jdstrand> mathiaz: I have a theory about bug #105457
<ubotu> Launchpad bug 105457 in mysql-dfsg-5.0 "mysqd_safe high cpu usage" [Low,Triaged] https://launchpad.net/bugs/105457
<jdstrand> I updated the bug accordingly, but don't really have time to look into it now
<jdstrand> let's go with hypothesis instead
<nijaba> sommer, nealmcb: I think we need to revisit the Jeos tutorial to a/include usage of ubuntu-vm-builder (even though it is universe) b/amend it for kvm usage as well
<nealmcb> nijaba: right on!
<nealmcb> and I suppose libvirt builds are a third possibility
<nealmcb> but I haven't tried that stuff
<nijaba> nealmcb: ubuntu-vm-builder now includes a libvirt option...
<nealmcb> nijaba: great!
 * nealmcb is so out-of-it, but hoping to find time to get back up to speed....
<nijaba> I am currently working on adding some templating to the xml file.
<nealmcb> I'm talking with a guy who is having trouble with vmware and hardy - not in partner repo now?  missing headers when he tries it straight from vmware?
<nijaba> nealmcb: vmware-server for hardy should be in the repo sometime around GA
<nealmcb> Good
<nealmcb> at some point, jeos descriptions for xen and perhaps even containerized solutions would be nice also, either on the wiki or in the server guide
<nijaba> nealmcb: as xen / openvz are community supported solutions, I'd vote for wiki
 * nealmcb nods
<nealmcb> nijaba: so is canonical supporting vmware on ubuntu?
 * nealmcb wonders about vmware in the server guide for the same reason
<nijaba> nealmcb: no, we are working with vmware on having ubuntu (server, including Jeos) certified on ESX
<nijaba> nealmcb: in the same principle as another hw vendor in fact
<nealmcb> so should that be on the wiki instead also?
<nijaba> nealmcb: I am not following you..  which "that"?
<nealmcb> vmware
<nealmcb> have the server guide focus on supported open source solutions
<nijaba> nealmcb: vmware(-server) is not part of the official doc, only running Jeos on vmware is
<nijaba> afaik
<nijaba> and it is not a "canonical" reasoning, more a "main" vs "universe" one
<kirkland> mathiaz: are you here?
<kirkland> mathiaz: owh and I have a few questions about the init script lsb work, and how best to document this in LP
<kirkland> mathiaz: here's what I'm thinking....
<kirkland> mathiaz: we create one LP bug for general initscript LSB compliance
<nealmcb> I may be confused on just what is needed to run jeos on vmware - is the necessary stuff in main? universe? partner?
<kirkland> mathiaz: target that bug for Intrepid, really
<kirkland> mathiaz: and file individual bugs against each package with a non-LSB compliant init script
<nijaba> nealmcb: I think the necessary stuff is in main (and you can add open-vm-tools from universe if you want)
<kirkland> mathiaz: ideally, the LSB tests would tell us which are and are not compliant, we still need to try their tests
<nealmcb> so the partner stuff is just extra fancier stuff for building vms?
<kirkland> mathiaz: as owh, and i, and anyone else wants, we chip away at those bugs, most are pretty simple fixes, great little work items for new developers
<nijaba> nealmcb: yep, that's how I would view it
<kirkland> mathiaz: we will document in a wiki page exactly how to fix up an init script
<nealmcb> and it is vmware-server?
<kirkland> mathiaz: it's fairly formulaic, actually
<mathiaz> kirkland:  I like the "great little work for new developers" :)
<kirkland> mathiaz: i don't mean to trivialize it.....
<kirkland> mathiaz: but it shouldn't be that scary
<mathiaz> kirkland: oh no - I really think it's difficult
<mathiaz> kirkland: heu... *not difficult*
<kirkland> mathiaz: why so?
<nijaba> nealmcb: vmware-server is in partner repo, so that's really canonical's play
<kirkland> mathiaz: oh :-)
<kirkland> mathiaz: also, I think we throw FFe out the window entirely--the number of init scripts affected by this is tremendous
<owh> nijaba: What keyboard are you typing on?
 * mathiaz remembers what an azerty keyboard looks like
<nijaba> owh: frenglish
<kirkland> mathiaz: so we (the community) just chips away at them over time
<kirkland> mathiaz: perhaps some make it into Hardy
<kirkland> mathiaz: hopefully more make it into hardy+updates
<kirkland> mathiaz: and ideally we're lsb-compliant by intrepid
<kirkland> mathiaz: (profit)
<kirkland> mathiaz: owh: sound like a plan?
<owh> kirkland: mathiaz: wfm
<mathiaz> kirkland: well - the init scripts changes won't make it in hardy-updates
<kirkland> mathiaz: if so, this is what I'll write up in the roadmap, per meeting action yesterday
<kirkland> mathiaz: no?  security issues only make it in there?
<mathiaz> kirkland: they won't qualify for a StableReleaseUpdate
 * kirkland goes look up StableReleaseUpdate guidelines
<mathiaz> !sru
<ubotu> Stable Release Update information is at http://wiki.ubuntu.com/StableReleaseUpdates
<mathiaz> !sru | kirkland
<ubotu> kirkland: please see above
<mathiaz> kirkland: then rather than filing bugs about lsb-compliant scripts, I'd rather concentrate on "implement status action" for relevant scripts.
<mathiaz> LSB-compliant scripts require more than just a status action
<kirkland> mathiaz: really?  okay.  that's easier
<mathiaz> kirkland: you need the headers, the correct exit code
<owh> mathiaz: Well, the init.d scripts that come with hardy tasksel full install all include the lsb_functions which indicates that the author is at least aspiring to compliance.
<nealmcb> nijaba: I just sent you some mail - perhaps you can help us get to more clarity....
<kirkland> mathiaz: just sourcing /lib/lsb/init-functions and using pidofproc gets the right exit code
<mathiaz> kirkland: right - but there are other places in the init script that may use the wrong exit code  - see apache2 init script
<kirkland> mathiaz: right, agreed.
<mathiaz> owh: correct. But they may not be completely compliant.
<kirkland> mathiaz: so what you're saying is concentrate on implementing the status section
<mathiaz> owh: and some of the changes are actually made by Ubuntu.
<mathiaz> owh: and we still carry the delta with debian.
<mathiaz> kirkland: yes - just the status action.
<kirkland> mathiaz: what if we added a function to /lib/lsb/init-functions
<owh> mathiaz: Yes, and changing the exit codes might break code that relies on the "faulty/non compliant" exit codes.
<nijaba> nealmcb: would you like me to reply here or by email?
<owh> kirkland: That's what I wanted to talk about :)
<mathiaz> kirkland: it should be easier to implement, more targeted and easier to get a FFe for it.
<kirkland> mathiaz: statusof $DAEMON $NAME
<owh> Now we're talking :)
<kirkland> mathiaz: I think most of them should look a lot like:
<kirkland> +status)
<kirkland> +	pidofproc /usr/sbin/cron >/dev/null
<kirkland> +	status=$?
<kirkland> +	if [ $status -eq 0 ]; then
<kirkland> +		log_success_msg "Cron is running."
<kirkland> +	else
<kirkland> +		log_failure_msg "Cron is not running."
<kirkland> +	fi
<kirkland> +	exit $status
<kirkland> +	;;
<kirkland> (Sorry for the code drop, it's small though)
<kirkland> DAEMON=/usr/sbin/cron
<kirkland> NAME=Cron
<kirkland> and status just becomes
<kirkland> a call to statusof
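A self-contained sketch of the `statusof` helper kirkland is describing might look like this. It is an illustration, not the actual patch: `log_success_msg`/`log_failure_msg` are replaced with plain `echo`, the `/lib/lsb/init-functions` source is guarded, and a crude `pidof` fallback stands in where `pidofproc` is unavailable:

```shell
# Hypothetical "statusof" helper, roughly as sketched above.
# On an LSB system, pidofproc comes from /lib/lsb/init-functions.
if [ -r /lib/lsb/init-functions ]; then
    . /lib/lsb/init-functions
fi

statusof () {
    daemon="$1"
    name="$2"
    if command -v pidofproc >/dev/null 2>&1; then
        pidofproc "$daemon" >/dev/null 2>&1
    else
        pidof "$(basename "$daemon")" >/dev/null 2>&1   # crude stand-in
    fi
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "$name is running."
    else
        echo "$name is not running."
    fi
    return "$status"
}

statusof /nonexistent/daemon Example || true   # prints "Example is not running."
```

An init script's status action then reduces to `statusof "$DAEMON" "$NAME"` followed by an exit with its return value.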
<nealmcb> nijaba: by mail is probably better, and I can forward to our colorado list as appropriate
<kirkland> owh: mathiaz ?
<kirkland> mathiaz: that might be FFe-able?
<mathiaz> kirkland: yes - that'd make sense.
 * nealmcb hopes he hasn't confused his buddies
<mathiaz> kirkland: I'd have a quick look at the existing init scripts that have a status action to see if they would fit the pattern.
<nealmcb> nijaba: thanks!!
<owh> kirkland: Yup that's what I intended to get to :)
<kirkland> owh: you da man
<kirkland> owh: we're on the same page, i like that \o/
<owh> kirkland: I can't get credit, I thought of it first, but you wrote it down first :)
<mathiaz> kirkland: that would bring more weight to the FFe
<kirkland> owh: if you want the credit, you hack the patch to /lib/lsb/init-functions :-)
<mathiaz> kirkland: I'd also send an email to ubuntu-devel outlining the reason to do this
<owh> ROTFL - mine!
<mathiaz> kirkland: you'd get more input from all the other developers.
<kirkland> mathiaz: sounds good, probably should be done ASAP, i assume?
 * owh is already downloading source :)
 * kirkland downloads from his local mirror at 1Gbps :-)
<mathiaz> kirkland: yes.
<kirkland> :-P owh
<kirkland> mathiaz: owh: give me a minute, let me draft something and pastebin it
<mathiaz> kirkland: try to come up with a list of packages that you'd change, and which ones would benefit from it
 * owh has a vSat terminal with 1Mbit that can be setup anywhere, so there is no need to be indoors :)
<mathiaz> kirkland: once you've got a list of packages to fix, look at bug 199014 for a way to track the work done on it.
<ubotu> Launchpad bug 199014 in emesene "python-xml removal: please drop/replace (build) dependencies" [Medium,In progress] https://launchpad.net/bugs/199014
<kirkland> mathiaz: cool
<nijaba> nealmcb: reply in our mail.  hope it is clear...
<skwashd> hi, are server related hardy issues dealt with here or #ubuntu+1?
<skwashd> i would prefer not to waste my time explaining it all here only to be told "go next door"
<nealmcb> nijaba: great - many thanks.  so while ubuntu is focussed on kvm, as free software, canonical is also continuing to work with others to improve how other vm technologies work on the platform
<nijaba> nealmcb: exactly
<skwashd> nealmcb: pity kvm is currently useless for real world stuff in hardy
<mathiaz> skwashd: could you be more explicit ?
<nijaba> skwashd: you should talk about this with soren, I do not think he would agree. Me neither.  Now, it is clear that we have some work to do before release, but wait, and you'll see...
<mathiaz> skwashd: I use kvm on a daily basis and it works great.
<skwashd> mathiaz: sure ... bridged networking has been broken for almost 2 weeks
<skwashd> and trying to get any help to confirm the problem is like pulling teeth
<skwashd> this is on hardy
<nijaba> skwashd: make sure the bug is reported (and this one is) and it will be fixed
<nijaba> skwashd: hardy is alpha...
<skwashd> nijaba: is it ... i can't find it on launchpad
<skwashd> nijaba: i know the state of hardy
<skwashd> someone on #kvm has suggested that tap io is going to the console ...  but no suggested fix
<nijaba> skwashd: I think it is, but you should check and report if not
<skwashd> nijaba: i am still trying to track down exactly where the problem is ... and report the bug against the right package
<nijaba> skwashd: ok, thanks for the search then :)
<skwashd> and i have given launchpad a good workout .... haven't seen anything relevant
<nijaba> skwashd: you should maybe go talk about it on #ubuntu-virt.  I know kees and soren have been troubleshooting bridge networking together last week
<skwashd> nijaba: ok ... will do
<skwashd> #ubuntu* chans breed quicker than a couple without a tv
<nijaba> skwashd: I like the metaphor :)
<skwashd> nijaba: i took the religious part out - coc compliance :)
<nealmcb> skwashd: lol - and thanks for helping make kvm better!
<skwashd> nealmcb: np :)
<skwashd> i love the idea of kvm
<nealmcb> nijaba: thanks - I forwarded that on with refs to a few more things you mentioned, etc
<nijaba> nealmcb: I just saw that (along with the typos in my reply).
<nealmcb> nijaba: oh - I didn't notice them.  just my own "expert advice from an expert" inexpertitude....
<nealmcb> :-(
<nijaba> nealmcb: no worry, I should have taken the time to proofread, and I think it is still quite understandable ;)
<kirkland> mathiaz: are you still around?
<kirkland> ScottK2: hi, are you around?
<kirkland> ScottK2: I was looking for some feedback on the init script stuff, and I thought you might have an opinion on a draft of an email i was about to send to ubuntu-devel
<kirkland> all: this is an email I'm drafting to ubuntu-devel regarding the "status" action in init scripts.  I'd like any feedback from we-the-server-guys before I drop it on ubuntu-devel: http://ubuntu.pastebin.com/m6eb6cdbd
 * ScottK2 looks
<ScottK2> kirkland: My suggestion would be to discuss this with release management before you send the mail and get one big FFe for the project.  If they say no, make it a spec for Hardy +1.
<kirkland> ScottK2: okay, good idea.  That's what I was hoping for, one blanket FFe.
<kirkland> ScottK2: only for a list of defined, agreed-upon services for Hardy
<kirkland> ScottK2: and a full spec for Hardy+1
<ScottK2> Right.
<kirkland> ScottK2: very good, thank you for your input
<kirkland> ScottK2: I'll run that list of suggested packages to be affected by this too, if that's cool
<ScottK2> I'd definitely have that list handy when you go talk to slangasek.
<steve176> Hi. How do I get tomcat to start after reboot? sudo /etc/init.d/tomcat start works from the command line and I have S71tomcat symlinks in /etc/rc2.d -> /etc/rc5.d
<mathiaz> kirkland: re your email
<mathiaz> kirkland: I wouldn't mention "a great number of package"
<mathiaz> kirkland: I'd say that the target for hardy are just daemons that are shipped on the ubuntu-server isos.
<kirkland> mathiaz: k
<kirkland> mathiaz: i was trying to avoid defining the list in this email
<kirkland> mathiaz: and do that in the launchpad bug instead
<mathiaz> kirkland: right - good idea.
<kirkland> mathiaz: try to keep this mail focused on the functionality itself
<kirkland> mathiaz: see if there are arguments with that
<mathiaz> kirkland: but mentioning a huge number of potential packages can be scary at this stage of the release.
<mathiaz> kirkland: so the tactic would be to suggest what we try to aim for hardy ;)
<sommer> steve176: sudo update-rc.d tomcat5.5 default
<kirkland> mathiaz: perfect, agreed.
<mathiaz> steve176: if you have a symlink in rc3.d it should start on boot.
<steve176> sommer: usage error
<steve176> sommer: do you mean defaults?
<sommer> steve176: yep, sorry defaults
<steve176> mathiaz: that's what I thought. One thing I thought of - maybe JAVA_HOME / CATALINA_HOME (exported by /etc/profile) aren't available that early during boot
<mathiaz> steve176: you shouldn't need to setup for start at boot. This should be done when the package is installed.
<infinity> mathiaz: rc2.d, even.  Our default runlevel is 2, unless someone mangled something very recently...
<steve176> mathiaz: Not a package - downloaded and expanded the tar
<owh> Tsk Tsk
<infinity> steve176: /etc/profile is most likely not sourced on boot at all.
<infinity> steve176: I'd imagine your init script will want to set its environment.
<steve176> So the only way is to hard-code tomcat home in /etc/init.d/tomcat
<mathiaz> steve176: is there a reason why you're not using tomcat from the ubuntu repository ?
<steve176> mathiaz: I didn't think to check there was one
<infinity> steve176: /etc/profile is only sourced for login shells.
<owh> 5.5
<steve176> infinity: thx, that's bound to be part of the problem
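Since /etc/profile is only read by login shells, the init script has to export the environment itself, as infinity suggests. A minimal sketch, with hypothetical paths that depend on where the JVM and the unpacked tar actually live:

```shell
# Top of /etc/init.d/tomcat (hypothetical paths - adjust to your install)
JAVA_HOME=/usr/lib/jvm/java-6-sun   # assumption: where the JVM is installed
CATALINA_HOME=/opt/tomcat           # assumption: where the tar was unpacked
export JAVA_HOME CATALINA_HOME
```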
<mathiaz> steve176: apt-get install tomcat5.5 will do all the work for you
<kirkland> mathiaz: you can see a diff here: http://ubuntu.pastebin.com/pastebin.php?diff=m6820f91d
<owh> steve176: Specifically version: 5.5.25-5ubuntu1
<kirkland> mathiaz: see if that verbiage is better
<steve176> mathiaz: sometimes it's nice to do it by hand - that way I'll always have control of the version
<mathiaz> kirkland: wfm
<kirkland> mathiaz: okay, so, i want to file the launchpad bug first, so that I can reference it in the email
<kirkland> mathiaz: i was going to file against lsb-base, is that what you recommend?
<kirkland> owh has a debdiff patch
<kirkland> mathiaz: http://ubuntu.pastebin.com/m3f24c004
<kirkland> mathiaz: that's for the common function
<mathiaz> kirkland: I'd send the email first.
<kirkland> mathiaz: okay
<mathiaz> kirkland: to ubuntu-devel@lists.ubuntu.com
<steve176> mathiaz, infinity, sommer: looks like that fixed it. cheers
<mathiaz> kirkland: your message may be moderated because you're not a developer yet.
#ubuntu-server 2008-03-14
<kirkland> mathiaz: okay, sent.
<owh> Just out of interest has anyone looked at being able to mount Vista UDF disks in Hardy - just tried it on my server vm and it barfed.
<nijaba> goodnight
<owh> nijaba: Thanks for the work you did on the surveys - sorry for all the reports :)
<nijaba> owh: hey, we need testers :)
<owh> You tarzan, me tester :)
<kirkland> mathiaz: you're right...  message awaiting moderator approval
 * faulkes- yawns
<owh> faulkes-: Does your fancy stats code handle IRC logs?
<kgoetz> Jeeves_: ping? you about mate?
<lamont> ScottK: debian bug 470852
<ubotu> Debian bug 470852 in postfix "postfix: Wildcard virtual alias maps cause unwanted bounces" [Important,Open] http://bugs.debian.org/470852
 * lamont -> bed
<ScottK2> lamont: IMO anyone using domain catch-alls in 2008 pretty well deserves whatever pain they get.  OTOH, the behavior isn't ideal.
<faulkes-> owh: no, fancy stats code does not handle irc but it could be adapted to do so
<faulkes-> (yes I realize the reply is probably about 6 hours too late
<Jeeves_> kgoetz: I was sleeping. :)
<Jeeves_> kgoetz: What's up?
<owh> Jeeves_: The sun?
<Jeeves_> owh: Yeah, that did come up about an hour ago. :)
<owh> Jeeves_: Over here it's already well and truly on its way down again :)
<Jeeves_> owh: 07:31, the clock says. I'm sitting here with sleepy eyes :)
<owh> Jeeves_: It depends on which clock you're looking at, but given yours I'm thinking Holland :)
<pschulz01> Jeeves_: kgoetz Is on his way home at the moment.
<Jeeves_> owh: Correct
<Jeeves_> pschulz01: Ah ok. Did he want to talk about the T1000's ?
<Jeeves_> (That you know of)
<pschulz01> Jeeves_: Not sure.. we have an older sparc that he is trying to install.. we have passed the T1000 off to another developer who is using Slowaris.
<Jeeves_> pschulz01: They're both pinging here. Still running solaris.
<owh> Jeeves_: Amsterdam, that great city, built on poles ;-)
<Jeeves_> I'm thinking I'll give you guys access to my server, running dhcpd and tftpd, and access to the serial console of the T1000
<Jeeves_> So you can do your thing and try to make it boot
<Jeeves_> owh: :)
<pschulz01> Jeeves_: I think kgoetz would appreciate that :-)
<Jeeves_> pschulz01: :)
<Jeeves_> Does he have pgp?
<pschulz01> Jeeves_: I would like to know that I can reinstall either OS.
<pschulz01> Jeeves_: Yes..
<Jeeves_> What's his email address, I'll send him an email with login data
<owh> Now this is weird. Yesterday or so I came here to ask why a daily cron wouldn't run and for debugging purposes the suggestion was to install mail on this server - local delivery only. Now this cron job is running - there were no other changes. The mail has the log output of the cron job that was previously failing. This makes no sense to me. Any ideas?
<Jeeves_> owh: Did you restart cron? (I had to do that yesterday on a box after placing files in /etc/cron.d/
<pschulz01> Jeeves_: I'm trying to see if it's in launchpad.
<owh> You're not going to tell me that a cron job fails because there is no place for the output to go?
<Jeeves_> pschulz01: Ack
<Jeeves_> owh: Not that I know of
<owh> Jeeves_: Nope, no restart of cron. I'll check the history just to make sure.
<Jeeves_> https://launchpad.net/~kgoetz
<owh> Jeeves_: Oooh, you might be onto something there. invoke-rc.d cron reload was the last thing I did :)
<owh> So, cron doesn't detect new stuff in its tree?
<owh> Hmm, but run-parts --test showed me everything.
<owh> Is that a bug or a *feature*?
<Jeeves_> owh: That would be a bug
<Jeeves_> However, I've never seen it before, except for this week
<owh> I'm not even sure how it could even happen.
<owh> I mean run-parts runs every time cron runs.
<Jeeves_> Me neither
<owh> You know, come to think of it, I wonder if in the deeper past I've seen something similar.
 * owh goes to RTFM for a bit.
<owh> It says: "Like /etc/crontab, the files in the /etc/cron.d directory are monitored for changes."
<owh> That makes no sense, the Debian policy manual says: " If a service reloads its configuration automatically (as in the case of cron, for example), the reload option of the init.d script should behave as if the configuration has been reloaded successfully."
<owh> Indicating that I shouldn't need to reload cron.
<owh> Jeeves_: Do you remember the circumstances in which you had to reload cron?
<Jeeves_> owh: Yes, we copied files in /etc/cron.d/
<owh> And they were in crontab format, rather than scripts?
<pschulz01> Jeeves_: I've emailed kgoetz.. expect an email from him at some stage.
<owh> It seems that .d wants crontab format, while .daily, .weekly and .monthly seem to want scripts.
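The two formats owh is contrasting look like this; the job name and command are made up:

```
# /etc/cron.d/example - crontab syntax, including the extra user field
17 2 * * *  root  /usr/local/bin/nightly-task

# /etc/cron.daily/example - an ordinary executable script, run by run-parts
#!/bin/sh
/usr/local/bin/nightly-task
```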
<Jeeves_> pschulz01: I'm already emailing him :)
<owh> Jeeves_: Which version of OS?
<Jeeves_> gutsy
<owh> Ditto
<owh> Better test hardy :) - I'll run it on my vm.
<owh> Jeeves_: That will be interesting. I've created a demo cron job on both hardy and gutsy without reloading cron. I'll see what happens.
<Jeeves_> pschulz01: He's got email :)
<owh> Jeeves_: Well, my hourly cron job just ran without any problems at all. No reload needed - gutsy workstation. (In case you're wondering, I setup a cron.hourly to work the same as cron.daily, very handy)
<_ruben> hmm .. thought i started work early, but jeeves beat me to it for quite a bit it seems ;)
<Jeeves_> _ruben: Sorry bout that :)
<soren> Er... You never need to reload cron.
<soren> Regular users can have a crontab. It wouldn't be very useful for them if they had to e-mail their admin to get them to reload cron every time.
<_ruben> hehe
<henkjan> _ruben: no Jeeves_ at the office right now. so i guess you were earlier :)
<owh> soren: I realise that this is how it should be, but both Jeeves_ and I have experienced a situation where a job wouldn't run until cron was reloaded.
<soren> owh: I'm not sure how often it reloads or how it determines what to reload..
<soren> owh: But if it's failing to reload at all, that's a *serious* bug.
<owh> Well, I had a daily cron job that hadn't run for 5 weeks. Now it magically does.
<owh> soren: According to suggestions here I installed postfix on the server in question *and* I reloaded cron.
<owh> soren: It's possible that the cron job didn't run because there was no mail, but the job itself does not use any mail at all. It does however generate backup logging output.
<_ruben> henkjan: guess he's mindcontrolling his computer from afar then ;)
<owh> soren: I'm uncomfortable with the notion that a cron job wouldn't run because output is not going anywhere and that would cause it to fail.
<henkjan> _ruben: irssi + screen on his colo server. And working from his computer at home :)
<soren> owh: I can understand that
<owh> soren: The very last thing I did three days ago on that server was reload cron, and lo and behold, now the script is running.
<owh> I've just created daily scripts on three different machines, hardy-server, gutsy-server, gutsy-workstation to see what gives.
<_ruben> henkjan: guess "work" can be a wide subject .. irssi + screen here as well .. actually currently trying to find an irssi script/plugin i used to use ages ago which shows a horizontal line to indicate which part of a channel i've already 'read'
<owh> Heritage computers facing scrap heap: http://www.abc.net.au/news/stories/2008/03/14/2189966.htm
<kraut> moin
<Kamping_Kaiser> Jeeves_, ping? (just got home)
<Kamping_Kaiser> Jeeves_, i have an encrypted email from you - i need to talk to you about it :)
<_ruben> hehe
<Kamping_Kaiser> :P
<Kamping_Kaiser> owh, sad :(
<_ruben> Jeeves_ probably fell asleep again or so ;)
<Kamping_Kaiser> hehe
<henkjan> Jeeves_ is at the office right now
<henkjan> just not at his desk
<owh> Kamping_Kaiser: What's sad?
<Kamping_Kaiser> owh, chucking out history
<henkjan> Kamping_Kaiser: maybe i can help you?
<owh> Kamping_Kaiser: Ah, now I get it. Yes.
<Kamping_Kaiser> henkjan, i needed to ask him about his gpg key, but i might have just been pebkacing
<Kamping_Kaiser> woot. my fault :)
<Kamping_Kaiser> henkjan, thanks anyway
<henkjan> okay, problem fixed :)
<Kamping_Kaiser> :)
<_ruben> grrr.. stupid windows backup server .. its on gbit .. but most of the time im getting 30Mbps to it, and occasionally 300Mbps .. no fun copying 200GB at 30Mbps
<owh> Try 60Gb at 5k
<Kamping_Kaiser> afk a while, having a liedown
<_ruben> owh: rather not ;)
<owh> _ruben: Chicken :)
<Jeeves_> Kamping_Kaiser: I was at the dentist
<_ruben> scaru
<_ruben> scary too
<ivoks> it's stupid that we don't allow a degraded raid array to boot :/
<_ruben> that'd be rather stupid indeed
<ivoks> where should i mount root partition in busybox? to /root?
<Jeeves_> ivoks: Who doesn't allow degraded array's to boot?
<ivoks> ubuntu :)
<nijaba> ivoks: file a bug, I find it sad as well...
<ivoks> https://bugs.edge.launchpad.net/ubuntu/+source/mdadm/+bug/125471
<ubotu> Launchpad bug 125471 in mdadm "Booting from a degraded array could be improved" [Undecided,New]
<ivoks> i'm fighting with degraded / over the phone for 2 hours
<nijaba> ivoks: hmm, an old one.  I'm going to see if we can do something about it in hardy
<ivoks> argh... now i have to go to the city 100km away :/
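For anyone else fighting a degraded root array from the busybox prompt, a recovery sketch along the lines of what ivoks was attempting (device names are assumptions for illustration; `--run` tells mdadm to start the array even with a member missing, and `/root` is where the Ubuntu initramfs expects the root filesystem to be mounted):

```shell
# From the initramfs/busybox shell: force-assemble the degraded array.
mdadm --assemble --run /dev/md0 /dev/sda1
# Mount it where the initramfs looks for root, then let boot continue.
mkdir -p /root
mount /dev/md0 /root
exit
```

These commands only make sense inside the installer/initramfs environment, not on a running system.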
<jdstrand_> soren, nijaba: is there a particular reason why you are disabling acpi in ubuntu-vm-builder when using --libvirt?
<zul> morning
<jdstrand_> hi zul!
<nijaba> jdstrand_: no idea
<zul> hi jdstrand_!
<jdstrand_> nijaba, soren: cause on hardy 386 in a vm, it is needed to use the keyboard
<jdstrand_> I don't know all the combinations of releases/archs need it though
 * jdstrand_ ponders 5 releases, 2 archs, and (at least) 3 kernel flavors
<jdstrand_> and acpi on/off
<jdstrand_> so 60 combinations for ubuntu only
<soren> jdstrand: Oh, that's because I'm an idiot.
<jdstrand> soren -vv
<jdstrand> (when -v just isn't enough;)
<soren> It's quite possible that the libvirt xml I used as a template was for running WinXP.
<soren> jdstrand: The last ubuntu-vm-builder works as expected, right?
<jdstrand> soren: I don't know about the keyboard cause I would always get the grub Error 15
<soren> Still?
<soren> What's the command line you use again?
<jdstrand> soren: I don't think u-v-b ever created the <features><acpi/></features>
<jdstrand> for 0.3:
<jdstrand> sudo ubuntu-vm-builder -d ./hardy_386 --domain strandboge.com --hostname hardy-386 --mirror http://192.168.122.1/ubuntu --addpkg openssh-server -s gutsy --vm kvm --user james --part ./partfile --kernel-flavour 386 --components main,restricted,universe,multiverse --ip 192.168.122.10 --libvirt qemu:///system -a i386
<jdstrand> soren: note I used '-s gutsy' here, cause of the grub error
<jdstrand> soren: I then upgraded
<soren> I'd like to see the cmdline that causes grub error 15.
<jdstrand> soren: do you mean the u-v-b invocation, or the output from grub when running u-v-b?
<soren> The former.
<jdstrand> soren: let me downgrade u-v-b and try again
<soren> jdstrand: Nono, use the latest.
<jdstrand> soren: that isn't where I saw Error 15, but ok (I haven't tried the new one yet)
<_ruben> hmm .. afaik there's a way to do the installation of ubuntu-server over ssh .. cant find how that was done .. google keeps showing me howtos on how to install ssh :p
<soren> _ruben: google for network-console-udeb, IIRC.
<_ruben> soren: lets give that a try
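For the record, the network-console component can also be preloaded from a preseed file rather than picked interactively; a sketch using the documented d-i keys (the password value is a placeholder, not a recommendation):

```shell
# Sketch: preseed fragment that loads the SSH installer component
# so the rest of the install can be driven over ssh.
cat > netconsole-preseed.cfg <<'EOF'
d-i anna/choose_modules string network-console
d-i network-console/password password installerpw
d-i network-console/password-again password installerpw
EOF
grep -c '^d-i' netconsole-preseed.cfg   # prints: 3
```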
<jdstrand> soren: I am running this now: sudo ubuntu-vm-builder kvm hardy -d ./hardy_386b --domain strandboge.com --hostname hardy-386b --mirror http://192.168.122.1/ubuntu --addpkg openssh-server --user james --part ./partfile --kernel-flavour 386 --components main,restricted,universe,multiverse --ip 192.168.122.11 --libvirt qemu:///system -a i386
<jdstrand> soren: and got this error (but it continued on):
<jdstrand> /usr/bin/ubuntu-vm-builder: line 193: validite_arch: command not found
<jdstrand> soren: I'll let you know when I boot into it
<soren> Ah, maybe I need to spell validate properly?
<jdstrand> oh!
<jdstrand> I happen to know the arch is valid ;)
<jdstrand> soren: crud, apt didn't finish upgrading to ppa4
<jdstrand> soren: trying again
<jdstrand> soren: rm: cannot remove directory `/tmp/vm-builder-QZXMYX8296/root/lib/modules/2.6.24-12-386/volatile': Device or resource busy
<jdstrand> soren: *but* no grub error
<jdstrand> \o/
<jdstrand> soren: and no <features><acpi/></features>
<jdstrand> soren: but it won't boot
<jdstrand> ACPI: Unable to load the System Description Tables
<jdstrand> PCI:...
<jdstrand> Loading, please wait...
<jdstrand> and hang
<Kamping_Kaiser> bleh. i felll asleep
<_ruben> hehe
<jdstrand> soren: I added the <acpi/> bits and rebooted and it still won't boot...
<jdstrand> soren: interesting
<jdstrand> soren: well, maybe not
<jdstrand> nm
<jdstrand> soren: well possibly :)
<jdstrand> soren: ok, I disabled splash and quiet and it hangs at 'Uniform CD-ROM driver Revision: 3.20'
<jdstrand> I do have a warning about sda and write cache disabled... doesn't support DPO or FUA
<jdstrand> eh heh, it dropped me to initramfs
<jdstrand> /dev/hda1 does not exist
<jdstrand> soren: ^^
<jdstrand> is this the uuid stuff?
<soren> jdstrand: Sorry, I was on a call with Rick.
<jdstrand> np-- I will be now too :)
<soren> Oh, I thought mathiaz was next :)
<soren> I think I know what at least part of the problem is.
<_ruben> hmm .. only gettin 1MB/s from the nl mirror .. sucks downloading isos at such low speeds :p
<jdstrand> *shrug* asked for in 5, I said 'ok'
<soren> jdstrand: The problem seems to be that the virtual kernel is not currently installable.
<henkjan> _ruben: that must be your connection
<_ruben> henkjan: very well possible .. was getting only 500KB/s to utwente (first dutch mirror in the list)
<_ruben> even tho im on 100Mbps fiber
<henkjan> _ruben: bug your provider
 * _ruben fires up a speedtest.net
<_ruben> hmm .. throughput seems darn crappy at the moment .. strange
<_ruben> heh .. only 1Mbps to Madrid :p
<_ruben> ow well .. already completed burning the iso .. no complaints from co-workers .. lets hope its just another friday-afternoon-hiccup
<henkjan> get a decent ISP like BIT :)
<_ruben> henkjan: i doubt you can compete with our current setup .. pricewise .. not sure what we pay, but it's not much for a fiber line that's supposed to be 50mbit but in reality gives us 100mbit
<_ruben> the 100mbit is shared by the complete complex, but the complex is stuffed with low-usage customers
<henkjan> not today :)
<_ruben> true :p
<Kamping_Kaiser> *radiates dislike*
<_ruben> too lazy to try to figure out where the problem might be
<Kamping_Kaiser> our fastest internet link at work is 2mb, which is via the usa (and as such has massive latency)
<_ruben> and contacting the isp isnt that easy .. unless i bypass our very-hard-to-contact direct contact and go straight to our colo provider which happens to be the actual provider of the fiber as well (wedare)
<henkjan> Kamping_Kaiser: i've got 2x 10G :)
<_ruben> Kamping_Kaiser: our backup dsl line is already 20mbit i think .. never done any real speedtests on it
<_ruben> biggest pipe i have avail is the 1gbit uplink we have at our other colo
<Kamping_Kaiser> _ruben, the fastest link i have access to (itshare sa's) is 8mb~
<Kamping_Kaiser> henkjan, :( sounds like a mate of mine at one of the unis in adelaide. his slowest net link is the gigabit to his desktop
<Kamping_Kaiser> s/net link/link to the net
<henkjan> Kamping_Kaiser: actually, i'm working at an ISP :)
<Kamping_Kaiser> henkjan, i gathered :)
 * _ruben should consider a jobswitch :p
<Kamping_Kaiser> hehe
<henkjan> _ruben: http://bit.nl/vacatures
<Kamping_Kaiser> visit australia and see what the 'first world' finds fast (or come to PNG and find out what the 3rd world finds fast...)
 * _ruben clicks on it for the fun of it
<_ruben> unix engineer might suit me
<henkjan> _ruben: wel, come join a nice club with smart people
<_ruben> tho my skills are probably too limited .. they're sufficient for my current place tho
<Kamping_Kaiser> "Medewerker" - something worker... whats mede?
<henkjan> _ruben: mwoah, if you are motivated to learn
 * Kamping_Kaiser is amused at english words scattered through the site
<_ruben> henkjan: motivated i am .. i'd feel sorry for my current company tho, they'd be near dead in the water without me .. ok .. a bit exagurated (sp?)
<_ruben> Kamping_Kaiser: coworker
<Kamping_Kaiser> ah
<sergevn> does anyone has any experience with setting up an WPA Enterprise + Freeradius environment?
<jdstrand> soren: I am not specifying the virtual kernel
<jdstrand> (I was on the phone)
<jdstrand> soren: is it required even with the 386 flavor?
<jdstrand> soren: but anyway-- you did see that it booted and dropped to initramfs, right?
<Kamping_Kaiser> sergevn, i've not used WPA, but i  have used Freeradius (a little)
<sergevn> Kamping_Kaiser: In what context did you use freeradius?
<_ruben> henkjan: and ede is a bit far me ;-) (not sure if the pun was intended or not)
<_ruben> for me even
<henkjan> _ruben: ah, that would be te biggest issue :)
<Kamping_Kaiser> to authenticate users against a remote system (no group data/login data, just username/password validity)
<_ruben> henkjan: currently working in rotterdam and living in zoetermeer .. zoetermeer - ede is an 1 hr drive :p
<_ruben> without traffic ;)
<henkjan> _ruben: thats what i'm driving daily (rijssen - ede)
<_ruben> jikes :p
 * _ruben takes a stroll down to the basement to reinstall one of his sles9 vmware hosts with ubuntu 7.10
<soren> jdstrand: Gutsy and onwards on i386 defaults to virtual.
<soren> jdstrand: Anything else defaults to server (or amd64-server on dapper/amd64).
<jdstrand> soren: but I specified the flavor
<soren> jdstrand: Oh.. Er.
<soren> jdstrand: Good point.
<jdstrand> --kernel-flavour 386
<jdstrand> :)
<soren> jdstrand: Yeah, I removed a stack of your options as they didn't apply here (network and mirror stuff).
<soren> jdstrand: And accidentally removed the kernel-flavour, too.
<soren> jdstrand: I'll try again.
<jdstrand> soren: too big of a stack apparently
<jdstrand> :)
<jdstrand> np
<soren> jdstrand: Ok, it works for me now. I'll push to launchpad and the ppa.
<kirkland> cjwatson: hi there--I have a message pending moderator approval on ubuntu-devel@lists.ubuntu.com...  mathiaz said you were a moderator on that list, and could probably push it through?
<Jeeves_> Kamping_Kaiser: Hi there!
<Kamping_Kaiser> Jeeves_, hello!
<jdstrand> soren: thanks
<_ruben> sweet .. install over ssh seems to work .. thanks for the pointer, soren
<soren> _ruben: Er... Which pointer was that? :)
<_ruben> the .. uhm .. what was it called again :p
<_ruben> the network console thingie
<cjwatson> kirkland: done. (It would be nice if the supporting function in lsb-base went to Debian so that we don't have to maintain deltas against every single package with an init script!)
<soren> _ruben: Oh! I somehow assumed we were talking about virtual installs or something.  :)
<_ruben> ah :)
<kirkland> cjwatson: absolutely, you bet.
<kirkland> cjwatson: any idea if there's a historical, political aversion to LSB in Debian?  (something I should be aware of before embarking on that trip?)
<Kamping_Kaiser> Jeeves_, i've successfully ssh'd in :)
<_ruben> crap .. unless im blind then the installer doesnt allow the striping option of logical lvm volumes
<cjwatson> kirkland: Debian adopted lsb-base fairly happily, although I think there has been a bit of controversy over 'status' specifically; I'd search debian-policy archives
<jdstrand> soren: ppa5 is a go! \o/
<kirkland> cjwatson: very good, will do.
<jdstrand> soren: hardy/i386 with 386 kernel flavor works great now
<jdstrand> soren: so you are calling grub commands on your own now rather than update-grub?
<TrioTorus> Hi, so what's with ubuntu security updates when using ebox packages? On the ebox trac page, it says that the ebox team needs to verify every security update. How is this going to work in Hardy?
<jdstrand> TrioTorus: first I've heard of that from them
<jdstrand> TrioTorus: ebox will be in universe, so it will be community supported, so I guess they will provide the updates for us to upload?
<TrioTorus> so, generally speaking, there will be no need to activate an extra repository?
<jdstrand> I would hope not
<jdstrand> TrioTorus: can you give a link?
<TrioTorus> jdstrand, http://trac.ebox-platform.com/wiki/Document/Development/DebianSecurityUpdates
<TrioTorus> this could well be out of date though...
<mathiaz> kirkland: you should search the debian bug track system in the debian-policy package.
<Jeeves_> Kamping_Kaiser: I see!
<jdstrand> zul: ^^
<mathiaz> kirkland: the wiki page on wiki.debian.org about LSB init script has a link to a bug in debian bts about adding status to the policy.
<jdstrand> zul: do you have a comment on this? I really think the ebox folks should just provide debdiffs in the normal way for universe security
<Kamping_Kaiser> Jeeves_, :) now i have to brush up on my ALOM and tftp skills
 * kirkland going there now, mathiaz 
<mathiaz> _ruben: are you using preseeds ?
<zul> the ebox guys provide us debdiffs and we have a look at them and intergrate them
<_ruben> mathiaz: no, running d-i in expert mode
<zul> so for ebox we are just the packaging guys
<Jeeves_> Kamping_Kaiser: You have experience with that stuff?
<mathiaz> _ruben: you get a message similar to: no root disk found ?
<Kamping_Kaiser> Jeeves_, a little, from mesing with our sun at work before the dev team got their hands on it
<_ruben> mathiaz: im still in the d-i completing the installation ;)
<kirkland> mathiaz: from Wed, 19 Jan 2005 01:30:55 +0100  :-)
<Jeeves_> Kamping_Kaiser: I would like to do the second one myself and make nice screenshots and stuff, for our weblog
<Kamping_Kaiser> Jeeves_, sure. i'll try and doc it as i go to help you when you do yours
<mathiaz> _ruben: there is a bug in the debian installer related to pv and lv - it's not able to wipe physical volumes
<mathiaz> _ruben: hum - I misread your comment
<_ruben> mathiaz: hmm .. d-i doesnt offer me grub either, only lilo .. perhaps i should go down a diff road
<mathiaz> _ruben: you were looking for striping option in lv
<mathiaz> _ruben: that's probably because you've installed /boot on an lv
<mathiaz> _ruben: grub doesn't support LV IIRC
<_ruben> mathiaz: ah crap!! .. i created a sw raid for /boot and forgot to tell d-i to use it for /boot .. damnit :p
 * _ruben writes a mental note to self: dont do installs on friday afternoons
 * _ruben goes back a few steps to the partitioner part
<cjwatson> d-i in expert mode> you mean "I am an installer developer and want lots of noise" mode? :-)
<Jeeves_> Kamping_Kaiser: Coolio
<mathiaz> cjwatson: are bug 154086 and bug 107205 the same ?
<ubotu> Launchpad bug 154086 in partman-auto-lvm "Installing to HDD with previous ubuntu fails to create fresh LVM claiming group already in use" [Medium,Confirmed] https://launchpad.net/bugs/154086
<ubotu> Launchpad bug 107205 in partman-auto-lvm "LVM install crashed - lvm metadata not removed when creating a new partition layout." [Medium,Triaged] https://launchpad.net/bugs/107205
<mathiaz> cjwatson: when I do my test, the preseed install still fails if there are previous lv/vg/pv on the disk
<_ruben> cjwatson: i more or less misinterpreted the expert boot option .. i kinda expected it to take me to the d-i menu, which it did, but it additionally asks me tons of extra questions as well ;)
<_ruben> "luckily" my lvm is on top of sw raid .. so with clean sw raid, there's no old lvm :p
<cjwatson> mathiaz: no, I don't think they are. 107205 depends on whether the group is activated or not; your bug is 154086.
<cjwatson> which depends on whether the group exists on the disk or not
<cjwatson> anyway, there's a preseed for that
<cjwatson> I think
<mathiaz> cjwatson: I think we talked about that - and the preseed doesn't work
<cjwatson> well, that depends on which preseed you were using
<cjwatson> it changed in hardy
<mathiaz> cjwatson: I'm using "d-i partman-auto/purge_lvm_from_device boolean true"
<cjwatson> mathiaz: that's the <=gutsy one; use partman-lvm/device_remove_lvm in hardy
<mathiaz> cjwatson: awesome - I'll try that !
<mathiaz> cjwatson: that should fix a very annoying problem when I reuse hardware to test installs
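For reference, the directive cjwatson named, written out as it would appear in a hardy preseed file, with a quick check that the line landed (the filename is an arbitrary example):

```shell
# Sketch: the hardy-era directive for wiping stale LVM metadata during
# an automated install (<=gutsy used partman-auto/purge_lvm_from_device).
cat >> preseed.cfg <<'EOF'
d-i partman-lvm/device_remove_lvm boolean true
EOF
grep device_remove_lvm preseed.cfg
```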
<Kamping_Kaiser> night all
<_ruben> install complete .. now i gotta walk down to the basement again to remove the cdrom to have it boot properly :/
<jdstrand> zul: so that will be the case after hardy is released? ie they will provide backported security updates?
<zul> jdstrand: that would be my understanding
<cjwatson> mathiaz: please let me know whether that works; however, I'm surprised that it didn't already ask you the question
<mathiaz> cjwatson: well - I'm not doing manual installs.. Everything is automated.
<cjwatson> mathiaz: in general, I would appreciate it if you could go into more detail about the exact way in which a preseeded install fails, rather than just saying that it fails. Usually a "failure" means that it asks a particular question (which might be something you can respond to, or might be a fatal error)
<cjwatson> mathiaz: but then you should be familiar with using DEBCONF_DEBUG=developer to figure out which question it is asking so that you can answer it!
<cjwatson> it's rarely useful to treat these things in a binary worked/didn't-work way
<ivoks> does anyone object to an MIR for libqwt5-qt4; it's a simple qt library, having it in main would make it possible to have a bacula gui console
<mathiaz> cjwatson: ok - I'll look into that when my preseed fails.
<nxvl> happy pi day!
<nxvl> i mean, good morning
<nxvl> :P
<cjwatson> mathiaz: in particular, if partman-lvm/device_remove_lvm is indeed what you need to set, DEBCONF_DEBUG=developer would have shown that the installer was asking it
<cjwatson> but if it's something else, then the log should be helpful in figuring out what
<mathiaz> cjwatson: thanks for clarifying this.
<cjwatson> mathiaz: also, happy to help with automation problems on #ubuntu-installer; I pay more attention there than here, in general
 * _ruben should probably go there and "whine" about lack of lvm striping support in the installer ;)
<cjwatson> that's odd, mdcfg has code for raid0
<cjwatson> oh, lvm striping, not raid striping
<_ruben> yeah
<_ruben> nasty way of achieving raid10/raid0+1 (always mix those 2 up)
<cjwatson> you could just use raid5
<_ruben> since raid10 is lacking from the installer as well (which would be another solution)
<_ruben> i need good write performance
<cjwatson> ah
<_ruben> which basicly means i should get hw raid ;)
<cjwatson> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=384521 though nothing's ever been done with it
<ubotu> Debian bug 384521 in installation-reports "wishlist: striped logical volume with LVM" [Wishlist,Open]
<cjwatson> I'll reassign it to the right place, at least
<_ruben> ah, nice
<_ruben> aug 2006 .. hehe
<_ruben> time to go home .. weekend
<lunaphyte_> hi :)
<lunaphyte_> i've been struggling with updating a server from 6.10 to 7.10.  i updated from 6.10 to 7.04 and finally got over the problems i had with that (it wouldn't boot) and so yesterday went through the process again to update from 7.04 to 7.10 and i again have a server that won't boot.
<lunaphyte_> this is what happens : http://i25.tinypic.com/25sym8g.jpg
<lunaphyte_> i booted from the install disc and ran a rescue shell, but there don't appear to be any raid or lvm programs available.  what can i do?
<cjwatson> lunaphyte_: the installer's rescue *mode* should give you raid/lvm
<cjwatson> assuming this is an alternate install CD
<cjwatson> (or server install CD)
<cjwatson> at least as of 7.10
<cjwatson> don't try booting the installer with init=/bin/sh or root=blah or whatever, that won't help much
<cjwatson> rescue/enable=true would be the parameter you want, or the rescue mode on the CD boot menu
<lunaphyte_> cjwatson: yeah, it's the server install cd.  i think i've figured at least one problem out - i have some sort of cd-rom or optical drive issue, although it's not clear what.
<lunaphyte_> maybe someone here can make sense of it:
<lunaphyte_> if i boot from the cd on a computer in my office, i can go through the process of getting into rescue mode, etc with no errors.  so that says to me, the cd is fine, the iso i downloaded is fine, etc,etc.
<lunaphyte_> but, if i take that same cd and use it to boot this server i'm working on, it boots, it loads the installer, it reads files off the disc, yet it complains that there is a problem with integrity of the disc, and craps out.
<lunaphyte_> so on the one hand, i'd conclude that since i proved the disc is good on another computer, it must be the optical drive.  but why would the optical drive appear to be able to do everything it needs to do except for reading this one particular file?
<lunaphyte_> i guess i'll stop whining and just go find another drive and swap them out.   :)
<cjwatson> have you tried cleaning the drive lens? I say this a lot, but it does seem to be a fairly common problem, and distressingly hard to figure out
<cjwatson> there could well be a problem positioned such that it only affects data in particular parts of the disk
<jmtt> kirkland: confcall today?
<kirkland> jmtt: we were just trying to figure that out....
<kirkland> jmtt: none of us had an invite
<jmtt> kirkland: please join now. we are still on.
<lunaphyte_> cjwatson: yeah - that was my first thought.
<lunaphyte_> i figured it out - the drive was connected w/ a 40 conductor cable instead of an 80 conductor cable.
<lunaphyte_> that's one of those things that, for me, i always forget to consider because i still don't really understand it.
<kirkland> jmtt: joining....
<jmtt> kirkland: same confcall info as last time if you can find it
<jmtt> kirkland: i can't private msg you for lack of freenode registration
<kirkland> jmtt: gotcha, we're looking it up
<jmtt> kirkland: ok
<kirkland> jmtt: can you add me to the permanent invite?  dustin.kirkland@canonical.com
<lunaphyte_> ok, i was able to get recovery mode to fully boot.  it saw all of my md devices, and all of my lvm volumes, and i was able to start a shell using my root volume.  how can i troubleshoot this?
<AnRkey_eepc> hi all
<AnRkey_eepc> i have some problems here with networking
<AnRkey_eepc> i have two nics in a 7.10 server box
<AnRkey_eepc> ubuntu-desktop installed
<AnRkey_eepc> can ping adsl router on one nic
<AnRkey_eepc> can't ping past it
<AnRkey_eepc> and now it won't boot so never mind, reinstalling
 * AnRkey_eepc breaks down in tears
<lunaphyte_> if i have 5 md devices (md0-4), should they all show uuids in the output of vol_id?
<theblogg1> can someone help me with
<theblogg1> this
<theblogg1> please
<theblogg1> ?
<theblogg1> http://ubuntuforums.org/showthread.php?t=724213
<AnRkey_eeepc> how do i enable routing on ubuntu?
<AnRkey_eeepc> i have 2 nics in this box and I want my second server to be able to get internet access through it
<faulkes-> AnRkey: same way you would do it in almost any linux distribution
<faulkes-> sudo sysctl -w net.ipv4.ip_forward="1"
<faulkes-> or uncomment the line about ip forwarding in /etc/sysctl.conf
<faulkes-> and reboot
<faulkes-> err, no, sudo sysctl -p /etc/sysctl.conf iirc
<faulkes-> man sysctl in other words ;)
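Pulling those corrections together into one sketch (the interface name is an assumption; the iptables rule adds the NAT that a shared-internet setup like AnRkey's usually also needs):

```shell
# Enable forwarding for this boot (needs root):
#   sysctl -w net.ipv4.ip_forward=1
# Persist it by uncommenting net.ipv4.ip_forward=1 in /etc/sysctl.conf,
# then reload with: sysctl -p /etc/sysctl.conf
# NAT so the second box can reach the internet (eth0 assumed external):
#   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# The current value can be read without root:
cat /proc/sys/net/ipv4/ip_forward
```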
<jdstrand> faulkes-!
<jdstrand> faulkes-: did you get my message re mysql?
<ScottK2> lamont: Let me know if you want to discuss the postfix bug.
<lamont> ScottK: later on or maybe this weekend, sure
<ScottK2> OK.  No rush.
<lamont> there was another one that came in today wrt not honoring smtpd_recipient_restrictions or some such that I suspect is Pebkac.
<jdstrand> soren, nijaba: I am *loving* the ubuntu-vm-builder/libvirt combination.  It and you guys are *awesome*
<faulkes-> jdstrand: message re: mysql?
 * faulkes- looks around
<jdstrand> faulkes-: I'll recap
<jdstrand> faulkes-: I recently did a -security update for mysql
 * faulkes- nods
<jdstrand> faulkes-: it had some significant changes, esp on dapper-feisty
<jdstrand> faulkes-: so I uploaded it to -proposed for more testing
 * faulkes- nods again
<jdstrand> faulkes-: mathiaz thought it might be a good idea to announce it somewhere on the forums
<jdstrand> maybe a developer section?
<jdstrand> faulkes-: let me get the bug...
<jdstrand> bug #201009
<ubotu> Launchpad bug 201009 in mysql-dfsg-5.0 "[mysql-dfsg-5.0] fix for several open vulnerabilities in -proposed" [High,Fix committed] https://launchpad.net/bugs/201009
<jdstrand> faulkes-: really all that is needed is for people to enable -proposed, install the packages and go to the bug and say 'works for me on <release>'
 * faulkes- nods
<jdstrand> faulkes-: I am not sure where the best place would be for that (I am a total forums newbie)
<jdstrand> faulkes-: or if it is appropriate at all for the forums
<jdstrand> faulkes-: thoughts?
<faulkes-> well
<faulkes-> I think it is appropriate
<faulkes-> ubuntu dev link forum seems most appropriate but it's dead
<mathiaz> zul: did you send your diffs to debian from your samba upload ?
<zul> not yet
<faulkes-> jdstrand: given the nature of the bug, I would say security forum is probably the best place
<zul> its on my todo list
<mathiaz> zul: great - are you using submittodebian ?
<zul> mathiaz: will be :)
<mathiaz> zul: hum... actually I'm not sure it would be useful in that case as it's not a merge
<jdstrand> faulkes-: http://ubuntuforums.org/forumdisplay.php?f=322 ?
<mathiaz> zul: but a new upstream revision...
<ScottK2> Isn't sending samba diffs to Debian as simple as giving it to slangasek?
<mathiaz> ScottK2: true - slangasek is doing a great job at it
<mathiaz> ScottK2: but he is quite busy these days with beta coming up next week
<mathiaz> ScottK2: and christian is already working on packaging 3.0.28a for debian
<ScottK2> Fair enough
<faulkes-> jdstrand: yeah
<mathiaz> ScottK2: so the earlier we send our patches, the higher the chance to get it included in Debian in the next upload
<zul> I was thinking of subscribing to the pkg-samba ml and sending the patches there
<mathiaz> zul: I think they prefer to use the bts
<zul> mathiaz: ok
<zul> Ill take out the relevants
<mathiaz> zul: I'm subscribed to the pkg-samba ml and all the bugs are sent there
<mathiaz> zul: you may wanna have a quick discussion with steve about this though
<zul> will do
<jdstrand> faulkes-: I submitted a new thread.  thanks!
<ScottK> mathiaz or dendrobates: Why does ubuntu-standard depend on ufw instead of recommend it?
<mathiaz> ScottK: recommends are not installed by default with apt-get
<mathiaz> ScottK: I also think it's the way the meta-package is created
<ScottK> mathiaz: They are installed by default for meta-packages
<ScottK> If you look at the meta packages most of them moved everything to recommends.
<mathiaz> ScottK: hum.. I don't know then.
<mathiaz> ScottK: may you should ask in #ubuntu-devel
<jdstrand> ScottK: it's in the standard seed
<jdstrand> isn't ubuntu-standard created from the seed?
<mathiaz> jdstrand: yes - from ubuntu-meta
<ScottK> jdstrand: Yes, but it's depends, not recommended.  As an example, I can remove ntfs-3g if I don't want it, but not ufw.
<mathiaz> jdstrand: but the question is why it's a depend rather than a recomend
<mathiaz> IIRC Recommends are installed by the installer
<mathiaz> However apt-get install doesn't install Recommends by default
<mathiaz> so I wonder what would happen on an upgrade
<ScottK> mathiaz: It does for meta packages.  They are a special case.
<owh> Greetings all. I'm looking for some people who can do some testing. I think I've found a bug in cron - specifically, if you add a script to /etc/cron.daily - and if you *don't* specifically reload cron, it never gets executed. I've tested this on a hardy-server, a gutsy-server and a gutsy-workstation. I get mixed results.
<owh> The gutsy-server in question had a problem where a script wasn't running. After a reload it did. I added my test script and that too ran automatically. The hardy-server and the gutsy-workstation both do not run the script.
<owh> Before I lodged a bug I thought I'd first confirm that such a problem exists.
<owh> My test script is this:
<owh> #!/bin/bash
<owh> date >> /var/log/date.log
<owh> Yesterday Jeeves_ reported that he had seen the same problem on his gutsy-server with /etc/cron.d
<owh> Or should I just lodge the bug as it stands?
 * owh is a little reluctant to pull the alarm cord at this stage in the cycle.
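One hedged possibility worth ruling out before filing: run-parts has strict filename rules, so a /etc/cron.daily script can be "there" yet never run. A throwaway demonstration (the script body mirrors owh's test script; the directory and names are made up):

```shell
# run-parts only considers executable files whose names match
# ^[a-zA-Z0-9_-]+$ by default; a "." in the name gets it silently skipped.
dir=$(mktemp -d)
printf '#!/bin/sh\ndate >> /var/log/date.log\n' > "$dir/datestamp"
cp "$dir/datestamp" "$dir/datestamp.sh"
chmod +x "$dir"/datestamp*
run-parts --test "$dir"    # lists only .../datestamp, not datestamp.sh
```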
<sommer> owh: I'll give it a testy test test
<^u^> am surprised the basic server install uses so much hd space...
<pschulz01> kgoetz: ping
<owh> sommer: Tah
<sommer> owh: setup a gutsy test and a hardy test
<sommer> you may need to remind me to check the results :-)
<owh> sommer: Which TZ?
<sommer> edt
<owh> 6:25am is not the same everywhere :)
<owh> What's that in UTC?
<sommer> -4
<owh> Alternatively, what time is it with your test machines?
<owh> Cool
<sommer> heh... had to double check with the stupid daylight savings thing
<owh> Heh
<owh> Note that the machines didn't reboot after creating the script, which might also affect the outcome.
<owh> s/might/will likely/
<sommer> owh: is there nothing in the logs?
<owh> sommer: The logs show that the daily job ran normally. But the script doesn't actually run. run-parts --test shows the scripts as expected.
<sommer> ah, strange
<sommer> can't say that I'm using any custom cron.daily scripts
<owh> Stranger still, my console just showed my 6:25 daily cron job run at 7:35 on the gutsy-workstation.
 * owh checks crontab
<owh> 25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
<owh> That ran, well, the date log said 7:35, meaning that either my daily cron job took 1 hour and 10 minutes to get to the date stamp job, or something else ran it.
<owh> Whoa
<owh> grep daily /var/log/syslog shows: Mar 15 07:38:20 localhost anacron[16254]: Job `cron.daily' terminated
<owh> There is really screwy stuff going on here.
<owh> /var/log/syslog.0:Mar 15 06:25:01 localhost /USR/SBIN/CRON[15300]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
<owh>  /var/log/syslog.0:Mar 15 07:30:03 localhost anacron[16254]: Will run job `cron.daily' in 5 min.
<owh> It appears that both anacron and cron are running the daily job.
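That double execution is consistent with the guard in the /etc/crontab line owh quoted: cron itself only falls back to running cron.daily when anacron is absent, otherwise anacron owns the job. A tiny sketch of that test (the fake path just forces the fallback branch so it is visible):

```shell
# Mirrors "test -x /usr/sbin/anacron || run-parts --report /etc/cron.daily":
# cron runs the daily jobs itself only when anacron is not installed.
check_daily_runner() {
    if test -x "$1"; then echo anacron; else echo cron; fi
}
check_daily_runner /nonexistent/anacron   # prints: cron
```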
<owh> All those are from the gutsy-workstation. The hardy-server shows no daily cron references in any log files.
<owh> I think I should probably put this in a wiki page for documentation purposes at the moment.
<sommer> probably good to keep a record somewhere
<owh> Yup.
<Kamping_Kaiser> pschulz01, pnog
<Kamping_Kaiser> *pong
<pschulz01> Kamping_Kaiser: Just checking :-)
<pschulz01> Kamping_Kaiser: I'm watching blinking lights.
<Kamping_Kaiser> pschulz01, i'm just out of bed
<pschulz01> Kamping_Kaiser: apt-get install pants
<Kamping_Kaiser> pschulz01, i'm slightly ahead of that :p
<Kamping_Kaiser> what do you take me for - jdub?
<pschulz01> Kamping_Kaiser: well....
<Kamping_Kaiser> hehe
<mindframe-> what should i use to monitor bandwidth usage per port?
#ubuntu-server 2008-03-15
<sommer> mindframe-: mrtg works great for that
<owh> sommer: I've written up what I've found so far: https://wiki.ubuntu.com/OnnoBenschop/ubuntu-server/cron-bug
<sommer> cool
<sommer> looks good
<owh> I'm just wondering if its something weird like, reboot - add script - fail, vs reboot - reload - add script - success - add script - success
<sommer> have you tested cron.hourly? ...wondering if it may be affected as well
<owh> Hmm, I thought that I invented that on my system, hardy has it too :) I'll check.
<owh> sommer: So far two out of three act normally, but the one we care about missed the job by about 46 seconds, so I'll have to wait an hour :)
<sommer> wow... maybe something with the hardware?
<owh> I'm beginning to wonder if it's an anacron interaction. The hourly job doesn't use anacron.
<sommer> ah... makes sense
<owh> As for hardware, the gutsy-server is an HP ProLiant, the gutsy-workstation is a ThinkPad and the hardy-server is a vmware-server
<gregbrady> Good evening everyone.
<gregbrady> What is the best way to go about setting up a proxy server in my home network in order to block certain sites from individual users?
<andguent> evenin' :) (i'm thinking most of the room is fairly quiet...)
<andguent> are you already running Ubuntu on a server, or server-esque box?
<Kamping_Kaiser> you'll need an authenticating proxy
<gregbrady> Ubuntu on a desktop
<gregbrady> All ubuntu machines.  5 in all
<gregbrady> Over wireless
<Kamping_Kaiser> gregbrady, do you have one user per system?
<andguent> is your router running ubuntu as well? or just 5 desktops
<gregbrady> No the three of us have access to all computers, except one main one.
<gregbrady> Router is a standard Belkin Pre-N
<andguent> if you know exactly which websites you want to block, or exactly which websites you want to allow, squid will be the most popular proxy out there to learn
<gregbrady> No, not exact websites.
<gregbrady> But I guess I could build a list over time
<andguent> if you want it to be a 'smart' filter that just blocks certain types of websites, that gets slightly harder, but still doable
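For the "exact websites" case andguent mentions, the squid side is a couple of lines. A sketch — the domain names and subnet are placeholders, not from this conversation:

```
# squid.conf sketch: block specific domains, allow the rest of the LAN.
# Order matters: squid applies http_access rules top to bottom.
acl blocked_sites dstdomain .example.com .badsite.example
http_access deny blocked_sites

acl lan src 192.168.2.0/24
http_access allow lan
```

Keyword/content filtering is what DansGuardian adds on top; squid alone only matches on things like destination domain or URL patterns.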
<Kamping_Kaiser> dansguardian is the only filtering proxy i have used, so it's the only one i can really recommend
<gregbrady> Kamping_Kaiser, yup, I understand completely.  I'm looking for something maybe more generic, if that makes sense.
<andguent> same, dansguardian is a very nice key word based filter, but it usually is run with squid
<Kamping_Kaiser> andguent, i'm not aware of it running with anything else
<gregbrady> dansguardian just seems a bit daunting to install and manage.
<Kamping_Kaiser> gregbrady, are you after keyword filtering/sitelists, what sort of stuff?
<Kamping_Kaiser> it's not as hard as you think, except it has the hidden overhead of needing to configure squid
<gregbrady> Kamping_Kaiser, keyword and content
<gregbrady> I would also like to monitor sites visited
<Kamping_Kaiser> all proxys will log
<gregbrady> Yup, I know.
<andguent> if you have a good guide, or are familiar with squid & dans, it can be 3 config files and maybe 10 lines of config entries, not bad for what it does
<gregbrady> -no guide, and not familiar with squid or dans
<Kamping_Kaiser> 3 files?
<andguent> squid.conf, dansguardian.conf, dansguardianf1.conf, ack, and /etc/default/dansguardian
<andguent> ok, 4 :)
<Kamping_Kaiser> you dont /need/ to edit f1 *i havent*
<Kamping_Kaiser> i dont remember default/dans
<andguent> going off of memory, its possible i'm wrong, anywho, the point is it isn't difficult at all
<Kamping_Kaiser> agreed on the point :)
<andguent> trying to track down a good guide
<gregbrady> Yeah, so am I!
<Kamping_Kaiser> https://wiki.ubuntu.com/EdubuntuProxy any help?
<gregbrady> let me check.  Be aware that I'm just running Ubuntu
<andguent> eh, its all the same, just different software installed by default
<gregbrady> That's my understanding.  I'm relatively new so I just wanted to clarify.
<andguent> as long as you can install dansguardian and squid, it really doesn't matter what you run it on
<andguent> its actually possible to at least run squid on a commodity linksys router, but thats another project :P
<andguent> Kamping_Kaiser's link looks good, I found this guide as well which keeps it short and sweet: http://www.brianmathis.net/2007/11/30/howto-squid-and-dansguardian-with-dapper/
<owh> gregbrady: Just to break in here, but many domestic wifi routers have filtering capacity built in.
<andguent> I feel like they are both missing a few httpd_accel lines for squid
<gregbrady> owh, yes, but it is pay by the month on my router
<owh> gregbrady: Huh? In a domestic environment? How are you connected to the 'net?
<gregbrady> owh rogers and a belkin router
<owh> gregbrady: The belkin router is in your home?
<gregbrady> owh yes
<gregbrady> Belkin Pre-N 530200
<owh> gregbrady: Then my point still stands. Many domestic routers have filtering built in, that is, you can access their web interface and setup some form of URL filtering. Not all, but many.
<owh> gregbrady: Most of them are running some form of linux anyway :)
<gregbrady> This one relies on a service that you subscribe to
 * owh is googling.
<owh> gregbrady: I get no hits at all with that model number.
<gregbrady> Sorry, Ubuntu froze and crashed.
<Kamping_Kaiser> :o
<owh> gregbrady: That sounds like a bigger problem than filtering. I've only had that happen with hardware faults.
<gregbrady> yeah, first time for everything.
<gregbrady> Too bad, I was doing a lot of long running processes in the background
<owh> Did you see my last comment: gregbrady: I get no hits at all with that model number.
<gregbrady> owh, second please.  I have about 5 process to get started again.
<gregbrady> Man, and they were running for the last about 4 hours.......
<gregbrady> Ok, back in business!
<gregbrady> Belkin F5D8230-4
<owh> gregbrady: http://whirlpool.net.au/index.cfm?a=h_view&model_id=327 - look for manual, click the link, look for manual, download.
<gregbrady> owh, it only has them for win/mac
<gregbrady> nope, found it
<owh> gregbrady: Integrated Parental Control Web Content Filter: Belkin Parental Control allows you to set up your own policies and block any website you want. There is also an optional reporting feature (fee-based) that allows you to get a report showing you every website that was visited from your network (refer to your Parental Control User Manual for more information).
<gregbrady> exactly, fee based
<owh> gregbrady: Read it properly
<gregbrady> owh, sorry page 10.  Parental controls.
<owh> Let me make it clearer for you gregbrady: Integrated Parental Control Web Content Filter: Belkin Parental Control allows you to set up your own policies and block any website you want.
<andguent> owh: I think he was looking for keyword/smart filtering of some sort
<gregbrady> Parental control is free for 6 months, which includes blocking anysite
<gregbrady> The last paragraph says what is included in the Parental controls, if subscribed to
<gregbrady> owh, I could live with tracking and manual list creation if required
<gregbrady> owh, maybe I've misread it.
<gregbrady> Is this a really unusual request for Ubuntu users to make?
<owh> Well, on a ubuntu-server group, yes :)
<Kamping_Kaiser> gregbrady, request what?
<gregbrady> owh, could be.
<gregbrady> Kamping_Kaiser, requesting an easy method for managing web access.
<andguent> gregbrady: I'm more than happy to help getting squid and dansguardian up and running with you, i was just letting you check out alternate options in case they really did work for what you needed
<gregbrady> andguent, nope, I understand your intent.
<gregbrady> andguent, I guess I had hopes it would be easier.  I've never tried this before
<andguent> there may be other better ways to do it, but the squid and dans combo has been around for a while, and its the one i know [works]
<andguent> i've heard smoothwall can be an easy install, but i've never used it
<gregbrady> andguent, I will attempt to follow the directions I've already been given
<andguent> the long and the short of it is that there is no harm in trying it, at least not until you start permanently forcing custom iptables entries at boot, then you just need to remember where you made that change 6 months ago :)
<owh> gregbrady: Everything in the manual indicates that the only part that is fee based is the reporting. I do not have access to the "Parental Control User Manual", but as the owner of the device you presumably do.
<gregbrady> owh, nope and when you try to use the function, it asks you to login in and register, which of course does not work.
<owh> gregbrady: If your domestic router does what you need then that will be far simpler to implement than anything else - unless you already have a central gateway linux box.
<gregbrady> owh, I agree.
<owh> gregbrady: Did you look on the CD that came with the device?
<andguent> gregbrady: I can even try setting the same filter up here with you, i don't really have a need for it right now though
<gregbrady> owh, I just googled and found out that that service no longer works.
<gregbrady> owh, they were indeed subscription based, but it does not function anymore.
<owh> gregbrady: Then perhaps contact Belkin and ask them what you need to do to enable filtering. Perhaps they have a new firmware version for you.
<gregbrady> owh, I tried to get help with them before with VPN and that was a useless adventure.  No more Belkin products for me!  But, I'm stuck with what I have at the moment.
<owh> andguent: I completely understand your response and it is correct if the router cannot do this, but only then. It might be a trivial thing to initially setup, apt-get install xyz, but maintaining it will be a challenge across several machines.
<gregbrady> owh, exactly!!!!!
<gregbrady> owh, that is why I wanted LTSP to work, but it does not work with a wireless server
<owh> Huh?
<owh> Says who?
<gregbrady> I tried, with lots of help in the chatrooms, no luck
<andguent> its all about deciding how complicated you want it, depending on the technical skill of those being filtered :)
<owh> gregbrady: There is no earthly reason that I can think of, other than perhaps available bandwidth that might cause issues. (Bugs not withstanding.)
<gregbrady> I thought LTSP sounded like a dream to me.  Maintain one main machine only......the rest are just thin clients with nothing to do on them.  Simple in theory.
<owh> I suppose you could point all browsers at the same proxy server and lock that redirection down, then run the proxy/filter in one place and maintain it there. It will require that workstation to be on for anyone to browse, but it would work.
<Kamping_Kaiser> owh, the key issue is devices dont netboot
<Kamping_Kaiser> * netboot wireless
<Kamping_Kaiser> anyway. afk
<owh> Kamping_Kaiser: So, put an initrd on the device and get it to connect.
<gregbrady> The whole thing is a lot more complex than I thought at first.
<owh> gregbrady: It always is, which is why I was diverting you toward fixing your router :)
<gregbrady> owh, Oh, I understand you completely and it would be awesome!
<gregbrady> owh, Ubuntu is so easy to upkeep on individual machines....if I could just manage net access at one point, that would be awesome.
<owh> gregbrady: As I said, configure one machine with andguent's assistance if so inclined, then redirect the proxy from all the other workstations to that same machine. Magic.
<gregbrady> owh, or buy a proper router
<owh> gregbrady: You can often throw money at anything to solve a problem :)
<gregbrady> owh, good point, but this decision is not trivial.  I will have to maintain it as well.
<andguent> afk for a bit, be back
<gregbrady> owh, and this server has a bunch of hard drives in it that consume power.....more than a router I would imagine
<owh> gregbrady: By several orders of magnitude.
<owh> sommer: In case you're awake, the hourly jobs didn't run on hardy either.
<gregbrady> owh, and I'm not really sure I understand exactly how the whole squid/dansguardian actually works, so if there is a problem it will be hard to fix.
<gregbrady> owh, I like to understand the things I implement
<faulkes-> gregbrady: a good policy
<faulkes-> but nothing a bit of time and determination won't solve
<faulkes-> squid is fairly straight forward once you do your config to your tastes
<gregbrady> faulkes- no doubt, I'm investing now.
<faulkes-> I havent used dans so
<gregbrady> faulkes- I just have a problem blindly installing something.  It may work, but as soon as there is a glitch, and there will be glitches, then the real nightmare starts
<gregbrady> squid and dans maybe perfect, I just have to read more to understand
<Kamping_Kaiser> i love the way its set+forget after the initial install
<gregbrady> Kamping_Kaiser, I'm still reading.....
<hatter> I have a problem where the ubuntu server network lags intermittently for 3 or 4 seconds,  the server is primarily samba,  I have changed every piece of hardware,  any1 got any clues ?
<Kamping_Kaiser> gregbrady, not rushing you :)
<gregbrady> Kamping_Kaiser, nope, none taken.  I appreciate the help!
<andguent> I'm back -- gregbrady: I'm more than happy to help with things, but it doesn't make sense for half a dozen people to all give you ideas at once :) I will keep quiet unless asked for something
<gregbrady> andguent, thanks and no problem.  I'm just doing some research here for my own peace of mind.
<andguent> there are multiple ways on how to implement it, some are dead easy to revert (so easy that if you have 12 year olds+, it might not be effective..)
<andguent> but definitely read what you need
<Kamping_Kaiser> afk. back as kgoetz in about 30
<gregbrady> Ok, trying to install
<gregbrady> tons more to read in squid.conf!
<andguent> gregbrady: Questions for you (if you want to do some design planning...) What would you consider the highest computer skill level of those being blocked is? (IE does this filtering config need to be protected by passwords) Are you sure that the proxy server box you are setting up will always be on and available? Will that box have a static IP address?
<gregbrady> Skill level is minimal.....My son is 8.
<gregbrady> The server can always be on
<gregbrady> Server can be static, I don't see an issue there.
<andguent> good good, if it hops around, family members will complain, including the one who shares your bed :)
<gregbrady> hahahaha
<andguent> although, i'm guessing most wives are more agreeable to porn filters than, say.... rewiring all of the phones to go through voip, and then voip crashing....
<gregbrady> I don't care about porn for the wife.
<gregbrady> I only need to protect my son.  On all machines, except the server
<andguent> yea, i know, but i'm sure she likes the idea... anywho, i'm getting us off topic
<gregbrady> hahahah, for sure!
<andguent> are you following a guide of any sort? most guides I have seen don't test things in phases very well, and you end up with a 4 part process that is broken somewhere in the middle :)
<gregbrady> https://wiki.ubuntu.com/EdubuntuProxy
<gregbrady> I've only installed the software and opened gedit
<gregbrady> with squid.conf
<andguent> ok good, I would say get squid.conf adjusted as needed, and then pause there to test it and verify functionality
<gregbrady> I must read to understand what I'm adjusting though.  I'm working on that now.
<andguent> take your time
<gregbrady> It says the default settings should be fine
<andguent> only one way to find out for sure :) I usually adjust things, but lets try the easy way first
<gregbrady> So just turn it on?
<andguent> yea, lets try that start command, and see if manually pointing your browser to your proxy gets you any connectivity
<gregbrady> failed
<gregbrady> no worries.  I've never had one single thing work out of the box with default settings.
<andguent> squid failed to start? did it give a message? usually I would say avoid pasting numerous lines in a chat room, but we seem quiet enough in here tonight :)
<gregbrady>  * Starting Squid HTTP proxy squid                                               * Creating squid spool directory structure
<gregbrady> FATAL: Could not determine fully qualified hostname.  Please set 'visible_hostname'
<gregbrady> Squid Cache (Version 2.6.STABLE14): Terminated abnormally.
<gregbrady> CPU Usage: 0.004 seconds = 0.004 user + 0.000 sys
<gregbrady> Maximum Resident Size: 0 KB
<gregbrady> Page faults with physical i/o: 0
<gregbrady> Aborted (core dumped)
<gregbrady> FATAL: Could not determine fully qualified hostname.  Please set 'visible_hostname'
<gregbrady> Squid Cache (Version 2.6.STABLE14): Terminated abnormally.
<gregbrady> CPU Usage: 0.012 seconds = 0.004 user + 0.008 sys
<gregbrady> Maximum Resident Size: 0 KB
<gregbrady> Page faults with physical i/o: 0
<gregbrady> Aborted (core dumped)
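The error squid prints here names its own fix: it refuses to start when it cannot derive a fully qualified hostname, and one line in squid.conf satisfies it. The hostname below is a placeholder; any label works:

```
# squid.conf — squid aborts at startup if it cannot work out an FQDN,
# so give it one explicitly:
visible_hostname myproxy
```

On this era of Ubuntu the service is then restarted with `sudo /etc/init.d/squid restart`.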
<andguent> ok, lets try what it asks then, do you have a line in squid.conf that starts with 'visible_hostname' ?
<gregbrady> already working on that
<kgoetz> back :)
<andguent> excellent, forgive me if I ask silly questions :)
<andguent> kgoetz: we are just starting into basic squid configuring, both of us got distracted a bit while you were out :)
<gregbrady> andguent, no problem
<gregbrady> ok, everything is commented out except the http_port line
<andguent> i believe you can set visible_hostname to anything you want, just keep tinkering until squid restarts without complaining -- I could get my own squid started up here for good measure too....
<andguent> Can I assume you are familiar with http://paste.ubuntu-nl.org/ just in case we need to exchange config files?
<gregbrady> I only used that once
<kgoetz> andguent: cool. i'm trying to get setup here, and having technical issues ;|
<gregbrady> I'm not sure what the hostname should be
<gregbrady> What format?  In quotes?
<andguent> shouldn't need quotes unless you have spaces or fancy characters
<gregbrady> and the name is irrelevant?
<andguent> I just put in a line on its own like this: 'visible_hostname goober'
<andguent> yea, put it in as 'youPeskyKidsMadeMeResortToThisJustToShieldYourEyes' :)
<gregbrady> ok, squid started
<gregbrady> confirming now
<gregbrady> keithclark@keithclark-desktop:~$ ps aux | grep squid
<gregbrady> root     20819  0.0  0.0   4780   652 ?        Ss   23:49   0:00 /usr/sbin/squid -D -sYC
<gregbrady> proxy    20824  0.3  0.3   7000  4528 ?        S    23:49   0:00 (squid) -D -sYC
<gregbrady> 1000     20859  0.0  0.0   2976   756 pts/0    R+   23:50   0:00 grep squid
<andguent> via 'ps aux|grep squid' I assume? -- I have another confirm command to run: 'netstat -nltp|grep squid'
<andguent> I get: tcp        0      0 0.0.0.0:3128            0.0.0.0:*               LISTEN     28667/(squid)
<gregbrady> I guess that means no then
<gregbrady> (Not all processes could be identified, non-owned process info
<gregbrady>  will not be shown, you would have to be root to see it all.)
<kgoetz> check your logs too
<andguent> your command output was good, but netstat should show one line.... hmm, maybe 'sudo netstat -nltp|grep squid'
<gregbrady> yup....
<gregbrady> sorry, a home distraction....
<andguent> happens :)
<gregbrady> ok, it was the tv
<gregbrady> so, success at the moment
<andguent> ok, i personally like to test every phase so we know if we are standing on good ground or not, point your firefox manual proxy through localhost port 3128.... I can find exact menus in a sec
<gregbrady> Should I not set my ip as static now for this machine?
<gregbrady> And, should I be using my laptop to try this out now?
<andguent> eh, lets confirm squid first, setting static IP can be the next step
<kgoetz> personally i'd think ip is first, since you want squid to listen on it
<andguent> currently, its listening on all IPs, squid won't even care if you hop IPs, it will only matter to other computers, and none of them know about it yet
<andguent> In firefox, can you go Edit -> Preferences -> Advanced -> Network -> Settings -> Manual Proxy....
<andguent> remember this location, we will need to undo it later....
<gregbrady> not a problem
<andguent> Check off "Use this proxy server for all protocols", and enter for HTTP proxy: localhost, port 3128
<andguent> then get a new tab up and just check your connection to google
<gregbrady> localhost=ip of server?
<andguent> you should actually be able to put that english in, exactly as spelled
<andguent> IP would work too, but we might be changing that in a minute :)
<gregbrady> ok, google works fine.  how do I confirm it went through the proxy server?
<gregbrady> I actually understand this so far!
<andguent> try 'tail /var/log/squid/access.log'
<andguent> it should show your page hit, and your IP among other things
<gregbrady> yup, it worked
<gregbrady> so really, at this point, if I changed my other machines to point to this proxy server, tracking would be working?
<andguent> good good, in the end you will end up with two logs like that, dans will have its own
<andguent> yes, at minimum, if you repeated the firefox proxy settings on every box, you could at least see what sites they are visiting
<andguent> but note that squid usually logs time stamps in non-english format
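Repeating the Firefox proxy settings on every box, as andguent suggests, also has a command-line counterpart: most tools (wget, apt, etc.) honour the `http_proxy` environment variable. A sketch — "proxybox" stands in for the server's actual name or IP:

```shell
# Point command-line HTTP clients on a workstation at the squid box.
# "proxybox" is a placeholder for the proxy server's hostname or IP.
export http_proxy="http://proxybox:3128/"
# e.g. wget http://example.com/ would now fetch through the proxy
```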
<gregbrady> Awesome, you are good.  I actually followed this
<gregbrady> andguent, I understand....I'm just happy with the progress
<andguent> dans will have some easier logging to follow, i like to change date stamp to easier readable format, but thats later :)
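The "non-english" timestamps andguent refers to are Unix epoch seconds at the start of each access.log line. They can be rewritten on the fly; a minimal sketch (helper name invented, GNU `date` assumed, and column spacing is not preserved):

```shell
# Sketch: rewrite the leading epoch timestamp of each squid access.log line
# as a readable UTC date. Pass the log file as the first argument.
humanize_squid_log() {
    while read -r ts rest; do
        printf '%s %s\n' "$(date -u -d "@${ts%.*}" '+%Y-%m-%d %H:%M:%S')" "$rest"
    done < "$1"
}
# Example: humanize_squid_log /var/log/squid/access.log | tail
```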
<gregbrady> now, I'm assuming, dansguardian sits between the proxy server and the net?
<andguent> actually, the other way around, dans likes to get everything first, and then passes everything to squid
<andguent> but close :)
<gregbrady> That is what I meant
<andguent> squid is mostly there for speed and redirection functionality
<andguent> ah ok
<gregbrady> internet to dans, dans to squid, squid to client, client to browser?
<gregbrady> and vice versa?
<andguent> either way, a static IP is a good idea before we get too far into this, feel free to undo your proxy settings in firefox before we forget about them...
<andguent> mmmm, close...
<gregbrady> ok, still learning
<gregbrady> proxy reset
<andguent> for outbound connections, the chain is workstation app (firefox) -> Dans on server -> Squid on server -> router -> internet, the return trip would be flipped
<gregbrady> got you.
<kgoetz> iptables redirects > firefox settings
<gregbrady> yeah, this is where I get a little confused.
<andguent> definitely an option, with an 8 year old though, we can get away with the easy manual proxy for now if we want
<gregbrady> I would have thought that dans would be installed and configured first then?
<andguent> i think of squid as being the foundation, start from the link closest to the internet, and test backwards from there
<kgoetz> andguent: i'd think iptables would be easier overall, as you dont need to reconfigure each client
<gregbrady> So far my son does not mean to do what he does.
<gregbrady> I got you.....
<gregbrady> Sorry, just trying to figure all this out as we go.
<andguent> mmm, iptables will need to be run on each workstation, unless this server is becoming the router, either way we are touching each workstation
<kgoetz> i just assumed the server would become the default route
<gregbrady> ok, then the server will dole out ip addresses?
<gregbrady> dns serving duties?
<andguent> it can be, but that requires some advanced firewalling --- none of this is required, but can be done
<andguent> you can leave dhcp and dns exactly as is if you want to keep it simple
<gregbrady> no....the hardware firewall seems to be working....and I lend out some wireless to my neighbour
<gregbrady> I dont' want his machine to be affected
 * kgoetz bites back a suggestion involving more complexity 
<kgoetz> bad me!
<gregbrady> kgoetz, please let me hear them
<andguent> i would say keep things as is, make the changes that are easiest to undo :) run with it for a week, and then make it more complex :)
<gregbrady> I will keep simple, but I want to hear options
<andguent> or throw another nic in the server, install shorewall, pull the belkin behind the new router, and make it a transparent proxy for all :)
<gregbrady> Maybe I should have said this....I have a neighbour that can't afford the internet access, so I let him use mine via wireless.  I don't want to change his settings.
<kgoetz> set the gateway to be the default route for everything (so everything gets the option of using squid), and drop an iptables rule to route the neighbour through to the net directly not via squid
<gregbrady> andguent, if I were to throw another nic in, I would just buy another router with everything built in.
<andguent> if you have ubuntu (or really anything with iptables --- linux kernel 2.4 and up), its dead easy to force ALL port 80 outbound traffic to your filtering
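The redirect andguent describes is a single nat-table rule on the box acting as the gateway. Shown here as a dry run that only prints the command — the interface name and DansGuardian port are assumptions for this network; to apply it for real, run the printed command as root:

```shell
# Dry-run sketch: the iptables rule that would transparently push all LAN
# port-80 traffic into a filter listening on port 8080.
# eth0 and 8080 are assumptions; adjust for your interface and filter port.
redirect_rule='iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080'
echo "$redirect_rule"
```

This only catches traffic that actually routes through the box, which is why the thread keeps circling back to whether the server becomes the default gateway.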
<andguent> you could, but running a full linux server as a router is a massive amount of power, not only knowing what websites people have been to, but who is hogging all of that download speed, monitoring all ports, not just web traffic
<andguent> iptraf is your friend, but its only useful if your server sees the connections going out to the internet
<gregbrady> andguent, yes, I understand the benefits.
<andguent> either way, all of that stuff can be added later if you feel so inclined, not required for tonight :)
<andguent> kgoetz: I agree you could definitely set his proxy box as the default gateway, but again, more complexity :)
<DrIP> hi all
<DrIP> i need to install my server without internet help!
<gregbrady> my goal is to be able to get Matthew back online tomorrow with the least amount of trouble and the most amount of understanding so that I can manage it
<gregbrady> But I would so enjoy learning more with you guys!
<kris_ph> hello.. where can php.ini be located in gutsy?
<andguent> DrIP: Any way the box can temporarily migrate to a place with internet? You probably don't need it for the cd installer, but its nice
<andguent> kris_ph: 'locate php.ini'
<andguent> gregbrady: are you ok setting the IP static or would you like help?
<DrIP> andguent: sadly, no...
<gregbrady> andguent, just point me....I think I know.
<DrIP> i have resources to download a 'full dvd' if possible
<andguent> DrIP: how far can you install without the connection? a basic cd should work unless you know it will be missing software you want
<kris_ph> Is it a good practice to hide the versions of apache and php in http head request?
<DrIP> it goes to about 83%
<DrIP> kris_ph: yes
<andguent> gregbrady: if you have gui on that box, it may be easiest to just use network manager to set things, default gateway and dns server can both probably be your belkin
<gregbrady> System, Administration, Network....
<kris_ph> DrIP: I got the list of php.ini already.. but the thing is.. which php.ini location I need to modify to hide php details?
<andguent> gregbrady: IP address can be any similar address as the other units on your network, as long as the last number is different
<andguent> kris_ph: usually /etc/php/apache/php.ini, but don't quote me on that :)
<kris_ph> DrIP: I just want to modify expose_php to OFF..
<kris_ph> andguent: thanks..
<DrIP> kris_ph: andguent should better hear that question :p i have yet to get a working install of ubuntu server >.<
<DrIP> err yea
<DrIP> andguent: is there a dvd of server?
<andguent> kris_ph: the real test is to figure out how to detect if that info is showing, adjust something, and see if the info disappears
<DrIP> i can not use internet... it *has* to be a sterile server
<kris_ph> andguent: yeah.. I tested it already.. and I detected it that it is showing its apache version and php information..
<kris_ph> andguent: that's why I'm going to modify it.. ")
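For the record, the setting kris_ph is after is one directive (the path varies by release, which is why `locate php.ini` was suggested above — treat the filename as an example):

```
; /etc/php5/apache2/php.ini (location varies by release):
; stop PHP advertising itself in the X-Powered-By header
expose_php = Off
```

The Apache half of hiding version info is `ServerTokens Prod` and `ServerSignature Off` in the Apache configuration; both changes need an apache2 restart to take effect.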
<andguent> DrIP: any special software you will need to install? are you able to sneaker net any extra needed debs from another location?
<DrIP> andguent: i just need wireless for my internet... but i can't use ndiswrapper and the linksys WMP11 isn't supported :(
<DrIP> (and yes, i'm trying to get a server on wireless
<DrIP> )
<gregbrady> ok, I think it is set
<gregbrady> should I logoff and back on to the network to be sure?
<andguent> DrIP: http://www.ubuntu.com/getubuntu/downloadmirrors has dvd links at the bottom
<gregbrady> ok, that did not work
<DrIP> ahhhh
<andguent> gregbrady: can you get a second computer available for IRC and then break your server network settings again so we can see how far it got?
<DrIP> andguent: thanks :)
<andguent> DrIP: np :)
<gregbrady> I got as far as setting my static settings, then failure
<gregbrady> system
<gregbrady> admin
<andguent> gregbrady: I'm assuming you are chatting from a different box then? is it still broken?
<gregbrady> network
<gregbrady> wireless connection
<gregbrady> properties
<gregbrady> uncheck roaming
<andguent> yup, following.... if at all possible, it would be good if the server was wired, but if it can't happen (and wireless has been working fine) then so be it
<andguent> with you so far
<gregbrady> again, server is wireless
<gregbrady> Everything is wireless
<andguent> it will work, as long as your wireless has been behaving :)
<DrIP> andguent: eep those dvds are all client :|
<andguent> DrIP: oh really? bleh, i thought there was a server edition
<gregbrady> My wireless has worked for over 4 years, it should be ok today
<DrIP> yea :(
<DrIP> how can i get my server installed without internet...
<kris_ph> Is there anybody here who uses phpsysinfo?
<DrIP> brb... i need my lappy...
<gregbrady> Then I find the Network Name, password type,password, I choose ip address subnet and so on
<andguent> gregbrady: ok, where does the connection fail?
<gregbrady> when I close and reconnect
<gregbrady> It does not connect
<gregbrady> you know what, this is probably a propriety driver problem
<andguent> after hitting that reconnect, can you confirm the IP address was properly set via ifconfig?
<gregbrady> that is why the ltsp did not work
<andguent> i couldn't tell you, it seems strange that any driver could do dhcp, but not static.... however it sounds like you have beat that bush to death already
<DrIP> bbl
<andguent> i assume youve double checked your gateway address, and still can't ping your gateway or anything?
<gregbrady> yup, all my wireless cards across 6 machines in total are not supported
<gregbrady> I have to use ndiswrapper, I believe
<andguent> that still should allow static IPs though...
<andguent> ....should..
<gregbrady> No, I came across this before
<gregbrady> belkin+linux=no good
<andguent> well, then i guess we stick with dhcp then, put it back to roaming
<andguent> can you sign into your router and see about setting a dhcp reservation? anything you can do to prevent your server from changing IPs the better
<gregbrady> no, too much of a pain for me as administrator to redirect when my server decides to change the address
<gregbrady> Belkin has very simple, easy to use hardware.  Not powerful but very easy to set up under Windows, unfortunately
<andguent> any basic router should have that option, as long as you don't change routers you won't have a problem.... even if you do, if we keep the setup simple, you will just have to go into firefox and disable the proxy to bypass
<gregbrady> I remember that Windows handles this problem......Belkin writes drivers for it, but not Linux.
<andguent> i'm assuming you are able to web into your router?
<gregbrady> yup
<andguent> have you confirmed that you can't set a reservation for your server?
<gregbrady> yup
<andguent> (sorry if ive asked this before)
<andguent> bleh
<andguent> i've never seen one not have the option in years, but I will leave it to Belkin to prove me wrong :)
<gregbrady> Well, if you hear of a Belkin again, don't waste your time helping them out.
<andguent> we still can do the internet filter, i just don't know how often your server box changes IPs
<gregbrady> There is no VNC option, LTSP does not work, no internet filters and also no static ips
<gregbrady> It kind of makes it secure!
<andguent> no VNC?? are you able to do file sharing at all or anything?
<gregbrady> nope
<gregbrady> no tunnelling
<andguent> can you ping another workstation?
<gregbrady> yes
<gregbrady> sorry, no vnc over the wan
<andguent> try setting up firefox on another computer to use your server's proxy IP...
<andguent> just for kicks
<gregbrady> that will work
<gregbrady> but if I try to setup a vnc from outside, no go
<gregbrady> no settings for it
<andguent> ok, i misread you then, that would seem like a basic feature, but it shouldn't stop your internet filter project
<gregbrady> no, it will work, but it is up to the dns server to have the same address, or me running around making changes
<gregbrady> *up to the dns server
<andguent> the dns server is allowed to change, that shouldn't affect the filtering
<gregbrady> no, just the proxy settings of all clients
<andguent> even if name lookups go direct to the internet, the actual page loading will still be filtered
<gregbrady> ok, hang on then, I misunderstand
<andguent> i'm not sure if i'm understanding you... heh
<gregbrady> ok, sorry.....
<andguent> I will hold my tongue a sec, mind explaining what you mean once more?
<gregbrady> the server will not have a set ip address
<gregbrady> how can my clients find it?
<gregbrady> hahaha, you can answer now!
<andguent> normally, computers don't change IP very often, even if using DHCP
<andguent> it does happen, but it shouldn't be a regular thing
<gregbrady> ok, maybe I misunderstood, let's carry on then!!!!
<andguent> i'm looking into dhcp client man pages now to see if there is a way you can keep it asking for the same address
<gregbrady> I will also research, but let's carry on for now, agreed?
<andguent> another option is to see if name lookups work properly on your network.... samba style name broadcasts may take care of your problem (set proxy filter IP to server name)
<andguent> yes, carry on we shall :)
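If the router really can't hand out DHCP reservations, a fallback not walked through in the channel is giving the server a static address outside the router's DHCP pool via /etc/network/interfaces. A sketch follows; the interface name and all addresses are placeholders, and the stanza is only printed here rather than written to the live file:

```shell
# Sample /etc/network/interfaces stanza for a fixed server address.
# eth0, 192.168.2.50 and the gateway are assumptions -- match your LAN,
# and pick an address outside the router's DHCP range.
stanza=$(cat <<'EOF'
auto eth0
iface eth0 inet static
    address 192.168.2.50
    netmask 255.255.255.0
    gateway 192.168.2.1
EOF
)
echo "$stanza"
```

After appending something like this to /etc/network/interfaces (and removing the dhcp stanza for that interface), restart networking and the server's address stops moving regardless of what the router supports.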
<gregbrady> I seem to also have a problem with samba name lookups....they take forever
<andguent> ok, squid works, lets get dans up and running, going back to the guide it says change one line, i think i want to do slightly more then that...
<andguent> get yourself into that dansguardian.conf, we can page down together, i agree that you want to lose the UNCONFIGURED line
<gregbrady> ok, let me get that far.
<gregbrady> shoot, I lost the address for the instructions after the network loss...do you have what I sent you?
<andguent> yup
<andguent> https://wiki.ubuntu.com/EdubuntuProxy
<gregbrady> yup, I'm into the editing
<andguent> ok good, lose the UNCONFIGURED line if you haven't already, some other options in here are good to know about, but not required
<andguent> loglevel = 3 is sometimes excessive, you can bump it down, but you start losing track of what websites people have successfully gotten to
<andguent> remember your /var/log/squid/access.log -- exact same concept, but for dans
<gregbrady> ok, so save it?
<andguent> i actually recommend one other change
<gregbrady> ok, no problem
<andguent> about 5-6 pages down (depending on page size), track down virusscan = on
<andguent> this is a nice feature when it works, but leaving virusscanning on can cause headaches, and since you are all ubuntu, who the heck cares about viruses anyway? :)
<gregbrady> it is on
<andguent> turn it off, earmark it as something to tinker with later
<gregbrady> I will turn it off
<andguent> ok, save, quit that file
<gregbrady> ok, saved
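The edits just made by hand can also be scripted. The sketch below applies the same two changes (comment out UNCONFIGURED, turn virusscan off) with sed, against a throwaway copy so it is safe to run anywhere; the sample lines are paraphrased from memory, so check the real /etc/dansguardian/dansguardian.conf before pointing sed at it:

```shell
# Make a disposable stand-in for /etc/dansguardian/dansguardian.conf
# (contents paraphrased -- the real file is much longer).
conf=$(mktemp)
cat > "$conf" <<'EOF'
UNCONFIGURED - please remove this line once configured
loglevel = 3
virusscan = on
EOF

# Comment out the sentinel line and disable virus scanning in place.
sed -i -e 's/^UNCONFIGURED/#UNCONFIGURED/' \
       -e 's/^virusscan = on$/virusscan = off/' "$conf"
cat "$conf"
```

On the real box you would follow this with `sudo /etc/init.d/dansguardian restart` to pick up the changes.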
<andguent> one more file that the guide doesn't touch.... same directory dansguardianf1.conf
<andguent> in here, note how many other directories dans checks, that first page worth is just an fyi for things to browse through
<andguent> naughtynesslimit is something to note and tweak later, especially if while visiting medical websites, you and your wife get denied too often
<gregbrady> file not found
<andguent> mmmm... /etc/dansguardian/dansguardianf1.conf ?
<gregbrady> nevermind......(turning red)
<andguent> its all good :P I'm amazed I'm still typing this coherently :)
<gregbrady> I'm sorry to keep you up......
<gregbrady> I owe you a few drinks of choice
<andguent> no no, quite alright
<Wrathtide> would a LAMP server be suitable for usage as a file server?
<andguent> amusingly enough, i haven't done this filter setup in a while, i need the practice
<andguent> Wrathtide: it definitely can, but it will have extra programs running
<andguent> as long as it is on a home network, and not exposed to the outside internet, it really doesn't matter either way -- do you trust the people on your network not to break your stuff? :)
<gregbrady> I'm sorry guys, I appreciate the extra suggestions, but could we just keep it simple for tonight?  We've been at it for hours....I'd love to learn further about all options though!
<Wrathtide> andguent: they would break it if they could :\
<Wrathtide> inept fools, i just had to reformat the household pc
<andguent> gregbrady: i'm assuming you got into dansguardianf1.conf? did you see my notes above?
<gregbrady> oh yeah
<gregbrady> and I'm ready
<andguent> Wrathtide: in that case, just do the normal install, setup samba with password requirements or read only the whole way, drop the lamp
<andguent> gregbrady: I have not experimented with bypass options, but it might be worth tinkering with later, especially if your wife doesn't have the sudo password on this box :)
<gregbrady> she has sudo
<Wrathtide> andguent: right now i have FreeNAS running an ssh server which is giving me better speeds than cif/nfs
<gregbrady> (and other tools)
<andguent> and virusscan reappears in this file, turn it off again for good measure (i think turning it off once is enough, but lets kill it until its dead)
<Wrathtide> andguent: meh, ill just stick to freenas till i restore all the data back
<andguent> Wrathtide: speed is good, if it does what you want, dont kill yourself on something new :)
<Wrathtide> =)
<Wrathtide> thanks
<andguent> gregbrady: I think we are done with this file, save quit
<gregbrady> done
<andguent> ok, back to the guide, their next two steps sound good
<andguent> start, ps aux, cross fingers (in no particular order)
<andguent> sudo /etc/init.d/dansguardian start && ps aux | grep dansguardian
<gregbrady> andguent...you are addressing whom?
<andguent> gregbrady: you, sorry, i assumed the room went empty again :)
<andguent> kgoetz: You still around? :)
<gregbrady> ok, seems to have gone ok
<andguent> good good, lets do my netstat test again: 'sudo netstat -ntlp|grep dans'
<gregbrady> and now to test?
<andguent> should give you 0.0.0.0:8080
<gregbrady> tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN     27394/dansguardian
<andguent> eeeeexcellent
<gregbrady> I'm excited here!
<andguent> go back to your firefox proxy settings, change the port from 3128 to 8080
<gregbrady> ok
<andguent> after that, lets see if basic web visits still work
<gregbrady> works ok....I hope that is a good thing!
<andguent> if you have success, your web visit should be logged in two places: 'tail /var/log/dansguardian/access.log' and 'tail /var/log/squid/access.log'
<gregbrady> wooooooooooohoooooooooooooo
<andguent> it might show a little differently, but your visit should get logged twice, telling you that your connection went through both programs
<andguent> now for the real test: visit badboys.com
<gregbrady> You are good!
<andguent> you should get a big red NO SOUP FOR YOU message
<gregbrady> Access denied
<andguent> wonderful
<andguent> you have a functioning internet filter
<gregbrady> You are very very good!
<andguent> :) I have practice
<andguent> a few more tweaking files to consider... all in /etc/dansguardian:
<gregbrady> I thank you for you understanding and putting up with my tantrums
<kgoetz> andguent: sorry, yes i am
<kgoetz> im/ rebuilding a system, so i'm paying attention to that
<andguent> bannedextensionlist will cause some grief, will prevent you from downloading exe's, avi's, isos, and other silliness, start commenting some of these out
<andguent> kgoetz: no problem at all, just curious if anyone was watching
<kgoetz> andguent: no, no one at all :)
<andguent> gregbrady: Making sure you caught that file mention before I give more
<gregbrady> andguent, you know what, I'm happy for today....thank you.......let's leave the rest until later!
<kgoetz> exeptionsitelist is a good on too
<kgoetz> incase it hasnt been meantioned ;)
<andguent> definitely, i didn't want to dump my top 10 just yet, i think he might be off showing his wife or something :)
<gregbrady> I hope it is ok if I come back here to learn again.  You have been great.
<andguent> bannedextensionlist just drives me batty, especially on a network with hundreds of windows boxes and all I want is a stinkin driver download
<andguent> yup yup, i probably will be around :) please read all of the file names in /etc/dansguardian and be aware they are there, if you want to block a site, or unblock a site, start there
<andguent> like kgoetz mentioned, exceptionsitelist can be important (more so if you have windows boxes trying to hit updates.microsoft.com)
<kgoetz> or archive.ubuntu.com ...
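To make the two list-file tips concrete, here is a sketch of what the entries might look like. The specific domains are just the ones named in this conversation, and dansguardian treats a bare domain as also matching its subdomains:

```shell
# Hypothetical whitelist entries for /etc/dansguardian/exceptionsitelist
# (sites that must never be filtered, e.g. OS update servers).
exceptions='updates.microsoft.com
archive.ubuntu.com'
echo "$exceptions"

# And to stop bannedextensionlist eating a needed driver download,
# comment out the offending extension entry rather than deleting it:
commented=$(echo '.exe' | sed 's/^/#/')
echo "$commented"    # a "#.exe" line in bannedextensionlist is ignored
```

As with the main config, a `sudo /etc/init.d/dansguardian restart` is needed after editing any of the list files.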
<andguent> go nuts setting proxy server settings in firefox and see who hits the wall first :) I'm sure there are ways to be notified if someone gets a denied message
 * kgoetz is afk for 20-30 min - getting more water
<andguent> also note what is within ./phraselists/, there are a lot of naughty words in there. I hear there are massive updated lists you can track down from somewhere, but i never got that far, the current setup does fairly well
<gregbrady> shoot, now all my machines have no wireless access
<andguent> uhhh, don't know what to do for you there.... reset the router?
<gregbrady> nope
<andguent> everything we did today should be completely opt-in, its not going to take anything over automatically
<gregbrady> I think I will reinstall Linux on all machines and not worry about networking....it is not it's strong point.
<gregbrady> Everytime I try to share directories or files in linux it is just a hassle
<andguent> ?? uhm, ok seems like a lot of work, sorry to hear that wifi is such a pain
<andguent> maybe tomorrow night we can play with samba :)
<gregbrady> No, wifi under xp is no problem
<kgoetz> wifi always has been (and will be for the foreseeable future) a pain
<gregbrady> nope, worked under xp with no issues
<kgoetz> i'm not talking about xp...
<gregbrady> oh, sorry
<gregbrady> andguent, I appreciate the try though!
<gregbrady> I have a ton of re-installing to do before I sleep!
<andguent> are you wiping linux off or something?
<andguent> or just reinstalling fresh
<gregbrady> I need the protection working
<kgoetz> pity you have to use samba.
<kgoetz> ssh/nfs are so much easier
<andguent> i've always found samba easier, but ive also been playing with it for roughly 7 years more
<gregbrady> No, I must go back to Windows!
<kgoetz> andguent: how can samba possibly be easier than anything else :o
<andguent> just keep one linux box for the filtering, you said wifi worked fine for 4 years!!
<gregbrady> andguent, on your belief, I will keep linux until tomorrow
<andguent> if it worked for 4 years, one night down is ok, really it is :)
<gregbrady> No, one night of porn....remember the reason
<gregbrady> Any OS can direct all sites
<andguent> samba: set share level, create share, set perms on share, samba restart -- i do it in my sleep
<gregbrady> This seems to be a real hurt point for linux....if a solution can be found, maybe we could take advantage of it!
<kgoetz> andguent: nfs: add share to export, optional extra controls, nfs reload
<andguent> kgoetz: i'm sure its very easy, i've just never had a need, yay for lowest common denominator
<kgoetz> andguent: heh, no joke
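The two recipes traded above, sketched concretely. The share name, path and network below are invented for illustration, and the snippets are only printed, not installed:

```shell
# Samba: a minimal read-only share to append to /etc/samba/smb.conf,
# followed by a samba restart (share name and path are assumptions).
smb_share='[shared]
   path = /srv/files
   read only = yes
   guest ok = yes'
echo "$smb_share"

# NFS: the equivalent one-liner for /etc/exports, followed by
# "sudo exportfs -ra" to reload (subnet is an assumption).
nfs_export='/srv/files 192.168.2.0/24(ro,sync,no_subtree_check)'
echo "$nfs_export"
```

Which is "easier" mostly depends on the clients: the samba share is reachable from Windows boxes out of the box, while the NFS export is the shorter config but only helps *nix clients.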
<gregbrady> shoot, my internet is down except for the server now
<kgoetz> ssh is better
<andguent> ?
<kgoetz> but lets leave that alone
<andguent> when you say down, you mean before or after firefox got additional proxy filter settings?
<kgoetz> bbs
<andguent> via sshfs? fun toys there....
<andguent> samba seems to be able to pay bills better, most business users want windows, don't care what the server is as long as it works, nfs is only for linux users really..
<gregbrady> Shoot, the connection does not even come up
<andguent> gregbrady: revert all browser proxy settings before assuming the connection is down :) if one is up, something must be working
<andguent> ??
<kgoetz> andguent: s/linux/unix
<kgoetz> sshd+unix desktops vfs== win
<andguent> kgoetz: i stand corrected
<gregbrady> no browser
<andguent> well, sit corrected...
<gregbrady> new boot
<andguent> nfs is only for *nix users really..
<gregbrady> Ok, none of my 5 machines boots to a workable network now.  I'm offline
<andguent> gregbrady: ? how are you chatting?
<gregbrady> why?
<andguent> are you using one of those 5 computers for irc?
<gregbrady> Yes, the server
<gregbrady> good point
<andguent> so, when you say the network is down, have you tested anything other than firefox? can anyone ping google?
<gregbrady> I cant' get on the net
<andguent> what message do you get when you ping www.google.com
<gregbrady> I get ping results
<andguent> ok, if it gives you numbers, then your wireless is fine
<gregbrady> ok, so now it works!
<gregbrady> Why is that?
<andguent> when you were using firefox to get to a site, what message was it giving? was it saying proxy unavailable, or malformed url?
<gregbrady> unavailable
<andguent> if you have any wireless weirdness, web browsing will now see the weirdness twice as often.... everyone has to go to your server, and then back out to the internet
<andguent> there are too many possible reasons why it dropped
<gregbrady> Ok, so the solution is?
<andguent> but if pinging google.com works, and the web browser doesn't, double check to make sure both filtering programs are running
<andguent> also keep in mind, if you restarted the server, it may take a minute for squid and dans to finish starting up
<gregbrady> No, all works well on the laptop....it just took 15 mins
<andguent> dansguardian has an awful lot of data to search through in its own phraselists, squid is anal about its cache files, they both double check stuff on startup
<andguent> did this happen immediately after the server restarted?
<kris_ph> Hello... is there a way to prevent users from accessing restricted sites with the use of proxysites like www.anonymouse.org?
<kgoetz> stop users getting to those sites?
<andguent> i dont think anonymouse is trying to offer that type of feature, but i only glanced at the page just now
<pnukeid> heloo there
<gregbrady> I will have to pay back somehow, thanks!
<pnukeid> i was trying to install mysql 5.1 from source
<kgoetz> grrr. now i'm having network issues *grin*
<kris_ph> yeah.. preventing them going to restricted sites using anonymouse.. a good proxysite is www.proxycentre.com
<pnukeid> but i always have problem
<pnukeid> anybody can help me ?
<kris_ph> if the user knows that site.. even if in your firewall you denied them accessing friendster.com or youtube.com... they could still be able to access those sites..
<kgoetz> depends on your proxy
<andguent> gregbrady: good luck, i might be back on tomorrow night, or there should be hundreds of other people able to help, just don't install XP, its baaaad :) Goodnight everyone, good luck on your projects
<kris_ph> I don't think so.. IPcap can't even filter it...
<kgoetz> ipcap being IP level?
<gregbrady> andguent, you are correct, I just got frustrated!
<kris_ph> did you get what I mean?
<kgoetz> kris_ph: i'm not sure i did
<kgoetz> pnukeid: try #mysql, or are you installing from ubuntu source?
<pnukeid> no i try install mysql from source
<kgoetz> pnukeid: try #mysql
<pnukeid> ok thx..
<kris_ph> kgoetz: okay.. this is the case.. I want users in the LAN as well as in WLAN not to be able to access www.friendster.com and www.youtube.com since it is an educational institution... but when they know and use proxysites like the ones I mentioned.... well, too bad.. they could gain access
<pnukeid> may be in here have experience with my problem too
<kgoetz> kris_ph: i havent tried going to proxy sites in dans, so i dont know if it would catch stuff
 * kgoetz tries it out
<kris_ph> kgoetz: okay..try it out.. and tell me what happens...
<kgoetz> kris_ph: whats a proxy i can try and use?
<kris_ph> kgoetz: in your router or in any firewall...try to deny users to access those sites...  you try www.proxycentre.com
<kris_ph> kgoetz: let's say... you block them accessing www.friendster.com then try to access www.friendster.com using www.proxycentre.com and tell me what happens..
<kgoetz> kris_ph: i cant see a place to put in a custom site.
<kris_ph> kgoetz: ows.. I can't understand what you meant..
<kgoetz> kris_ph: i want to try and visit 4chan (i know my proxy will block that) - but i cant see a way to do that at proxycentre
<kris_ph> kgoetz: in Enter the URL Address: you type the website URL ....then hit enter... and tell me what happens...
<kris_ph> kgoetz: you could find Enter the URL Address: in the webpage of www.proxycentre.com
<kgoetz> ah, i didnt see that option
<kgoetz> kris_ph: dansguardian blocks this link (4chan)  http://www.proxycentre.com/index.php?q=aHR0cDovL3d3dy40Y2hhbi5vcmcv
<kgoetz> reason is:
<kgoetz>    ... has been denied for the following reason:
<kgoetz>    Weighted phrase limit exceeded.
<kgoetz> "2008.3.15 16:56:26 - 192.168.0.2 http://www.proxycentre.com/index.php?q=aHR0cDovL3d3dy80Y2hhbi5vcmcv *DENIED* Weighted phrase limit of 50 : 240 " (i've cut off the text it blocked on)
<kgoetz> in fact, using the proxy it was more severely blocked than without: " 2008.3.15 16:48:59 - 192.168.0.2 http://www.4chan.org/ *DENIED* Weighted phrase limit of 50 : 73"
<kris_ph> kgoetz: you mean.. dansguardian was able to filter it?
<kgoetz> kris_ph: i mean exactly that
 * kgoetz wonders if he just shattered kris_ph 's filtering world
 * kgoetz must have
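Worth noting: the weighted-phrase scan happened to catch the proxied page here, but that is content-dependent luck. A belt-and-braces option is to list known anonymizer sites in bannedsitelist as well; the entries below are just the two proxy sites named in this conversation:

```shell
# Hypothetical additions for /etc/dansguardian/bannedsitelist --
# block the proxy front doors themselves, not just the proxied content.
banned='anonymouse.org
proxycentre.com'
echo "$banned"   # append to bannedsitelist, then restart dansguardian
```

New proxy sites appear constantly, so this is a cat-and-mouse game; the phrase scan remains the backstop for ones you haven't listed yet.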
<MacKulkin> anybody knows how to make a streaming server on linux using ipcam ?
<Jeeves_> Morning
<Jeeves_> Does anyone know if rsync 3.0 will make it into hardy?
<pschulz01> Evening...
<pschulz01> Can anyone here recommend an SNMP tool that they actually use?
<pschulz01> (I am using cacti for some stuff..)
<kris_ph> Is there a way to automatically update the ubuntu box everyday by its own?
<spiekey> hey!
<spiekey> whats wrong with this interfaces config? http://pastebin.ca/
<spiekey> errr...http://pastebin.ca/943568
<nijaba> morning
<spiekey> hi
<spiekey> i get: SIOCSIFFLAGS: Cannot assign requested address
<nijaba> kris_ph: just install unattended-upgrade
<kris_ph> what are the pros and cons with it?
<zul> pros you dont have to worry about it, cons you worry about it breaking your system
<kris_ph> I've been receiving emails with these details: from Cron Daemon ; subject: Cron <smmsp@****> test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp; Message: /usr/share/sendmail/sendmail: line 875: /usr/sbin/sendmail-msp: No such file or directory
<kris_ph> how would I solve it? I will not be using sendmail anymore...
<faulkes-> I take it you uninstalled sendmail via apt?
<faulkes-> dpkg -P <package>
<faulkes-> which should purge anything (like config files) which are still kicking around
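faulkes-'s suggestion, sketched as a dry run. The wrapper just prints each command; swap it for the sudo variant to actually execute. Which sendmail sub-packages are installed is an assumption, as is the guess that the smmsp crontab is the source of the mails:

```shell
# Dry-run wrapper: prints each command instead of running it.
# To really execute, change to:  run() { sudo "$@"; }
run() { echo "+ $*"; }

run dpkg -P sendmail sendmail-base sendmail-bin   # purge configs + cron hooks
run crontab -u smmsp -r                           # drop any orphaned smmsp crontab
```

Purging (as opposed to plain removal) is what clears the config files and cron fragments that keep firing after the daemon itself is gone.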
<faulkes-> nijaba: gonna grab a new copy of the survey and go after some of the recently submitted bugs
<kris_ph> hello... can anybody here point me to a link with guide on how to install fastcgi in ubuntu gutsy?
<gregbrady> good morning all.
<gregbrady> Is there a good dansguardian log analyzer out there?
<soulc> anyone here?
<soulc> ahhhhh
<soulc> anyone know how to fix the eyecandy stuff ?
<gregbrady> Is there a way to set a computer running ubuntu up to use a proxy server, no matter what?  I changed Firefox for my son, but then I see that he also uses other browsers, which defeat the purpose!
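Nobody answers this in the channel, but one common (and bypassable) approach is setting the proxy environment variables system-wide, e.g. in /etc/environment, which most non-Firefox tools honor; truly forcing all traffic through the proxy needs a firewall redirect on the gateway instead. The address and port below are assumptions matching the server discussed earlier:

```shell
# Per-session sketch; for all users, the same assignment would go in
# /etc/environment (server IP and port are assumptions).
http_proxy="http://192.168.0.7:8080"
export http_proxy
echo "$http_proxy"
```

A determined user can still unset the variable, which is why environment-based proxying is best treated as a convenience default rather than an enforcement mechanism.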
<timboy> hey setting up dns here on my end my box's ip is 192.168.0.7 do I set that as nameserver in resolv.conf?
<timboy> or do i set my isp's nameserver ip?
<timboy> nevermind I got it
<keithclark> Hey every one, what is going on?  Pretty quiet in here!
<keithclark> Kamping_Kaiser, this dansguardian is working wonders!  Thanks!
<takedown_> hi, I'd like to know which is more secure, ubuntu or debian (not just out of the box)? do security fixes come faster on one, or are they the same? I know ubuntu is built with stack-protection features and has built-in apparmor support, while debian doesn't
<takedown_> or give me a link, please, to an objective comparison
<rgl> hi.
<rgl> will hardy support xen out-of-box?
<Kamping_Kaiser> rgl, being a xen client? i belive so
<Kamping_Kaiser> takedown_, what sort of answer are you expecting in *ubuntu*-server?
<Kamping_Kaiser> afk. heading out
<rgl> Kamping_Kaiser, being a dom0.  the xen packages seem to live in "universe" rather than in the normal archive :/
<Kamping_Kaiser> rgl, is dom0 a host or.../
<Kamping_Kaiser> s#/#?
<takedown_> Kamping_Kaiser: idk, whatever
<rgl> Kamping_Kaiser, dom0 is the host.  domU is the guest(?).
<Kamping_Kaiser> rgl, host i'm not sure about, but i expect it will
<soneil> rgl, I gather it's totally workable, just not the "supported" option.  ubuntu are going with kvm, but nothing stopping you using xen
<rgl> soneil, oh I found the https://wiki.ubuntu.com/UbuntuWeeklyNewsletter/Issue76 which says just that.  thx.
<alstone> ah here we are
<alstone> how do I get the distro name plz?
<ScottK2> alstone: Look in /etc/lsb-release
#ubuntu-server 2008-03-16
<The-Kernel> I can't seem to find where the dovecot and postfix logs are stored. Where would they normally go? I looked in var/logs/
<ScottK2> For postfix it'll be in /var/log
<ScottK2> any of the mail* files
<The-Kernel> yeah, I see those files, but there's nothing in them
<ScottK2> Then postfix isn't logging
<ScottK2> Unless you've changed where the logs go.
<The-Kernel> Nah, I just installed it via apt-get, and it worked from there on out
<The-Kernel> I don't get why it wouldn't log stuff
<ScottK2> Normally it does.
<lamont`> if you restart syslog, postfix stops logging.
<lamont`> until you restart postfix
<ScottK2> The-Kernel: Who does ls -l /var/log/mail* tell you owns the files? (plus what lamont said)
<symptom> is there a live cd for ubuntu-server?
 * lamont goes to town with family, back later
<The-Kernel> ok
<The-Kernel> root
<The-Kernel> root owns them
<The-Kernel> that must be why
<ScottK2> They should be user syslog and group adm
<The-Kernel> ok
<The-Kernel> will change
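The fix being discussed, sketched as a dry run (swap the echo wrapper for sudo to execute). The exact set of mail.* files present, and sysklogd being the logger in use, are assumptions based on this conversation:

```shell
# Dry-run wrapper: prints commands instead of running them.
# To really execute, change to:  run() { sudo "$@"; }
run() { echo "+ $*"; }

run chown syslog:adm /var/log/mail.log /var/log/mail.err /var/log/mail.warn
run /etc/init.d/sysklogd restart   # restart the logger first...
run /etc/init.d/postfix restart    # ...then postfix, per lamont's note that
                                   # postfix stops logging after a syslog restart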
<ScottK2> The-Kernel: You aren't using webmin are you?
<The-Kernel> what's that?
<ScottK2> Something you don't want to use.  You'd know
<ScottK2> !webmin
<ubotu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system. See !ebox instead.
<ScottK2> It's a web based admin gui that tends to make people cry eventually.
<The-Kernel> I see
<The-Kernel> !ebox
<ubotu> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See the plans for Hardy at https://wiki.ubuntu.com/EboxSpec
<The-Kernel> huh
<The-Kernel> still not logging
<The-Kernel> maybe postfix has turned logging off?
<The-Kernel> the mail.info.0 has stuff in it
<The-Kernel> it stopped on dec 17th
<The-Kernel> eh, well ebox doesn't seem to work for me.
<ScottK2> I'd suggest restarting syslog and postfix
<ScottK2> Ebox is still a work in progress.
<The-Kernel> I did
<ScottK2> Odd
<ScottK2> Check the permissions again?
<The-Kernel> in dec's logs it has data, just not in jans, febs, or marchs
<ScottK2> It sounds like something is up with your syslog.
<ScottK2> logrotate should have made dec's data go away a long time ago.
<The-Kernel> I see.
<The-Kernel> I don't have a syslog, just a sysklogd
<The-Kernel> same thing?
<faulkes-> no
<The-Kernel> I need to restart the system
<The-Kernel> there's nothing in any of the logs....
<mohamed_> hi, can anyone recommend a light desktop to install on ubuntu-server ?
<sommer> !servergui
<ubotu> Ubuntu server does not install a desktop environment or X11 by default in order to enhance security, efficiency and performance.  !eBox provides a GUI system management option via a web interface.  See https://help.ubuntu.com/community/ServerGUI for more background and options.
<sommer> mohamed_: I like openbox myself, but fluxbox is also good
<mohamed_> thx, sommer ,
<sommer> mohamed_: np
<RB2> Evening everyone.
<ylon> installing ubuntu gutsy on xenserver and it is hanging at the install onset, anyone have experience installing ubuntu with xen in the domU?
<ylon> anyone?
<RB2> ylon, sry not I
<ylon> :) thanks
<gregbrady> Using squid, does the server cache webpages automatically?
<gregbrady> With the default settings?
<smehmood> if you haven't already, gregbrady, i'd recommend asking in #ubuntu, its more lively at this hour
<gregbrady> smehmood, thanks, I'll give that a shot
<nijaba> morning
<BaD_CrC> before i download the 400+mb iso of ubuntu server, does it have a way to configure restricted drivers for wireless cards during install?
<atomic__> hello, can anyone recommend a good image restoring solution? i have a preinstalled and configured 7.10 client that i want to make an image of, and deploy to a lot of other client machines. i've used SystemImager to no avail, and since the machines have no optical drives, a netboot install (pxe,tftp) would be the best option for the job
<Kamping_Kaiser> mondo might netinstall, but i dont remember. otherwise i'm not sure
<atomic__> thnx, ill check it out now
<blue-frog> atomic__: systemrescuecd, clonezilla, oscar,
<atomic__> thank you, i really need to work this one out :)
<zul> clonezilla
<atomic__> yeah, clonezilla looks like the best candidate
<kris_ph> atomic__: mmm, have you used it?
<kris_ph> clonezilla?
<atomic__> im setting up the drbl server right now
<atomic__> try and prepare everything
<atomic__> and later ill have to drive to work to actually netboot the machines and get it going :)
<atomic__> hopefully everything will be ok
<atomic__> systemimager exploded on me yesterday, spent 2 hours tracing variables in a perl script
<kraut> moin
<atomic__> whoa, looks like it uses a lot of other stuff too, ill have to stop that dhcp server afterwards
<The-Kernel> Hi, I have telnet installed on my server, which seems to be unresponsive to everything except ping.
<The-Kernel> Can I use port 443 with telnet to log into my server?
<ivoks> telnet client or telnet server?
<The-Kernel> client
<The-Kernel> I'm trying to get into the server
<The-Kernel> but something has happened to it
<ivoks> you have telnet client on server?
<The-Kernel> ivoks I have both a client and a server on the server
<The-Kernel> I want to use a client to telnet to the server
<The-Kernel> I need to figure out how to get into the server
<The-Kernel> everything is rejecting me
<The-Kernel> I'm able to ping it though
<The-Kernel> this is what i get with ssh ssh: connect to host 216.139.15.128 port 22: Connection timed out
<ivoks> it probably crashed
<The-Kernel> yeah
<The-Kernel> its 30 miles away
<The-Kernel> can't get to it from here
<ivoks> bad luck :)
<The-Kernel> I hate ebox
<The-Kernel> it doesn't work
<The-Kernel> there is no way to trigger a restart or anything?
<The-Kernel> dang....
<The-Kernel> I get my web site if I telnet crowenix.net 443
<ivoks> well, no, if you don't have ilo or something like that
<Nafallo> drac? :-)
<The-Kernel> ilo?!
<ivoks> ipmi :)
<The-Kernel> !ilo
<ubotu> Sorry, I don't know anything about ilo - try searching on http://ubotu.ubuntu-nl.org/factoids.cgi
<The-Kernel> !drac
<ubotu> Sorry, I don't know anything about drac - try searching on http://ubotu.ubuntu-nl.org/factoids.cgi
<Nafallo> !ipmi
<ubotu> Sorry, I don't know anything about ipmi - try searching on http://ubotu.ubuntu-nl.org/factoids.cgi
<ivoks> integrated lights out
<The-Kernel> !ipmi
<ivoks> drac is dell's version of ilo
<The-Kernel> huh?
<The-Kernel> how does it work?
<ivoks> and ipmi is intel specification; often used by supermicro
<ivoks> The-Kernel: ilo enables you remote access as if you are in front of the computer
<ivoks> you can control power of your machine
<ivoks> you can turn it off, or on
<The-Kernel> yeah, but that doesn't help right now
<ivoks> it has it's own ip
<The-Kernel> my server, yes
<ivoks> i was talking about ilo :)
<The-Kernel> I guessed that
<ivoks> do you have access to remove control of power supply?
<The-Kernel> what?
<The-Kernel> not now
<ivoks> s/remove/remote
<The-Kernel> There's no one at that office so...
<The-Kernel> oh, no
<ivoks> so, no ports are open except https?
<The-Kernel> um...
<The-Kernel> haha
<The-Kernel> I can still check mail on the server
<The-Kernel> actually
<The-Kernel> ftp is
<The-Kernel> can I use that in some way?!
 * Nafallo wonder what The-Kernel is really trying to do :-)
<The-Kernel> I figured it out
<The-Kernel> One of the guys who have access to root on the server, installed ebox
<The-Kernel> I was talking about it last night
<The-Kernel> with him, and he went ahead and installed it without asking me
<The-Kernel> can I log in using https?
<zul> to ebox it uses https if its running
<sommer> anyone know why I'm "unable to write random state" unless using sudo for this command:  openssl x509 -req -days 365 -in smtpd.csr -signkey smtpd.key -out smtpd.crt
<tzn> sommer: please take a look at http://www.openssl.org/support/faq.html#USER1
<tzn> but you can safely ignore that message I think
<sommer> tzn: ah I see, thanks
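For the record, the usual cause of "unable to write random state" on Ubuntu is a root-owned ~/.rnd left behind by an earlier `sudo openssl` run, which then blocks openssl from updating its random seed file as a normal user. A hedged sketch of the check and fix, as a dry run:

```shell
# Dry-run wrapper; change to  run() { sudo "$@"; }  to really execute.
run() { echo "+ $*"; }

run ls -l "$HOME/.rnd"          # if the owner shows as root, that's the culprit
run chown "$USER" "$HOME/.rnd"  # hand the seed file back to your own user
```

As the FAQ tzn linked notes, the message is harmless to the certificate being generated either way; fixing ownership just makes it go away.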
<owh> sommer: Ping
<sommer> owh: icmp reply
<owh> That's almost as quick as TCP over pidgin :)
<owh> How did the cron test go for you?
 * sommer checking
<sommer> what am I looking for exactly?
<sommer> there are two entries in the /var/log/date.log file
<owh> What times?
<leonel> hello, I've just seen on keescook's blog that selinux is available for hardy.. are there already policies built for postfix, openssh, apache and friends?
<owh> sommer: Something like 6:25 am?
<sommer> owh: 6:26 on the 15th and 6:41 on the 16th... on the hardy test
<owh> sommer: Is that server doing anything real, as in, is it busy?
<sommer> owh: nope... it's a test machine
<owh> sommer: What do the syslog* say about daily, zgrep daily /var/log/syslog*
<sommer> owh: no errors
<owh> sommer: Are the jobs being run by cron or by anacron?
<sommer> owh: anacron
<owh> sommer: Are you seeing: /USR/SBIN/CRON[7417]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
<owh> Or something else?
<sommer> yep that's what I'm seeing
<owh> sommer: There are only two hits, or are there also anacron entries on separate lines?
<sommer> owh: http://paste.ubuntu-nl.org/59868/
<owh> sommer: Thanks. Does /usr/sbin/anacron exist?
<owh> I'm guessing that you're using cron.
<sommer> oh, sure cron yep
<owh> This is getting stranger and stranger.
<owh> That machine hasn't been rebooted since you added the test script?
<sommer> nope... was it supposed to be?
<owh> No
<owh> Is there a way to determine if cron was reloaded/restarted *at all* since the last time the machine rebooted?
<faulkes-> syslog?
<faulkes-> it should report a restart in there
<owh> faulkes-: Would that show a cron restart / reload?
<faulkes-> for an invidual crontab or for crond as a whole?
<sommer> owh: nothing in daemon.log or syslog that I can see
<owh> faulkes-: crond as a whole.
<owh> sommer: That server was a boring, boot from cd installation?
<sommer> owh: yeppers
<owh> crap
<owh> So, you and I are running the same thing and seeing different behaviour.
<owh> Lovely :)
<sommer> I could do an exciting network install if you want?
<faulkes-> cat /var/log/syslog | grep -i anacron
<owh> faulkes-: Just so you know what we're trying to figure out: https://wiki.ubuntu.com/OnnoBenschop/ubuntu-server/cron-bug
<owh> sommer: Don't think it would do anything for us.
 * faulkes- nods
<faulkes-> k
<owh> sommer: That was a full cd install, not a business card cd right?
<owh> sommer: alpha-6?
<sommer> it was a daily build from feb 26... only one I had on hand after the libc thing
<owh> So slightly different versions perhaps.
 * owh cannot recall when alpha-6 was released.
<sommer> I can update and check again tomorrow... actually need to update anyway
<owh> To me it looks like once it's detected a change, from then on it works, but I have no data to back that up.
<owh> It's like a missing flag/file of some sort almost.
<owh> sommer: I don't know if an update will actually change the outcome.
<soren> What exactly does "Added date script cron job without reloading cron" mean?
<sommer> well either way... we'll find out
<timboy> anyone using roundcube?
<soren> timboy: yes
<owh> soren: On that wiki page?
<timboy> soren, you using postfix?
<owh> soren: If you mean on the wiki page I showed faulkes-, then, the script at the top, the two liner, date >> /var/log/date.log was created in the /etc/cron.daily directory. It was called date and chmodded +x to enable it. It was then run to test it and left alone. cron was not started/reloaded/restarted. The server was not rebooted.
<owh> soren: Is that what you're asking?
<owh> sommer: Yes. That update, will you do it as an install from scratch, or an upgrade?
<soren> owh: The one you posted alink to.
<sommer> owh: upgrade
<soren> timboy: Yes.
<owh> soren: See above.
<soren> owh: Right.
<soren> owh: Well, if you're putting stuff in cron.daily, cron doesn't even need a reload.
<owh> So what I'm saying is that in some circumstances adding a cron job as a script to cron.daily does not cause it to run automatically. What we're trying to figure out is what causes it not to run. Sometimes it works as expected, sometimes it doesn't appear to.
<owh> soren: And because it's a pretty big claim to say that cron is broken, I'd rather not start making such statements until I know what's going on :)
<timboy> soren, how do I tell if I'm set up properly. I try the test in the setup for roundcube and it fails... I am running dovecot
<owh> I suppose I could just ignore the problem and make it a SOP to reload cron when adding a script.
<soren> owh: have you tried run-parts --list?
<owh> sommer: Thanks for your help. If you can think of anything else, please feel free.
<owh> soren: Yes, it reports all scripts as expected.
<sommer> owh: sure np
<owh> I've got some Ubuntu-CD's going back to dapper, I think I'll do some CD installs and subject them to the same test and see what gives.
<owh> In fact, perhaps the LiveCD exhibits the same problem.
<owh> That will simplify things :)
<owh> brb
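[For anyone hitting the same symptom later: one frequent cause of a cron.daily script silently never running is run-parts' filename filter — by default it only executes names made of letters, digits, `_` and `-`, so a suffix like `.sh` or an editor backup tilde is enough to have it skipped. owh's script was named plainly, so it does not explain this case, but it is worth ruling out first. A sketch of the check:]

```shell
#!/bin/sh
# run-parts (Debian/Ubuntu) only executes files in /etc/cron.daily whose
# names consist solely of letters, digits, '_' and '-'.  A dot (date.sh,
# backup.old) or other punctuation makes it skip the file silently.
is_runnable_name() {
    case "$1" in
        *[!A-Za-z0-9_-]*) return 1 ;;   # contains a forbidden character
        *)                return 0 ;;
    esac
}

for name in date date.sh backup-db backup.old; do
    if is_runnable_name "$name"; then
        echo "$name: run-parts would execute it"
    else
        echo "$name: run-parts would skip it"
    fi
done
```

[The quick way to check on a real box is `run-parts --test /etc/cron.daily`, which prints what would be run without running anything — which is presumably what soren's `run-parts --list` suggestion was after.]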
<soren> timboy: What is "the test"? I set it up ages ago and just works.
<timboy> soren, lol on the install after you set up the database it has you try to log in to see if your setup is working and it fails. how do i create a user to see if it will work? my system user doesn't...
<soren> timboy: roundcube doesn't have its own user database. It just tries to authenticate against your imap server.
<timboy> ok i'll dig deeper thx...
<owh> http://www.randsinrepose.com/archives/2007/11/11/the_nerd_handbook.html
#ubuntu-server 2009-03-09
<pjsturgeon> I have a fat-32 drive connected and im trying to set it up as a samba share. i have one user named samba, and i was wondering if instead of making the default user/group for the mount root, if i could set the default user and group as samba using umask and gmask. not sure how to do it
<pjsturgeon> that way i dont need to reformat my disk right now to a *nix format to allow it to work totally properly
<pjsturgeon> i ran $ id and i see 111(sambashare) but thats not the name of my samba user.
<shally87> hi i want to ask..is it possible to build a dns server when i have a dynamic ip?
<genii> shally87: Thats what ddns services are for
<JanC> shally87: you can't run a DNS-server on a dynamic IP, if that's what you are asking
<shally87> genii: so i need to point a ddns server to my dns server?
<shally87> JanC: owh ok..too sad for that..
<genii> shally87: That's the idea, yeah. You also can't be an authoritative dns server
<shally87> genii: so there is no need for me to build a dns server in the first place unless i want it to have my brand right. Btw how to run multiple websites on a single LAMP server? any internet guide that i can use?
<JanC> well, you can run a DNS server on a dynamic IP, but it won't work properly  ;)
<shally87> ok..
<LHC> shally87, I learned how to do that just here
<LHC> virtual hosts
<JanC> shally87: it's in the apache manual
<genii> shally87: You might want to pay your ISP for a static IP
<LHC> I have a static ip, is that good?
<shally87> LHC: with static ip you can do a lot..but getting a static ip is too expensive for a student..
<JanC> LHC: if you want to do any serious hosting, a static IP address is a must have
<genii> LHC: It makes life simpler for other boxes to find your domain
<LHC> mine came with one without having to pay more
<LHC> but I have no use in hosting at home
<shally87> LHC: where you live?
<LHC> uk
<shally87> LHC: internet speed?
<LHC> 10meg
<LHC> 1mbyte/s
<LHC> shally87,  why
<JanC> download I guess (upload speed is more useful to know for hosting ;) )
<shally87> still, if you run hosting on a 1Mb speed it's just so slow right..
<LHC> I know but upload is always less, mines 60kbs
<shally87> yeah..upload more important
<shally87> huh??
<shally87> less than 60kbps?
<LHC> yes upload is a must ofc xD
<LHC> mine is 60kbytesps
<shally87> why not optimize that?
<LHC> LOL
<oh_noes> Does anyone know if I can implement a workaround for this vmbuilder bug I've submitted:  https://bugs.launchpad.net/vmbuilder/+bug/337083
<uvirtbot`> Launchpad bug 337083 in vmbuilder "Multiple --mirror parameters may be required" [Undecided,New]
<LHC> thats like dialup man
<JanC> anyway, back on-topic
<shally87> i still haven't got the manual since i installed the LAMP server
<shally87> you got any fast link?
<shally87> ohoh..
<LHC> shally87,  you want to know how to vhost
<shally87> i just check my self..
<LHC> its easy, k
<shally87> LHC: I'm using vbox
<LHC> k
<JanC> the apache manual is on the apache site (and on your disk if you installed its package)
<JanC> and /usr/share/doc/apache2/README.Debian.gz has some useful info too
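[For shally87's "multiple websites on one LAMP server" question: Apache does this with name-based virtual hosts, which is the "vhosts" LHC mentions. A minimal sketch for the Apache 2.2 of that era, with hypothetical file, domain and path names:]

```
# /etc/apache2/sites-available/myvhosts  (hypothetical names throughout)
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot /var/www/site1
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.net
    DocumentRoot /var/www/site2
</VirtualHost>
```

[Enable it with `a2ensite myvhosts` and reload Apache. The server picks whichever block's ServerName matches the request's Host: header, which is why several sites can share a single IP address.]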
<shally87> oh_noes: you arrive in wrong channel i think..
<shally87> maybe a help in #vmware
<shally87> ok..
<oh_noes> shally87: errr, no...
<shally87> JanC: i'll have to read it first..
<JanC> maybe #ubuntu-virt, but not #vmware
<shally87> gotta go now..thanks for the help JanC & genii
<jiggles> Hi all, does anyone know what should i use to expand a VM that's operating from 2 .qcow2 images?
<JanC> define "expand"
<kirkland> jpds: "moved"?  it's always been there
<jiggles> JanC, i ran out of space in the VM and i would like to make it bigger
<JanC> jiggles: memory, diskspace, ... ?
<jiggles> JanC, diskspace (qcow2)
<JanC> I'm not sure you can make qcow2 images larger than intended at creation, but if not you could create a new, larger disk image, copy everything, and then use that image instead of the old one...
<hads> JanC: You can convert the images to raw, grow them, then convert back.
<hads> JanC / jiggles
<twb> Where "grow" means to use a conv=notrunc seek=4G type call to dd.
<hads> Yeah, there's a few blog posts with instructions that come up in Google
<jiggles> hads, yeah just saw a couple in google
<jiggles> hads, have you done this before?
<JanC> hads: you still would have to grow the FS I suppose?
<hads> jiggles: No, I haven't personally.
<hads> JanC: Yeah
<jiggles> hads, how would you suggest i grow the FS?
<hads> Using the standard tools for your filesystem, I'm sure one of the blog posts tells you how.
<twb> resize2fs
<twb> However you'd either do that from within the running VM, or by using losetup to get a /dev/loop0 -- since the .raw file will start with an MBR.
<twb> I felt really clever when I worked out to use losetup for that a while back.
<jiggles> actually i did not see that part of it in the blog posts i read (the few i saw were for windows)
<jiggles> i'm assuming that i just boot in, and run resize2fs as twb said?
<twb> Doing losetup would be faster, since it'll run in the host environment.  But IIRC growing is O(1) anyway.
<hads> http://www.outflux.net/blog/archives/2008/09/03/kvm-disk-image-filesystem-growth-notes/
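[The convert-grow-convert route hads and twb describe can be sketched as below. This is a dry run that only prints the commands (hypothetical file names and target size); remove the `run` wrapper to execute for real. Note, as JanC points out, that the partition inside the image must also be enlarged before resize2fs can use the new space:]

```shell
#!/bin/sh
run() { printf '%s\n' "$*"; }   # dry-run wrapper: print instead of execute

grow_image() {
    run "qemu-img convert -O raw vm.qcow2 vm.raw"
    # extend the raw file sparsely (dd with count=0 truncates the file out
    # to the seek offset -- the sparse-grow idiom twb's dd call alludes to):
    run "dd if=/dev/zero of=vm.raw bs=1 count=0 seek=20G"
    run "qemu-img convert -O qcow2 vm.raw vm-grown.qcow2"
    # inside the guest (or via losetup -o <partition offset> on the host):
    # enlarge the partition first (fdisk: delete + recreate with the same
    # start sector), then grow the filesystem:
    run "resize2fs /dev/vda1"
}
grow_image
```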
<twb> I wish I could use kvm.  Sniffle.
<twb> Stupid vmware-server is hogging the VT bits
<hads> :(
 * twb grumbles about the lack of libunique 1.0.6
<twb> Incidentally, that totally fucked me over.  Merely *installing* kvm causes the kvm module to be modprobed in the postinst -- which made my entire system hard-lock.
<oh_noes> Im trying to Intall Ubuntu Server on my Server with Intel Hardware RAID1 Mirror, BUT Ubuntu Hardy is listing all my Drives and not the array I created.   Why is this?
<ball> oh_noes: Is it *real* hardware RAID, or fake hardware RAID (perhaps done in the firmware)?
<oh_noes> it's Fake, but I still can't use MDADM
<ball> I don't know what MDADM is, but if it's fake then why not use software RAID?
<oh_noes> mdadm is linux software raid
<ScottK> If it's fake raid, then it'll need some driver to work.
<oh_noes> I have the driver ... it finds the disks but instead of listing the logical volumes it displays all 6 disks
<ball> oh_noes: I get that too, though I've not tried it on Linux
<ball> hang on, I'll bring up another xterm
<ball> It's been years (literally) since I tried RAID on a Linux box.  I'm not sure how it works there.
<JanC> oh_noes: my motherboard has such an intel fake raid and I disabled it in the BIOS, maybe yours is too?
<JanC> oh_noes: the reason I disabled it is that I want to use mdadm (software raid) instead of fake raid though, as that is what everybody suggests (it's more stable & it's faster...)
<ScottK> You'll probably be happier on Linux with disabling it (as JanC says) and using mdadm.
<oh_noes> I have Intel fake RAID, and I've enabled it.  And i've created a RAID1 OS Mirror to boot from (2 disks), and a RAID10 Data volume (4 disks)
<oh_noes> but, inside the installer, it Lists all my 6 disks, NOT the 2 logical volumes
<JanC> I guess the driver is not loaded (automatically) then
<JanC> but seriously, why use it?
<oh_noes> Because I dont want the added overhead of managing mdadm?
<JanC> added overhead?
 * ScottK doesn't recall any either.
<oh_noes> If I lose a drive, will MDADM show up my drive in the server as Red?
<oh_noes> If I replace the drive, will it auto rebuild?
<JanC> fake raid will cause more overhead than linux software raid
<JanC> there is a reason why it's called *fake*
<oh_noes> no .. it's called fake raid because it doesnt have an onboard processor to handle the XOR's
<JanC> telling mdadm to rebuild is 1 command (and you can script it, if you really want)
<ScottK> Actually you'll get an email telling you something's failed.
<baldaris> hi
<ball> hello baldaris
<baldaris> hi..
<baldaris> hey i am using ubuntu hardy as my email server..
<baldaris> when i am trying to access my email server from remote hosting via smtp using sasl , i am getting authentication error..
<baldaris> i have made changes to main.cf
<baldaris> file..
<giovani> baldaris: #postfix
<baldaris> yeah
<giovani> not here
<baldaris> can you tell me which channel?
<giovani> I just did
<giovani> read
<baldaris> ooh
<baldaris> thanks
<ai3nv39> giovani, hi, i solved the problem, using a rescue disk to rebuild Grub.
<giovani> ai3nv39: great
<oh_noes> Using Ubuntu Server 8.04.2, if in the installer I create a LVM Mirror thing, does that use MDADM?
<oh_noes> I want to create a OS Mirror RAID1 (2 disks), then a RAID10 (4 disks) for /data
<ball> six disks total?
<ai3nv39> oh_noes, i used a Raid controller to create a system just like yours~
<ai3nv39> not soft raid
<oh_noes> ball, yes
<ball> oh_noes: I'd turn off fake RAID and do it through software
<oh_noes> I am...
<oh_noes> During the install I selected 'LVM', then selected my 2 disks  iwanted to use
<oh_noes> How do I verify it's configured as a RAID1?
<oh_noes> I dont have 'mdadm'
<hads> LVM is not RAID
<oh_noes> so how do I deploy a simple RAID1 Mirror OS / ?
 * ball wouldn't like to guess.
<oh_noes> ball: so you keep saying dont use fakeraid, but cant point me in the direction of how to actually deploy software raid1 for ubuntu server?
<ball> oh_noes: I've not used Ubuntu Server enough to be able to guide you, sorry.
<ball> I'm thinking of putting it back on my Dell though
<mib_2yfun1a6> hi all
<naymyowin> á·á
<naymyowin> hi
<uvirtbot`> New bug: #339823 in postfix (main) "qmgr process loads the system when using rate_* in custom transports" [Undecided,New] https://launchpad.net/bugs/339823
<naymyowin> hello
<naymyowin> anybody help me??
<foolano> hey guys, are you using any antivirus + samba?
<_ruben> !helpme
<ubottu> Avoid your questions being followed by a trail of "Please, help me", "Can nobody help me?", "I really need this!", and so on. This just contributes to making the channel unreadable. If you are not answered, ask again later; but see also !repeat and !patience
<naymyowin> no
<naymyowin> i have install mail server
<naymyowin> but whenever i typed /etc/init.d/networking restart, i got an error from postfix
<naymyowin> hello
<MatBoy> it seems difficult to log to a fifo file... are there some issues with this kind of logging ?
<domas> MatBoy: don't
<domas> MatBoy: fifos are one-process-to-one-process
<domas> MatBoy: not many-processes-to-one-process
<domas> MatBoy: if you want scalable logging, use datagrams to unix domain socket
<domas> though it will be lossy :)
<MatBoy> domas: but I need to... I need to use it for some mail.* logging
<domas> I told you why it is wrong
<incorrect> i've added a user to an ldap group, i can see that they are in the right group in getent group,  but id <user> doesn't show the group,  is there some cache i need to flush?
<_ruben> having syslogd log to a fifo would qualify as one-process-to-one-process (assuming only 1 process reads from it)
<MatBoy> yep, only one process reads from it
<MatBoy> domas: so, why shouldn't I use it in such a case? (cold hands)
<domas> the problem is with how fifo open/close would work
<MatBoy> but when it's proven to work ?
<MatBoy> I mean... I need the fifo working, nothing else :)
<simplexio> MatBoy: fifo is first in first out; when data in a fifo is read it is discarded, so only one reader can use it
<MatBoy> simplexio: yes, I only have one reader !
<MatBoy> but nothing comes in it seems
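[On the fifo question: _ruben's one-reader case does work mechanically — with sysklogd the destination is written with a leading pipe symbol (e.g. `mail.* |/var/run/mail.fifo` in syslog.conf), and the fifo must already exist (mkfifo) before syslogd starts, which is one plausible reason "nothing comes in". The blocking semantics domas warns about are easy to see in plain shell:]

```shell
#!/bin/sh
# A writer opening a fifo blocks until some process opens it for reading --
# which is why a crashed or absent log reader stalls the writing daemon.
dir=$(mktemp -d)
mkfifo "$dir/mail.fifo"

( echo "postfix: test log line" > "$dir/mail.fifo" ) &   # writer, parked until a reader opens
read line < "$dir/mail.fifo"                             # reader arrives; writer unblocks
wait
rm -rf "$dir"

echo "reader got: $line"
```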
<incorrect> and the idiot of the day award goes to me,
<domas> incorrect: forgot to log out?
<incorrect> no worse than that
<domas> tried on other machine?
<incorrect> you won't guess
<incorrect> so my ldap server has a group, it has a member of 'user1,user2,user3,etc'
<incorrect> i wonder who user user1,user2,user3 is exactly,
<domas> =)
<incorrect> should have been "user1","user2","user3",
<incorrect> it looked right in getent passwd
<incorrect> but the username had ,'s in it
<incorrect> too funny
<incorrect> ok next,  i need a semi smart http benchmarking tool, siege doesn't seem good enough as it doesn't pull the 'page' as such just the html contents
<domas> what is wrong with pulling just html :)
<incorrect> the images are transcoded
<domas> make separate benchmark for that
<incorrect> i want to simulate real usage
<incorrect> see where the system bottle necks
<incorrect> too much dynamic data on the site
<thewrath> is there a service that i can check to make sure the httpd.conf is configured properly
<thewrath> in ubuntu
<yann2> apache2ctl configtest
<thewrath> ok let me set up the server on my lappy toppy lol
<jpds> kirkland: Before it was mounted as ~/Private, now it's ~/.Private according to df -h.
<dennda> Brainstorming: I ran my server on my own up to now. Now I want to grant somebody SSH access (non-root). What things should I check beforehand? My checklist currently is this: - Make sure home-folders are only readable by their owners, - Check folder / file permissions, - ... what else?
<jpds> kirkland: I don't have a ~/Private dir either.
<thewrath> could someone help me with an apache config question i know this is not the room for it but maybe someone can help me
<storrgie> what is the name of the remote desktop program so i can add it to my hosts.allow file?
<thewrath> whats a good firewall for ubuntu server?
<incorrect> I am looking for a website benchmarking tool that can spider a site a couple of levels
<incorrect> regression testing on a single script is not what i want
<kraut> moin
<twb> incorrect: wget?
<twb> thewrath: iptables.
<incorrect> twb, err no something more like siege
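[twb's wget idea can actually cover the part siege skips: recursion plus "page requisites" makes it fetch the images and CSS each page references, which is what matters when the images are transcoded. A dry-run sketch (prints the commands only; example.com is a placeholder):]

```shell
#!/bin/sh
run() { printf '%s\n' "$*"; }   # dry-run wrapper: print instead of execute

spider_once() {
    # -r -l 2: spider two levels deep; -p: also fetch each page's images/CSS;
    # --delete-after: we only care about server load, not keeping the files
    run "wget -r -l 2 -p --no-directories --delete-after http://example.com/"
}

# crude concurrency: several spiders at once approximates simultaneous users
for i in 1 2 3 4; do
    spider_once    # with the wrapper removed, append '&' here and 'wait' after the loop
done
```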
<kirkland> jpds: ~/.Private has always been the underlying directory that contains the encrypted data
<kirkland> jpds: you can mount that on top of whatever you want, by editing ~/.ecryptfs/Private.mnt
<kirkland> jpds: the two supported/official mountpoints are $HOME/Private and $HOME
<kirkland> jpds: the former i refer to as "encrypted-private"
<kirkland> jpds: the latter, is encrypted-home
<dennda> Some more recent versions of openssh allow chrooting a user (or a group of users). Does that approach *easily* allow granting that user permission to use binaries like wget or others that I did not copy into his chroot? (I want to chroot a user, but don't want to restrict access to scp/sftp only and don't want to copy binaries)
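[On dennda's question: with OpenSSH 4.9+ the Match/ChrootDirectory pair does the jailing, but it does not make host binaries magically visible inside the jail — anything like wget has to be copied or bind-mounted in, so the honest answer to "easily" is: not really. A sketch, with a hypothetical group name and path:]

```
# /etc/ssh/sshd_config  (hypothetical group "jailed"; %u expands to the user name)
Match Group jailed
    ChrootDirectory /home/jails/%u
    AllowTcpForwarding no
    # For sftp-only jails the in-process server needs no binaries at all:
    # ForceCommand internal-sftp
```

[Note that sshd refuses the login unless every component of the chroot path is owned by root and not group- or world-writable.]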
<jpds> kirkland: OK; Private.mnt has ~.
<jpds> kirkland: So, if I put data into ~/.Private - what happens to it?
<kirkland> jpds: very bad things
<kirkland> jpds: you want to write through your encrypted mountpoint to ~/.Private
<kirkland> jpds: if data gets written to ~/.Private that didn't go through ecryptfs, it's not going to know what to do with it when it tries to read it
<kirkland> jpds: and you'll get some nastygrams in dmesg
<jpds> kirkland: Hmm, OK.
<jpds> Thanks.
<kirkland> jpds: what are you trying to accomplish?
<kirkland> jpds: encrypted home?  or encrypted private?  or encrypted nothing?
<jpds> encrypted private.
<kirkland> jpds: okay, after installation, you run 'ecryptfs-setup-private' as your user
<kirkland> jpds: it should walk you through it
<jpds> OK, thanks.
<pirroh> hi there, 8.10 server, trying to use a 3TB raid5 configuration. if I don't create a 3TB partition, the remaining space is flagged as unusable. Is there something I'm missing or could it be a bug?
<thefish> pirroh: have you tried using lvm?
<domas> ghm
<domas> interesting
<domas> I have a server that crashes from time to time, and now I realized kswapd is doing bad stuff there (like here http://p.defau.lt/?Uf1T62Y4gyr_QSsIihQf4w )
<domas> seems to be another kernel memory leak
<domas> ah, thats 2.6.28, probably not much ubuntu issue
<uvirtbot`> New bug: #339966 in dovecot (main) "dangerous action: dovecot-postfix force-installs new conf file" [Undecided,Incomplete] https://launchpad.net/bugs/339966
<furicle> Can someone tell me what 'cat /etc/lsb-release' looks like on hardy server verbatim?
<giovani> furicle: http://www.pastebin.ca/1356727
<giovani> why would it be a different format than any other release?
<furicle> Thank you. My memory is playing tricks on me.  I thought -server had a different release file than desktop/alternate installs...
<giovani> nope
<giovani> that's why there's no -server repository
<giovani> it's just a question of which kernel you install, and what packages are installed by default
<thewrath> hey all
<thewrath> what would i use to install a gui on the server
<jmedina> thewrath: it depends of what you want
<thewrath> just a gui itnerface
<jmedina> there is only X, twm, gnome, kde
<thewrath> which one would you recommend jmedina
<jmedina> or what kind of gui?
<thewrath> which one is the best
<jmedina> I never use graphics in my servers, only ssh+bash+screen
<thewrath> this is for a guy at the school i am implementing my senior project at
<thewrath> oh ok
<jmedina> of course I use web interfaces to manage some services, but they are specific for the services in my server
<thewrath> he is trying to use the server as something to help with imaging
<thewrath> bc zenworks is not likeing vista
<jmedina> so, it depends what you want to do in the GUI
<thewrath> jmedina: what services you are running if you dont mind me asking
<jmedina> for my desktop I use KDE (kubuntu)
<thewrath> oh ok
<thewrath> i am interested in what others run on their ubuntu server
<jmedina> file servers, webservices, mail, ldap, dhcp, dns,
<jmedina> and a lot of web applications
<thewrath> web applications such as ?
<thewrath> like phpnuke is that what you mean
<jmedina> wikis, CMS, CRMs, blogs, system admin, EBOX, Mandriva Directory Server, sarg,
<thewrath> CRMS and EBOx?
<jmedina> vtiger, sugarcrm
<thewrath> what do they do
<thewrath> never heard of CRMS and Ebox
<jmedina> ebox-project
<jmedina> CRM = Customer Relationship Management
<jmedina> ebox is a web interface
<thewrath> oh ok
<thewrath> oh i think ih ave that
<thewrath> what CRM do u use
<thewrath> any free versions?
<jmedina> to manage servers, it runs on ubuntu
<jmedina> vtiger, sugarcrm
<jmedina> here, in my job vtiger
<thewrath> you like vtiger?
<jmedina> yeap
<jmedina> it is integrated with our IP PBX asterisk for click-to-call
<thewrath> oh ok
<thewrath> what is mandriva directory server
<jmedina> a web interface to manage LDAP accounts for mail, proxy, samba, dhcp, dns
<thewrath> oh ok
<ivoks> open ldap?
<jmedina> yeap
<thewrath> i have nothing to use ldap with i wish i did
<jmedina> well I use it to manage accounts in OpenLDAP, of course in ubuntu hardy servers
<thewrath> for vtiger to get it sudo apt-get install vtiger?
<jmedina> http://tuxjm.net/docs/cursos/Samba+OpenLDAP+PAM+NSS-4Ubuntu/html/
<jmedina> it is my howto for ubuntu (in spanish)
<thewrath> oh okay
<thewrath> i think i found the install guide from their site
<jmedina> thewrath: I never install web application from apt, I always download from upstream and install by hand, because I want to have the latest
<thewrath> oh ok
<thewrath> jmedina: but you would install kubuntu-desktop over server if u had to?
<jmedina> but that is my choice
<thewrath> we did that with ubuntu-desktop and it didnt load the desktop just the command line
<jmedina> thewrath: kubuntu-desktop nop
<thewrath> nop?
<jmedina> I prefere something light, like windowmaker
<jmedina> :D
<jmedina> or sometimes I only export DISPLAY or use ssh X forwarding
<thewrath> okay we tried with ubuntu-desktop and it didnt load the gui at all only the command line, any idea why
<jmedina> start GDM
<jmedina>  /etc/init.d/gdm start and change to TTY7
<jmedina> but, first, install gdm (Display Manager)
<jmedina> or start X by hand with "startx"
<thewrath> so install gdm first and then kubuntu-desktop
<jmedina> no
<jmedina> install gdm
<jmedina>  /etc/init.d/gdm start
<jmedina> and login
<thewrath> i did that as well after i removed ubuntu-desktop
<jmedina> well you need ubuntu-desktop in order to use  GDM
<jmedina> if you install kubuntu-desktop you can use KDM (kdm package)
<thewrath> so install gdm and ubuntu-desktop or kubuntu-desktop
<jmedina> gdm is the graphic login manager
<thewrath> oh ok
<thewrath> so install gdm and ubuntu-desktop
<jmedina> for GNOME, KDM is for KDE
<thewrath> right
<jmedina> yeap
<thewrath> and then run /etc/init.d/gdm start
<thewrath> then resetart?
<jmedina> for servers I prefer to use startx, I dont want to have a X enviroment all the time, only when it is needed, but is started on demand with "startx"
<jmedina> thewrath: there is no need to restart a server
<thewrath> ok
<jmedina> start gdm, and login via GDM
<thewrath> oho k
<jmedina> just doit
<thewrath> so ubuntu-desktop and gdm got it
<thewrath> i have to do it at work bc it will take like forever on my net connection to do it so i have to wait till i am in at school
<jmedina> "bc"?
<jmedina> thewrath: copy yoiur /etc/apt/archives to the other machine
<thewrath> because
<jmedina> or use a I to create a CD /ISO from your cached packages in /var/cache/apt/archives/
<thewrath> use a I?
<jmedina> application
<jmedina> dont remember the name
<thewrath> oh ok
<jmedina> here I only use dpkg --get-selections , copy /var/cache/apt/archives
<thewrath> do the machine with desktop on it copy the /etc/apt/archive directory to my machine here at home?
<jmedina> copy the whole dir to the new machine, then dpkg --set-selections and apt-get upgrade
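[jmedina's offline-clone recipe, spelled out as a dry run (prints the commands only; /media/usb is a hypothetical transfer medium). One detail worth adding: the classic partner of `dpkg --set-selections` is `apt-get dselect-upgrade`, which acts on the selection marks that a plain `apt-get upgrade` ignores:]

```shell
#!/bin/sh
run() { printf '%s\n' "$*"; }   # dry-run wrapper: print instead of execute

clone_packages() {
    # on the machine that already has everything installed:
    run "dpkg --get-selections > selections.txt"
    run "cp /var/cache/apt/archives/*.deb /media/usb/archives/"
    # on the target machine:
    run "cp /media/usb/archives/*.deb /var/cache/apt/archives/"
    run "dpkg --set-selections < selections.txt"
    run "apt-get dselect-upgrade"
}
clone_packages
```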
<thewrath> i don't like copying stuff from one machine to another so what i will do is do it fresh on my machine
<thewrath> do you ever use webmin?
<jmedina> I never use webmin
<jmedina> I dont need it, and I dont like it
<thewrath> oh ok
<thewrath> know any other good web admin tools like webmin'
<Nafallo> !ebox
<ubottu> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/eBox
<thewrath> yea but you haev to configure everything by hand
<thewrath> which somtimes that is not bad
<thewrath> any others
<ivoks> we should add a trigger to ubottu
<thewrath> what about fluxbox?
<ivoks> when someone even thinks of webmin, it should paste the ebox text, and kickban the user
<thewrath> lol thank you ivoks
<Nafallo> ivoks: ubottu as op? no thanks.
<ivoks> :)
<ivoks> chanserv would be fine :)
<jmedina> ivoks: that wont help, the user will think his connection failed, and then re-login, and ask again for webmin thinking that his last question was not published.
<ivoks> :)
<jmedina> it is my experience with some customers :D
<thewrath> unless you set it up as ban for webmin'
<Nafallo> jmedina: you kickban your customers?! evil!
<jmedina> sometimes
<jmedina> when they dont pay
<ivoks> so... lighttpd doesn't scale on SMP
<ivoks> it could, with a couple of drawbacks
<thewrath> lighthttpd?
<ivoks> like... corrupted access logs
<thewrath> wat
<thewrath> i installed fluxbox but how do i start it
<Nafallo> startx
<thewrath> says not installed i can install it by typing sudo apt-get install xinit
<thewrath> is that right
<thewrath> command not found
<thewrath> can not start /etc/X11/X no such file or directory, aborting
<thewrath> xinit: server error
<thewrath> /usr/bin/startx: line 166: xauth: command not found
<thewrath> why do i get that
<thewrath> i type in startx
<thewrath> any ideas why?
<soren> ScottK: Just to be sure... If a new upstream release (well, bzr snapshot really, but that's rather beside the point, /me thinks) is a bugfix only release, I don't need to go through the FFe process, right?
<thewrath> soren: know why i get errors when typing in startx
<soren> thewrath: Nope. I do know, though, that it's off-topic for this channel.
<soren> xauth is in xbase-clients, fwiw.
<soren> ...but these sorts of questions belong in #ubuntu, not here.
<thewrath> why in ubuntu when i am running server
<giovani> thewrath:  because server is not a different product, it's just a set of server packages -- questions involving X-based guis belong in #ubuntu
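[Piecing the startx errors together: "can not start /etc/X11/X" means no X server is installed at all, and "xauth: command not found" is the missing xbase-clients soren points at. A dry-run sketch of the lightweight route thewrath was circling (prints the commands only; package names roughly as of the hardy era):]

```shell
#!/bin/sh
run() { printf '%s\n' "$*"; }   # dry-run wrapper: print instead of execute

minimal_gui() {
    run "apt-get install xorg xbase-clients fluxbox"   # X server, xauth, a light WM
    run "sh -c 'echo exec fluxbox > ~/.xinitrc'"       # make startx launch fluxbox
    run "startx"                                       # start X on demand, as jmedina suggests
}
minimal_gui
```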
<ivoks> soren: i'm testing web servers in kvm... any idea why i would get very bad results inside kvm (i use virtio for disk and nic)?
<soren> ivoks: Which version?
<ivoks> soren: both host and guest are jaunty
<soren> Disk image type?
<ivoks> file
<ivoks> qcow2
<ivoks> i don't think it's disk, since CPU is almost idle, at least there's no iowait
<jmedina> is KVM recommended for productions servers?
<jmedina> I always used paravirtualized servers with Xen
<hads> Works for me
<jmedina> in fact, fully virtualized always gave poor performance compared to para-vir
<soren> ivoks: On the host, you mean?
<ivoks> soren: on guest
<ivoks> guest's cpu is 50% sys, 50%idle
<olcafo> I've been using KVM in production environments running a webserver and email server for 20 or so users since 7.04. It has worked flawlessly for me.
<soren> ivoks: I'd try an lvm backed solution and see if that makes a difference. I'm not convinced you'd see the qcow2 performance overhead or lag exposed in any way in the guest.
<ivoks> interesting...
<ivoks> at some point, siege stops requesting pages from guest
<ivoks> and web server on guest just idles
<ivoks> it's like there's no communication
<ivoks> 'lag' describes this perfectly
<ScottK> soren: If it's bugfix only, no FFe is needed, just document what you did in a bug (if there isn't one already) and close the bug in debian/changelog.
<soren> ScottK: Thought so. Thanks for clearing that up.
 * soren goes to upload
<ScottK> yw
<nat2610> Sorry for my question that isn't related to CentOS but strictly sysadmin. I don't know where to ask it... it's a best practice question: I have a 3ware raid controller with RAID1 cluster and 1 of the drives is degraded, I'd like to replace it and do it as fast as possible. My idea is to shutdown the machine, swap the drive, restart it and recreate the raid after the machine reboots and is running again (to limit the downtime).
<nat2610> Is it better to remove the drive from the unit (software) before shutting down the server or does it not matter? I'm hoping to have an automatic rebuild after the boot
<hads> CentOS?
<hads> Pasting around the place are we
<ivoks> why just not hot swap it?
<ivoks> you don't have to remove the driver in 3ware application
<ivoks> if you can't hot swap it, shut it down, replace the disk, and power it up again
<ivoks> then rebuild it through tw_cli
<nat2610> ivoks, there is no hotswap rack
<olcafo> I've done this a few times with the 3ware controller, you can hot swap or shut it down like ivoks suggested. once it's replaced the RAID should start rebuilding by itself.
<ivoks> i love 3ware and its tools
<nat2610> olcafo, that's what I'm hoping
<ivoks> and after your raid is synced
<ivoks> you should format the disk and put ubuntu on it
<olcafo> you will get a significant performance hit while it rebuilds though... I usually remove network access from the users while this happens.
<nat2610> olcafo, bad performance is ok ... downtime not really ...
<ivoks> right, cache is off
<Nafallo> olcafo: you can't tweak the rebuild priority on 3ware?
<olcafo> I'm paranoid and downtime is acceptable in this setting.
<ivoks> well, if you can't hot swap, you don't have a choice
<olcafo> Nafallo, can't say I've ever seen that option.
<Nafallo> olcafo: hm. oki. thanks.
<ivoks> rebuild priority?
<ivoks> the thing is that 3ware disables cache on write during rebuild
<ivoks> that's why performance is worse than when raid is complete
<olcafo> makes sense.
<ivoks> tw_cli is my favourite non-free application
<Nafallo> ivoks: sure. that's sensible, but some controllers allow you to set the priority the rebuild will have over normal system tasks as well.
<ivoks> i know
<Nafallo> you're kind of supposed to be able to use systems during rebuild :-)
<ivoks> well, you are
<ivoks> the system isn't unusable
<ivoks> it is just slower cause write cache is off... it's like normal disk
<ivoks> when raid is ok, you have write performance of 150MB/s
<ivoks> when degraded, it's 20MB/s
<ivoks> well, write speed depends on amount of RAM on controller
<Nafallo> fair enough
<olcafo> I'm assuming write cache is turned off because the RAM is being used to rebuild?
<ivoks> probably
<ivoks> 'night
<pjsturgeon> anyone found a good guide (or wanna guide me through) how to resize and create partitions in ubuntu command line?
<pjsturgeon> they are all slightly confusing for my current state of using gui's
<olcafo> pjsturgeon, what are you trying to resize?
<pjsturgeon> i have a 500gig drive with 250gig of content. its currently fat32 and i need to convert it to something more samba friendly like ext3
<pjsturgeon> i was gonna slice it down the middle and format half the drive as ext3 then copy the content across
<giovani> wow, you really don't have a second drive?
<giovani> doing it the way you want is messy
<pjsturgeon> i did...
<pjsturgeon> i formatted it in ext3 about 20 minutes ago
<pjsturgeon> went to get a fresh can and drop kicked the damn thing across the room. darn usb enclosure!
<giovani> so copy the files there?
<pjsturgeon> now it beeps :-/
<olcafo> I'm not a big fan of resizing partitions, although I have done it once successfully with NTFS (using the gui :P)
<pjsturgeon> ahh it's only a bunch of media files, i'm not TOO bothered if it all dies (no crying but a little swearing)
<pjsturgeon> re-downloadable, etc. so... how to resize?
<olcafo> I've done it with the gui tool qtparted. Not actually sure how to do it command line..
<pjsturgeon> ahh, will put my 500giger into the usb enclosure and try not to knock it across the room. cheers guys :)
<olcafo> huh, the CLI tool is called "parted", go figure. http://www.gnu.org/software/parted/manual/html_mono/parted.html
<pjsturgeon> ha! who'd have thought
<pjsturgeon> cheers :)
<mathiaz> bdmurray: let me introduce https://wiki.ubuntu.com/DebuggingOpenldap - for reviewing pleasure...
 * bdmurray claps
<pjsturgeon> anyone had any luck resizing a fat32 partition with parted?
<pjsturgeon> (parted) resize 1 0 235520
<pjsturgeon> Error: Unable to satisfy all constraints on the partition
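A hedged sketch of what that parted session might look like (the device name and sizes here are illustrative, not pjsturgeon's actual values). Old parted's `resize` takes a partition number plus start and end in megabytes, and the "unable to satisfy all constraints" error usually means the requested end point doesn't land on a boundary the FAT geometry allows, so nudging the end value can help. Note that newer parted releases removed the `resize` command entirely.

```shell
# Illustrative only: /dev/sdb is a placeholder device. Back up first;
# resizing FAT32 in place is risky.
sudo parted /dev/sdb

# Inside parted:
#   (parted) print                # note the partition's start and current end
#   (parted) resize 1 0 235000    # shrink partition 1; if you hit the
#                                 # constraint error, try a slightly
#                                 # different end value
#   (parted) quit
```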
#ubuntu-server 2009-03-10
<oh_noes> Can you create a RAID1 Software mirror in Linux/MDADM after installing? Or do I have to do it in the installer?
<olcafo> short answer: yes
<olcafo> this can be done after install
<twb> oh_noes: it is probably not something that an inexperienced admin should attempt.  At least, not while drunk.
<Nafallo> or even when sober :-)
<twb> I've done it a couple of times and it's straightforward, but I have a lot of experience with md.
<twb> Like resizing a filesystem, the important thing is to do the steps in the right order
<olcafo> the installer makes it easy, but the experience of doing it by hand is invaluable when something breaks and you need to fix it quickly.
<twb> You'll also need to take the server down to do it
<Nafallo> olcafo: it's just not as straightforward to create an md on the disk you're using as / ;-)
<olcafo> twb, that depends on if we're talking about making an RAID on the system partition or not.
<olcafo> Nafallo, right
<twb> olcafo: if by "system" you mean "root", then yes.
<olcafo> sorry, my lingo is not quite nix. I work on too many different OSes.
<oh_noes> If I want to use FakeRAID, https://help.ubuntu.com/community/FakeRaidHowto
<oh_noes> but this article doesn't explain how to do it for Ubuntu Server 8.04 without the LiveCD
<oh_noes> how do I activate dmraid?
<twb> oh_noes: you really don't want to use fakeraid.  You either want software raid, or in unusual cases, hardware raid.
<oh_noes> I know what I want to use :)
<oh_noes> i'm doing some performance testing here at work, and I'm interested in doing some analysis between the two
<twb> software raid is better than fakeraid because you can put the drives in any linux system and still read the data.
<twb> oh_noes: oh, OK.
<twb> oh_noes: well if it's proper fakeraid the kernel should just see one disk :-)
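For the "yes, this can be done after install" answer above, a minimal sketch of creating a RAID1 mirror from two spare partitions with mdadm (device names are assumptions; this does not cover migrating an in-use root filesystem, which is the harder case Nafallo mentions):

```shell
# Assumes /dev/sdb1 and /dev/sdc1 are empty, identically-sized partitions.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext3 /dev/md0            # put a filesystem on the new array
# persist the array definition so it assembles at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
cat /proc/mdstat                   # watch the initial sync progress
```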
<leonel> what would you recommend ?? :  apache +  ( fastcgi | scgi | fcgid ) ???
<twb> leonel: depends on what the app supports
<JanC> leonel: and you could replace apache for other httpds too...
<leonel> JanC:   I'm with cherokee now  and it's working great   I just have a server with apache and I can't move yet to cherokee ..
<leonel> twb: Right ..
<thewrath> can someoen help me with ufw?
<twb> !anyone
<ubottu> A large amount of the first questions asked in this channel start with "Does anyone/anybody..."  Why not ask your next question (the real one) and find out?
<thewrath> on https://wiki.ubuntu.com/UbuntuFirewall under package integration that is for only ubuntu 8.10 right?
<JanC> yes AFAIK
<twb> That seems to be what it says.
<thewrath> afaik = as far as i know?
<thewrath> just double checking i am tired
<JanC> thewrath: yes
<thewrath> is it hard to upgrade  server 8.04 to 8.10?
<JanC> and ufw had some significant changes in 8.10
<thewrath> thank you JanC
<thewrath> i see that and i like them
<twb> Seems stupid to me to call it "Apache" when "getent services 80" gives a perfectly meaningful "www".
<thewrath> especially with the app part
<oh_noes> What's the default behaviour for mdadm if it loses a disk?
<thewrath> what is the command that i use to upgrade ubuntu from 8.04 to 8.10
<twb> oh_noes: it marks that node in the array as degraded
<JanC> twb: it's fine with apache, but do you know all the ports samba needs ?  ;)
<oh_noes> Does it email anyone, does it set the front disk of the server to red status ?
<twb> JanC: then samba ought to register those ports
<thewrath> and how many people are using 8.10 server
<twb> JanC: so they end up listed in /etc/services
<twb> oh_noes: IIRC it generates an email, probably for root.
<thewrath> does 8.10 server have the same issue as the 8.10 desktop with wireless wpa2 enterprise encryption
<JanC> twb: they are, but their names might not always be what you expect
<twb> oh_noes: I don't know anything about the "front of the disk".
<twb> JanC: well, for example I'd prefer it to say "submission" and "smtp" to "postfix" for both.
<oh_noes> twb: well servers have hot swappable disks at the front, everyone in ubuntu says to use MDADM software RAId instead of my hardware raid
<twb> oh_noes: that depends whether it's real raid or fake raid.
<JanC> twb: well samba is based on a collection of legacy MS protocols that you probably don't want to know the names of  ;)
<oh_noes> my hardware raid, if it detects a dead disk, it'll mark it as red/amber (like a normal servers), then i can replace it and it'll auto resync
<twb> JanC: I know what CIFS and SMB and NTLM are...
<twb> JanC: I dunno if AD has a "kerberos" port per se.
<thewrath> anyone know the answers to my questions?
<thewrath> just wondering, because i am going to bed here soon
<oh_noes> if I hotswap a degraded disk, will it auto rebuild it?
<twb> oh_noes: no
<twb> oh_noes: at least, I'm pretty sure it doesn't
<JanC> twb: "you don't know" is the reason for ufw giving them an alias
<oh_noes> That's extremely handy, isn't it!
<twb> JanC: grumble
<thewrath> well i guess i will ask tomorrow
<thewrath> gn everything
<twb> JanC: as I see it, using ufw is a means by which I can write complex iptables-restore scripts, but still allow ignorant end users to do "ufw allow <port> from <ip>"
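The kind of end-user rule twb describes might look like this (address and port here are made up; the extended `from ... to ... port ...` syntax is the one shipped in 8.10's ufw):

```shell
# Deny by default, then punch one hole: a single trusted host may reach SSH.
sudo ufw default deny incoming
sudo ufw allow proto tcp from 192.168.1.50 to any port 22
sudo ufw status verbose    # review the resulting rules
```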
<oh_noes> So let me get this straight, ubuntu recommend running software raid over FakeRAID, even though software RAID a) doesn't change the warning lights on the servers, and secondly, it won't even rebuild it automatically if it's hotswapped/replaced?
<twb> oh_noes: if you have hot-swappable drives, you probably don't have fakeraid
<JanC> twb: ignorant end-users don't have the power to do that on your systems (I hope)?  ;)
<twb> oh_noes: you probably have proper hardware raid
<twb> JanC: the end users own the system
<oh_noes> twb: then why does the ubuntu installer list all my individual disks?  Even though I configured a system RAID1 mirror, and a data RAID10?
<twb> JanC: i.e. the company that contracts me to babysit their server
<twb> oh_noes: is this an IBM serveraid box?
<JanC> ah, babysitting is always fun :)
<oh_noes> Not IBM, it's a different vendor, but basically, yes.
<twb> Yeah, they're... exciting.
<twb> IIRC you need a special driver to make the kernel see a single disk
<twb> We ended up using software raid and forgetting about the hotswap feature because it was too much pissing about
<twb> YMMV, etc.
<oh_noes> yeah, mdraid is the special driver, but it's next to impossible to use it on Ubuntu server hardy
<oh_noes> there's no clear documentation on how to run dmraid, then install ubuntu server 8.04 in no-GUI mode
<uvirtbot> New bug: #278864 in samba4 (universe) "package samba4 None [modified: /var/lib/dpkg/info/samba4.list] failed to install/upgrade: trying to overwrite `/usr/bin/nsstest', which is also in package samba-tools" [Undecided,Confirmed] https://launchpad.net/bugs/278864
<oh_noes> I'm playing with mdadm, I pulled a disk out (hotswap)  of a running server and put it in 5 minutes ago.  I got an error and degraded event for /dev/md0 ... how do I resync everything now?
<JanC> add the disk again
<oh_noes> I did, I pulled it and put it back.    /dev/md0 is showing as State : clean, degraded
<JanC> mdadm /dev/md0 --add /dev/<disk/partition>
<JanC> or maybe --re-add
<JanC> (I've never used re-add until now)
<oh_noes> cool, I think that did it. Now I just need to figure out how to do an automatic re-add with my hotswap functionality
<JanC> I guess that would require a custom udev rule or something
<JanC> adding it manually (e.g. through a remote shell) isn't really a lot of work though
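Putting the exchange above together, a sketch of the manual recovery sequence after swapping a disk back in (device names are assumptions; `--re-add` only works if the superblock on the returning disk still matches the array, otherwise fall back to `--add` for a full rebuild):

```shell
cat /proc/mdstat                        # confirm which array is degraded
sudo mdadm --detail /dev/md0            # shows "clean, degraded" and the missing slot
sudo mdadm /dev/md0 --re-add /dev/sdb1  # same disk coming back: quick resync
# or, for a brand-new replacement disk:
sudo mdadm /dev/md0 --add /dev/sdb1     # triggers a full rebuild
watch cat /proc/mdstat                  # follow the rebuild progress
# mdadm --monitor mails failure/degraded events to the MAILADDR line
# in /etc/mdadm/mdadm.conf, which answers the earlier "does it email anyone?"
```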
<ball> How much disk space is the bare minimum for Ubuntu Server?
<JanC> ball: I think it's less than 300 MiB
<JanC> but then you don't have anything useful running on it...
<ball> JanC: Thanks.  Sadly it didn't work on this machine.  I think the installer tried to use some fancy graphics mode, which just corrupted the screen.
<JanC> AFAIK the installer uses curses on a console, which should work (almost) everywhere?
<ball> Perhaps it just loaded a font that didn't work on this box then.
<ball> I /did/ see the Ubuntu logo splash.
<ball> ...but afterwards the menus were illegible.
<JanC> oh, that was after installing then?
<ball> JanC: no.
<ball> JanC: that was upon booting from the install CD
<JanC> oh, the isolinux bootloader then maybe
 * ball isn't sure what that is, but it sounds credible.
<JanC> were you running in a VM?
<ball> No, this is a bare machine.
<ball> (not capable of virtualisation)
<JanC> then I have no idea (I know it has some issues because the bootloader uses the 16-bits CPU mode)
<JanC> and some virtualisation software versions don't like that
<JanC> you might want to file a bug
<ball> Well, whatever code was at fault, it tried to do something fancy that the display hardware didn't like.
<ball> JanC: if I knew how, I would.
<JanC> bugs.ubuntu.com
<ball> JanC: Ah right.  I have no access to the Web at present.
<JanC> then do it later
<ball> Okay, I'll make myself a note to do that.
<JanC> did you try the latest stable release (8.10) or a development version?
<ball> 8.10
<JanC> trying latest 9.04 alpha or daily might be useful too (to make sure it wasn't fixed already)
<ball> JanC: I'll see if I can get a copy of that in the morning then.
<billy12> what's a good mp3 transcoder/converter?
<JanC> billy12: that's not really server-related...
<JanC> billy12: ask in #ubuntu please
<ball> billy12: bladeenc for encoding, amp for playing?
<billy12> lol sorry, yea, but i was looking for a CLI version, friendly to server
<ball> billy12: I prefer Ogg/Vorbis given the option though.
<JanC> billy12: still not a server question, and if there is a need for it, someone might propose an #ubuntu-cli channel...  ;-)
<ball> Hopefully back later, downloading some system source files.
<billy12> thank you, but let me try it like so: if one had to transcode an mp3 on ubuntu-server, what program would you turn to?
<bob__> hey, can anyone help me get my server to share its internet connection
<incd^> What's the benefit of running your own local DNS server? Now we use only nettica.com's services.
<hads> You can use DNS for local hosts and also cache
<jmedina> I always use mine with at least one slave in another country, and for a local network you can use cache, it saves you a lot of bw
<jmedina> for LAN I like to use dhcp3-server+bind9 for Dynamic DNS
 * hads uses dnsmasq
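As an illustration of the local-hosts-plus-cache setup being discussed, a minimal dnsmasq configuration might look like this (the interface name and domain are assumptions, not anyone's actual setup):

```shell
# /etc/dnsmasq.conf (fragment)
# Serve and cache DNS for the LAN; local hostnames come from /etc/hosts.
interface=eth0          # only answer queries on the LAN interface
domain=example.lan      # domain appended to plain hostnames
expand-hosts            # give /etc/hosts entries that domain too
cache-size=1000         # cache answers locally, saving upstream bandwidth
```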
<domas> and if you're ISP, you may want to run unbound :)
<jmedina> unbound?
<hads> Get it? bind, unbound
 * jmedina hates virtualizing windows under xen :S
<jmedina> is that the new dns daemon?
<jmedina> I think I heard about it a few weeks ago
<hads> First hit in google :)
<jmedina> I was using bind9 at an isp without problems
<jmedina> even under attacks
<jmedina> :D
<jmedina> by the way is there any ubuntu bind9 version with ldap support?
<jmedina> well, time to sleep, cya
<hads> heh
<kraut> moin
<^law1^> moin :)
<lukehasnoname> moin
<uvirtbot> New bug: #340383 in dhcp3 (main) "dhcpd sends 127.0.1.1 as DNS server" [Undecided,New] https://launchpad.net/bugs/340383
<dayo2>  what other av software can i put on my firewall server, based on this post:
<dayo2>               http://ubuntuforums.org/showthread.php?t=380169
<kallepersson> Hey!
<kallepersson> I have an old computer running ubuntu server at home and I wonder two things: a) What is the recommended temperature for a regular file/web server? b) Are there any neat tricks on how to make the server cooler?
<kallepersson> I totally understand that this varies, but it's currently at 51C and it sounds quite warm.
<^law1^> hm, that's quite hot, bro
<kallepersson> Any ideas on how to make it cooler?
<Kamping_Kaiser> big gold chain?
<^law1^> is your server a real server, or a pc that's functioning as a server?
<kallepersson> a pc
<^law1^> how many cpu fans did you put in?
<kallepersson> just one
<Kamping_Kaiser> not sure that question makes sense ^law1^
<Kamping_Kaiser> kallepersson, where is it located? put it somewhere with clear airflow, keep hdds 1 space apart if possible, put in some more fans, and make sure it's not dusty inside
<^law1^> the best way is to place your server in an air-conditioned room
<soren> 51C doesn't sound completely unreasonable to me.
<kallepersson> Only have one hdd atm. but thanks
<soren> Sure, it'd be better if you could get it down to 20C or so, but 51C is what I'd expect from a machine like that.
<kallepersson> It's just going to function as a home server, so I don't have access to any air conditioner room
<kallepersson> soren: right, thanks
<soren> Just don't put it in the boiler room :)
<Kamping_Kaiser> 50 for an idle temp is a bit high for my liking, but not unreasonable
<^law1^> if the room is not so dusty, open your case
<kallepersson> ^law1^: it's my living room :)
<uvirtbot> kallepersson: Error: "law1^:" is not a valid command.
<kallepersson> >___>
<^law1^> then add extra fans will be the best way
<kallepersson> thx
<^law1^> :)
<soren> Opening the case is probably not a good idea. It ruins the airflow.
<kallepersson> Oh?
<kallepersson> btw, are there any nice tools to check temperature other than cat /proc/acpi/thermal_zone/THRM/temperature?
<soren> I mean.. Sure, go ahead and open it if you want to put more stuff in there or whatnot, but keeping it open as a way to keep it cool... I don't think that'll turn out as expected.
<simplexio> kallepersson: sensors, but i think it needs lm-sensors installed
<sivang> does anyone know if it is planned to include AR81 driver in upcoming lrm releases ?
<kallepersson> simplexio: no package called sensors as far as I can tell
<simplexio> lm-sensors
<kallepersson> ok, got it
<kallepersson> lm-profiler and lm-syslog-setup?
<simplexio> ?
<simplexio> on my computer lm-sensors is in repository
<kallepersson> well, only those two binaries are installed as far as i can tell
<kallepersson> yes, got it installed
<kallepersson> but how do I check temperature
<kallepersson> right, it's a tool for creating a profile of your server's temperature
<kallepersson> ah, "sensors"
<simplexio> first you need to set it up.. sensors-detect ?
<simplexio> depends on the drivers it loads. on my desktop it shows currents etc from the power supply too
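The setup steps being groped for above, as usually done on Ubuntu. The detection script is `sensors-detect` (shipped in the lm-sensors package alongside the `sensors` binary); the kernel module names it finds vary by motherboard, so this is a sketch rather than a universal recipe:

```shell
sudo apt-get install lm-sensors   # the package; the reading tool it ships is "sensors"
sudo sensors-detect               # probe for sensor chips; answer the prompts
# sensors-detect offers to append the detected modules to /etc/modules;
# load them now with modprobe (names vary by board) or simply reboot
sensors                           # prints temperatures, fan speeds and voltages
```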
<sivang> ./×××
<mrwes> I'm trying to add my printer remotely via cups and the web GUI, but it's saying my security certificate is 'self-signed'
<mrwes> I see the printer, but I can't enable it
<mrwes> hrmm
<uvirtbot> New bug: #340404 in postfix (main) "package postfix 2.5.5-1 failed to install/upgrade: subprocess post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/340404
 * lamont` would love to know how people are managing to delete /etc/postfix/postfix-script and having it not land on the dist-upgrade, given that it's IN THE DEB
<soren> lamont`: Is it a conffile?
<lamont> soren: everything in /etc is a conffile
<soren> lamont: That's how, then.
<lamont> fsck.ext3: Device or resource busy while trying to open /dev/sda2
<soren> lamont: Everything in /etc is not a conffile, though.
<lamont> WTF is it trying to open sda2 - that's part of /dev/md1 - of course it's busy, dammit
<soren> lamont: Everything *shipped* in /etc and processed by dh_installdeb with a debhelper compat level > 2, yes.
<lamont> soren: so anyway, they remove postfix-script, and then they upgrade, it blows up, and somehow that's my fault
<soren> lamont: Well... Yes :)
<soren> lamont: If the admin deletes a conffile, dpkg respects that and doesn't put it back.
<lamont> and if I have preinst check for the file and bomb out, then they'll just file bugs saying that preinst failed
<soren> lamont: Postfix should either handle the absence of that file gracefully or make it not a conffile.
<lamont> and how does one make it not a conffile?
<soren> lamont: Either set DH_COMPAT=2 (bad idea) or ship it in /usr somewhere and copy it in from postinst.
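Soren's "ship it in /usr somewhere and copy it in from postinst" suggestion, as a maintainer-script sketch. The paths and the /usr/share location are illustrative assumptions, not the actual postfix packaging; the point is that once the file is no longer a conffile, the package itself decides what to do when the admin has deleted it:

```shell
#!/bin/sh
# debian/postfix.postinst (sketch)
set -e

# The pristine copy ships under /usr; since /etc/postfix/postfix-script is
# no longer a conffile, dpkg won't preserve an admin's deletion of it, so
# restore it on every configure if it is missing.
if [ ! -e /etc/postfix/postfix-script ]; then
    cp /usr/share/postfix/postfix-script /etc/postfix/postfix-script
fi

#DEBHELPER#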
<soren> lamont: Or stop using debhelper altogether :)
<lamont> giggle
<soren> Hey, there /are/ packages that don't use debhelper.
<lamont> and yeah, it doesn't really belong in /etc other than for the fact that upstream kinda hardcodes it as being there, and one could argue that it actually is/should-be a conffile
<lamont> for my never ending pain
<lamont> ==> ship a copy in /usr, and replace it in postinst if the admin removed it.  am I allowed to berate the admin? :)
<soren> lamont: In Debian, perhaps. I don't think the CoC allows you to do so in Ubuntu. :/
<bromic94> hey i have 8.04 and 8.10 server on one machine
<bromic94> how can i take off 8.04 server
<soren> What do you mean? Are you dual-booting the two?
<bromic94> yes
<bromic94> i want to get rid of 8.04
<soren> Just remove the partition and remove it from the list in /boot/grub/menu.lst.
<bromic94> just remove anything that says 8.04?
<bromic94> like it will say Ubuntu 8.04.2, kernel 2.6.24-23-server?
<bromic94> how do i remove the partition?
<bromic94> soren: any ideas?
<lamont> bromic94: mkfs will stomp on it.  most times, I just reuse the partition for something else
<bromic94> i want to get rid of everything
<bromic94> from 8.04
<bromic94> so 8.10 has everything from the hard drive
<lamont> by which you mean that you want to make it one partition that has 8.10 filling all of the space formerly given to 8.04 and 8.10 separately?
<bromic94> yes
<lamont> depending on the partition table, that could be anything from simply a delete and resize in fdisk and resize2fs, to a reinstall
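The "delete and resize in fdisk and resize2fs" path lamont mentions, sketched for the simple case where the unwanted 8.04 partition sits immediately after the 8.10 one on the same disk (device and partition numbers are made up; the recreated partition must start at exactly the same sector as before, and a backup is strongly advised):

```shell
sudo fdisk /dev/sda
# In fdisk: delete the old 8.04 partition, then delete and recreate the
# 8.10 partition with the SAME start but a larger end; write and reboot.
sudo e2fsck -f /dev/sda1     # filesystem must check clean before resizing
sudo resize2fs /dev/sda1     # grow ext3 to fill the enlarged partition
```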
<bromic94> the only server i want is 8.10
<bromic94> how do i format to put 8.1 back on?
<lamont> 8.10 for starters... 8.1 is ambiguous...
<lamont> if you just boot the 8.10 server disk, one of the options is (paraphrased) "stomp on the drive and use all of it"
 * lamont cries at blkid
<bromic94> so that will make everything 8.10?
<soren> lamont: What's wrong with blkid?
<lamont> soren: #u-d <-- it hates me
<soren> Yeah, I just saw.
 * soren pats lamont on the head
<soren> lamont: There, there.
<orudie> question. how can i check which nameservers the domain is using ?
<thefish> orudie: dig domain.com NS
<orudie> thanx
<thefish> :)
<orudie> how do you get to the specific line number in nano ?
<frippz> orudie: ctrl+c
<frippz> oops, read that wrong
<frippz> thought you wanted to see what line you were on
<frippz> ctrl+w and then ctrl+t will allow you to enter the desired line number (and column number as well, for that matter)
<heath|work> I need to setup a dns server, I have seen tutorials for a couple of different ones (powerdns, mydns).  Can anyone offer any suggestions on a nice dns setup for web hosting?
<viezerd> I guess this mailinglist "https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce" covers *all* the security announcements but is there a mailinglist for security updates on server only ?
<aruetten> @heath: a simple easy to use dns server is dnsmasq, but it's simple and so it has some limitations.
<aruetten> @heath on the other end is bind, I think bind can do everything in the dns field, but isn't so simple to configure
 * ball points out that this isn't Twitter ;-)
<ScottK> mathiaz: I want to add pax to amavisd-new depends which will mean it gets added to the CD.  It's only ~50K and solves Debian Bug 517156 (which apparently has some security implications), so I'm pretty sure it's the right thing to do, but wanted to run it by someone here first.
<uvirtbot> Debian bug 517156 in amavisd-new "amavisd-new: should recommend pax" [Important,Fixed] http://bugs.debian.org/517156
<ScottK> viezerd: There is not.  It's all in one repository, so there's no clear way to make a distinction.
<viezerd> ah ok, thx for the information ScottK
<mathiaz> ScottK: http://cdimage.ubuntu.com/ubuntu-server/daily/current/jaunty-server-amd64.list says that pax is already on the cd
<mathiaz> ScottK: so there isn't any problem adding pax as a dependency of amavisd-new
<ScottK> mathiaz: Thanks.  I looked at the seeds and read them wrong I guess.
<mathiaz> ScottK: I used to look at the seed  - even the output of germinate isn't that easy to parse by human eyes
<ScottK> yeah.
<mathiaz> ScottK: when it comes to the content of the -server iso I always look at the .list file on cdimage
<ScottK> I think that's a good practice.
<heath|work> aruetten, Thanks for the input, I will look into dnsmasq
<kirkland> o/
<ScottK> Ubuntu Server meeting will be here today.
<sommer> :)
<ball> ScottK: 15:00 UTC?
<mathiaz> all right - due to a conflict in ubuntu-meeting we'll run the ubuntu-server meeting here
<ScottK> Now
<kirkland> lots of new attendees
<mathiaz> #startmeeting
<kirkland> mathiaz: good luck with that
<Brazen> o/
<mathiaz> kirkland: I know
<kirkland> :-)
 * ball sits down and shuts up
<mathiaz> kirkland: but it also helps for writing the minutes later :)
<mathiaz> so today's agenda: https://wiki.ubuntu.com/ServerTeam/Meeting
<ivoks> ok
<mathiaz> Previous meeting minutes: https://wiki.ubuntu.com/MeetingLogs/Server/20090303
<mathiaz> [TOPIC] SRU bug tracking
<mathiaz> ivoks: ^^
<mathiaz> ivoks: ready to discuss that?
<ivoks> nope
<ivoks> :)
<mathiaz> ivoks: ok - let's move on then
<ivoks> i'll add it to agenda when ready
<mathiaz> [ACTION] ivoks to add to the server team agenda an item about better SRU management.
<mathiaz> [TOPIC] Postfix and Dovecot integration
<mathiaz> ivoks: thanks for blogging about it
<ivoks> right, we got nice feedback
<ivoks> people like it so far...
<mathiaz> ivoks: there was a bug related to the package
<ivoks> was it?
<mathiaz> ivoks: bug 339966
<uvirtbot> Launchpad bug 339966 in dovecot "dangerous action: dovecot-postfix force-installs new conf file" [Low,Confirmed] https://launchpad.net/bugs/339966
<mathiaz> I'm not sure how we should handle that.
<ScottK> If it's accurate, that's not Low, IMO.
<mathiaz> I've been discussing with the reporter
<ivoks> i see...
<mathiaz> ScottK: I don't think dangerous is the correct word
<ivoks> i'll work on a fix for this
<mathiaz> ivoks: what would be the plan?
<ivoks> i have to think about it
<ivoks> check if dovecot.conf is changed
<mathiaz> I'm not sure we should/could support upgrading an existing dovecot system to integrate it with postfix
<ScottK> It can be known if the existing config is modified or not, so I think this is solvable.
<mathiaz> not really - dovecot.conf is modified by -pop and -imap when they get installed
 * nijaba realizes the meeting is here... o/
<ScottK> Hmmm
<ivoks> mathiaz: well, we can workaround it
<ScottK> OK.  Then I'm glad ivoks is going to solve it.
<ivoks> mathiaz: ignore ^protocols and then check
<cemc> modified but not ignored completely
<ivoks> if user changed protocols, he still has a non-working dovecot
<ivoks> or... hm...
<mathiaz> ivoks: OTOH I'm not sure if we should support this in the dovecot-postfix package
<cemc> if user had imap support, and he decides to install pop3, will that break imap ?
<ivoks> mathiaz: i understand your point of view and i agree
<mathiaz> the dovecot-postfix package is a different kind of package since it encapsulates a script
<mathiaz> It doesn't really ship new files - it just modifies existing configuration
<ivoks> mathiaz: maybe we should warn user during preinst
<ivoks> mathiaz: adding 'if you have already working dovecot, purge this package'
<mathiaz> ivoks: I thought about that - but you cannot detect if you're installing a brand new system or if the system is a pseudo-upgrade of a running dovecot system
<ivoks> mathiaz: we just let everybody know that?
<ivoks> mathiaz: it's a good way to advertise all the features that come with it :)
<mathiaz> ivoks: right - so I think we should update the description of the package
<ivoks> or that...
<mathiaz> ivoks: also - I think that the postinst modifies the dovecot.conf file
<mathiaz> ivoks: to stick a comment in it
<ivoks> nope
<ivoks> that's in source
<ivoks> dovecot.conf is modified during build
<mathiaz> ivoks: hm - right.
<mathiaz> ivoks: so maybe doing this in the postinst will force a ucf merge of dovecot.conf
<ivoks> urgh...
<mathiaz> ivoks: that way the admin will have to see the new comment added in dovecot.conf during postinst which would have helped in the situation
<mathiaz> ivoks: I don't know if that's the correct way to do it though
<ivoks> give me a couple of days to think about this
<mathiaz> ivoks: right - could you post your solution to the bug?
<ivoks> sure
<mathiaz> I'm not sure about the right way to tackle this issue
<mathiaz> let's move on
<mathiaz> [TOPIC] Samba bug day
<ivoks> me neither, so i have to think about it for a while :)
<mathiaz> as you may have noticed if you read the planet next Thursday will be dedicated to triaging samba bugs
<mathiaz> ttx: thanks for blogging about it
<methods> how do i install an older version of a package ?
<mathiaz> nijaba: still on track to blog about it tomorrow?
<nijaba> mathiaz: sure
<nijaba> mathiaz: blog is ready
<mathiaz> I'll prepare also a post to be published on Thursday
<nijaba> waiting till tomorrow 9am for launch
<mathiaz> nijaba: awesome - thanks :)
<ttx> mathiaz: who will be running the show from QA ?
<genii> methods: Specify version on cli apt-get. eg:   sudo apt-get install something=specific-version              "specific-version"  is one which can be reported by apt-cache policy <packagename>
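A concrete version of genii's recipe (the package name and version below are placeholders for illustration):

```shell
apt-cache policy hello               # lists the candidate and the other available versions
sudo apt-get install hello=2.1.1-4   # request that exact version explicitly
```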
<mathiaz> ttx: you
<mathiaz> ttx: and me :)
<incorrect> when logging in over ssh i have a long delay before the bash prompt appears
<mathiaz> ttx: oh - QA - no one special
<incorrect> I am not sure why it's on some systems but not others
<ttx> mathiaz: ok
<mathiaz> ttx: the whole triager community is there
<mathiaz> ttx: we should be in #ubuntu-bugs to give a hand to the triagers though
<ttx> I'll be there.
<nijaba> incorrect: failure to do a reverse dns check is generally the issue
<mathiaz> any developer is welcome in #ubuntu-bugs too
<mathiaz> to help out with samba bugs
<mathiaz> let's move on
<mathiaz> [TOPIC] Exchange support for Evolution
<incorrect> nice try, I've set DNS to no, it looks to be that some nodes can't talk to the ldap server for some weird reason
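The two usual culprits for this login delay, sketched as config fragments (both are assumptions about this particular setup, which is why incorrect's case turned out to be the second one):

```shell
# 1) Reverse-DNS lookup of the connecting client, /etc/ssh/sshd_config:
UseDNS no
# then: sudo /etc/init.d/ssh restart

# 2) An unreachable LDAP server consulted during NSS lookups; check which
#    databases go through ldap in /etc/nsswitch.conf, e.g.:
#    passwd: files ldap
#    and whether that server is reachable from the slow nodes.
```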
<kinnaz> https://bugs.launchpad.net/ubuntu/+source/lustre/+bug/229821 <--- has anyone figured out a fix for this?
<uvirtbot> Launchpad bug 229821 in lustre "lustre-source build fails in hardy" [Undecided,New]
<mathiaz> ivoks: ^^ how did this work out?
<ivoks> still nothing
<mathiaz> ivoks: no time or not working?
<ivoks> evolution crashes
<mathiaz> ivoks: did you report a bug?
<ivoks> i tested mapiprofile app today and it worked (i guess)
<ivoks> mathiaz: not yet, i see there's new evolution-mapi in archive
<ivoks> so i'll test with it and then report
<ivoks> mathiaz: i guess it's cause of this particular exchange setup
<mathiaz> ivoks: great. Keep seb128 in the loop if you find some bugs
<mathiaz> ivoks: hm ok.
<mathiaz> ivoks: what is mapiprofile?
<ivoks> mathiaz: evolution-mapi uses the libmapi library
<ivoks> mathiaz: there are also cli tools for that 'openchangeclient'
<ivoks> i'm not sure if evolution calls openchangeclient directly or through the library
<mathiaz> ivoks: ok - so the openchange client tools are working correctly
<mathiaz> ivoks: which means that the issue is in the evolution-mapi plugin
<ivoks> ................i'm not sure.... yet
<mathiaz> ivoks: ok
<ivoks> mathiaz: output is strange
<ivoks> lots of OK OK OK, and then error connecting
<mathiaz> ivoks: seems that it needs more investigation
<ivoks> i'll have to dig more into it
<mathiaz> ivoks: cool. thanks
<mathiaz> That's all I had from last week minutes
<mathiaz> is there anything else to add wrt to last week meeting?
<mathiaz> nope - let's move on then
<mathiaz> [TOPIC] Support for likewise-open krb5 patch
<mathiaz> so I've uploaded likewise-open patch to krb5 in jaunty
<mathiaz> and I've received an email from the debian maintainer
<mathiaz> the situation is that the next version of mit krb5 (1.7) will provide the same functionality (GSSAPI) but with a different implementation than the one done by likewise
<mathiaz> however for jaunty we will ship krb5 1.6 with the GSSAPI support from likewise-open
<ttx> when is 1.7 scheduled ?
<mathiaz> ttx: end of april
<mathiaz> the likewise-open patch introduced symbols you've exposed in your shared libraries and public headers that have diverged from the krb5 upstream.
<ttx> do we need the GSSAPI support for anything else than likewise-open 5 ?
<mathiaz> that may mean we'd have to maintain API compatibility in coming releases
<mathiaz> ttx: no - not in jaunty
<ttx> hmm.
<mathiaz> ttx: the reason I put the patch in jaunty is because of likewise-open
<ttx> mathiaz: I'm not sure likewise-open 5 will make it to jaunty. If it doesn't, it would really make sense to back out that patch.
<mathiaz> ttx: when would we know when it doesn't?
<ttx> mathiaz: very soon. My packaging is almost ready, FFe should follow
<mathiaz> one proposal is to change the krb5 patch slightly
<ttx> but it's a complete rearchitecture, not a small update. So we are quite late in the cycle for it
<mathiaz> to avoid supporting additional public functions
<mathiaz> ttx: so my question is if we should look into changing the likewise-open krb5 patch to maintain API compatibility with upstream
<mathiaz> ttx: which means modifying likewise-open
<mathiaz> ttx: to support the modified krb5 patch
<ttx> mathiaz: yes, probably.
<mathiaz> ttx: ok
<mathiaz> ttx: it seems we should discuss this a bit more with upstream
<mathiaz> let's move on
<ttx> mathiaz: definitely.
<mathiaz> [TOPIC] Launch screen by default in ec2
<mathiaz> kirkland: ^^
<kirkland> mathiaz: zul has asked that i postpone this discussion until he's back from vacation
<mathiaz> kirkland: ok.
<kirkland> mathiaz: sorry
<mathiaz> kirkland: np
<mathiaz> [TOPIC] Open Discussion
<mathiaz> anything else to add?
<dantalizing> i've got something
<ScottK> I discovered Debian Bug 518524 today
<uvirtbot> Debian bug 518524 in amavisd-new "Fails to detect message with multiple virus payloads as infected" [Grave,Fixed] http://bugs.debian.org/518524
<ivoks> oh, lol
<ScottK> I've got a merge prepared and asked ubuntu release if I should upload it now or after Alpha 6.
 * ScottK is waiting for an answer.
<ScottK> It only affects Jaunty.
 * ScottK is done.
<ivoks> after alpha sounds ok, imho
<ivoks> after all, it's a development release :)
<ScottK> It's a quick build, so depending on where they are, I could see it either way.
<mathiaz> ScottK: well - does it block the release of alpha6?
<ScottK> mathiaz: No, so it isn't critical if it waits.
<mathiaz> ScottK: I don't think so - so it can wait for after alpha6
<mathiaz> dantalizing: yes
<dantalizing> i just started aggregating rss on my own from multiple people, and it has evolved.
<dantalizing> i picked up ubuntuserver.org and put a planet on it
<dantalizing> http://planet.ubuntuserver.org/
<dantalizing> i know there is an existing wp site
<dantalizing> just wanted to throw it out to the server team
<dantalizing> fyi
<dantalizing> and if yall had some specific desire for it
<jbernard> kirkland merged my changes to update-motd to add inotify support, fyi
<mathiaz> dantalizing: great - thanks.
<kirkland> jbernard: but i haven't uploaded it to jaunty yet :-)
<jbernard> I'll file an FFe today
<kirkland> but yes, jbernard did some great work to get update-motd to be able to run either in a cron-base, or an inotify based mode
<kirkland> \o/
<mathiaz> jbernard: awesome. Thanks for the good work!
<mathiaz> anything else to add before we wrap up?
<kirkland> mathiaz: 2 small things from me
<kirkland> mathiaz: i've backported kvm-84 to build on hardy, it's available in the ~ubuntu-virt PPA
<ivoks> er... including kernel part?
<kirkland> mathiaz: anyone having long-standing issues with kvm-62 on hardy ... i suggest you try that package and let us know how it works for you in #ubuntu-virt
<kirkland> ivoks: not yet, i'll work on that next
<kirkland> ivoks: userspace only, thanks for the clarification
<ivoks> ;)
<kirkland> mathiaz: and second, qemu has finally released 0.10.0 (after nearly a year)
<kirkland> mathiaz: i'm merging that now, would like to try to get that into universe for jaunty, will probably need an FFE
<kirkland> mathiaz: but there are lots of bugs fixed
<mathiaz> kirkland: you'd have to ask the motu-release team - ScottK would probably be able to help reviewing the FFe.
<ScottK> For server issues I can decide.  File a FFe bug and I will review it.
<kirkland> mathiaz: yep, 'tis why i'm mentioning it here, since it seemed that ScottK was around
<kirkland> ScottK: thanks, will do
<kirkland> mathiaz: all from me
<mathiaz> great. Anything else to add?
<mathiaz> [TOPIC] Agree on next meeting date and time
<mathiaz> so as some of you have noticed we're currently clashing with the TB meeting
<mathiaz> the kernel team meeting has also moved one hour forward
<soren> Has Canada switched to DST as well?
<mathiaz> soren: yes
<soren> Or have we really moved it just for the benefit of the USAnians?
<mathiaz> soren: canada and usa are in DST
<genii> No, us canucks are forced to suffer also
<mathiaz> OTOH the TB will also move back an hour in 4 weeks
<mathiaz> and since TB meetings only happen every other week we would just conflict once more
<mathiaz> so my proposal is to leave the server meeting at 15:00 UTC
<soren> Tell me again why we moved the meeting?
<ivoks> to make it more interesting :)
<soren> ivoks: It's not working :)
<mathiaz> soren: because it's the same time for northamerican
<mathiaz> soren: and it was a better time for europeans
<soren> mathiaz: Which makes it different for *everyone* else.
<ivoks> northamericans, raise your hand; everybody else, raise both of them
<sommer> o/
<mathiaz> soren: yes - but everything will be back to the regular schedule in 4 weeks (for european)
<Brazen> o/
<jbernard> o/
<soren> mathiaz: You're not making a very convincing argument :)
<mathiaz> anyway - my point being that we'd have only one more conflict with the TB
<mathiaz> and after that we'd be back to the regular schedule with TB (every other week), server, kernel teams meeting
<ScottK> o/
<soren> You could use the exact same arguments for keeping it at the same time relative to UTC.
<soren> In three weeks it'd be normal for USAnians and Canadians.
<mathiaz> soren: nope -because we'd have a weekly conflict with the kernel team meeting
<soren> ...and all the while, it's been normal for Europeans.
<soren> See, *that's* a (somewhat) useful argument.
<mathiaz> 15:00 UTC -> 1 conflict with the TB meeting (in 2 weeks), 16:00 UTC -> 3 conflicts with the kernel team
<soren> Alright.
<mathiaz> so next week, in #ubuntu-meeting at 15:00 UTC?
<ivoks> ok
 * soren still grumbles that we have to bow to the kernel team's acceptance of American daylight savings time imperialism and not the other way around :)
 * mathiaz points soren to #ubuntu-kernel
<ivoks> let's kill DST
<ivoks> it's useless anyway
<ScottK> mathiaz: slangasek told me to go ahead and upload, so it's done.
<Brazen> I leave my lights on all day anyway
<ball> I mostly run servers at UTC and let workstations calculate their offset based on that.
<mathiaz> allright folks - thanks for attending.
<mathiaz> happy alpha6 testing
<mathiaz> and see you all next week, in #ubuntu-meeting at 15:00 UTC
<ivoks> we need to have lights on car all the time :)
<mathiaz> for the Ubuntu server team meeting
<mathiaz> #endmeeting
<nijaba> Thanks mathiaz
<nijaba> ivoks: http://www.facebook.com/s.php?ref=search&init=q&q=Daylight&sid=1ae160743a3fd9f216bb737ee4fb277a&n=-1&o=4&k=200000010&sf=p&hash=10f703513bb20aacdbc7822d58934e6f&s=40#/group.php?sid=1ae160743a3fd9f216bb737ee4fb277a&gid=2208383750
<ball> Are there any doughnuts left over from this meeting?
<ivoks> nijaba: lol i agree :)
<Brazen> anybody use remote X apps from an Intrepid server?
<Brazen> Anyway, it seems to be broken after an upgrade from Hardy
<Brazen> I like to run virt-manager and virt-viewer remotely via ssh, but now after typing the command, the command prompt returns and no application shows up.
<Brazen> I think the same thing will happen if you don't have xauth installed, but xauth is installed.
<ivoks> why don't you run virt-manager on workstation?
<ivoks> and connect over SSH to libvirt on server?
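For reference, ivoks' suggestion — running virt-manager on the workstation and letting it talk to libvirt on the server over SSH — would look roughly like this (the hostname and user are placeholders; it assumes libvirtd is running on the server and you have an SSH account there):

```shell
# On the workstation: point virt-manager at the remote libvirt daemon.
# The connection is tunnelled over SSH, so no X forwarding is involved.
virt-manager -c qemu+ssh://root@kvm-server/system

# The same connection URI works for one-off commands with virsh:
virsh -c qemu+ssh://root@kvm-server/system list --all
```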
<Brazen> because I prefer to tunnel through ssh
<Brazen> well, yeah
<Brazen> BUT
<Brazen> I figure that way I keep the version of virt-manager that would come with the server
<Brazen> AND, I sometimes have to connect from a Windows workstation.
<yann2> I like to run virt-manager and virt-viewer remotely via ssh  < why would you want to do that?
<yann2> ah sorry didnt read through all the backlog, fair enough
<Brazen> Just to clarify what I mean by "through ssh" -  I tunnel X through ssh, not the libvirt protocol.
<ivoks> take care...
<Brazen> It's not a huge concern, because I'll just tunnel vnc and use that instead, but tunnelling X seems to be broken.  I have not tried from a fresh install of Intrepid Server though.
<Brazen> Just thought I would throw it out there while all the super smart people were here, in case anyone knew of something.
<Brazen> It's a pretty clean server to upgrade though.  I install kvm, libvirt, virt-manager, virt-viewer, xauth, ntp, and that's pretty much it.  I leave any other tweaking to be done inside virtual machines.
<jmedina> Brazen: have you enabled debug to ssh
<jmedina> ssh -v host -X or something
<Brazen> no
<Brazen> I use Putty, even on Ubuntu
<Brazen> although, come to think of it, I don't even know if those programs would run locally on the server (no X on server to even try it)
<Brazen> ok here is what I get with -v
<Brazen> from an Ubuntu Hardy workstation:
<Brazen> brazen@kvm_server:~$ sudo virt-manager
<Brazen> [sudo] password for brazen:
<Brazen> debug1: client_input_channel_open: ctype x11 rchan 3 win 65536 max 16384
<Brazen> debug1: client_request_x11: request from 127.0.0.1 49544
<Brazen> debug1: channel 1: new [x11]
<Brazen> debug1: confirm x11
<Brazen> brazen@kvm_server:~$ debug1: channel 1: FORCE input drain
<Brazen> debug1: channel 1: free: x11, nchannels 2
<Jeeves_> Brazen: 'apt-get install xauth'
<Brazen> Jeeves: yeah I said that further up, it's acting just like xauth is not installed.  But xauth IS installed.
<Brazen> "i     xauth       - X authentication utility
<Brazen> :
<Brazen> "
<Brazen> oops
<Brazen> "i xauth  - X authentication utility"
<Brazen> from aptitude search xauth
<Brazen> plus it worked fine from Hardy before upgrading it to Intrepid
<Brazen> I would really just be interested to know if this works for anybody on Intrepid.
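One way to narrow down an X-forwarding failure like this is to run the client verbosely and check the forwarding environment on the far end (hostname is a placeholder; this assumes the OpenSSH client rather than PuTTY):

```shell
# From the workstation, with verbose output so X11-forwarding
# failures show up as debug1 lines:
ssh -v -X brazen@kvm-server

# Then, inside that session on the server:
echo "$DISPLAY"   # should be something like localhost:10.0
xauth list        # should show a magic cookie for that display
```

If `$DISPLAY` is empty, the server side refused or never set up the forward (check `X11Forwarding yes` in the server's sshd_config).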
<yann2> man exim4 packaging is a fU*** pain in the ***
<yann2> *getting mad about dependency conflicts*
<yann2> install exim4 before postfix on ubuntu LTS, you'll never be able to install postfix
<yann2> (sorry, had a bad day)
<lamont> yann2: apt-get install postfix should remove the parts of exim4 that are in the way
<yann2> yeah that's the theory :)
<yann2> the reality is that the uninstall of exim4 is crashing
<yann2> +script
<yann2> so you can't remove it, nor install postfix :)
<yann2> http://pastebin.com/m1a67d252
<ScottK> yann2: Edit /etc/init.d/exim4 to have exit 0 as its 2nd line and try again.
<yann2> lovely thanks :)
<yann2> that seems to have worked \o/
<yann2> sorry about the shouting before, just spent 5 hours reinstalling a server, didnt feel like starting from scratch again :'(
<ScottK> yann2: That's a pretty standard trick for removal failures because it couldn't stop something.
<lamont> yann2: that's an exim4 bug, stop when not running == success, not failure
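ScottK's trick above can also be scripted; this sketch inserts `exit 0` as the script's second line so the failing stop action can no longer abort the removal (GNU sed assumed):

```shell
# Neutralize the broken init script: everything after line 1 is skipped.
sudo sed -i '2i exit 0' /etc/init.d/exim4

# Now the removal/replacement can proceed.
sudo apt-get install postfix
```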
<fevel> hey guys
<fevel> I use ubuntu server with squid and apache
<fevel> how can I force a certain webpage everytime someone opens a browser on my network?
<giovani> fevel: #squid
<fevel> giovani, are you sure squid will do that friend? My friends tell me to redirect http to my webserver
<giovani> and how do they propose "redirecting http" without a proxy?
<giovani> you could spoof dns
<giovani> but that's not http-specific
<fevel> using the router
<fevel> or even iptables
<fevel> but if you think im better off with squid I believe you
<Brazen> What router do you have?
<Brazen> I use pfsense as my router, and I'm pretty sure it can do it, but it would probably do it through its squid package.
<JanC> <fevel> how can I force a certain webpage everytime someone opens a browser on my network?
<JanC> fix that webpage?
<JanC> oh, you mean showing the same page to everyone...
<Brazen> JanC: would you like some gum?
<fevel> sorry for the delay
<fevel> I havent checked the router they will be using
<goofey> JanC: I believe that is achieved using a proxy server
<fevel> I am only responsible for the server
<fevel> yes
<fevel> I have found something
<fevel> http://www.publicip.net/zonecd/what.php
<fevel> but im trying to implement it without the knoppix... directly from my server
<giovani2> that's what's called a captive portal
<fevel> giovani2, thanks for the tip
<giovani2> fevel: did you explain exactly what you're trying to do while I was logged out?
<giovani2> there's a difference between a captive portal and redirecting specific sites/etc
<fevel> no
<fevel> its a captive portal
<fevel> like this
<fevel> http://www.publicip.net/zonecd/what.php
<fevel> I just found out about chillispot
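If the gateway ends up being this server, the iptables side of a transparent redirect might look like the sketch below (the interface name and port are assumptions; squid must be configured for transparent interception, and a real captive portal such as chillispot layers authentication on top of this):

```shell
# Send all outbound HTTP from the LAN interface to the local squid:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128
```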
<heath|work> if userA owns folder 'myfiles' and userA has a quota of 5MB,  if userB places a file in that folder, does that count against userA's quota?
 * jmedina likes to test quotas with file image mounted as loop disk
<jmedina> but in that case, I think no, because quota records are related to uids/gids or inodes/blocks
<jmedina> you can test with a file with a single byte, it won't hurt
<jmedina> and check with repquota -a
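jmedina's test could look like this (user names and paths are placeholders; it assumes quotas are already enabled on the filesystem):

```shell
# As userB, drop a small file into userA's directory...
sudo -u userB dd if=/dev/zero of=/home/userA/myfiles/probe bs=1K count=100

# ...then check which uid the blocks were charged against:
sudo repquota -a
```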
<mathiaz> jdstrand_: what do you think of bug 286614?
<uvirtbot> Launchpad bug 286614 in openldap "slapd apparmor profile needs update for multiple databases" [Low,Triaged] https://launchpad.net/bugs/286614
<jdstrand_> mathiaz: I have no problem with it
<jdstrand_> mathiaz: using /var/lib/ldap/** seems fine
<heath|work> jmedina, thanks
<heath|work> The problem I am running into is users using webdav,  do you know if apache can chown on files?
<mathiaz> jdstrand: is the k permission also needed?
<jdstrand> mathiaz: probably, since the alock will be different for each database?
<mathiaz> jdstrand: right
<mathiaz> jdstrand: what is your opinion on bug 218899 and 227744? (ie adding the openldap user to the sasl and ssl-cert group by default)
<uvirtbot> Launchpad bug 218899 in openldap2.3 "dapper upgrade to hardy: user openldap is not added to group sasl" [Low,Triaged] https://launchpad.net/bugs/218899
<uvirtbot> Launchpad bug 227744 in openldap2.3 "dapper upgrade to hardy: openldap silently refuses to start when unable to open SSL certificates - main: TLS init def ctx failed: -64 - openldap user not in ssl-cert group" [Unknown,Confirmed] https://launchpad.net/bugs/227744
<mathiaz> kees: mdeslaur ^^
<mathiaz> in dapper slapd was running as root, while in hardy and + slapd runs under the openldap user
<jdstrand> kees: I don't like the idea of doing either automatically. detecting it and letting the user know seems like a very good idea
<jdstrand> mathiaz: ^
<heath|work> jmedina, what about making a bunch of loopback files?  Do you know how many I can make and mount through fstab?
<mathiaz> jdstrand: ok - so on new installs the openldap user should *not* be part of the ssl-cert and sasl group
<jdstrand> mathiaz: that is my opinion, fwiw
<heath|work> in other words, How reliable is a bunch of loop back devices?
<jdstrand> mathiaz: eg, who's to say the server isn't a multi-purpose server that does smtp auth via sasl, and slapd isn't using it. in that circumstance slapd shouldn't have the permissions
<uvirtbot> New bug: #326768 in mysql-dfsg-5.0 (main) "mysql thinks it has crashed when it hasn't" [Undecided,New] https://launchpad.net/bugs/326768
<J_P> LTS releases are every 2 years, right?
<J_P> next is 10.04?
<Deeps> 6.06 -> 8.04 -> 10.??
<uvirtbot> New bug: #291070 in net-snmp (main) "snmp crash at bootup" [Undecided,New] https://launchpad.net/bugs/291070
<Fenix|work> Greetings... anyone using HAVP here?
<kees> jdstrand: hm.
<kees> jdstrand: is the current solution to add the openldap user to the sasl group?
<jdstrand> kees: there is no current solution. it is an idea at this point
<heath|work> anyone know how many loop devices ubuntu can handle?
<genii> At least 256
<heath|work> hmm
<heath|work> is there a way to have a quota on a folder instead of a user or group?
<smoser> heath|work, i think max loops is 255
<heath|work> smoser, thanks
<heath|work> The problem I am faced with is I have a web app that allows users to upload files.  I would like the users to be able to use webdav also, but apache cannot chown files that are coming in.  Does anyone have any suggestions?
<heath|work> I just need to limit how much a user can upload
<smoser> but it appears you have to pass max_loop= when you load the kernel module
<smoser> # for((i=0;i<257;i++)); do echo -n $i. && { mp=/mnt/x${i}; [ -d ${mp} ] || mkdir ${mp} ; } && mount -o loop /tmp/my.img ${mp} || break; done ; echo ; echo unmount: ; for((x=0;x<$i;x++)); do echo -n ${x}.; umount /mnt/x${x}; done; echo
<smoser> thats how i tested. my intrepid with no max_loops parm to loop module maxes out at 8
<heath|work> smoser, I added options loop max_loop=64  to /etc/modprobe.d/options
<heath|work> that worked, but it doesn't resolve my issue
<heath|work> really I just need to be able to say:  This folder can only be this big
<heath|work> but I haven't read anything that allows me to do that...
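One common workaround, along the lines of jmedina's loop-disk tip earlier, is to mount a fixed-size filesystem image over the folder, so the folder can never grow past the image (paths and size below are placeholders):

```shell
# Create a 50 MB image, format it, and mount it over the upload folder.
dd if=/dev/zero of=/var/uploads.img bs=1M count=50
mkfs.ext3 -F /var/uploads.img          # -F: it's a file, not a block device
sudo mount -o loop /var/uploads.img /var/www/uploads

# Add an /etc/fstab entry if it should survive a reboot:
#   /var/uploads.img  /var/www/uploads  ext3  loop  0  0
```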
<heath|work> is there a way to force all files that are created in a folder to be owned by a certain user or group?
<heath|work> can forcegroup and forceuser only be used with smb, or is there another option for normal system files and folders?
<smoser> heath|work, regarding your problem above with webdav (which i'm not familiar with at all), you may find a solution in suexec (http://httpd.apache.org/docs/1.3/suexec.html).  that allows the apache server to run as different users.
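For the question about forcing ownership inside a folder: a setgid directory makes new files inherit the directory's group, which is often enough for the webdav case. A minimal demo against a scratch directory (the real setup would use the shared group, e.g. a hypothetical "webdav", and the DAV folder instead):

```shell
# Scratch directory standing in for the DAV upload folder:
dir=$(mktemp -d)
chmod 2775 "$dir"             # setgid bit: new files inherit the dir's group
touch "$dir/newfile"
stat -c '%G' "$dir/newfile"   # prints the same group as the directory
# To also force group rw on new files, a default ACL can help
# (needs the acl package and an ACL-enabled mount):
#   setfacl -d -m g:webdav:rwX "$dir"
```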
<TrioTorus> Is there an option that can be set to force permissions on a nfs share? I've googled and found that my chances are slim, but I was wondering if I would set up a kerberised nfs share, could I force user permission? Can this be done with nfs4 maybe?
<kirkland> jbernard: question for you ...
<kirkland> jbernard: what will restore the inotify jobs on reboot?
<kirkland> jbernard: i killed off the init script
<kirkland> jbernard: i think we might have to resurrect it
<TrioTorus> Let me rephrase: It's my understanding that nfs doesn't have user authentication. Can such a thing be implemented using kerberised nfs?
<jbernard> kirkland: hm...
<mathiaz> kirkland: hm - going through the kvm bugs - seems that you're always trying to get the importance set
<Fenix|work> any squid experts around? :)
<Fenix|work> #squid is full of zombies
<mathiaz> kirkland: https://bugs.launchpad.net/ubuntu/+source/openldap/ I usually don't set an importance
<kirkland> mathiaz: i find it helps me
<jbernard> kirkland: we'll have to resurrect it
<mathiaz> kirkland: until it's in a confirmed/triaged state
<kirkland> mathiaz: high = crashes host, medium=crashes guest, low=lesser issues
<mathiaz> kirkland: the NEW state means you haven't looked at them?
<kirkland> mathiaz: or haven't confirmed them, in some cases
<mathiaz> kirkland: or there is NEW information available?
<kirkland> mathiaz: in one case, i need a windows 2000 iso to confirm it
<kirkland> mathiaz: i don't have that
<kirkland> mathiaz: i find that setting an importance helps when sorting https://bugs.launchpad.net/ubuntu/+source/openldap/
<kirkland> mathiaz: i met with the kvm/qemu upstream guy twice now, and i wanted to have a prioritized list of bugs to bring to his attention
<kirkland> mathiaz: even if it's not confirmed, but someone claimed that some bug was corrupting their data, i would set the importance to Critical
<kirkland> mathiaz: or crashing their host, i'd set it to High
<mathiaz> kirkland: right
<mathiaz> kirkland: for the upstream dev you'd also include the incomplete bugs?
<kirkland> mathiaz: LP does barf a lot of email though, sorry about that
<kirkland> mathiaz: probably not
<kirkland> mathiaz: many of those are incomplete based on his feedback
<mathiaz> kirkland: oh no problem. I wasn't complaining about the LP emails
<mathiaz> kirkland: I actually don't receive kvm related bug
<kirkland> mathiaz: that he'd need x, y, z from that user to help troubleshoot it
<kirkland> mathiaz: in most cases, he wants to see them try it on the latest kvm
<kirkland> mathiaz: which i built in a ppa for Hardy
<mathiaz> kirkland: I just had a look at the kvm bug list as I know that you're actively managing it
<kirkland> mathiaz: yeah, i'm trying to :-)
<mathiaz> kirkland: I've just spent two days in the openldap bugs
<mathiaz> kirkland: in the process I wrote a DebuggingOpenldap page
<kirkland> mathiaz: i noticed ;-)
<mathiaz> kirkland: with debugging instructions.
<mathiaz> kirkland: It should help me in the future when dealing with a certain type of bugs as I always need the same information
<kirkland> mathiaz: yeah, i want to set up the LP page with a few things that I *always* need from kvm bug reporters
<kirkland> mathiaz: right
<jbernard> kirkland: probably /usr/sbin/update-motd should call the initscript to do the start-stop-daemon stuff
<mathiaz> kirkland: I think bdmurray can help with that
<jbernard> kirkland: ill work something up
<mathiaz> kirkland: and the step afterwards is to include all of this in apport
<jazzdog> hi
<jazzdog> i have reformatted /dev/sdb to xfs (previously reiserfs). I can see the change of UUID in blkdev but the contents of /dev/disk/by-uuid did not change
<jazzdog> how can I update /dev/disk/by-uuid ?
<mathiaz> kirkland: I think that apport has now the ability to gather any type of information even if there isn't a crash
<kirkland> jbernard: yeah, have a look at it
<kirkland> jbernard: i'd like to avoid an init script, if possible
<kirkland> jbernard: we could leverage the cronjobs
<kirkland> jbernard: but that might be dirty
<mathiaz> kirkland: are you using tags in kvm?
<bdmurray> kirkland: something that would show up at kvm/+filebug?
<kirkland> bdmurray: right
<kirkland> mathiaz: not really
<kirkland> mathiaz: i thought about it though
<bdmurray> kirkland: yes, that is something I can help with
<kirkland> bdmurray: nice
<kirkland> bdmurray: i'm just about to jet off here for a bit
<bdmurray> kirkland: it'd also be good to have an apport package hook for kvm that could grab that automatically too
<bdmurray> I can help with that too!
<kirkland> bdmurray: but drop me some instructions via email or irc, if it's not trivial
<kirkland> bdmurray: excellent
<jbernard> kirkland: how so?
<jbernard> kirkland: the enable/disable can call the initscript
<kirkland> jbernard: i'd like to avoid having an init script if possible
<jbernard> kirkland: but we will somehow have to store inotify-enabled state on the harddisk
<mathiaz> jdstrand: re openldap user in the ssl-cert group by default: a user is asking why we wouldn't do that by default?
<kirkland> jbernard: /var/tmp
<mathiaz> jdstrand: https://bugs.launchpad.net/ubuntu/+source/openldap2.3/+bug/227744/comments/10
<kirkland> jbernard: see how the existing enabled/disabled update-motd works
<uvirtbot> Launchpad bug 227744 in openldap2.3 "dapper upgrade to hardy: openldap silently refuses to start when unable to open SSL certificates - main: TLS init def ctx failed: -64 - openldap user not in ssl-cert group" [Unknown,Confirmed]
 * kirkland is away for a while
<jazzdog> i have reformatted /dev/sdb to xfs (previously reiserfs). I can see the change of UUID in blkdev but the contents of /dev/disk/by-uuid did not change
<jazzdog> how can I update /dev/disk/by-uuid ?
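One way to get udev to regenerate those symlinks, assuming a release new enough to ship udevadm (Intrepid is; older releases used udevtrigger), is to re-trigger the block-device rules:

```shell
# Re-run udev rules for block devices and wait until processing is done:
sudo udevadm trigger --subsystem-match=block
sudo udevadm settle

ls -l /dev/disk/by-uuid/   # should now show the new UUID
```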
<bdmurray> kirkland: https://lists.ubuntu.com/archives/ubuntu-devel/2009-January/027206.html
<jdstrand> mathiaz: I responded
<mathiaz> jdstrand: thanks :)
<nat2610> anybody know if there are different versions of 3dm2 (3ware controller) depending on which controller you have, or is it the same app?
<docta_v> there is tw_cli
<Kevin_> Hi Guys.. My Ubuntu server just starting serving up PHP files terribly slow today and I haven't a clue why. The server is running Wordpress. Could anyone take a look at my site and give me any insight?
<ivoks> what's the url?
<jmedina> Kevin_: what about top output? probably you are under attack, well your wordpress, it is common with old versions
<Kevin_> I'm running Wordpress 2.7.
<Kevin_> Also, the site is behind a firewall.
<jmedina> anyone here using paravirtualised drivers under xen?
<jmedina> from redhat or suse?
<baz> what are some main differences between ubuntu server and debian?
<Vog-work> baz: http://www.ubuntu.com/community/ubuntustory/debian
<Vog-work> http://www.ubuntu.com/community/ubuntustory/debian
<baz> thank you
<Vog-work> http://www.linuxforums.org/forum/debian-linux-help/51333-what-difference-between-debian-ubuntu.html google is your friend :)
<baz> Vog-work, those links suck, one is from 2005 and the others are very high level overview
<baz> i know how to use google
<Vog-work> Ok the question was very general as well. Is there something specific you were wondering about?
<Vog-work> (and no offence meant with the google is your friend comment)
<baz> i was hoping for some specific examples of how/why people are choosing ubuntu over debian
<baz> Vog-work, yeah sorry for my aggressive tone too
<Vog-work> Baz, the main reason I get from people are the more up to date repositories and changes in ubuntu.
<baz> Vog-work, i should've probably phrased my question better but i was expecting to lead into more specifics based on what people said :)
<ScottK> I started with Ubuntu Server because I needed stuff that was newer than Sarge had.
<Deeps> that was my reasoning too
<Vog-work> Yeah sure you can go and compile from source whatever you like but the convenience of having made and tested packages is nice.
<ScottK> I also like having the same base under the hood on my servers and desktops.
<oh_noes> people chose Ubuntu over debian for support. That's basically it.
<oh_noes> Large Orgs cannot implement Debian, and don't want to pay so much for redhat linux, so Ubuntu entered the mix
<ScottK> oh_noes: I disagree.  Most people use community support, not paid, and you can get that for both.
<Vog-work> oh_noes: Yeah I hear that as well.
<ScottK> There are lots of large organizations that use Debian too.
<Deeps> although i've since found the ipv6 support in hardy to be sub-par, so i'm considering switching back to debian with the release of lenny
<oh_noes> ScottK, of course most people do, but from a busines point of view, enough customers are using paid support to pay canonical
<Vog-work> ScottK: I used to get flamed in #debian if I didn't ask a question "just so"
<Vog-work> I find that ubuntu is much more friendly to a newbie.
<ScottK> Vog-work: That's true.  Ubuntu generally has a more friendly culture.
<Vog-work> Or at least I did a few years ago.
<ScottK> Also we do try to do stuff to make it easier that's hard to do in Debian.
<ScottK> Because we team maintain it's easier for us to implement stuff that touches several packages.
<baz> in terms of stability there are no issues at all besides the inherent risk of using newer packages? is it totally world-class?
<ScottK> My experience has generally been good.
<Deeps> likewise
<ScottK> Newer also means more support for newer hardware too.
<Vog-work> So has mine, Debian is really quite stable, but will be missing features available in ubuntu.
<Deeps> that said, lenny's quite current for a debian release
<ScottK> Yes, but when did Lenny freeze?
<Vog-work> Deeps: I'll need to go look at lenny, it's been awhile.
<baz> whats a *big* feature ubuntu server has that debian lacks? it seems apt-get is 90% of what we need
<Deeps> ScottK: unsure, but it does pack a .26 kernel i believe
<baz> in terms of ease-of-use that is
<ScottK> As an example, even though Lenny just released and 8.10 released months ago, 8.10 has a newer kernel.
<Vog-work> baz: out of box support for various raid controllers, even the crappy on board ones.
<ScottK> Deeps: Intrepid, that's 4 months old has .27.
<Deeps> ScottK: no doubt, i meant compared to debian releases of yore, it's quite current by debian release standards
<ScottK> Deeps: Certainly.
<Deeps> ScottK: i was in no way attempting to compare against ubuntu
<jmedina> I use Ubuntu for the LTS, and because it just works better than others
<ScottK> Certainty of support times is another difference.
<baz> ScottK, what do u mean by that
<ScottK> Ubuntu Server is supported for 5 years.  Debian Stable is supported one year after the next release.
<ScottK> So how long is Lenny supported for?
<Deeps> baz: tasks, ubuntu has predefined tasks for webservers, mailservers, printservers, and so on
<Vog-work> ScottK: Good call I always forget to point that out.
<Deeps> baz: which makes installation of such services relatively simple
<ScottK> Debian has tasksel too.  In some ways not as refined however.
<baz> Deeps, i think I have seen that before - are u referring to checking the LAMP box during install and having it all built automatically for you?
<Deeps> baz: thats the one
<ScottK> Myself I like that Postfix is the MTA of choice, since that's my MTA of choice too.
<baz> yeah, thats too awesome
<Vog-work> Baz: It's weird I always find things that happen automatically with ubuntu that I used to have to do manually. But I never remember them after the fact.
<Deeps> baz: like ScottK says, debian has tasks too, but they're not as refined in some ways
<ScottK> That said, Debian does stop and spend a long period of time fixing bugs and stabilizing stuff, so it is likely to be more stable for some uses.
<Vog-work> The user base for ubuntu is so much larger and the forums are much more active. So when you get that weird problem someone else has probably already had it and figured it out.
<baz> Vog-work, big time, i often append "ubuntu" at the end of a google search even if my question is not ubuntu related and i get better answers
<Vog-work> And those bug fixes that ScottK mentions are subsequently incorporated into the Ubuntu code base most of the time.
<ScottK> Yes, but we get more new versions with new features in the meantime, so who knows.
<jmedina> one thing I'm thankful for is the MOTU effort to include packages in main
<ScottK> I find Ubuntu very stable for what I do.
<ScottK> Of course since I'm an Ubuntu developer and have fixed a lot of stuff that affected me, it's not random.
<baz> so it seems ubuntu is the way to go for me... I am not running huge clusters that require 99.99999999999% availability and i dont have a full team of geeks backing me up, so ease of use is important
<jmedina> and of course the biggest community
<Vog-work> Baz: alot of us are in the same situation. Hence, why the community exists.
<Deeps> baz: that said, if you're looking for an impartial opinion, asking in #ubuntu-* whether ubuntu is better than X isn't likely to lead to many votes for X ;-)
<baz> hehe, true
<baz> but instead of votes i got logical facts
<Vog-work> By the way it makes me proud to be part of it when I see a user come into the channel asking a question and then a semi clear answer can be hashed out.
<jmedina> and I think it is easier to contribute to ubuntu than debian, Ubuntu has launchpad, the whole infrastructure to work with a community operating system
<Vog-work> On that note I need to head home.... :) Good luck baz
<baz> thanks Vog-work!
<Deeps> indeed, best of luck
<jmedina> the only thing I dont like about ubuntu is the logo
<Vog-work> lol
 * jmedina still prefers slackware logo (not the new one)
<Deeps> bed time for me, work in 8 hours :/
 * Vog-work is a Tux purist
 * jmedina is using a gentoo t-shirt, so I look like a hardcore linux user XD
 * ScottK gives Vog-work a smack just to remind him of the good old days in #debian.
<ScottK> jmedina: Gentoo T-shirt says to me "way too much free time".
<Vog-work> my last emerge world was the day I started to use ubuntu :)
<jmedina> ScottK, actually, using ubuntu there is a lot of spare time
<jmedina> I  have not used gentoo since I migrated to dapper in desktop and servers
<jmedina> but not these days, people in crisis want to learn and migrate to linux, mostly here in mexico
<jmedina> this economic crisis is good for linux and open source
<ball> I have to go
<lonelywind1403> today i used aptitude to update ubuntu server 8.10 and got an error message about GPG (invalid signature) error on http://security.ubuntu.com and don't know how to fix it. ( screen captures here http://www20.tok2.com/home/naaikada/Ubuntu%20Server-2009-03-11-08-48-42.png ). can I resolve this problem?
<ScottK> lonelywind1403: What happens if you do sudo apt-get update?
<ScottK> vorian: All the Main rdepends for libdb4.2 are gone, so it's likely killable.  I'm patching kolab right now, so maybe you could hunt through for any others that are left?
<lonelywind1403> ScottK: still that error message (screen capture: http://www20.tok2.com/home/naaikada/Ubuntu%20Server-2009-03-11-09-23-16.png)
<ScottK> lonelywind1403: What does "dig a security.ubuntu.com +short" return?
<lonelywind1403> ScottK: it return 91.189.88.37
<ScottK> Well that's at least right.
<JanC> lonelywind1403: is there a proxy between you and the internet?
<ScottK> lonelywind1403: https://bugs.launchpad.net/apt/+bug/117010/comments/3 give an example of how to manually add a key for apt.  I'd try that with the key ID in the error message.  JanC's question is also a good one.
<uvirtbot> Launchpad bug 117010 in apt "invalid GPG key - is this a big problem? (dup-of: 24234)" [Undecided,New]
<uvirtbot> Launchpad bug 24234 in apt "apt-get update failing with bad signature." [High,Confirmed]
<lonelywind1403> no, this server is connected directly to the internet
<lonelywind1403> uvirtbot: trying sudo apt-get update -o Acquire::http::No-Cache=true and having same error message
<uvirtbot> lonelywind1403: Error: "trying" is not a valid command.
<JanC> lonelywind1403: uvirtbot is not human  ;)
<lonelywind1403> JanC: yeah, but i'm  wondering how can it answer my question like that. ( searching ??? ) :D
<JanC> lonelywind1403: it saw Scott's mentioning of a Launchpad bug #
<lonelywind1403> JanC: did you get same error message at this time ?
<JanC> I don't have any apt key errors no
<lonelywind1403> JanC: if not, what if you send me a gpg database ( /etc/apt/trusted.gpg )
<ScottK> lonelywind1403: Did you try the manual key adding step in the bug comment I linked you to?
<lonelywind1403> ScottK: ~$ gpg --keyserver subkeys.pgp.net --recv 40976EAF437D05B5 && gpg --export --armor 40976EAF437D05B5| sudo apt-key add -
<ScottK> That looks right.
<lonelywind1403> ScottK: result is  ( gpg:               imported: 1)
<ScottK> Now try sudo apt-get update again.
<lonelywind1403> but when run apt-get update
<lonelywind1403> still same error :(
<ScottK> Odd.
<ScottK> lonelywind1403: You've got the correct key then.  I don't know what to tell you.
<lonelywind1403> ScottK: i'll try copying /etc/apt/trust* from another server, hope it works :(
<ScottK> Good luck.
<lonelywind1403> ScottK: bad news , nothing changed :(
<JanC> I suspect something else is broken
<ScottK> I think so, but I don't know what.
<JanC> maybe check for a leftover lock file for the apt/gpg database?
<JanC> or something like that
<lonelywind1403> maybe something is outdated ( cause I just installed this system yesterday )
<ScottK> A lock file would give a different error.
<JanC> ScottK: I would hope so, but I've seen too many errors being masked by apt and/or dpkg to trust that  :-(
<lonelywind1403> All: should I post this error on forum ?
<napple> When I'm calling a blocking function (like recv(), accept() or something similar) in a process which has just a single thread (that is, I use no threading library or such).. does the blocking call prevent the process from being scheduled for execution? The reason why I ask is this: I have a lot (>400) of small worker processes, 80% of them in a blocked state because they are waiting for data to be received. However, scheduling all 400
<napple> processes is just a nightmare for the platform. So it'd be good if only processes are scheduled which are not in blocking mode.
<lonelywind1403> ScottK , JanC: the problem is solved, :D ( flowing this post http://ubuntuforums.org/archive/index.php/t-821058.html )
<JanC> lonelywind1403: what exactly worked?
<lonelywind1403> sudo apt-get clean
<lonelywind1403> cd /var/lib/apt
<lonelywind1403> sudo mv lists lists.old
<lonelywind1403> sudo mkdir -p lists/partial
<lonelywind1403> sudo apt-get clean
<lonelywind1403> sudo apt-get update && sudo apt-get upgrade
<subchee> hello
<subchee> could you tell me please, which MPM a basic Apache (on Ubuntu Server 8.10) uses by default? prefork or worker?
<hads> Depends. dpkg -l will tell you
<subchee> thank you, apache2-mpm-prefork is installed :)
<subchee> I always forget about basic commands, like dpkg -l :(
<hads> :)
<ball> I'm in the process of installing Ubuntu Server on my old Dell
<ball> I've wasted a whole day trying to bring up NetBSD on another old machine and...basically I give up.  I need to get some work done and Ubuntu Server seems like the path of least resistance.
<ball> hello _coredump_
<_coredump_> hello ball
<ball> What's a light Web server to run on Ubuntu Server ?
<hads> apache, lighty, cherokee, nginx, thttpd
<subchee> could you please take a look at my Apache2 config? I'm trying to tune it... http://pastebin.com/d775394dc
<ball> hads: thanks
<subchee> the HW: AMD 6000+ x2 64bit, 4x2 GB dual channel DDRII 800MHz, SataII disks in a Raid array
<ball> Where can I find out more about local Ubuntu communities?
<subchee> this is a web server with a few domains, but I'd like to host more domains... in the evenings 500+ users are online
<ball> How do you host more than one domain on a Web server?  Do they all share a single IP address?
<subchee> yes
<hads> Depends
 * ball sighs
<ball> I'm pretty sure I want to set up a loco
<ball> ..or at least a local user group
<Blinkiz> kirkland, With ecryptfs, will my folder names be encrypted as well?
<Blinkiz> kirkland, Will ecryptfs work on top of ext4?
<Blinkiz> kirkland, What kind of encryption are ecryptfs using as default? AES 128? Can I change this myself?
<ball> what's the command to type to update an Ubuntu Server host?
<Blinkiz> ball, You mean between ubuntu releases, like 8.04 to 8.10?
<ball> Blinkiz: no, just to make sure that security patches are applied etc.
<hads> sudo apt-get update && sudo apt-get dist-upgrade
<Blinkiz> ball, oki.. Yeah, like hads said
<Blinkiz> oh, not dist-update. Should be just "upgrade"
<hads> dist-upgrade
<ball> Thanks, that's running now
<Blinkiz> ball, :)
<subchee> bye people
<ball> I need to decide what Web server to install
<ball> hello drdebian_
<ball> If I install thttpd, will it automatically be configured to launch at boot time?
<hads> Likely
<Blinkiz> ball, By the way, if you want security updates to be installed automatically on your ubuntu server, you can do "sudo apt-get install unattended-upgrades". The config file will then be in /etc/apt/apt.conf.d/50unattended-upgrades
<ball> Hmm... that might be a good idea.
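For reference, the file Blinkiz mentions is plain APT configuration; a minimal sketch of the stanza that controls which pockets get upgraded automatically (the origin string below is a placeholder; the exact spelling varies by release, so check the file the package actually installs):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu hardy-security";   // placeholder; match your release
};
```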
<ball> hello BrunoX1ambert
<ball> How do I turn on sshd?
<hads> Install it
<ball> E: Couldn't find package sshd
<ball> I think I tried "openssh" too
<ball> should it be OpenSSH ?
<hads> Search
<hads> "apt-cache search" and grep are your friends.
<jpds> ball: openssh-server.
<ball> That's great, thanks.  I've written down the "apt-cache search" tool too.
<jpds> ball: And most services installed will be launched at boot time, unless you change it with update-rc.d.
<ball> That's good, it'll save me a job
<^law1^> guys, if I want to install tomcat, which package should I install? there are so many packages: tomcat6, tomcat6-admin, tomcat6-common, tomcat6-docs, tomcat6-examples,
<^law1^> tomcat6-user
<ttx> ^law1^: depends on what you want to get
<uvirtbot> ttx: Error: "law1^:" is not a valid command.
<^law1^> i want to get tomcat running just like in windows
<ttx> Usually "tomcat6" is what you want (the system daemon)
<ttx> "tomcat6-user" is a package that lets you deploy multiple private instances
<ttx> anyway, the server guide is your friend here
<ttx> http://doc.ubuntu.com/ubuntu/serverguide/C/tomcat.html
<^law1^> oh i'm opening it now :)
<^law1^> ttx, THX this is a great guide :D
<twb> #openvz is silent, but maybe someone here knows.  How do I increase the maximum memory available to an OpenVZ VE?
<twb> It seems to default to 256MB, which isn't enough for fat ol' GHC.
<ball> What's a GHC?
<incorrect> hello, is there a server tuning guide?
<kraut> moin
<ball> hello incorrect
<incorrect> hello ball
<^law1^> moin kraut :)
<ball> hello oly
<oly> hi ball,
 * thefish feels the love
<^law1^> hi ALL :)
<oly> hello to you all :)
<ball> I'd be awfully tempted to install Ubuntu Server on the box behind me if I could get the installer to work.
<oly> where does it get to ?
<ball> oly: if I can get this rewired slightly, I'll try it again.
<oly> okay, just asking possible you need to provide a parameter to the kernel to get it to boot or something :p
<incorrect> what is the ulimit pipe size?
<soren> incorrect: "ulimit -p"
<incorrect> i mean what is pipe size
<oly> guessing some one here might know the answer to this, can i copy files over ssh but use sudo / su to increase my permissions ?
<jserver> could anyone test my upstream for me?
<jserver> someone with at least a 300kb/s download rate
<incorrect> jserver, why don't you use http://www.speedtest.net/
<jserver> not accurate, I am on a 15Mb connection and it reports 5
<incorrect> so
<incorrect> could well be accurate
<jserver> umm
<jserver> 1/3 of the bandwidth allotted to me
<incorrect> yep
<jserver> but I'm not worried about my downstream; I get so many mixed results for my upstream, I would like to have someone just max it out to see what it can do
<jserver> of course I don't wanna bother anyone, just figured I would ask and see if anyone would.
<incorrect> do you have an iso image i can pull from you then?
<incorrect> url?
<incorrect> wget jserver isn't working
<jserver> my server is locked down at the moment
<incorrect> well that isn't exactly helpful
<jserver> is ddc reliable?
<jserver> dcc
<incorrect> I will wget a file from you
<jserver> 1 min
<jserver> heres a 24mb file, should be enough to test the bandwidth
<jserver> http://www.sites4christ.org/LimeWireLinux.deb
<jserver> one of my client sites
<maswan> jserver: 90KB/s
<jserver> you got to be kidding me
<jserver> cant run a server on that kinda shit
 * jserver smacks his ISP in the ear
<jserver> supposed to be a whole lot more than that
<heath|work> Anyone know of a way I can monitor a folder for changes?
<Jubez> nietchzie
<Jubez> my bad, wrong window
<Jubez> forgive the spelling too, hehe
<oly> heath|work, inotify-tools ?
<heath|work> I will look into that thanks
<oly> i don't know for sure, but i know inotify can notify you of filesystem changes; i have never tried the command line tools though
<oly> only ever used it in python :p
<heath|work> funny you mention python, that's what I am reading now
<heath|work> hmm.. check out dnotify
<oly> i used it to detect when a ftp script had downloaded a file so that i could act on it
<oly> works really well
<heath|work> oly, does it run all the time, or do you cron it?
<oly> well the ftp script is a cron job, but the python script is a daemon thats always running in background
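The watch heath|work is after can be sketched with inotifywait from the inotify-tools package; the directory path here is an illustrative assumption, and the call is guarded so it only runs where the tool is installed:

```shell
# Hedged sketch of a folder watch with inotifywait (inotify-tools
# package); WATCH_DIR is an illustrative path.
WATCH_DIR=${WATCH_DIR:-/tmp/incoming}
mkdir -p "$WATCH_DIR"

if command -v inotifywait >/dev/null 2>&1; then
    # Block (up to 5s here) until a file in the directory finishes
    # being written, then print its name.  Use -m to loop forever.
    inotifywait -t 5 -e close_write --format '%f' "$WATCH_DIR" || true
else
    echo "inotify-tools not installed" >&2
fi
```

A daemon like oly's would wrap the inotifywait call in a loop (or use -m) and act on each filename it prints.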
<Scix> a quick question: Is rc.local run after the network is up?
<Scix> i have to wget some configuration files every time a computer starts
<ivoks> yes
<ivoks> unless network is dhcp
<ivoks> if you want to run something after bringing up network interface, /etc/network/if-up.d/ is much better place
<Scix> The network is using DHCP. does if-up.d have root access?
<ivoks> scripts in that directory are executed with root privileges, yes
<Scix> ok, and all i have to do is add it and make it executable?
<ivoks> yes
<Scix> thanks, you are an angel, ivoks
<soren> Scix: Note, though, that it gets executed for *every* interface. Even the loopback interface.
<soren> Scix: ... You can check the $IFACE environment variable for the right interface.
<Scix> soren, thanks for the note :)
<soren> Sure :)
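Putting ivoks' and soren's advice together, a minimal /etc/network/if-up.d/ script might look like this; the script would be installed executable, runs as root, and is called once per interface with the interface name in $IFACE. The interface name (eth0) and the URL are illustrative assumptions:

```shell
#!/bin/sh
# Hedged sketch of an /etc/network/if-up.d/ hook that fetches config
# files once the network is up, skipping lo and any other interfaces.
case "$IFACE" in
    eth0)
        # Our interface came up: fetch the configuration files.
        wget -q -O /tmp/fetched.conf http://config.example.com/host.conf
        ;;
    *)
        # lo or any other interface: do nothing.
        ;;
esac
```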
<ivoks> soren: you will never be a better angel than I am! :p
<ivoks> :D
 * soren bows to ivoks
<cliebow> compiling openldap-2.4.15..when running make test, port 9001 appears to already be in use?? any workaround?
<cliebow> ahh..
<genii> cliebow: lsof -i:9001     should tell you what's currently using it
<cliebow> an opensuse bug says pull localhost from the ::1 line in /etc/hosts..
<cliebow> that got things going..
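What that workaround amounts to, sketched as an /etc/hosts excerpt (your file will differ; the point is only that "localhost" no longer resolves to the IPv6 loopback):

```
127.0.0.1   localhost
::1         ip6-localhost ip6-loopback   # "localhost" removed from this line
```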
<genii> Hopefully you don't use anything ipV6 anywhere
<cliebow> nope..
<cliebow> ill set it back if the tests are successful..
<incorrect> I am looking for a smarter web load generating tool.  siege is great, but it doesn't spider the site or pull any of the links in the page
<incorrect> does anyone here know of a tool to do this?
<JanC> incorrect: what some people do is use a script that "replays" real traffic based on webserver log files, but of course you need the site to already be in production for that
<incorrect> I have a set script i want to simulate
<ball> incorrect: wget?
<incorrect> i could write something, but i don't have massive amounts of time
<heath|work> incorrect, does this help? http://www.joedog.org/index/siege-faq#a21
<heath|work> looks like you can build a url text file
<incorrect> heath|work, siege isn't very smart
<incorrect> i am using siege right now
<incorrect> i think apache's flood might do the trick
<ttx> ScottK: fyi, I should post a FFe for bug 323601 by tomorrow. It should end up on your plate, being server-related.
<uvirtbot> Launchpad bug 323601 in likewise-open "Upgrade to likewise-open 5 for Jaunty" [Wishlist,In progress] https://launchpad.net/bugs/323601
<ScottK> ttx: likewise-open is in Main, so that needs ubuntu-release approval, not motu-release.
<ttx> ScottK: It would be done as a separate, new package, since you can't update from likewise-open 4 to the 5 version.
<ttx> so it should hit universe...
<ScottK> Ah.
<ttx> and conflict with the version in main.
<ttx> ScottK: the FFe will detail that aspect.
<ttx> ScottK: just wanted you to know about it before I post it.
<ScottK> Also make sure you cover if this relates to any spec.
<dendrobates> ScottK: the main reason for doing it like this is to protect users.  likewise did not create a clean upgrade path (IMO) from 4.x to 5.x.
<dendrobates> ScottK: it requires users to rejoin the domain.
<ScottK> dendrobates: Makes sense.  My major question would be should this just wait for Karmic and get an early backport at this point?
<ScottK> I'll have a look at it when it's ready.
<dendrobates> ScottK: we have been discussing that, but there is huge customer demand; 5.x was actually released during the intrepid cycle and we held off then.
<ScottK> I see.  Grumble.
<dendrobates> ScottK: so users are doing the upgrade themselves, in not so good ways.
<ScottK> Well if Canonical is up to doing the archive admin work on it and there's a ready package there isn't a lot of reason to not put it in.
<dendrobates> ScottK: but look at it and let me know what you think.
<ScottK> Also if we get it in now, we could backport to Intrepid/Hardy soonish and reduce non-standard installs.
<bdmurray> Where does bug 341030 belong?
<uvirtbot> Launchpad bug 341030 in ubuntu "Local mail delivery missing in default server install" [Undecided,New] https://launchpad.net/bugs/341030
 * ScottK looks
<ScottK> bdmurray: That behavior is by design.
<ScottK> bdmurray: I commented in the bug.
<bdmurray> ScottK: great, thanks for doing that!
<ScottK> The reporter won't be happy, but at least they have an answer.
<orudie> how would i add a user to the sudoers list
<giovani3> orudie: you either add the user to the admin group, or you can explicitly add the user in the sudoers file: https://help.ubuntu.com/community/Sudoers
<orudie> is it a good idea to match the hostname with a domain name /?
<orudie> if the primary use of the vps server is a website for this domain name
<ivoks> how can you match a hostname with a domain name?
<ivoks> hostname.domainname
<ivoks> server.server? :)
<orudie> ivoks, hi friend ?
<orudie> :)
<orudie> ivoks, we are about to do the transition of the email accounts to the host you helped me with a few weeks ago
<ivoks> ok
<orudie> ivoks, can you help me with some stuff ?
<jcrow> Anyone knowledgeable about LVM? After completing an 8.10 install I added a vg and created my lvs. The three lvs are mounted as /tmp /home and /var. /var fails to mount on boot but the other two automount ok. The /var lv can be mounted after boot so I know it is ok. Any idea where to start fixing this one?
<ivoks> sure
<ivoks> jcrow: /etc/fstab
<jcrow> ivoks: /etc/fstab is configured for /var partition
<ivoks> how is it configured?
<jcrow> You may have hit the nail on the head. I went to look at the fstab and noticed that the line for /var has "default" instead of "defaults" for the mount options. I did not notice that before. I am rebooting now. This is a server so it takes a minute or five.
<jcrow> ivoks: /etc/fstab was the problem. I had been digging through the Ubuntu boot process for the last hour to find where it might be failing. All I needed was a kick in the right direction. Thanks!
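For reference, the one-character difference jcrow hit, as an fstab line (the device path is illustrative):

```
# <file system>   <mount point>  <type>  <options>  <dump>  <pass>
/dev/vg01/var     /var           ext3    defaults   0       2
# "default" (without the s) is not a valid option keyword, so the
# mount fails at boot; "defaults" is what fstab expects.
```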
<ivoks> jcrow: np
<orudie> ivoks, how reliable is zoneedit ?
<ivoks> what's that?
<ivoks> ah... dns service
<ivoks> never used, i have my own dns servers
<giovani3> I've had success with dnsmadeeasy
<giovani3> they're quite diversified, and unbelievably cheap for what they offer
<giovani3> http://www.dnsmadeeasy.com/s0306/price/dns.html
<orudie> ivoks, so if i want my email clients to be set up as pop, can i use mail.mydomain.com as the incoming mail server, or should it be pop.mydomain.com instead?
<ivoks> err?
<ivoks> i don't understand
<ivoks> you can use single hostname
<ivoks> you don't have to set up different hostnames for different services
<frojnd> Hello there
<frojnd> Any good guys here ?
<ivoks> no, only bad guys
<frojnd> :P
<frojnd> I've finally shut down the server after 2 months or more..
<frojnd> not because I wanted to but because I had to; they had to do some electricity maintenance..
<frojnd> Somehow something is going wrong
<frojnd> I got a message that the filesystem check went wrong or something and to look into /var/log/fsck/checkfs http://pastebin.ca/1358382
<frojnd> What exactly does this error mean ?
<frojnd> Does it have something to do with my external USB hard drive ?
<frojnd> did I fstab it wrong ?
<frojnd> Any ideas ?
<ivoks> you don't know what UUIDs are?
<frojnd> ivoks: id of hard disk ?
<ivoks> right
<ivoks> unique id of the partition
<frojnd> ivoks: and something is crossing over
<ivoks> no...
<ivoks> you are using -A switch
<ivoks> and fsck can't read lines with UUID in /etc/fstab
<ivoks> does this happen during the boot or you are manually running fsck?
<frojnd> ivoks: no during the boot :) as I said the first time shutdown and start today: http://pastebin.ca/1358385
<frojnd> this is my /etc/fstab
<frojnd> not sure if it's right
<ivoks> not during the boot OR no, during the boot?
<ivoks> i see
<ivoks> you configured your fstab wrong
<ivoks> #
<ivoks> UUID=B4B0049FB00469E6 /mnt/p2p ext3 relatime 0 2
<ivoks> that last '2' at the end means 'stop everything if you find an error on this partition'
<ivoks> '0' would mean 'just ignore everything'
<ivoks> for partitions that aren't crucial for the server's operation, it should be 0
<frojnd> ivoks: the funny part is I can't see my 1tb external HDD if I do fdisk -l, just the 200gb /mnt/p2p and /home, / on 160gb: http://pastebin.ca/1358388
<ivoks> 2 will stop booting in case of an error, as you can see in your case
<frojnd> ivoks: so there are errors :S
<ivoks> my guess is that you don't have that disk in server
<ivoks> or your IDE controller is broken, or something else
<ivoks> anyway, replace that '2' with '0'
<ivoks> reboot
<ivoks> and then fix the partition problem
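For reference, the sixth fstab field ivoks is describing is the fsck pass number. The line from the channel with the suggested change applied (the UUID itself later turns out to be bogus, but the pass field is the point here):

```
# <pass>: 0 = never fsck, 1 = the root filesystem, 2 = checked after
# root.  A pass-2 entry that fails its check halts the boot.
UUID=B4B0049FB00469E6  /mnt/p2p  ext3  relatime  0  0
```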
<frojnd> ivoks: about the IDE cables, I'll stop the server now and check the cables
<ivoks> frojnd: don't
<ivoks> frojnd: fix fstab, boot the system, let it work
<ivoks> and then check if there are driver problems or something like that
<frojnd> so just change 2 into 0
<ivoks> maybe it isn't hardware problem
<frojnd> I see
<ivoks> her, 2 into 0 only for /mnt/p2p
<ivoks> s/her/yes/
<ivoks> grr...
<frojnd> :)
<frojnd> restarting
<frojnd> let me see the monitor
<ivoks> that UUID is wrong
<ivoks> you added it yourself?
<ivoks> it's too short
<ivoks> for ext3
<frojnd> ivoks: I've changed it into 0 and it still says that the UUID is wrong :)
<frojnd> it doesn't exist
<ivoks> cause it is wrong
<frojnd> ivoks: for /mnt/p2p
<ivoks> but the server will boot now
<frojnd> ivoks: yes
<frojnd> ivoks: before you help me further...
<frojnd> ivoks: I have to plugin CD-ROm
<ivoks> frojnd: so, again, you added that uuid yorself?
<ivoks> yourself
<frojnd> ivoks: I think I found some howto to identify UUIDs..
<ivoks> ok, well, this one is wrong :)
<frojnd> ivoks: If I have only ONE IDE cable and HD and CDROM
<ivoks> log in to your server
<frojnd> can I put jumpers to Cable select ?
<ivoks> don't touch hardware
<frojnd> so CDROM and HD would be recognized
<ivoks> just boot your server
<frojnd> ivoks: yes yes..
<frojnd> ivoks: this one isn't an ubuntu issue :)
<frojnd> before I'll set any UUIDS I'd also like to plugin CDROM
<frojnd> now I only have HD
<ivoks> ok
<frojnd> I just have to put the pins on Cable select ?
<ivoks> yes
<frojnd> on both CDROM and HDD ? if I have one cable for both ?
<frojnd> ok
<frojnd> brb
<frojnd> Thanx for now, and please don't go away :)
<frojnd> I'll need some more help from you :)
<frojnd> ivoks: here ?
<heath|work> how do you view the uid of a user?
<ivoks> yes
<frojnd> ivoks: now I only see 160GB DISK :D
<frojnd> however cdrom works
<frojnd> ivoks: this has something to do with the pins I've selected, right ?
<chamunks> Hey i need some information on importing and exporting mysql thru phpmyadmin.
<frojnd> I've put the pins on the hdd to cable select, also on the cdrom, but only the cdrom is recognized
<ivoks> frojnd: i still don't understand why you touched hardware in the first place
<frojnd> :S
<frojnd> ivoks: I'd like cdrom to be there...
<ivoks> your system was fine
<frojnd> ivoks: but without cdrom
<ivoks> you really need it?
<chamunks> I can't remember my passwords to my databases but if i'm exporting them to a plaintext file that shouldn't matter once i've imported them to the new system?
<jbernard> heath|work: grep $USERNAME /etc/passwd
<heath|work> thanks jbernard
<ivoks> chamunks: mysqldump --default-character-set=utf8 -u root -p --all-databases > mysql_dump_all_databases.sql
<chamunks> I'm pretty sure that my assumption is correct but i need someone who's done this before to tell me that i'm okay to format my old server.
<ivoks> chamunks: then, on import, again --default-character-set=utf8
<jbernard> heath|work: more specifically: grep ^$USERNAME /etc/passwd | cut -d: -f3
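Two other ways to get the same number; "root" is just an example account that exists on every system:

```shell
# Print a user's numeric UID.
id -u root                          # simplest form
getent passwd root | cut -d: -f3    # like the grep above, but getent
                                    # also consults LDAP/NIS sources
```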
<frojnd> ivoks: ocasionally :\
<chamunks> ivoks, thanks so much.  since i dont like using root for my web apps and only like assigning separate users for separate processes will this be easy to do on the new system?
<frojnd> ivoks: not important as we speak... I'll play with cables some other time. How can I figure out the UUIDs for both disks ? the one that's on the IDE cable and the one that's plugged in via USB ?
<chamunks> ivoks, i'm not mysql savvy. I just know that it's required to keep settings etc.
<ivoks> ok, too much lines with 'ivoks' on the screen
<frojnd> :)
<ivoks> frojnd: sudo vol_id /dev/sdXY
<mathiaz> kirkland: do I have to update the kernel on hardy if I want to test/run kvm 84 from the ubuntu-virt PPA ?
<chamunks> oops :P  ill keep those lines you hooked me up with
<dementorr> hey, i need some help because i'm new with linux. i installed verlihub but i cannot run any lua scripts
<ivoks> mathiaz: imo, yes; i don't see how it would work without kernel module of same version
<ivoks> chamunks: different mysql servers/clients have different character sets configured
<ivoks> while we are at it
<chamunks> Im going ubuntu to ubuntu im just going through some growing pains.
<ivoks> mathiaz: why do we predefine latin1-swedish as the default character set in mysql? :)
<ivoks> mathiaz: why not... utf-8?
<Nafallo> oooh
<Nafallo> Sweden! :-D
<ivoks> sweden, right :)
<mathiaz> ivoks: probably because this is what upstream does...
<ivoks> it's a compile time option
<ivoks> iirc
<mathiaz> ivoks: I agree we should use utf-8 - I think 5.1 may use it though
<ivoks> didn't try it yet :/
<ivoks> chamunks: well, however, it's wise to add --default-character-set=utf8 during export and import
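The round trip ivoks describes, sketched as two shell functions so each side can be run on the right machine (the dump filename is illustrative; both commands prompt for the MySQL root password):

```shell
# Dump everything on the old server, forcing utf8 on the way out,
# then load it on the new server, forcing utf8 on the way back in.
DUMP=mysql_dump_all_databases.sql

export_all() {
    mysqldump --default-character-set=utf8 -u root -p \
        --all-databases > "$DUMP"
}

import_all() {
    mysql --default-character-set=utf8 -u root -p < "$DUMP"
}
```

As discussed below, leave the charset switch off both sides if the existing data is deliberately staying in another character set.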
<frojnd> ivoks: ok, which one is correct ? ID_FS_UUID=c3652bfc-ad3b-479f-96d0-fbac77abf314 or ID_FS_UUID_ENC=c3652bfc-ad3b-479f-96d0-fbac77abf314 ?
<chamunks> standards rock
<ivoks> frojnd: notice that they are the same :)
<frojnd> ivoks: right :)
<frojnd> ivoks: thanx
<frojnd> I just have to replace the previous one with this one now
<ivoks> yes
<frojnd> ivoks: what about the USB one ? If I can't even see it on fdisk -l
<kirkland> mathiaz: i just reinstalled my hardware with hardy server
<kirkland> mathiaz: let me do some harder kvm-84 testing on hardy
<kirkland> mathiaz: previously, i just sniff-tested it on my mythtv box
<kirkland> mathiaz: but i can't risk downtime on that machine--my wife kills me when mythtv goes down :-)
<ivoks> frojnd: usb/scsi/lvm/ide/whatever everything's the same
<kirkland> "mission critical server" in the kirkland household
<frojnd> ivoks: after what command?
<chamunks> ivoks, what if for some reason the database's "collation" before export is latin1_swedish_ci
<chamunks> For some strange reason.
<ivoks> :)
<ivoks> that's default in ubuntu
<ivoks> chamunks: then you can import without --default-character-set
<chamunks> so that will convert to utf8 with no hiccups?
<ivoks> nope
<ivoks> it will stay like that
<chamunks> sorry i meant if i used the set=utf8 switch you sent do you think it would be okay?
<ivoks> no, don't use switch
<ivoks> i told you to export and import with that switch
<ivoks> if you exported with another charset (and you did), then don't use that switch
<frojnd> I don't think I understand what you meant with usb/scsi/lvm/ide/whatever everything's the same, ivoks
<chamunks> right my wording is bad
<chamunks> I haven't exported yet; i'm looking at the phpmyadmin stuff to give myself a visual.
<ivoks> frojnd: it will work with usb
<frojnd> ivoks: what exactly will work with usb
<ivoks> 20:19 < frojnd> ivoks: what about the USB one ? If I can't even see it on fdisk
<frojnd> ivoks: I can't see the 1tb storage on fdisk -l :P
<frojnd> which is connected via usb
<chamunks> ill use the default charset i wont complicate things thanks for the advice.
<ivoks> frojnd: check dmesg
<frojnd> ivoks: what should I be looking for ?
<chamunks> ivoks, thanks again take care.
<frojnd> ivoks: at the end some usb device is using a whole lot of addresses :D
<frojnd> now I can't see the whole dmesg
<frojnd> just the last lines
<frojnd> ivoks: at the beginning of the boot process there is a message that this ID..something should use another chip, perhaps another driver... dunno what exactly it says
<frojnd> ivoks: here is the dmesg output, right after the boot: http://pastebin.ca/1358427
<ivoks> take care people...
<frojnd> ivoks: don't go :S
<ivoks> frojnd: something's wrong with your usb disk
<ivoks> try disconnecting and connect it again
<frojnd> ivoks: ok
<frojnd> ivoks: wait
<frojnd> when in OFF mode right ?
<frojnd> maybe usb port isn't recognized :S
<ivoks> for the love of god...
<ivoks> i better disconnect than continue...
<ivoks> bye
<frojnd> hehe :)
<frojnd> I scared him
<mathiaz> kirkland: ok - I was planning to update to kvm 84 on my hardy server once I've finished with this round of iso testing
<mathiaz> kirkland: and then re-rerun the iso tests with kvm 84 - that should be a good test case
<mathiaz> kirkland: as far as stability is concerned.
<kirkland> mathiaz: i would definitely appreciate your help testing kvm-84
<kirkland> mathiaz: i'm still working out the kernel bits
<mathiaz> kirkland: ok - as long as I can back out the changes easily I'm glad to help
<mathiaz> kirkland: but for now I cannot use kvm 84? I should wait for the kernel bits?
<kirkland> mathiaz: i think you can use the userspace for kvm84
<kirkland> mathiaz: but you won't get all of 84's goodness until you use the kernel module too
<mathiaz> kirkland: hm ok.
<mathiaz> kirkland: and the plan is to try to push both userspace and kernel ?
<kirkland> mathiaz: the userspace bits will get you a much newer, better, shinier qemu
<kirkland> mathiaz: the kernel space will get you a better kernel module, for handling the virtualization acceleration stuff
<kirkland> mathiaz: there's important magic in both places
<kirkland> mathiaz: bugs fixed by each
<mathiaz> kirkland: my question is actually whether I should upgrade to userspace 84 now and if that would be helpful to you
<kirkland> mathiaz: and potential regressions introduced by each :-)
<mathiaz> kirkland: or wait for you to sort out the kernel bits
<kirkland> mathiaz: i think it would be interesting to test the userspace
<kirkland> mathiaz: knowing that existing kvm-62 kernel issues won't be solved by a kvm-84 userspace
<kirkland> mathiaz: however, kvm-62 userspace issues (qemu-related) may well be solved by kvm-84 userspace
<mathiaz> kirkland: ok - so I'll give 84 a shot once I've finished this round of iso testing
<kirkland> mathiaz: thanks.
<uvirtbot> New bug: #341278 in tomcat6 (universe) "CVE-2009-0781: XSS in tomcat6 and tomcat5.5" [Undecided,Confirmed] https://launchpad.net/bugs/341278
<pjsturgeon> wereHamster: Looking at your script to move finished torrents, it has no clear support for adding in authorization. Where do I stick in a user/pass?
<pjsturgeon> I tried a few things with no luck
<pjsturgeon> lemme start again, this script: http://pastie.org/357738 gives the error: ** Exception: Problem with http://localhost:9091/transmission/rpc
<mrwes> I just installed Denyhosts on my server, and now I can't ssh into my box. The /etc/hosts.deny is adding my IP -- how do I change that?
<mrwes> do I need to add myself to the /etc/hosts.allow?
<mrwes> hrmm
<hansin> does anyone know if there is a preferred CLI method to upgrade to a new release?  i know update-manager -d works in GUI land, but has anything been made to work for CLI?  i usually just change my apt sources to the newer version but that may not be the safest, i am thinking.  thanks.
<ajmitch> hansin: do-release-upgrade
<goofey> how about sudo apt-get dist-upgrade?
<hansin> ajmitch: thanks, that looks like CLI style update-manager equivalent.
<hansin> goofey: that is how I usually do it, and have never had issues (after updating the sources list), but I think there is a small chance that something could be missed if there are some major infrastructure adjustments.  i think "dummy/transitional" packages are sometimes used to help alleviate this.
<goofey> hansin: ahh, I wasn't sure that was the "official" way to do it, just one way I happened to know of
<hansin> goofey: that is how i have always done it actually, but wanted to find a more "official" way.  i'm not saying this is it, but i bet it is considered more appropriate.  maybe it doesn't matter, but i'm going to try this way when jaunty gets released.
<goofey> hansin: my guess is there will be "upgrade your server to jaunty" webpages after the release too
<MK-BB> hi
<MK-BB> anyone here?
<hansin> sure, why not.
<MK-BB> I was wondering if you guys know good website for a beginner to set up chtoor
<MK-BB> chroot*
<hansin> MK-BB: take a look at this; there seem to be quite a few options, though I don't know much about how to chroot: http://www.google.com/search?hl=en&q=ubuntu+chroot+howto&aq=0&oq=ubuntu+chroot+how
<MK-BB> hansin, I was trying to set up jailed ssh but google doesn't seem to help me
<mrwes> I just installed Denyhosts on my server, and now I can't ssh into my box. The /etc/hosts.deny is adding my IP -- how do I change that?
<mrwes> do I need to add myself to the /etc/hosts.allow?
<mathiaz> kirkland: http://people.ubuntu.com/~mathiaz/jaunty-raid1-degraded-boot-fail.png <- is this the RAID1 failure you saw?
<kirkland> mathiaz: http://launchpadlibrarian.net/23151208/Screenshot.png
<kirkland> mathiaz: that's my error
<kirkland> mathiaz: which looks to be the same
<kirkland> mathiaz: https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/334994
<uvirtbot> Launchpad bug 334994 in linux "Degraded RAID boot fails: kobject_add_internal failed for dev-sda1 with -EEXIST, don't try to register things with the same name in the same directory" [High,In progress]
 * mathiaz nods
<Nafallo> so ehrm... once it works. there will be a way for me to tell the machine to always start in degraded mode without me having to run around to the data centres and press buttons, right?
<uvirtbot> New bug: #341432 in krb5 (main) "package libkrb5-dev 1.6.dfsg.4~beta1-3 failed to install/upgrade: failed to delete `/usr/lib/libkrb5support.so.dpkg-tmp': Read-only file system" [Undecided,New] https://launchpad.net/bugs/341432
<kirkland> Nafallo: yes, i added that in intrepid
<kirkland> Nafallo: and backported it to hardy
<kirkland> Nafallo: you can preseed the debconf question
<Nafallo> aha! it should already work then :-P
<kirkland> Nafallo: or configure with 'dpkg-reconfigure mdadm'
<Nafallo> kirkland: was just about to ask. thanks.
<kirkland> Nafallo: there's a question about 'do you want to boot your raid if it becomes degraded'
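That debconf question can also be answered ahead of time; the preseed line below is my reading of the BootDegradedRaid spec, so verify the key name with "debconf-show mdadm" on your release:

```
# Feed to debconf-set-selections before installing/reconfiguring mdadm:
mdadm   mdadm/boot_degraded   boolean   true
```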
 * Nafallo fixes
<Nafallo> until I have money to buy an E200 and a hotswap kit :-)
<_dbd_l12> just had the hardest (well - most tries) install of ubuntu server 8.04 on an old machine (P III) with an AOpen DVD drive. The iso I md5summed and burned to CD kept getting random read errors during install... Tried the usual suspects (slow burning, different cd-types, bootflags (all_generic_ide, nodma etc.)) but in the end i could only resolve it by burning the iso to a dvd instead and booting from dvd. The installation went flawlessly from
<_dbd_l12> there. Should i report a bug with details concerning hardware for this?
<ivoks> did you do md5sum on CD?
<_dbd_l12> i did the md5sum on the iso before burning (using a different machine). I still have the cd so i could still md5sum them...
<ivoks> try that
<ivoks> so you could eliminate 'burning errors'
<_dbd_l12> ok, i'll see what md5summing the disc itself gives
<_dbd_l12> damn, it seems i have already thrown the used cd's away, so i'll have to reburn / check if the problem persists and then md5sum the disc itself. if the md5 is ok i'll come back, but that will prob. be tomorrow or in 2 days. thanks for the idea ivoks
<ivoks> np
<ivoks> good luck
<mathiaz> kirkland: http://testcases.qa.ubuntu.com/Install/ServerRAID1 <- does that make sense?
<mathiaz> kirkland: does it cover all of the RAID1 use case?
<Nafallo> I'd suggest swap to be on RAID0 to be a more real world example but... *shrugs*
<Nafallo> good enough for a test case I guess.
<mathiaz> Nafallo: why raid0? if you lose a disk you may be screwed
<maxb> Why would you put swap on a raid0?
<Nafallo> mathiaz: how often do you need swap afterwards? it's better to keep that as fast as possible in case it ACTUALLY is needed.
<kirkland> mathiaz: raid0 is important for non-mission-critical data that you want to serve very fast
<maxb> Surely two separate swap partitions would be better?
<maxb> Save the raid overhead
<kirkland> mathiaz: my archive mirror, and iso mirror are on a RAID0
<Nafallo> maxb: not sure it's such a huge overhead?
<kirkland> mathiaz: if i lose a disk there, i gotta resync with the archive, but meh ...
<kirkland> mathiaz: i'd rather have the more space, and faster read/writes
<mathiaz> kirkland: oh yes - I agree with this use case
<mathiaz> kirkland: however if you setup a system with RAID1 it means you wanna protect yourself from disk failure
<JanC> kirkland: if you want it to be fast you must cache your archive/mirror in RAM  ;)
<kirkland> mathiaz: that description looks good
<Nafallo> raid0 can't compete with memory performance, but it gets a hell of a lot closer than raid1 :-P
<mathiaz> kirkland: and by putting swap on raid0 you're not doing it correctly
<kirkland> mathiaz: to be exhaustive, you should try booting from each disk in the array
<ivoks> just create two swap partitions
<ivoks> raid0 for swap is pointless
<kirkland> swap sucks altogether, i don't use it at all
<mathiaz> right - but raid1 for swap makes sense IMO
<kirkland> 8GB of ram is enough :-)
<kirkland> mathiaz: i agree with you
<ivoks> i use raid1 for swap
<kirkland> mathiaz: i have exhaustive test instructions at https://wiki.ubuntu.com/BootDegradedRaid#Implementation
<mathiaz> kirkland: wiki page updated with a link to your instructions. Thanks for the review!
<kirkland> mathiaz: you bet;  thanks for putting this together
<kirkland> mathiaz: i hate to see regressions in this :-)
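The "two plain swap partitions" alternative ivoks and maxb raise can still stripe like RAID0: give both partitions the same swap priority and the kernel interleaves pages across them. A hypothetical /etc/fstab fragment (device names assumed):

```shell
# /etc/fstab sketch -- equal pri= values make the kernel round-robin
# swap pages across both devices: RAID0-like speed without md overhead.
# Device names are placeholders.
/dev/sda2  none  swap  sw,pri=1  0  0
/dev/sdb2  none  swap  sw,pri=1  0  0
```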
#ubuntu-server 2009-03-12
<Elite> Hey guys
<Noah0504> Hey.
<sgrover> Looking for tips.  I VPN to a remote buntu server, and want to mount some of its directories on the local workstation.  Do I really need Samba/NFS/sshfs to do this?  If so, which would you recommend?
<twb> sgrover: to access a remote filesystem as part of the local directory tree, you need a network filesystem.
<twb> sgrover: that's unavoidable.
<twb> sgrover: as you are connecting over a VPN, I presume you do not need to encrypt traffic over the network filesystem, so NFS and CIFS are valid choices.
<twb> sgrover: in general, you should use NFS for unix-to-unix networking, and CIFS if Windows is involved.  sshfs is only for emergencies, as it is neither an efficient nor a featureful filesystem.
<sgrover> twb: Thanks.  Just needed a sanity check before I began setting up Samba.  (I need to share/connect to Windows resources from that server anyways...)
<twb> sgrover: if you already have Kerberos authentication set up, then AFS, NFSv4 or CIFS might be good choices.
<twb> Note that in all cases, you *will* need a centralized authentication protocol (e.g. LDAP or NIS) -- or you will need to manually synchronize UIDs and GIDs between hosts.
<twb> NIS is usually discouraged because -- though simple -- it is old, inflexible and has numerous security problems.
<sgrover> I was going to keep it simple and avoid kerberos (for now).  For the most part, the server needs to mount a single windows share so a custom app can access files.  Me mounting the server's directories is for development purposes....
<twb> LDAP servers are very difficult to set up if you aren't used to LDAP.  The client side is now pretty easy.
<twb> OK, if you just need to mount CIFS (Windows share), then you can put that in /etc/fstab.
<sgrover> I've gotten familiar with LDAP - our custom app is PHP based and uses the Active Directory server for authentication and management.
<twb> Righto.
<sgrover> Thanks though.  Samba it is.  Perhaps with AD authentication... we'll see.. :)
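The fstab CIFS mount twb mentions might look like this; the server name, share, mount point, uid/gid and credentials file are all hypothetical:

```shell
# /etc/fstab sketch for a CIFS (Windows) share; every name here is a
# placeholder.  The credentials= file keeps the username/password out
# of world-readable fstab.
//winserver/appfiles  /srv/appfiles  cifs  credentials=/etc/cifs-cred,uid=1000,gid=1000,iocharset=utf8  0  0
```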
<ball> What's an appropriate choice to serve up filespace from an Ubuntu Server to Xubuntu workstations?
<JanC> NFS ?
<ball> JanC: that could work, conceivably.
<twb> ball: on a trusted network, NFS between Unix hosts is a good rule of thumb.
<ball> twb: that was my first thought, but I didn't know if there was some modern replacement
<twb> ball: there isn't really.
<ball> Okay, well that answers that one :-)
<twb> ball: all network filesystems suck badly, but the ways in which NFS sucks aren't so important for Unix on a LAN
<ScottK> ball: Is it on a secure LAN or does it touch the internet?
<ball> ScottK: there's a gateway to the Internet (NAS, with almost everything blocked inbound)
<ScottK> Because once you get out on the big bad Internet, I'm a fan of SFTP.
<twb> ScottK: that's not a filesystem, though sshfs is.
<twb> However mounting something like /home over sshfs is not too fantastic.
<ScottK> Yes.
<ScottK> Most any form of remotely mounted home has some serious downsides.
<twb> AFS is the obvious, though hairy, choice for /home over the internet.
<twb> See e.g. hcoop.net
<ball> okay, thanks.
<ScottK> I've no idea how it is in Xubuntu, but in Konqueror or Dolphin the fact that it's remote over SFTP is mostly hidden.
<JanC> considering the upload speed of most internet connections, no network filesystem is useful for /home
<twb> ScottK: that's not at the filesystem level.
<twb> ScottK: so for example you can't cd into the SFTP directory and run "make" or "gcc"
<twb> JanC: it depends on the usage for /home, too.
<twb> JanC: I mean, a GNOME desktop uses /home very differently to a shell account on a VPS
<JanC> twb: on Ubuntu (and I suppose Xubuntu) you can use gvfs-fuse and make can work fine...
<twb> JanC: that's basically the same approach as sshfs -- but that's no longer just SFTP.
<ScottK> twb: True.
<ball> I'm literally failing to keep my eyes open.
<twb> JanC: that's all I was saying by "SFTP isn't a filesystem"
<ball> Goodnight everyone.
<JanC> twb: of course, it's just used as a filesystem  ;)
<ScottK> twb: I agree, but for a lot of use cases it's close enough.
<twb> SFTP has locking issues -- not that NFS and AFS don't.
<twb> YMMV, etc.
<ScottK> It is relatively simple to get up and running too.
<twb> ScottK: yes, definitely.
<twb> ScottK: especially if you already use ssh everywhere
<twb> ScottK: that, and it's easy to set up "ad hoc" mounts as necessary, without local root privileges.
<ScottK> For my own use I have yet to hit a need for anything beyond ssh + sftp.
<twb> JanC: have you compared gvfs-fuse to sshfs?
<ScottK> Now I don't know if it's because I'm not using the Gnome desktop, but just some apps, but in KDE apps I can file, save as, sftp://.... and save a file from any app.  I can't do that from Gnome apps at all.
<twb> I'd be interested to read a technical compare-and-contrast.
<ScottK> Is that typical?
<twb> ScottK: you're supposed to be able to do that in a full GNOME environment
<twb> ScottK: you might need to turn on some GNOME daemons or something
 * ScottK has no idea.
<twb> ScottK: is gvfs-backends installed, and is the gvfs daemon process active?
<ScottK> No to both.
<ScottK> So that's probably it.
<twb> ScottK: that's probably why
<ScottK> The kid's computer has it.  Thanks to 'aptitude why' I can see a depends that needs fixing.
<TimReichhart> hi guys I need some help on fixing a problem with nagios3 I already posted this on the forums and it seems im not getting any help but here is the post on the forum http://ubuntuforums.org/showthread.php?t=1093018
<uvirtbot> New bug: #341526 in mountmedia "Add support for Dell Driver Injection Disk" [Undecided,New] https://launchpad.net/bugs/341526
<Brando753> anyone know how to get the server to work with a wifi card during install
<Brando753> ?
<tmart> have you seen this forum post? http://ubuntuforums.org/showthread.php?t=376283
<tmart> not sure if it will help
<Brando753> ill take a look
<kirkland> ScottK: https://edge.launchpad.net/bugs/341237 is the qemu sync that awaits motu-release approval
<uvirtbot> Launchpad bug 341237 in qemu "Please sync qemu 0.10.0-1 (universe) from Debian unstable (main)." [Medium,In progress]
<TimReichhart> anybody help me with the issue im having with nagios3
<twb> Ooh, qemu 0.10.
<coffeedude> ttx: morning.
<ttx> hey
<ttx> coffeedude: anything for me ?
<coffeedude> Hehehe. didn't expect you in so early local time.
<ttx> didn't expect you so late local time.
<coffeedude> Yeah.  Have the patch and testing one more local rebuild.
<coffeedude> I don't think I'll be going to bed tonight.
<coffeedude> I'll send you the working patch while I'
<coffeedude> while I'm still testing.
<ttx> sure.
<ttx> Also if you have the blurb about new features, I could use it today.
<coffeedude> I'm working on getting that for you as well.
<ttx> Great.
<ttx> There is a potential openssl-vs-GPL linking issue in -lsass that I need to investigate too
<coffeedude> If you need a license exception from me to link with openssl, I can manage that.  I know the openssl vs. gnutls is always an issue.  Hadn't thought about it affecting you before now though.
<ttx> coffeedude: yep -lsass and -rpc end up linked with openssl
<coffeedude> you're gonna make me do something about that aren't you. :-P
<ttx> coffeedude: yes, I'll probably need something, I admit :P
<coffeedude> Can I add a linking exception clause for openssl to the likewise-open license files?
<coffeedude> ttx: work-in-progress patch files en route to you now.
<ttx> coffeedude: that would mean rereleasing the source tarball, right
<coffeedude> ./sigh...Maybe not.
<coffeedude> Let me think.
<coffeedude> need to walk around some.....been sitting too long. be back in a few minutes.
<ttx> coffeedude: I'm fine with a new source tarball, but that might be complicated to release on your side.
<ttx> k
<coffeedude> ttx: Found a few minor issues with the patches I sent you.  one more round I think.  Shouldn't take long.
<ttx> k
<uvirtbot> New bug: #334781 in bacula (universe) "bacula 2.4.4 configuration fails" [Medium,In progress] https://launchpad.net/bugs/334781
<twb> Sanity check: 255.255.255.252 is /30, right?
<hads> http://krow.net/dict/subnet.html
<ivoks> yes
<twb> Thanks.
<twb> I hate the bloody decimal dotted quad notation
<ivoks> if ip is .1
<ivoks> er... ignore last comment :)
<twb> ivoks: actually I was going to use /31 with .2 and .3, but then I wondered if that would mean that .2 and .3 were actually the equivalent of .0 and .255 (i.e. broadcast IPs).
<Nafallo> twb: they aren't
<twb> This is a "network" of two machines, see.
<ivoks> sudo apt-get install ipcalc
<Nafallo> ehrm. well. sipcalc 192.168.0.1/31
<hads> The cheat sheet
<soren> twb: Interesting. I like that notation a lot.
<twb> soren: I would prefer dotted hex.
<twb> C0:A8:00:00/24
<ivoks> i would prefer ipv6 :)
<twb> ivoks: well sure
<ivoks> everything else is just workaround
<soren> ivoks: I disagree
<soren> You can use that notation for other purposes than subnetting.
<ivoks> well, i was refering only to subneting
<soren> Example: A company I used to work for hosted mail servers for other companies. They each had their own mail server. We dedicated a /24 subnet to each customer. The mail server was always XX.YY.ZZ.25 (IIRC). That enabled us to have a firewall rule allowing access to and from port 25 on 192.168.0.25/255.255.0.255.
<soren> ...instead of one firewall rule per customer per service.
<soren> There's obviously no way to do that with CIDR notation.
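As a sanity-check aid for the question that started this thread, prefix-to-netmask conversion is simple bit arithmetic; a minimal sketch in shell (assuming bash-style `$(( ))` arithmetic). It confirms /30 = 255.255.255.252, and note that a /31 (255.255.255.254) is the point-to-point special case where, per RFC 3021, both addresses are usable hosts with no separate network or broadcast address. soren's 255.255.0.255 mask, by contrast, is non-contiguous and has no CIDR equivalent.

```shell
# Convert a CIDR prefix length to a dotted-quad netmask.
# Pure shell-arithmetic sketch; assumes bash (64-bit $(( )) and local).
prefix_to_mask() {
    local p=$1
    local mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
    echo "$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
}

prefix_to_mask 30   # -> 255.255.255.252
prefix_to_mask 31   # -> 255.255.255.254
```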
<twb> In /etc/syslog.conf, does *.* match *everything*, or only things that have a dot?
<twb> I mean, I see "auth" in the default syslog.conf and it made me uncertain.
<twb> The manpage seems to reassure me.
<mrwes> since root account is disabled, where does system mail goto?
<Jeeves_> Hi all!
<_ruben> mrwes: check /etc/aliases
<_ruben> g'day Jeeves_
 * Jeeves_ needs a hint
<Jeeves_> I edited /etc/sysctl.conf to disable ipv6 RA. However, it seems the settings are not applied when rebooting.
<Kamping_Kaiser> Jeeves_, hey mate
<Jeeves_> Anyone here to hit me with the cluebat?
<Jeeves_> Kamping_Kaiser: Hi!
 * Kamping_Kaiser smacks Jeeves_ with a cluebat (sorry , cant help with the problem though)
<Jeeves_> :)
<henkjan> Jeeves_: block ipv6 ra with iptables?
<Jeeves_> henkjan: yuk
<Jeeves_> I'm looking for a solution, not a workaround :)
<simplexio> Jeeves_: if you build ipv6 into the kernel it will be. i think the problem is that the ipv6 module gets loaded after you have tried to set the ipv6 settings in sysctl.conf, so the ipv6 options aren't available then
<simplexio> Jeeves_: of course you first need to check that the sysctl options really work
<Jeeves_> simplexio: Hmm. That could be something
<Jeeves_> sysctl works, that's tested
<simplexio> Jeeves_: on my system it complains that it can't change nfs options, but after the boot process is done and all the nfs stuff is loaded it runs sysctl without complaints
<Jeeves_> simplexio: ./rcS.d/S15module-init-tools
<Jeeves_> simplexio: ./rcS.d/S17procps
<simplexio> isnt there some kind of rc.local-style file where you can put your own stuff that is run last in the boot process? .. add sysctl -p there
<Jeeves_> So the options should be there
<Jeeves_> simplexio: That's too late
<simplexio> what ipv6 ra does ?
<Jeeves_> When booting is completed, the box has already sent a solicitation
<Jeeves_> so it already has its RA address
<simplexio> well of course; quick hack: just remove the module
<Jeeves_> 13:31 < Jeeves_> I'm looking for a solution, not a workaround :)
<simplexio> what does that ra mean in ipv6
<mrwes> _ruben: ahh..ok, root: administrator is already there -- thanks
<simplexio> http://www.cyberciti.biz/tips/linux-how-to-disable-the-ipv6-protocol.html
<simplexio> that tells just howto disable whole ipv6
<Jeeves_> http://zhadum.org.uk/2007/03/04/taming-the-edgy-eft/
<Jeeves_> simplexio: Router Advertisement
<simplexio> Jeeves_: the 100% way is to just compile a new kernel and remove the ipv6 ra option there.. i'm pretty sure it's there
<Jeeves_> simplexio: Don't be silly
<Jeeves_> I've got it now. Just add ipv6 to /etc/modules
<Deeps> Jeeves_: #ipv6 on freenode's a good place to ask about ipv6 related stuff
<Jeeves_> that way it gets loaded sooner
<Deeps> people there generally tend to be quite clueful
<Jeeves_> and sysctl works fine :)
<henkjan> Jeeves_: please document it on the wiki ;)
<simplexio> i usually just compile the stuff i need into the kernel, and the stuff i dont need i add as modules
<Jeeves_> henkjan: Which wiki? :)
<Deeps> btw, what are you trying to do, accept_ra=0?
<henkjan> Jeeves_: wiki.bit.nl :)
<domas> Jeeves_: you can ask Jeeves!
<simplexio> but i like to run latest and greatest kernels. so for me compiling new kernel isnt that big problem
<henkjan> stockkernels++
<Jeeves_> simplexio: When you administer 100+ servers, you'll soon start to love stock kernels :)
<simplexio> :) could be, or i learn how to create my own apt-mirrors and own kernel packages
<simplexio> the latter is probably the harder way :)
<Jeeves_> We have our own package repository
<Jeeves_> (and a mirror, nl.archive.ubuntu.com )
<simplexio> but if something breaks i can only blame myself
<maswan> Jeeves_: Do you want to disable the automatic address generation?
<Jeeves_> simplexio: I simply blame henkjan
<Deeps> different scales have different needs
<Jeeves_> maswan: Yes, but I already managed to do so
<Jeeves_> Deeps: Indeed :)
<maswan> ah, ok
<maswan> from our interfaces:
<maswan>   pre-up modprobe ipv6
<maswan>   up /sbin/sysctl -q -w net.ipv6.conf.eth0.autoconf=0
<simplexio> and there is no right solution to problem, only bad and even more bad
<simplexio> there have been, what, 3 working solutions to the problem already :) gotta love linux
<maswan> this one has been working since dapper, IIRC. So we stick to it. :)
<simplexio> add ipv6 to modules, load ipv6 first in interface before interface is up, change kernel
<maswan> I'm not sure adding ipv6 to modules covers it always.
<maswan> We did that for a while, and there were some kind of issue, I forget what though..
<maswan> and yes, the inet6 interface has to come before the inet interface
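maswan's two lines expanded into a fuller /etc/network/interfaces sketch; the addresses and interface name are hypothetical, the inet6 stanza comes first per the discussion above, and accept_ra=0 is added alongside autoconf=0 on the assumption that both per-interface sysctls are available:

```shell
# /etc/network/interfaces sketch: load ipv6 early and switch off
# autoconfiguration before the box can answer a router advertisement.
# All addresses are placeholders.
iface eth0 inet6 static
    pre-up modprobe ipv6
    up /sbin/sysctl -q -w net.ipv6.conf.eth0.autoconf=0
    up /sbin/sysctl -q -w net.ipv6.conf.eth0.accept_ra=0
    address 2001:db8::10
    netmask 64

iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
```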
<ivoks> in jaunty, ipv6 isn't module
<Deeps> wayhey
<maswan> ivoks: Ah, is the autoconf disabling fixed then?
<ivoks> maswan: i don't know; i just noticed that it's built in, not a module
<petia> ivoks, around ? its orudie i'm at work
<maswan> ivoks: Hm. Maybe we should try it out on some server then
<ivoks> petia: yes
<petia> am i interrupting,  ?
<ivoks> no
<cemc> ivoks: umm, so ipv6 can't be disabled in Jaunty?
<petia> after we set up the mail server i tested it in the email client and used mail.mydomain.com as the incoming server, did it still use the pop3 protocol ?
<ivoks> cemc: i'm sure it can, appending ipv6=off or something to kernel line
<ivoks> petia: it's impossible to tell what you have used, just based on hostname
<ivoks> hostname doesn't define protocol
<petia> is there a way to tell though ?
<ivoks> one server (servers have hostnames) supports lots of protocols; http, pop3, ssh, imap, smtp...
<ivoks> petia: you should hire someone to set up mail server for you, since obviously you don't understand basic network concepts
<ivoks> petia: hostname doesn't define protocol
<ivoks> it's just a name
<ivoks> it could've been mysuperserver.domain.com
<petia> :)
<petia> right, but maybe you could have a look and let me know which hostname i should use in the incoming server according to the way the server is set up
<Jeeves_> cemc: Why would you want to disable ipv6?
<^law^> guys , i just installed ubuntu server in virtual box and after that i can't boot it . it says pae needs to be enabled, but i can't enable pae in my virtual box; the checkbox can't be checked, it is disabled. what should i do then ? XD
<ivoks> petia: server has only one hostname
<ivoks> petia: it can have aliases
<ivoks> petia: and, i repeat, hostname of the server doesn't decide what protocol your client will use
<petia> ivoks, ok
<ivoks> petia: in most of the cases, user that uses that client decides which protocol will be used
<ivoks> by selecting the protocol during configuration
<petia> ivoks, well i know for sure that pop3 protocol is set up on the server, i just didnt try it yet
<ivoks> s/b$n/
<ivoks> then try it
<petia> ivoks, i'm fucken scared lol
<petia> ivoks, cuase if this shit is not gonna work my boss is gonna kill me
<petia> ivoks, i'm about to point DNS to the new server
<jussi01> petia: please try keep the language clean ;)
<ivoks> petia: what did you do till now?
<ivoks> what did you configure?
<petia> postfix/dovecot and roundcube webmail client
<^law^> guys , i just installed ubuntu server in virtual box and after that i can't boot it . it says pae needs to be enabled, but i can't enable pae in my virtual box; the checkbox can't be checked, it is disabled. what should i do then ? XD
<cemc> Jeeves_: because I don't use it (yet) and I don't want to change all my running servers (apache, ssh, whatnot) not to listen on ipv6 too ?
<petia> tested emails with outlook and thunderbird, howver i used mail.example.com as an incoming server
<Jeeves_> cemc: If you don't use it, what's there to disable? :
<petia> ivoks, imap is set up as well, so i'm not sure which protocol mail.* used
<ivoks> you really don't get it?
<ivoks> server serves protocols
<petia> i understand that hostname does not define protocol
<ivoks> client *decides* which protocol will be used
<petia> uhu
<petia> lol
<petia> i think i got it
<petia> i dont work on the client side too much dude
<ivoks> well, you should know what your clients are using, if you want to setup server
<ivoks> servers exist so that they can serve clients
<ivoks> that's their whole purpose
<petia> well if we take thunderbird for instance which is what i have on my desk
<ivoks> ok
<petia> in the add new email account
<petia> it doesnt ask you if your email account is using pop3 or imap
<ivoks> i'm sure it does
<petia> ok i'm in the add account wizard
<cemc> Jeeves_: have you looked at 'netstat -nlp' with ipv6 enabled? or 'ip a' with a lot of interfaces/IPs ? and generally I like to not run stuff I don't need :) especially on servers which are low on memory ;)
<petia> it's asking: select the type of account you would like to set up. the choices i have are Email, Rss, Gmail, or newsgroup account
<petia> i go to email
<ivoks> petia: http://www2.cit.cornell.edu/computer/email/thunderbird/account-both.html
<Jeeves_> cemc: ipv6 does not require much more memory :)
<ivoks> 3rd screen
<petia> ok now i see
<petia> yes
<petia> yeah
<petia> ok
<petia> :)
<cemc> Jeeves_: still I believe that what you don't need is not supposed to be running ;)
<Jeeves_> cemc: You should enter the 90's :)
<cemc> Jeeves_: :)
<mark> because obviously you won't ever need to communicate with those computers which get ipv6 instead of ipv4 in a few years ;)
<cemc> mark: I did not say it needs to be removed. But if I want to, I should be able to disable it ;) especially if I won't be using it soon
<mark> being able to disable is good
<cemc> and I definitely will know if and when I need it, so I won't forget I disabled it :)
<Jeeves_> cemc: Just start using it! Saves you the trouble of disabling it :)
<cemc> ;)
<nob_z> hi, who can help me with spamassassin sa-learn?
<Jeeves_> nob_z: Whats the problem?
<nob_z> spamassassin seems to not use the learned mails
<nob_z> I followed the install of spamassassin+postfix-dovecot+amavisd-new in the ubuntu server guide
<Jeeves_> nob_z: You need at least 200 messages before bayes gets used
<Jeeves_> 200 of both types
<petia> ivoks, you there? the email mesages are not coming in
<nob_z> Jeeves_, I use more than 200, around 1000, but nothing changed
<nob_z> seems that I dont used sa-learn in the right form
<Jeeves_> nob_z: There is a way that you can see if bayes is being used
<Jeeves_> nob_z: sa-learn said 'xx messages read (xx used)' or something like that?
<nob_z> Jeeves_, yes... and sa-learn --dump magic increases the number of nham/nspam tokens
<Jeeves_> nob_z: Then it's used
<Jeeves_> nob_z: And if you run
<Jeeves_> spamassassin --lint -D1
<Jeeves_> ?
<uvirtbot> New bug: #326235 in samba (main) "Winbind crashes with new samba-package (3.5)" [Low,Incomplete] https://launchpad.net/bugs/326235
<Jeeves_> Do you see bayes-notices there
<Jeeves_> ?
<nob_z> trying...
<nob_z> Jeeves_, yes, there are bayes-notices
<nob_z> Jeeves_, sa-learn must be used as root or by every user?
<Jeeves_> nob_z: That depends how you configured bayes
<nob_z> where I can check it?
<Jeeves_> /etc/spamassassin/local.cf ?
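A sketch of the training steps under discussion; the mailbox paths are hypothetical, and which user should run sa-learn depends on where the bayes database lives (bayes_path in local.cf) — for an amavisd-new setup it is typically the amavis user rather than each mail user:

```shell
# Train bayes from pre-sorted mail, then verify the counters moved.
# Paths are placeholders; run as the user that owns the bayes database.
sa-learn --spam --mbox /path/to/sorted-spam.mbox
sa-learn --ham  --mbox /path/to/sorted-ham.mbox
sa-learn --dump magic   # nspam and nham must both reach 200 before
                        # the BAYES_* rules start firing
```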
<fevel> How can I make squid show an html page I made as homepage?
<Jeeves_> fevel: http://www.visolve.com/squid/squid24s1/miscellaneous.php
<ivoks> petia: yes?
<petia> ivoks,
<petia> ivoks, how can i change the output of this postconf | grep mydestination ? it needs to be a new domain, the old domain is still there
<ivoks> postconf -e "mydestination=blablabla"
<petia> ivoks, there was a file where i remember i specified the mydestination
<ivoks> petia: postconf -e will edit it
<ivoks> so...
<ivoks> postconf -e "mydestination=blablabla"
<petia> ivoks, paul@atlantalin:~$ postconf | grep mydestination
<petia> mydestination = jpiamerica.com, localhost.jpiamerica.com, localhost
<petia> now i have this
<petia> which is the correct domain
<ivoks> ok
<ivoks> looks good
<petia> does anything else need to get changed when changing the domain address ?
<petia> i mean
<petia> when changing the domain name
<ivoks> hm... maybe myorigin
<petia> i think it working now :)
<fevel> thanks Jeeves_
<fevel> always_direct
<Jeeves_> :)
<kirkland> mathiaz: okay, i tested the hardy kvm-84 packages in my ppa a bit more thoroughly
<kirkland> mathiaz: i've published an update to ~ubuntu-virt's ppa
<mathiaz> kirkland: great. I'll give it a try then.
<mathiaz> kirkland: have you made some progress on the kernel part?
<kirkland> mathiaz: yes
<kirkland> mathiaz: you can now install and run the kvm-source to get the kernel module
<kirkland> mathiaz: it's installable now
<kirkland> mathiaz: will require dkms, though, don't know if that's a problem for you
<mathiaz> kirkland: awesome - well I'm running hardy - would it be?
<kirkland> mathiaz: shouldn't be
<mathiaz> kirkland: *a problem*
<kirkland> mathiaz: i just reinstalled my test box with hardy amd64
<mathiaz> kirkland: awesome - I'll give it a try once alpha6 is released
<kirkland> mathiaz: running kvm-84 kernel and userspace
<kirkland> mathiaz: i left it running last night, testing nested virtualization
<kirkland> mathiaz: a virtual machine running a virtual machine
<kirkland> (something that was just introduced in kvm-82)
<mathiaz> kirkland: did it freak out?
<kirkland> mathiaz: i was going to blog about that :-)
<mathiaz> kirkland: are you able to run a bundle of guest inside a kvm guest?
<mathiaz> kirkland: or is it just one-one
<kirkland> mathiaz: it did not freak out
<kirkland> mathiaz: it's been running over night
<kirkland> mathiaz: with the guest's guest doing debuild qemu in a loop :-)
<kirkland> mathiaz: no errors yet, on either host
<kirkland> mathiaz: it's not one-to-one
<ivoks> mathiaz: i think i have solution for that dovecot bug...
<kirkland> mathiaz: basically, the first guest can modprobe kvm, and get a /dev/kvm
<kirkland> mathiaz: the second guest isn't nearly as fast as the first
<kirkland> mathiaz: but it's not as slow as emulation either
<mathiaz> kirkland: oh it does qemu debuild - this is why it doesn't freak out. The guest *knows* about it and is just very gentle and peaceful...
<kirkland> mathiaz: :-D
<kirkland> mathiaz: i should try mysql instead
<mathiaz> kirkland: that will bring down your mythtv setup - be careful
<mathiaz> kirkland: !
<mathiaz> ivoks: yes?
<ivoks> mathiaz: i added comment to the bug
<mathiaz> ivoks: ok  - I'll get to it then
<kirkland> mathiaz: one note about nested virtualization ....
<kirkland> mathiaz: it only works on amd cpus
<mathiaz> ivoks: during my daily bug folder processing
<ivoks> mathiaz: sure... no rush
<mathiaz> ivoks: thanks for following up!
<mathiaz> kirkland: awesome - that's what I have :)
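For the AMD-only nested setup kirkland describes, the kvm-amd module in this era of KVM takes a `nested` parameter; a hedged sketch (the parameter name is my assumption for kvm-84-era modules — verify with `modinfo kvm-amd` first):

```shell
# Reload kvm-amd with nested SVM enabled on the host; the parameter
# name is an assumption for kvm-84-era modules.
sudo modprobe -r kvm-amd
sudo modprobe kvm-amd nested=1
cat /sys/module/kvm_amd/parameters/nested   # should now report 1
```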
<ivoks> mathiaz: well... here's c/p: I suggest we check if dovecot.conf (minus ^protocols.* line) is different than the default one (minus ^protocols.* line) and, if it is, in preinst we copy that file into dovecot-postfix.conf and let the ucf do the rest. What do you think?
<mathiaz> ivoks: hm - so why not always do that and get rid of dovecot-postfix.conf?
<mathiaz> ivoks: even for the normal case?
<ivoks> i suggested that in first place
<ivoks> but then we would have to steal dovecot.conf from dovecot-common
<ivoks> cjwatson said that wasn't good idea
<cjwatson> attempting to share or move configuration files among packages is generally a recipe for creating bugs, especially if the movement is in any way conditional, yes
<mathiaz> cjwatson: even between binary packages?
<fevel> can I set apache to listen on the same port as my proxy? they are both on the same server
<cjwatson> configuration files are only ever owned by binary packages
<cjwatson> it doesn't matter if they're in the same source package
<cjwatson> you *can* move a configuration file from one binary package to another, but it absolutely has to be unconditional, and it ought to be permanent
<ivoks> and that's not the case here
<cjwatson> it's also a lot of fiddly code to get right; naive approaches will fail in messy ways
 * cjwatson has been there and done that
<mathiaz> cjwatson: right. I think we're talking about modifying a config file from another binary package's postinst
<cjwatson> mathiaz: absolutely forbidden
<ivoks> right, i myself was trying to find a sane logic for ucf stealing and i didn't come up with a good one
<mathiaz> cjwatson: even if both binary packages are from the source package?
<cjwatson> mathiaz: (from the policy point of view, as well as sanity)
<cjwatson> mathiaz: yes.
<cjwatson> that makes no real difference
<mathiaz> cjwatson: hm - well then I know a couple of package that do that
<cjwatson> mathiaz: if they're from the same source package, then the binary package that currently owns the file can and should take steps to expose some proper interface by which the other binary package can do what it needs to do
<cjwatson> mathiaz: "proper interface" does not include the other package editing the configuration file
<ivoks> like postconf
<cjwatson> mathiaz: yes, I know of buggy packages too ;-)
<cjwatson> but I don't think that means we should design in bugs
<mathiaz> cjwatson: the dovecot package for example, where dovecot.conf is shipped by dovecot-common and -pop and -imap change one line to enable their respective protocoal
<mathiaz> protocol
<cjwatson> this really is a case that Debian and Ubuntu policy has historically been very firm about, because it IS a cause of upgrade disasters
<cjwatson> there are plenty of ways to do this sort of thing that do not involve messing about with configuration files between packages
<cjwatson> you just need to be creative
<mathiaz> cjwatson: ok - thanks for the clarification
<cjwatson> having packages include configuration files from a .d directory or similar is one traditional solution
<mathiaz> cjwatson: I thought the rule about config file would only apply to src packages
<ivoks> cjwatson: so, how about proposed idea; bug 339966
<uvirtbot> Launchpad bug 339966 in dovecot "dangerous action: dovecot-postfix force-installs new conf file" [Low,Confirmed] https://launchpad.net/bugs/339966
<ivoks> cjwatson: does that sound sane?
<mathiaz> ivoks: well no
<mathiaz> ivoks: oh hm yes
<mathiaz> ivoks: probably since you're not modifying dovecot.conf but dovecot-postfix.conf
<ivoks> right
<cjwatson> ivoks: I haven't checked it in detail for feasibility, but it sounds like a workable approach
<ivoks> i'll test it later today
<cjwatson> it's basically migrating across the configuration file changes already applied by the sysadmin
<cjwatson> this sort of bug is exactly the kind of rat's nest you get into with a poor design, btw ;-)
<ivoks> i have to expand this a bit
<cjwatson> but it's very difficult when it's not done right up-front of course
<ivoks> we should copy dovecot-postfix.conf to /usr/share and check ucf on upgrades
<ivoks> if we changed something, show the diff, if we didn't change anything, don't bother admin
<ivoks> take care, bbl
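ivoks's proposal from the bug comment could be sketched roughly like this in the maintainer scripts; every path and all the package plumbing here are assumptions, and a real preinst would be plain sh, distinguish first install from upgrade, and cope with a missing shipped template:

```shell
# Rough preinst sketch (bash, for the process substitution) of:
# "if dovecot.conf minus the protocols line differs from the default,
#  seed dovecot-postfix.conf from it and let ucf do the rest."
current=/etc/dovecot/dovecot.conf
shipped=/usr/share/dovecot-common/dovecot.conf      # assumed location
target=/etc/dovecot/dovecot-postfix.conf

if ! cmp -s <(grep -v '^protocols' "$current") \
            <(grep -v '^protocols' "$shipped"); then
    cp -p "$current" "$target"   # admin had local changes: preserve them
fi
# later, in postinst, hand the file to ucf for the usual merge prompts:
# ucf /usr/share/dovecot-postfix/dovecot-postfix.conf.template "$target"
```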
<mathiaz> kirkland: are you also working on improving hardy as KVM guest?
<mathiaz> kirkland: IIRC virtio devices weren't supported in hardy
<petia> is there such thing as a free outgoing mail server for all isps ?
<mathiaz> kirkland: IIRC virtio *block* devices weren't supported in hardy
<Deeps> petia: you can run your own, however open relays generally get shut down or blacklisted as spammers will find them and use them
<kirkland> mathiaz: which bug?
<mathiaz> kirkland: hm - I don't think there is such a bug.
<mathiaz> kirkland: you'd better talk to dendrobates or soren about that one.
<kirkland> mathiaz: then i'm probably not working on it :-)
<kirkland> mathiaz: i'm living by those bugs
<mathiaz> kirkland: hopefully you're not living *only* by those bugs
<kirkland> mathiaz: that's all i'm eating
<kirkland> mathiaz: i like the shiny blue ones the best
<kirkland> mathiaz: the brown ones are kinda gross
<kirkland> mathiaz: can vegetarians eat bugs?
<mathiaz> kirkland: ohhhhh - don't get me started on that one
<kirkland> mathiaz: mmmm, boiled crawfish!
<acidborg> hi
<soren> mathiaz: I think actually the problem with virtio devices in Hardy were on the host.
<soren> ....and only for block devices. Network was fine.
<soren> Well... Except when it hanged.
<mathiaz> soren: yes - I remember network was fine.
<mathiaz> soren: then... it was *not* fine?
<soren> mathiaz: For most people in most situations, yes.
<soren> For some people (like yann2), it just seemed to hang after a while. kirkland and aliguori seem to have tracked down the problem, so that should be either fixed already or be on its way to be fixed.
<yann2> soren > mostly using SMP
<yann2> fine with single CPU
<soren> yann2: Ah, yes.
<yann2> is not fixed though
<soren> kirkland: You were working on an SRU for that, right? (removing the odd-ball patch from Debian's ancient QEMU, that is)
<bhsx> if i install ubuntu 8.10 and use tasksel to uninstall the ubuntu-desktop, what else would i need to strip out to get a base "server" without having to DL 8.04 server edition?
<kirkland> soren: oddball patch"es"
<bhsx> i know there are a couple of kernel differences in server edition
<soren> kirkland: Even better :)
<kirkland> soren: i actually put that aside, i think trying for a bump to -84 would be far more beneficial
<bhsx> but i'm just talking for a home server... i dont need anything major, but it's an old box, so i want to squeeze a lot out of it
<yann2> soren > tested that one, didnt fix it
<soren> yann2: Oh.
<kirkland> soren: and what yann2 just said ... didn't solve his problem
<yann2> may have fixed the rtl one
<kirkland> soren: his problem is using SMP
<yann2> kirkland > btw, maybe you can try to reproduce now? :)
<soren> Shame.
<kirkland> soren: SMP is completely broken in kvm << 79ish
<yann2> kirkland > don't say that... how badly broken?
<kirkland> soren: yann2: aliguori explained it to me this way ....
<soren> I found it to be racy under some circumstances, yes.
<kirkland> in older kvm (definitely 62, probably 72 as well)
<kirkland> cpu and io are both handled in the same thread
<soren> Extracting I/O into a separate thread didn't go completely flawlessly.
<kirkland> which introduces a number of race conditions
<kirkland> most which manifest themselves by a vm sort of "hanging" until you give it some io
<petia> where can i specify the maximum size of the file attachments for dovecot email server ?
<kirkland> like hitting a button, or clicking in the vm
<kirkland> there are other issues too, enough for aliguori to say that he'd have no confidence running smp on kvm-62
<bardyr> Hey, how can i mount a software raid0 on a live cd?
<Vog-work> bardyr: mdadm I think....
<Jeeves_> bardyr: mdadm -a etc etc etc
<Vog-work> http://ubuntuforums.org/showthread.php?t=408461
<Vog-work> bardyr: that last link has everything you should need.
<bardyr> Vog-work, thanks
<petia> where can i specify the maximum size of the file attachments for dovecot email server ?
<jmedina> I dont think that is controlled via dovecot, instead in the MTA, well you can use dovecot's MDA
<Vog-work> petia: If you are using postfix I think you can manipulate the message_size_limit parameter found in main.cf
<Vog-work> http://muhdzamri.blogspot.com/2007/02/postfix-mailbox-size-limit-and-message.html
<storrgie> just installed apache2 with the taskel lamp install. But i get a 403 when trying to view my index.html
<yann2> check permissions on documentroot
<yann2> www-data needs read access
<Vog-work> check permissions wherever you have your web folders, /var/www/ for example.  If your web user (www-data or something like that) doesn't have read or write permission it won't work
<W8TAH> hi folks: - i did a stupid, and need some help.... when i built my ubuntu server, i configured LVM but forgot to tell the partitioner to mount it to /data. i could rebuild the box but ive already got samba configured and would rather avoid re-doing that if possible -- how do i find out what is necessary to put in /etc/fstab and whatever else i need
<mrwes> Just installed Denyhosts and it keeps adding my login IP from my laptop to the hosts.deny file -- why?
<mrwes> W8TAH: what device is it?
<W8TAH> mwares there are a total of 3 drives that have LVM physical volumes on them
<storrgie> Vog-work: yann2: its /var/www
<W8TAH> oops - im sorry mrwes there are a total of 3 physical drives with LVM physical volumes on them
<uvirtbot> New bug: #302148 in samba (main) "Installing wine starts winbindd as a daemon" [Low,Confirmed] https://launchpad.net/bugs/302148
<W8TAH> im about 90% sure they are sda sdb and sdc
<W8TAH> mrwes?
<storrgie> who should the /var/www be owned by?
<racecar56> u?
<storrgie> my user?
<storrgie> or root?
<W8TAH> ok thanks anyway folks --
<Vog-work> storrgie it should be owned by www-data
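Pulling the permissions advice from yann2 and Vog-work together into a sketch -- the usual fix for a 403 on a fresh LAMP install. Treat the ownership choice (www-data) as what was suggested in channel, not a universal rule; the path assumes the default DocumentRoot:

```shell
# Give Apache's user ownership of the web root and make everything readable.
# -R recurses; a+rX adds read for everyone and execute (traverse) only on directories.
sudo chown -R www-data:www-data /var/www
sudo chmod -R a+rX /var/www
```

After that, reload the page; if the 403 persists, check the Apache error log for the exact path it was denied on.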
<racecar56> i gtg i'll still be logged on but i won't be active for a long time
<Vog-work> W8TAH: people work for a living while helping in the channel, responses aren't always immeadiate.
<Vog-work> As for myself I don't know the answer to your question.
<giovani3> W8TAH: do you see the partition in /etc/fstab?
<W8TAH> Vog-work:  i understand that -- as do i -- i wasnt mad -- was more indicating a closure to my question -- the box is reformatting now and ive properly activated the LVM this time -- i meant no unkindness or disrespect
<giovani3> oh ok
<W8TAH> giovani3: it was not there thanks anyway
<Vog-work> W8TAH: that's cool, thanks for clearing up the confusion.
<giovani3> in the future, W8TAH
<giovani3> you should be able to run lvdisplay, and find the missing UUID
<petia> Deeps, i'm looking at /etc/postfix main.cf , no where i could find message_size_mimit
<giovani3> and manually add it to /etc/fstab
<petia> Vog-work, i'm looking at /etc/postfix main.cf , no where i could find message_size_mimit
<giovani3> petia: it doesn't need to be in there -- you need to ADD it
<petia> oh
<Vog-work> petia: You can add the line yourself
<petia> i see
<petia> so thats for all the files in the message, or for individual files ?
<Vog-work> The message size all together.
<petia> so umm, lets say i want it to be 30 megabytes, what should i put message_size_mimit = 30mb
<petia> ?
<giovani3> what's mimit?
<Vog-work> that config option doesn't limit the size of the attachment, it limits the size of the e-mail and any attachments added to it.
<giovani3> you mean limit?
<petia> yeah
<petia> limit
<giovani3> and no, the number is in bytes
<petia> sorry
<giovani3> so 30 megabytes would be 31457280
<Vog-work> yeah postfix can only parse the number in bytes
<petia> so its
<petia> message_size_limit = 31457280
<giovani3> yep
<Vog-work> yep
<giovani3> and then restart postfix
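The arithmetic giovani3 did, as a quick sketch -- Postfix's message_size_limit is the whole message (headers, body, and all attachments) in bytes, and `postconf -e` is the standard way to write a main.cf setting (assumes Postfix is installed):

```shell
# 30 MB expressed in bytes, as Postfix expects:
limit=$((30 * 1024 * 1024))
echo "message_size_limit = $limit"    # prints: message_size_limit = 31457280
# To apply it (needs root):
#   sudo postconf -e "message_size_limit = $limit"
#   sudo /etc/init.d/postfix restart
```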
<thewrath> hello all
<Vog-work> petia: there is also another channel called #postfix if you have more complicated postfix questions.
<petia> no thats it :)
<petia> i have questions but not postfix related
<petia> so if i'm able to login to the mail server with a webmail client and send messages in webmail, why cant i use it as outgoing smtp server ?
<kirkland> Nafallo: ping
<kirkland> Nafallo: when you get a chance, upgrade to screen-profiles 1.37
<kirkland> Nafallo: and put the following in ~/.screen-profiles/status
<kirkland> menu=0
<kirkland> whoami=1
<kirkland> hostname=1
<kirkland> Nafallo: and hit F5-enter
<Vog-work> petia: do you have it registered under dns with proper my records?
<giovani3> petia: there are two servers you're interfacing with -- postfix is an MTA, it only does SMTP; if you want to access your mail via webmail, you use something like dovecot, which provides imap/pop
<petia> i'm confused
<Vog-work> sorry, "my records" is supposed to read "MX records"
<giovani3> you can use postfix as your outbound smtp server, you just need to configure it so that it will accept mail from you (the best way is to set up SASL) -- but these are more questions about postfix, and are best directed at #postfix
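A hedged sketch of what "configure it so that it will accept mail from you" looks like in main.cf when Dovecot is the SASL backend. The parameter names are standard Postfix ones, but treat the exact values as an example rather than this channel's recommendation:

```
# /etc/postfix/main.cf (fragment) -- let authenticated users relay outbound mail
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```

The ordering matters: `reject_unauth_destination` last keeps the server from being an open relay while still letting SASL-authenticated clients send anywhere.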
<thewrath> i want to run a private home server
<thewrath> i have ubuntu server but a gui is best
<giovani3> thewrath: we don't support GUIs here
<giovani3> but ubuntu enables you to install one, if you like
<thewrath> all right
<thewrath> i am trying to install drbl stuff on my server
<thewrath> to do some imaging
<thewrath> from http://samiux.wordpress.com/2008/10/22/howto-penguinzilla-drbl-on-ubuntu-8041/
<thewrath> says i have to create a /tftpboot partition in ext3
<giovani3> how does that relate to having a GUI?
<thewrath> i have not done that before, how do i do that?
<thewrath> the guy who works with me was hoping for me to do a gui interface
<giovani3> well I told you, ubuntu allows you to install one
<giovani3> but we don't recommend it, and don't support it here
<giovani3> so it's your choice
<mrwes> setting up a server with ONLY command line is the best way to truly learn linux --IMHO
<thewrath> ok
<thewrath> from http://samiux.wordpress.com/2008/10/22/howto-penguinzilla-drbl-on-ubuntu-8041/
<thewrath> says i have to create a /tftpboot partition in ext3
<thewrath> i have not done that before, how do i do that?
<thewrath> any idea guys have?
<Vog-work> thewrath: what are you hoping to do with drbl?
<Vog-work> I'm not sure it is what you need.
<trondkla_q> can I ask apache2 questions here? :)
<thewrath> image the windows machines
<thewrath> its a private server here at the school i help with
<thewrath> well the school district
<Vog-work> ok.
<thewrath> so i just use a live ubuntu desktop cd to do the reimaging?
<Vog-work> no.... you'll need to learn about pxe boot and other stuff.
<Vog-work> drbl isn't a simple setup process.
<trondkla_q> Hi, I've tried to change apache2 to UTF-8, but it's still outputting ISO-8859-1. I've set /etc/apache2/conf.d/charset to utf-8, and added IndexOptions Charset=UTF-8 to /etc/apache2/apache2.conf
<Vog-work> thewrath: https://wiki.edubuntu.org/SettingUpClonezillaDRBLonUbuntu I used this at a local school
<thewrath> ok thank you Vog-work
<thewrath> brb
<thewrath> i assume you are there now?
<Vog-work> thewrath: It's a good idea to use drbl / clonezilla for what you want but you'll need to do some reading before you start configuring the server.
<Vog-work> There might be better how-tos out there as well.
<bromic94_> back
<Vog-work> trondkla_q: might be a better question for #apache
<thewrathjr> Vog-work: can you send that to me again i didnt get a chance to copy it before i moved net connections
<trondkla_q> thanks :)
<Vog-work> thewrathjr: https://wiki.edubuntu.org/SettingUpClonezillaDRBLonUbuntu
<Vog-work> thewrathjr: It's a good idea to use drbl / clonezilla for what you want but you'll need to do some reading before you start configuring the server.
<thewrathjr> yea i have been told that
<thewrathjr> i just want to preliminary set it up
<Vog-work> Following a how-to will not tell you the whole process, like setting up the clients.
<thewrathjr> we have some stuff with vista and whatnot that for some reason does not want to work with zenworks
<thewrathjr> how do i create that partition it talks about
<Vog-work> http://www.users.bigpond.net.au/hermanzone/p17.htm#help_on_partitioning
<Vog-work> thewrathjr: http://www.linux.com/feature/115208
<bitsbam> hey all, needing some advice regarding creating temporary tables
<petia> i just went through this tutorial https://help.ubuntu.com/8.10/serverguide/C/postfix.html , but dont get the same output shown in the example when i do telnet mail.jipamerica.com 25
<petia> Vog-work, i'm confused
<Deeps> !paste | petia
<ubottu> petia: pastebin is a service to post multiple-lined texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu.com (make sure you give us the URL for your paste - see also the channel topic)
<Deeps> what output do you get?
<petia> Deeps, k 1 sec
<petia> Deeps, http://pastebin.com/m65afd15d
<petia> Deeps, i did everything on that page without any errors or problems before the section of testing it with telnet
<Deeps> petia: indeed, thats the welcome banner, you need to send the 'ehlo' command to get the following output as described on that page
<bitsbam> my deal is: our customers log into our website to check the conditions of their machines, each machine reports many times a day, and we use this to draw charts etc.. so the machine history is long long and we pull hundreds of thousands of records. how many is too many temporary hash tables to have on a server . Do any of you use temporary tables?
<fevel> hey friends
<fevel> how can I make squid redirect web access to a certain homepage whenever a browser is launched?
<bromic94_> hey sorry i was out
<bromic94_> i had to use the facilities and locked out of the office
<thewrath> how can i create an ext3 partition for drbl after i installed ubuntu server already
<thewrath> also when i do sudo apt-get update i get 11 resources temporarily unavailable, how can i change that
<petia> Deeps, a lot better, but slightly different
<thewrath> never mind on the last question
<petia> Deeps, can i show you ?
<Deeps> petia: sure, pastebin away
<Deeps> although if you're paying attention, you'll notice the guide says
<petia> Deeps, i'll pastebin in pm ok ?
<Deeps> "If you see the following lines among others, then everything is working perfectly."
<Deeps> ok
<petia> Deeps,  so in outlook how should it be configured then , i mean the outgoing smtp server
<petia> Deeps, i should put smtp.jpiamerica.com ?
<Deeps> yep
<Deeps> what's up filipe_xD?
<stanman1> hi, anyone running ubuntu-server and sbs2003 on one box?
<thewrath> how can i create an ext3 partition for drbl after i installed ubuntu server already
<filipe_xD> Deeps, fine, i'm just testing here
<filipe_xD> Deeps, ignore this
<ivoks> still configuring mail server?
<Deeps> filipe_xD: k
<petia> ivoks, working on smtp-auth atm
<ivoks> via dovecot?
<petia> since using our isp's smtp server is not sufficient because we are gonna run into problems when traveling
<petia> ivoks, yeah dovecot
<ivoks> i see you are discovering benefits of running your own mail server :)
<petia> ivoks, :)
<petia> its stressfull cause i get pressure from my boss, but i love working on this stuff
<ivoks> you should've played with it in a virtual system or at home
<ivoks> deploying mail server isn't playing minesweeper
<petia> ivoks, i think its ready, waiting for smtp domain to propagate
<thewrath> how easy is it to do a new parition for drbl
<ivoks> drbl?
<thewrath> open source imaging software
<thewrath> clonzilla?>
<petia> how can i test if smtp server is working before the domain propagates ?
<ivoks> connect to it and send mail
<petia> heh
<petia> connect via ssh ?
<ivoks> you have thunderbird, right?
<petia> yeah
<thewrath>  do you know clonezilla ivoks ?
<ivoks> set up an account in it and put your new server as incoming and outgoing server
<ivoks> thewrath: nope
<petia> yes thats what i am trying to do ivoks , but my isp still thinks smtp.jpiamerica.com is an old ip
<ivoks> petia: well, you have an IP of that server, right?
<petia> ivoks, i have an ip for the new server that i am trying to get working yes
<ivoks> ip of your new server
<ivoks> so, instead of hostname, enter IP
<thewrath> does anyone know how to set up a second parition or what not after you installed ubuntu server?
<ivoks> thewrath: cfdisk
<thewrath> just run cfdisk in ssh or on the server live and follow the steps?
<jmedina> thewrath: create the partition with fdisk or cfdisk, format the new particion with mkfs.ext3 for example
<petia> ivoks, Relay Access Denied when trying to send mail :(
<jmedina> create a mount point for example mkdir /newvolume, add the entries to your fstab
<jmedina> something like
<giovani3> petia: because you need to authenticate to your SMTP server
<ivoks> petia: that's good
<jmedina>  /dev/sdb1    /newvolume    ext3  defaults 0 0
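jmedina's steps, collected into one sketch. Everything here needs root, the device names are examples from the conversation, and it assumes the disk actually has free space for a new partition:

```shell
sudo cfdisk /dev/sdb              # carve out the new partition interactively
sudo mkfs.ext3 /dev/sdb1          # put an ext3 filesystem on it
sudo mkdir /newvolume             # create the mount point
echo '/dev/sdb1  /newvolume  ext3  defaults  0 0' | sudo tee -a /etc/fstab
sudo mount /newvolume             # mounts it via the new fstab entry
```

If the kernel doesn't pick up the new partition table (jmedina's point below), `partprobe` or a reboot forces a re-read.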
<thewrath> jmedina: i have never done this before
<thewrath> with partitioning after install
<petia> so i should use "this server requires authentication" ?
<ivoks> cfdisk /dev/sda
<ivoks> if you want to partition disk sda
<ivoks> petia: yes
<jmedina> thewrath: where is the new partition?
<jmedina> a new disk, existing disk?
<petia> ivoks, heh it worked :)
<thewrath> same hard drive
<thewrath> so yea
<thewrath> existing disk
<petia> i love you guys
<ivoks> petia: now try sending mail to your email
<petia> this is awesome
<ivoks> petia: and then, to some gmail account
<ivoks> to see if everything works
<petia> ok
<petia> first thing i tried is to send to my gmail account and it worked
<thewrath> jmedina: ?
<ivoks> thewrath: cfdisk /dev/sda
<jmedina> thewrath: use 'sudo cfdisk /dev/sda' and check if your disk has free space
<thewrath> i know it does
<petia> hmmmm, didnt work between internal accounts
<jmedina> well then select free space and create a new partition
<ivoks> petia: check /var/log/mail.log
<thewrath> sda1 boot primary linux ext3 38502.54
<jmedina> then write changes, you need to restart your system in order to detect new partition so you can create a new file system (format) in the new partition
<ivoks> jmedina: ????
<thewrath> sda5  logical linux swap / solaris 1513.46
<ivoks> restart?
<ivoks> in which century do you live? :D
<jmedina> ivoks: or run partprobe
<ivoks> jmedina: cfdisk does all that for you :)
<jmedina> ivoks: sometimes it is not possible for the kernel to detect new partitions, the /dev file is not created
<thewrath> ivoks: to create a new parition after running that cdisk command to see if i have space what do i do then?
<ivoks> thewrath: i didn't understand the question
<thewrath> i only have bootable, delete, help, max, print, quit, type, units and write options
<jmedina> thewrath: do you have free space?
<thewrath> yes alot
<ivoks> then highlight free space
<ivoks> and select new
<ivoks> press the arrow key 'down'
<ivoks> :)
<jmedina> :D
<petia> seems like the problem of receiving messages now :(
<jmedina> thewrath: your mouse doesnt work in cfdisk
<petia> no matter where they are coming from, messages are not received
<thewrath> i have no option for new
<ivoks> petia: well, check the /var/log/mail.log
<thewrath> i know that jmedina
<ivoks> you'll have New, when you highlight FREE SPACE
<jmedina> thewrath: then you dont have free space; pastebin the output of cfdisk
<thewrath> hold on when i installed ubuntu server i installed it to take up the entire hard drive
<thewrath> is that my issue
<jmedina> thewrath: do you have something like "                                           Pri/Log         Free Space                                        70285.02" ?
<thewrath> no
<thewrath> bc i installed it to take the hard drive
<thewrath> the entire hard drive
<jmedina> then you dont have free space
<ivoks> hehe
<thewrath> how can i "re parition" it
<thewrath> do i have to use a live ubuntu desktop cd
<ivoks> yes
<thewrath> ok hold on
<ivoks> then resize filesystem
<ivoks> and then resize partition
<jmedina> I like minidistro parted image
<jmedina> it is 10MB iso
<jmedina> parted magik
<thewrath> there should be one or something on the ubuntu 8.04 install cd right, or do i need a live cd?
<Vog-work> live cd would be good
<thewrath> okay then i might need to go and download it then
<thewrath> where do i get that
<thewrath> having issue with the ubuntu site to find it
<jmedina> of course, make sure you have a backup :D
<ivoks> petia: so, anything in logs?
<thewrath> i have nothing the server at this point
<petia> ivoks, can i paste the logs to you in a pm ?
<ivoks> petia: ok
<thewrath> pastebin
<thewrath> jmedina: where can i download the ubuntu live cd
<jmedina> ubuntu.com I guess
<ivoks> thewrath: you could use ubuntu server CD too
<Vog-work> thewrath: www.distrowatch.com ?
<ivoks> hm... or not?
<ivoks> hm..
<petia> ivoks, pm
<thewrath> live or standard install
<thewrath> bc i have the standard cds
<thewrath> but not the live cds
<thewrath> nvm
<thewrath> what  am i looking for when i get into the live cd
<jmedina> thewrath: just download ubuntu desktop
<thewrath> i ahve it
<thewrath> it is booting from the cd in the server now
<thewrath> well maybe
<thewrath> lol
<thewrath> there we go just really slow lol
<thewrath> i am in so i am looking for what exactly?
<thewrath> partition editor?
<thewrath> under /dev/sha1 it says unused space 34.39gb
<ivoks> sha?
<giovani3> new encrypted drive device ;)
<thewrath> i meant sda
<thewrath> i am in the partition editor
<thewrath> do i click on sda1 resize
<thewrath> reduce that by 10gb
<thewrath> and what after that?
<ivoks> well... if it says unused space
<ivoks> then it's free space, right?
<thewrath> i would assume so
<thewrath> so i want to create a new partition from it
<giovani3> ivoks: he means the editor is evaluating the free space within the partition, not unpartitioned space
<thewrath> how do i do that from here
<ivoks> paste on pastebin:
<ivoks> sudo fdisk -l /dev/sda
<thewrath> mer?
<thewrath> what does that do?
<giovani3> it lists partitions
<ivoks> lists partitions
<thewrath> after i run that what do you want me to do
<ivoks> paste it on the pastebin
<thewrath> k
<furicle> Anybody else using iSCSI on 100bT and crud equipment?  It's running 50% slower than SMB to/from same machine. Thought it would be closer to par or slightly faster...
<thewrath> http://pastebin.com/m6fff020c
<ivoks> so... no free space :)
<incorrect> I want to have a user that can change users without sudo
<thewrath> how do i create free space
<ivoks> first, disable journaling on sda1
<ivoks> like this:
<incorrect> I want to be able to deploy my software and then run a script to execute it, but i want to change user
<ivoks> tune2fs -O^has_journal /dev/sda1
<petia> FYI everyone : ivoks rules !
<ivoks> :)
<thewrath> so tune2fs -0^has_journal /dev/sda1
<thewrath> ?
<ivoks> right, with sudo
<ivoks> O
<ivoks> not 0
<ivoks> big o
<thewrath> so the letter
<thewrath> good
<thewrath> after that what do i have to do
<ivoks> run parted /dev/sda
<thewrath> ^has_journal invalid
<uvirtbot> thewrath: Error: "has_journal" is not a valid command.
<ivoks> sudo tune2fs -O^has_journal /dev/sda1
<thewrath> letter or n umber
<ivoks> letter
<ivoks> did you backup your data? :)
<thewrath> i did the last command
<thewrath> now says (parted)
<thewrath> now what
<thewrath> lol
<ivoks> quit parted
<thewrath> jsut ctrl c?
<ivoks> type quit
<thewrath> done
<ivoks> check filesystem:
<ivoks> e2fsck -f /dev/sda1
<thewrath> what should it show
<ivoks> (disclaimer: this worked for me every time, but you are warned to backup the data)
<ivoks> it should start checking your partition
<thewrath> already normalized it says
<thewrath> which i have nothing on it
<thewrath> lol
<ivoks> ok then
<thewrath> this is going to be a dedicated imaging/multicasting image server
<ivoks> how big do you want it to be?
<thewrath> the new one like 15gb
<ivoks> how big is the current?
<thewrath> used is 1.47gb
<thewrath> unused 34.39
<ivoks> ok
<ivoks> so you want /dev/sda2 to be 15GB or /dev/sda1?
<ivoks> current partition or the new one we will create
<thewrath> create the new one
<ivoks> resize2fs /dev/sda1 20G
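ivoks' command-line shrink recipe, pulled together in order (he steers thewrath to gparted shortly after, which automates the same thing). This is destructive, so back up first -- /dev/sda1 and 20G are this conversation's example values:

```shell
sudo tune2fs -O ^has_journal /dev/sda1   # drop the ext3 journal (capital O, not zero)
sudo e2fsck -f /dev/sda1                 # force a clean filesystem check first
sudo resize2fs /dev/sda1 20G             # shrink the filesystem to 20 GB
# next: shrink the partition itself with parted/cfdisk, then restore the journal:
sudo tune2fs -j /dev/sda1
```

The filesystem must be unmounted (or you must work from a live CD) for a shrink; resize2fs refuses to shrink a mounted filesystem.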
<thewrath> it needs to be named tftpboot
<thewrath> Create a /tftpboot partition in ext3 or reiserfs in the size of 200MB per client at least,
<ivoks> did you run resize?
<thewrath> no not yet
<thewrath> i was following your instructions to the t
<ivoks> so... gparted is graphical tool?
<ivoks> and it can do it
<ivoks> lol... i didn't know that
<ivoks> did you resize it?
<ivoks> if not, don't
<ivoks> use gparted :)
<thewrath> how i use gparted
<thewrath> and where is it
<ivoks> gksudo gparted
<thewrath> hold on i am on the phone
<thewrath> in there
<thewrath> found the phone has speaker phone, since i dont work here i dont know all the cool technology the school has lol
<thewrath> need to leave soon so anything right now to get this finished quick or whatnot is greatly appreciated
<ivoks> if you can't find your way out using gparted
<ivoks> then i can't help you
<ivoks> :)
<ivoks> it has slider and what not...
<thewrath> im in gparted and resize it and reduce it by 15 gb?
<ivoks> right
<uvirtbot> New bug: #312437 in samba (main) "version `GLIBC_2.8' not found (required by /lib/security/pam_smbpass.so) (dup-of: 256238)" [Undecided,New] https://launchpad.net/bugs/312437
<thewrath> done
<thewrath> then right click on unallocated and select new?
<thewrath> how do i call it tftpboot?
<ivoks> partitions can't be called
<ivoks> they don't have names
<thewrath> ok
<thewrath> but am i right about right click on unallocated and select new
<ivoks> you can mount them in directory, but you can do that after you create them
<ball> how do I get a list of the installed packages?
<ivoks> yes
<ivoks> dpkg -l
<thewrath> what?
<thewrath> oh ok
<ball> thanks
<thewrath> nvm
<thewrath> ivoks:  then click on apply?
<ivoks> yes
<thewrath> well ivoks ty i am goign to leave
<thewrath> and let this run over night
<thewrath> u going to be in tomorrow?
<thewrath> how do i mount the parition btw after i am done
<ivoks> over night?
<ivoks> it'll finish in couple of seconds
<thewrath> well i have to go anyways
<thewrath> its taking a while to resize stuff
<thewrath> how do i mount the parition btw after i am done
<thewrath> r u goign to be in here tomorrow
<ivoks> mount /dev/sda2 /mnt/my_other_partition
<thewrath> its done
<thewrath> so tomorrow i can do the rest for drbl
<thewrath> /mnt/my_other_partition?
<thewrath> sda3
<thewrath> btw
<thewrath> but i got to roll i should be back
<thewrath> ttyl
<incorrect> I need to give a user that is not root the ability to change user without needing a password
<jmedina> incorrect: what you mean by "change user"?
<incorrect> I want to copy on my apps as user X, then i want to start them over ssh
<incorrect> so i would ssh su - user -c /script
<incorrect> its just not the root user
<incorrect> ideally i would like to restrict which users they can be
<bromic94_> hello all
<ivoks> i didn't understand anything
<thewrath> hey ivoks
<thewrath> thanks for the hel[
<ivoks> np
<thewrath> *help
<thewrath> the reason i was talking to you about the information was this link and how they worded it
<thewrath> http://samiux.wordpress.com/2008/10/22/howto-penguinzilla-drbl-on-ubuntu-8041/ under step 1
<thewrath> hi loshok
<loshok> damn
<loshok> its orudie
<thewrath> its wat
<ivoks> ?
<loshok> ivoks,
<loshok> cant send to outside emails now for some reason
<loshok> can you please have a look at the logs ?
<ivoks> you said you've sent it to gmail account before
<loshok> ivoks, yeah i havent changed anything stopped working all the sudden
<thewrath> yea
<loshok> now get system administrator error undeliverable
<loshok> but only when sending to outside
<ivoks> and, what the error says?
<ivoks> there's some meaningfull info in the message
<loshok> http://pastebin.com/m674c2b65
<ivoks> and who is the sender?
<thewrath> re authenticate
<thewrath> i used relaying with xampp
<loshok> paul@jpiamerica.com
<ivoks> you need to authenticate
<thewrath> ?
<loshok> i did
<ivoks> your server isn't open relay
<loshok> it was working
<ivoks> paste main.cf somewhere
<thewrath> what mail server you using?
<loshok> dovecot
<ivoks> postfix
<loshok> i am authenticated, i have that option checked in the email client
<thewrath> close it down and do it again
<ivoks> paste the main.cf already!
<thewrath> soemtimes that happens
<thewrath> lol
<thewrath> ivoks: do you understand what it meant on that link i sent you about the /tftpboot partition
<thewrath> or is that the mounted name?
<ivoks> that's mount name
<ivoks> not a partition
<thewrath> ok
<incorrect> there has got to be a way to allow a non-root user exec scripts without needing to enter a password
<ivoks> scripts not, programs yes
<ivoks> you can setup setuid bit
<ivoks> and make that script executable only by some group
<thewrath> so sudo mount /dev/sda3 /tftpboot?
<ivoks> and in that group would be users you will grant execution of that script
<loshok> http://pastebin.com/m2ded421e
<ivoks> thewrath: you need to create filesystem on it
<thewrath> yea it has ext3
<ivoks> then just mount it
<thewrath> with the command i said?
<ivoks> yes
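To make that /tftpboot mount survive a reboot, an fstab entry is also needed -- a sketch using /dev/sda3, the partition thewrath ends up with later in the conversation:

```shell
sudo mkdir -p /tftpboot
sudo mount /dev/sda3 /tftpboot
echo '/dev/sda3  /tftpboot  ext3  defaults  0 2' | sudo tee -a /etc/fstab
```

The trailing `0 2` asks fsck to check the filesystem at boot, after the root filesystem.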
<ScottK> lamont: Nice things said about your work on postfix-users yesterday.
<ScottK> Actually, nice things about Ubuntu Server with Postfix generally.
<ivoks> yay! :)
<loshok> it worked in thunderbird but doesnt work in outlook
<loshok> maybe it has to do with accepting the certificate
<ivoks> i've heard this before
<ScottK> loshok: I didn't read the backscroll but works on thunderbird not outlook makes me guess the answer is login
<ScottK> You need it and you don't have it.
<loshok> it worked fine before
<loshok> i configured everything
<loshok> authentication and stuff
<loshok> and stopped working all the sudden
<incorrect> that doesn't really scale
<incorrect> must be another way
<loshok> http://pastebin.com/m69191aea
<loshok> scope006,
<loshok> ScottK, http://pastebin.com/m69191aea
<loshok> its authenticated, testing settings ok
<ScottK> Dovecot or Cyrus for SASL?
<loshok> send and receive
<ivoks> dovecot
<loshok> ivoks, any suggestions ?
 * ScottK doesn't know the config file for dovecot.
<loshok> please.....
<ScottK> loshok: Are you sure you offer both login and plain for sasl?
<loshok> ivoks,
<loshok> ScottK, yes
<ScottK> What do your logs say about the error?
<loshok> maybe it has to do with the certificate
<ivoks> i think that outlook doesn't authenticate
<thewrath> it does
 * ScottK wants to see logs.
<loshok> it does
<thewrath> there is a setting you need to set
<loshok> it was working earlier
<loshok> i set it
<ivoks> from outlook it probably doesn't work
<loshok> yeah but it was working earlier with outlook too
<ScottK> If you haven't looked in the logs and found the error there, then you aren't looking in the right place for an answer.
<loshok> i made several tests before
<ScottK> There are quite a few reasons you could get that error message.
<loshok> sending to internal users work, but not outside
 * ScottK won't speculate.
<loshok> sending to the users created by me on the server works fine in outlook
<ivoks> for that you don't need to authenticate
<ivoks> server must accept mail for them
<loshok> it has to use TLS
<ivoks> it's when you use it to send outside, then you need to authenticate
<loshok> i dont see a setting for TLS in outlook
<ivoks> secure or something like that
<ivoks> http://www.rescom.psu.edu/ssl/outlookssl.htm
<ivoks> this is for POP3
<loshok> ivoks, thats for incoming, the problem is with outgoing
<ivoks> you should set it for SMTP too
<ivoks> well, open your eyes
 * ScottK is deciding he's pleased the lack of logs save him having to spend time on this.
<ivoks> ScottK: :)
<ivoks> http://help.expedient.com/mailnews/ssl.shtml
<ivoks> there
<ivoks> sorry, it doesn't have pictures
<ivoks> i hope you'll be ok with that
<thewrath> lol
<loshok> loll
<loshok> dude
<loshok> i have it checked already
<loshok> i am trying everything
<ivoks> did you enable SSL in postfix?
<ScottK> Trying everything except providing logs.
<loshok> hang on i'll give you logs
<ScottK> I'll be asking for the output of postconf -n next.
<loshok> scope006, ivoks logs http://pastebin.com/m3a60067f
<loshok> ScottK, sorry
<ivoks> it doesn't authenticate
<loshok> so what should be done in outlook? i'm checking the box where it says "this server requires authentication"
<ivoks> enable smtps in postfix
<scope006> loshok, did you need something?
<ivoks> loshok: open /etc/postfix/master.cf
<loshok> scope006, sorry wrong message
<scope006> lol np
<ivoks> loshok: find a line smtps     inet  n       -       -       -       -       smtpd
<ivoks> loshok: and uncomment it
<loshok> ivoks, done
<ivoks> restart postfix
<ivoks> and then try again
<ivoks> which outlook version is that?
<ScottK> Make sure you point Outlook at port 465 too.
<ivoks> for outgoing server
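For reference, the master.cf change being described: Ubuntu's shipped /etc/postfix/master.cf has the smtps service commented out, and the `-o` overrides shown here are the ones that usually accompany it in that file (treat this as a sketch of the stock commented block, not exact channel advice):

```
# /etc/postfix/master.cf (fragment): enable the legacy SMTP-over-SSL listener on port 465,
# which older Outlook versions expect for secure outgoing mail
smtps     inet  n       -       -       -       -       smtpd
  -o smtpd_tls_wrapper_mode=yes
  -o smtpd_sasl_auth_enable=yes
```

After uncommenting, restart Postfix and point the client's outgoing server at port 465, as ScottK notes.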
<ScottK> loshok: Where in that log is a failed attempt to send outgoing mail?
<loshok> same
<ivoks> gives us the log
<ivoks> the real log
<ivoks> what you pasted before isn't related to the problem you are talking about
<loshok> but where can i find the other log  ?
<ScottK> You got the correct logfile, just not the right spot in it.
<ivoks> find the address you were sending mail for
<ivoks> and then paste couple of lines before and couple of lines after it
<ivoks> dovecot-postfix should solve issues like this :)
<loshok> http://pastebin.com/md92f60f
<ivoks> it doesn't authenticate
<ivoks> it uses TLS, great, but doesn't authenticate
<jmedina> what is the ubuntu virtualisation channel?
<ivoks> loshok: you have 'broken_sasl_auth_clients = yes' in main.cf?
<jmedina> isnt broken_sasl_auth_clients for old outlook clients using login instead of plain?
<jmedina> I think it only affected outlook express before 5 version
<loshok> ivoks, broken_sasl_auth_clients = yes
<loshok> i have that in main.cf
<ivoks> ok
<loshok> ivoks, what can i do  ?
<ScottK> loshok: Pastebin the output of postconf -n please.
<ivoks> grep mechanisms /etc/dovecot/dovecot.conf
<loshok> http://pastebin.com/m57a21d20
<loshok> mechanisms = plain
<ivoks> now... adding login there would be wise, right?
<ivoks> Another plaintext mechanism is LOGIN. It's typically used only by SMTP servers to let Outlook clients perform SMTP authentication.
<ivoks> http://wiki.dovecot.org/Authentication/Mechanisms
<ivoks> add login in that line
<ivoks> restart dovecot
<ivoks> restart postfix
<ivoks> and then try again
 * ScottK headdesk.
<ivoks> ScottK: ? :)
<ScottK> [16:00:34] <ScottK> loshok: Are you sure you offer both login and plain for sasl?
<ScottK> [16:00:44] <loshok> ScottK, yes
<ivoks> :)
<ivoks> 250-AUTH PLAIN
<ivoks> 250-AUTH=PLAIN
<giovani2> haha
<ivoks> loshok: you know what to do?
<loshok> no
<ivoks> loshok: open /etc/dovecot/dovecot.conf
<loshok> ok
<ivoks> find auth default {
<loshok> opened
<ivoks> line ~770
<ivoks> there should be:
<ivoks> mechanisms = plain
<loshok> yes i see it
<ivoks> add login:
<ivoks> mechanisms = plain login
<loshok> ok i did it
<ivoks> save
<loshok> :) saved
<ivoks> restart services:
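The full sequence ivoks walks through, collapsed into one sketch (paths per a stock Ubuntu dovecot/postfix install of that era):

```shell
# /etc/dovecot/dovecot.conf, inside the "auth default {" section (~line 770),
# change:
#     mechanisms = plain
# to:
#     mechanisms = plain login
# then restart both services:
sudo /etc/init.d/dovecot restart
sudo /etc/init.d/postfix restart
```

With LOGIN offered, Outlook's EHLO response should then show `250-AUTH PLAIN LOGIN`, which is what the `250-AUTH PLAIN` lines quoted above were missing.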
<loshok> wheew
<ivoks> Vog-work: right, it doesn't
<ivoks> Vog-work: we don't have anything like that
<ivoks> Vog-work: i use rc.local for that
<ivoks> Vog-work: maybe you should take a look at ufw
<incorrect> in sudoers can i do wildcards?
<Vog-work> ivoks: I would except this is a production server I need to make changes on.
<giovani2> incorrect: wildcards on what field?
<ivoks> loshok: point your boss to http://dovecot.org/donate.html
<loshok> heh
<incorrect> well i have something like /customdir/*/bin/startscript.sh
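To answer incorrect's question directly: yes, sudoers command specifications take shell-style wildcards, so that path can be written as-is. A sketch (the group name is hypothetical; note the known caveat that a `*` in an argument position also matches whitespace, so wildcards in command *arguments* can be abused):

```
# /etc/sudoers (always edit with visudo)
# allow members of group "deploy" (hypothetical) to run any instance of
# startscript.sh one directory level under /customdir, as root, no password:
%deploy ALL = (root) NOPASSWD: /customdir/*/bin/startscript.sh
```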
<ivoks> Vog-work: there's something you can do
<loshok> ivoks, you are super my friend
<ivoks> Vog-work: /etc/network/interfaces supports pre-up pre-down etc
<loshok> i will definitely experiment with future ubuntu server releases
<ivoks> Vog-work: so pre-up could have 'iptables-restore /some/file/with/rules'
<loshok> but i doubt i'll change anything on the company's host since it's working
<ivoks> Vog-work: while pre-down could have 'iptables-save /some/file/with/rules'
<ivoks> or, just save the rules in that file
<ivoks> without pre-down
<ivoks> that way you can have firewall rules per interface
<ivoks> and common stuff can be under lo interface
<ivoks> when i think of it... this sounds very cool and flexible
<ivoks> :)
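ivoks' per-interface idea as an interfaces stanza (addresses and rule-file names are illustrative; iptables-restore reads from stdin, hence the redirection):

```
# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    # load this interface's saved rules before it comes up
    pre-up iptables-restore < /etc/iptables.eth0.rules
    # optionally dump the current rules when it goes down
    pre-down iptables-save > /etc/iptables.eth0.rules
```

As ivoks notes, common rules can live in a similar pre-up line under the `lo` stanza.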
<Vog-work> so is my only other option to go and manually edit iptables.conf in /etc and restart the service from init.d?
<ivoks> ?
<jmedina> Vog-work: you can create an rc script that runs iptables-restore
<Vog-work> jmedina: does that address the need to save the changes I made to the config before restarting?
<Vog-work> so... sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
<Vog-work> or something
<Vog-work> sudo /etc/init.d/iptables save    would work but it doesn't make the changes permanent
<Vog-work> then sudo /etc/init.d/iptables restart
<jmedina> Vog-work: if properly configured, it will run iptables-save on init 6 and 0
<jmedina> and iptables-restore in the default runlevels
<Vog-work> I don't think rebooting should be necessary....
<Vog-work> (referring to init level 6)
<ivoks> reboot?
<ivoks> am i missing something?
<ivoks> what is it that you are trying to do?
<Vog-work> I was under the impression init level 6 would reboot the machine
<ivoks> you asked how to make changes permanent
<ivoks> i guess everybody thinks of that as 'available after reboot' right?
<Vog-work> All I'm trying to do is save changes to an iptables config and then restart the service to have the edited changes made permanent
<ivoks> i see, now i understand what you want
<Vog-work> The server itself being a production mail server shouldn't be restarted in the middle of the business day. I don't think my users would be happy :)
<ivoks> no one said you need to reboot
<Vog-work> My misunderstanding then
<ivoks> you just want to flush your iptables, right?
<ivoks> with some predefined rules
<Vog-work> just sec.... think I got it...
<Vog-work> flush triggered a memory....
 * ivoks smells flushing with DROP as default policy :)
<Vog-work> Ahhh.... a classic case of PEBKAC
<Vog-work> I was attempting to update changes to iptables while running as a user other than root.... or sudo
 * Vog-work needs coffee
<Vog-work> thinks a bash script like this will make things simpler... http://pastebin.ca/1359327
<ivoks_> modprobing is done via /etc/modules
<ivoks_> kernel setup via /etc/sysctl.d/
<ivoks_> take care
<Vog-work> ivoks: good points
<lex> hi
<lex> a little help
<lex> FAIL2BAN
<lex> PLEASE
<MianoSM> lex is that a question?
<MianoSM> if so, I think the answer is: to the left of your "a" key.
<jmedina> :D
<lex> spanish?
<MianoSM> Full sentences are pretty neat-o too. ;)
<pjsturgeon> anyone have trouble getting samba shares to follow symlinks in leopard?
<giovani2> leopard?
<giovani2> this doesn't sound ubuntu-server related
<pjsturgeon> its on an ubuntu server, using samba and viewing in leopard
<giovani2> heh
<pjsturgeon> ;)
<giovani2> well if the problem is isolated to a single leopard box ... I don't see how it's an ubuntu issue
<giovani2> if it's a samba issue ... it belongs in #samba
<pjsturgeon> thats the way with IRC, bouncing between chatrooms. I am setting this up on an ubuntu server and im sure im not the only mac user here
<pjsturgeon> I have NEVER received a useful answer in samba IRC, and am not aware of a leopard IRC? :p
<MianoSM> pjsturgeon: I'm pretty sure leopard and samba have had issues for the past 2 years...
<pjsturgeon> ahh, darn
<ScottK> giovani2: Until you know it's not related to an Ubuntu specific issue in the samba package I'd suggest it's premature to push people away.
<MianoSM> It's been a known issue since November of 2007, possibly earlier. Sorry about that m8.
<pjsturgeon> no worries. let me explain my setup and see if anyone has an idea of how to solve. I have a headless ubuntu box acting as an automated torrent grabber and UPnP media sharer. I have a folder that contains 3 symlinks, one to videos, one to music and one to photos all on another HDD. I also have a torrents folder on the ubuntu disk. Each of these folders is a samba share.
<pjsturgeon> When I use a GUI to move files around between the shares, it sees it as a remote transfer and does it over the network, which is SLOW!
<pjsturgeon> if i mount a samba share to the media folder root, I see symlinks which go nowhere. how to get around this problem?
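For the "symlinks go nowhere" part specifically, samba has to be told to follow links that point outside the share tree. A hedged sketch of the relevant smb.conf share options (the share name and path are made up; defaults have changed across samba versions, and wide links has real security implications on shares writable by untrusted users):

```
# /etc/samba/smb.conf
[media]
   path = /home/media        ; illustrative path
   follow symlinks = yes     ; follow symlinks inside the share
   wide links = yes          ; ...even when they point outside the share tree
   unix extensions = no      ; wide links is ignored for unix clients otherwise
```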
<MianoSM> If you were to ssh into it and move files that way....that wouldn't alleviate your problem?
<pjsturgeon> yes indeed it would
<pjsturgeon> but it takes me a while to do that. If 10 files have downloaded, organising them via terminal is not fun
<Deeps> pjsturgeon: if each samba share is on a separate mount, that is why
<Deeps> and afaik there's no way to get around that
<Deeps> as it copies the data from the remote store to your machine, and then back again
 * MianoSM agrees with Deeps.
<Deeps> (Essentially)
<Deeps> if you had a single mountpoint that you were shifting between, this issue wouldn't occur
<pjsturgeon> bah, I should make torrents folder a symlink too and mount a samba folder directly to my 2nd HDD. That would break none of my scripts and get me under a single mount
<pjsturgeon> its not pretty but it just might work
<Deeps> if you're transferring between different mountpoints, as far as i know it'll always be slow
<Deeps> rather than simply being a case of modifying the paths in the inodes
<Deeps> as it actually involves shifting the data around
<giovani2> yeah, there's no FXP equivalent for samba
<Deeps> if you try with a windows box shifting data between 2 shares, it'll be slow there too i think
<Deeps> as it still involves moving the data
<pjsturgeon> not sure why i didnt put torrents folder on the 2nd HDD in the first place actually. nevermind, i have my solution! one of those "figure it out as you explain it" problems
<Deeps> if you still have 3 separate shares and mountpoints, it'll probably still be slow
<pjsturgeon> it wont be anymore. the issue was i couldnt mount 1 to A) the media folder (containing a few  symlinks + a real folder) or B) the 2nd hard drive. if all folders are on the HDD and all have symlinks on the first drive, i can set samba to look at the 2nd HDD and problem solved :)
<pjsturgeon> thanks anyhow guys
<thewrath> okay i am back
<thewrath> lol
<Vog-work> pjsturgeon: my main problem with samba and leopard is that when you have multiple users using one share, changes one user makes are not seen by a user who already has the directory open until they reconnect to the smb share.
<Vog-work> has something to do with the way that leopard chches the contents of the directory
<Vog-work> ^^caches
<uvirtbot> Vog-work: Error: "^caches" is not a valid command.
<pjsturgeon> i noticed a delay on file changes, but i dont think it waits for a reconnect. they pop up after a while for me
<pjsturgeon> haha, you invoked the bot
<Vog-work> lucky you, I work with a bunch (200 +) of clueless technophobe reporters who can't be bothered to reconnect to see a new file recently added to the share.
<pjsturgeon> that is unlucky! just little old me here on a home network so its a non-issue really
<Vog-work> Had to make a apple script to periodically reconnect to the share.
<pjsturgeon> im sure they'll get there in the end >.<
<pjsturgeon> if only apple bothered turning up to the interoperability meetings like the other big players
<Vog-work> Yeah they do. BUt not without hassling me first
<pjsturgeon> but thats a rant for another channel
<Vog-work> lol
<Vog-work> well... when dealing with samba and interoperability, it's not like M$ shows up either...
<Deeps> not really in their interests to, it should be in apple's
<thewrath> when you mount a drive, how do you set the permissions or the owner of that drive?
<pjsturgeon> thewrath: I use a single login who owns the files anyway, but if you use "admin list" you can force all those users to act as root
<pjsturgeon> warning, that'll only work if the disk is in a unix friendly format, lost 3 days trying to get it working on a fat 32 >.<
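Expanding on the ownership question and pjsturgeon's "unix friendly format" point: filesystems without unix permissions (FAT/NTFS) get their owner assigned at mount time, while ext2/3 ownership lives on disk, so you chown after mounting. A sketch (device names, mount points, and the uid are examples):

```shell
# FAT32/NTFS: no on-disk unix ownership, so assign it via mount options
sudo mount -t vfat -o uid=1000,gid=1000,umask=022 /dev/sdb1 /mnt/usb

# ext3: ownership is stored on the filesystem itself
sudo mount /dev/sdb1 /mnt/usb
sudo chown -R youruser:yourgroup /mnt/usb/shared
```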
<pjsturgeon> thewrath: ignore me, we arent talking samba anymore are we :$
<thewrath> an honest answer anyone?
<kpettit> I had some network speed issues and resolved them by changing some "ethtool" options.  Where is the proper place to put those changes for boot?  All the docs I've found point to some older sysconf directory that isn't in ubuntu 8.10
<jmedina> kpettit: in /etc/network/interfaces, something like this:
<jmedina> post-up ethtool -K eth1 tx off
<jmedina> in the iface options
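Putting jmedina's suggestion in context, the post-up line goes inside the relevant iface stanza (interface name and ethtool options are examples; for kpettit's autonegotiation problem the speed/duplex form with `-s` is the relevant one):

```
# /etc/network/interfaces
auto eth1
iface eth1 inet dhcp
    # disable TX checksum offloading once the interface is up
    post-up ethtool -K eth1 tx off
    # or force speed/duplex instead of failing autonegotiation:
    # post-up ethtool -s eth1 speed 100 duplex full autoneg off
```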
<kpettit> ok
<kpettit> thanks, I like that a lot better than some of the other suggestions I've seen on the net.  Clean and simple
<kpettit> Is ethtool options better to do as a pre or post-up?
<jmedina> I use it in my machines, when they dont work properly, for example to change speed, or disable tcp offloading
<jmedina> and prevent TCP bad checksums in xen environments
<kpettit> man I was getting 16-32k down/up speeds from my 10MB fiber until I did that change.  It couldn't auto-negotiate correctly
<jmedina> kpettit: I'm having some problems with Windows paravirtualized drivers in a fully virtualised xen environment
<kpettit> sounds fun
<thewrath> hey all
<bitsbam> if i plug in a usb storage device, how do i know what /dev/whatever it was assigned so i can mount it?
<bitsbam> thewrath: hey
<jmedina> bitsbam: most of times you can use 'dmesg'
<kpettit> you can type dmesg or tail /var/log/messages
<bitsbam> yup, there it is, thanks kpettit
<unit3> Hey, just backported the mysql-server-5.1 package to hardy for use in a cluster, and discovered all the NDB/Cluster stuff is missing.
<bitsbam> jmedina: was in dmesg
<kpettit> I usually like to "tail -f /var/log/messages" so I can see what goes on when I plug/unplug things
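A few ways to catch the device name after plugging in, per the suggestions above (the sdX names you'll see depend on your hardware):

```shell
dmesg | tail -n 20            # look for lines like "sdb: sdb1"
tail -f /var/log/messages     # leave running while you plug/unplug
ls -l /dev/disk/by-id/        # stable names symlinked to the sdX nodes
```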
<unit3> Is there some docs as to what's going on with 5.1?
<bitsbam> kpettit: yep, going to have to add it to conky now
<unit3> or anyone who knows?
<jmedina> unit3: what about launchpad?
<jmedina> I read a post about mysql 5.1 in universe on the ubuntu server blog
<unit3> Oh, that might be what I'm looking for.
<unit3> I'll check there.
<unit3> hrm... just says it's available, no info on why clustering is MIA.
<jmedina> unit3: you can file a bug in launchpad
<unit3> Yeah, maybe the easiest. Just make my life harder, since I'm supposed to be rolling this cluster out in the next week or so. :(
<unit3> But I'll start with that, and see if someone can let me know what's going on.
<jmedina> https://bugs.launchpad.net/mysql-server
<ScottK> unit3: If you're planning on production using 5.1, you're probably on shaky ground anyway.
<ScottK> jmedina: Wrong package.  5.1 is a separate package.
<jmedina> ScottK, you are right, I just remembered that after pasting
<unit3> ScottK: I was under the impression they'd worked out the launch problems?
<unit3> Maybe I'll just drop back to 5.0, and make sure I've got enough RAM in the systems. :P
<ScottK> unit3: It's pretty new, so I'd want to test it pretty hard first.
<ScottK> 5.0 is safer and more supported.
<unit3> Yeah. Just the NDB stuff is also a lot more limited, from what I've seen.
<unit3> Oh well, it looks like it'll be the better option for now.
<unit3> I can look at upgrading to 5.1 once Jaunty's out and more people are using it, I guess.
<olcafo> kpettit, I find "dmesg | tail" has more useful information. But that might be overkill.
#ubuntu-server 2009-03-13
<Myst> Hey everyone... does anyone know of a cd/dvd buring program that uses command line interface?
<Deeps> mkisofs / growisofs
<twb> genisoimage now, I think.
<twb> Traditionally genisoimage (nee mkisofs) is used to create an .iso, and this is then burnt with cdrecord.
<twb> For DVDs, the growisofs tool can perform both steps at once.
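The two workflows twb describes, sketched (device paths and the volume label are examples; on Ubuntu the cdrecord command is provided by the wodim package):

```shell
# CD: build the ISO, then burn it
genisoimage -r -J -V mybackup -o backup.iso /path/to/files
cdrecord dev=/dev/cdrw backup.iso

# DVD: growisofs builds and burns in one step
growisofs -Z /dev/dvd -r -J /path/to/files
```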
<Myst> ok so growisofs
<twb> Incidentally, most (all?) GUI burning tools end up just calling the CLI tools internally.
<Myst> yeah... just the server im using has no GUI and i would prefer not to have a gui installed in order to keep the system hardened
<twb> That won't really "harden" the system much.
<twb> But using a GUI remotely is often slow, and in this case is certainly unnecessary.
<Deeps> will "weaken"
<Myst> most vulnerabilities are found with GUIs and if i install one then i have to go through a battery of tests to ensure that the system is hardened
<twb> Myst: if those GUIs can only be used by ssh -X, then you're only as weak as ssh.
<twb> Unless end users have shell access to your server, I guess.
<Myst> the last time i installed a GUI for linux it had a lot of processes that ran at startup, which in turn created some vulnerabilities during the testing... so i have avoided using guis on any server...
<twb> Installing a single graphical application is different to what you call "installing a GUI".
<Deeps> along with all the xauth etc. packages that are then needed for x-forwarding
<twb> I'm not advocating installing ubuntu-desktop or anything, just e.g. xcdroast.
<twb> Deeps: well, openssh-server Recommends xauth anyway
<Deeps> recommend, doesn't depend
<twb> Deeps: recommend is opt-out now
<Deeps> twb: i opt out
<Myst> yeah... just if i want to run a burning utility and need the gui to do it all those unnecessary processes run... but ill look into xcdroast
<Deeps> i'd take the opposite attitude to twb here, and i'd avoid any X related applications on a server
<twb> Deeps: well so do I
<Deeps> especially since like he says, in the case of cd/dvd burning, they all just end up using the same cli tools internally anyway
<twb> Deeps: I'm just saying that of all the things that you can do to harden a server, refusing to install libx11 is pretty negligible.
<Deeps> so skip their front ends and just do it yourself
<twb> Removing anything non-essential listed in netstat -nap or in /etc/init.d/ or /etc/inetd.conf would be a start.
<twb> Installing a firewall, possibly auto-applying security updates.
<twb> Compared to installing, say, xdu or xterm, those will have far more impact.
<Myst> hehe... i didnt mean to start a debate about hardening a system... but advice is always welcome
<twb> Myst: no worries, I should be working...
<twb> Regarding smartmontools...
<twb> I need to tell it my drives are SATA, not SCSI.  I can add /dev/sda and /dev/sdb to smartd.conf.
<twb> But what happens if the device names happen to change (e.g. because USB gets modprobed before SATA for some reason)?
<ScottK> Does it know about UUID?
<twb> Can I tell smartd to identify devices by a UUID or serial, or even just to default to assuming SATA?
<JanC> eh, it should detect all drives automatically?
<twb> JanC: yes, but it detects them automatically as SCSI
<twb> Because of the SAT emulation layer in the kernel.
<twb> (That's why drives are called /dev/sda not /dev/hda nowadays)
<JanC> of course, but it's easy enough for them to see that it's really a SATA disk...
<twb> JanC: smartd apparently doesn't know to do that.
<giovani2> smart works fine with my sata drives
<giovani2> I don't know anyone who's had a problem ... can you describe exactly what's going wrong?
<twb> giovani2: I have to add -d sat.
<twb> giovani2: this means explicitly listing the drives, AFAICT.
<twb> giovani2: this means that if the drives change names, smartd.conf will be wrong.
<giovani2> what happens if you don't?
<twb> Device /dev/sda: ATA disk detected behind SAT layer
<twb> Try adding '-d sat' to the device line in the smartd.conf file.
<twb> For example: '/dev/sda -a -d sat'
<twb> ...in syslog.
<giovani2> odd, that doesn't happen for any of my SATA drives, is the SAT layer non-default?
<twb> giovani2: AFAIK this is a stock Ubuntu 8.04 Server install.
<giovani2> https://help.ubuntu.com/community/Smartmontools
<giovani2> did you read that?
<giovani2> the script on the bottom seems to be a solution, as it's doing "-d ata" -- you could simply swap that out with "-d sat"
<giovani2> it's iterating through /dev/disk/by-id/
<JanC> twb: it should also understand the device links in e.g. /dev/disk/by-id/
<JanC> and probably the other ones under /dev/disk/ too
<twb> giovani2: sorry, I'm multiplexing multiple tickets.  Looking now...
<twb> giovani2: OK, that link seems to basically be circumventing the existing init.d infrastructure entirely.
<twb> Oh, no, I see, they're talking about using that script as a replacement for the default scanner execed in smartd.conf
<twb> Hmm, this actually looks like it works:
<twb> Just add "-a -d sat" to the DEVICESCAN line itself.
<twb> Wish I knew how to simulate a smart failure on a remote machine, though :-)
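The smartd.conf change twb landed on, for reference (the -m address is illustrative; DEVICESCAN makes smartd probe all devices itself, so there are no per-drive lines to go stale when device names shuffle):

```
# /etc/smartd.conf
# scan all devices, monitor everything (-a), force the SAT pass-through
# (-d sat), and mail warnings to root
DEVICESCAN -a -d sat -m root
```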
<JanC> on my system SATA drives are automatically recognized as such, it seems...
<giovani2> twb: don't have any old, bad drives around?
<uvirtbot> New bug: #342056 in samba (main) "Samba automatic account creation assumes local accounts" [Undecided,New] https://launchpad.net/bugs/342056
<twb> giovani2: the machine is in a data center
<twb> And more to the point, I'm not
<doginize> I'm using ufw for my firewall. if a port is not ALLOW, does that not mean DENY? what difference does an explicit DENY make?
<thewrath> anyone know anything about this: http://linux.slashdot.org/article.pl?sid=08/02/13/0227231
<twb> doginize: run "iptables-save" and pastebin the output.
<twb> doginize: if you look at the top of that output, you'll see something like...
<twb> :INPUT ACCEPT [0:0]
<twb> :FORWARD DROP [0:0]
<twb> ...those lines mean that INPUT defaults to ACCEPT and FORWARD defaults to DROP.
<twb> Unless you're working on a router, you mostly care about the INPUT chain.
<twb> doginize: however what you might find is that the default policy for INPUT is ALLOW, but at the bottom of the chain ufw has a rule like "otherwise, always reject".
<twb> I haven't used ufw enough myself to know if that's the case by default, and I don't have a ufw-enabled box nearby to check.
<doginize> twb: here is the output: http://dpaste.com/13850/
<twb> doginize: OK, so the default policy for INPUT is DROP.
<twb> DROP is like REJECT, only the packet is thrown away without sending a "piss off, loser" packet to the other computer.
<doginize> thanks, twb
<twb> What that means is the other end will just see something like "connection timed out" instead of "connection refused".
<twb> And I can't see any generic "otherwise -j REJECT" at the bottom of the INPUT chain.
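For orientation, the shape of an iptables-save dump twb is reading (the counters and rules here are illustrative, not doginize's actual output):

```
*filter
# :CHAIN POLICY [packets:bytes] -- here INPUT's default policy is DROP
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```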
<Fenix|home> Greetings
<Fenix|home> any way to get real time logs to show on tty?
<ScottK> tail -f logfilename?
<doginize> twb: i don't know what that output means. any suggestion for me to read?
<Fenix|home> ScottK... hehe.  My bad for not articulating.
<ScottK> That did seem too easy.
<Fenix|home> I'd like to have syslog show in realtime on a tty straight from boot.
<Fenix|home> I can always switch to another tty to log in...
<Fenix|home> but typically I do all from SSH and the server doesn't have keyboard/mouse... just a monitor
<twb> doginize: basically ufw is an abstraction layer over a more powerful (and more complex) system called "iptables".
<twb> doginize: so to know what ufw did, I looked at what the actual iptables settings were "under the hood".
<twb> doginize: unfortunately I don't know a good single reference for iptables, I just kind of picked it up as I went along.
 * Fenix|home mumbles about iptables... which is like learning icelandic through greek, through latin, through chinese and you don't know any of those languages.
<Fenix|home> doginize, wanna learn IPTables... go to the project homepage at http://netfilter.org/
<Fenix|home> be prepared though... it's a rough ride
<doginize> ok, thanks.
<Fenix|home> alright... have a good night boys and girls.
<twb> Eh, the iptables homepage never really helped me grok iptables
<twb> I needed to actually try it, and see working examples, and read articles on how to do specific things (e.g. DNAT) with it
<twb> Is there any point in zeroing (dd if=/dev/zero of=/dev/sda1) md RAID1 nodes prior to creating the array?
<kees> twb: not really
<twb> How do I discover the UUID of a swap partition?
<ajmitch> vol_id will probably tell you
<twb> ajmitch: where is that?
<ajmitch> in /sbin
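Either of these should print the UUID on an Ubuntu box of that era (the partition name is an example; vol_id came with udev and was later dropped in favour of blkid):

```shell
sudo vol_id --uuid /dev/sda5
# or, with util-linux:
sudo blkid /dev/sda5
```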
<twb> Hmm, not on the Etch box I'm pinching bits from.
<twb> Apparently that's provided by udev
<ajmitch> sorry, I assumed you were using ubuntu
<twb> No worries, I'm duplicating it *to* ubuntu
<twb> This box being Etch was a bit of an accident.
<ajmitch> why do you need to copy the UUID?
<ajmitch> since it probably won't have one on that swap partition
<twb> swaps have UUIDs
<ajmitch> they aren't required to
<twb> Ah.
<twb> In general I'm copying the UUIDs so that when I copy the OS from one set of disks to another, /etc/fstab and menu.lst and such will be correct without me needing to munge them.
<twb> Last time I did this for an Ubuntu box, which uses UUIDs more than this silly Etch system.
 * ajmitch is different & has LVM, so the swap volume is just addressed by name on this sid box
<ajmitch> but has a UUID on the hardy system here
<twb> ajmitch: even though it's on LVM, it's addressed by UUID here.
<twb> (On the Ubuntu example I'm looking at)
<twb> But then I probably set up LVM on it via d-i
<ajmitch> either way works
<twb> Nod.
<joanki123> hi... can anyone help me?  i want to get an online file host... any recommendations?
<joanki123> something cheap but very good and FAST, though it doesn't have to have a ton of storage space
<giovani2> joanki123: file host for what purposes? backups? remote access? media delivery?
<joanki123> backups
<joanki123> just for me personal
<joanki123> for a few files - not a ton
<giovani2> amazon s3 works great for me
<joanki123> mostly for convenience, so it doesnt' have to be anything large
<giovani2> they only charge you for what you use, so it's good for small usage
<giovani2> 15cents per gb
<joanki123> i have less than 1 gig of stuff to backup
<giovani2> per month
<joanki123> is there anything flat-rate or annual so i don't have to stress about using too much?
<giovani2> why would you stress?
<giovani2> unlimited is a bad idea
<joanki123> because tomorrow's files are going to be HUGE
<joanki123> i have to do a one time large transfer
<giovani2> no service is unlimited, and anyone that sells it is overselling
<joanki123> ok i see
<giovani2> http://aws.amazon.com/s3/
<joanki123> .15 /gb not bad
<giovani2> I asked you how much you wanted to back up, you said less than one GB
<giovani2> now it's more? which is it?
<joanki123> tomorrow it will be like 100
<giovani2> 100GB?
<joanki123> for a day
<joanki123> and then it will reduce to like 1 gb
<giovani2> do you have a connection to upload that amount?
<joanki123> on average
<joanki123> it will take time, of course....
<joanki123> so flat-rate file server hosting is not.... advisable
<joanki123> i would think one would just prefer to know what she is paying for ahead of time
<giovani2> you would know ... because you know your usage
<giovani2> why pay every month for something you're not using?
<giovani2> if you pay $5/mo
<giovani2> for a year
<giovani2> you've paid $60
<giovani2> but, if you average 1GB per month on s3 ... you pay $1.80 per year
<giovani2> how is the former better?
<joanki123> lol
<joanki123> ok
<giovani2> if you need to store 100GB for a year
<giovani2> that's different
<giovani2> but, I wouldn't trust anyone who provides 100GB of backup for $5/mo
<giovani2> they are losing money, and overselling to provide it to you
<joanki123> they also charge for data transfer
<joanki123> what is a tb?
<giovani2> yes? backups aren't retransferred all the time
<giovani2> Terabyte? 1024 Gigabytes
<giovani2> for backups, you should be using rsync
<giovani2> which only transfers the files that have changed, and not everything
<joanki123> rsync
<joanki123> ok
<giovani2> rsync is software
<giovani2> for linux
<joanki123> i will look it up
<giovani2> http://samba.anu.edu.au/rsync/
<joanki123> but can you backup to a file server?
<joanki123> an online one
<joanki123> from that program?
<giovani2> rsync doesn't care where you're backing up to
<joanki123> how often do you backup your personal computer?
<giovani2> you mount your s3 account on your linux computer
<joanki123> i think i want to do it every month
<giovani2> and then rsync to that mount point
<giovani2> I run rsync nightly
<joanki123> ohhhh coool
<joanki123> if you run rsync nightly.... do you transfer data to s3 nightly?
<giovani2> yes ... only what has changed
<giovani2> which is probably 50-100MB at most
<joanki123> ooo i see
<joanki123> and you can choose which files to back up
<joanki123> i was using an external harddrive
<giovani2> of course ...
<giovani2> I have to go
<giovani2> good luck
<joanki123> aww
<joanki123> ok i will get rsync
<joanki123> thank you
<racecar561> has anyone actually ever managed to get ubuntu working on a hp mediasmart ex4xx?
<joanki123> sorry dell here
<racecar561> i gotta go... bye everyone
<nomoa> hi, we have a strange problem on 8.10. we recently upgraded from 8.04 to 8.10 and now the system refuses to boot (it cannot find the root partition); fstab contains /dev/disk/by-uuid references. In BusyBox, if we wait a while, devices start to appear and we can mount them (sort of a delay problem?). We use ata_piix for disks. Any clue?
<ScottK> nomoa: If you look in the release notes there's a note about failure to boot with Intel 945.  It's now known to affect more broadly than that.
<ScottK> What you have sounds somewhat like that.
<ScottK> In busybox if you wait a few minutes what happens if you just exit (don't manually mount anything)?
<nomoa> ScottK: I'm going to try
<ScottK> OK.
<nomoa> ScottK: It works
<ScottK> nomoa: Add rootdelay=90 (90 works for me, you might have to adjust) to the boot parameters in /boot/grub/menu.lst.
<ScottK> Also there's a bug you should comment in.  Let me find it.
<ScottK> nomoa: Bug 290153 <-- Please comment with the particulars of your hardware.  Also there's a recent request for testing with newer kernels you might consider.
<uvirtbot> Launchpad bug 290153 in linux "Fails to find boot device in Intel D945Gnt" [High,In progress] https://launchpad.net/bugs/290153
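The menu.lst edit ScottK suggests, sketched (kernel version and UUID are placeholders, and 90 seconds is just his working value):

```
# /boot/grub/menu.lst -- append rootdelay to the kernel line(s)
title  Ubuntu 8.10, kernel 2.6.27-11-server
root   (hd0,0)
kernel /boot/vmlinuz-2.6.27-11-server root=UUID=xxxx-xxxx ro quiet rootdelay=90
initrd /boot/initrd.img-2.6.27-11-server
# (also add it to the "# kopt=" line so update-grub preserves it on kernel updates)
```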
<nomoa> ScottK: We deactivated our scsi LTO-2 medium autochanger which takes ages to load, I'll check with it and adjust the delay
<nomoa> ScottK: ok many thanks
<ScottK> YW.
<chrisadams> anybody here use screen?
<chrisadams> is there an easy way to save sessions in screen if your server's going to reboot?
<simplexio> chrisadams: far as i know there isn't any way to save a screen session
<chrisadams> damnation
<chrisadams> ah, nevermind
<simplexio> of course there is a way around it
<chrisadams> I guess I should be using screen rc to save settings anyway...
<simplexio> yeah.. the way around it is to use a virtual machine which you can transfer to another computer
<simplexio> i think Xen supports it, maybe a little overkill just to keep screen running
<chrisadams> lol
<chrisadams> definitely :)
<asac> so ... one of the last few days' updates in jaunty broke my nfs mount
<asac> that worked for ages
<asac> i get "authentication failure"
<asac> anyone has any clue how to debug this?
<simplexio> problem is in authentication part :)
<Nafallo> kirkland: looks good. an extra space I'm not sure where it's from, but otherwise what I expected.
<thehook> I have a vps running ubuntu 8.10, and lately that server suddenly just freezes so i have to force restart it.. anyone else here have the same problem?
<kirkland> Nafallo: if you pull 1.38 from the PPA, there's an item in the Menu that lets you configure on/off each of the status indicators
<Nafallo> ah yea. kewl.
<Nafallo> kirkland: found a bug then...
<kirkland> Nafallo: what's that?
<kirkland> Nafallo: the whitespace?
<thewrath> hey all how do i mount a new drive to server?
<Nafallo> kirkland: I turned off mem-available and mem-used now says: ,46%
<thewrath> sudo mount /dev/sda3 /tftpboot ?
<kirkland> Nafallo: so that's the same problem as with the whitespace
<Nafallo> kirkland: ah. after the hostname. yea pretty much.
<Nafallo> :-)
<kirkland> Nafallo: enabling/disabling those status items doesn't actually remove them from your profile
<kirkland> Nafallo: it just makes the scripts exit 0
<kirkland> Nafallo: immediately
<Nafallo> ha. oki.
<kirkland> the status line is still
<kirkland> # Status string, last line
<kirkland> hardstatus string '%{+b kr}\%{= ky}o%{=b kY}/%{=b Wk} %100` %{= Wk}%112` %= %{=b bW}%102`%{= Wk} %{=b rW}%101`%{= Wk} %{= Wg}%108`%{= Wk} %{= Yk}%106`%{= Wk} %{= Wk}%104`%{=b cW}%103`%{= Wk} %{=b gW}%105`%107`%{= Wk} %Y-%m-%d %0c:%s'
<kirkland> Nafallo: I haven't thought of a good way to clean that up
<thewrath> anyone?
<Nafallo> as long as you don't have to care about / read it ;-)
<kirkland> Nafallo: ?
<Nafallo> kirkland: I don't see that line when I use the application :-)
<Nafallo> kirkland: ...so not as bothered :-)
<kirkland> Nafallo: cool
<kirkland> Nafallo: so besides the leading/trailing chars, you like?
<Nafallo> kirkland: I've got arch now :-)
<Nafallo> kirkland: yea. it's fine.
<kirkland> Nafallo: \o/
<Nafallo> kirkland: except of course... the constant annoyance you're already told about multiple times :-)
<Nafallo> plain, light, dark
<kirkland> Nafallo: too many colors
<Nafallo> kirkland: plain, light, dark, random ? and the random one can generate stuff like green text on pink background? :-)
<kirkland> Nafallo: <barf>
<kirkland> Nafallo: would you please file a wishlist bug on that one?
<kirkland> Nafallo: requesting the colored profiles be split out to screen-profiles-extras
<Nafallo> kirkland: if you consider it a serious suggestion I will :-)
<Nafallo> ah
<Nafallo> doh
<kirkland> Nafallo: not green/pink :-)
<Nafallo> kirkland: random could generate whatever really ;-) white on white? :-P
<kirkland> Nafallo: yeah, that would increase security
<Nafallo> Bug #342244
 * Nafallo pokes the bot!
<uvirtbot> Nafallo: Error: Could not parse data returned by Launchpad: The read operation timed out
<Nafallo> bug 342244
<uvirtbot> Nafallo: Error: Could not parse data returned by Launchpad: The read operation timed out
<Nafallo> uvirtbot: why do you hate freedom?
<uvirtbot> Nafallo: Error: "why" is not a valid command.
<nomoa> is there a way to identify a device (e.g. /dev/sgX) with a script by reading an entry in a file like /proc/scsi/scsi (we had a usefull device /dev/tape/by-id/XXX but it disappears after we upgrade from 8.04 to 8.10)?
<bn43> hi I've installed nagios3 for server 8.10 - do I still need to download plugins off the site or is this included when installing nagios?
<ttx> bn43: you might want to install the "nagios-plugins" package
<Nafallo> bug 342244
<uvirtbot> Launchpad bug 342244 in screen-profiles "Split out the themes to screen-profiles-extras" [Undecided,Confirmed] https://launchpad.net/bugs/342244
<Nafallo> ^-- kirkland
<uvirtbot> Nafallo: Error: "--" is not a valid command.
<bn43> ah - silly me I was just doing a sudo aptitude search nagios3
<Nafallo> god damn bot!
<bn43> I see I have "i A" next to nagios-plugins
<Nafallo> does it have to trigger on any god damn character :-P
<bn43> what does the A mean?
<bn43> I can see a whole lot of scripts in /usr/lib/nagios/plugins  - I don't see this on the web interface ie check_disk  - what gives?
<_ruben> those are the scripts used by the actual checks
<bn43> _ruben: but how do they get activated and configured? I can't see how to do this on the webpage
<soren> bn43: Look in /etc/nagios/commands.cfg or something like that.
<soren> bn43: It defined the names of commands you can use for service checks.
<soren> Sometimes there's a one-to-one mapping between those and the scripts in /usr/lib/nagios/plugins, sometimes there's not.
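What soren describes can be sketched as a commands.cfg entry; the path and macro names below follow the usual Debian/Ubuntu packaging convention, but check them against the file actually installed on your system:

```
# /etc/nagios3/commands.cfg (illustrative fragment -- verify against your install)
define command{
        command_name    check_disk
        command_line    /usr/lib/nagios/plugins/check_disk -w '$ARG1$' -c '$ARG2$'
        }
```

A service definition then refers to the command name, e.g. `check_command check_disk!20%!10%`, which is why the mapping between command names and the scripts in /usr/lib/nagios/plugins is not always one-to-one.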
<bn43> ah ok that has rung a bell - on the web interface, there is a commands page where I can see all the scripts and switches
<bn43> now I just got to figure out what they all mean
<mrwes> I have a script in cron.daily and it doesn't run -- what should I be checking?
<mrwes> it's owned by root too
<bn43> mrwes: does it run manually?
<mrwes> yah
<mrwes> 25 9    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
<mrwes> it's a blocklist script to update transmission blocklist
<mrwes> it didn't run at 9:25
<bn43> so it runs sometimes?
<mrwes> no, never from the cron.daily
<mrwes> do I need to 'activate' root's cron?
<bn43> i know I had a weird one previously - its the naming of the scripts
<bn43> did not accept
<bn43> _ for instance
<bn43> check for that
<mrwes> the script is called blocklist.sh
<mrwes> maybe it's the .sh ?
<bn43> yeah - check that
<bn43> rename and see
<mrwes> k
<mrwes> set to run in 2 minutes...
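The naming rule behind this advice is run-parts' filename filter: /etc/crontab hands cron.daily to run-parts, which only executes files whose names match ^[A-Za-z0-9_-]+$, so underscores are actually fine but the "." in blocklist.sh disqualifies it. A minimal shell re-implementation of that name check, for illustration only:

```shell
# Sketch of run-parts' default filename filter (the real check is inside
# run-parts itself; this function only mirrors its accepted pattern).
is_cron_runnable() {
  case "$1" in
    *[!A-Za-z0-9_-]*) return 1 ;;  # any other character (e.g. ".") disqualifies it
    *) return 0 ;;
  esac
}

is_cron_runnable blocklist    && echo "blocklist: would run"
is_cron_runnable blocklist.sh || echo "blocklist.sh: silently skipped"
```

`run-parts --test /etc/cron.daily` shows which scripts would actually be executed, which makes this class of problem easy to spot.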
<mrwes> Linux servers are powerful -- what a great way to really learn Linux
<bn43> yeah I'm learning constantly - I'm a noob but can do some funky stuff now
<bn43> running a couple of servers and having fun
<mrwes> same here -- been running this server for hrmm.. maybe a week :)
<mrwes> just a simple file and print server
<bn43> me too
<mrwes> and transmission-daemon too
<mrwes> script still didn't run
<mrwes> er
<bn43> bugger
<mrwes> hrmm...script won't run without the sh blocklist
<mrwes> it's chmod +x too
<soren> mrwes: Do you have cron installed?
<bn43> ?? u took the sh out right? so its "blocklist"
<mrwes> hrmm
<mrwes> yah installed :)
<mrwes> bn43, yah blocklist
<bn43> so if you ./blocklist it does not run?
<mrwes> and it's sitting in /etc/cron.daily
<mrwes> administrator@ubuntu:/etc/cron.daily$ sudo ./blocklist
<mrwes> Waiting for the daemon to exit  done
<mrwes> blocklist updated
<mrwes> Starting bittorrent client transmission-daemon...
<mrwes>    Starting bittorrent client transmission-daemon succeeded
<mrwes> and the level1 file is there
<mrwes> wtf am I missing here? heh
<soren> mrwes: How can you tell it isn't run?
<mrwes> soren, I deleted the level1 file and had the cron.daily set to run at 9:25am, then I checked for the level1 file after that time and it wasn't there
<mrwes> maybe it ran and there's an error?
<bn43> and when you edit crontab you go "sudo crontab -e" right?
<soren> mrwes: Perhaps your script doesn't like running without a controlling terminal.
<mrwes> bn43, that's not necessary for cron.daily
<soren> mrwes: check your e-mail. cron sends the output of the commands it runs to root.
<mrwes> soren, no mail :(
<soren> mrwes: No mail because you don't have an mta or no mail because it didn't send any?
<bn43> yes but it ensures that root is running the script - not the user
<mrwes> and I have an alias root=administrator
<thewrath> bn43: you still there
<bn43> yeah
<thewrath> try getting to https://www.wasd.k12.pa.us
<mrwes> bn43, /etc/crontab calls out to cron.daily
<mrwes> you don't need to 'install' it
<mrwes> says that right in /etc/crontab
<bn43> thewrath: why am I looking at that website?
<mrwes> cuz cron.daily is run by root
<mrwes> er
<bn43> ?? no each user has there own as far as I know
<bn43> crontabs don't belong to root only
<mrwes> yah I know that
<mrwes> # /etc/crontab: system-wide crontab
<mrwes> # Unlike any other crontab you don't have to run the `crontab'
<mrwes> # command to install the new version when you edit this file
<mrwes> # and files in /etc/cron.d. These files also have username fields,
<mrwes> # that none of the other crontabs do.
<mrwes> it's system wide
<bn43> ok then I'm talking outa my behind! not sure how to help
<mrwes> and crontab is running
<mrwes> soren, any ideas?
<bn43> but just to xperiment - can you try it in root crontab?
<mrwes> bn43, yah I was about to do that
<mrwes> I just wanted to use cron.daily for what it's designed for
<soren> mrwes: Still: It's possible your script doesn't like getting run without a controlling terminal.
<bn43> thewrath: u still there?
<mrwes> this looks right
<mrwes> 55 9 * * * cd /etc/cron.daily && ./blocklist
<soren> mrwes: Try running it like 'some command < /dev/null' and see if it does what you expect it to.
<mrwes> k
<mrwes> let me see if the root cron runs
<mrwes> I can use MAILTO=root no?
<mrwes> or is that automatic
<soren> That's automatic.
<mrwes> bingo! it ran
<mrwes> why would it run from root's cron and not the cron.daily ?
<_ruben> i had a similar issue, forgot how i fixed/worked around it though :p
<mrwes> oh...thanks :P
<_ruben> ah .. i put a job in a /etc/cron.d/ instead, and decided to run it 4 times a day as well :)
<mrwes> well..I'm just confused on why cron.daily won't run it, that's what it's for
<bn43> can't comment as I use root crontab only for system scripts
<bn43> was not aware of cron.daily - will look into it
<bn43> I'm still not figuring out nagios plugins
<bn43> is there some idiot proof how to configure plugins?
<mrwes> shit...now it appears the cron.daily ran -- I got an email
<mrwes> heh
<bn43> good for you!
<mrwes> gotta test that again
<mrwes> Ok..now if someone can speed up file transfers with cifs I'd be very happy
<mrwes> heh...shit is way slow
<thewrath> bn43: yes i am
<bn43> what was that link about?
<thewrath> sorry i stepped out for a little bit
<thewrath> wanted you to see if outside access is now reestablished
<bn43> ??
<thewrath> all this week this website was not accessible to the outside network
<bn43> oh....well glad to help - got to it
<bn43> anyone here knows nagios?
<tominglis> hey, i have ubuntu server 8.04.2, and i updated it yesterday, the packages updated included xen hypervisor 3.2
<tominglis> unforunately i am no longer able to ssh or xm console into my xen instances, does anyone know what could be wrong?
<tominglis> i can still see the websites they are serving
<ball> hello VK7HSE
<ball> VK7HSE: any packet down by you?
<soren> mrwes: Do you have anacron installed?
<VK7HSE> ball: 20m 2m
<ball> VK7HSE: people around here tell me packet's dead
<AlexC_> g'morning
 * ball shrugs
 * ball <- KB9YLW
<VK7HSE> ball: pretty much! but I like to make a noise!
<AlexC_> I've got some issues with the "NetXtreme BCM5754M" network card, it just isn't working. Network interfaces are all setup correctly, routes are good - it just sits there. If however I switch over to the onboard network card, it works perfectly fine (with same settings)
<AlexC_> I've got the 'tg3' module loaded, can someone help debug what the issue actually is?
<orudie> ivoks, hi
<ivoks> hi
<Genericnoob13> Probably a stupid question, but is there a way to port forward from ubuntu server?
<ivoks> forward port?
<Genericnoob13> K, i just got an old box and i'm trying to set it up as a home server- problem is, it just doesn't have the HD space. What I'd like to do is set it up so if i try to connect via ftp, it sends me to xp box
<ball> Genericnoob13: that's just silly.
<ball> ...though I suppose it's doable with Samba
<Genericnoob13> well, I'm a noob. Obviously
<ivoks> samba?
<Genericnoob13> and strapped for cash, so actually upgrading old box isn't really an option
<ball> How much space do you need it to have?
<ball> I can relate to being strapped for cash.
 * ball <- impoverished (and now unemployed) student.
<Genericnoob13> Well, on my windows box I have a 500gig drive I use for media- it's about 1/5th full last i checked
<Genericnoob13> this box has like 54 gigs total.
<ball> How much free space on the Ubuntu box?
<Genericnoob13> 42
<Genericnoob13> gigs
<ball> ...and that's not enough for what you want to do?
<Genericnoob13> Well, I have 100 gigs of media- and I'd like to be able to access that whenever. I set up ftp on my xp box, but that either means somehow getting another free dynamic dns account, or having to remember the IP for xp-box
<Genericnoob13> so i just wondered if it'd be possible to point ftp client at ubuntu box's url, but have the server send the connection along to xp
<Genericnoob13> yeah, there's probably a really easy way to do  what i want that i just know nothing about
<ball> Why are you using ftp instead of sftp?
<ball> Genericnoob13: there are ways to do it, but probably also reasons not to.
<Genericnoob13> Eh, once i get it set up, I feel competent enough at setting up accounts/permissions etc.
<Genericnoob13> aaand also on my todolist is moving all my important personal information to a truecrypt drive
<ball> Genericnoob13: that's pointless if you're going to use ftp to access it.
<ball> ftp sends passwords in cleartext
<Genericnoob13> Well, i'm not against learning how to set up something more secure
<Genericnoob13> and actually, I don't think i'd be accessing personal information via ftp- I'm mainly thinking if i am elsewhere and want a movie, or music, or want to share my collection with a trusted friend
<Genericnoob13> the truecrypt thing would just help ensure if someone did somehow get full-access to my box, they wouldn't have access to said information
<Genericnoob13> Hmm, I could probably set up a folder and samba share it, but most tutorials etc I'm finding on google are for sharing ubuntu files with MS, rather then the reverse.
<Genericnoob13> samb
<Ethos> hi guys, i'd like to take an image of a current ubunt-server and then restore it onto an identical machine for testing
<Ethos> I've tried "ping" but it's .. well shit
<Ethos> Any recommendations please?
<W8TAH> in the intrepid setup there is an option for virutal machine host -- what VM system is that installing?
<soren> W8TAH: kvm and libvirt
<W8TAH> thank you soren
<Guest98439> hi all
<Guest98439> anyone here can install Ubuntu 8.04.2 in Server IBM System x3105?
<jmedina> Guest98439: where?
<Guest98439> in IBM System x3105
<jmedina> what is the problem?
<jmedina> are you asking for compatibility?
<ball> I didn't know there was an x3105.  Have you tried it?
<Guest98439> well the hardware are this
<Guest98439> CPU: AMD single-core Athlon processor or dual-core Opteron processor
<Guest98439> NIC: Broadcom 5721 10/100/1000
<Guest98439> CPU is AMD Opteron
<Guest98439> Im going to have problems with hardware drivers during the installation?
<jmarsden_> Guest98439: Just try it -- does it boot from the CD OK into Hardy?  In LiveCD mode can it see the NIC?
<Guest98439> but i like to use Server Edition, not Desktop edition, and Server edition dont have Live CD
<ball> Guest98439: Ah, that's why I didn't see that model, I looked mostly at Intel
<jmedina> Guest98439: my workstation has 2 single opteron and broadcom, it used to be a server
<Guest98439> is a DUal Core Opteron AMD 1210?
<Guest98439> is a Broadcom 5721?
<jmedina> nop
<jmedina> :D
<Guest98439> is a Broadcom 5721?
<jmedina> most of servers we install are amd, never had problems
<jmedina> even in sun workstations
<jmedina> Guest98439: why not boot it with a live cd and test your hardware
<Guest98439> jmedina but Hardy server edition dont have Live CD
 * ball tested someone's RAM yesterday via a Xubuntu liveCD
<jmedina> Guest98439: it is a live server
<jmedina> boot with cd
<jmarsden_> Guest98439: Test it with the Desktop Edition CD
<jmedina> and change to TTY2
<jmedina> :D
<jmedina> and test your hardware
<jmedina> Guest98439: do you already have this server?
<Guest98439> yes I have
<Guest98439> I installed ubuntu server in Dell Power Edge 6850, 1950, 2950 without problems
<Guest98439> also in a IBM System x3200 M2
<Guest98439> jmedina, you recommend me to run with a Live CD Desktop edition and try my hardware???
<jmedina> yeap
<Guest98439> ok
<Guest98439> im tri
<Guest98439> im going to try
<Guest98439> thanks
<jmedina> or you can use server edition, but you need to know how to test hardware in the command line
<Guest98439> but anyone here use IBM System X Servers????
<Guest98439> jmedina Ubuntu Server have a Live CD???
<Guest98439> Live cd for test hardware ???
<jmarsden> Guest98439: No, just use the desktop edition CD.
<jmedina> Guest98439: you can change to a TTY when the installer is loaded,
<Guest98439> jmarsden but the desktop edition have the same drivers like a Server edition ?
<jmarsden> Guest98439: Yes.  It has a differently configured kernel, and a different selection of packages, but the same drivers.
<Guest98439> ok
<Guest98439> thanks all
<Guest98439> im going to try first before install
<XiXaQ> regarding Darwin Calendar Server. Last time I used it, which was quite some time ago, it wasn't possible to connect it to an OpenLDAP directory. Does anyone know if this is still the case?
<jmarsden> http://trac.calendarserver.org/wiki/DirectoryService suggests it can use LDAP...
<XiXaQ> yes, the problem was using it on Ubuntu. The library it used to connect to an LDAP server was only available on Mac OS X, I think.
<jmarsden> Then you can either try the latest Debian/Ubuntu package of it, or just build it from latest source, and see how far you get :)
<XiXaQ> there aren't any packages, I think. But I thought maybe someone knew. There was some interest for it.
<jmarsden> There are Debian packages, definitely...
<jmarsden> http://packages.qa.debian.org/c/calendarserver.html
<XiXaQ> really? :)
<XiXaQ> it's actually in intrepid! :)
<XiXaQ> in that case, I think I'll have to test that properly and update http://wiki.ubuntu.com/CalendarServer
<jmarsden> Go for it :)
<XiXaQ> heh, we wrote that guide when 7.10 was an infant. :)
<HiebY> how much space do a minimal installation of ubuntu take?
<thehook> HiebY: i think it was about 100mb
<HiebY> ok
<mrwes> Epiphany web browser is pretty nice -- and fast
<thewrath> hey i just put another network card in my computer
<thewrath> how do i configure it
<jmedina> thewrath: is it detected?
<jmedina> check with
<jmedina> ifconfig -a
<thewrath> no not that i know of
<thewrath> yes its detected
<thewrath> i tried to go into the /etc/network/interfaces and added auto eth1 iface eth1 inet dhcp
<thewrath> then /etc/init.d/networking restart
<thewrath> nothing
<jmedina> then read https://help.ubuntu.com/8.04/serverguide/C/networking.html
<jmedina> use
<jmedina> ifup eth1
<thewrath> that is what i needed, thanks jmedina
<thewrath> jmedina: can you see if youc an get to https://www.wasd.k12.pa.us/helpDesk/index.php
<thewrath> want to make sure outside access has been restored
<thewrath> it was up and down today
<jmedina> thewrath: yes with expired cert
<thewrath> that is fine
<thewrath> its a self signed anyway
<thewrath> i have to talk to network admin
<thewrath> how do you mount a partition and how do you call it something
<chrisadams> anyone here had trouble deploying rails with ubuntu intrepid, and mysql gems of late?
<chrisadams> I'm getting a  'MissingSourceFile (no such file to load -- mysql):' error, and I can't work out why
<chrisadams> http://gist.github.com/78672
<thewrath> is it sudo /mount /dev/sda3 /tftpboot ?
<ivoks> mkdir /tftpboot
<ivoks> mount /dev/sda3 /tftpboot
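To make that mount survive a reboot, the matching /etc/fstab entry would look something like the line below; the filesystem type is an assumption here (check the real one with `blkid /dev/sda3` or `file -s /dev/sda3`):

```
# /etc/fstab -- persistent mount of /dev/sda3 on /tftpboot (fs type assumed ext3)
/dev/sda3   /tftpboot   ext3   defaults   0   2
```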
<jmedina> dejavu? i think it was already answered
<ivoks> jmedina: it was, couple of times
<jmedina> ivoks: do you use openvpn?
<jmedina> I mean OpenLDAP
<ivoks> both :)
<jmedina> vpn with ldap auth?
<ivoks> hehehe no
<jmedina> what services are your authenticating to ldap?
<ivoks> mail, http, web page, radius (and then ppp, wifi)...
<jmedina> ivoks: with radius, what auth mech?
<ivoks> ?
<jmedina> im interested, I deployed radius+ldap groups and vlans, undocumented :S
<jmedina> are you using ldap groups in you wifi setup?
<ivoks> no
<ivoks> just auth and accounting
<jmedina> how do you limit user access?
<jmedina> by attribute?
<jmedina> or any ldap user has access to wifi?
<ivoks> bah... sorry, my fault
<ivoks> radius uses mysql as backend
<jmedina> :(
<jmedina> well, I think im going to begin again from scratch :S
<ivoks> but very soon i'll use as ldap backend for radius
<ivoks> for cisco vpn
<thewrath> ivoks: whats the command in server you told me to use when i want to look at different partitions?
<ivoks> any ideas how to fight against syn flood? :/
<jmedina> I use it but right now it is authenticating every user, then I need to allow by groups and then set a vlan to each group with different access policies
<ivoks> i'll need something like that too
<jmedina> ivoks: do you know if there is any effort to translate server guide?
<ivoks> it is translated
<jmedina> where?
<jmedina> in help.ubuntu.com I only found english version
<jmarsden> https://translations.launchpad.net/ubuntu/jaunty/+source/ubuntu-docs/+pots/serverguide/
<jmarsden> Shows status of all translations of it :)
<ivoks> there, it's possible to translate it
<ivoks> pick your language and start :)
<jmarsden> jmedina: What language are you hoping to find a translation of it in?
<jmedina> jmarsden: spanish
<jmarsden> I know there is one for Spanish already somewhere...
<jmedina> the documentation is in docbook right?
<jmedina> well, I found docbook xml in ubuntu-docs package
<acicula> if you do a netinstall from the ubuntu-server cd does it install the default ubuntu-profile? since i do have a ubuntu-standard meta package installed, i expected something along the lines of ubuntu-server?
<giovani2> acicula: ubuntu-standard is the minimal "server" install
<giovani2> I'm unclear on what you're looking for -- could you rephrase?
<acicula> giovani2: trying to understand the difference between ubuntu and ubuntu server i guess
<giovani2> acicula: the "desktop ubuntu install" (which is what most people mean when they say just "ubuntu") installs a GUI, specifically Gnome, and many desktop-oriented applications
<giovani2> "ubuntu server" installs a minimal command-line setup, ready for you to install only what you need (servers are supposed to be kept minimal, so as to increase performance, among other things)
<acicula> yeah i did not install that, i used the netinstall images of the ubuntu-server iso and installed the base install and openssh, nothing else yet really
<giovani2> right
<giovani2> so you have a base server install
<acicula> well i swapped to the kernel image
<giovani2> yes, the server install also has a different kernel image
<giovani2> but you can always change that manually, as you've found
<acicula> well it installed a generic image by default, is that the same that is used on vanilla ubuntu(desktop), or is that already a different kernel
<giovani2> I believe that's the one that installs with ubuntu desktop
<giovani2> when most people install servers -- they do so with the server install iso
<giovani2> and that will provide the -server kenel
<giovani2> kernel*
<acicula> hmm, well it didnt here, but that's probably because i used the netinstall from the cd instead of booting the cd
<giovani2> where'd you get the netinstall iso?
<acicula> i didnt use the iso either, in the server iso there is a netinstall directory somewhere which contains a kernel and initrd
<giovani2> ohh, ok
<acicula> booted those via grub, as i cannot change the cdrom iso in the vm
<giovani2> why can't you change the cdrom iso in the vm?
<acicula> The provider installed centos by default, and provides the iso to do a reinstall, you can only change it to another iso+install
<acicula> for which incidently you have to pay
<giovani2> oh, that's no fun
<acicula> so i went with the default centos at first, but i found it to be a bit impractical
<acicula> and since i can do local logon i'm only limited to installing what i can boot from inside the vm
<acicula> anyhow do you know any documentation about optimizing a kernel to run inside a vm?
<giovani2> I'd consider moving to a more user-friendly VPS provider
<giovani2> if I were you
<giovani2> there used to be a special installer (with a special kernel for that) called Ubuntu JeOS
<acicula> well apart from that i have no real problems with them
<acicula> and it's my furst one
<acicula> *first
<giovani2> but, I believe all of the virtualization optimizations were migrated into the standard -server kernel
<giovani2> http://www.ubuntu.com/products/whatisubuntu/serveredition/jeos
<acicula> yeah but i think i read something about the kernel image having some different settings such as disabling preemption?
<giovani2> yes, as of ubuntu server 8.10, the jeos optimizations are offered in the installer
<acicula> giovani2: i looked at that but doesnt that mostly focus on a small install base?
<giovani2> what do you mean?
<acicula> my current install uses just 30mb after deducting cache, that's with openssh and some bash shells open, seems ok
<acicula> memory that is
<giovani2> ok
<giovani2> yeah, you can remove some of your TTYs
<giovani2> to save some ram
<giovani2> I do that on my low-ram VPSes
<acicula> well this one has 256mb, seems ok
<giovani2> ok, so what's the issue, exactly?
<acicula> that i wasnt sure if i installed the server metapackage instead of the default ubuntu-base package, but it seems there is no difference between the two
<giovani2> the kernel is the only difference I'm aware of
<acicula> guess i'm good then :)
<giovani2> but feel free to stick around -- someone might have a more confident answer than I do :)
<acicula> then i'm just looking if there any tweaks to make to a kernel which runs inside a vm, or if the defaults are good enough
<giovani2> it appears there's the linux-image-xxxxxxx-virtual kernel
<giovani2> which, it seems, is geared towards vm clients
<acicula> i'll check that out
<giovani2> you running intrepid?
<acicula> also i get a message about some tools not being in my path because i do not have administrative privileges, where can i change that?
<acicula> yes
<giovani2> linux-image-2.6.27-9-virtual
<giovani2> that'd be the virtual kernel then -- do some research and make sure it's what you want -- and don't install it directly
<giovani2> the linux-virtual metapackage seems to be the best way to make sure it gets updated properly
<acicula> yeah i only use those
<giovani2> you do not have administrative privs?
<acicula> well i can do a local logon when booting
<giovani2> are you familiar with ubuntu's policy of "disabling" root login, and encouraging the use of sudo instead?
<acicula> yes, that's not what i meant
<acicula> if i type chkconfig as a user
<acicula> i get the message that its not in my path, because of lacking privileges
<acicula> adding the correct Path variables fixes that
<giovani2> right ... but if you run sudo chkconfig?
<giovani2> however, chkconfig isn't used in debian/ubuntu afaik
<acicula> that works since sbin/ usr/sbin is in the root path
<giovani2> right ... because chkconfig is only meant to be run with root privs
<giovani2> you don't need root-only dirs in your user path
<Deeps> /sbin/ contains apps that unprivileged users use too
<acicula> i like tab completion, and some sbin programs work fine with normal user privileges, chkconfig lists the runlevels, even with just user privs
<acicula> i was just wondering if this being an administrative user was explicitly defined somewhere
<giovani2> your user should be a member of the 'admin' group
<acicula> ah
<giovani2> the installer should've done that
<giovani2> for your first user
<acicula> hmm i am
<acicula> hmm instead of chkconfig, use update-rc.d?
<sakjur> #ubuntu-se
<giovani2> acicula: ubuntu uses upstart
<giovani2> now
<sakjur> Wrong channel, sorry
<acicula> ah
<acicula> is there a reason to not mount the same partition under /tmp and /var/tmp, or am i better of symlinking it/leaving it alone
<giovani2> not ubuntu specific, but /var/tmp is supposed to be kept after reboots, etc
<giovani2> so, it's typically separate from /tmp
<acicula> hmm that makes sense, since i think /tmp is wiped on reboot
<giovani2> it isn't always ... but can be
<giovani2> so, /var/tmp is used, afaik, when data needs to be kept longer
<acicula> ah thanx, i made a separate partition for tmp so it could be mounted with noexec, but i suspect doing that with /var will break some things
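acicula's noexec /tmp idea as an fstab line (the device name below is hypothetical); his suspicion about /var is well founded, since dpkg executes maintainer scripts from under /var/lib/dpkg, so a noexec /var breaks package installs and upgrades:

```
# /etc/fstab -- dedicated /tmp mounted noexec (device name /dev/sda5 is hypothetical)
/dev/sda5   /tmp   ext3   defaults,nodev,nosuid,noexec   0   2
```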
<thewrath> can i get the ubuntu server 8.04lts guide in pdf format?
<giovani2> thewrath: not that I'm aware of -- however, html to pdf converters are easy to come by
<orudie> is anyone familiar with setting up a spam filter for dovcot/postfix ?
<giovani2> orudie: dovecot is just the imap/pop server, so spam filtering is usually done on the postfix side
<orudie> giovani2, i see
<giovani2> I've done RBLs, greylisting, etc, but no spam filtering like spamassassin, if that's what you meant
<giovani2> what are you looking to do, specifically?
<hads> amavis/spamassassin are probably the most common
<thewrath> giovani2: html to pdf converter?
<thewrath> i want it all in one pdf file
<thewrath> not in 50
<orudie> well since my company switched to the mail server i set up , now everyone is complaining about spam
<thewrath> use a barracuda spam firewall
<thewrath> lol
<giovani2> orudie: while this is probably not the answer you want ... I advise strongly against jumping into production spam filtering without a lot of experience
<thewrath> great piece of equipment
<hads> Riight
<giovani2> barracuda is awful
<thewrath> what
<thewrath> we have it at both schools i help out with
<giovani2> overpriced, for what it delivers
<orudie> giovani2, well what if i will configure it for minimal filtering
<thewrath> spam has reduced alot
<thewrath> what other  features would you add to it for the price
<giovani2> orudie: it's up to you -- but spam filtering is complex if you haven't had experience
<giovani2> thewrath: I wouldn't -- it's overpriced
<thewrath> oh ok
<orudie> giovani2, i have minimal experience, but what is spamassassin about?
<hads> orudie: I'd start with checking HELO, using spamhaus and postgrey myself.
<thewrath> giovani2: are you talking about a html to pdf converter?
<giovani2> thewrath: yeah, that's what I said -- I didn't realize it wasn't offered as a single html page
<thewrath> yea
<thewrath> anyone else have any ideas?
<giovani2> orudie: spamassassin is a framework for applying all sorts of filters
<giovani2> orudie: personally, I've found greylisting and RBLs to be effective -- but they're not for everyone, and I'd advise against using them in production without playing around with them a lot
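The HELO checks, Spamhaus RBL, and postgrey combination hads suggests looks roughly like this in Postfix; a sketch, not a drop-in config (the policy-service port assumes postgrey's Debian/Ubuntu default of 10023, and restriction order matters a great deal in production):

```
# /etc/postfix/main.cf (fragment -- sketch only, tune before production use)
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_non_fqdn_helo_hostname,
    reject_invalid_helo_hostname,
    reject_rbl_client zen.spamhaus.org,
    check_policy_service inet:127.0.0.1:10023
```

Keeping `reject_unauth_destination` early is what prevents the server from being an open relay; everything after it is spam filtering proper.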
<orudie> giovani2, do you think with my minimal experience in this i actually will be able to set it up ? just for minimal spam filtration
<giovani2> orudie: I don't know your level of experience, I'm leaving that up to you to judge
<giovani2> as well as how mission-critical your use is
<ScottK> Ubuntu documents amavisd-new with spamassassin and clamav reasonably well.
<ScottK> It does take adjustment and understanding for your particular situation.
<orudie> great, ScottK do you have a url ?
<orudie> i'm reading the spamassassin wiki right now
 * ScottK looks
<orudie> eheh
<thewrath> has anyone used some sort of imaging server on ubuntu server to image windows and linux based computers
<DragonLord-> I get "There is 1 zombie process" when I login, how do I check which process and information about the crash?
<ScottK> orudie: https://help.ubuntu.com/8.10/serverguide/C/mail-filtering.html for Intrepid.
<thewrath> how do you see if an ethernet interface is up?
<thewrath> whats the command that you run
<orudie> very cool thanx scope006
<orudie> ScottK,
<orudie> scope006, is gonna be like , WTF? again lol
<orudie> just like yesterday
<mrwes> Is it necessary to have a rate limit rule in iptables?
<ScottK> If you want to rate limit. yes.
<thewrath> rate limit does what ?
<mrwes> well..I have denyhosts running, i was thinking of limiting a brute force, say 8 in 60 seconds
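mrwes's "8 in 60 seconds" idea maps onto iptables' `recent` match; as an iptables-restore style fragment it would look something like the rules below (a sketch only: test it before relying on it, and keep an out-of-band way into the box in case you lock yourself out):

```
# iptables-restore fragment (sketch) -- drop sources opening more than 8 new
# SSH connections within 60 seconds; complements denyhosts rather than replacing it
-A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
-A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 8 --name SSH -j DROP
```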
<mrwes> yumm...damn good Malbec :)
<thewrath> denyhosts last update was back in 2006 right?
<mrwes> shrug
<mrwes> works on my box
<thewrath> mrwes: i was just curious
<thewrath> that is what i thought i saw on sourceforge when i saw it
<thewrath> mrwes: is it easy to set up?
<mrwes> yah
<thewrath> has anyone done anything with drbl?
<mrwes> http://ubuntuforums.org/showthread.php?t=450853
<thewrath> ?
<thewrath> nvm the ?
<mrwes> just make sure you don't have any failed login attempts from where ever you login, cuz that IP will get added to the hosts.deny, so add your IP to the hosts.allow
<thewrath> how do you see if an ethernet interface is up?
<thewrath> what's the command
<mrwes> ifconfig
<thewrath> to see if it detected
<thewrath> i thought it was something else
<thewrath> lol
<giovani2> nope
<giovani2> ifconfig will show the interface, and it's UP/DOWN status
<mrwes> :)
<mrwes> google is your friend
<giovani2> it's exactly what you're looking for :)
<thewrath> i could run if eth1 ifup or something
<giovani2> heh
<thewrath> how do you bring up an interface
<giovani2> you can run ifup eth1 ... to bring up the interface/config
<thewrath> oh ok
<thewrath> that is what it is lol
<giovani2> but that's not what checks to see if it's up or down
<thewrath> right
<thewrath> that just brings it up
<giovani2> right
<giovani2> and ifdown
<giovani2> is self explanatory :)
<thewrath> right :)
<thewrath> i wish i had a lot more time to mess with DRBL lol
<mrwes> and ifright does what?
<giovani2> you configure the settings it runs, etc in /etc/network/interfaces
<mrwes> :)
<MatBoy> does someone know how I get the xferlog of proftpd into a fifo file ?
<uvirtbot> New bug: #338982 in samba4 (universe) "evolution crashed with SIGSEGV during MAPI authentication" [Medium,New] https://launchpad.net/bugs/338982
<uvirtbot> New bug: #342556 in mysql-dfsg-5.0 (main) "package mysql-server-5.0 5.1.30really5.0.75-0ubuntu7 failed to install/upgrade: underprosessen post-installation script returnerte feilstatus 1" [Undecided,New] https://launchpad.net/bugs/342556
#ubuntu-server 2009-03-14
<docta_v> how can i tell if a gpg secret key is encrypted or not?
<JanC> see if you can use it without passphrase?
<docta_v> yeah i actually just thought of that
<docta_v> thanks
<docta_v> heh
<photon> how can I search a whole directory and each file in it for a text string and find out which files have that string in them?
<ScottK> photon: grep [your string] *
<photon> ScottK, thanks
<ScottK> photon: That'll give you a pretty verbose output.  You can man grep for more knobs to turn to refine it.
<giovani2> we all love the grep knobs
<ScottK> I like piping through a series of greps to get what I want.
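ScottK's grep advice, made recursive so it covers a whole directory tree; `-l` prints only the names of matching files (the demo directory below is invented for illustration):

```shell
# Hypothetical demo: which files under a directory contain a given string?
mkdir -p /tmp/grepdemo
printf 'the needle is here\n' > /tmp/grepdemo/a.txt
printf 'nothing relevant\n'   > /tmp/grepdemo/b.txt

grep -rl 'needle' /tmp/grepdemo   # lists only the files containing the string
```

`grep -rn` shows matching lines with line numbers instead, and piping one grep into another (as ScottK describes) narrows the matches step by step.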
<DavidCraft> how do I set up an LVM encrypted system with a separate unencrypted boot partition?
<giovani2> DavidCraft: your /boot partition should not be LVM
<giovani2> you can do this all during install time
<giovani2> just create a normal boot partition, then create the LVM PVs with the left over space
<giovani2> and put your logical volumes in there
<twb> It doesn't hurt to put /boot in LVM, though currently grub can't have /boot be both md RAID1'd and in LVM at the same time.
<giovani2> grub doesn't support multiple VGs yet does it?
<giovani2> guess I could just try it in a virtual machine
<twb> giovani2: sorry, I don't know.
<twb> giovani2: IME most of the problem is install-grub, not in grub itself.
<twb> i.e. auto-detecting what modules to include in the MBR
<uvirtbot> New bug: #342580 in samba (main) "smbcontrol reports Segmentation fault" [Undecided,Fix released] https://launchpad.net/bugs/342580
<uvirtbot> New bug: #298680 in samba (main) "samba restart broken in intrepid" [Undecided,Incomplete] https://launchpad.net/bugs/298680
<psteyn> Silly question, how can I tell if a distro IS 64bit currently or not?  something like ldd on a file that tells me its 64bit compiled?
<cemc> psteyn: try file <binary_file>
<psteyn> ./usr/bin/gcc-4.2: ELF 64-bit LSB executable, x86-64,
<psteyn> cool :]
<psteyn> thanks
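Besides running file on a binary, as cemc suggests, a couple of other quick checks exist (a sketch; the output naturally depends on the machine):

```shell
# Architecture the kernel reports (x86_64 on a 64-bit install):
uname -m

# Word size of the default userland: prints 32 or 64.
getconf LONG_BIT

# Inspect an arbitrary binary, as cemc suggested:
file /bin/ls
```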
<psteyn> the 64bit default ubuntu server install will detect more than 2gb of ram right?
<Jeeves_> yes
<psteyn> thanks..think something is wrong on this 64bit install then...initially installed as 64bit dapper drake...bios picks up 4gb ram, but only see ~2gb in linux. so upgraded to latest, and still only see ~2gb
<psteyn> gonna reinstall with a fresh 64bit latest install (sure I could fix the problem otherwise, but id rather just try this)
<psteyn> but for interest sake, what else can I look at?  to debug this?
<psteyn> current installed kernel: 2.6.24-23-server #1 SMP Mon Jan 26 01:36:05 UTC 2009 x86_64 GNU/Linux
<Deeps> the default ubuntu server 32bit will detect more than 4gb of ram too, PAE extensions are enabled by default (in hardy, at least, i dont think that's changed in intrepid)
<psteyn> so if after I reinstall and I still only see 2gb ram the memory might be faulty even though the bios does report (soz, its 8gb..not 4)
<Deeps> as for where it's going wrong, i'd suggest running memcheck when booting from the cd
<Deeps> and seeing what that reports
<Deeps> memtest even
<psteyn> Ok, good idea, thanks
<Deeps> as yeah, it sounds like you may have some faulty ram
<psteyn> yeah...
 * domas thinks, why this tool doesn't have ubuntu package yet: https://savannah.nongnu.org/projects/parallel/ ;-)
<BulleT--> Can somebody please tell me what is this mail error about ebox? http://pastebin.com/m27b65e7c
<Deeps> BulleT--: line21 is the key
<Deeps> EBox.pm seems to be missing from the paths listed, which is where perl normally looks for inclusions
<BulleT--> I installed ebox.. but it's not running.
<Deeps> domas: do it! :D
<BulleT--> I mean... it's runing on the server... but in my browser it doesen't.
<Deeps> i have no idea what what you just said means
<Deeps> but that error is being generated by a cronjob on the machine running ebox
<Deeps> /etc/cron.hourly/99purgeEBoxLogs
<Deeps> that cronjob
<Deeps> lines 24+25 related to another different cronjob that's also failing
<BulleT--> I resolved that issue.
<BulleT--> I have another one...
 * Deeps engages mind reading powers
<BulleT--> :))
<BulleT--> I have ubuntu server 8.10 and somehow i managed to install ebox. It's running right now, on the server. But if I type in my browsers address bar https://myIP/ebox it's doesen't work.
<Deeps> check apache logs
<Deeps> /var/log/apache/access.log and /var/log/apache/error.log
<Deeps> and possibly others too
<Deeps> might be apache2 instead of apache in that path
<BulleT--> Oh my god.
<BulleT--> http:////pastebin.com/f6d3adc03
<BulleT--> Sorry
<BulleT--> http://pastebin.com/f6d3adc03
<Deeps> most relevant line there appears to be 16
<Deeps> with the 404
<BulleT--> http://pastebin.com/f689ae9
<Deeps> again, only relevant line there appears to be the last one too
<Deeps> logs are time + date stamped
<Deeps> stuff from friday isn't particularly relevant if you're encountering the error right now ;)
<Deeps> good way to see exactly what's occurring, is to open 2 terminals, one with tail -f access.log, one with tail -f error.log, hit enter a few times on both to clear some space, and then try to access ebox
<Deeps> you'll see the hits in both immediately, and see what error is occurring, if any
<Deeps> and can then try to debug
<Deeps> by looks of things, you've got an alias for /eBox pointing to data in /var/www/hub/vhcp/ebox
<Deeps> but there's nothing there
<BulleT--> Nothing it's happening.
<domas> hehehee, my 'iostat' manual is already #5 on google for 'iostat'
<domas> got there in few hours
<Deeps> fuse4bsd.creo.hu?
<Deeps> i lie, there's not an alias for /eBox since /eBox 404'd
<Deeps> and the error was generated 20mins later
<Deeps> gl, bbl
<psteyn> I have my /var on a separate drive, and want to do a reinstall...do I still _need_ to do mysqldump or can I just point the new install's mysql to use /var/ ? itl be the same mysql version etc...
<psteyn> would do a mysqldump but dont have a spare drive / spare space..
<mrwes> My server has 2 processes running and like 59 sleeping -- why does it say almost all of my 2gb of RAM is used?
<domas> psteyn: you can use existing directory, if  you don't wipe it out accidently
<mrwes> Mem:   1945428k total,  1915644k used,    29784k free,    72272k buffers
<mrwes> Swap:   975232k total,      188k used,   975044k free,  1764352k cached
<domas> mrwes: see "cached"
<psteyn> domas: ok, yeah I won't mount the mysql drive (/var) until I've got the new install up and running, then ill just mount it and change mysql to use that?
<psteyn> that shouldnt wipe out right?
<mrwes> domas, so the caches memory is reserved for those sleeping processes, so it can write the data from the cache instead of the disk?
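What domas is pointing at: Linux uses otherwise-idle RAM for page cache, and top counts that as "used". The amount applications can still claim is roughly free + buffers + cached. A sketch computing that from sample `free -m` output (the numbers are illustrative; on a live system pipe `free -m` into the awk instead of the heredoc):

```shell
# "used" includes reclaimable page cache, so sum the columns the kernel
# can give back: free ($4) + buffers ($6) + cached ($7) on the Mem: line.
awk '/^Mem:/ {print $4 + $6 + $7, "MB effectively available"}' <<'EOF'
             total       used       free     shared    buffers     cached
Mem:          1899       1870         29          0         70       1722
Swap:          952          0        952
EOF
```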
<lex> hy..
<lex> hi
<lex> i need heelp please
<nivanson> hi, i need to set up a bridge on my server so i can have my virtual machines reachable outside the server. however, the bridging docs (http://doc.ubuntu.com/ubuntu/serverguide/C/network-configuration.html#bridging) is using static ip and my isp has dynamic ip
<nivanson> can anybody help me out?
<simplexio> nivanson: same thing, except you put iface br0 inet dhcp
<nivanson> ok i will try thanks!
<nivanson> should my network/interfaces have the eth0 lines?
<simplexio> nivanson: wait  a sec, i check my interfaces
<simplexio> nivanson: i have , auto eth0 , auto eth1, auto br0, and then i have "iface br0 inet static"
<nivanson> is your bridge a bridge between eth0 and eth1?
<simplexio> nivanson: yeah.
<nivanson> simplexio: i think i got it working now, thanks alot :)
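The layout simplexio describes, as a DHCP bridge stanza for /etc/network/interfaces (a sketch; it assumes the bridge-utils package is installed, and the interface names are illustrative):

```
auto eth0
auto eth1
auto br0
iface br0 inet dhcp
    bridge_ports eth0 eth1
```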
<dazman> Hi. Anyone running Ubuntu-Server in ESXi?
<lex> hola
<lex> hi
<nivanson> i am getting really really bad performance on my virtual server (kvm)... how can i see if the virtual server is hardware accelerated?
<nivanson> using kvm as in the docs
<Nafallo>  egrep '(vmx|svm)' /proc/cpuinfo
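Nafallo's check looks for the CPU virtualization flags; a slightly fuller sketch also confirms the kvm device node exists:

```shell
# Hardware virtualization flags: vmx (Intel VT) or svm (AMD-V).
# A count of 0 means kvm falls back to pure emulation, which is far slower.
egrep -c '(vmx|svm)' /proc/cpuinfo

# Acceleration is only in use if the kvm module exposes its device node:
[ -e /dev/kvm ] && echo "/dev/kvm present: acceleration available" || echo "no /dev/kvm"
```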
<nivanson> yes i have support but can the server be set to not be hardware accelerated somehow? i think i might have missed some configuration somewhere
<nivanson> becouse the performance is really lousy... especially on read write
<simplexio> nivanson: how much top shows  %wait ?
<nivanson> 0.2
<simplexio> well then best guess its that kvm is just slow
<nivanson> it seems to only get really slow when there is read/write involved
<nivanson> cpu load is low
<giovani2> cpu load will be high when the i/o is being maxed out, unless you're at the max of your i/o bandwidth -- not likely
<simplexio> i mean if there is big rw going on and %wa is huge, there is too much io traffic to hd, but if not then it is just slow
<simplexio> like vmware was, then i changed to vbox and install time on VS2005 went from 10H to 30min
<nivanson> you see, i am doing an rsync to the virtual machine from a third machine, while doing this it gets extremely slow
<simplexio> rsync is slow
<nivanson> i guess i will have to see what happens when the real load (webserver) starts then
<nivanson> thanks!
<simplexio> i got only 20M over 1G ethernet when i synced data dirs over nfs mount, just plain cp hit 40M per second
<nivanson> the pc sending the data to the virtual machine with rsync is on wireless, so i could not possibly max out the i/o but still even locally on the server (as i have keyboard and a screen connected) commands are slow now
<nivanson> i stop the rsync and it gets fast
<nivanson> still i cannot see the load on top... it shows almost idle
 * Nafallo usually gets ~13MB/s on his guests
<nivanson> how can i benchmark i/o like you did nafallo?
<Nafallo> nivanson: wget? ;-)
<simplexio> i didnt know that kvm is that slow..
<nivanson> hehe true true
<Nafallo> simplexio: it isn't :-)
<simplexio> nivanson: bonnie++ if ists linux
<nivanson> simplexio: i think i might have an error in my config somewhere cuz this is ridiculous
<nivanson> simplexio: ok thx
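Short of installing bonnie++, a crude sequential-write test can be done with dd (a sketch; conv=fdatasync makes dd wait for the data to actually hit disk, so the page cache doesn't flatter the number):

```shell
# Write 128 MB and let dd report throughput; run in both guest and host to compare.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=128 conv=fdatasync
rm -f /tmp/ddtest
```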
<simplexio> actually im intrested difference between native vs kvm speed
<simplexio> nivanson: so if you test it, feel free to paste result to me
<lex> fail2ban
<nivanson> results: http://pastebin.com/m6e89ca81
<nivanson> (host machine and virtual machine)
<nivanson> i did a small test just for measure
<nivanson> results tell me i have good i/o even on virtual machine, maybe problem was in my network
<simplexio> yeah.. some problem in bridge configuration or something
<nivanson> simplexio: it seems like that, i think i go by some internet route even though my pcs are in the same network
<nivanson> making my ssh (that SHOULD be quick) very slow
<simplexio> nivanson: you could try playing with mtu and txqueuelen
<gabrieltomate> Please, my VINO dont work. Anybody know how i have to do?
<simplexio> nivanson: http://ubuntuforums.org/showthread.php?t=744378
<nivanson> simplexio: thanks!
<simplexio> nivanson: it seems that 10M nic is default for guests. or at least has been
<nivanson> yeah, but that is fine becouse im on a slower connection anyway
<nivanson> 3mbit/s upstreams :S
<simplexio> :)
<nivanson> and since my complete internet connection gets really 100% upstream loaded when i do the rsync between my laptop and my virtual server it means that it just doesnt take the shortest route here
<simplexio> nivanson: well traceroute should tell you that
<nivanson> i am reading up on traceroute atm, i just dont know how to use it
<simplexio> traceroute targetip, and if it shows more than two lines the data goes the wrong way
<nivanson> oh that was easy xD
<nivanson> 17 steps :)
<nivanson> hahahaha this suxx
<simplexio> :) yeah.. you configured route wrong
<nivanson> yeah
<nivanson> traceroute to the ip holding the servers is only 1 step
<simplexio> route add guestIp br0 could help if i recall syntax right
<nivanson> so i will have to take a look at my bridge hehe
<nivanson> on my server holding the virtual machines?
<simplexio> yes
<simplexio> actually you should see only one step on host to guest and back
<nivanson> yeah
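The hop-count check simplexio describes is easy to script. A sketch, with sample traceroute output fed in via a heredoc so the counting logic is visible (the guest IP is a placeholder; on a live system run `traceroute -n <guest-ip>` instead):

```shell
# Count the hop lines (everything after traceroute's header line).
# Host and guest on the same bridge should show exactly 1 hop.
tail -n +2 <<'EOF' | wc -l
traceroute to 192.168.1.20 (192.168.1.20), 30 hops max, 60 byte packets
 1  192.168.1.20  0.392 ms  0.361 ms  0.347 ms
EOF
```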
<nivanson> i lost my password to the host lol
<simplexio> :D
<nivanson> i ran a passwd earlier cancelling it by pressing ctrl+d and it said password successfully updated
<nivanson> haha
<nivanson> i have terminal access to it since i can ssh with my key but i cannot change my password... this is for shit :S
<simplexio> thats reason why i first configure root passwd.. and make another username with sudo permission
<nivanson> xD
<nivanson> anyway
<nivanson> i reset my root password now using rescue cp so np
<nivanson> ok that route command you gave me fixed my problem with traceroute, however my rsync command is still slow, can i flush my dns?
<nivanson> what the hell, i will restart my pc
<nivanson> thanks alot for your help simplexio :)
<nivanson> hug
<nivanson> i rebooted my laptop, from it i can traceroute my virtual servers ip and get 1 route. its great! but somehow when doing my rsync it still routes through the other way...
<nivanson> rsync -avh -e ssh my.backup my_user@85.****.***.**:/folder
<nivanson> could it be that it somehow cached the route somewhere :/
<simplexio> i think that 10M nic is slowing things now..
<nivanson> simplexio: true!
<nivanson> it should be that
<nivanson> yeah it is, my network is quick again!
<nivanson> this is lovely :)
<simplexio> probably alot faster now :)
<nivanson> 1000x
<nivanson> xD
<ylon> any developers around?
<trym_work> I have a raid setup with a blocksize of 64k. When I format it to ocfs2, should I use the same block size?
<trym_work> I have a big hw raid logical device. I created a lvm lv and made it an iscsi target. Do I need to partition this block device, or could I format it directly?
<_ruben> no need to treat an iscsi volume other than a physical disk really
<_ruben> s/other/different/
<trym_work> _ruben: Ive never formatted an entire disk without paritioning it
<trym_work> but I take it that works then?
<trym_work> anyone?
<cemc> trym_work: should work without partitions too
<trym_work> thanks :)
<trym_work> any idea when it comes to blocksize?
<sparky_> hello?
<giovani2> read the title
<giovani2> topic*
<giovani2> just ask if you have a question
<sparky_> Ok, i'm a noob to irc and ubuntu.  Anyways  i'm setting up a home NAS with Ubuntu server 8.04.2 and was looking for a way to install a desktop gui..  but i want one that didn't have much to it just something plain.  does anyone have any suggestions on which one to use?
<giovani2> this channel doesn't support guis on servers
<giovani2> they're not going to help you administer a server
<giovani2> but, if you want one -- you can install one
<sparky_> well is there a web interface that could help me instead of a desktop gui?
<giovani2> there are many ... I doubt anyone here uses them, you should really learn to use the console interface
<giovani2> there are some good books on getting started, I'd recommend some if you're interested
<twb> The package gnome-system-tools provides a handful of GUIs for basic things, such as adding and removing users.
<twb> These GUI apps can be used via ssh -X, so that X need not be installed (on the server).
<twb> There are also packages like webmin, which try to do the same thing in a browser.  In my experience, they all really, *really* suck, and will quietly destroy your entire system.
<sparky_> what are some of the books that would help me with the console.  I would like to learn that more anyways but don't have anyone to teach me
<twb> sparky_: RUTE is an excellent, if dated, introduction to general Linux system administration.
<twb> (It is also downloadable gratis.)
<twb> For paid-for dead-tree books, I have a general preference for anything published by ORA (O'Reilly).
<giovani2> http://www.apress.com/book/view/9781430210825
<twb> Apress is my second choice :-)
<giovani2> I've read this book, it's pretty decent, and offers some good ubuntu-specific info
<JanC> apress has some good books too
<giovani2> and it's up-to-date
<giovani2> to 8.04 LTS
<giovani2> so covers upstart, etc
<twb> sparky_: as for getting people to teach you, I strongly suggest you get in touch with your local Linux User Group.  They often have monthly meetings and such.
<twb> giovani2: sounds good.
<JanC> there are several free "books" for the console though
<giovani2> some minor spelling/grammar issues, but minimal technical errors
<hads> !ebox
<ubottu> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/eBox
<twb> I don't like Apress' use of ragged right and humanist fonts, though :-)
<sparky_> JanC, where are these free "books" ?
<JanC> http://tille.garrels.be/training/ has some books too (maybe not always 100% correct(?), but good enough for many)
<JanC> and there are the IBM redbooks
<JanC> and some of these books: http://www.informit.com/promotions/promotion.aspx?promo=135563 are downloadable for free too
#ubuntu-server 2009-03-15
<tonyyarusso> I'd like to set up a caching server (DNS, web, other?) just for personal home use.  Would I need it to have two NICs, or could it be done with just one?  (I'm considering an Eee Box for the task currently.)
<giovani2> I don't see why you'd need more than one nic, nope
<tonyyarusso> Excellent.  Wasn't sure if such a thing would need to also be acting as the gateway or some such thing.
<giovani2> well, you don't have to make it your gateway
<giovani2> you could, but, running a web cache, and a dns cache don't require that, they just require that your user computer's dns be pointed at it, and it have your box as its http proxy
<tonyyarusso> Any other tricks like that that you can think of to make my internet connection appear faster?
<giovani2> uh
<giovani2> how fast is your internet connection?
<tonyyarusso> 12Mbit DSL, iirc.
<tonyyarusso> Obviously I can't make big downloads faster, but for my everyday stuff every little bit helps.
<giovani2> unless you have extremely high latency, it's unlikely you're going to see much gain
<tonyyarusso> (I'm also partly doing this just to learn how, so I could potentially do similar things on a broader scale later in life.)
<ball> Anyone tried boinc on Ubuntu Server?
<uvirtbot> New bug: #341816 in samba (main) "samba PANIC: failed to set uid" [Undecided,Incomplete] https://launchpad.net/bugs/341816
<kriel> Okay; so. The linux installer asked me if I wanted to raid my system; I set it up / to be raid1. For some reason it didn't do a dpkg-reconfigure mdadm during setup (or at least didn't ask those questions.) I did sudo dpkg-reconfigure mdadm after the fact. Halfway through it gave a bunch of "rm: cannot remove md0-: Read-only file system". Should I be worried?
<J_P> hi all
<J_P> how I change of "Sun Mar 15 01:38:53 EDT 2009" to "Sun Mar 15 01:38:53 BRT 2009" ? I try set in tzselect and set ok to brazil location, but still are EDT.
<Shoopuf> Hello.. Does anyone know how to add the PHP path directory so that any scripts I run can use PHP? I don't know where to add the PATH to
<ball> If you install php, doesn't it go into your path by default?
<Shoopuf> Hmm... Well when I type "php" I get an error "-bash: php: command not found"
<sommer> Shoopuf: you'll want to install the php5-cli package
<Shoopuf> sommer: ok, apt-getting now :)
<Shoopuf> sommer, ball: thanks guys, workin like a charm now... had to install php5-cli
<sommer> welcome
<Shoopuf> i figured i had to edit some text file and add a path somewhere :/
 * ball wasn't much help, but I appreciate the sentiment
<rags> how to add a second NIC to ubuntu...it gets listed in lspci
<llragsll> any one...
<uvirtbot> New bug: #342655 in sysstat (universe) "Confusing debconf message when upgrading" [Undecided,Invalid] https://launchpad.net/bugs/342655
<Sooke1> Hi
<Sooke1> I have mounted a Linux server with POSTFIX, it doesn't have a domain name, just a static public IP. All the emails I send trought it are put into spam folders, I have read something about MX records, etc.. Can anyone please help me?, thanks
<llragsll> s: how to add a second NIC to ubuntu...it gets listed in lspci
<ziroday> Hi, would anyone know why ssh would be laggy? (I mean just typing in text is slow)
<acicula> heu busy link?
<Nafallo> 3G connections do that.
<ziroday> but its just text, surely even the most basic internet can not lag on that?
<acicula> well your letter has to go to and from your server first
<acicula> i think
<ziroday> any idea on how to speed it up, if it were the connection?
<Deeps> latency is latency
<Deeps> ping the server
<Deeps> 200ms+ ping can make ssh appear 'laggy'
<Nafallo> no it isn't. it's screen re-draws.
<ziroday> Deeps: ah right
<Nafallo> :-P
<ziroday> eek 2000ms, time to get a better connection methinks
<Deeps> there you go then
<Deeps> it's taking 2 second for a simple message to reach the server and get back to you
<ziroday> Deeps: thanks!
<Deeps> ssh is an encrypted bytestream, not just plain text ;)
<rags> I am trying to add a second nic to my ubuntu server..it get's listed on lspci..but how to give it an interface name like eth1??
<Deeps> (so that'll add an additional 5ms or so?)
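What Deeps means concretely: the round-trip time puts a floor under every keystroke echo in ssh, and ping's summary line reports it directly. A sketch extracting the average RTT (sample output is fed in below; on a live link run `ping -c 5 your.server` and pipe its output into the awk):

```shell
# ping's summary is "rtt min/avg/max/mdev = a/b/c/d ms";
# splitting on '/' puts the average in field 5.
awk -F'/' '/^rtt/ {print $5, "ms average round trip"}' <<'EOF'
rtt min/avg/max/mdev = 1998.143/2003.521/2010.244/4.903 ms
EOF
```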
<Deeps> rags: ifconfig -a, do you see a new interface there?
<rags> oh ya...it is eth2 though not eth1
<rags> i did only ifconfig previosly tht's y i missed it
<Deeps> ifconfig only shows interfaces that are marked as 'up'
<Deeps> ifconfig -a shows all interfaces
<Deeps> you can change your network interface labels with /etc/udev/rules.d/70-persistent-net.rules
<Deeps> or rather, not with, but in that file
<rags> all right...thx..now all I have to do is add it to /etc/network/interfaces file right??
<Deeps> yep
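The entries in /etc/udev/rules.d/70-persistent-net.rules bind a MAC address to an interface name; editing the NAME= value (then rebooting, or reloading udev) relabels the interface. A sample entry (the MAC and device comment are illustrative):

```
# PCI device 0x10ec:0x8139 (8139too)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:00:00:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
```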
<rags> in the 70-persistent-net.rules( wht a weird name ) there is another device (USB device 0x0bb4:0x0b0c ) which is taking the eth1 name.. any idea wht it is?
<rags> it says in brackets rndis_host
<joaopinto> hi
<Noble> Hi, is it possible to install firestarter on ubuntu server edition without GUI?
<acicula> Noble: yeah, you can forward x11 connections with ssh, it'll still install a lot of gui libraries and such though
<pmatulis> Noble: have you tried ufw?
<uvirtbot> New bug: #343244 in mysql-dfsg-5.0 (main) "package mysql-server-5.0 5.0.67-0ubuntu6 [modified: /var/lib/dpkg/info/mysql-server-5.0.list] failed to install/upgrade: subprocess pre-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/343244
<Noble> No, is that similar to firestarter?
<acicula> is there a log parser of sorts that can parse the firewall messages from the syslog?
<jpds> !ufw | Noble
<ubottu> Sorry, I don't know anything about ufw
<jpds> Arg..
<acicula> heh
<acicula> Noble: it's a commandline front end of sorts for the firewall, just like firestarter does
<pmatulis> ufw comes installed by default on recent releases
<acicula> but without the gui and the crapload of gui dependencies
<pmatulis> it's pretty basic.  depends what you want to achieve
<Noble> Ok, sounds nice. Im looking for something to share my connetcion with other computers
<Noble> I have 2 NICs
<pmatulis> hmm, not sure it can set up NAT
<pmatulis> read up though
<Noble> Ok, I was looking at IPcop, but it seems it cant run all the other servers I want.
<Noble> So its kinda somthing similar im looking for.
<Noble> just on top of ubuntu
<Noble> If I understand this correctly, many firewalls uses iptables to set up everything?
<Noble> So basicly I can use iptables as well with a bit of reading?
<acicula> everything uses iptables
<acicula> well, linux 2.6/2.4 anyway
<Noble> Would it be very optimistic of me to tinker with it?
<Noble> Thanks guys, I found what I needed for now: http://www.howtoforge.com/nat-gateway-iptables-port-forwarding-dns-and-dhcp-setup-ubuntu-8.10-server
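The core of a NAT gateway of the kind that howto covers is only a few rules (a sketch assuming eth0 faces the internet and eth1 the LAN; must run as root, and the rules are lost on reboot unless saved or scripted):

```
# Enable forwarding, then masquerade LAN traffic out the external interface.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```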
<bosco_> never heard of this { Suhosin-Patch Server } does that have something to do with php securing it what exactly does it do???
<sommer> bosco_: http://www.hardened-php.net/suhosin/
<bosco_> sommer, thanks so it is a good thing i have it installed
<sommer> I think so
<bosco_> lol thanks
<bosco_> i would think so as well
<bosco_> just never heard of it dont know much about it
<genii> Does anyone know what sort of format the pine -passfile expects?
<genii> eg: I'm trying to start pine so it auto uses a supplied password
<genii> nvm, I just discovered it uses same as .netrc
<roy_hobbs> Does anyone have any experience with openldap->sasl->gssapi->kerberos
<roy_hobbs> I can't quite get things working.
<zaapiel-mobile> hello
<zaapiel-mobile> how do i get ufw back to default?
<zaapiel-mobile> ive been messing with it all day and probably borked it
<zaapiel-mobile> id like to get it back to the default settings
<sgla1> bind9 question ok here?
<sgla1> question: Bind9 will not perform all required zone transfers between ubuntu 8.04 master and 8.04 slave dns servers.
<sgla1> I see the notify for the reverse zone occur on the master, but it's not received on the slave server.
<sgla1> Manual zone transfers from the master to the slave server (i.e. dig -t axfr my_reverse_zone) work fine.
<bosco_> how do i take a directory and make it so no one can view it in a web browser never done it before ???
<freaky_t> is there any nice new webapp like a newly started cms or smth. else? :D
#ubuntu-server 2010-03-15
<uvirtbot> New bug: #538876 in samba (main) "Upgrade to version 3.5.1 in Lucid" [Undecided,New] https://launchpad.net/bugs/538876
<uvirtbot> New bug: #538902 in munin (main) "munin-node is no longer being started during boot" [Undecided,New] https://launchpad.net/bugs/538902
<Sorell> hey guy I have a question about cloud controllers
<Sorell> I have a controller with 2 NICs
<Sorell> one goes to the net and the other goes to the nodes
<Sorell> I can't seem to get the one going to the nodes to issue IP addresses
<Sorell> and it does not show up in my if config
<Sorell> http://paste.ubuntu.com/395431/
<Sorell> that's my setup
<lifeless> are you forwarding DHCP?
<Sorell> I don't think so
<Sorell> http://paste.ubuntu.com/395432/
<Sorell> that's the whole file
<Sorell> Do I need to forward DHCP
<Sorell> ?
<Sorell> I want the nodes on the inside to not touch the outside would.
<pschulz01> Greetings.. does anyone know if there is a ppa for an updated version of 'resolveconf'. I am experiencing issues relating to bug #448095
<uvirtbot> Launchpad bug 448095 in resolvconf "resolvconf starts after ifupdown, does not pick the dns-nameserver and dns-search lines up from /etc/network/interfaces" [Undecided,Confirmed] https://launchpad.net/bugs/448095
<pschulz01> sorry.. that should 'resolvconf'
<twb> Is that because resolvconf isn't upstartized yet?
<axisys> how do I enable syslogd to receive logs from remote host? i want my linsys router to send logs to my ubuntu server
<axisys> syslogd -r -u syslog should do it .. so modify the init ?
<axisys> got it
<axisys> /etc/default/syslogd with a -r
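axisys's fix, spelled out (this applies to releases using sysklogd; rsyslog-based releases configure remote reception differently):

```
# /etc/default/syslogd -- the -r flag makes syslogd accept UDP log
# messages from remote hosts (e.g. a router), then restart the daemon:
#   sudo /etc/init.d/sysklogd restart
SYSLOGD="-r"
```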
<GheRivero> morning everyone
<Jeeves_> Hi all
<Jeeves_> Does anyone know if there is a nice way to automatically delete iscsi disks in Ubuntu?
<Jeeves_> I can echo 1 > /sys/block/sdx/device/delete and than iscsiadm -m node --rescan, but that's not very nice when you have multiple machines
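Jeeves_'s two steps can at least be wrapped in a loop over the devices to drop (a sketch; which /sys/block entries belong to the iSCSI session is site-specific, so the device list here is a placeholder, and the writes require root):

```shell
# Tell the kernel to forget each SCSI disk, then rescan the iSCSI session.
for dev in sdb sdc sdd; do
    echo 1 > "/sys/block/$dev/device/delete"
done
iscsiadm -m node --rescan
```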
<Noble> Is the apache2.conf secure by default? If not, what changes should I do? I use only PHP(mysql) and HTML.
<twb> Noble: define "secure".
<twb> IMO if you're running *any* PHP or MySQL, you're probably already horribly insecure.
<twb> Which is to say that while, with diligence, you can write secure PHP code, most people don't.
<Noble> About the apache2.conf now..
<Noble> Is there anything I should add or remove from the default file?
<Noble> I have removed serverSignature and servertokens
<twb> Security through obscurity isn't a particularly clever move.
<andol> Noble: One thing I like to do is add an explicit <Directory "/">...</Directory>, including Options None, AllowOverride None, Deny all, etc. That way I will explicity add Directory permissions later.
<andol> Noble: Of course, the included httpd.conf is probably a better place to add your own global configuration.
<Noble> andol: Thanks for the tips.
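andol's deny-by-default baseline as an Apache config fragment (directive names per Apache 2.2; the document root is illustrative):

```
# Lock down the filesystem root, then explicitly re-enable
# only the directories you actually serve.
<Directory />
    Options None
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

<Directory /var/www/>
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
```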
<zul> morning
<_ruben> g'day
<eagles05138785> hey guys im having some issues trying to start x ovr ssh
<_ruben> define "start x over ssh"
<eagles05138785> im using xming to get an x environment running on windows yet i keep getting an error about the user not being in the X group
<eagles05138785> thing is when i look at the groups file in /etc there is no X group hence it wont start
<_ruben> well, ubuntu *server* doesnt have a *desktop*
<persia> There wouldn't be an "X" group even if it did.
<persia> But it certainly doesn't export xdmcp
<eagles05138785> _ruben: i have installed the basic kde packages instead of using the metapackage im slowly building a desktop environment from the ground up
<persia> eagles05138785: Assuming you have xterm available on your server, are you able to run that with X11 forwarding?
<eagles05138785> persia: im sshed into the server
<eagles05138785> as i dont have a 2nd monitor
<persia> OK.  What happens if you run xterm?
<eagles05138785> xterm Xt error: Can't open display: localhost:10.0
<persia> OK.  There's almost certainly something wrong with your xserver (xming).  Go make sure that works, and then try again.
<persia> It may be that your ssh client isn't passing the right credentials to your X server, or similar.
<eagles05138785> could it be a firewall issue
<eagles05138785> windows firewall im talking about here
<persia> Potentially, yes.  Check those logs.
<eagles05138785> will get back to ya time to check the logs
<eagles05138785> persia: nothing is showing up in the logs
<eagles05138785> on windows at least checking logs on server
<persia> I put a file in /etc/cron.d/ and expected entries there to be treated like those in /etc/crontab , but nothing seems to run.  Should this work?  If so, any suggestions on troubleshooting?
<_ruben> persia: iirc, the syntax is the same between those versions
<_ruben> troubleshooting cron is a bitch in my experience though
<persia> Ineed.  As far as I can tell, the jobs just don't get triggered at all.
<_ruben> it sounds familiar, but dont recall any fixes or anything :(
<persia> :/
<eagles05138785> sry to but in but would it help trying to run cron through gdb
<persia> eagles05138785: Not really, because 1) it's not crashing, and 2) it's a system service.
<eagles05138785> ok
<persia> I suppose I could strace over a trigger boundary, but that's also painful in terms of volume.
<smoser> good morning all
<smoser> good morning us/eastern
<zul> what about canada/eastern? ;)
<Ninjix> smoser: good morning
<soren> Wow. Painful morning. I haven't been able to get onto IRC until just now.
<persia> soren: Just pretend you're in Texas, and then you can claim it's early yet :)
<soren> persia: ..but then I'll have to stay up late, too :(
 * persia doesn't have a good silver lining for that one
<hggdh> well, it *is* early here in Texas!
<hggdh> <yawn/>
<smoser> kirkland, ping
<huntsville> persia, execute permission by chance?  ;)
<twb> It's always weird to think of TX as having anything technological (like TI or U. Austin).  In my head the entire state is just a bunch of cowboys riding horses around their ranches
<twb> And those fucking huge American utes
<soren> utes?
<twb> Things that tradesmen drive
<twb> The front half is a sedan and the back half is a flatbed, usually with sides and occasionally with a top
<persia> huntsville: Indeed, that was likely it.  I'll know for sure in a couple hours.
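As huntsville's guess suggests, /etc/cron.d entries fail silently for mundane reasons: the filename must match run-parts' naming convention (letters, digits, underscores, hyphens only, so any dot, e.g. a ".dpkg-old" suffix, gets the file ignored), permissions/ownership must satisfy cron, and each line needs the extra user field that /etc/crontab uses. A sketch of the filename check:

```shell
# List any files in /etc/cron.d whose names cron will silently skip.
ls /etc/cron.d | grep -vE '^[A-Za-z0-9_-]+$' && echo "^ these files are IGNORED by cron"
```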
<uvirtbot> New bug: #536628 in samba4 (universe) "evolution-data-server-2.28 crashed with SIGSEGV in convert_string_talloc_convenience()" [Medium,New] https://launchpad.net/bugs/536628
<twb> http://en.wikipedia.org/wiki/Coup%C3%A9_utility
<persia> twb: Two thngs about TX: lots of space to do stuff, and a fair amount of local silicon available in handy forms.
<twb> I run into math guys from U. Austin all the time
<huntsville> persia, let's hope so.
<Sorell> Is there something special that I have to do to add a region to elasticfox?
<Sorell> I click on the add tab and nothing happens
<zul> persia: i though you were talking about boobs for a minute
<persia> zul: I defer any interpretation of my descriptions of the virtues of TX to the corners of your mind, without needing explicit feedback :)
<kirkland> smoser: pong
<smoser> was hoping to get some time on your cloud today
<twb> "Gytha, is there anything you can't make sound dirty?"
 * Ng hrms at ocfs2-tools putting its startup scripts in rcS. Surely that's way too early?
<twb> Ng: depends what it does
<Ng> twb: well I can imagine that that's ideal for people running ocfs on local storage, but if your storage is remote then it's way too early
<Ng> (at least, that's what I found when I was playing with a SAN accessed via FC)
<twb> Does ubuntu use insserv yet?
<Ng> I would assume not, since we're moving away from stupid init.d scripts to smart upstart configurations :)
<twb> From the lack of pere and the NIH'd upstart, I'm guessing not.
<persia> twb: the insserv package is available, for stuff still in init.d scripts : you may want to check to see how well it works.
<twb> I was asking because if it was the default (as is the case for Debian 6), it ought to automagically be ordered after $remote-fs
<twb> For my own gear, I only run Ubuntu on servers and other people's desktops, so I don't care if it takes an extra five minutes to boot
<jaco> Hey Guys, I want to do the following on my Debian 5 box: set it up with a gsm modem so it'll send me an sms whenever the internet connection drops. Any ideas?
<smoser> kirkland, ^^%
<persia> twb: For the desktop case, I know that it's being aggressively manually profiled (see http://people.canonical.com/~scott/daily-bootcharts/ ), for the server case, insserv might be interesting, but likely only for less-common services.
<twb> I thought bootchart was totally fucked by the upstart transition
<twb> Probably that was fixed two years ago and I didn't notice
<jaco> Hey Guys, I want to do the following on my Debian 5 box: set it up with a gsm modem so it'll send me an sms whenever the internet connection drops. Any ideas?
<persia> I think it was more like 21 months ago, but yeah.
<twb> Or maybe it was just fixed on Ubuntu but not Debian
<twb> That rings a bell
<twb> jaco: that's really a question for #debian (on OFTC).
<persia> #debian works here also
<persia> (different folks, but same purpose)
<twb> Ew, scott isn't providing svgz's for those clever bastards with browsers that support them.
<persia> I suspect rather that bootchart isn't producing them.
<persia> But complain to scott
<twb> bootchart produces a tarball
<twb> There's then java crapware that emits png/svgz
<kirkland> smoser: would you like it freshly installed?
<kirkland> smoser: it's a mess right now
<smoser> fresh is fine, yeah.
<smoser> kirkland, i should probably reinstall my 2 system one too, would you suggest from daily iso ?
<smoser> or do you think i have a chance at upgrade from alpha3
<kirkland> smoser: probably daily, buti i haven't tried today's
<smoser> ok. thats fine.
<kirkland> smoser: installing ...
<wack479> For some reason the sites that run php on my web server stopped working, and i am receiving this error in my syslog http://paste.pocoo.org/show/189879/
<twb> In Lucid, I need casper to be in rcS, but *not* be in rc0.  "update-rc.d casper disable 0" doesn't DTRT.  Is there anything else I can try, short of a simple rm -f?
<twb> dpkg-divert obviously won't work.
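A hedged sketch of the blunt fix twb alludes to, assuming the usual sysv-rc symlink layout; note update-rc.d may recreate the link the next time the casper package is configured:

```shell
# Remove only the rc0 (halt) symlink for casper, leaving the rcS entry
# alone. The glob covers both S- and K-prefixed link names.
sudo rm -f /etc/rc0.d/[SK]??casper
```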
<wack479> anybody home
<twb> wack479: sorry, mysql is boring
<jiboumans> morning
<soren> jiboumans: Moving half way around the world has made you delusional. It's not morning anymore. That was many hours ago.
<eagles05138785> morning hey guys im having some issues getting x working on ubuntu server. when i try to start x i keep getting this error http://pastebin.com/YeRT8v2h anyone have any idea what im doing wrong. i have X forwarded using xming to allow x to be loaded but it gives me the error that i linked before and no desktop loads
<yann2> Hello
<yann2> I am trying to find what software Ubuntu recommends to do unions of directories
<yann2> unionfs? aufs? Other?
<soren> eagles05138785: Try in #ubuntu. X problems are unrelated to ubuntu server.
<yann2> from what I read ubuntu doesn't seem to like any of those :)
<soren> yann2: Depends on the version of ubuntu.
<yann2> lucid
<jiboumans> soren: http://xkcd.com/448/ is applicable here
<eagles05138785> soren: ok
<yann2> I read about a VFS implementation that was supposed to be ready for karmic, but wasnt, not sure if it is now, couldnt find anything about that
<soren> yann2: I forget what the verdict was. There was talk of moving to union mounts, but I think we ended up sticking with aufs. "Think" being the operative word.
<soren> "union mounts" "a VFS implementation"
<soren> Err..
<soren> "union mounts" == "a VFS implementation"
<MTecknology> how can you use a 'find -exec' in a bash loop?
<yann2> do you know where I could find out about this, if it is ready or not?
<twb> http://bugs.debian.org/573189 covers aufs and union mounts.
<soren> MTecknology: What do you want to do, exactly?
<MTecknology> soren: just got it, I needed to use ;; at the end - because the find -exec needs ; at the end
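MTecknology's fix, sketched on the assumption the loop lived inside a case statement: find's literal `;` terminator has to be escaped as `\;`, which then sits next to the case branch's own `;;`.

```shell
# Runnable illustration: the escaped "\;" ends -exec, the ";;" ends the
# case branch. Paths are scratch files created just for the demo.
mkdir -p /tmp/findloop && touch /tmp/findloop/a.log
case demo in
    demo)
        find /tmp/findloop -name '*.log' -exec echo hit {} \;
        ;;
esac
# prints: hit /tmp/findloop/a.log
```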
<yann2> so right now, virtualfs not ready and only solution is aufs which is a dirty hack? is aufs sure enough to be used on critical data?
<wack479> anyone home?
<yann2> I guess I'll wait for lucid+1 :)
<persia> yann2: aufs is the basis of the Ubuntu live environment.
<yann2> persia, I know, but a live environment doesnt really need something ultra stable, plus there have been talks of removing aufs support from ubuntu as well I think
<persia> yann2: I've never encountered issues using it, but I've only ever used it for short-lived stuff (days, at most).
<wack479> for some reason my server has "stopped running php" when i go to my site it just downloads a file that inside says this: http://paste.pocoo.org/show/189881/ <--anyone seen it b4?
<persia> yann2: For what purpose do you want a persistent overlay filesystem?
<yann2> http://ubuntuforums.org/showthread.php?t=1430303
<twb> persia: here, one application is change management -- an easy way to rollback a broken upgrade (as long as normal users are kicked off before you start).
<persia> yann2: You don't need a real overlay for that.  Just use a fuse hack to trap file-not-found and look in the NFS mount.
<persia> twb: That's short-lived, typically.
<twb> Yes.
<twb> Another idea I had for my netbook was to roll a squashfs root filesystem once a month or so, and store changes in the cow.
<yann2> persia, I did think about autofs, if that's what you're referring to. But I need the index to display all folders
<yann2> unions seemed more sensible
<twb> Since it's compressed with LZMA1, it ought to reduce my root filesystem from about 1GiB to about 400MiB.
<twb> But live-helper's still way too flaky for me to trust it with something that complicated -- and in any case I now have btrfs -o compress.
<persia> twb: Something like that was done for the hardy Ubuntu MID release.  It ended up being not as good idea as it first seemed.
<twb> persia: the key difference being that I'm smart enough to be able to re-roll bimonthly
<twb> I guess if you were tracking a stable release, it'd be less of an issue
<persia> yann2: Actually, I wasn't talking about autofs, but rather some 15-line Fuse.pm script you could probably write.
<yann2> persia, would the "project" folders actually display the list of all projects? ie appear as an union?
<persia> twb: OK.  Give it a shot.  Take care for version skew in libraries.  Always re-roll when you update the initramfs.  Mount a separate /boot.
<twb> persia: like I said, it doesn't matter now I migrated from ext2 on 4GiB to btrfs on 64GiB.
<yann2> auto_fs with BROWSE_MODE could do I think, but I would end up with several hundred NFS mounts, a bit scared by that
<twb> The OS is no longer consuming a third of my space
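A rough sketch of the monthly image roll twb describes. The flags assume a squashfs-tools build with LZMA support (stock tools of that era may only offer gzip), and the exclude list is illustrative, not complete:

```shell
# Build a compressed root image, skipping pseudo-filesystems and the
# output file itself; pair it with a cow overlay at boot as discussed.
sudo mksquashfs / /root.sqsh -comp lzma \
    -e proc sys dev tmp var/tmp root.sqsh
```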
<persia> yann2: Sure.  Basically, you'd check if a file was in the fast tree, and if it was, serve it.  If not, you'd pattern-match the path to specify an alternate location, and if it's there, serve it.  Try 3-5 locations before returning file-not-found
<yann2> persia, but that would work if the user knew what folders he needs "projects/THAM38" for example. But if he wants to go to "projects/"  and just see what's in there, I dont understand how it would work
<yann2> else what you re saying seems very close to autofs :)
<persia> yann2: You'd get listings from all the directories, and cat them together.
<yann2> you got any documentation for that?
<yann2> its too bad, unionfs seems to be exactly what I'm after, just it doesnt sound stable
<persia> yann2: e.g. http://podgorny.cz/moin/UnionFsFuse
<persia> yann2: You don't want overlay filesystems: those don't typically store entire files anywhere in particular, which is inherently messy.  You just want concatenation.
<yann2> persia, yes I did try that one out... does it work well?
<yann2> ... maybe :P
<persia> no idea.  the description seemed to match mine above, and it seems to do a merged hierarchy, rather than a lower-level overlay.
<yann2> what would happen if a user created a new folder in that virtual folder?
<persia> http://olfs.sourceforge.net/ appears to be another one.
<persia> If it's not configurable, it ought be.  make it so, or grab Fuse.pm and write some match script.
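A sketch of the unionfs-fuse route persia links to, with placeholder paths; when mounted with cow, the left (RW) branch takes precedence and receives writes:

```shell
sudo apt-get install unionfs-fuse
mkdir -p /srv/projects
# RW branch first, RO NFS branch second; -o cow sends writes to the RW one.
unionfs-fuse -o cow /srv/fast=RW:/mnt/nfs/projects=RO /srv/projects
ls /srv/projects   # shows the merged listing of both trees
```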
<MTecknology> Can I use usermod to remove a user from a group?
<MTecknology> or do I need to edit /etc/group manually for that?
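To answer MTecknology: usermod can only replace the whole supplementary-group list, but deluser and gpasswd remove a single membership directly (user and group names below are placeholders):

```shell
# Debian/Ubuntu helper:
sudo deluser alice admins
# Equivalent via shadow-utils:
sudo gpasswd -d alice admins
# usermod route: re-set the full list, omitting the unwanted group
# (check current membership first with: id -nG alice)
sudo usermod -G sudo,www-data alice
```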
<hggdh> the server install does not install PHP when we task-select LAMP. Is this a space issue (on the CD)?
<wack479> for some reason my server has "stopped running php" when i go to my site it just downloads a file that inside says this :http://paste.pocoo.org/show/189881/ <---anyone seen it b4?
<wack479> anyone?
<sherr> wack479: something changed - did you change it? The problem will be in the server setup somewhere. Either you changed something, or a recent update? No idea myself. That link is just some PHP that should run.
<asmarin> on ubuntu server karmic amd-64 in certain conditions says mysql-server-5.1 and mysql-core-5.1 are old and unnecessary and i can delete with apt-get autoremove....is it a bug?
<echa> hi, i'm trying to install ubuntu-server 9.10 on a network which requires me to change my MAC first
<echa> i thought i could do this from busybox in the installer console, but ifconfig isn't available
<echa> any suggestions on how i can achieve this?
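One hedged option for echa: the installer's BusyBox usually includes the `ip` applet even when `ifconfig` is absent. Interface name and MAC below are examples:

```shell
# From the installer shell (Alt+F2), before network configuration runs:
ip link set eth0 down
ip link set eth0 address 00:16:3e:aa:bb:cc
ip link set eth0 up
```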
<uvirtbot> New bug: #539051 in eucalyptus "eucalyptus ignores VNET_INTERFACE setting when creating volumes" [Undecided,New] https://launchpad.net/bugs/539051
<LinuxAdmin> Hi guys
<LinuxAdmin> I'm trying to install ubuntu-vm-builder but I'm getting problems with apt-get command
<LinuxAdmin> I  get this error:
<LinuxAdmin> It was not possible to get some archives, try to run apt-get update or try with --fix-missing
<LinuxAdmin> I've run apt-get update already
<LinuxAdmin> any idea?
<LinuxAdmin> My ubuntu server is 9.10
<uvirtbot> New bug: #263398 in spamassassin (main) "spamassassin sends some mails to junk" [Undecided,Incomplete] https://launchpad.net/bugs/263398
<LinuxAdmin> can someone help  me?
<NoReflex> hey guys! I'm having some problems with upstart in karmic server 64bit. It won't start postgresql. I tried http://superuser.com/questions/98702/how-to-make-postgresql-start-at-boot-time-in-ubuntu but it didn't work. can I remove upstart? I'm afraid to test it because the machine is at a remote location and if I mess it up I won't be able to connect to it anymore.
<brontosaurusrex>  is webdav tied to actual system users, or are those some apache virtual users?
<brontosaurusrex> or do i have a choice?
<philthno2> Hi, I installed the dektop-Ubuntu but don't want GDM to start. What's the best way to achive this? I find the /etc/gdm.conf but is it ok just to uncomment the "start on" and "stop on" - lines?
<LinuxAdmin> I don't remember to see #ubuntu-server channel so stopped
<LinuxAdmin> no one helping?
<NoReflex> philthno2: try startx and then sudo gdmsetup - that should allow you to configure how ubuntu starts
<philthno2> NoReflex: Thank you for the hint! I tried to get rid of gdm but no alsa seems to have some problems. I'll try my luck
<NoReflex> philthno2: yw
<philthno2> can someone tell me if the server version also contains alsa and that stuff? I mean -- can I install it and play a sound out-of-the-box?
<NoReflex> philthno2: I don't think the server version comes with alsa ....but ubuntu-desktop installs it as a dependency
<vegar_> I'm having a lot of problems installing ubuntu server from usb. The installer runs, but it's having problems with the packages on the usb  stick. Is it possible to force it to only use a network mirror?
<warmexxus> hey everyone its been a long time since I've been on IRC... I thought it would be easier than endlessly searching the net for an answer... very simple... i have a lack of understanding of the overall picture.. what's happening is I am running a basic Ubuntu Server 9.10 (karmic?) and I intended to use PuTTY to remote control the operations. From my windows box I can login in but after about 3 minutes the connection drops
<jMCg> warmexxus: idle time, I suppose...
<jMCg> I've not seen that kind of behaviour in Linux yet, but who knows..
<pmatulis> warmexxus: going through a firewall/router?
<jMCg> warmexxus: in PuTTY you can set a Keep-alive package...
<warmexxus> it's gotta be something very simple...
<warmexxus> a linksys router...
<pmatulis> warmexxus: any filtering on it?
<warmexxus> the two are on the local network..
<warmexxus> i looked but I didn't filter any ports explicitly... in fact I forwarded outside requests to the server... on port 22
<warmexxus> so if anything the ports are open
<pmatulis> warmexxus: try connecting the 2 machines directly and testing.  is that possible?
<warmexxus> im intending to make it headless and keeping it hidden in a closet... keep the baddies away.. haha
<jMCg> warmexxus: In the Connection Category, there's Seconds between keepalives
<pmatulis> warmexxus: you can also run the ssh daemon in the foreground and watch for any suspicious error messages.  or run it as a daemon with higher log level
<warmexxus> not really... I'm not totally new to *nix but it's been a while and I just set it up last week
<warmexxus> right the keep alives
<warmexxus> the service is set to use keep alives and when i change PuTTY to use the keep alives the connection never happens.
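The server-side counterpart of PuTTY's keepalives, as an sshd_config fragment (values are examples; restart ssh after editing):

```shell
# /etc/ssh/sshd_config -- have the server probe idle clients instead of
# relying on the client to send keepalives.
ClientAliveInterval 60   # probe after 60s of silence
ClientAliveCountMax 3    # drop the session after 3 unanswered probes
```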
<warmexxus> let me try again... I'll keep messing with it...
<NoReflex> hello! I'm having some problems with upstart in karmic server 64bit. It won't start postgresql. I tried http://superuser.com/questions/98702/how-to-make-postgresql-start-at-boot-time-in-ubuntu but it didn't work. can I remove upstart? I'm afraid to test it because the machine is at a remote location and if I mess it up I won't be able to connect to it anymore.
<jMCg> warmexxus: if you want (or have) to keep administrating the system, you better familiarize yourself with it... </2cents>
<warmexxus> one thing that I need clarification on is the RSA keys
<warmexxus> yeah are correct... it's more like OJT/hobby
<warmexxus> i've been reading and reading... and reading... just it's getting cumbersome when I can't discover something simple like a timed out connection using PuTTY...
<warmexxus> there are tons of results on google... I've read 50% of them...
<jMCg> Bummer.
<pmatulis> NoReflex: get an strace of your starting method
<warmexxus> I wonder if it's the authentication type / config
<pmatulis> warmexxus: if you're authenticated you're authenticated.  it's something else
<NoReflex> pmatulis: I can start it manually from init.d but it won't start by itself at boot ... there are links in /etc/rc*.d......
<jMCg> warmexxus: Timeouts generally happen for the same reasons... I think pmatulis and moa already named them -- and no, it's NOT about the config.. Unless there is a new pam_fuckupconnection.so I don't know anything about.
<dharrison> hello room
<warmexxus> alright I'll take that and run with it... believe it or not you have helped. :) Thank you kindly
<jMCg> warmexxus: you're very welcome.
<jMCg> Also: 19:15 <+DrBacchus> Where did people pick up the notion that running an internet daemon was something that should be trivial?
<warmexxus> true
<dharrison> hi I have some networking issues with ubuntu. it will not ping certain ip addresses
<NoReflex> dharrison: what do you mean by certain IP addresses?
<dharrison> external ones
<dharrison> we have a monitoring system in place that pings certain ips
<NoReflex> dharrison: well maybe your router / gateway blocks ICMP
<dharrison> but it wont ping a certain ip address
<dharrison> other servers on the network are ok
<jMCg> dharrison: maybe the system doesn't want to be pinged.
<dharrison> lol i know the feeling
<NoReflex> dharrison: try mtr YOUR_IP_ADDRESS
<dharrison> mtr?
<NoReflex> mtr TARGET_IP_ADDRESS
<NoReflex> my traceroute
<dharrison> has come up blank??
<dharrison> works on others
<NoReflex> dharrison: so you have more machines inside your LAN? the other machines can ping certain IP address but your ubuntu server can't?
<dharrison> correct
<NoReflex> dharrison: do you have ufw, iptables set up on ubuntu_server?
<dharrison> ufw is off
<NoReflex> what does sudo iptables-save return?
<dharrison> # Generated by iptables-save v1.4.1.1 on Mon Mar 15 18:25:54 2010
<dharrison> *filter
<dharrison> :INPUT ACCEPT [16058:1400749]
<dharrison> :FORWARD ACCEPT [0:0]
<dharrison> :OUTPUT ACCEPT [16256:1169307]
<dharrison> COMMIT
<NoReflex> dharrison: try to use pastebin next time... so it seems that iptables is set to allow all
<NoReflex> does ubuntu_server have the same gateway as the other machines?
<dharrison> yup
<NoReflex> can you ping the other computers on your LAN from your ubuntu_server machine?
<dharrison> yes no worries at all
<NoReflex> pmatulis: I also tried to enable bootlogd but it didn't help ... I can't seem to figure out why postgresql won't start at boot. I was thinking about removing upstart - that should install other packages to take upstart's place
<pmatulis> NoReflex: i wouldn't do that
<pmatulis> NoReflex: not sure what bootlogd is
<NoReflex> dharrison: do you manage your router / gateway? perhaps it only allows some IP addresses or MAC to reach the internet. Can you "reach the Internet" at all from ubuntu_server: for example curl http://google.de
<dharrison> i can yes no problems
<NoReflex> pmatulis: bootlogd should log boot messages to /var/log/boot - but it does not seem to work with upstart
<pmatulis> NoReflex: well i gave you 2 things to try
<pmatulis> NoReflex: sorry, that was the ssh problem
<NoReflex> pmatulis: np
<NoReflex> I don't think the strace method would help because I can start it manually with /etc/init.d/postgresql-8.4 after I login
<pmatulis> NoReflex: ok
<NoReflex> dharrison: what does traceroute google.de
<NoReflex> give
<pmatulis> NoReflex: a few days ago someone was having the same trouble with mysql
<pmatulis> NoReflex: what LOCALE are you using?  sometimes that gummies things up
<NoReflex> pmatulis: i think my location is setup to Germany ... how can I check?
<pmatulis> NoReflex: hmm, interesting
<NoReflex> pmatulis: locale gives LANG=en_US.UTF-8 - and the rest of the vars are set to en_US.UTF-8 as well, except LC_ALL which is empty
<MTecknology> can I make an entry in my hosts file that's something like    127.0.0.1  *dev* *kalliki* *profarius*
<MTecknology> so any request that has any of those will go back to localhost
<NoReflex> MTecknology: I don't think you can use wildcards in /etc/hosts
<MTecknology> NoReflex: that sucks
<NoReflex> MTecknology: well I don't think it's that hard to create separate lines for each host you need
<pmatulis> NoReflex: i'm sorry, i can't find the resource i was looking for, there was a mysql bug on starting and locales
<pmatulis> NoReflex: might help you with postgresql
<NoReflex> pmatulis: np; thx for trying :)
<vegar_> Tasksel includes the entry "Basic ubuntu server", what does this task install?
<MTecknology> NoReflex: problem is there's a whole lot of them and there's new ones added every day
<NoReflex> MTecknology: can you tell us why do you need to map so many names to your own machine? do you need it for apache virtualhosts?
<vegar_> is there a general way to see which packages a tasksel entry installs?
<MTecknology> NoReflex: nginx
<MTecknology> NoReflex: the sites complain if they can't refer back to themselves
<NoReflex> MTecknology: haven't used nginx ... but I guess you could make a cron job which checks which sites have been set up in nginx and updates the /etc/hosts file accordingly
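NoReflex is right that /etc/hosts has no wildcards. A common workaround (assuming it fits MTecknology's setup) is a local dnsmasq, whose address option catches a domain and everything under it; note this matches name suffixes, not arbitrary substrings like the *dev* glob asked for:

```shell
# /etc/dnsmasq.d/local-dev.conf (drop-in path is an assumption)
# Any name ending in .dev, .kalliki or .profarius resolves to localhost.
address=/dev/127.0.0.1
address=/kalliki/127.0.0.1
address=/profarius/127.0.0.1
```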
<nico__> hello
<nico__> I've just installed 2 PCs with ubuntu enterprise cloud, but I'm unable to make instances running
<nico__> running euca-describe-instances, I can see the instances pending and after they terminate
<uvirtbot> New bug: #522060 in euca2ools "[patch] broken file path processing" [Low,Fix committed] https://launchpad.net/bugs/522060
<nico__> the image is emi-E22310BC (the standard ubuntu 9.10 karmic image)
<nico__> is anyone here running uec properly?
<MTecknology> NoReflex: wouldn't work - nginx only knows where to send traffic - i handle all of that other junk through php
<MTecknology> NoReflex: i'll figure it out though - thanks
<NoReflex> MTecknology: yw and good luck
<hggdh> would bug 442498 be looked at for Lucid?
<uvirtbot> Launchpad bug 442498 in openldap "openldap install bare bones need default DIT separate package" [Wishlist,Triaged] https://launchpad.net/bugs/442498
<aouldr78> hello everyone
<aouldr78> i need help about pinging my computer
<aouldr78> i am not able of reaching my computer through my IP address
<aouldr78> what could be the problem
<nico__> aouldr78: IP changed?
<aouldr78> no it id the same
<aouldr78> it is the same
<aouldr78> i do not have firewalls just a modem that is it
<GhostFreeman> I'm running Ubuntu Server in a VM. I'd like to be able to make the terminal window larger than it is right now, is that possible?
<aouldr78> if you are running Ubuntu in Vbox, all you need is adding the Vbox extras and the terminal will be larger
<GhostFreeman> doesn't that require X?
<GhostFreeman> I'd like to avoid using X unless I really have to
<veebull> I don't know if its the only way, but the only way that I've found thus far to get a bigger console window into a server running in Vbox is to ssh in via PuTTY... and then re-size your PuTTY window.
<GhostFreeman> that certainly would be the fastest way
<nico__> anyone here uses ubuntu uec??
<nico__> my images instances die just after being created...
<veebull> I've managed (finally) to be able to run my VMs completely headless (in Windows) so once they are installed, I just ssh in...
<GhostFreeman> How do you set VBox to run a VM as headless?
<veebull> you have to run it from the command line, and have to download a dos app to allow it to detach from the console once running
<veebull> http://forums.virtualbox.org/viewtopic.php?f=1&t=28901
<jaypur> hi can someone help me how to install phpsys???
<GhostFreeman> thanks veebull
<jaypur> lol got it!
<zul> RoAkSoAx: your apport hook for vsftpd was upload....nevermind
<axisys> i am having weird issue with my ubuntu server.. unless i have a monitor plugged in i dont get a login prompt in serial console
<axisys> http://pastebin.com/R6C0K6SV this is how my grub setup
<axisys> i followed this https://help.ubuntu.com/community/SerialConsoleHowto
<axisys> it is karmic and i am using grub, not grub2
<axisys> i did `sudo start ttyS0 ` .. so serial console has the boot message as well as vga console
<axisys> any idea why i need to have a monitor plug in to see the login prompt in serial console ?
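For comparison with axisys's pastebin, the SerialConsoleHowto's grub1 (menu.lst) settings look roughly like this (baud rate is an example):

```shell
# /boot/grub/menu.lst
serial --unit=0 --speed=115200
terminal --timeout=5 serial console
# ...and on the kernel line, list both consoles so output reaches each:
# console=tty0 console=ttyS0,115200n8
```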
<axisys> another weird thing is also related to display .. if the laptop lid is closed all the way i am not getting any gnome menu bar on the top ..
<axisys> let me post this last question to #ubuntu
<aouldr78> how can i ping my computer that is on LAN?
<aouldr78> i ping 192.168.1.101 but i can not reach it .why?
<aouldr78> i have a DSL connection
<axisys> aouldr78: is it your computer's ip ?
<jasonmchristos> hi
<aouldr78> yes it is but the default gateway is 192.168.1.254
<axisys> ifconfig eth0 shows that ip ?
<axisys> aouldr78: ^
<aouldr78> it does not make sense to me
<aouldr78> yes
<axisys> can you post the ifconfig eth0 output in pastebin ?
<aouldr78> how can i do that
<aouldr78> hi
<aouldr78> are you still there
<axisys> !pastebin
<aouldr78> yes what is that?
<axisys> http://pastebin.com
<axisys> aouldr78: ^
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://tinyurl.com/imagebin | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<axisys> aouldr78: ^
<aouldr78> do i do ping 192.168.1.101?
<axisys> aouldr78: ifconfig eth0 shows it `UP' ?
<aouldr78> yes i am using the internet right now
<aouldr78> i have two connection through LAN but i can not get to ping anyone
<aouldr78> they are working fine but i can not connect between them
<blue-frog> aouldr78, netstat -rn
<aouldr78> the one i an using right now has 192.168.2.2 as IP the other one has 192.168.1.01 as IP
<aouldr78> 192.168.1.101
<aouldr78> both are working fine but i can not connect them
<larsemil> aouldr78: looks like your routes are messed up.
<aouldr78> i am back
<aouldr78> can anyone please ping 192.168.2.2?
<uvirtbot> New bug: #539250 in samba (main) "Windows XP gets access denied when trying to rename folder over smb" [Undecided,New] https://launchpad.net/bugs/539250
<Japje> aouldr78:
<Japje> --- 192.168.2.2 ping statistics ---
<Japje> 5 packets transmitted, 0 received, 100% packet loss, time 3999ms
<Japje> i think its broken
<aouldr78> this is my IP and i am using it right now?
<Japje> that... or i just tried to ping a host on your internal network.
<aouldr78> what could be the problem?
<Japje> which ofcourse is unpingable for everybody not on your network
<lifeless> 192.168.2.2 is a RFC1918 address range, it cannot be pinged from the internet
<aouldr78> so how can you reach this host?
<Japje> only with the IP that you have from your isp.
<Japje> and only if your router is set-up correctly
<aouldr78> how can i know the ip from my ISP
<aouldr78> i thought i just need to do ifconfig to find my ip for my computer
<Japje> http://www.whatismyip.com/
<Japje> well if you have a router, that gets the IP
<aouldr78> i am still not able to ping my computer
<aouldr78> my computer is connected to a modem that connect to ISP telephone
<aouldr78> that is it
<aouldr78> i am trying to remote connect to my computer
<sherr> aouldr78: It is quite hard to do support over IRC sometimes, even with "pastebin" support
<sherr> aouldr78: Your questions are very basic unix networking
<sherr> very basic
<sherr> Your 2 IPs are 192.168.2.2 and 192.168.1.1 ?
<sherr> Then without a router you cannot ping between them - they are on different networks
<sherr> Make your 2 computers on the same network (subnet) e.g.
<sherr> 192.168.2.2 and 192.168.2.3
<sherr> or 192.168.1.2 and 192.168.1.3
<sherr> i.e.
<sherr> 192.168.1.X --> X = different for both systems
<sherr> Do some linux/unix network research/reading, set the PC's on the same network and try a ping.
<sherr> Try the Ubuntu Forums if you get stuck.
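sherr's steps as commands, with example addresses and the interface name assumed to be eth0 (these settings reset on reboot unless persisted in /etc/network/interfaces):

```shell
# On machine A:
sudo ifconfig eth0 192.168.1.2 netmask 255.255.255.0
# On machine B:
sudo ifconfig eth0 192.168.1.3 netmask 255.255.255.0
# Then, from A:
ping -c 3 192.168.1.3
```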
<kirkland> hggdh: i just merged/pushed your testdrive config changes, thanks!
<kirkland> hggdh: i had to make one minor change, initializing hasOptions = false
<kirkland> hggdh: other than that, looks good
<kirkland> hggdh: to your other question, about IMG and ISO cache, i'm not opposed to adding a configuration change for that;  write up the patch, put it in a branch, link against a bug
<kirkland> hggdh: i won't use it, but i'm not opposed to it ;-)
<hggdh> kirkland: thank you. I also just found I had forgotten to init hasOptions
<kirkland> hggdh: i also found that -d and --desktop have different behavior, somehow
<kirkland> hggdh: do you see the bug immediately, before i dig deeper?
<hggdh> kirkland: will check it
<hggdh> kirkland: I am also considering factoring KVM and vBox to allow for more control over VM parameters
<kirkland> hggdh: yeah;  keep in mind that testdrive was originally written for KVM, and vbox/parallels/vmware are bolt-on's after the fact
<NoReflex> hello! I'm having some problems with upstart in karmic server 64bit. It won't start postgresql. I tried http://superuser.com/questions/98702/how-to-make-postgresql-start-at-boot-time-in-ubuntu but it didn't work. can I remove upstart? I'm afraid to test it because the machine is at a remote location and if I mess it up I won't be able to connect to it anymore.
<hggdh> kirkland: heh. I do not know why, but I also had this feeling ;-)
<sherr> NoReflex: upstart is so fundamental a part of Ubuntu now that I would *not* remove it.
<kirkland> hggdh: it greatly increases the complexity of testdrive, but opens doors as to *who* can testdrive Ubuntu
<kirkland> hggdh: so i bolted it on :-)
<sherr> NoReflex: You might have big problems - and you are a distance away from the server ..
<NoReflex> sherr: I don't know what else to try...I can't reinstall since the machine is over 500 Miles away
<hggdh> kirkland: on the -d and --desktop -- what is the difference (so that I will not have to run both to find out)
<sherr> NoReflex: No idea on the problem - but if all else fails, there's always /etc/rc.local
<sherr> Put a "start postgresql" in the /etc/rc.local script
<sherr> Search web for rc.local usage - it's just a script that gets run every boot
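sherr's fallback, sketched; the init script name is taken from NoReflex's own messages, and the start line must come before the final exit 0:

```shell
#!/bin/sh -e
# /etc/rc.local -- executed at the end of each multiuser runlevel.
/etc/init.d/postgresql-8.4 start
exit 0
```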
<NoReflex> sherr: I searched for a way to log what's happening during boot ... the older init package had a bootlogd program that would log to /var/log/boot those messages .... but bootlogd doesn't work with upstart
<kirkland> hggdh: --desktop prompts the "do you want to run usb-creator", -d does not for some reason
<sherr> NoReflex: postgres stuff should be visible in syslog I'd expect
<NoReflex> sherr: rc.local would be a solution but that would not shutdown the postgresql server gracefully when the machine is rebooted I think
<NoReflex> sherr: I can't find it in syslog, also there's nothing about why it isn't started in the pg_log dir
<hggdh> kirkland: weird. Will look at it
<hggdh> kirkland: both -d and --desktop should set opt.desktop to True
<sherr> NoReflex: On reboot/shutdown, postgres will be sent a "kill" and should shut itself down gracefully. I wouldn't worry too much about that.
<kirkland> hggdh: agreed, i don't see the bug yet
<NoReflex> sherr: will try, thxx
<wack479> sherr!!!!!!
<sherr> NoReflex: can you start/stop/restart manually?
<sherr> hey wack479
<hggdh> kirkland: BTW, on why I am doing this: I like the idea of an ad-hoc tester for ISOs using VMs, and I would like to make it simple for
<NoReflex> sherr: yes...start/stop/status/restart all work when I do /etc/init.d/postgresql-8.4 "action"
<hggdh> kirkland: ad-hoc testers to run it (so different configurations for different tests, like no network, no USB, etc)
<wack479> lol hey
<wack479> does mysql question anyone?
<wack479> lol i cant type
<wack479> could someone answer a mysql question?
<NoReflex> sherr: I tried the solution posted here: http://superuser.com/questions/98702/how-to-make-postgresql-start-at-boot-time-in-ubuntu - but it did NOT work
<NoReflex> so I guess the problem comes from upstart...
<sherr> You downgraded upstart?
<NoReflex> sherr:
<NoReflex> yes
<NoReflex> however it didn't work so I installed the latest version
<wack479> does anyone know how i can block the logging of "robot.txt"?
<wack479> in the apache log
<NoReflex> Would it be possible to somehow specify which daemons should use the upstart feature and which should use the "init" system?
<sherr> Maybe check the forums again then. I am not 100% familiar with upstart - but it's a "service" interface - maybe also check "update-rc.d" ...
<sherr> NoReflex: should be a bug logged - check launchpad
<blizzkid> Hi all, anyone would like to do a talk about Landscape at LOAD (a linux event in Belgium)?
<wack479> does anyone know how i can block the logging of "robot.txt" in the apache log?
<blizzkid> jkakar: you here?
<kirkland> hggdh: sounds great
<kirkland> hggdh: you'll have my support ;-)
<mathiaz> hggdh: hy
<mathiaz> hggdh: seems that you've updated the samba test case: http://testcases.qa.ubuntu.com/Install/ServerWhole
<mathiaz> hggdh: the last point in the testing procedure is: net usersidlist
<mathiaz> hggdh: however that doesn't return any user on a default installation
<yeason> I've got a postfix smtp server that I think I've enabled TLS on but every time I try using STARTTLS through I get an error from thunderbird stating it doesn't advertise starttls. I've used telnet to connect and checked it and STARTTLS appears in the list of supported services. Can anyone help me out?
<yeason> I'm a little unclear on if I have it supporting SSL/TLS or if it actually supports STARTTLS. If someone can answer a couple questions I may be able to figure it out
<sherr> yeason: how are you doing a basic check for support?
<hggdh> mathiaz: hum
<mathiaz> hggdh: IIRC samba doesn't import the local users by default
<mathiaz> hggdh: and point 11. doesn't list an UBUNTU domain by default
<mathiaz> hggdh: http://paste.ubuntu.com/395830/
<mathiaz> hggdh: ^^ this is what I get
<hggdh> mathiaz: I will do it again, but I remember running it from a fresh install. I most probably messed up somewhere
<hggdh> mathiaz: let me repeat it and find out what I missed to add to the tests
<yeason> sherr: I've connected to the server on port 25 and issued the ehlo command. I see STARTTLS in the list. The error I'm getting suggests that the server isn't even advertising it, hence my confusion
<mathiaz> hggdh: ok - I'm going to remove 11 and 12 from the test cases for now then
<yeason> sherr: I'm currently trying to look up how to test the actual starttls command =)
<sherr> yeason: If it advertises TLS, then it is there. Thunderbird's misconfigured.
<sherr> This page has some thoughts on testing :
<sherr> http://adomas.org/2006/08/postfix-dovecot/
<sherr> Using perl and MIME::Base64
 * hggdh wonders from which window he copied the samba stuff :-(
<yeason> sherr: I think something else is going on, openssl s_client connect address:25 -showcerts -starttls should test starttls right? cause it is also telling me the server doesn't advertise it
<sherr> yeason: But you said the server *does* advertise it? ehlo?
<sherr> Maybe it is advertised but not configured properly
<yeason> sherr: yea... although I think I might be closer to figuring out the issue... I think it's something with my laptop. I'm running 7 and had to try telnet from another computer. I just tried putty and the starttls line shows up as 250 - BXXXXXXXX
<lunaphyte> yeason: you can use s_client to test smtp/starttls
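For reference, the usual s_client invocation for this test (mail.example.com stands in for the real server; the command quoted earlier is missing `-connect`, which would make s_client fail independently of any firewall issue):

```shell
# SMTP STARTTLS handshake test; s_client issues EHLO/STARTTLS itself:
openssl s_client -connect mail.example.com:25 -starttls smtp -showcerts
# Same test against the submission port lunaphyte recommends below:
openssl s_client -connect mail.example.com:587 -starttls smtp
```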
<lunaphyte> as a side note, you really should not be using port 25 for submission.
<yeason> lunaphyte: what should I be using...?
<lunaphyte> the submission protocol - port 587.
<yeason> lunaphyte: I tried s_client and it was also stating that the server was not advertising it
<yeason> lunaphyte: interesting... I'll have to look into it, I basically followed one of the many guides out there on how to setup postfix
<yeason> and it didn't mention that at all
<lunaphyte> if what you paste above is literally what is appearing, then you can thank the p.o.s cisco equipment somewhere along the way for molesting the smtp conversation.
<lunaphyte> i hate tutorials.
<lunaphyte> they're usually written by people who shouldn't be writing them, it seems.
<lunaphyte> they're ok if you're not a beginner, which is the unfortunate irony.
<lunaphyte> anyway, masked strings like that traditionally mean that there is a cisco firewall in between the client and server with its "fixup" protocol turned on.
<yeason> lunaphyte: I've heard that before... and it was what I suspected this time but doubted it for complicated reasons... gogo weird network
<yeason> lunaphyte: so how hard is it to modify an existing postfix setup to use port 587
<yeason> lunaphyte: I'm a bit confused on how it's supposed to work
<lunaphyte> well, a big part of it is uncommenting the submission entry in master.cf.  there are other steps too though.
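The master.cf stanza in question ships commented out; uncommented it looks roughly like this (the `-o` overrides shown are typical examples, not a complete recipe):

```
# /etc/postfix/master.cf
submission inet n       -       -       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
```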
<yeason> ok... I've got to go soon, do you know of a guide/article that talks about how to set it up?
<lunaphyte> well, we like to see people using the docs provided by the software's author.
<yeason> lol... obviously, but those don't always explain how it works, well thanks for the info I'll look into that
<lunaphyte> other stuff is, well, not really endorsed by me.
<jkakar> blizzkid: Hello.
<blizzkid> jkakar: can I pm you for a sec?
<blizzkid> regarding an event
<jkakar> blizzkid: Sure...?
<eTiger13> how can i do multiple OR arguments in an if statement? ie if [ $member:f =='this' || $member:f == 'that' ]
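eTiger13's question goes unanswered. `$member:f` is not valid shell, but assuming a plain variable, the portable way to OR two string tests is two `[ ]` commands joined with `||`:

```shell
#!/bin/sh
member='that'   # example value

# POSIX-portable OR of two tests; note '=' not '==' inside [ ]:
if [ "$member" = "this" ] || [ "$member" = "that" ]; then
    echo "matched"
fi
```

In bash specifically, `||` is also allowed inside a single `[[ ... ]]`; it is the unquoted `||` inside plain `[ ]` that breaks the original attempt.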
<hggdh> mathiaz: I just installed a brand new server (i386) with SAMBA. When I ran 'sudo net usersidlist' I *do* get output
<jaypur> can you advice me some good apps for ubuntu server???
<jaypur> like monitoring apps..
<ruben23> can i setup my email server and web server to host both email and web site hosting service for a company...would it be ok- xeon dual core 2.4Ghz, 4Gb ram ddr2, 250 GB
<ruben23> nagios
<ruben23> munin
<ruben23> and cacti
<jaypur> hmmm thanks
<jaypur> i'm using phpsys
<ruben23> can i setup my email server and web server to host both email and web site hosting service for a company...would it be ok- xeon dual core 2.4Ghz, 4Gb ram ddr2, 250 GB
<jaypur> ruben23, i dont know much, but... why wouldn't??
<ruben23>  jaypur: just worried having 2 apps, for a single server..
<jaypur> ruben23, what applications my friend?
<ruben23> email apps and web server..
<ruben23> handles my company email and host my company website
<jaypur> ruben23, you can do that...
<jaypur> i just set the apache here... and i'm looking foward to have a ftp, and web mail
<jaypur> :)
<ruben23> ok, im worried when traffic increase..would my server can withstand it..
<jaypur> for that i think you should look about your connection
<jaypur> is it gonna be home made?
<ruben23>  jaypur:nup i got dedicated line for it..
<ruben23> is you apache hosting a website now..?
<jaypur> ruben23, yes!
<jaypur> ruben23, and a counter-strike server
<jaypur> ruben23, http://tuxserver.no-ip.org/
<ruben23> jaypur: how did you setup it..? the webserver is using public IP right..?
<jaypur> ruben23, it's using an ip and i set the no ip to redirect it
<jaypur> ruben23, set the apache
<ruben23> ok that IP is public right..?
<ruben23> do you have a guide how you did it..?
<ruben23> i plan to set it up..can you guide me and help me on it..
<jaypur> man
<jaypur> i think it's the public ip
<jaypur> almost sure
<jaypur> because... if it wouldn't ppl wouldn't connect to it...
<jaypur> pay attention to the port of apache that is 80...
<ruben23> ok
<jaypur> ruben23, aand it's easy man... if i could, why wouldn't you :D
<ruben23> did you have guides on doing it..?
<jaypur> ruben23, you can find some good tutorials about it...
<ruben23> it would be my first time to setup it.
<jaypur> ruben23, tell me what you wanna do... apache,... and host what else
<jaypur> ruben23, i have a website that is on brazilian portuguese
<jaypur> it will install
<jaypur> apache2, php5, mysql
<jaypur> and i can help you to set phpsys, that i just have installed this afternoon :)
<jaypur> and install ssh too :)
<jaypur> for remote control :D
<ruben23>  jaypur:do you have any IM account, can i add you up please.
<jaypur> no i havent but i can create one
<jaypur> do you have any gmail or msn acc.?
<ruben23> i have gtalk-
<jaypur> ruben23, jaypur.linux@gmail.com
<jaypur> :)
<ruben23> its- 3trglobal.it@gmail.com
<ruben23> ill add you now
<jaypur> i'll connect here
<jaypur> i'm not always at gtalk... but when i'm on line i'm always here on irc
<mathiaz> hggdh: what kind of output do you get?
<GhostFreeman> In using byobu, is there any way I can rename what each window is?
<hggdh> mathiaz: a list of users, like http://pastebin.com/fPmhbha4
<mathiaz> hggdh: is this after a package install?
<mathiaz> hggdh: or an install from iso when the samba server task selected?
<mathiaz> hggdh: is the XANGO domain already existing in the network?
<hggdh> mathiaz: no, it was not existing
<hggdh> mathiaz: this is an install from ISO with SAMBA selected in the tasksel
<mathiaz> hggdh: so where did the XANGO name come from?
<hggdh> mathiaz: this is what I gave as a hostname
<hggdh> mathiaz: I am reinstalling, now with amd64
<mathiaz> kirkland: does the option of encrypting directories show up in the desktop install now?
<jayvee> mathiaz: in Ubiquity, there's an option for that
<jayvee> it's the 3rd option after "Log in automatically", and "Require password to log in"
<jayvee> worded something like "Require password to log in and decrypt my home directory"
<mathiaz> jayvee: great - thanks
<jayvee> it's in 9.10
<mathiaz> kirkland: so how about removing the encrypted directory question from the -server installer?
<hggdh> mathiaz: net usersidlist & etc still work fine after an install
<v0lksman> anyone know of a good how to to compile openssh5 on hardy?  I have no choice at this point...scared to miss a lib or something though
<hggdh> mathiaz: are we talking about the same thing (i.e. install from ISO; select SAMBA, check it works)
<mathiaz> hggdh: right - are you logging into the system via the console?
<hggdh> mathiaz: yes, I am
<mathiaz> hggdh: right - I've installed a new system from ISO and saw the same behaviour as yours when I logged through the console
<mathiaz> hggdh: however my first tests were done by logging via ssh into the system
<hggdh> mathiaz: ah. This is interesting
<mathiaz> hggdh: just after typing the password you can see a message (from the pam stack?) that says:Added user XXXX.
<hggdh> mathiaz: yep. There we got added to the directory
<mathiaz> hggdh: I'm going to redo another install and only login via ssh
<hggdh> mathiaz: this is not really kosher, if it does not happen via SSH
<mathiaz> hggdh: right - we may have ran into a bug
<mathiaz> hggdh: well - actually no
<mathiaz> hggdh: it makes sense
<mathiaz> hggdh: I login via ssh using public key
<hggdh> mathiaz: why?
<mathiaz> hggdh: to import the user into the samba database you need to enter the password
<mathiaz> hggdh: which doesn't happen on a public key login
<hggdh> mathiaz: heh. Makes sense. Now, this should be documented somewhere
<hggdh> mathiaz: sheer curiosity: how can you ssh via public key on a brand-new server?
<mathiaz> hggdh: :) - I'm using preseeds to automate my installations
<hggdh> heh
<mathiaz> hggdh: the late_command in my preseeds stick my public key on the installed system
 * hggdh was wondering about the chicken and egg issue
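mathiaz's late_command trick could look roughly like this in a preseed file (the key string and paths are placeholders, not his actual configuration):

```
# d-i preseed fragment: install a public key for root on the target system
d-i preseed/late_command string \
    in-target mkdir -p /root/.ssh ; \
    in-target sh -c 'echo "ssh-rsa AAAA... user@host" >> /root/.ssh/authorized_keys' ; \
    in-target chmod 600 /root/.ssh/authorized_keys
```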
<hggdh> mathiaz: then this is not a problem for the casual tester from the wiki
<mathiaz> hggdh: yeah - I'm going to update the test case then
<hggdh> I can do it, if you are busy
<hggdh> kirkland: I cannot repeat the -d / --desktop issue on testdrive
<mathiaz> hggdh: wiki page updated
<rZr> xmpp please !
<RoAkSoAx> zul, the information of the syslog for the apport hooks should be attached only if the user accepts it, or it should always be attached?
<hggdh> mathiaz: thank you
<Anthony_Whitford> I'm trying to research Ubuntu's market penetration in Financial Services firms.  Can anybody tell me a financial company running Ubuntu server, or point me to any survey/research that demonstrates that it is popular in this vertical?  I have the Ubuntu Server Survey results for 2009 already, and am aware of Equitec.  Anybody else?
<jayvee> v0lksman: why do you want to compile it?
<v0lksman> cause hardy comes with a way old version and package managers claim they won't even backport it
<jayvee> so? it's not like it has any security issues.
<v0lksman> can't chroot
#ubuntu-server 2010-03-16
<v0lksman> https://bugs.launchpad.net/hardy-backports/+bug/286337
<uvirtbot> Launchpad bug 286337 in hardy-backports "Please backport OpenSSH 5.1 to Hardy" [Wishlist,Won't fix]
<jayvee> apt-get build-dep openssh-server
<jayvee> that'll get the build-dependencies right, and you can just ./configure && make && sudo make install
<v0lksman> so I won't need to re-compile openssl?
<v0lksman> also does that compile in support for pam and all that?
<jayvee> no idea
<jayvee> v0lksman: uh, why don't you use this PPA like in the bug comments? https://launchpad.net/~rainct/+archive
<lifeless> if you use debuild, it will do a normal ubuntu build, so yes pam included. however, PPA's ++
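lifeless's debuild route, sketched. This assumes a deb-src line for a release carrying OpenSSH 5.x (e.g. lucid) is configured, and that its build-dependencies are satisfiable on hardy, which is not guaranteed:

```shell
sudo apt-get install build-essential fakeroot devscripts
sudo apt-get build-dep openssh       # hardy's build-deps; newer source may need more
apt-get source openssh               # fetches whatever your deb-src lines point at
cd openssh-*/
debuild -us -uc                      # normal Ubuntu build: PAM etc. included
```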
<v0lksman> cause it's a bit of a security risk?
<jayvee> so is compiling it yourself
<v0lksman> I'd rather blame myself than tell my client I got the software from some joe...
<lifeless> v0lksman: you can check the signatures on the ppa, check the diff against the lucid diff
<jayvee> it's not like the PPA binaries are closed source
<v0lksman> lifeless: hrm...not a bad idea
<v0lksman> the ppa is outdated too though
<jayvee> yeah, that's what you'll be dealing with from here on out
<jayvee> at least hardy's 4.9 is maintained
<v0lksman> until lucid is released
<v0lksman> 4.9?  4.7
<v0lksman> where is 4.9 for hardy?  that'll do...anything more than 4.8
<jayvee> whoops, yeah
<jayvee> 4.7
<v0lksman> damn
<v0lksman> had me hoping there
<jayvee> lifeless: I'm really slack. I haven't looked into that test-driven stuff at all.
<lifeless> jayvee: de nada
<jayvee> I've now got some people using python-iview as an "upstream". :-D
<jayvee> The xbmc and boxee iview plugins re-use python-iview. I honestly didn't think it was *that* reusable. :)
<jayvee> So yeah, the test-driven whatever is all the more important.
<lifeless> \o/
<jayvee> most things I set my mind to, I "just get"
<jayvee> but not unit testing
<jayvee> the things I don't "just get" are unit testing, BGP, Wave Robot API, and OAuth.
<jayvee> everything else I can get my head around in an hour or so.
<jayvee> Oh, and D-Bus. I don't get that either.
<lifeless> BGP is fine.
<lifeless> dbus is NIH DCOM is NIH CORBA
<axisys> still trying to find out why i need to have a monitor plug in to see the login prompt in serial console ?
<lullabud> axisys: that is a very strange problem.  never heard of that one...
<lullabud> i'm curious to know the solution.
<lullabud> what package do i need to install to get ubuntu to auto-register in Active Directory DNS?
<axisys> posted my message in ubuntu server and user mailing list ..
<axisys> looks like exact same problem was posted before
<axisys> https://lists.ubuntu.com/archives/ubuntu-users/2009-April/182662.html
<axisys> but on older version ubuntu
<axisys> this seems to be similar bug in lucid
<axisys> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/529234
<uvirtbot> Launchpad bug 529234 in linux "monitor drops to standby with "no signal" during boot splash on luicd alpha 3" [Undecided,New]
<axisys> looks like i need to disable gdm .. because it is a desktop ..
<axisys> so update-rc.d -f gdm remove should do it?
<MTecknology> axisys: should - i prefer rcconf
<axisys> MTecknology: rcconf ? i have not used it
<MTecknology> !info rcconf
<MTecknology> ubottu: *tap* *tap*
<MTecknology> 20:21 < mtbot> MTecknology: rcconf (source: rcconf): Debian Runlevel configuration tool. In component universe, is optional. Version 2.2 (karmic), package size 22 kB, installed size 132 kB
<ubottu> rcconf (source: rcconf): Debian Runlevel configuration tool. In component universe, is optional. Version 2.2 (karmic), package size 22 kB, installed size 132 kB
<persia> update-rc.d is the solution used by all the maintainer scripts : other stuff also works, but be warned that it's 1) not default and 2) it's essential not to break update-rc.d
<MTecknology> axisys: /pp 18
<MTecknology> axisys: sorry
<axisys> persia: hmm... i guess i will just go with update-rc.d then
<MTecknology> axisys: rcconf is extremely simple. I only use update-rc.d when I need to add a script manually to /etc/init.d
<MTecknology> there's a lot of opinions in here - but usually something to back up the reason.. postgresql vs mysql.. opinions and reasons for picking one over the other?
<Razernok> hi
<Razernok> Anyone here?
<lifeless> no
<Razernok> So much for getting help then
<lifeless> well, its a bit of a silly question
<lifeless> perhaps you could ask your questions
<persia> There's lots of folk here
<persia> !ask
<lifeless> 'don't ask to ask, just ask'
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-)
<Razernok> no point to ask if no one is here to answer.
<persia> Razernok: /names will give you a list of folk.  Also, when lifeless said "no" he meant approximetely the opposite.
<lifeless> Razernok: people don't reply to 'is there anyone here' style questions: folk are generally working and thats a fairly social question.  Its also very disruptive if everyone answers (because you get a lot of noise)
<Razernok> What is the point of the DNS server?
<persia> It translates arbitrary strings into IP addresses.
<persia> Or translates arbitrary strings into other strings (depending)
<cef> a DNS is basically a semi-flat database of mappings of names to ip addresses, names, or informational data about the name, and as such can also provide mapping of ip addresses back to names. without one, you can't get a computer to recognise that www.google.com.au maps to an IP address that is used behind the scenes to make the connection. Think of it a bit like a phone book.
<Razernok> so with it a dynamic ip can have a .com domain?
<persia> cef: "IP addresses back to names" is just a superset of names to infomational data.
<lifeless> Razernok: the DNS Server in Ubuntu can be used to do that but isn't needed for that. They are separate problems.
<lifeless> Razernok: you need the DNS Server if you want to run a DNS server. You only need to run a DNS Server if you need to maintain a DNS zone yourself: small organisations and below generally outsource this to their domain registrar
<cef> persia: sort of.. it's constrained.. you can get a name to an ip, but not necessarily get an ip back to a name. that depends on if it's say a CNAME or if they have bothered to even provide reverse mapping (in-addr.arpa)
<Razernok> my server is on a dynamic ip
<cef> Razernok: there are services that allow you to give a constant name to a dynamic ip, but it does have issues (eg: when the ip changes, etc). places like dyndns.org and the like.
<Razernok> dyndns.org only gives dynamic ips a subdomain
<persia> cef: My point is that when you do the in-addr.arpa dance, you're back into the "names into *" area.  Just a nitpick of mine: there *is no* reverse DNS, just a cooperative convention that mimics it.
<cef> persia: ahh I see what you mean.. yeah true
<Razernok> I guess no .com domain for my server then... :(*
<Razernok> :(
<cef> Razernok: you get a unique hostname, and that you can point at with a different DNS (eg: ISP/DNS Registrar hosted) using a CNAME record. But, it'll still suffer all the dyndns type issues.
<Razernok> i can only get dynamic ips not static ips
<cef> eg: get a DNS Registrar to host iwantafunnyname.com for you, then set up a CNAME record that maps www.iwantafunnyname.com to whaterveritis.dyndns.org
<cef> it's a hack, and you can have issues (eg: you can't use the CNAME'd name for an MX - mail exchange - record, etc etc.)
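cef's setup as a zone-file fragment, using the hypothetical names from his example:

```
; zone file for iwantafunnyname.com, hosted at the registrar
www    IN  CNAME  whateveritis.dyndns.org.
; the zone apex and MX targets must not point at CNAMEs,
; which is the limitation cef mentions
```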
<Razernok> hmm, I hope no-ip.com's software works on linux then
<cef> Razernok: there are a number of things and services like that, so there are options if you do go down that path
<Razernok> hmmm, its a download. I dunno how to transfer it to the server or install it from a file. Unless they gave me a apt-get command I'm clueless on how to install it.
<Razernok> well, then again I could run the no-ip.com software on my windows pc since the public ip is the same
<Razernok> but prefer it on the actually server though
<Razernok> anyone know the apt-get for no-ip.com's software?
<sekyourbox> Having trouble designating an ethernet port for sending DHCP communication using DHCPD3 on ubuntu
<sekyourbox> What needs to be added in DHCPD3.CONF to only set up dhcp on one interface connected to a router?
<Razernok> i guess no one knows then :(
<Razernok> bye then
<persia> apt-cache search dynamic shows several
<persia> apt-cache search no-ip even shows three.
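persia's pointer, spelled out. The package name is taken from what such a search typically returns in universe; verify locally, and first-run configuration is handled by the package's own prompts rather than anything shown here:

```shell
apt-cache search no-ip      # confirm what is actually in the archive
sudo apt-get install no-ip  # dynamic-DNS update client for no-ip.com
```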
<lunaphyte> sekyourbox: see /etc/default/dhcp3-server
<lunaphyte> although i don't quite know what "connected to a router" means.
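The knob in that file is INTERFACES; dhcpd3 then only answers DHCP on the listed interface (eth0 here is an example):

```
# /etc/default/dhcp3-server
INTERFACES="eth0"
```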
<sekyourbox> thx, brb
<sekyourbox> test
<KD7SPO-rod> need help setting up virtual host on ubuntu server 9.10, I can configure on other linux and bsd no problem so I must be missing something. I've copied the default file to www.ezyaspi.com in my sites-available folder and ran a2en  www.ezyaspi.com and restarted apache with sudo /etc/init.d/apache2 restart. But I still get the default page when I browse to www.ezyaspi.com page. DNS is pointed correctly. Any suggestions?
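For the record, the enabling helper is a2ensite (a2en is not a command), and a vhost copied from the default file needs its own ServerName or Apache falls through to the default site; a likely fix:

```shell
# inside sites-available/www.ezyaspi.com, make sure the vhost contains:
#   ServerName www.ezyaspi.com
sudo a2ensite www.ezyaspi.com
sudo /etc/init.d/apache2 reload
```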
<sekyourbox> do you need network manager for ATH0 / wifi connections?
<sekyourbox> How does this work exactly, I have a physical interface WIFI0, and what seems to be some type of virtual interface ATH0.. ath0 is where im picking up my ip address.
<sekyourbox> sorry, wlan0
<jayvee> yeah, they're a bit funny
<jayvee> don't touch the one that doesn't do anything
<jayvee> I have an aironet card that comes up as wifi0 and wmaster0
<jayvee> I don't touch wmaster0
<sekyourbox> I need to remove network manager
<sekyourbox> I have wmaster0 too
<sekyourbox> ath0 has the ip of the wifi network
<sekyourbox> I dont want to hose my internet if I drop netman
<Razernok> how do i open a file using command line?
<jayvee> Razernok: you mean like a text editor?
<jayvee> nano is a good text editor to use if you're not familiar with the command line.
<jayvee> $ nano the_file.txt
<Razernok> hmmm, and it looks like there is no way to add users to the ftp server. weird
<pwnguin> Razernok: maybe it's based on system users?
<jayvee> maybe the ftp server uses your unix users as authentication
<jayvee> what pwnguin said :)
<pwnguin> personally, i avoid ftp
<jayvee> strongly agree
<Razernok> nope
<jayvee> Razernok: install openssh-server, and use an SFTP program to transfer files
<pwnguin> slowly training our interactive media dept that it's bad
<jayvee> most FTP cilents support SFTP these days
<Razernok> i just tried my user and pass
<jayvee> it's not even worth getting FTP going
<jayvee> just use SFTP
<Razernok> need ftp for dreamweaver
<jayvee> eww
<pwnguin> pretty sure dreamweaver supports SCP
<jayvee> that's another thing you'll want to get rid of :P
<pwnguin> http://webmaster.iu.edu/tool_guide_info/dreamweaver_pc.shtml
<Razernok> from the ftp server "Response:	530 This FTP server is anonymous only."
<Razernok> wtf
<pwnguin> heh
<pwnguin> Razernok: you have ssh set up for administration right?
<Razernok> yea
<Razernok> I'm on putty
<pwnguin> i haven't checked but im pretty sure that if you can scp to the box, you can sftp
<Razernok> need ftp so i can use Filezilla
<pwnguin> filezilla should support sftp as well
<jayvee> Razernok: filezilla supports sftp
<pwnguin> #
<pwnguin> # Supports FTP, FTP over SSL/TLS (FTPS) and SSH File Transfer Protocol (SFTP)
<jayvee> like I said, most FTP clients out there support SFTP
<pwnguin> Razernok: if you want to get really crazy, set up WebDAV
<sbeattie> actually, I believe it is possible to configure sshd to not support sftp while allowing scp, but it's not a common configuration.
<pwnguin> probably, but i think openssh-server allows both out of box on ubuntu?
<Razernok> i got it but it won't let me transfer files
<Razernok> open for write: permission denied
<Razernok> :(
<pwnguin> with FTP?
<Razernok> sftp
<pwnguin> where you trying to write to and with what user?
<Razernok> yes
<Razernok> its an admin user
<pwnguin> if it's root
<pwnguin> that's disabled by default
<Razernok> its a folder in root
<pwnguin> where you trying to write to and with what user?
<Razernok> in /Storage
<jayvee> Razernok: ubuntu "admin" users aren't automatically root
<jayvee> they are only "root" when they use sudo
<jayvee> when you use sftp, you just have normal permissions
<jayvee> so you have to "chown razernok:razernok /Storage"
<jayvee> or the equivalent
<Razernok> so how do i upload into root?
<Razernok> huh?
<jayvee> what is the user you are trying to log in as?
<jayvee> s/log in/upload/
<Razernok> my admin user
<jayvee> yeah, but what's his username?
<pwnguin> does it matter?
<Razernok> john
<jayvee> you want to go "chown john /Storage"
<pwnguin> unless
<pwnguin> it's web data
<Razernok> where do i do chown john /Storage
<pwnguin> ubuntu web server doesn't run as root
<jayvee> /Storage doesn't sound like it's being served by the web server to me
<Razernok> its so i have an easy to find folder for my uploads to the server but not to be seen on the web server.
<pwnguin> thats what your homedir is for?
<Razernok> but you then have to type "/home/john"
<Razernok> and i don't want the extra stuff to remember
<pwnguin> depends on the client
 * jayvee bangs head on desk
<pwnguin> many will automatically chdir ~
<jayvee> true
<Razernok> chdir???
<jayvee> and if you use "scp some_file.txt myserver:" it automatically copies it to your home directory
<jayvee> change directory to ~ (where ~ means home directory)
<jayvee> cd and chdir mean change directory
<Razernok> ok you've lost me
<Razernok> I just want to be able to upload to anywhere on the server.
<jayvee> you could enable the root user, but you're really shooting yourself in the foot
<qman__> Razernok, that can't be done without massively destroying the system permission scheme
<jayvee> mmm...it's a very Windows-like attitude
<pwnguin> qman__: you could just use root, which preserves permissions but opens up another fun attack. perhaps theres a way to limit it to scp, but even then, yikes
<qman__> allowing root to log on remotely is something I don't ever consider doing
<jayvee> yeah, rssh can limit it to scp, but with root permissions, you could just delete rssh with scp
<qman__> and when you run as root, everything you create is owned as root, which mucks up permissions anyway
<jayvee> Razernok: I know when I was new to Linux, I wanted the same too, so I understand where you're coming from.
<jayvee> But really, it's shooting yourself in the foot still. :)
<Razernok> so if I'm only allowed to upload to my user's home directory then how am i expected to upload to say the htdocs folder? or custom folders used as storage?
<qman__> yes
<qman__> it's a Very Bad Idea (tm)
<pwnguin> Razernok: you could make yourself the owner of that directory
<Razernok> ya can't then unless you place the folders in the home directory
<pwnguin> you can totally change the ownership of files
<jayvee> exactly
<pwnguin> probably there's a smarter way
<qman__> Razernok, there's also ~/public_html
<sekyourbox> I need to configure /etc/network/interfaces.. Can i configure ath0 (wireless) in here or do I have to use some IW config file???
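Wireless interfaces can indeed be configured in /etc/network/interfaces without NetworkManager; a sketch for a madwifi-style ath0 (SSID and passphrase are placeholders, and the wpa-* options require the wpasupplicant package):

```
# /etc/network/interfaces
auto ath0
iface ath0 inet dhcp
    wpa-ssid  myssid
    wpa-psk   mypassphrase
```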
<jayvee> Razernok: you should "chown" (i.e. "change owner") of your htdocs directory to yourself so you can upload to it
<jayvee> which is why I was talking about chown earlier
<pwnguin> im thinking, mod_userdir and mod_rewrite
<Razernok> i don't see a command line in filezilla
<pwnguin> use ssh
<jayvee> yeah you'll need to do it from putty
<Razernok> oh god
<jayvee> sudo chown john:john /var/www/htdocs
<jayvee> or wherever htdocs is
<qman__> actually
<qman__> you probably want john:www-data
<qman__> otherwise it might break the website
<jayvee> true
<pwnguin> what you can do is use mod_userdir to provide a mapping from www.example.com/~john to /home/john/public_html
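pwnguin's mod_userdir mapping, sketched on a stock Ubuntu apache2 (the module ships disabled; user and host names are the example ones):

```shell
sudo a2enmod userdir            # links mods-available/userdir.{load,conf}
sudo /etc/init.d/apache2 reload
# http://www.example.com/~john/ now serves /home/john/public_html
```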
<Razernok> i have no idea where htdocs is located at
<qman__> well, there is no htdocs by default in ubuntu
<jayvee> well if you haven't touched anything, it's actually /var/www
<qman__> so unless you created it, it doesn't exist
<pwnguin> and then make a public_html dir with group www-data, and set it to be sticky
<jayvee> Razernok: I'm guessing you're reading tutorials for Fedora or Red Hat?
<qman__> and the default website is /var/www
<jayvee> Red Hat uses htdocs by default, but Debian and Ubuntu call it different
<jayvee> don't read Fedora tutorials
<qman__> yeah, that'll really mess you up
<Razernok> I use apache
<qman__> stick to ubuntu docs for ubuntu
<jayvee> yeah
<pwnguin> Razernok: the apache tutorials for Fedora aren't the same as the ones for Ubuntu/Debian
<jayvee> my head definitely explodes when I try and do things the Fedora way in Ubuntu, or vice versa
<qman__> debian docs are secondarily preferred, since most of the packages are similar
<qman__> but definitely use ubuntu centric docs wherever possible
<pwnguin> Razernok: for example, last i checked, apache ran as root on Fedora, while it runs as its own user in Debian
<Razernok> i came from windows not fedora
<jayvee> pwnguin: on CentOS 5.4 at least, it runs as the 'httpd' user
<qman__> debian and ubuntu also have a very different apache configuration setup
<qman__> apache on windows is a disaster waiting to happen
<pwnguin> jayvee: perhaps. my rhel env at work is best described as pathological
<jayvee> Razernok: but you must have been reading a Red Hat-centric tutorial or heard from a Red Hat-using user if you wanted to access a "htdocs" folder
<jayvee> Ubuntu has never had a "htdocs" folder. :)
<pwnguin> i think htdocs is the default apache stuff
<Razernok> no
<Razernok> on apache its called htdocs
<pwnguin> it's probably that way on windows too
<jayvee> yeah, possibly
<pwnguin> Razernok: trust me, apache doesn't care
<jayvee> pwnguin: come to think of it, you might be right
<pwnguin> Razernok: its all in the config files in /etc/apache2/
<jayvee> %PROGRAMFILES%\Apache\htdocs or something
<jayvee> last time I ran Apache on Windows was way back in 2007
<pwnguin> heh
<jayvee> or 2006, actually
<pwnguin> i havent but my boss tried
<jayvee> yeah, 2006
<qman__> the only thing worse than trying to run apache on windows, is trying to run PHP in IIS 5
 * jayvee shudders
<jayvee> I've done that...briefly
<pwnguin> i accidentally recomended we replace our IIS ftp site with webdav
<pwnguin> but it turns out apple is the spawn of satan
<pwnguin> and only supports apache, poorly
<jayvee> you mean apple can only connect to apache webdav servers, and not, say, iis webdav servers?
<pwnguin> correct
<jayvee> ouch
<pwnguin> it demands webdav 2.0
<jayvee> webdav is kind of...meh
<pwnguin> and even then, fucks up the syntax
<jayvee> it was new back in...1998
<pwnguin> webdav is neat; pokes straight through firewalls
<jayvee> Windows 98 had the glorious new "Web Folders" feature in My Computer
<pwnguin> supported out of box by windows, ubuntu and OSX.... IF you run apache
<jayvee> it is easier to secure than sftp if you don't trust your users, I have to admit
<jayvee> rssh is kind of meh as well
<pwnguin> anyways
<pwnguin> ive taken to a simple /var/www/www.pwnguin.net approach, with Virtual Directory in apache config
<jayvee> that's what I do on my linode
<qman__> userdir is a great solution when you want normal users to be able to upload to a web server
<jayvee> dead easy to maintain
<pwnguin> qman__: is it possible to combine userdir and modrewrite?
<qman__> not sure, never tried
<Razernok> so what do i need to do????
<qman__> I've done userdir, and I've done rewrite, but not both at the same time
<qman__> Razernok, if you're the only user, just SSH in, and `sudo chown john:www-data /var/www`
<qman__> then you'll be able to upload files there, and they'll show up in the default apache site
<qman__> assuming your username is john
<pwnguin> protip: set the group sticky bit if you want new dirs to inherit the group
<pwnguin> (you do)
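pwnguin's tip, demonstrated on a scratch directory standing in for /var/www (the bit he calls the "group sticky bit" is formally setgid):

```shell
# New subdirectories of a setgid directory inherit its group.
rm -rf /tmp/wwwdemo
mkdir -p /tmp/wwwdemo
chmod 2775 /tmp/wwwdemo        # the leading 2 is the setgid bit
mkdir /tmp/wwwdemo/uploads
stat -c '%A %G' /tmp/wwwdemo /tmp/wwwdemo/uploads
# the parent's mode shows "rws" in the group triplet, and both
# directories report the same group
```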
<Razernok> and what about normal folders for storage?
<qman__> just use your home folder, or a folder within it
<pwnguin> i'd just use my homedir, because thats what should show up if you dont' type in anything for a directory
<qman__> a tilde, ~, magically refers to your home folder
<qman__> so you don't have to type /home/john every time
<Razernok> i want to avoid typing long paths like /home/john/something/fgse.txt
<qman__> use tab completion
<pwnguin> Razernok: in your HTML?
<qman__> also, that's not a long path
<Razernok> so i need to place it in the root
<qman__> you do not want to place files in the root
<Razernok> like /Storage
<qman__> that's also a Very Bad Idea
<qman__> to clarify
<Razernok> alot faster to type /Storage then /home/john/Storage/file.txt
<qman__> if your user is john, '~/Storage' means the same thing as '/home/john/Storage'
<jayvee> yes
<jayvee> 100% agree
<jayvee> home folders are there for a reason
<qman__> you can also type that path very quickly
<jayvee> I use the ~ shortcut all the time
<qman__> keys pressed, in order: / h TAB j TAB shift+S TAB
<jayvee> I probably haven't typed /home/jeremy for years
<pwnguin> well
<pwnguin> my bash autocomplete isn't network aware
<qman__> mine isn't either, but I have a /home/ryan on all my machines
<Razernok> damnit, can't find a remove folder command
<Razernok> wtf
<qman__> the new version of openssh that just came out a few days ago adds tab completion to the built in sftp client
<pwnguin> rm
<qman__> I use scp the most, though
<jayvee> Razernok: use "rmdir" if the folder is empty
<Razernok> this is ticking me off
<pwnguin> mostly i just use gnome's remote scp mount tool
<qman__> Razernok, rm -r if you want to remove a directory and all files within
<jayvee> or "rm -r" if you want to remove all files inside it but BE CAREFUL with that command - you can do a lot of damage
<Razernok> rmdir: failed to remove `/Storage': Permission denied
<jayvee> sudo rmdir
<pwnguin> ls -l /Storage
<pwnguin> well, maybe that lists out the dir contents, i just want to know it's ownership and permissions =/
<jmarsden> pwnguin: ls -ld /Storage
<pwnguin> ^ Razernok that one
<uvirtbot> pwnguin: Error: "Razernok" is not a valid command.
<pwnguin> wth
<Razernok> didn't work
<Razernok> still won't let me delete it
<qman__> yeah, ^ is apparently a bot command
<jayvee> sudo rm -r /Storage
<qman__> Razernok, this must be done from SSH or local command line
<jayvee> it will delete everything inside /Storage, if that's what you want to do
<qman__> not from inside an SFTP connection
<pwnguin> ls just lists the files & permissions. its how you tell what the permissions currently are
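The inspect-then-remove sequence described above, demonstrated on a scratch directory (for the root-owned /Storage you would prefix each command with sudo):

```shell
d=$(mktemp -d)     # stand-in for /Storage
ls -ld "$d"        # -d: show the directory entry itself (owner, mode), not its contents
rmdir "$d"         # succeeds only when the directory is empty; rm -r removes contents too
```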
 * pwnguin is surprised
<pwnguin> i upgraded my linode to 10.04 and it still works
<qman__> ha
<qman__> jumping the gun a bit, eh
<Razernok> there
<pwnguin> well
<lifeless> heh, been running lucid since jan
<pwnguin> i split my disk in two
<Razernok> hmmmm, how do you unpack webmin_1.510_all.deb on windows?
<pwnguin> one is the in case of emergency undo snapshot
<pwnguin> Razernok: 7zip can probably do it, however...
<Razernok> so i can place it in the preuploading folder for dreamweaver
<qman__> Razernok, webmin is not supported on ubuntu
<Razernok> i use winrar
<lifeless> Razernok: you don't unpack debs on windows
<pwnguin> Razernok: if you really want webmin, just upload the deb directl
<pwnguin> directly
<Razernok> Debian package suitable for Debian, Ubuntu or other derived Linux
<Razernok> http://prdownloads.sourceforge.net/webadmin/webmin_1.510_all.deb 14M
<Razernok> says on webmin's site
<qman__> !webmin
<ubottu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system. See !ebox instead.
<Razernok> wtf, then what do i use????
<pwnguin> !ebox
<ubottu> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/eBox
<Razernok> same as webmin?
<Razernok> this is gonna be a web hosting site
<pwnguin> hahahahh
<qman__> you have a long way to go before you host people
<pwnguin> well, i guess you mean one for personal use?
<qman__> you should not be using webmin or ebox for a web hosting site, either
<qman__> you want something more like cpanel I'm guessing
<qman__> webmin and ebox are aimed at general server administration, not web hosts
<Razernok> I'm planning to host only about 5 trusted clients
<qman__> and anyone hosting others should know his way around the command line, otherwise when something breaks you're going to be in for a world of hurt
 * pwnguin is barely qualified to host his own stuff, let alone other people's
<pwnguin> is there a file or command to just identify the ubuntu release you're running?
<qman__> pwnguin, cat /etc/issue
<qman__> I think you can also run 'lsb-release'
<pwnguin> ah, i always go for something specific like debian_version
<pwnguin> thanks
<jayvee> cat /etc/lsb/lsb_release
<qman__> ah, that's what it is
<jayvee> cat /etc/lsb-release, sorry
<lifeless> lsb_release -a
<lifeless> the interface is the command, not the file AIUI
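A minimal sketch combining the options mentioned above; `lsb_release` needs the lsb-release package installed, so the sketch falls back to the files when the command is absent:

```shell
# Try the supported interface first, then the files behind it.
release_info=$( lsb_release -ds 2>/dev/null \
    || grep DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null \
    || grep PRETTY_NAME /etc/os-release 2>/dev/null \
    || head -n 1 /etc/issue )
echo "$release_info"
```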
<Razernok> I have a friend who will be my site testing partner
<Razernok> weird, sudo apt-get install "^ebox-.*" won't install ebox
<Razernok> wtf
<Razernok> the site said it would
<qman__> Razernok, apt-cache search ebox
<Razernok> wtf
<Razernok> ebox doesn't do anything i need
<pwnguin> firewalls, vpn, network config, cert authority
<Razernok> I wasted all that time to install it and its not at all like webmin
<pwnguin> you wasted all of 20 minutes, if that
<Razernok> webmin is for managing accounts and stuff. aka a web host
<Razernok> ebox does none of that
<Razernok> and i don't know how to remove it
<pwnguin> sudo apt-get remove ebox
<pwnguin> however
<pwnguin> i see an ebox-usersandgroups
<Razernok> so how do i install webmin then?
<Razernok> i just installed ebox and it has nothing webmin does
<Razernok> so how do i install webmin?
<pwnguin> ebox is modularized; does it install any extra modules by default?
<pwnguin> nope
<Razernok> A web-based interface for system administration for Unix. Setup user accounts, DNS, file sharing and the program is freeware.
<Razernok> thats webmin
<Razernok> eBox Platform is an open source SMB network server that acts as a network gateway, a unified threat manager, a network infrastructure manager, ...
<Razernok> two different things
<pwnguin> http://www.ebox-platform.com/
<pwnguin> i think you have to install extra modules?
<Razernok> So i need to know how to install webmin_1.510_all.deb
<pwnguin> ie, apt-get install ebox-usersandgroups ebox-ca ebox-dns
<pwnguin> personally, i dont use ebox or webmin
<pwnguin> just ssh and emacs
<Razernok> I need to webmin so i can manage clients and hosting plans and such
<pwnguin> however, if you have an ubuntu .deb, just upload it and use sudo dpkg -i wtfever.deb
<pwnguin> it sounds like people think it wont work
<pwnguin> i didnt know webmin supported hosting plans
<twb> I think what he's really asking for is a free version of cpanel.
<pwnguin> good luck with that
<twb> Yeah
<Razernok> thats what webmin said it does
<Razernok> allowing you to manage the clients and plans
<twb> Webmin is made by people that couldn't find their arse with both hands
<pwnguin> Razernok: i cant find any webpage describing webmin with that phrase
<Razernok> thats what i found
<Razernok> that its suppose to be for managing a hosting site. (hosting plans, accounts, etc.)
<pwnguin> its supposed to be for administrating a server
<pwnguin> there's a virtualmin plugin, but hell if i know if the GPL version works
<Razernok> this server is gonna be a hosting site. so thats why i need webmin for the (hosting plans, accounts, etc. management)
<pwnguin> have you ever actually used webmin?
<pwnguin> looks like their demo site is down
<Razernok> no cause i can't figure out how to install it
<pwnguin> http://www.webmin.com/demo.html look at screenshots
<twb> Can we just agree that webmin is offtopic here and move it to -overflow or whatever?
<pwnguin> how bout we point Razernok at #webmin and never speak of the matter again
<Razernok> no idea what to do now :(
<pwnguin> apparently the webmin channel is on undernet? are they still around?
<pwnguin> Razernok: try joining #virtualmin; maybe they can help you out
<FireCrotch> You don't want to use Webmin on Ubuntu. Trust me.
 * pwnguin is off to go watch House or something
<FireCrotch> actually you don't want to use it on any debian-based distro
<pwnguin> But it says 'Supports Ubuntu Linux!'
<twb> pwnguin: support is a directed relation.
<pwnguin> i think you're trying to say we don't support it, but i never claimed otherwise. im trying to drop the subject, really
<twb> Fine by me
<Razernok> i guess there is no way to do hosting plans, user accounts, etc.
<pwnguin> there are tools that do that for a price. obviously we don't support it, but if you're paying for it the vendor should
<Razernok> need a free one
<pwnguin> seriously now. house time
<FireCrotch> Razernok: There's one called VHCS that I've looked at in the past. Never used it so I can't say  how good it is, but I've considered using it
<FireCrotch> They have a live demo that you can try out though
<jayvee> I thought ISPConfig was like a next-generation VHCS
<FireCrotch> ah ISPConfig. that's the other one that I couldn't think of the name of
<twb> Grr!
<twb> Lucid's coreutils doesn't have timeout
<au> ajmitch: your linode just stop replying? i had to reboot mine just then
<twb> OK, it appears to be in a separate "timeout" package there, for some reason.
<NoReflex>  hello! I'm having some problems with upstart in karmic server 64bit. It won't start postgresql. I tried http://superuser.com/questions/98702/how-to-make-postgresql-start-at-boot-time-in-ubuntu but it didn't work. can I remove upstart? I'm afraid to test it because the machine is at a remote location and if I mess it up I won't be able to connect to it anymore
<NoReflex> I can start postgresql using /etc/init.d/postgresql-8.4 manually after login. There are links in /etc/rc*.d to postgresql init script but for some reason it isn't started automatically after reboot
<lifeless> you cannot remove upstart
<lifeless> its how the server is booted
<NoReflex> lifeless: i thought that removing upstart would install the old init system
<NoReflex> I also searched for a logging possibility because I don't know why it won't start....and it's hard to "debug" a problem when you don't have the error message
<FireCrotch> NoReflex: there should be an error somewhere... /var/log/syslog, the postgres log file, somewhere
<NoReflex> FireCrotch: there isn't...I searched messages, dmesg, syslog, postgresql log, the entire log dir - I believe the log should be created by upstart - but upstart does not work with bootlogd
<lifeless> usual thing here is it not being enabled in /etc/default/postresql or some such
<NoReflex> lifeless: the error log is enabled in /etc/postgresql/8.4/main/postgresql.conf...and I can find there the error messages that occured after I started the application manually...but there's nothing about the "automatic" failed startup
<sherr> NoReflex: Is there a bug logged about this? Sounds like there should be. This situation would not be ideal for Lucid.
<NoReflex> sherr: I couldn't find a bug related to this fact...I did however find a bug regarding the inability of upstart to work with a boot logger
<twb> Sigh.  acpid is broken in lucid atm
 * persia points at all the lovely hardware that doesn't use ACPI :)
<twb> I only want to use it to shut down machines that aren't responding to ssh or PS/2
<twb> Usually because there was no PS/2 keyboard plugged in when they booted
<persia> Do they have USB ports?  That's always a convenient way to hotplug /dev/kbd
<uvirtbot> New bug: #276472 in samba (main) "cp -p on CIFS mount does not preserve permissions and returns a permission denied error" [Medium,In progress] https://launchpad.net/bugs/276472
<twb> In this case it probably does, but that's beside the point.
<twb> If acpid provides support for ONE event, the power button, that event handler shouldn't assume the not-depended-upon acpi-support package is installed.
<persia> Oh, good bug.  File & fix :)
<twb> Yeah, so now I have to install firefox into a vm, because apport-cli doesn't honour BROWSER=w3m
 * twb grumbles
<persia> You already filed that bug?
<persia> That should be *easy* to fix.
<twb> I'm assuming it doesn't because it would inconvenience the GUI-only people
<twb> Actually, probably s/BROWSER=w3m/sensible-browser/, because this is a VM, so my dotfiles aren't present.
<twb> OK, it does honour the x-www-browser alternative, so if I'm running X on my server, any graphical browser would work.
<persia> No.  Any browser should work.
<persia> It tries x-www-browser *before* www-browser
<twb> Hm.
<twb> When I tried in pbuilder --login, it simply failed to use w3m.
<persia> So for me, www-browser is w3m and x-www-browser is epiphany-browser (on my laptop)
<persia> Was w3m installed there?
<twb> Oh, I bet I forgot to install w3m, forgetting it was priority: standard, not essential: yes.
<persia> That would do it.  Most build environments are type=buildd which is very minimal indeed, and doesn't include such fripperies as a browser.
<twb> The other exciting problem I'm gonna have now is that Xorg appears to disable the pointer emulation key (shift + num lock), so I gotta shave that yak before I can copy the transcript into the browser
<persia> twb: A clean yak is a beautiful yak :)
<twb> No, it's even weirder than that.
<twb> Mouse emulation is working, it's just the pointer's location doesn't change onscreen until I redraw the window under it.
<twb> OK, screen env BROWSER=w3m apport-bug acpid, and I can copy the typescript into the browser that way.
<twb> #539432
<twb> ubottu: malone bug #539432
<uvirtbot> Launchpad bug 539432 in acpid "powerbtn.sh assumes acpi-support is installed." [Undecided,New] https://launchpad.net/bugs/539432
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<twb> "PATH=(custom, no user)"... sounds like an apport fuckup
<persia> !ohmy
<ubottu> Please remember that all Ubuntu IRC channels share the same attitude of providing friendly and polite interaction with all users of all ages and cultures. Basically, this means no foul language and no abuse towards others.
<twb> Sorry.
<persia> But yes, that does indeed.
<twb> I was logged in as ubuntu (uid 999), as created by casper.
<GeorgeJ> Hello, is it possible to create a lvm from a software mirror without having to erase all data? If so, where could I find a tutorial?
<GeorgeJ> I have no lvm groups set up, so I'm not trying to extend one, but create a group.
<persia> GeorgeJ: I've never found a way short of having sufficient space and copying from one place to another.
<twb> Now I'm getting timeout errors from the launchpad "edge server" telling me I'm part of the "launchpad beta team", so I'm not going to bother reporting that apport bug
<twb> GeorgeJ: what is a "software mirror"?
<GeorgeJ> Well, I'm running ubuntu server and I realy don't know which channel this goes to.
<GeorgeJ> Basicaly I have two hard drives which are mirrored using software raid.
<persia> twb: Note that due to bug #538097, there might be issues anyway.
<uvirtbot> Launchpad bug 538097 in apport "+storeblob fails with "500 Internal server error" on production (works on edge)" [High,Invalid] https://launchpad.net/bugs/538097
<twb> persia: I got it after punching in the Summary line on the first page it takes me to
<twb> persia: I would normally blame my end, because it's being used to implement/test network failures, but if that was the case I wouldn't expect to even get to the edge server.
<persia> Hard to say, really.
<jayvee> GeorgeJ: this channel is fine for raid questions
<jayvee> although I'm not a raid person
<a_ok> I'm getting tired ssh and her known host problems. For some reason all my servers have a new rsa fingerprint. Is there a way i can reset this known host nonsense without disabling ssh?
<GeorgeJ> Well, I'm not having any issues with my raid setup. I'm trying to create an iSCSI target, which requires me to create a LVM group.
<GeorgeJ> And when I want to create a LVM group, I'm instructed that all my data will be erased, and I was looking to go around this.
<twb> OK, now I have an even more exciting behaviour.
<twb> On the lucid host, this loop stops after an unpredictable number of iterations:
<twb> while ip route show dev eth0 | grep -q .; do printf .; done
<twb> If I take out either side of the pipeline, the problem goes away and the loop runs forever
<twb> From just running "ip route show dev eth0" in a loop, it certainly seems that nothing is changing...
<twb> OK, and it's ip, not grep, that's failing, because I can't reproduce it without set -o pipefail
<twb> Sod it, I'll turn pipefail off and this error detection script can just be flaky.
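A minimal bash sketch of the effect twb describes: with pipefail, a pipeline's status is the last non-zero exit in the pipe, so a flaky left-hand command breaks the while condition even when grep matched.

```shell
set -o pipefail                        # bash/ksh option, not plain sh
{ echo route; false; } | grep -q route
with_pipefail=$?                       # 1: the producer's failure surfaces
set +o pipefail
{ echo route; false; } | grep -q route
without_pipefail=$?                    # 0: only grep's (successful) status counts
echo "pipefail on: $with_pipefail, off: $without_pipefail"
```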
<nenne> does apt-get upgrade have a function to ensure no upgrades that need reboot will be installed?
<acalvo> hi!
<acalvo> in order to deploy a HA Cluster, is it really necessary to have separated service servers and load-balancer servers?
<twb> nenne: it does not.
<nenne> twb: ok, thanks
<twb> nenne: *usually*, the only thing that would require a reboot would be kernel packages.
<nenne> twb: aah yes. would a "safe upgrade" skip thoose?
<twb> "aptitude safe-upgrade" should skip any paths that require removing or installing new packages.
<philthno2> nenne: for most packages that recommend reboot a simple restart of the graphical interface or respective servce would be enough
<twb> nenne: but even if you upgrade the kernel, the old kernel will continue to work indefinitely.
<twb> nenne: packages name linux-image-* (i.e. kernels) are NEVER automatically removed in the default configuration.
<nenne> twb: Aaah true, thing is im gonna be upgrading a server with some sensitive services and it would be nice to know what upgrades will need a reboot on forehand. To warn users. But i guess i can allways upgrade and then do the reboot 1 week later(with planned downtime) ?
<twb> nenne: what I normally do is "aptitude -s full-upgrade", and look at what will happen (-s for simulate).
<twb> nenne: then I just know, from experience, what is likely to need an update, and I report those to my customer
<nenne> twb: aah i see. that seems like the best way of doing it :)
<twb> Actually I have a cron job do the -s run and email me the result
<nenne> Aah nice
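A sketch of the cron job twb mentions; the path, schedule, and address are illustrative, and `mail` assumes a working local MTA:

```
# /etc/cron.d/upgrade-report -- illustrative fragment, not a stock file.
# Nightly: refresh package lists, then mail the *simulated* (-s) upgrade
# plan so kernel updates (reboot candidates) can be spotted in advance.
0 6 * * * root apt-get -qq update && aptitude -s -y full-upgrade 2>&1 | mail -s "pending upgrades on $(hostname)" ADMIN@EXAMPLE.COM
```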
<uvirtbot> New bug: #539454 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.37-1ubuntu5 failed to install/upgrade: il sottoprocesso nuovo script pre-removal ha restituito lo stato di errore 1" [Undecided,New] https://launchpad.net/bugs/539454
<uvirtbot> New bug: #227837 in libvirt (main) "[Hardy] overzealous masquerading affects vm to vm traffic" [High,Fix committed] https://launchpad.net/bugs/227837
<RoyK> hi all. the package linux-image-virtual, is that for a virtual guest?
<uvirtbot> New bug: #539469 in squid (main) "does not terminate at computer shutdown" [Undecided,New] https://launchpad.net/bugs/539469
<twb> RoyK: yes
<RoyK> k
<zul> morning
<uvirtbot> New bug: #357067 in beautifulsoup "javascript parsing error" [Undecided,Confirmed] https://launchpad.net/bugs/357067
<blinkiz> Hi. I have a iscsitarget LUN up and active. I have now added more LUNs. It seems like I have to do /etc/init.d/iscsitarget restart. No reload function. Will this create any disturbance in the already existing active LUN? It can NOT go down.
<zul> timmah!
 * zul does his gollum php impression
<jalons> Thoughts on how to return the TLS version exim is using?
<persia> jalons: e.g. `apt-cache show exim4-daemon-heavy | grep ^Depends` : check the gnutls library version dependency.
<jalons> persia: thanks, but I was looking for whether it's TLS 1.2 (per RFC 5246) or 1.1 (RFC 4346) that an exim install is running (built from source to use libssl over gnutls, due to the smaller entropy pool libssl requires)
<jalons> I built it about a year ago and recall nothing other than that - am I stuck stracing and looking for the calls to the libraries?
<persia> I'd guess you could determine the protocol version from the library version, but I could be entirely wrong.
<persia> If you built it yourself without a packaging framework, perhaps so.
<persia> Although I'd recommend using nm rather than stracing to determine how it's linked.
<blinkiz> How do I add more iscsi targets (ietd.conf) without having to restart iscsitarget service?
<stanman246> guys.. My datacenter is changing it's default gateways, how can i be sure my server will be accessible?
<ball> stanman246: ask your sysop for his mobile number and blackberry email address. ;-)
<stanman246> lol
<stanman246> thought maybe it 's possible to use a 'backup' default gateway
<stanman246> is that possible?
<zul> to ask your sysop for his mobile and blackberry email address? yes probably
<persia> stanman246: Yes, but you don't really want to do that.  if you want that kind of function, look at a dynamic routing solution.
<persia> (and for which is the right choice : ask your network admin)
<jalons> I'm trying to dump all remote ciphers accepted by this mail server - anybody know the magic combination of openssl s_client -connect <host> -starttls smtp, and getting -cipher into the mix?
<jalons> i know server decides what, but is there a way to trace what it's contemplating?
<jalons> other than guessing what the lowest cipher suite is, and watching it step through them
<jalons> nevermind, it appears I just needed to keep googling to find the answer: http://rt.openssl.org/Ticket/Display.html?id=1162&user=guest&pass=guest
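Roughly the approach from that ticket, sketched with a placeholder HOST; the handshake loop is left commented out because it needs network access, while the candidate-list half runs anywhere openssl is installed:

```shell
# Build the candidate cipher list locally, then offer each one to the
# server and record which handshakes succeed.
ciphers=$(openssl ciphers 'ALL:eNULL' | tr ':' '\n')
echo "$ciphers" | head -n 3            # a peek at the candidates
# for c in $ciphers; do
#     echo quit | openssl s_client -connect HOST:25 -starttls smtp \
#         -cipher "$c" >/dev/null 2>&1 && echo "accepted: $c"
# done
```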
<invisime> I am formatting a second hard drive on my home server. should I format it as ext3 or ext4?
<ball> invisime: yes.
<invisime> ball: should I format it as ext3 xor ext4? :P
 * ball grins
<ball> I suppose that would still be a "yes", assuming a single partition.
<invisime> damn clumsy english language. *must learn lojban*
<invisime> I'm basically looking for a comparison of the two or best practices. could you supply the information or a pointer to the information?
<ball> invisime: I don't use Linux much, so I'm not the best person to ask.  I think I ran ext4 experimentally on my last Linux box and it seemed to work.
<ball> invisime: but I'd want more sensible testing before I used it for anything important.
<invisime> I see.
<jalons> invisime: for all intents and purposes ext4 is just ext3 + extents
<invisime> jalons: k. thanks for the clarification.
 * invisime types 'sudo mkfs -t ext4 /dev/sdb1' and hits enter.
<acalvo> in order to deploy a HA Cluster, is it really necessary to have separated service servers and load-balancer servers?
<jalons> acalvo: no, but it works better.
<ricdanger> hi there
<ricdanger> I just set up ubuntu server 9.10 with a separate /boot and LVM
<ricdanger> but grub will now create a wrong "set root=" entry
<uvirtbot> New bug: #539663 in php5 (main) "FFE for php 5.3.2" [High,New] https://launchpad.net/bugs/539663
<nimrod10`> ricdanger, is /boot outside of LVM ?
<ricdanger> nimrod10: yes
<ricdanger> update-grub will set my "/" volume as root
<ricdanger> my / is a LVM volume btw
<jalons> what's it generating that's incorrect?
<maxx_h> Hi. I have a new Intel MB with built-in Hardware-VNC-overIP which is used for a server. (no monitor connected)
<maxx_h> Is there a way to send the video output even if no monitor is connected, so i can fetch it with vnc?
<persia> maxx_h: You essentially need video drivers for your hardware-VNC solution.  It oughtn't be any different from any other framebuffer, except for the drivers.
<jalons> when you power up the motherboard, the baseboard management controller will likely request a DHCP address for the management port: watch your dhcp server logs and VNC in
<jalons> or as persia said, if I misunderstood what you're trying to do
<maxx_h> i can connect via vnc(management ip), see the bios and grub - but when the linux kernel loads i only have a black screen
<persia> That's definitely a driver issue then.
<maxx_h> if a monitor is connected, vnc works too
<persia> You'll need framebuffer drivers and to direct console to that framebuffer.
<persia> Oh, cool.  That means you don't need new drivers, you just need to fix the bug in the drivers you have.
<persia> Essentially, they need to not inform the kernel there's no monitor attached when VNC is active.
<warmexxus> i have a similar problem... a headless ubuntu server, that I can only SSH when the keyboard/monitor is attached.
<persia> warmexxus: That's not similar at all.  That's likely a buggy BIOS failing to initialise properly without a keyboard.  There are hardware stubs you can buy that fake it.
<warmexxus> :( it's just so strange... I can ssh with a monitor/kb attached, and I can connect while headless but it disconnects after so many minutes... but only when headless...
<warmexxus> i will take your tip and google up the best I can... thank you
<maxx_h> graphiccard: VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 12)
<maxx_h> chipset: intel Q57
<maxx_h> what i see is that it uses the vga16fb - should i try different fb drivers?
<persia> maxx_h: I'd just report a bug against the driver you have.
<RoAkSoAx> kirkland, ping
<kirkland> RoAkSoAx: howdy
<RoAkSoAx> kirkland, how's it going? :)
<RoAkSoAx> kirkland, could you take a look to: https://wiki.ubuntu.com/GSoC/2010/AndresRodriguez and give me some feedback?
<kirkland> RoAkSoAx: sure thing
<RoAkSoAx> zul, for apport hooks, attaching info from the syslog should be whenever the user accepts to attach the conffile or should it always be attached?
<RoAkSoAx> kirkland, awesome, thanks :)
<uvirtbot> New bug: #539727 in vm-builder (universe) "python-vm-builder_0.12.2-0ubuntu3 doesn't set locales correctly" [Undecided,New] https://launchpad.net/bugs/539727
<zul> RoAkSoAx: if the user agress then attach the syslog info
<RoAkSoAx> zul, ok will do it that way then
<persia> RoAkSoAx: I'll suggest you grep syslog for the class of interesting stuff, and only attach that.  I'd be happier showing you my output from bind without showing you that cron is executing mirrorpr0n every 12 minutes.
<RoAkSoAx> persia, yeah I'm doing that actually: report['SyslogBind9'] = recent_syslog(re.compile(r'named\['))
<persia> Cool :)
<RoAkSoAx> :)
<kirkland> zul: howdy howdy
<zul> kirkland: hey
<kirkland> zul: where are the daily qemu-kvm builds?
<zul> https://edge.launchpad.net/~ubuntu-server-edgers/+archive/server-edgers-qemu-kvm
<zul> is it just me or is the wiki slow today?
<kirkland> zul: really slow
<zul> k thats what i thought
<kirkland> zul: did you take a snapshot of the debian/ from the lucid package, I guess?
<zul> kirkland: yeah I havent had a chance to update it yet
<kirkland> zul: okay, cool; yeah, there's some good stuff that would be nice to get in the daily too
<zul> kirkland: k ill put that on my todo list
<kirkland> zul: i was going to blog about your work, announce it, etc.... is that okay with you?
<zul> kirkland: sure
<uvirtbot> New bug: #526823 in mysql-dfsg-5.0 (universe) "error al actualizar" [Low,Invalid] https://launchpad.net/bugs/526823
<kirkland> zul: http://blog.dustinkirkland.com/2010/03/qemu-kvm-daily-builds-thanks-chuck.html
<zul> kirkland: nifty cool
<kirkland> zul: same to you ;-)
<uvirtbot> New bug: #526827 in mysql-dfsg-5.0 (universe) "package mysql-server-5.0 5.1.30really5.0.75-0ubuntu10.3 failed to install/upgrade: subprocess post-installation script returned error exit status 1" [Low,Invalid] https://launchpad.net/bugs/526827
<preben> sorry to bother, but I may need som help with Karmic+multipath+lvm2
<preben> anyone got this to work?
<axisys> how do I safely enable telnetd on ubuntu server? there is no host on the same network, just cisco router and it needs to access the ubuntu server .. so telnet seems only option
<persia> axisys: Install the IOS ssh client (reed)
<persia> axisys: http://www.cisco.com/en/US/tech/tk583/tk617/technologies_tech_note09186a00800949e2.shtml#settingupaniosrouterasssh
 * persia has no good recommendations for CatOS other than "Upgrade"
<axisys> persia: hmm.. that wont be an option for this scenario due to company policy .. but i am glad it is a valid option
<axisys> how do I start telnetd .. may be then use ufw limit telnetd ? i do that limit for ssh
<persia> There exists no safe way to enable telnetd, unfortunately.  There are lots of unsafe ways.  `apt-get install telnetd` is likely the easiest of these.
<axisys> persia: yep.. telnetd is not safe period.. looks like it will be sitting in a  private network
<axisys> persia: what you know looks like that router has ssh already there.. these network guys never ever used it.. lol
<jaypur> if i run sudo apt-get update on server
<jaypur> it will download AND  install  the updates or i need to run something to install them
<zul> mdeslaur: does this look sense to you? https://bugs.edge.launchpad.net/ubuntu/+source/ntp/+bug/517701
<uvirtbot> Launchpad bug 517701 in ntp "ntpd apparmor rule does not allow reading dhcp based config" [Low,Incomplete]
<preben> anyone used /etc/lvm/lvm.conf ?
<persia> jaypur: Neither.  It just updates the status of updates available.  "upgrade" and friends perform upgrades.
<jaypur> hm
<jaypur> let me try
<mdeslaur> zul: yeah, that looks okay...don't remove the old one just in case, just add the new one
<zul> k
<jaypur> .yeah it's upgrade  thanks persia
<zul> mdeslaur: rw?
<mdeslaur> zul: same as the old one
<jalons> preben: yes, but I haven't touched multipathd on anything newer than hardy
<preben> jalons: it seems to me that lvm somehow prevents multipathd to add the mulipath devices if diskgroups are on the pvs
<preben> jalons: so I'm trying to configure lvm.conf, so it wont touch the disks before the multipath devices are available
<preben> .. or something
<jalons> which is logical, you don't want to start your lvm until the physical disks are ready
<jalons> I'm trying to find a host I have this one, as I seem to recall fighting CLVM vs LVM on this issue
<preben> it works pretty good. multipath -ll is good, pvs, vgs, lvs all good.
<preben> but soon as I reboot, the lvm access the pvs as /dev/sda,c,d and so on, instead of /dev/mapper/alias (from multipath)
<preben> and multipath -ll is completly silent
<jalons> ok, so you want to strip out the devices {} section of lvm.conf?
<jalons> basically, ignore /dev/sd{a,b,c}*, and manually specify what you want
<RoAkSoAx> kirkland, how do I change the color of the byobu's status bar?
<jalons> there's a regex in there somewhere to ignore all sd{a-z}.{1-9} devices, so if you drop another drive in later it doesn't freak out - is that the jist?
<preben> it gets worse, I have installed the os on lvs too
<kirkland> RoAkSoAx: F9
<jalons> hrm.
<RoAkSoAx> kirkland, oh got it thanks lol :)
<preben> there is a filter seciont in /etc/lvm/lvm.conf - but I havent managed it to work yet
<jalons> preben: you want to multiplath the connection to the OS i take it?
<preben> jalons: I have managed this to work in a linux kvm guest with a single path
<preben> jalons: the os boots on a single internal disk, the multipathed disks are on a EMC san
<jalons> right, it's replacing the LVM devices with their multipath siblings
<preben> I think so
<kirkland> RoAkSoAx: ;-)
<jalons> preben: ok, it definitely sounds like you want to set up filtering (although if the system disk is separate from the actual LVM, you should be fine)
<preben> I tried this filter: filter = [ "a|^/dev/sdc|", "a|^/dev/mapper/|", "r/.*/" ]
<preben> to include the boot disk(/dev/sdc) and include the multipath devices under /dev/mapper and ignoer evrything else
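For context: filter rules in the devices section of /etc/lvm/lvm.conf are evaluated left to right and the first match wins, so the accept rules must precede the catch-all reject, as in preben's filter:

```
# /etc/lvm/lvm.conf (fragment) -- first matching rule wins
devices {
    # accept the internal boot disk, accept multipath devices, drop the rest
    filter = [ "a|^/dev/sdc|", "a|^/dev/mapper/|", "r/.*/" ]
}
```

Since / is itself on LVM here, the initramfs likely carries its own copy of lvm.conf, so the new filter probably also needs `update-initramfs -u` before it takes effect at boot.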
<jalons> so that's accepting sdc and anything under /dev/mapper, and rejecting everything else
<jalons> why do you want the boot disk included?  is it part of the LVM that's hanging off your SAN/
<preben> no, buts the boot drive is in a diskgroup by it's own
<preben> with root and swap volumes
<jalons> ah.
<jalons> duh, even though it's separated out we still have to tell lvm about it
<jalons> so to start from the top now that i get it - upon reboot it's finding the LVM that contains / and swap, but it's mangling the mpath hosts, showing them up as /dev/sdX rather than the appropriate mpathX devices
<jalons> you're sure LVM is starting after multipath daemon?
<jalons> or before, not sure on the logic there - sorry to wall-o-text and run, but market closes in 40 minutes and I need to start final checks
<preben> I think thats the core of the problem. But i tried grepping under /etc/init.d for lvm, but no such luck
<preben> jalons: thanks for helping out
<uvirtbot> New bug: #539791 in samba (main) "mount.cifs cannot mount a DFS share when using Kerberos authentication" [Undecided,New] https://launchpad.net/bugs/539791
<RoAkSoAx> zul, bug #533603 bug #533601 and bug #531978
<uvirtbot> Launchpad bug 533603 in net-snmp "Apport hook for net-snmp" [Wishlist,Confirmed] https://launchpad.net/bugs/533603
<uvirtbot> Launchpad bug 533601 in bind9 "Apport hook for bind9" [Wishlist,Confirmed] https://launchpad.net/bugs/533601
<uvirtbot> Launchpad bug 531978 in vsftpd "Apport hook for vsftpd" [Wishlist,Confirmed] https://launchpad.net/bugs/531978
<zul> RoAkSoAx: the vsftpd one is pending
<RoAkSoAx> zul, oh ok :)
<Fougner> Hi! I'm trying to setup lighttpd+rtorrent with a scgi_local, but all I get from lighttpd is 500 - internal server error, where should I start looking?
<wack479> trying to get the "SetEnvIf Request_URI "^/robots\.txt$" dontlog" (keep the server from logging any robots.txt requests) but for some reason its not working for me. does the code not just go in the apache.conf?
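The SetEnvIf line by itself only sets an environment variable; the logging directive has to be told to honour it, which is why dropping that one line into apache2.conf has no visible effect. A sketch, assuming the stock Ubuntu log path and "combined" format:

```
SetEnvIf Request_URI "^/robots\.txt$" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog
```

The `env=!dontlog` condition on CustomLog is what actually suppresses the robots.txt entries.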
<ventz> Trek: hey
<Trek> hiya ventz, i'm just here to track your issue is all
<ventz> thanks :)
<ventz> so here's the problem
<Trek> ventz: sounds to me like whatever you used to set your IPs went *poof* and isnt operating properly
<ventz> we boot up templates of ubuntu (jeos actually). At some point, we forget to re-ip a template. Now there's a duplicate IP.
<ventz> The problem is that the system doesn't tell us when we boot it up
<ventz> this constantly happens and we manually trace it down
<ventz> i am wondering if there's a way to at least know right away that there's a duplicate system IP
<ventz> in RHEL, when you boot it up, when it takes the network IP, it tells you about this
<ventz> Same for windows.
<ventz> anyone? :)
<Trek> if anyone is here, please poke me with /CTCP Trek IAMHERE
<ventz> when you have > 1200 hosts, it can become somewhat of a hassle to deal with this
<Trek> ventz: a thought, have you tried using a script to find the culprit computers, rather than tracking it down manually?
<Trek> ventz: i'm not a networking expert, but just a thought
<ventz> Usually the network guys trace it to an esx rack and from there to an esx port
<ventz> after that it's usually our job to find it
<jalons> a HA dhcpd that assigns the initial IP, a script that runs on the host that converts itself from the DHCP ip to static with whatever IP is assigned via DHCP?
<ventz> my problem is not just finding it -- that's annoying, but it's something that we can do in a finite number of steps. What kills me is finding out that this has happened in the first place. Usually we notice as people start getting different results running projects from different networks (thus hitting different resources or not reaching a resource)
<jalons> a central server that runs through each network class arping the IPs, looking for collisions is a passive solution
<jalons> i've used both, didn't like either method
<jalons> ventz: IPwatchD? http://www.linuxconfig.org/IPwatchD_an_IP_conflict_detection_tool_for_Linux
<jalons> looks like it might have some X dependencies from the page, which would be evil
<jalons> but, *shrug*
<jalons> might be a starting point
<Maletor> Hey I added ServerName mydomain.com to a sites-available file, but I can't get apache to start up without this error "Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName"
<Maletor> I have multiple TLDs so I need to be able to put them in the virtual host file...
<ventz> jalons: let me check that out. I've never used that
<ventz> I think it's for a running system though
<jalons> ventz: so you need something in the installer that's able to detect an IP conflict?
<Fougner> Maletor: set ServerName in apache2.conf to a FQDN
<Maletor> Fougner: I'm reading about that, but it doesn't say what to do if I want to link multiple FQDNs to this IP...
<jalons> Just pick one.
<jalons> determine a ServerName, and then use ServerAlias for the others
<Fougner> Maletor: you might as well use the hostname
<Maletor> Fougner: ?
<Maletor> How would I forward requests from domain1.com to 192.168.1.13 internally and requests to domain2.com to 192.168.1.10 ?
<jalons> with virtual hosts
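A minimal name-based pair of virtual hosts for the two domains could look like this (domain names and paths are placeholders; on Apache of this era the NameVirtualHost line was still required):

```
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName domain1.com
    ServerAlias www.domain1.com
    DocumentRoot /var/www/domain1
</VirtualHost>

<VirtualHost *:80>
    ServerName domain2.com
    ServerAlias www.domain2.com
    DocumentRoot /var/www/domain2
</VirtualHost>
```

Apache picks the vhost whose ServerName/ServerAlias matches the Host: header of the request, which also silences the "could not reliably determine the server's fully qualified domain name" warning once a global ServerName is set.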
<brontosaurusrex> i'm trying with some basic ircd-hybrid setup..., but i can only connect to 127.0.0.1, what is the minimum one should do in ircd.conf ?
<jalons> brontosaurusrex: in your listen {} block, make sure you don't have host localhost or host 127.0.0.1.  You really shouldn't be asking what's the minimum you need for ircd.conf, but rather actually go through and understand each configuration option before you fire up your daemon
<ventz> jalons: yes
<ventz> something that will detect it on bootup
<ventz> and not take the IP if there's an ip conflict
<brontosaurusrex> jalons: thanks
<jalons> ventz: I think you're going to need to script it to ifup then arp during install
<jalons> that's the only way I can think to solve that
<jalons> ifup && arping <IP to use>.  if response die else carry on
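jalons' one-liner can be fleshed out into a small boot-time check. The helper that interprets the result is plain shell; the arping call itself (iputils' `-D` duplicate-address-detection mode, which exits non-zero when another host answers) needs root and a real interface, so it is shown commented. Interface and IP are placeholders, not from the log:

```shell
#!/bin/sh
# Sketch of a boot-time duplicate-IP probe.

# Map arping's exit status to an action:
# iputils 'arping -D' exits 0 when no other host answered (address free).
decide_from_arping() {
    if [ "$1" -eq 0 ]; then
        echo "configure"
    else
        echo "abort: duplicate IP detected"
    fi
}

# Real usage (requires root and the iputils-arping package):
#   ifup eth0
#   arping -q -D -c 2 -I eth0 192.0.2.10
#   decide_from_arping $?
```

Hooked into the installer or an init script, this gives the RHEL-style "address already in use" warning ventz is after, before the address is actually configured.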
<klaas> I'm running a fully encrypted system using luks and cryptsetup plus dropbear for remote unlocking - I want to change the dropbear port from 22 to something else but I can't find the setting where it's changed - adjusting /etc/init.d/dropbear and /etc/defaults/dropbear does not work - anyone got an idea where the port could be hidden? :)
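One likely reason the init.d/default tweaks don't take: for pre-boot unlocking, dropbear runs from the initramfs before the root filesystem is even mounted, so the port has to be set in the initramfs-side config and the initramfs rebuilt. A hedged sketch (the exact path varies by release):

```
# initramfs-side dropbear config (path is an assumption;
# /etc/dropbear-initramfs/config is one common location)
DROPBEAR_OPTIONS="-p 2222"

# then rebuild the initramfs so the change is baked in:
#   sudo update-initramfs -u
```

After a reboot, the pre-boot unlock prompt should listen on the new port, while the normal post-boot ssh daemon keeps its own configuration.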
<McWiney> help! when accessing my server through a browser, php files are being downloaded instead of displaying in the browser. How can i fix?
<McWiney> here's what i mean: http://eckcite.com/test has two folders with identical index files (with html and php file extensions)
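PHP source being offered as a download usually means Apache isn't handing .php files to the PHP module at all. A likely fix, assuming the stock Ubuntu Apache/PHP packages of this era (needs root):

```
sudo apt-get install libapache2-mod-php5   # provides the module + handler config
sudo a2enmod php5                          # enable it if it was disabled
sudo /etc/init.d/apache2 restart
```

If the module is already installed and enabled, the next place to look is a conf file that removed the `application/x-httpd-php` handler for .php.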
<Maletor> How do I add a virtual host so that it is going to 192.168.1.10 ?
<Maletor> When my web server is 192.168.1.13
<faileas> i'm noticing that there have been a LOT of attempts to brute force my system via SSH. While they haven't succeeded, (mostly due to incompetence) any suggestions on what i should do, other than changing the port i'm listening on?
<jiboumans> faileas: this is Normal unfortunately. there's 2 obvious things you can do
<jiboumans> 1) use knock to only open ports if you know the knock sequence
<jiboumans> 2) add a rule to your firewall to block IP addresses after X unsuccessful attempts
<faileas> jiboumans: how do i do that?
<jiboumans> faileas: there'll be tutorials all over the web. I'm sure google can point you there quicker than I could
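For option 2, one common iptables pattern uses the `recent` match to drop an address that opens too many new SSH connections in a short window (thresholds are illustrative; needs root):

```
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
         -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
```

The first rule records each new connection's source address; the second drops a fourth new connection from the same address within 60 seconds. Tools such as fail2ban or denyhosts automate the same idea from the log side.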
<faileas> alternately, could i restrict SSH access to maybe 2-4 ip addresses - all my systems are ipv6 enabled and static?
<jiboumans> faileas: if you dont have a need to have access open to the world, of course you shouldn't
<jiboumans> personally, i have my ssh open to a very finite set of IP addresses, 2 of which are company jump boxes which are open to the world
<Maletor> use certificates
<Maletor> How do I tell apache2 to forward incoming requests to mydomain.com to a another IP address (internal)?
<\sh> Maletor: you should read about apache2 mod-proxy, or iptables ... depending on your needs
<Maletor> well can i just set it so that all requests go to some default, and the ones that go specifiically to mydomain.com go to my computer ip
<Maletor> given that my computer is the web server
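\sh's mod-proxy suggestion, sketched for this setup (IP and domain taken from the discussion; the modules must be enabled first, e.g. `sudo a2enmod proxy proxy_http`):

```
<VirtualHost *:80>
    ServerName mydomain.com
    ProxyPass        / http://192.168.1.13/
    ProxyPassReverse / http://192.168.1.13/
</VirtualHost>
```

Requests whose Host: header is mydomain.com are relayed to the internal machine; any other name falls through to the default (first-defined) virtual host. A second such vhost with a different ServerName and target IP handles the other domain.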
<goose> is there some way I can have all emails sent to /var/mail/ forwarded to my real email address via sendmail?
<qotsa> i successfully set up a bridge interface with bridge-utils (version 1.4-5). after i got an ip from my adsl-modem with dhclient my computer just shuts down. no joke. this is reproducible. any help is highly appreciated.
 * Mojo- waves
<Mojo-> Anyone got a moment for a n00b question?
<Mojo-> For 225 people, it sure is quiet in here...
<goose> just ask, don't ask to ask :p
<Mojo-> Hah
<Mojo-> Alright
<Mojo-> Anyone had any issues with SSH hanging, taking the cpu(s) to 100%?
<goose> um. I could maybe see that if you were using like, a 30 GiB RSA key, but otherwise no, I'd say that's an abnormal problem
<Mojo-> Heh
<Mojo-> 4 separate installs.
<Mojo-> Debian 5, and 3 ubuntu server installs
<Mojo-> No major changes done
<Mojo-> Save a static ip and installing vim-nox
<Mojo-> Also, not using the keys yet, just using passwords.
<goose> I just checked
<goose> when I SSH into my server, my CPU does spike to 100% when verifying my RSA key, but it doesn't hang
<goose> I barely even caught it
<Mojo-> Mine doesn't even offer a password prompt.
<Mojo-> Even on the localhost.
<goose> it didn't even have time to show up on the graphs, just flew by the htop screen
<pwnguin> you have ssh keys
<pwnguin> just how crazy were they?
<Mojo-> I do.
<Mojo-> Um
<goose> lol. of course I do :p as compared to my 8192 bit RSA key, a "password login" is crap
<Mojo-> Whatever the default install is.
<goose> anyways, in case anyone was wondering, the answer to my question was simple:
<goose> echo "your@email.com" > ~/.forward
<goose> (and ofc, have "sendmail" installed)
#ubuntu-server 2010-03-17
<Mojo-> Perhaps I'll try a different OS.
<Mojo-> Thanks for the help anyway.
<Bookman> I am trying to install a printer on ttyUSB0 with a ppd file using lpadmin, but all I get back is 'No such file or directory'  lpadmin -p Star -i usb:/dev/ttyUSB0?baud=9600 -P tsp651.ppd
<MTecknology> Which package provides pecl?
<MTecknology> !pecl
<MTecknology> !search pecl
<ubottu> Found:
<Pici> apt-cache search pecl reveals a few candidates.
<MTecknology> Pici: thanks
<MTecknology> When I try to run pecl, it thinks pecl and php should be in /usr/local/bin/ instead of /usr/bin/
<MTecknology> any ideas why - or how to correct this? purging and reinstalling php5-dev and php-pear didn't do the trick
<MTecknology> oops - logged out and back in - works now :P
<drm> hello can someone assist me with an Ubuntu 9.04 lamp server issue or point me in the direction i need to go
<lifeless> perhaps, if you describe the issue ;)
<drm> I am new to Linux and I am attempting to set up an Ubuntu 9.04 desktop with LAMP server to access from my other PC for testing php code
<drm> I cannot access the other LINUX pc or login to the website nor FTP access
<drm> it works and then it stops working and I am new to LINUX and have no idea where to even start searching for a correct answer
<drm> whenever I try to go to the Linux website it says "URL is invalid and cannot be loaded"
<drm> but I can access it fine from the LINUX PC
<persia> what URL?
<drm> on my other computer?
<drm> http://ubuntu/dtm
<persia> This works on one machine, and not on the other, right?
<drm> this is what I use: http://localhost/dtm
<drm> works on the linux (host ) machine but not another from my network
<persia> Try using the same URL in both places.
<persia> I suspect it's a host name issue.
<drm> already did that and same result
<drm> on the Host pc it works fine
<drm> other PC no good
<drm> even tried http/127.0.0.1/dtm and same issue
<faileas> why use the hostname while testing?
<persia> OK.  What kind of network do you have?  Is there a network administrator?  Is there a router?
<persia> faileas: Good point!
<drm> http://127.0.0.1/dtm I mean
<faileas> er.. 127.0.0.1 won't work
<drm> wired network
<faileas> you need the ip address of the server there, not localhost
<faileas> drm: run 'ifconfig' on the server and find out its IP. then try 'http://(server ip)/drm' on the client
<drm> it normally works when I first installed it with http://ubuntu/dtm just fine and then stops. I am sure it is a configuration issue I just do not know where to start looking
<faileas> urgh
<faileas> don't use hostnames.
<faileas> use the server's ip address.
<faileas> unless you have a VERY specific setup, it's going to cause more headaches ;)
<drm> will try that but what would cause FTP to stop working as well when both work for a while
<faileas> actually, what i don't get is how it works at all
<drm> LOL well it does
<drm> for a while anyways
<drm> then it changes its mind and I am sure it is something I am doing
<drm> so no one can point me toward any specific configuration file or other area to start looking?
<Psi-Jack> Just out of pure curiosity, does ubuntu's xen stuff allow installing debian domUs?
<Psi-Jack> I notice that Ubuntu 9.10 has newer versions available for xen stuff, but in some specific cases I need debian.
<drm> if I knew I would tell you but I am a newbie myself to LINUX and UBUNTU
<Psi-Jack> yeah, I'm far from it. heh
<drm> The URL is not valid and cannot be loaded.
<larsemil> hmm go skiing fore some hours or go ice fishing for some hours...?
<SandGorgon> hey guys.. I am looking into configuration management for our hosted web servers. We want to freeze a particular configuration (maybe even that website should be coming from a particular tag of svn) and if anything accidentally changes, it should revert back to the good configuration. Anyone have any opinions about Puppet, cfengine, bcfg2 ?
<TeTeT> SandGorgon: if you look for something really simple, take a shot at etckeeper
<Psi-Jack> Interesting. Ubuntu 9.10 Server has the whole Enterprise Cloud installation stuff.
<learningclouds> :-D
<learningclouds> who is a veteran on enterprise cloud?
<learningclouds> anybody??
<learningclouds> how do I catch somebody to chat with in an IRC channel?
<Psi-Jack> Maybe when people are awake? ;)
<Psi-Jack> I think most everyone here is in the US, like myself. I'm just up extremely early. ;)
<learningclouds> most people come from the US?
<learningclouds> oh god, you must be sleeping.
<learningclouds> HI Eric. can you see my msg ?
<learningclouds> maybe the irc server is too slow to transmit the msg.
<learningclouds> O:-)
<TeTeT> learningclouds: probably just ask a question and someone will pick it up
<TeTeT> ttx: do you know if there are any problems with using multiple TB for Walrus S3? Has anyone ever attempted this?
<ttx> TB?
<TeTeT> ttx: Tera Byte of Storage
<ttx> ah :)
<learningclouds> what is walrus s3?
<ttx> TeTeT: I didn't try personally.
<TeTeT> learningclouds: part of the Ubuntu Enterprise Cloud system, where the image file resides, and a part of persistent storage
<ttx> "walrus" is the eucalyptus equivalent of Amazon S3
<learningclouds> oh. very fresh to me.
<TeTeT> ttx: do you know if the buckets can grow dynamically?
<ttx> TeTeT: I must admit I haven't exercised Walrus so much, so no, I don't know.
<TeTeT> ttx: there's a size limit in the config web UI
<TeTeT> ttx: thanks anyway, will try to figure it out on my own then ...
<learningclouds> who knows what the differences are between a private cloud and an enterprise cloud?
<Psi-Jack> Oh.
<Psi-Jack> Lovely.
<Psi-Jack> I just installed Ubuntu Server 9.10 Enterprise Cloud, and trying to login with the initial default username/password admin/admin, doesn't work. Says, admin, not found.
<learningclouds> where is a training org on ubuntu cloud in Asia?
<TeTeT> learningclouds: we have a training partner in Taiwan, Geego.
<TeTeT> learningclouds: not sure if there is a strict difference between private and enterprise cloud, IMO you can exchange these terms
<learningclouds> where is Geego?
<TeTeT> Psi-Jack: wait for a few minutes, then try again
<Psi-Jack> Wow, sheesh.
<Psi-Jack> Why did it take 5-10 minutes just to be able to login?
<TeTeT> learningclouds: http://webapps.ubuntu.com/partners/training/
<learningclouds> oh .thanks
<TeTeT> Psi-Jack: it was a bug, been fixed. Did you install ubuntu-updates?
<Psi-Jack> Not yet. Freshly installed system.
<Psi-Jack> I did tell it to auto update security releases though.
<TeTeT> Psi-Jack: it's not a security update, so you will have to use apt-get update; apt-get upgrade to get it
<Psi-Jack> Understood.
<Psi-Jack> Hmmm
<Psi-Jack> What DOES this "Cloud" use for virtualization anyway? xen? kvm?
<TeTeT> Psi-Jack: kvm, check the hypervisor setting in /etc/eucalyptus/eucalyptus.conf
 * Psi-Jack shivers.
<Psi-Jack> Okay. Bye bye Eucalyptus.
<learningclouds> to go to Taiwan is not so convenient. i am on the Mainland.
<Psi-Jack> Oh wait. It has options for kvm and xen. Cool.
<Psi-Jack> I might be able to live then. ;)
<uvirtbot> New bug: #540121 in openssh (main) "does not terminate at computer shutdown" [Undecided,New] https://launchpad.net/bugs/540121
<bronto2> how to apply updates from cli?
<TeTeT> bronto2: sudo apt-get update; sudo apt-get upgrade
<bronto2> TeTeT, thanks
<bronto2> another one: how do i figure out what ftpd daemon is installed?
<bronto2>  ps -e|grep ftp
<bronto2>  ?
<bronto2> is sftp-server some sort of default on 'ubuntu server' ?
<bronto2> or can i use pure-ftpd?
<persia> bronto2: openssh-server is the common sftp server.
<bronto2> persia, bingo!
<Psi-Jack> Goodie.
<Psi-Jack> ubuntu system's totally locked up nicely. Won't even reboot sanely. Just after a full brand new installation and apt-get upgrade
<bronto2> this is the first time i have to manage a system via ssh prompt only, will be a brand new experience :P
<Hawkey> hi there... have a question.. is there some way to set up the firewall when i need to allow ssh connections from a host like this? abcd*.server.net
<Hawkey> using the * mark to represent whatever symbol.. a number usually
<soren> Hawkey: There are mechanisms for doing that, but I can't recommend it. Lots of people can make their IP's resolve to whatever they want.
<Hawkey> know that.. but here's the thing.. i need to connect to my server from mobile device.. and i doubt they will gimme static ip :(
<persia> Hawkey: Consider having some bastion that accepts ssh from anywhere with a default shell of /bin/false or some such, and using ProxyCommand to tunnel connections from your mobile through the bastion into your real network.
<soren> Hawkey: That makes it even more pointless.
<soren> Hawkey: It sounds to me like you essentially want to open your server for ssh connections to anyone using the same mobile provider?
<Hawkey> soren yes... at least for the host i'm receiving when i connect from my mobile phone
<Hawkey> and that host is dynamic :/
<soren> So one wouldn't even have to spoof a PTR record, one would just have to happen to have the same mobile provider as you..
<soren> Hawkey: I really think this is a waste of time.
<soren> Hawkey: I think you should spend more time making sure that your password policies are sound, for instance.
<Hawkey> soren i know.. point is.. i have to have that access..
<Hawkey> need it..
<soren> Hawkey: I'm not telling you to close your firewall completely.
<soren> I'm telling you to just leave it open.
<Hawkey> well i use pretty hard password if you mean this
<soren> For most intents and purposes, that is what you will be doing anyway. Limiting access to an ISP or anyone who wants to pretend to be on the same ISP is really not adding any real amount of security.
<persia> actually, it likely reduces security because it creates the false impression that there is a meaningful filter for auditing.
 * soren concurs
<Hawkey> soren i'm aware of that...
<Hawkey> what do you think about changing the default port number to a different one.. it could help a little bit right?
<soren> Hawkey: Then whay are you doing it?
<Hawkey> thinking now...
<Hawkey> or why?
<soren> Yes, I meant "why".
<Hawkey> to allow me to connect to different server :-D..
<Hawkey> which is locked on my static ip :-P
<soren> No, that's not what I meant.
<soren> Why bother with the firewall at all if you know that it's not adding any security=?
<persia> Hawkey: So, you basically have two choices.  1) follow soren's advice and make the server secure, and don't lock down the firewall, and just proceed.  2) Get extra paranoid which involves extra hardware, extra networking, etc.  This doesn't make it secure, just harder to access.
<soren> Hawkey: Moving it to a different port will likely help against the odd script kiddie. Anyone with half a clue will perhaps be delayed all of 30 seconds by a stunt like that.
<Hawkey> well because i dont want to have it all open.. i want to restrict it as much as possible
<persia> (where 2) is some variation on the bastion I mentioned above, perhaps requiring knocking, etc., but for most cases a waste of time and money)
<a_ok> I want to proxy ftp from browser through an ubuntu server. what do I need for that?
<soren> Hawkey: Ok, here's an analogy:
<soren> Hawkey: Imagine there was a way to limit access to your server only from the northern hemisphere.
<persia> a_ok: squid can also do that.  There may be other choices.
<Hawkey> soren hmmm
<soren> Hawkey: Doing so would still leave it open for half the planet.
<soren> Hawkey: ...plus anyone who has access to another system on the northern hemisphere.
<persia> plus anyone who can spoof being in the northern hemisphere
<soren> Hawkey: Which probably protects you from perhaps 30-40% of the planet... which is completely pointless.
<a_ok> persia: I tried ftp-proxy but I think that is a reverse proxy will check out squid. lighter is better in this case
<Hawkey> well no one except me has access to my server.. so i'm confused or dont understand what's your point
<soren> persia: Right. I was assuming that hemisphere detector was infallible :)
<persia> soren: Ah, right.
<soren> Hawkey: The point is that such a security measure could so trivially be bypassed that spending time adding it is pointless, and what's worse is that it will create a false sense of security.
<persia> Hawkey: Basically, since you don't know your IP, a firewall rule is pointless.  The options are 1) leave it open (same level of security as a bad filter), or 2) do something expensive and complicated.
<Hawkey> well forgot time and money.. not a value atm..
<Hawkey> persia in the 2nd case ... could be more specific?
<soren> What the 2nd case is specifically is not the main point.
<soren> The main point is that you are wasting your time with this firewall.
<soren> Regardless of what you do instead.
<Hawkey> soren i just want to add a fw rule to allow connections from an ip like this aaa.*.bbb.ccc .. that was my original question... and got it.. it's not possible right?
<Hawkey> need to have exact full name
 * soren gives up
<Hawkey> no substitutions
<persia> Hawkey: Set up some server outside the firewall.  Have it listen only on ssh (optionally only listen after knocks).  Set up the firewall to only accept ssh from that server.  Set up a nameserver to have an internal domain.  Set up the external server to know about that domain.  Set up your client ssh_config to proxy stuff for that domain through the external host.
<persia> This makes it harder.  It is still not secure.
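On the client side, persia's bastion steps mostly reduce to an ssh_config fragment like this (host names are hypothetical; the netcat form of ProxyCommand shown here works on the OpenSSH versions of this era):

```
# ~/.ssh/config on the mobile/client side
Host *.internal.example
    ProxyCommand ssh -q bastion.example.com nc %h %p
```

Connections to anything under .internal.example are tunnelled through the bastion, so the firewall only ever needs to accept ssh from that one host.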
<soren> Hawkey: It's the same as not locking your door, but putting a sign on your front door saying: "The door is locked".
<Hawkey> hmm..
<persia> The complicated solution is more like locking your door with a non-unique key.
<soren> Or locking your door, but leaving a great big window open right next to the door.
<Daviey> Hawkey: you can do ufw/iptables via subnets.. which would probably do what you are trying to do
<persia> Is it?  I've always liked bastions, because one needs to know the internal name as well as the other auth credentials.
<Daviey> so only people on your ISP can connect.
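Daviey's subnet idea, expressed with ufw, would look something like this (the provider range is a placeholder, and rule order matters since ufw evaluates rules in sequence; needs root):

```
# allow ssh only from a (hypothetical) provider range, deny it from elsewhere
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp
sudo ufw deny 22/tcp
```

As the rest of the discussion argues, this mostly reduces drive-by noise rather than adding real security.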
<persia> Daviey: That's pointless though.
<Daviey> persia: I disagree it's pointless, but it does address what the chap wants to do... Doesn't it?
<persia> It does.  I just agree with soren that it's pointless, because anyone can apparently be from an ISP with minimal effort.
<Daviey> considering most of my hacking attempts come from china and russia, limiting to a single ISP certainly decreases the brute force potential massively.
<Daviey> hmm
<Daviey> PTR record is easy to spoof, agreed. UDP packets are easy, agreed.  However, TCP - forging the from is somewhat less easy :)
<persia> Daviey: Saying "only this ISP can get in" isn't secure.  Saying "everything but this ISP is sending pointless brute-force hack attempts that annoy me" is perhaps an excuse for a filter, but it's just relieving an annoyance, rather than being more secure, as such.
<soren> Daviey: No. It's as easy as signing up with that particular ISP.
<persia> soren: If you ever have time, I'd like a critique of the bastion approach, just from personal interest.  No rush of any kind: months later would also be appreciated.
<soren> Daviey: Something which thousands have already done.
<Daviey> Oh, i entirely agree it's not as secure as some options - but does add a level of security.  Coupled with something else, it should be pretty good.
<persia> Or asking someone with a botnet to give you access to a machine using that ISP.
<Daviey> fail2ban / denyhosts etc
<soren> I think assuming that people using the same ISP as you are less malicious than anyone else is a mistake.
<persia> Daviey: These are tools to keep the logs cleaner.  They don't actually help that much against anyone actually wanting in.
<Hawkey> persia well.. that could be prevented when that isp is using dnssec, or am i totally out of dish?:P
<Daviey> Well yes, but IME the majority of attempted hacks aren't targeted - just drive by pokes.  Therefore limiting to subnet, coupled with something else - is a pretty good solution IMO
<persia> Hawkey: That only means it's really a machine from that ISP.  Could be a malicious customer.  Could be part of a botnet.
<persia> Daviey: I agree it doesn't hurt: I just don't believe it's actually more secure.
<Daviey> :)
<soren> I just don't want someone come in here, ask for this sort of advice, and just tell them that "yeah, put this or that in hosts.allow", have a week pass, have him be succesfully hacked, and have him go "oh, why didn't those fuckers in #ubuntu-server say that this was a problem?".
<persia> !ohmy
<ubottu> Please remember that all Ubuntu IRC channels share the same attitude of providing friendly and polite interaction with all users of all ages and cultures. Basically, this means no foul language and no abuse towards others.
<persia> But, yeah.
<echosystm> hey super cool dudes!
<echosystm> so, i have a question
<Daviey> soren: Agreed.. i'm saying limiting to subnet COUPLED with something else, is better than just having something else.  And he did ask for that :)
<echosystm> lets say someone was to forgo redhat/centos and try to install oracle on ubuntu server
<echosystm> would the experience be somewhat akin to putting ones genitals in a grinder?
<Daviey> O_o
<echosystm> or do canonical take serious business serious
<persia> !ohmy | echosystem
<ubottu> echosystem: Please remember that all Ubuntu IRC channels share the same attitude of providing friendly and polite interaction with all users of all ages and cultures. Basically, this means no foul language and no abuse towards others.
<Daviey> echosystm: Well for the last part, you might be better to direct the question towards Canonical.
<persia> Oracle has this to say about running Oracle on Ubuntu: http://www.oracle.com/technology/tech/linux/install/xe-on-kubuntu.html
<persia> I suspect there are other guides for other products.
<soren> persia: I think the #python channel gets it right. The regulars there refuse to answer people's questions if they seem to be on the wrong track.
<Hawkey> soren i'm aware of the risk you're talking about... but i have no other option... need to connect from a mobile device.. whatever risk it takes...
<soren> Hawkey: You are either not understanding at all or not paying attention.
<soren> Hawkey: I'm telling you to LEAVE THE FIREWALL OPEN!
<persia> soren: Aha.  I see.  Thanks.
<soren> Or not set one up at all.
<soren> It's /pointless/.
<soren> Hawkey: I'm /not/ telling you to close it up, and not let anyone in.
<echosystm> persia bro, installing oracle xe is hardly comparable to 10g
<Hawkey> but you say that it's like i have left it open.... what's the difference?
<soren> Hawkey: None!
<persia> echosystm: My assertion is only that Oracle seems to have guides, not that the products are similar.
<soren> Hawkey: That's the point!
<echosystm> im more interested in what canonical's commercial alliances are like
<soren> Hawkey: If there's no difference, don't waste your time.
<persia> echosystm: For that, as Daviey said, you'd do better to contact Canonical.
<echosystm> i figured someone here would have experience?
<soren> Hawkey: It's very, very simple. Adding the firewall will not increase security. So don't waste time setting one up. It will make no difference, so why do it?
<echosystm> vendors will tell you all kinds of crap
<Hawkey> soren well that's what i DO NOT want to.. i want to restrict it as much as possible.. not whole open... that's what i ask if there is an option to add that fckin rule...
<soren> Well, I'm not going to help you.
<soren> Simple as that.
<Hawkey> soren lol... not increase security? what a bullshit?
<soren> I've tried to explain why.
<Hawkey> point was to minimize the risk
<soren> Whatever. My life is way too short for this.
 * soren wanders off
<Hawkey> heh
<soren> persia: I also have a bastion host. It's also convenient that you don't need public IP's for everything.
<persia> That too :)
<Daviey> soren: If we bring home + ipv6 into the mix.  I have the firewall rule of allowing my /64 to connect, but the rest of the internet blocked.  Therefore my security is the same, someone could get on my LAN (wifi perhaps), and connect to the servers - but the audience of people that can attack is reduced massively.
<Daviey> Same as a spamd server i run, that is firewalled off to only allow my ipv4 subnet to connect.. that is less for security, more abuse tho.
<EhrN> hi all. We're looking for a control panel for a dedicated server. we have tested webmin/virtualmin. Do any admins here have a favorite panel ?
<Daviey> EhrN: Generally all panels are sub-optimal, and we recommend against them.. what operations do you want to perform with the panel?
<EhrN> My boss creates and hosts some websites (Magento commerce, CMS (drupal, Joomla), ERP) + mail + ftp accounts for each customer
<EhrN> the dedicated server is hosted by OVH
<sherr> EhrN: are you relying on the panel to set everything up correctly and securely?
<sherr> Or do you have an experienced sysadmin/person who knows how to configure servers?
<sherr> Because a "panel" is a bad idea probably - best to have someone who knows how to configure a server set things up. Security is a big concern!
<EhrN> sherr, server security is ok. the panel is just for administrate a new customer's website/mail/ftp account/dns
<sherr> OK - I don't use them and do not trust them. Be careful. Maybe this is what you are after :
<sherr> http://www.ispconfig.org/
<sherr> Or something like that. I have no experience of it though.
<EhrN> yes i know ispconfig
<EhrN> ehcp looks very nice too
<soren> Daviey: That's not the same at all.
<Daviey> soren: Actually, it is - you are reducing the audience that can launch an attack.
<soren> Daviey: Yes. But you're doing it beyong the point at which the quantitaive difference that makes has become qualitative.
<soren> s/beyong/beyond/ obviously.
<soren> And s/quantitaive/quantitative/
<soren> Several reasons:
<soren> I don't suppose you sell access to your home network to random strangers?
<soren> ISP's tend to do that. You know... To stay in business and all that.
<Daviey> soren: The IPv4 address space allows ~4,294,967,296 addresses, if you restrict it to just a /8 that makes the audience that can attack down to less than 16,777,216.. this is a pretty significant drop.  As i said, i'm not saying this should be the ONLY defence, but it certainly adds to the security by multiple levels.
<soren> I hear you.
<soren> And disagree.
<Daviey> soren: I do sell access to my ipv4 allocation, and have services firewalled off to only that allocation :)
<soren> Being on the same subnet as yourself (a subnet you don't control) does not make anyone more or less trustworthy.
<Daviey> i didn't claim it did
<soren> Then it's pointless.
 * persia one had a job as a network admin where it was *required* to spoof addresses in order to log into management servers.  Didn't stop anyone from doing it daily.
<soren> All you're doing is reducing noise in your logs.
<persia> s/one/once/
<soren> persia: Heh :)
<persia> soren: There is value in reducing that noise, as it makes it easier for folks to track goings on.
<FireCrotch> persia: grep -v does that just as well
<persia> FireCrotch: I don't claim otherwise.
<soren> persia: Fair. I just don't think a firewall is the right tool for that job. And neither do you :)
<Daviey> soren: the inverse of what i am saying is advertising a teenager's house party on facebook, and someone standing at the door asking people for tickets to get in.... Not advertising the party on facebook would have reduced the number of people that turn up at the door and try to force themselves in.
<Daviey> But you still have someone checking tickets, (ie denyhosts/fail2ban) for those that know about it, but you still don't want them to force themselves in.
<FireCrotch> Why would anyone in their right mind have ssh listening on a public interface to begin with?
<soren> FireCrotch: To get in?
<FireCrotch> Allow ssh only from the internal network, and use a VPN
<soren> FireCrotch: Why would anyone in their right mind have VPN listening on a public interface to begin with?
<Daviey> FireCrotch: so you have the VPN daemon listening on a public interface?
<soren> (See what I just did there?)
<persia> FireCrotch: That just moves the issue from ssh auth to vpn auth.
<Daviey> bah
<FireCrotch> A VPN is a wayyyyy more secure way to do it
<soren> Hahahahhah
<soren> FireCrotch: Why is that?
 * Daviey avoids this discussion. :)
<persia> heh
<soren> I can write a /really/ bad VPN server and a /really/ bad ssh server. Just as easily.
<persia> I once had a client that implemented VPN via ssh.
<soren> One could easily argue that the previously discussed bastion host is a poor man's VPN.
<Daviey> persia: vpn over ssh via tap \o/
<persia> Daviey: That's a lovely model!
<FireCrotch> :( it's way too late at night for me to be pondering this topic
<Daviey> I have been tempted to run a vpn tunnel, over ssh, which is over socks proxy, made via sshing over dns.
<persia> soren: Depends on the VPN implementation.  I've seen lots of places that have nice open access through the firewall to a concentrator on the internal network.
<FireCrotch> or rather, way too early in the morning, now
<FireCrotch> I should sleep haha
<persia> Daviey: might suffer a little latency that way.
<soren> persia: Sorry, what depends on the VPN implementation?
<persia> soren: Whether a bastion host can be considered a poor man's VPN.
<Daviey> persia: So i guess adding a network connection via packet radio wouldn't help? :)
<persia> My preferred VPN implementation is always to bastion the concentrator, but I've seen it done other ways.
<persia> Daviey: Just remember to have your radio base station uplink via satellite, and you'll never notice.
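Daviey's ssh-over-tun idea above is real ssh functionality (the `-w` option; a tap device additionally needs `-o Tunnel=ethernet`). A minimal sketch, assuming a hypothetical host "gateway", root on both ends, and `PermitTunnel yes` in the server's sshd_config — illustrative only, not run here:

```shell
# bring up a layer-3 tunnel inside an ssh connection (tun0 on both ends);
# "gateway" and the 10.254.0.0/30 addressing are made-up for this sketch
ssh -f -N -w 0:0 root@gateway
ip addr add 10.254.0.1/30 dev tun0     # address the local end
ip link set tun0 up
# on gateway: ip addr add 10.254.0.2/30 dev tun0 && ip link set tun0 up
ping -c1 10.254.0.2                    # traffic now flows inside the ssh session
```

Latency stacks with every layer, which is why persia's "might suffer a little latency" quip about tunnel-over-tunnel-over-proxy setups is on point.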
<FireCrotch> what was this argument even about to begin with?
<Daviey> :)
<persia> FireCrotch: different viewpoints on convenient ways to pretend to be secure
<Daviey> FireCrotch: soren is having a house party at his house, and he wants everyone to turn up :)
<Daviey> (not true)
<soren> persia: I think calling something a "poor mans something" leaves a lot of wiggle room for interpretation.
<persia> soren: I guess.  I just consider bastion vs. non-bastion completely separate from different classes of device usable to reroute packets inside some infrastructure.  Maybe I've seen too many annoying networks.
<soren> persia: You're being very specific.
<soren> persia: Assuming VPN is a mechanism that allows access to internal infrastructure to authenticated users..
<persia> soren: OK.  In that sense, just unfirewalled ssh is also VPN.
<soren> persia: ..a bastion host is an implementation of VPN. Both have many other characteristics, but with that limited definition (which for many intents and purposes is suitable, I think), it holds.
<persia> I tend to think of "VPN" as some way to have an encrypted tunnel between two networks, over some other network (where one of those networks may be a /32)
<soren> /32?
<soren> Really?
<soren> You do VPN to localhost?
<persia> It's not an uncommon profile for e.g. roaming sales force needing access to intranet, etc.
<persia> Tends to be limited to some DMZ.
<soren> What would be the point of a loopback VPN connection?
<soren> I mean..
<persia>  /32 isn't necessarily loopback.
<soren> a VPN connection from my laptop to my laptop is not going to get me anywhere?
<soren> /32 only has one host in it.
<persia> No, but a vpn from 10.0.0.1/32 to 172.19.24.192/29 is useful.
<persia> And for some applications, 192.168.73.43/32 to 10.101.57.34/32 might be useful.
 * soren stares
<soren> In that example, what is 10.0.0.1 ?
<persia> My laptop
<persia> Bad IP, actually.
<soren> Ok. Oh, sure, that makes sense.
<soren> I just got confused by the "one of those networks may be a /32" bit. One of the networks in the context was the foreign network.
<persia> But that there *is* a tunnel is completely separable from how the concentrator is implemented.
<persia> Foreign /32 was my second example.
<soren> Agreed.
<soren> No, no.
<persia> My laptop to one single server in a trusted infrastructure.
<soren> Foreign as in the network that you're crossing.
<soren> (the intenret)
<soren> internet.
<soren> Man, typing is difficult today.
<persia> Oh, the tunnel media.  Yeah, that has to be larger than /32 :)
<soren> Usually, yes.
<persia> (assuming IPv4: /32 is a fine tunnel media for IPv6)
<soren> Sorry, didn't mean to be difficult. It was the first thing I thought of and I just got very confused.
<soren> :)
 * soren decides to write some code instead
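The /32 back-and-forth above comes down to prefix arithmetic; a throwaway shell check (not from the log):

```shell
# a /N prefix contains 2^(32-N) addresses
hosts() { echo $(( 1 << (32 - $1) )); }
hosts 32   # 1        -- a single host, e.g. one roaming laptop
hosts 29   # 8        -- persia's 172.19.24.192/29 example
hosts 8    # 16777216 -- the /8 figure Daviey quoted earlier
```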
<zul> morning
<soren> Daviey: Do you have a trick for detecting the tun device on the server when you're creating tunnel using ssh?
<soren> Daviey: ...or do specify a specific one to make configuration easier?
<pmatulis> "specify a specific" ?  ;)
<soren> Err..
<soren> Yes :)
<MatBoy> mhh ubuntu cannot write the hosts.hfaxd file for hylafax
<zul> ttx: we are suppose to be doing iso testing today right?
<ttx> zul: yes
<Daviey> soren: do you mean ssh+tun vpn?
<Daviey> for your routing?
<smoser> ttx, ping, meeting ?
<ttx> smoser: meeting is at 1400 UTC, which is in one hour.
<smoser> stupid time change
<ttx> smoser: :P
<ttx> smoser: the US should just use the same dates as the others :P
<smoser> i wish i could come up with some snarky remark about why these dates are inherently better
 * smoser grew up in Indianapolis, the last hold out for "lets just *not* decide to change the time"
<andol> ttx: Saw you just added bug #356256 as a papercut. Do you plan on looking into the issue specifically in regards to slapd, or on a more general note regarding database daemons?
<uvirtbot> Launchpad bug 356256 in openldap "dist-upgrade stops slapd" [Wishlist,Triaged] https://launchpad.net/bugs/356256
<ttx> andol: there was a duplicate bug that just said "this is a papercut", I unified them
<ttx> andol: doesn't mean we should accept it
<andol> ttx: Ahh
<ttx> andol: we'll review it in one hour in the meeting -- but that doesn't seem to meet the "obvious way to fix" criteria
<andol> ttx: my impression as well, even if it surely would be a nice thing to do something about
<ttx> smoser: the current beta1 UEC images candidates have a ramdisk: is it a bug or a feature ?
<smoser> feature
<smoser> work around
<ttx> I missed the memo, I guess
<ttx> which bug ? the cloud-init / upstart thing ?
<smoser> i commented in the bug... its a work around for bug 531494
<uvirtbot> Launchpad bug 531494 in upstart "cloud-init job not running in eucalyptus without ramdisk" [Critical,Incomplete] https://launchpad.net/bugs/531494
 * ttx looks
<smoser> ttx, i think its better than a release note "uec images boot only if you're lucky"
<ttx> smoser: certainly :)
<smoser> the other option was to put cloud-init back later
<smoser> to run later.
<ttx> smoser: Note that I was ok with the no-ramdisk concept if it wasn't introducing new bugs -- I'm not opposed to dropping it if that means a more stable system.
<ttx> smoser: We'll have to make a final choice by beta2
<smoser> ttx, agree
<smoser> the scary thing to me, though, is that i don't understand why ramdisk would fix this per se
<smoser> and neither does Keybuk
<smoser> so i'm not convinced it's not just a bandaid that reduces the likelihood of the race
<ttx> smoser: I suspect it introduces a timing change which means you win the race (almost) all the time
<smoser> :-(
<smoser> right
<ttx> smoser: I tested the UEC images alright
<smoser> and you were typically unlucky ?
<ttx> no, lucky
<smoser> before
<zul> hah
<smoser> you used to be unlucky until you found the ramdisk, right ?
<ttx> smoser: I was unlucky when trying to start multiple instances
<smoser> ok.
<smoser> dustin was  unlucky 90% of the time at least
<ttx> I no longer am unlucky.
<nucc1> hi, can anyone help? http://pastie.org/873729
<jalons> nucc1: the host has sender verification enabled - ensure your DNS names are valid, i.e. the ultimate recipient of your email can resolve app.domain.com
<jalons> alternately, white list/add to a no sender call back list on the mail daemon accepting mail for domain.com
<nucc1> jalons, i can only edit the dns settings for domain.com, since it is a shared host.
<jalons> then make sure the mail host can resolve app.domain.com, and welcome to sender verification hell
<nucc1> jalons, or what should i do to make app.domain.com reply to sender verification requests?
<nucc1> jalons, by mail host, you mean the mail server for domain.com? this means i should add an MX entry that points to app.domain.com ?
<jalons> nucc1: postfix by default should respond, unless you went a little tweak happy in main.cf
<nucc1> jalons, i configured it with dpkg-reconfigure
<ameba23> I have just installed ubuntu server on a system with two raid harddrives and one normal scsi. i chose to install the system on the non-raid drive, but i am getting grub error 21.  is it worth switching to lilo?
<jalons> nucc1: is the mail from the postfix server being presented to the mail server for domain.com as app.domain.com?  If the from address is not being rewritten to be from @domain.com and is coming from @app.domain.com then yes, you'll need an MX record on app.domain.com
<jalons> you shouldn't, since it should make a call back as long as an A record points back to app.domain.com
<jalons> but, it can't hurt
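jalons's advice boils down to two DNS checks. A sketch using the hypothetical names from this exchange, so the commands are illustrative rather than run here:

```shell
# the name the mail claims to come from must resolve, or the
# sender-verification callback fails outright:
host app.domain.com              # expect an A record back
# the callback connects to the sender domain's MX (falling back to the
# A record if no MX exists):
dig +short MX app.domain.com
```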
<ttx> smoser: we don't have specific tests for ebs root, so you should probably test them in parallel with their instance store equivalent ?
<smoser> ttx, yes, i'll just run them through the same set of tests
<smoser> except that i will shut them down, and start back up
<smoser> and verify they came back up
<ttx> smoser: make sure we get specific AMI entries in the ISO tracker for beta2
<smoser> i hate amis
<smoser> anyway
<ttx> smoser: what do you propose ?
<smoser> nothing else. i just hate that this doubled the testing.
<ttx> smoser: I'm not sure how we can consolidate tests efficiently
<smoser> and have no suggestions or good arguments that it should not have
<smoser> i agree completely.
<smoser> it just sucks.
<ttx> smoser: i'm pretty sure you will automate all that  :)
<smoser> sorry, sometimes i just like to complain for the sake of it.
<ttx> smoser: we /could/ get rid of the i386 images
<ttx> smoser: i'm not convinced they are useful
<ttx> In the same vein, testing UEC/i386 just doesn't make any sense.
<smoser> yeah. i'll agree on uec.
<smoser> i test/use i386 all the time on ec2
<ttx> smoser: so both arch have their usefulness ?
<smoser> its 1/4 the price of the cheapest x86_64
<ttx> ah
<ttx> price :)
<smoser> you rich people
<smoser> :)
<smoser> let them eat cake and all that
<emilioeduardob> hi! im trying to add a user on a server with  useradd -s /bin/false  prueba3 but it fails saying Can't create `/etc/passwd': No space left on device. I did df -h and i see i have plenty of space. any ideas why?
<sherr> emilioeduardob: /etc/passwd? very worrying .... :-(
<ameba23> I have just installed ubuntu server on a system with two raid harddrives and one normal scsi. i chose to install the system on the non-raid drive, but i am getting grub error 21.   is there a way i can install lilo, or boot from a floppy?
<emilioeduardob> sherr, yeah.. it's too weird...
<sherr> Maybe running out of inodes? Try : df -i
<emilioeduardob> sherr, yup.. 100% used inodes :S
<emilioeduardob> sherr, tks! at least now i know what i'm up against :P
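Once `df -i` confirms inode exhaustion, the next step is finding the directory full of tiny files. A quick sketch (the candidate paths are examples; a full traversal can be slow):

```shell
# count filesystem entries under each candidate directory and rank them;
# -xdev keeps find from crossing into other mounted filesystems
for d in /var /tmp /home; do
  printf '%10d %s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -rn
```

Common culprits are mail queues, session directories, and cache trees full of thousands of small files.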
<ameba23> I have just installed ubuntu server on a system with two raid harddrives and one normal scsi. i chose to install the system on the non-raid drive, but i am getting grub error 21.   is there a way i can install lilo, or boot from a floppy?
<creatorbri> Hello. Question: I am on Ubuntu connected to a Windows network. I need to access a Windows machine by its hostname for both Filesharing and http access (i.e. intranet, svn). But my Ubuntu pc can't see the hostname at all.
<creatorbri> I have samba and smb-client installed
<creatorbri> er, smbclient
<creatorbri>  More Details: I am running Ubuntu on a VM with a Bridged connection. My host machine (a Windows 7 pc) can access the windows server just fine.
<smoser> mathiaz, you have UEC rig in appropriate testing setup ?
<mathiaz> smoser: I think so
<mathiaz> smoser: but it may be wiped out to test the beta1 isos
<ameba23> is it common for raid controllers to interfere with grub?
<soren> Daviey: Yes.
<soren> ttx: I still expect to upload a fix for https://bugs.launchpad.net/bugs/460398 before beta 1.
<uvirtbot> Launchpad bug 460398 in server-papercuts "/etc/timezone is not set correctly" [Medium,In progress]
<soren> ttx: I've just been procrastinating to see if I could fix more stuff before then.
<creatorbri> Question: How can I access a Windows server by Hostname from within an Ubuntu VM? Details: I am running an Ubuntu VM on my Windows 7 laptop, which is connected to a Windows network. The Ubuntu VM is using a Bridged connection and can access the Internet fine. My laptop can access "\\server\" and "http://server/" just fine. I have samba and smbclient installed. But for some reason I can't access the windows server from my Ubuntu VM. 
<sherr> creatorbri: not really a #ubuntu-server relevant topic
<ttx> soren: ok
<sherr> Might be better asking on the forums or #ubuntu
<sherr> ameba23: Disk order and layout can affect grub, yes. When you say "scsi" do you mean you have a real SCSI disk? Or do you mean SATA?
<ameba23> er its SATA
<uvirtbot> New bug: #540279 in apache2 (main) "package apache2.2-common 2.2.11-2ubuntu2.6 failed to install/upgrade: the post-installation script subprocess returned an error code 1" [Undecided,New] https://launchpad.net/bugs/540279
<ameba23> sherr, Ive read some responses on forums to people having grub error 21 and a lot of people seem to say lilo works better but im not sure how to change it
<arch0njw> good morning all.  I know it is not standard practice to install a DE on ubuntu server, but I have reason to need something relatively lightweight.  A minimal install of gnome would be sufficient.
<arch0njw> Any recommendations, please?
<arch0njw> pm me if that is more appropriate.  I don't mean this to be a poll.
<faileas> arch0njw: openbox is pretty minimal
<faileas> or plain old x
<andol> arch0njw: In case you want a more minimal gnome you can always do an "apt-get install ubuntu-desktop --no-install-recommends"
<arch0njw> andol: oh?  Now there's something very new to me.  Interesting.
<arch0njw> faileas: Thank you.  I am familiar with good ol' OB.  However, I think the other person maintaining this server might find it a bit of a shock. I'd install FB if I had my way...
<andol> arch0njw: As of 8.10 apt was set to also install Recommended packages by default, allowing a lot of not-absolutely-necessary dependencies to be moved from Depends to Recommends.
<faileas> arch0njw: or xfce, which is reasonably gnomelike
<arch0njw> faileas: Pondered that too.  I tried a straight xfce install once and embarrassingly muddled my way through installing sufficient packages to make it usable
<faileas> well, there's xubuntu desktop ;)
<arch0njw> faileas: presumably I could use the --no-install-recommends there as well to get the least-most needed.
<faileas> arch0njw: probably. try it on a VM first? ;p
<arch0njw> faileas: indeed :D
<arch0njw> faileas: andol: thank you for the advice :)  I'll be applying this wisdom soon ;)
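andol's `--no-install-recommends` tip can be previewed safely with apt's simulation mode. A sketch, assuming an Ubuntu/Debian box — not run here:

```shell
# count how many packages each variant would install (simulation, no changes):
apt-get -s install ubuntu-desktop | grep -c '^Inst'
apt-get -s install --no-install-recommends ubuntu-desktop | grep -c '^Inst'
# to make skipping Recommends the default, create a file such as
# /etc/apt/apt.conf.d/99norecommends (hypothetical name) containing:
#   APT::Install-Recommends "false";
```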
<MatBoy> on ubuntu no application sees the faxes of hylafax in the queue
<warmexxus> I came in here the last couple days talking about SSH and losing connection while headless... The problem has been resolved with using the boot option "nomodeset". It's a bug that affects old dell dimensions with intel video chips. Sleep mode on the monitor locks the computer up. Nomodeset works to resolve.... just a FYI to anyone who can use it... :)
<Notscape> Hi, does anybody here have experience in 6.06 (server) to 8.04 (server) direct upgrade ?
<Notscape> my 6.06 is a minimal LAMP instalation plus postfix, courier, squirrel, clamav, spamassassin, amavisd-new
<Notscape> only was wondering if someone here had troubles or the procedure is straightforward
<_ruben> Notscape: it *should* be straightforward
<Notscape> _ruben: :p
<_ruben> it's a supported upgrade path .. then again, there are no guarantees for any upgrade scenario afaik
<Notscape> my experience in 6.06 (desktop) to 8.04 (desktop) was very bad
<Notscape> it didnt work
<Notscape> lots of unresolvable libraries conflicts
<Notscape> it is true that i have less chance of failure on the server, as it has fewer extra packages
<resno> I am having a hard time getting wsgi setup, I keep getting the error that wsgidaemonprocess is misspelled or the module is not included.
<sbeattie> Notscape: make sure to use do-release-upgrade rather than editing sources and doing apt-get dist-upgrade.
<Notscape> sbeattie: yes I will go that way . . . but just looking for someone with previous experience
<sbeattie> Notscape: hrm, missing libraries... maybe make sure you have universe enabled before upgrading?
<sbeattie> (dapper didn't enable universe by default)
<ttx> smoser: how is EC2 image testing doing so far ?
<smoser> i've not started... been poking at my scripts to ebs-rootify them. i will be starting soon.
<ttx> ok
<ttx> smoser: we need to run "some test" soon enough to catch the kittenkiller.
<smoser> yeah.
<smoser> will have "some test" in next 30 minutes, promise.
<ttx> zul: you cover the upgrade testing ?
<resno> I am having a hard time getting wsgi setup, I keep getting the error that wsgidaemonprocess is misspelled or the module is not included.
<zul> ttx: yep when I get to it
<Notscape> see you, thanks
<kees> kirkland: can you point me to docs on setting up the shared screen session on ec2?
<SquidNoob> hi
<mathiaz> hggdh: hi!
<mathiaz> hggdh: are you planning to do some uec testing today?
<SquidNoob> I'm trying to set up a transparent proxy with squid, I have the latest version of iptables, which comes with the extension tproxy but I can not make it work.
<SquidNoob> I'm using this rule:
<SquidNoob> iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
<SquidNoob> But I've seen on some forums that instead of using "iptables -t mangle" one should use "iptables -t tproxy", but this does not work with the current version of iptables. Do I need to patch iptables again with the TProxy patch, or is that only for older versions?
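For what it's worth, there is no `tproxy` table in mainline iptables; the setup usually documented for squid's TPROXY mode (kernel 2.6.28+ with the TPROXY target built in, squid 3.1+) combines SquidNoob's PREROUTING rule with a DIVERT chain and policy routing. A sketch — needs root and a TPROXY-capable kernel, so it is untested here:

```shell
# divert packets that already belong to a local (proxied) socket
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
# redirect new port-80 flows to squid's tproxy port
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
# route marked packets locally so squid can pick them up
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```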
<hggdh> mathiaz: yes, when can we start?
<mathiaz> hggdh: whenever you want
<hggdh> now?
<hggdh> or is it time for lunch?
<hggdh> mathiaz: ^
<mathiaz> hggdh: now is a good time for me
<hggdh> mathiaz: OK. What is it I am expected to do?
<mathiaz> hggdh: the test cases are outlined on the wiki at http://testcases.qa.ubuntu.com/Install/ServerEConfig
<hggdh> mathiaz: er. replacing karmic by lucid, correct?
<mathiaz> hggdh: I've also updated the uec-testing-preseed branch with a README.testing to outline how to do the testing
<mathiaz> hggdh: not really.
<mathiaz> hggdh: the focus of the test cases is to test UEC itself, rather than the images
<mathiaz> hggdh: there are other testcases covering the images themselves (for EC2 and UEC)
<hggdh> mathiaz: which machine should I use?
<mathiaz> hggdh: IIRC you installed a topology yesterday
<mathiaz> hggdh: the test cases should be run from the Cloud Controller
<mathiaz> hggdh: (which is cempedak for most of the topologies)
<hggdh> ok, logging in to nickel
<mathiaz> hggdh: to reach cempedak, you need to go through tamarind
<mathiaz> hggdh: nickel is only used for PDU
<hggdh> k
<mathiaz> hggdh: and control the power
<hggdh> mathiaz: it does not seem I can reach cempedak from tamarind
<hggdh> my ssh request seems to hang
<mathiaz> hggdh: have you set up your ssh_config file as outlined in the README file in the uec-testing-preseed bzr branch?
<Zider> I have a problem where cryptdisk sometimes only creates /dev/mapper/name and sometimes creates both that and /dev/mapper/name_unformatted.. is there a known problem with this?
<mathiaz> hggdh: I can login in cempedak
<mathiaz> hggdh: if you setup a local .ssh/config file you'll be able to directly ssh into cempedak from your workstation
<Zider> also, how come the startup process tries to mount the maps before cryptdisk is finished creating them? :P
<DrNick_> hi. is anyone here well versed in samba & Active Directory authentication using likewise-open?
<hggdh> mathiaz: getting it in place
<hggdh> mathiaz: and I am expected to login as ubuntu to cempedak?
<mathiaz> hggdh: yes
<kirkland> kees: see screenbin(1)
<KingMuty> coz linux channel is so CLI-ish
<KingMuty> dudes
<KingMuty> my network aint working
<bogeyd6> oh noes KingMuty
<KingMuty> so
<KingMuty> anyone wanna help?
<KingMuty> why do haxors sleep with each other?
<KingMuty> cause they are fags that is why
<bogeyd6> that was weird
<jiboumans> guess he showed us...
<bogeyd6> jiboumans, what was he even trying to say
<jiboumans> 'i am 13, hear me roar' ?
<bogeyd6> perhaps
<bogeyd6> sho is quiet up in this piece
<_ruben> sshhh
<bogeyd6> go trolling, trolling on the channel
<bogeyd6> lets go trolling, lets go trolling on the channel of ubuntu
<usuario_> i need help using a switch to set up a dhcp server to share internet with 6 computers. any thoughts?
<DrNick_> usuario_: yeah - plug all the computers into your switch, dhcp server included, then set up your dhcp server.  simple as that
<EdT2001> Hello
<EdT2001> if i set up an ubuntu media server will windows be able to map to it?
<EdT2001> fu_ck you
<guntbert> !language | EdT2001
<ubottu> EdT2001: Please watch your language and topic to help keep this channel family friendly.
<soren> Good riddance.
<jetole> hey guys. I am new to preseeding and trying it out. Does anyone know how I can have a random host name auto assigned, or if there is a way to use a scheme to create a random host name?
<ikonia> jetole: either dhcp or a script at start up
<jetole> ikonia: how do I use a script at start up?
<ikonia> just write a script to take random letters (26 letters) and random numbers (0-9), pick a combination of say 12 of them, and write them to /etc/hosts and /etc/hostname
<ikonia> why you would want to do that, I don't know
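A minimal sketch of the script ikonia describes (the writes to /etc/hostname and /etc/hosts need root and belong on the machine being installed, so they are left commented out):

```shell
# pick 12 random characters from [a-z0-9] for a throwaway hostname
newhost="$(tr -dc 'a-z0-9' < /dev/urandom | head -c 12)"
echo "generated hostname: $newhost"
# as root, on the target machine:
#   echo "$newhost" > /etc/hostname
#   sed -i "s/^127\.0\.1\.1.*/127.0.1.1 $newhost/" /etc/hosts
#   hostname "$newhost"
```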
<jetole> ikonia: I'm planning on deploying a lot of machines and don't want to be prompted for the host name when I install
<ikonia> so set profiles, or use dhcp to set the hostnames, that would be better than a script
<jetole> what do you mean profiles? I'm still new to this
<ikonia> profiles could be anything, like a text file with a list of hostnames that cycles through and hands out the next in the list for each install; the best way would be to have dhcp offer up hostnames
<jetole> ok, I will look into dhcp
<ikonia> that's a good way of doing it, plus it integrates well into future management of the machines
<ikonia> eg: dns
<jetole> ikonia: what do you recommend for an apt caching system?
<ikonia> jetole: local mirror works well,
<ikonia> maybe squid hosting on a local mirror
<ikonia> tons of options
<jetole> Well I'm looking at items like apt-proxy, apt-cacher-ng, debproxy etc
<jetole> just don't know which one I should go with
<ikonia> do a little research and maybe do a proof of concept, experiment
<jetole> well the POC is exactly what I am doing now before the live deployment
<ikonia> great, have a play and keep in the back of your mind scalability for your target deployment
<ikonia> must dash
<jetole> k
<uvirtbot> New bug: #540596 in qemu-kvm (main) "NX memory not simulated for ARMv7 and above CPUs" [Undecided,New] https://launchpad.net/bugs/540596
<Sorell> hey guys, I'm having issues with my UEC private setup
<Sorell> I have the server installed and
<Sorell> have the cluster controller and the nodes setup
<Sorell> I am getting an error though I and I have no idea where to go to fix it
<Sorell> I am getting not enough resources available: address (try --addressing private)
<Sorell> but when I try addressing private option the instance will never start
<Sorell> any ideas
<alex88> hi guys..can i ask here for ubuntu server lucid?
<twb> alex88: sure.
<alex88> ok, so..i'm trying ubuntu lucid server on virtualbox, i've tried cloud install, i've set 512mb of ram, install is fine, i've set host-attached network
<alex88> host has 192.168.56.1, ubuntu cloud as guest has 192.168.56.2
<alex88> on boot it says: init: eucalyptus-network main process killed by term signal
<alex88> then there aren't any eucalyptus process running, and i can't access gui
<alex88> via https
<alex88> also there aren't any java process running
<twb> alex88: I don't do EUC yet, so I can't help.  Patiently wait for someone else to respond.
<alex88> twb: thank you anyway..maybe there is a channel for cloud?
<twb> Hmm, maybe #ubuntu-euc?  /list doesn't work for me.
<twb> !euc
<alex88> nope..i'll wait here
<Sorell> #ubuntu-cloud ;)
<Sorell> no one really in there though
<alex88> i've tried it..btw..i've done "service eucalyptus start"
<alex88> and there are a lot of connections between 9001 port and others in localhost
<alex88> oh...it's started
<alex88> thank you anyway
<alex88> =)
<alex88> -.- username not found...
<alex88> wtf..
#ubuntu-server 2010-03-18
<Sorell> alex88: glad to see I'm not the only one having issues....
<Sorell> :)
<alex88> Sorell: ubuntu cloud? =) me too..after 4 min i've started eucalyptus-cc..now starting eucalyptus-cloud
<Sorell> alex88:
<Sorell> yeah I'm having issues with addressing
<Sorell> and now I can't SSH in for some reason.... :(
<alex88> where are you installing it?
<Sorell> ?
<Sorell> It's like 30min away from me right now
<Sorell> :(
<Sorell> in my bedroom ....
<alex88> lol.. dunno man..have you ever logged into it?
<Sorell> yes
<Sorell> https://71.43.249.21:8443/
<Sorell> furthermore
<Sorell> pasileypc.com
<alex88> and now it has wrong address?
<Sorell> paisleypc.com*
<Sorell> it works when I type in the domain but when I use the address nothing
<Sorell> and I can't ping it for some reason
<Sorell> and nmap can't get any info back
<Sorell> :(
<Sorell> alex88: no I am having an issue with the vms getting IPs
<Sorell> that one should be static
<alex88> oh k, now i'm having troubles starting eucalyptus-cc
<Sorell> :/
<Sorell> that's one thing I havn't had an issue with yet
<jetole> ikonia or anyone else. How do I specify a hostname for all clients using dhcp, without manually setting a hostname per known mac address? In this case I won't know all mac addresses ahead of time. I'm using ISC dhcp3, but I will change this if someone knows a better dhcpd that supports pxe booting
<jetole> for example, when I connect to my ISP I get a hostname like c-1-2-3-4.hsd1.fl.comcast.net. through dhcp when my ip is 1.2.3.4
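One hedged way to get the comcast-style behaviour jetole describes with ISC dhcpd is to derive the hostname from the leased address, using the expression syntax documented in dhcp-eval(5). An untested sketch; the subnet, range, and "node-" prefix are made up:

```
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.50 10.0.0.250;
  # hands out names like "node-10-0-0-51":
  option host-name = concat("node-",
      binary-to-ascii(10, 8, "-", leased-address));
}
```

Whether the installer actually adopts the name depends on the dhcp client requesting and honouring the host-name option.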
<Sorell> ping 71.43.249.20
<Sorell> err
<Sorell> sry
<alex88> i'm thinking, if i have about 4-5 pcs in my house, can i use one as controller and the others as nodes, and when someone turns on a pc he logs into a vm that uses the power of all connected nodes?
<uvirtbot> New bug: #540625 in qemu-kvm (main) "Why was QEMU proper dropped in favor of KVM?" [Undecided,New] https://launchpad.net/bugs/540625
<Razernok> hi
<Razernok> I need some big help
<Razernok> in /etc/hosts
<Razernok> I'm to add a line of
<Razernok> xxx.xxx.xxx.xxx my_domain.com my_machine
<Razernok> how exactly do i write it?
<Razernok> from the installation guide (xxx.xxx.xxx.xxx is your public IP and 'my_domain' is the domain where the panel listens)
<jiboumans> 192.168.1.1         www      www.example.com
<jiboumans> for example ^
<jetole> Razernok: man 5 hosts
<Razernok> i did earlier 127.0.0.1 my.site.com
<jetole> Razernok: that's right, and so is 127.0.0.1 my my.site.com
<jetole> it's whitespace separated, so a space is the same as a tab
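The hosts(5) layout jetole is pointing at is `IP-address canonical-hostname [aliases...]`, with fields split on any whitespace. A quick sketch with hypothetical values, written to a scratch file rather than the real /etc/hosts:

```shell
# 203.0.113.10 is from the documentation address range; the names are the
# placeholders from Razernok's guide
printf '203.0.113.10\tmy_domain.com\tmy_machine\n' > /tmp/hosts.example
awk '{print $2}' /tmp/hosts.example   # second field: the canonical hostname
awk '{print $3}' /tmp/hosts.example   # third field: an alias (my_machine)
```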
<Razernok> what about the my_machine part
<jetole> man 5 hosts
<jetole> type that in bash
<Razernok> huh?
<jetole> and hit enter
<Razernok> bash?
<jetole> console
 * twb hands jetole a stiff drink
<jetole> bash, dash, csh, ksh, sh, etc etc
<jetole> twb: thanks! I need one
<Razernok> huh?
<twb> I think you can't get ksh on Ubuntu, only pdksh
 * jetole shrugs, never tried
<jetole> Razernok: find the console on your system
<jetole> the area where you have a screen and it a prompt and you can type commands
<jetole> you can get it from pressing ctrl+alt+f1 - f6 and use f7 to get the gui back but thats the worst way
<jetole> go to menu -> accessories -> terminal
<jetole> that works on gnome
<Razernok> i had to reinstall ubuntu server after a bug caused it unable to boot
<Razernok> i was making changes like in this guide and after restart it couldn't find some file.
<jetole> I give up
<jetole> I could use a shot of white tequila right now
<jetole> been sick as a dog for over a week and tired of it
<jebba> I did a dist-upgrade on EC2 and i saw it installed a new kernel, but on reboot it didn't use it. The grub stuff is kinda missing (nothing useful under /boot/grub/). How do I reboot into the latest kernel?
<jayvee> does EC2 even boot via grub?
<jayvee> I would imagine that it would load the kernel directly
<Sorell> !cloud
<Sorell> Anyone know of a good guide on how the networking works in eucalyptus?
<jebba> jayvee: ya, it appears not to use grub, but I'm not sure how to tell it which kernel to use.
<jayvee> jebba: I'm guessing in the EC2 config for the VM
<jayvee> i.e. not inside the VM itself
<jebba> thx
<jetole> does anyone know how to make preseed exclude a package when it installs? I used the tasksel for server only, but I have never needed nano on my servers!
<jetole> agh, and even with server mode it still installed libgtk and openoffice
<jetole> wtf
<uvirtbot> New bug: #540693 in spamassassin (main) "does not terminate at computer shutdown" [Undecided,New] https://launchpad.net/bugs/540693
<altf2o> quick question. I'm on Ubuntu 9.10, using rdiff-backup 1.2.8-1ubuntu2. It works great except for when it encounters files with a question mark in their name. Been searching coming up empty, anyone found a way around this?
<jayvee> nice of you guys to drop by
<marcus_> hi all. i have set up login via nss_ldap (passwd / groups). getent works fine but login takes extremely long.
<marcus_> i have already tried to set up nscd, even with a local cache - without luck.
<uvirtbot> New bug: #540747 in apache2 (main) "Apache Web DAV incorrect permissions" [Undecided,New] https://launchpad.net/bugs/540747
<twb> maintenance-check: Fetching seeds for hardy (this may take a moment) ...
<twb> ...is it just me, or does that take like fifteen minutes for everyone?
 * jussi01 breathes deeply... ok, my /etc/hosts is ruined and I need to edit it to fix it. (some bug in hardy). So I tried to drop to a root console in recovery mode, but it asks me for a root password (even though I've not set one). thoughts on how to fix?
<ejat> hi .. just wanna check with u guys ..
<ejat> is it confirmed that nagios.cmd is missing in karmic?
<FireCrotch> jussi01: um... set one? or boot into actual single user mode
<jussi01> FireCrotch: sorted now... but actual single user mode?
 * jussi01 thought that was...
<FireCrotch> jussi01: yes... stick "single" at the end of the kernel line in grub
<FireCrotch> should take you straight to a root prompt
<jussi01> ahh, I didnt realise that. thanks for the tip
<FireCrotch> You're welcome :)
<twb> I'm still ambivalent about it not asking for a password
<FireCrotch> twb: for what? booting into single user mode?
<twb> FireCrotch: traditionally single would ask for root's password before giving you root privileges
<twb> On Ubuntu you don't have to use init=/bin/sh to break through that.
<FireCrotch> Why bother asking for root's password if you can just use init=/bin/sh ?
<twb> It's about as secure as a warm blanket, but I think I still miss it.
<twb> FireCrotch: well, after you init=/bin/sh, you sometimes have to dance about a bit to get write access to the disk
<twb> But I imagine it's hysterical raisins
<twb> Probably once upon a time init=/bin/sh wasn't a back door, either.
<FireCrotch> twb: your point is? Anyone who knows enough to bypass the password should know enough to get write access
<FireCrotch> and if someone is standing at the console, they can just unplug the machine and walk away with it, so why bother securing it any more than that
<FireCrotch> or open it up and take the hard drive
<FireCrotch> You have that problem with practically any system though
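(For reference, the "dance" twb mentions after init=/bin/sh is usually just remounting the root filesystem read-write — a sketch of a typical session, file path as in jussi01's case:)

```shell
# at the GRUB kernel line, append: init=/bin/sh
# the shell you land in has / mounted read-only, so:
mount -o remount,rw /
vi /etc/hosts            # make the fix
sync
mount -o remount,ro /    # put the disk back in a safe state
reboot -f
```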
<Zider> I have a problem with cryptdisk sometimes only creates /dev/mapper/name and sometimes both that and /dev/mapper/name_unformatted.. is there a known problem with this?
<Zider> also, how come the startup process tries to mount the maps before cryptdisk is finished creating them?
<SquidNoob> is TPROXY working with ubuntu server 9.10?
<jayvee> SquidNoob: probably
<jayvee> you having troubles with it?
<owh> Is there a way that I can configure vsftpd to force an authenticated user to upload to the directory that the anonymous user can download from?
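(owh's question goes unanswered in the log; a hedged vsftpd.conf sketch that points anonymous downloads and authenticated uploads at the same tree — the directory path is hypothetical:)

```
# anonymous users download from this tree
anonymous_enable=YES
anon_root=/srv/ftp/pub
# authenticated users are chrooted into the same tree and may write to it
local_enable=YES
local_root=/srv/ftp/pub
write_enable=YES
chroot_local_user=YES
```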
<SquidNoob> jayvee, yes, I'm not sure whether ebtables or iptables is failing, but as I read on google, it does NOT work with debian/ubuntu
<SquidNoob> I am using these rules to redirect port 80 from the bridge to port 3129
<SquidNoob> iptables -t mangle -N DIVERT
<SquidNoob> iptables -t mangle -A DIVERT -j MARK --set-mark 1
<SquidNoob> iptables -t mangle -A DIVERT -j ACCEPT
<SquidNoob> iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
<SquidNoob> iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
<SquidNoob> ebtables -t broute -A BROUTING -p IPv4 --ip-protocol 6 --ip-destination-port 80 -j redirect --redirect-target ACCEPT
<SquidNoob> When I put these rules can not open any website, and port 3129 is not getting anything
<SquidNoob> if i do an "#dmesg | grep TPROXY" get:
<SquidNoob> [   10.827732] NF_TPROXY: Transparent proxy support initialized, version 4.1.0
<SquidNoob> [   10.827738] NF_TPROXY: Copyright (c) 2006-2007 BalaBit IT Ltd.
<jayvee> oh right, this is IPv6 transparent proxying
<jayvee> neat, but I've never touched that before
<jayvee> hang on, wait, maybe it's not
<jayvee> --ip-protocol 6
<jayvee> what's that?
<SquidNoob> TCP
<SquidNoob> ebtables -t broute -A BROUTING -p ipv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target ACCEPT
<SquidNoob> dont work too
<SquidNoob> sorry for my bad english
<Zider> I have a problem with cryptdisk sometimes only creates /dev/mapper/name and sometimes both that and /dev/mapper/name_unformatted.. is there a known problem with this?
<jayvee> SquidNoob: I'm afraid I can't help you with the TPROXY module, but have you considered using NAT instead?
<jayvee> it's relatively easy to get going.
<SquidNoob> jayvee: Unfortunately, nat does not help me, because I need to do IP spoofing with the customer IP, but thanks for your help
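(For later readers: SquidNoob's rules match the usual TPROXY recipe except for one commonly missed piece — the policy-routing pair that makes the kernel deliver marked packets locally. A hedged sketch, assuming TPROXY support is present as the dmesg lines suggest:)

```shell
# deliver packets marked 0x1 by the DIVERT chain to the local machine
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# the NAT alternative jayvee suggests -- simpler, but Squid then sees the
# proxy's own IP instead of the client's, so no client-IP spoofing:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
```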
<ttx> zul, smoser: could you split the remaining EC2 tests between you two ?
<ttx> zul, smoser: or should we just get rid of that test, to replace it with a more thorough cloud-config test ?
<smoser> ttx, zul_ we should replace that with a cloud-config test. i will work on putting one together and getting tested here in a few minutes.
<ttx> smoser: ok
<smoser> i did test the ebs root yesterday, and pushed what i had to https://launchpad.net/~smoser/+junk/ec2-test
<zul_> smoser: hmmmm?
<zul_> smoser: you want me to upload something?
<smoser> no. thats just the "test suite" (for lack of a better term) that i use to run 'test-multi'.
<smoser> i'll get some test cloud-config stuff together and add it there.
<zul> ah nifty
<swift> Hi guys, just a query, has the support and upgrades for UBUNTU 6 series LTS expired?
<swift> is it over?
<AntORG> I think it's 5 years for the server edition
<swift> hi guys.. im talking about ubuntu-server 6 series LTS.. is it an eol now?
<swift> AntORG.. when had it started?
<AntORG> 6.06
<swift> 2006
<swift> hmm
<AntORG> that's june 2006
<swift> one year to go then
<AntORG> the "version number" of ubuntu gives you information about the year and the month it was released
<AntORG> so 8.04 for instance means april of 2008
<swift> yeah
<swift> ok
<swift> thanks
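(AntORG's numbering scheme is mechanical enough to decode in one line — a toy illustration, assumes bash:)

```shell
# decode Ubuntu's YY.MM version numbers (scheme described above)
release_date() {
  # 10# forces decimal so months like "04" aren't read as octal
  printf '%d-%02d\n' "$((2000 + 10#${1%%.*}))" "$((10#${1##*.}))"
}

release_date 6.06   # 2006-06, the LTS in question
release_date 8.04   # 2008-04
```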
<swift> is it possible to upgrade 6.10 edgy to 6.06 LTS?
<AntORG> you mean downgrade. It probably is somehow but I wouldn't recommend it.
<swift> but edgy is bad as compared to LTS right?
<_ruben> downgrades arent supported, edgy should be upgraded to feisty, then gutsy, and then hardy (lts)
<_ruben> !edgy
<ubottu> Ubuntu 6.10 (Edgy Eft) was the fifth release of Ubuntu. End Of Life: April 25th, 2008. See !eol for more details.
<_ruben> eol'ed nearly 2 years ago
<AntORG> _ruben that's the reason he wants to downgrade to 6.06 which hasn't reached the eol yet
<swift> yes
<swift> is it possible?
<_ruben> one shouldnt downgrade to "avoid" EOL, one should upgrade
<AntORG> and 8.04 isn't an option, because...?
<_ruben> upgrades are supported, downgrades are not
<_ruben> if a downgrade breaks your system, you do get to keep both pieces though
<swift> this is a production system.. and it involves risk to go edgy to feisty, then gutsy, and then hardy (lts)
<_ruben> the risk of downgrading is probably even bigger
<swift> ok... so il keep it untouched.. and get a new server
<_ruben> and if its a production system, it should've been upgraded ages ago
<swift> thanks soo much guys!!!...
<swift> yeah..true
<swift> i just got introduced to it the other day
<swift> boy .. it's an oldtimer
<swift> :p
<AntORG> create a backup disk image and try upgrading and if it doesn't work out use the backup
<thafreak> Is the beta 1 ISO available yet to begin testing?
<_ruben> if its got raid1, you could break it and keep 1 half as backup </horror-story-mode>
<Jeeves_> !mvo-- # Manpage slacker
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<Jeeves_> Pff, employed by Canonical, i'm sure...
<_ruben> anyone have any experience in using ssd's in their server(s)?
<Jeeves_> yes
<_ruben> Jeeves_: ran into any issues?
<_ruben> i know windows doesnt really "like" ssds until windows 7
<_ruben> currently looking at nilfs+ssd .. seems like a killer combo
<Jeeves_> _ruben: No issues at all.
<Jeeves_> Just more performance
<_ruben> Jeeves_: good to hear :)
<_ruben> Jeeves_: did you do any special tricks/optimizations?
<Jeeves_> _ruben: No, not at all
<_ruben> nice
<acalvo> any good way to do a service failover over servers? So a secondary server starts if the primary server crashes
<jalons> acalvo: heartbeat
<acalvo> ok, great (I've thought it was only a load-balancer)
<_ruben> heartbeat/pacemaker/corosync/openais
<_ruben> heartbeat doesnt load-balance, but is used a lot on loadbalancers to make em redundant :)
<acalvo> ok, thanks!
<vertx> Does anyone has experience with GlusterFS? I need expandable storage distributed across several servers. Any thoughts/suggestions?
<swift> guys, does Ubuntu 7 series have an LTS server edition?
<Japje> lts is every 2 years
<Dr_Jekyll> nope - 6.06 and 8.04 and the upcoming 10.04
<Japje> to 6/8/10
<vertx> swift: why don't you use 8.04 instead?
<swift> vertx, I want to upgrade from 6.10Edgy to an LTS version
<faileas> swift: upgrade to 7x then to 8.04
<vertx> swift: then you should use 8.04 or wait until 10.04 is released
<swift> guys, is there any risk involved?
<acalvo> about HA, there is something I don't understand. If I want to have a HA web server distributed in 2 or more server, is load-balancer's job to get the IP and pass the information? or it justs decides to which web server goes the request? I'm concerned about how to make the server available (how to set up the DNS public name to be reachable)
<_ruben> acalvo: depends on what you're trying to achieve: active/active (performance) or active/passive (failover/redundancy)
<acalvo> active/passive
<_ruben> active/passive wont need a loadbalancer
<_ruben> with active/passive there'll be a "floating" ip address which will be active on the active node
<acalvo> and where is the process that decides which server should be taking requests?
<_ruben> acalvo: both nodes talk to eachother, if one stops hearing the other, or if the other says it's going standby, the local node will become active
<acalvo> oh, ok
<acalvo> now seems more reasonable
<acalvo> I've thought that it was necessary to have a specific computer to decide which node was active
<acalvo> thanks _ruben
<_ruben> that's also possible
<acalvo> well, it makes sense to have a 3rd computer which do that job?
<_ruben> it has its pros and cons
<_ruben> the keyword here is quorum .. with 3 nodes there's always a majority/minority .. with just 2 nodes that's not possible
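(A toy illustration of _ruben's quorum point — a partition may only take over when it holds a strict majority of the cluster's votes:)

```shell
# strict-majority check: succeed only if $1 votes out of $2 nodes is a majority
has_quorum() { [ "$1" -gt "$(( $2 / 2 ))" ]; }

has_quorum 1 2 || echo "2-node split: neither side has quorum (split-brain risk)"
has_quorum 2 3 && echo "3-node split: one side always holds 2 of 3 votes"
```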
<arch0njw> I am attempting to install tomcat6 on ubuntu server 8.10 and it is saying that package is not found.  Am I missing a repository or misspelling the package name?
<jalons> but, fencing adds a lot of overhead
<acalvo> well, I'll start with just 2 nodes
<arch0njw> andol: faileas: I followed the simple setup for ubuntu desktop and that worked fantastically.  Thank you for the advice yesterday.
<acalvo> and see what happen
<faileas> yay
<faileas> even if as i recall, my answers weren't that useful XD
<arch0njw> faileas: someone to bounce ideas off is -always- helpful.  :)
<jalons> acalvo: I highly suggest additional cat5 or serial runs to each HA node - it's surprising how often a single cable becomes unpluged leading to splitbrain or worse situations
<arch0njw> huh-boy.  so it is tomcat5.5, not tomcat6 despite the Ubuntu Server 8.10 wiki saying that tomcat6 can be installed with apt-get from the standard repos.
<acalvo> jalons: I'll take that in mind... I've been having troubles with the only web server we have here (it crashes once a day randomly), so I'll deploy and test a HA with 2 nodes and see what happen. Bad thing is the servers I'm going to use just have 1 rj45...
<mathiaz> hggdh: hi! around?
<mathiaz> hggdh: shall we continue the UEC testing?
<swift> guys, is it ok to have nagios, smokeping setup on an eol server?
<jmazaredo> will two gateway on same network work like the other? i have this problem http://tinypic.com/view.php?pic=vys4ld&s=5
<jetole> does anyone know why libgtk and openoffice (these are just the ones I have noticed) seem to be part of the base install with preseeding, even when you use server as the tasksel
<ttx> arch0njw: tomcat6 is available in 8.10
<ttx> https://launchpad.net/ubuntu/+source/tomcat6
<ttx> (fwiw it's also in 8.04)
<ttx> mathiaz: do you plan to run your magic ISO testing script ?
<mathiaz> ttx: hm - for lucid beta1?
<hggdh> mathiaz: hi, I am here
<mathiaz> ttx: no - I thought zul would take up iso testing
<ttx> mathiaz: he did, was just wondering if you planned to run it or not
<mathiaz> ttx: as I'm working on some puppet WI for beta1 and helping out hggdh for UEC testing
<ttx> mathiaz: ok
<mathiaz> ttx: not for now
<zul> ttx: i did the iso testing yesterday
<ttx> mathiaz,zul: we could use someone for the RAID1 test
<hggdh> BTW -- all -- I would like your comments on the changes I did to the server guided whole disk install
<zul> ttx: k ill take a look
<mathiaz> ttx: IIRC RAID installation are broken
<mathiaz> ttx: cjwatson was working on it at the begining of the week
<ttx> mathiaz: yes they are -- woud be good to have bugs filed for it though
<mathiaz> ttx: not sure if he fixed it in time for beta1
<ttx> mathiaz: no he didn't
<jetole> ok, I think I just solved my own question: http://ubuntuforums.org/showthread.php?p=3088943
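(For reference, the preseed keys usually involved in jetole's problem — a hedged fragment; the key names are standard d-i preseeding, the package choices are only examples:)

```
# install only the server task, plus an explicit extra package
tasksel tasksel/first multiselect server
d-i pkgsel/include string openssh-server
# packages the task pulls in anyway can be purged at the end of the install:
d-i preseed/late_command string in-target apt-get -y purge nano
```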
<mathiaz> ttx: there is a bug about it no?
<ttx> mathiaz: I'll check up with him
<ttx> arrh, who added a test case ?
<ttx> hggdh: you added the "preseeded" test case ?
<arch0njw> ttx: huh.  it doesn't show up in the package list for Ubuntu Server.
<mathiaz> ttx: seems so
<ttx> arch0njw: you must be doing something wrong
<arch0njw> sudo apt-get install tomcat6 ...?
<mathiaz> ttx: preseeded testing is probably worth testing
<arch0njw> ttx:  that's pretty standard.  No package is found.
<mathiaz> ttx: as well as a kickstart install
<EhrN> hi all. I try to install dtc-toaster, someone have success install of this panel?
<ttx> arch0njw: I guess something is wrong in your mirror/apt.sources
<mathiaz> ttx: not for beta1 though
<ttx> mathiaz: agreed, but adding the test now and not completing it makes us look bad
<mathiaz> ttx: http://www.youtube.com/watch?v=dsUXAEzaC3Q ?
<smoser> ttx, i've started instances for each candidate ami with '--user-data-file ud-multipart-01.txt' at http://bazaar.launchpad.net/%7Esmoser/%2Bjunk/ec2-test/files/head%3A/user-data/ and then verified that they did what was expected.
<ttx> smoser: cool
<hggdh> ttx yes, there is a (right now) simple preseed test. I am considering preparing preseeds for most, if not all, of the common server installs
<smoser> i put 3 user data tests in that directory, the goal would be to pull those into the 'test-multi.sh' launcher in the dir above it.
<ttx> hggdh: could you complete that test for us ? It would make our beta1 test coverage look better
<ttx> hggdh: also please don't add new testcases at the last minute on milestone release day
<hggdh> ttx there is a caveat: since I have to use an url, questions will be asked until hostname is reached (the url is only loaded after it)
<ttx> hggdh: the testcases should generally be updated before a milestone campaign, not in the middle of it.
<hggdh> ttx: will not do it next time...
<hggdh> ttx: I have already tested the preseed multiple times, will mark it tested
<ttx> hggdh: cool, thanks
<hggdh> ttx: please keep in mind that I did not have much time to work on them, and they were scheduled for beta1
<ttx> hggdh: ah... maybe retargeting them to beta2 makes sense then. We need some adaptations for beta2 anyway (on the cloud image front)
<ttx> hggdh: feel free to add new tests, just make sure you mark them completed as soon as they reach the tracker
 * ttx cannot go to bed until http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/all shows all tests covered
<jetole> does anyone know how I can set the default editor system wide for all users and all new users that don't exist yet?
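(jetole's editor question never gets an answer in the log; on Debian/Ubuntu the alternatives system is the usual mechanism — a hedged sketch, the vim path is an example:)

```shell
# list the candidates registered for "editor"
update-alternatives --list editor
# set the system-wide default for all accounts, current and future
sudo update-alternatives --set editor /usr/bin/vim.basic
```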
<hggdh> ttx: will do. Sorry for the surprise
<hggdh> ttx: all tests I have run so far are marked. All required tests are now covered also
<mathiaz> hggdh: I've already got preseeds for all common test cases
<mathiaz> hggdh: this is what ttx was refering to when he mentioned whether I was running my iso testing scripts
<ttx> \o/ all tests covered !
<hggdh> mathiaz: ah, OK. If you do not mind making the preseeds public, we can add them later
<mathiaz> ttx: good night!
<ttx> mathiaz: nah, I still need to do some uec multi-network tests :P
<hggdh> mathiaz: would the installations we did yesterday qualify for test coverage?
<mathiaz> hggdh: yes - for the topology we tested
<ttx> mathiaz,zul: for perfection, we still need to cover:
<ttx> http://iso.qa.ubuntu.com/qatracker/result/3788/356
<ttx> http://iso.qa.ubuntu.com/qatracker/result/3785/357
<mathiaz> hggdh: https://code.launchpad.net/~mathiaz/+junk/iso_testing_scripts
<mathiaz> hggdh: ^^ these are my iso testing script
<mathiaz> ttx: hm - netbooting
<mathiaz> ttx: well the UEC testing rig uses netbooting
<mathiaz> ttx: but not the mini.iso
<mathiaz> hggdh: oh - and the test we did yesterday wouldn't count for beta1 as we've tested installation from the archive rather than for an iso
<zul> i dont think if have the infrastructure for netbooting
<mathiaz> zul: the test case for netbooting is actually based on the mini.iso
<zul> mathiaz: k
<mathiaz> zul: so you don't need to have a PXE server to run the netboot test case
<zul> heh well once i get through this it will be next
<hggdh> mathiaz: I branched your iso-tests-scripts. I will adapt them for future ISO tests
<mathiaz> hggdh: the scripts are actually tweaked for my own environment
<mathiaz> hggdh: and based around libvirt and qemu
<mathiaz> hggdh: the preseeding part can easily be extracted and reused though
<hggdh> mathiaz: I expected they would be tweaked ;-) this is why I expect to have to adjust them
<mathiaz> hggdh: https://code.launchpad.net/~mathiaz/+junk/iso-testing-cfg
<mathiaz> hggdh: ^^ this is actually the configuration with the latest version of the preseeds
<hggdh> mathiaz: thank you. Branched
<zul> ttx: oh you did the raid1 install
<zul> i got the same thing
<ttx> zul: well, I reported the bug secondhand
<ttx> zul: so it's good you covered it
<zul> ttx: i was able to reproduce it
<ttx> cjwatson: when you have the time, please comment on feasibility of https://bugs.launchpad.net/ubuntu/+source/eucalyptus/+bug/540167 : can we have the tasks available for the UEC installer while not being displayed by the Server installer tasksel ?
<zul> mathiaz: there is already a fix in the queue for munin
<mickster04> hey guys, im tryin to set up a vpn server, im not sure why it isnt working:/ i get error 800 on windows but my ubuntuinstall on my laptop doesnt work either
<mathiaz> zul: for bug 538902?
<mickster04> well it kinda works, in that i can connect to it, but i cant acces the internet thru it:/
<stas> hi guys, anybody can recommend some better replacement for nscd
<cemc> can I get some stats/status oout of my openntpd ?
<warlock_mza> hi guys, I need an init script for hostapd but I want to keep things clean. Should I add post-up to network/interfaces ? write an /etc/init/hostapd.conf ? or just update-rc.d to add to startup ?
<warlock_mza> do both the event driven /etc/init/ systeym + the /etc/init.d scripts work in parallel ?
<mickster04> well this channel is good :P
<pmatulis> warlock_mza: yes
<warlock_mza> pmatulis, hey that might be too specific :-)
<pmatulis> warlock_mza: sorry?
<warlock_mza> pmatulis, nm
<zul> mathiaz: yep
<mathiaz> zul: if you upload a fix could you leave a note in the relevant bug to avoid duplication
<mathiaz> zul: especially during freezes
<zul> mathiaz: sure sorry about that
<mathiaz> zul: since LP won't update the bug right away
<hggdh> mathiaz: whenever you have time we can go back to the UEC whatchamacallit
<hggdh> mathiaz: or, if you do not mind, I can get back there and keep on from where we stopped, and ping you if needed
<mathiaz> hggdh: please go ahead with testing
<mathiaz> hggdh: I don't use the test rig right now
<mathiaz> hggdh: I'd suggest you keep going through the test case
<mathiaz> hggdh: if you have any question I'll answer them
<hggdh> mathiaz: deal
<SquidNoob> someone has managed to run ebtables in ubuntu server? I think it's impossible :(
<hggdh> kirkland: on testdrive, if a DISK_FILE is provided, should it be formatted (as it is right now) or just used as-is? I would vote for using as-is ;-)
<arch0njw> I have Ubuntu Server 8.04 running on an ESX VM. I tried to install vmware tools and it barked furiously -- even after I got the headers installed.  Anyone here have a handy link to point me to for a tried-and-true set of steps to get vmware tools working?
<kees> mathiaz, ttx: can you guys look at bug 292971 for lucid and maybe hardy?
<mathiaz> kees: is that a security issue?
<mathiaz> kees: or is just an important bug to fix?
<vadi01> guys have a problem with the ubuntu server. apache refuses to load javascript to LAN users but WAN users can load them when they access the server home page
<kees> mathiaz: just an important bugfix (i.e. after enough time, they can't use the system due to leaked memory use)
<vadi01> any idea why? or is there a specifig permission i need to set for this
<mathiaz> kees: ok - thanks
<sherr> vadi01: have you compared the page source (same page) between WAN and LAN? maybe there's something stripping things out between server and LAN user?
<mathiaz> smoser: how horrible is it to copy your aws credentials to a running EC2 instance?
<smoser> i personally dont think its too bad.
<smoser> but i'm not kees
<kees> mathiaz: just don't make new AMIs with that stuff on the image.  :)  lots of people do that  :(
<smoser> mathiaz, if you think about it, the most likely entity to gain access to your credentials is a AWS employee, who could have just got them from AWS
<vadi01> sherr: yea see this http://img.flashtux.org/img132b4dc2d3efx5408c1fc.jpg
<mathiaz> smoser: right
<smoser> second most likely person is someone running an instance on the same hardware that exploits xen
<vadi01> sherr: illegal characters is the main problem
<smoser> and gets access to your memory
<mathiaz> smoser: what I'm looking for is to be able to get the list instanceID that are currently running under my account
<smoser> then also possible i guess is if storage is not cleaned sufficiently between users, someone could find your data on their block device.
<RoAkSoAx> mathiaz, i attached missing info to bug #531978. :)
<mathiaz> kees: I'm currently experimenting with puppet
<mathiaz> kees: and I'm looking for a way to semi-automate the client registration
<mathiaz> kees: the idea being that the client sends its instanceId as part of the certname, and then the puppetmaster checks if that instanceId is actually running under the same account
<soren> kirkland: Is there a way to adjust how aggressively ksm should scan for duplicate pages?
<mathiaz> kees: so if the instanceId is a known running instanceId the registration proceeds
<mathiaz> kees: how does that seem?
<DrNick_> evening. is anyone fairly well versed in samba Active Directory authentication via likewise-open?
<smoser> mathiaz, http://www.shlomoswidler.com/2009/08/how-to-keep-your-aws-credentials-on-ec2.html
<smoser> that has some info.
<kirkland> soren: i think there is ... dig around /sys
<kirkland> soren: ls /sys/kernel/mm/ksm/
<kirkland> soren: adjust /sys/kernel/mm/ksm/sleep_millisecs i think
<soren> kirkland: Yeah, that seems to be the only knob I can turn
<soren> 20 msecs between each complete scan..
<soren> That sounds like a very agressive default.
<soren> perf top tells me that something like 20% of my cpu time is spent scanning for shareable pages.
<soren> That's a lot in my book.
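(The knobs kirkland points at are the standard KSM sysfs interface; a hedged sketch of backing the scanner off — requires root, values illustrative:)

```shell
cat /sys/kernel/mm/ksm/sleep_millisecs                   # pause between scan batches
echo 200 | sudo tee /sys/kernel/mm/ksm/sleep_millisecs   # scan 10x less often
cat /sys/kernel/mm/ksm/pages_to_scan                     # batch size is the other lever
echo 0 | sudo tee /sys/kernel/mm/ksm/run                 # or stop scanning entirely
```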
<kees> mathiaz: /me ponders
<soren> kirkland: Are you seeing similar behaviour? It's not that I can feel the machine being heavily loaded by it, I just wondered why my fan was on all the time, and then noticed this.
<erichammond> mathiaz: I think it's fairly bad to copy AWS keys to an instance, but I do it because there is no good way to perform certain functions without them.
<mathiaz> erichammond: right - I've outlined my use case above
<mathiaz> erichammond: as I'd like to automate as much as possible the enrollement of puppet client
<mathiaz> erichammond: *clients*
<erichammond> mathiaz: You want the list of all instance ids or the instance id of the currently running server?
<mathiaz> erichammond: the list of all instance ids
<erichammond> What does "known running instanceId" mean?  known to whom?
<mathiaz> erichammond: as I'm using the instance id of the puppet *client* in the certname
<mathiaz> erichammond: known to my account
<erichammond> oh, so the server is checking
<erichammond> puppet server
<mathiaz> erichammond: yes - puppetmaster checks if the instance id of the requested csr is an instance id part of the aws account
<erichammond> could the client lie about its instance id and fool the server?
<mathiaz> erichammond: yes
<mathiaz> erichammond: however it would have to lie about an instance id that is *part* of the running instances for the specific account
<mathiaz> erichammond: the puppetmaster will only sign the request (and issue the certificate) if the instance id is part of the running instances
<mathiaz> erichammond: the underlying assumption here is that the instanceId are more or less randomly generated by amazon
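(The policy mathiaz describes can be sketched in a few lines — hypothetical function and certname layout, with the running-id list standing in for an EC2 describe-instances call:)

```shell
# hypothetical puppetmaster-side check: sign a CSR only when the instance id
# embedded in the certname (here: the last dot-separated field) is among the
# account's currently running instances
RUNNING_IDS="i-12ab34cd i-deadbeef"   # would come from the EC2 API

approve_csr() {
  local id="${1##*.}"
  case " $RUNNING_IDS " in
    *" $id "*) echo "sign $1" ;;
    *)         echo "reject $1"; return 1 ;;
  esac
}

approve_csr web01.i-12ab34cd           # sign web01.i-12ab34cd
approve_csr rogue.i-00000000 || true   # reject rogue.i-00000000
```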
<erichammond> mathiaz: No, they are very sequential (with some scrambling)
<erichammond> and easy to guesss
<erichammond> or guess
<erichammond> mathiaz: If you're willing to trust Amazon security groups, then it's pretty easy to only allow connections to puppetmaster from other instances in the same account.
<mathiaz> erichammond: right - that have to be part of the same security group
<erichammond> well you can specify the security group.  I.e., security group "puppetmaster" allows connections from security group "puppetclient"
<mathiaz> erichammond: can an instance be part of multiple security groups?
<erichammond> yes
<kirkland> soren: talk to aliguori about that ... he thought it was pretty aggressive
<mathiaz> erichammond: can an instance be removed from a security group while running?
<erichammond> each security group *adds* permissions (they can't take away)
<soren> kirkland: Will do.
<kirkland> soren: we can change twiddle that knob if necessary
<erichammond> The security groups assigned to an instance cannot be changed after an instance is started, but the permissions associated with each security group can be changed.
<kirkland> soren: and yes, 20ms does sound pretty frequent
<mathiaz> erichammond: hm - ok
<erichammond> (checking to see if that's still true)
<erichammond> yep, I don't see any security group option in ec2-modify-instance-attribute.  Sometimes that command seems to have new things added without me hearing about them :)
<mathiaz> erichammond: allright - so using the instanceId doesn't really help here
<mathiaz> erichammond: as it increases sequentially
<soren> kirkland: Yeah. it's so short you almost wonder why it waits at all.
<erichammond> Can't puppet use a shared secret?
<mathiaz> erichammond: hm - yes it could
<mathiaz> erichammond: the idea here is to use the certname to convey the shared secret
<mathiaz> erichammond: the instanceId being a kind of share secret
<mathiaz> erichammond: the attack scenario I'm trying to protect against is a root compromise of a puppet client
<erichammond> oh
<mathiaz> erichammond: that should *not* lead to easy access to other puppet manifests
<erichammond> how does the puppetmaster know what kind of system the client should be configured as?
<mathiaz> erichammond: while still being able to automate the registration process
<mathiaz> erichammond: that's another part of the problem that comes later
<mathiaz> erichammond: the first step is to issue a certificate to the requesting client
<mathiaz> erichammond: making sure that the client is *expected* to be asking a certificate
<erichammond> mathiaz: My company has been using puppet on EC2 and we've faced some similar issues about how to shoehorn puppet concepts into dynamically created instances.  I don't think we've completely resolved them.  I'm looking at chef to see if it might be a better fit.
<mathiaz> erichammond: well - I've got some ideas how to do that
<mathiaz> erichammond: the key part though is which components you use to *create* the instances
<erichammond> At the moment, we've dropped having a centralized (and single point of failure) puppetmaster.  Each new instance bootstraps itself as its own puppetmaster and then runs.
<mathiaz> erichammond: if the component that runs new instances is flexible enough you can ask the puppetmaster to generate a certificate for the client and send it to the instance
<mathiaz> erichammond: how do you send the manifest?
<mathiaz> erichammond: do you ship the whole site configuration to each instance?
<erichammond> mathiaz: My co-worker did most of the work.  I believe it picks it up from a private S3 URL.
<mathiaz> erichammond: ok
<erichammond> mathiaz: Yes, the whole puppet.
<erichammond> At the moment, the S3 URL expires so attackers cannot access it a bit after startup, but that's probably going to change as we get more into spot instances and auto scaling.
<erichammond> This approach does not solve your requirement of restricting access to certain manifests from a root compromised machine.
<mathiaz> erichammond: are you using Ec2 Auto-Scaling?
<mathiaz> erichammond: or you've got your own auto-scaling system?
<erichammond> mathiaz: We are not auto scaling at the moment.  We plan to add it as we grow.
<jnss> when was the last kernel update
<jnss> or whatever update required you to reboot
<mickster04> jnss i think server is desinged not to be needed to reboot
<mickster04> howelse are you gona get 365+ days of uptime:D
<njbair> why is sshd started and then restarted when ubuntu boots?
<jnss> yeah that is true mickster04
<jnss> but kernel security updates require reboots ;(
<mickster04> well i havent gotten one of those in 40+ days i know that
<njbair> nothing like when somebody brags about their uptime, then has to restart after like 6 months only to find that half of their services aren't starting properly
<mickster04> njbair: yeah, im worried about having to reboot
<njbair> I reboot monthly, if for no other reason than to avoid that kind of thing
<njbair> also, for the 5 minutes it takes, whose server can't afford a regular reboot?
<jnss> bragging is still nice
<mickster04> yeah
<jnss> a nice uptime usually denotes bsd
<njbair> actually, it usually denotes an under-utilized machine
<njbair> "Turn it up till it breaks, then back off a little bit." that's my motto!
<mickster04> yeah mine isnt used much:D but what it does it does well
 * jnss runs centos
<jnss> that thing has an old kernel
<njbair> mickster04, my home server does web, nfs, squid, svn, ssl, all on a mini-itx
<jnss> but i am told an update was issued a few hours ago
<jnss> mini itx? what model
<njbair> VIA EPIA MII
<njbair> I just found out the board has built-in hardware crypto
<mickster04> njbair: web, squid, samba ssh all on a fit-pc2:D
<njbair> I reconfigured openssl to use VIA Padlock and ran some benchmarks, it's ridiculously faster
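(The comparison njbair describes can be reproduced with openssl's built-in benchmark — hedged: the padlock engine only loads on VIA hardware with the kernel driver present:)

```shell
openssl speed -evp aes-128-cbc                    # software AES baseline
openssl speed -evp aes-128-cbc -engine padlock    # PadLock-accelerated AES
```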
<jnss> it's pretty insecure though
<jnss> referring to the number generator
<jnss> do you know the watt draw of your itx?
<jetole> does anyone know how to test commands that you run in preseed
<jetole> mine fails and I have tested it pretty thoroughly outside of the preseed run
#ubuntu-server 2010-03-19
<caseys> has anyone installed and used postfix?
<jetole> me
<jetole> why? This isn't the #postfix room
<lamont> jetole: it often is the postfix room, though. :-(
 * jetole shrugs
<jetole> anyone know where I can find the docs for a program called in-target that exists only in the install cd?
<jetole> you can ignore my in-target question. It's a shell script
<xorl> hey anyone ever experience an issue with the apache2 binding to www-data even when specifying another User in the conf? (and completely ignoring ENVVAR (statically defining it in the config)
<twb> xorl: which config file are you trying to override it in?
<twb> I imagine the right place is /etc/default/apache2
<xorl> twb: in the httpd.conf itself
<xorl> I think it has to do with the prefork mpm
<xorl> but I am not sure where the forking is calling the user that's not defined anywhere anymore
<xorl> twb: so I changed User/Group in the httpd.conf, and in envvars
<xorl> even though it's not required anymore
<xorl> and sorry I meant apache2.conf*
<jetole> does anyone know where the machines domain name is stored?
<jetole> nevermind, it's stored in /etc/hosts
<twb> If there *is* a domain, yes. :-)
<twb> If you preseed domain= in your d-i, you get a hosts entry without an FQDN
<xorl> hmmm
<twb> FSVO d-i = sid's d-i as at last week
<twb> YMMV, etc.
<SmokeyD> Hey everyone, I want to setup a fileserver for a small office with 4 people working there with ubuntu server OS.
<SmokeyD> Would a mini atx box with an intel atom chipset be enough for this?
<SmokeyD> intel atom processor I mean
<SmokeyD> with 2 sata disks in raid
<persia> SmokeyD: It depends on usage.  Unless you're doing a huge amount of file transfer, that should be more than adequate.
<SmokeyD> no, we are just four self employed people doing standard office stuff. No video work or stuff like that.
<twb> A MIPS system is adequate for a NAS
<persia> twb: Not one that runs Ubuntu :)
<twb> persia: could install it on a Loongson
<persia> (last I saw the mips port was still at toolchain-level, and I'm not sure it's gotten much attention over the past year or so)
<twb> oh, right
<cjwatson> Is it possible to get KVM to use USB passthrough and, crucially, boot from the USB-passthrough hard disk?
<twb> cjwatson: you can cheat in at least one way:
<cjwatson> i.e. I'm currently using  sudo kvm -m 512 -usb -usbdevice host:VENDOR:DEVICE  where VENDOR:DEVICE identifies my external USB-connected hard disk, and would like to get the virtual machine to boot from that hard disk; by default it doesn't seem to be among KVM's boot devices
<twb> qemu -hda /dev/uba
<persia> twb: Yes, but then kvm doesn't see it as USB, which can be important.
<twb> hence "cheat"
<cjwatson> as it happens that might be OK in this narrow case
<cjwatson> good enough for finishing an install test, at least :-)  Thanks!
<cjwatson> I wouldn't mind knowing if it's possible with -usbdevice anyway, but no longer a blocker
<twb> I'm afraid I've never done USB, it's too "desktopy"
<twb> There's #qemu and probably a #kvm, but IME they're full of crackheads
<twb> You might get lucky
<Daviey> I have no idea if the standard bios supports booting from USB.
<TeTeT> ttx: any tests that should be done on UEC Lucid at the moment? I have fired up my one front end/one node controller cloud and have some time at hand
<ttx> TeTeT: nothing specific
<twb> Daviey: good point
<twb> You might be able to use -kernel gpxe or unetbootin to bootstrap a USB device that way
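Side by side, the two approaches from this exchange, as an untested sketch: it needs root and kvm installed, /dev/sdb is a placeholder for the host device node of the USB disk, and gpxe.lkrn is gPXE's -kernel-loadable build.

```shell
# 1) twb's cheat: hand the USB disk to the guest as a plain IDE disk,
#    so the guest BIOS can boot it (the guest no longer sees it as USB):
sudo kvm -m 512 -hda /dev/sdb    # /dev/sdb = example host node for the disk

# 2) keep real USB passthrough, and bootstrap with an external
#    bootloader since the guest BIOS may not boot from USB:
sudo kvm -m 512 -usb -usbdevice host:VENDOR:DEVICE -kernel gpxe.lkrn
```

The first loses the "guest sees USB" property persia mentions; the second keeps it at the cost of an extra boot stage.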
<stefanlsd> Anyone experienced this with a virsh create domain.xml  (error: internal error cannot parse QEMU version number in '')
<ttx> kirkland, smoser: ping me when you have 5 minutes
<smoser> ttx, here now.
<ttx> smoser: will ping you back in ~10min
<ttx> on a call right now
<smoser> k
<ttx> smoser: pm?
<smoser> yeah
<jaypur> where is the no-ip folder at ubuntu-serveR?
<Pici> jaypur: folder? Can you rephrase your question?
<jaypur> Pici, sorry just woke up
<jaypur> i wanna see the files of no-ip... and see if i can edit it.... where can i find it?
<Pici> jaypur: see dpkg -L no-ip  (or noip2, depending on what version of Ubuntu you're using)
<jaypur> just a sec
<jaypur> Pici, lol it says both of them are not installed, but i did install it
<jaypur> lol
<zul> ttx: 3.4.6 still has the debconf use of unitilized error
<zul> bug #541869
<ttx> bug 541869
<ttx> beh
<mickster04> that makes four of us!
<Pici> hrm
<mickster04> oops
<ryanakca> Could someone help me figure out why MySQL won't start? http://ryanak.ca/~ryan/mysql-install-log
<ttx> zul: yes, dupe of bug 522514
<Pici> soren: ping, looks like uvirtbot is MIA.
<ttx> zul: that one was never really fixed
<ttx> zul: the proposed patch just doesn't do it
<zul> maybe open up a bug in debian?
<zul> ryanakca: maybe "[ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13)" ?
<ryanakca> zul: Aye, and what should I do about that?
<zul> ryanakca: fix your mysql tables, check google
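For what it's worth, the errno in that log line can be decoded without a running server; 13 is EACCES, which points at ownership/permissions on the datadir rather than corrupt tables. A small sketch (perror is MySQL's own decoder; the python3 fallback is an assumption about what's installed):

```shell
# Decode mysqld's "(errno: 13)" from the log above. Use MySQL's perror
# if present, otherwise ask the C library for the same errno string.
perror 13 2>/dev/null || python3 -c 'import os; print(os.strerror(13))'
```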
<ryanakca> zul: OK. Should this be an issue on a fresh install?
<zul> ryanakca: it shouldnt, but i've never seen that before on a fresh install
<ryanakca> zul: Everything I'm finding requires me to connect to MySQL... which I can't do if it won't start
<zul> ryanakca: you can try re-installing mysql
<ryanakca> zul: I've tried purging and reinstalling
<Jeeves_> That doesn't work.
<Jeeves_> Purge
<Jeeves_> You have no data at all that can be lost?
<ryanakca> Jeeves_: None whatsoever. It's a fresh install
<Jeeves_> ok
<Jeeves_> apt-get remove --purge mysql(or-so)
<Jeeves_> rm -rf /var/lib/mysql
<Jeeves_> rm -rf /var/run/mysqld
<Jeeves_> rm -rf /var/log/mysql*
<Jeeves_> That removes everything there is for mysql
<Jeeves_> Then apt-get install mysql again
<jaypur> Pici, it's noip2
<jaypur> i wrote it wrong sorry
<ryanakca> Jeeves_: I can get rid of /etc/mysql* too?
<Jeeves_> ryanakca: Yes, that one too. Although that should be empty after a purge
<ryanakca> Jeeves_: Hmmm... decided to leave it
<ryanakca> Jeeves_: Same thing, http://ryanak.ca/~ryan/mysql2
<ttx> zul: i'll dupe it
<Jeeves_> ryanakca: Did you change the location of the mysql data in the config?
<ryanakca> Jeeves_: I haven't touched any config files
<Jeeves_> what does "grep datadir /etc/mysql/my.cnf" say?
<ryanakca> grep: /etc/mysql/my.cnf: No such file or directory
<ryanakca> lemme reinstall mysql-common
<Jeeves_> :)
<ryanakca> Oh, bravo: tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      7546/mysqld
<ryanakca> Many thanks :)
<Jeeves_> :)
<zul> ttx: ping when you get a second can you have a look at the patch at https://bugs.edge.launchpad.net/ubuntu/+source/rabbitmq-server/+bug/506985
<zul> ttx: i would really like to get that fixed for beta2 if possible
<ttx> zul: reading the comment, it doesn't appear to be sufficient ? Maybe ask soren about this ?
<zul> k
<zul> soren: ping
<jcastro> kirkland: http://lifehacker.com/5496679/testdrive-virtualizes-brand+new-ubuntu-builds-for-easy-testing
<jcastro> kirkland: Ars and Lifehacker in one week!
<kirkland> jcastro: nice :-)
<kirkland> jcastro: i haven't seen the ars article yet
<jcastro> http://arstechnica.com/open-source/news/2010/03/ubuntu-prerelease-testing-made-easy-with-testdrive.ars
<Pupeno> Do you set the hostname of a machine to 127.0.0.1 or to one of the public ips in a server in /etc/hosts ?
<sherr> Pupeno: A hostname is a "name" so do not use an IP address like 127.0.0.1. You can add a name (hostname) to the public IP if you want - and associate that interface with the chosen name.
<Pupeno> sherr: recently it happened to me that hostname was xen2, and /etc/hosts only contained 127.0.0.1 localhost, so sudo complained about xen2 not found and hostname -f failed miserably. If I assign xen2 to 8.8.8.8 instead of 127.0.0.1; wouldn't I have similar failures if the public key of that machine changes?
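The convention Ubuntu itself uses for this is to bind the machine name to 127.0.1.1, which keeps sudo and hostname -f working whatever happens to the public address. A sketch against a scratch file (the real file is /etc/hosts; "xen2.example.com" is an illustrative FQDN):

```shell
# Typical Ubuntu /etc/hosts layout: loopback for localhost, and the
# machine's own name on 127.0.1.1 so name resolution never depends on
# an external interface. Written to a temp file here for safety.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
127.0.0.1   localhost
127.0.1.1   xen2.example.com xen2
EOF
grep '127.0.1.1' "$hosts"
```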
<zul> ttx: https://bugs.edge.launchpad.net/ubuntu/+source/bacula/+bug/515274
<ttx> zul: ok, removing milestone until this gets accepted
<_ruben> Pupeno: you running google's public recursive dns server? if not, i dont see why you'd use 8.8.8.8 as ip address
<hggdh> mathiaz: how can I go on the UEC testing blueprint (tests on B1 candidate)?
<hggdh> mathiaz: as I understood you yesterday, the test rig is not running beta1
<mathiaz> hggdh: well - it's running what's the latest from the archive
<mathiaz> hggdh: what needs to be done is to install from the *iso*
<mathiaz> hggdh: on tamarind, you can see in the topologie.yaml file that you can select the media to install from
<mathiaz> hggdh: one of the example is to set a url for an iso
<mathiaz> hggdh: you can set the url to the iso candidates from cdimages
<hggdh> mathiaz: OK. Any specific topol to be used?
<mathiaz> hggdh: all of the topologies need to be tested
<mathiaz> hggdh: so start by updating the iso rule, then update the installer-media reference in the yaml file
<mathiaz> hggdh: and then ./set_topo.py lucid-amd64-topo1
<ttx> mathiaz: have 5 minutes ?
<mathiaz> ttx: I hope so!
<ttx> mathiaz: see pm
<hggdh> mathiaz: roger wilco
<Zider> how can I make the fs-mounting initscript wait until cryptdisk is finished?
<Pupeno> _ruben: it was an example obviously.
<_ruben> Pupeno: then use example ip addresses (192.0.2.0/24 is reserved for that) :) not existing ones
<Pupeno> _ruben: ok, I'll try to remember it the next time, so nobody gets confused about my examples, but I probably won't.
<_ruben> Pupeno: the "problem" with 8.8.8.8 as a "random example" is that it's a quite "known" existing ip address, being google's open resolver :)
<Pupeno> _ruben: ok.
<_ruben> ok, enough nitpicking for today .. time to head home .. weeeekend
<grepFu> Hi can anyone help me with this : http://pastebin.com/mzfHRJnK
<persia> grepFu: Unless you are trying to do something special, try purging dovecot-common dovecot-imapd and dovecot-pop3d before running the install again
<grepFu> I did, it outputs similar errors again.
<persia> After purge?  How did you purge?
<grepFu> sudo apt-get purge
<grepFu> http://pastebin.com/rhLKJWjk
<grepFu> if I do the install again, i get the same errors as before, bit of a looping problem.
<grepFu> Can't uninstall, can't install...
<persia> purge really ought to forget that you intentionally deleted the config files.
<grepFu> Was following this - https://help.ubuntu.com/community/Postfix
<persia> Maybe hunt through and purge *all* the dovecot stuff you have installed, and reinstall?
<grepFu> It wasn't working, and is pretty hard to follow so decided to start again, and follow my official server book instead.
<grepFu> So did apt-get remove on the apt-get installs
<grepFu> and reinstalled PostFix and then tried to do DoveCot, and this is what happened ;(
<persia> Somehow it believes you manually deleted some configuration files.
<grepFu> Can I not just put them back in, so it'll shut up and reinstall?
<persia> You could.  Do you know what they contain?
<persia> Also, unless you manage to put back the *exact* files, it won't automatically update them on upgrade.
<grepFu> no idea
<grepFu> I just want to fresh install the package.
<smoser> mathiaz, fyi, your elite yaml skilzz were helpful to me. https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/541907/comments/3
<persia> purge + install is supposed to do that.  Report a bug against the package.
<smoser> thank you for a.) suggesting use of yaml b.) showing me about aliases (./topologies.yaml)
<mathiaz> smoser: yeah - yaml is an awesome language for configuration IMO
<mathiaz> smoser: it just has the right syntax for doing good configuration
<grepFu> damn.. ok thanks for the help
<mathiaz> smoser: but it isn't a full blown language
<mathiaz> smoser: I really like the references
<smoser> yeah. #include missing.
<mathiaz> smoser: right - includes are missing
<grepFu> the bug system is a bit weird
<mathiaz> smoser: is ami-ad09e6c4 the beta1 EMI?
<mathiaz> smoser: *AMI*
<smoser> yes.
<smoser> us-east-1 ami-ad09e6c4 ubuntu-images-milestone/ubuntu-lucid-10.04-beta1-i386-server-20100317
<mathiaz> smoser: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=3102
<mathiaz> smoser: ^^ still list 9.10 in the AMI manifest
<smoser> fixing
<smoser> :)
<mathiaz> smoser: I haven't checked the other web page though
<bogeyd6> needs moar uec
<grepFu> managed to fix it myself
<grepFu> :D
<grepFu> Made the folder /etc/dovecot
<grepFu> then made the 3 conf files (as blank)
<grepFu> then purged, then reinstalled.
<persia> Ah.  So the purge wasn't completing because the directory was missing?  Please file a bug : it should be smarter than that.
<grepFu> ok
<grepFu> Thanks again for your help, wouldn't have figured it out otherwise... 2 hours of my life I won't get back, loggin the bug now.
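grepFu's eventual fix, sketched against a scratch tree so it is safe to run as shown; on the real system the path is /etc/dovecot, the conffile names come from dpkg's error output (the one below is a placeholder), and the purge/install steps need root:

```shell
# Recreate the conffiles dpkg believes were deleted, so purge can
# complete. Scratch tree stands in for the real filesystem root.
root=$(mktemp -d)
mkdir -p "$root/etc/dovecot"
touch "$root/etc/dovecot/dovecot.conf"   # placeholder conffile name
# then, on the real box:
#   sudo apt-get purge dovecot-common dovecot-imapd dovecot-pop3d
#   sudo apt-get install dovecot-imapd dovecot-pop3d
ls "$root/etc/dovecot"
```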
<smoser> mathiaz, its fixed now.. i actually think you might have had a cache or something. but does yours show 20100117 in the manifest path?
<smoser> (before reloading)
<mathiaz> smoser: ubuntu-images-milestone-us/ubuntu-lucid-9.10-beta1-i386-server-201003XX.manifest.xml
<persia> grepFu: Thanks for figuring out the real reason : had me stumped.  Let's hope that gets fixed before someone else gets stuck.
<mathiaz> smoser: ^^ this is what  AMI Manifest showed
<smoser> hm.. oh well. it should be good now.
<smoser> thanks, mathiaz also noticed the '9.10' in there and fixed that.
<mathiaz> smoser: are you using a template to generate the content?
<mathiaz> smoser: or is it manual?
<mathiaz> smoser: (ie the process of creating the amazon entry)
<smoser> well, mostly manual. but i bzr it.
<smoser> you have to edit a form . in aws to update it.
<smoser> with lots of fields, so its easy to miss one even if you have the correct data in front of you.
<mathiaz> smoser: the security uses a similar workflow to publish USN IIRC
<mathiaz> smoser: well - kind of similar
<mathiaz> smoser: IIRC they use a template to generate the text and then copy'n'paste into the ubuntu.com website
<mathiaz> smoser: it may not apply to the AWS website though
<mathiaz> smoser: especially if there are multiple fields involved
<smoser> yeah. i've just not gotten to generating the content.
<smoser> https://code.launchpad.net/~smoser/ubuntu-on-ec2/ami-pages is the branch
<smoser> mathiaz, if you wanted to, i'd take a patch to modify AWS web pages programmatically.
<smoser> since you don't have anything else to do
<smoser> :)
<mathiaz> smoser: :)
<mathiaz> smoser: ETA: 22nd March 3010
<smoser> sweet.
<hggdh> mathiaz: with a scheduling policy of greedy, one NC should have 2 instances (following the test), right?
<mathiaz> hggdh: I think so
<hggdh> mathiaz: I do not see it (at least, I think so). Can you check dempedak?
<mathiaz> hggdh: http://paste.ubuntu.com/397914/
<mathiaz> hggdh: seems like the test is failing
<MTecknology> what's that thing people like to use to manage their servers from the web?
<bogeyd6> webmin
<Hypnoz> ssh
<Hypnoz> ;)
<lenios> technically, ssh is not web
<lenios> but i guess no one cares
<Hypnoz> I care that ssh is not web lenios. I care a lot. And frankly I've been losing sleep over it lately, I haven't shaved or showered or gone to work in days contemplating the wide reaching and deep impacting ramifications of that very statement.
<lenios> :)
<lenios> anyway, some people can't tell the difference between google and internet
<Hypnoz> wait until they have chromeos
<hggdh> mathiaz: bug 542075 opened
<madcat1990> I have a problem with my ubuntu server
<madcat1990> I am using version 9.10, with a DHCP server, apache, mysql, and iptables to NAT the school's internet to other pcs on the network (connected to the server through a switch)
<madcat1990> the computer I'm using is a dell server
<madcat1990> uses a Single Core XEON Processor by intel
<madcat1990> running at 2Ghz, and 1GB of ram
<madcat1990> problem is, everyday
<madcat1990> it hangs, completely
<madcat1990> any sugestions?
<Hypnoz> what time is it hanging, look through files in /var/log for things happening at that time
<sherr> madcat1990: Nothing in /var/log/syslog?
<sherr> madcat1990: Can you put a serial console on it? Might display an error.
<madcat1990> hmm
<madcat1990> will check that log then
<madcat1990> let me take note
<madcat1990> thanks, after I see it
<madcat1990> i will be back
<VSpike> How can I prevent a dynamic interface updating from updating /etc/resolv.conf while still getting other IP info from DHCP?
<geneticx_wrk> Hello everyone. I would like to achieve auto-login using ssh to another server but no luck. I have copied the public key to the remote server but I still get prompted for a password, any help?
<VSpike> geneticx_wrk: I always find this to be the best guide by far http://sial.org/howto/openssh/publickey-auth/
<VSpike> geneticx_wrk: have you tried running ssh -vvv
<VSpike> geneticx_wrk: ime the problems is usually one of permissions :)
<VSpike> if any of the files are too weak on permissions, the ssh client or server will ignore them
<geneticx_wrk> VSpike: this is what my ssh -vvv looks like http://pastebin.com/9fHniHPL but still can't figure it out
<VSpike> geneticx_wrk: you're doing it as root?
<geneticx_wrk> yup
<VSpike> so does /root/.ssh/id_rsa exist?
<VSpike> or /root/.ssh/id_dsa
<geneticx_wrk> VSpike: only known_hosts
<VSpike> geneticx_wrk: then I don't think you've correctly set things up.  Try following the procedure on the link I gave. You just need to do stuff under "Public Key Setup".  Ignore the stuff about ssh-agent for now.
<geneticx_wrk_> VSpike: Ok I will, but when it asks to copy the contents of id_rsa.pub >> authorized_keys .. in ubuntu i have known_hosts so should I create another file called authorized_keys or just append it to known_hosts?
<VSpike> geneticx_wrk_: known_hosts is definitely the wrong file
<VSpike> geneticx_wrk_: the command it says is cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
<VSpike> That will do the right thing
<VSpike> It will create the file if it doesn't exist
<geneticx_wrk_> VSpike: ok, hummm..now I'm wondering what the purpose of the known_hosts file is in ubuntu and how it's used with ssh
<VSpike> known_hosts records hostnames and ip addresses against certs
<VSpike> You know the message when you ssh somewhere for the first time that says "are you sure this is what you think it is? yes/no"
<VSpike> When you say yes, the machine gets added to the list
<VSpike> if the cert ever changes, ssh will quit with a warning that someone is maybe doing something nasty
<geneticx_wrk_> ah yes, you are correct
<geneticx_wrk_> geez brain fart
<VSpike> if it does that and you want to remove the cert and start again (i.e. you know everything is OK), a good tip rather than editing the file directly is to use ssh-keygen -R <hostname or IP>
<geneticx_wrk_> VSpike: have you ever used ssh-copy-id instead?
<VSpike> nope, not come across it
<VSpike> ah neat.. looks handy
<VSpike> I kept meaning to put all the stuff from that site into a script :)
<VSpike> Looks like someone got there first
<geneticx_wrk_> VSpike: hehe..yeah
<geneticx_wrk_> VSpike: my intended plan is to use rsync and auto-login to the source machine
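VSpike's permissions point made concrete: sshd's StrictModes check ignores authorized_keys when ~/.ssh or the file itself is too open. The expected modes, shown in a scratch directory rather than a real ~/.ssh:

```shell
# The modes sshd expects before it will honour key auth: 700 on the
# directory, 600 on authorized_keys (and the home dir must not be
# group/world writable). Scratch dir used here for illustration.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```

If ssh -vvv shows the key being offered but the server still prompts for a password, these modes on the *remote* side are the first thing to check.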
<neurostim> hi, I have a problem installing proftpd-mysql. I can't install it with apt-get, so I tried dpkg & the .deb, and had a dependency problem
<sherr> neurostim: why not use apt-get? what's the problem?
<neurostim> E: Aucun paquet ne correspond au paquet proftpd-mysql
<neurostim> * no package found
<lenios> neurostim, try proftpd-mod-mysql
<neurostim> lenios : thx
<neurostim> lenios : It worked great :-D
<lenios> aptitude search proftpd* :)
<neurostim> lenios : thx too ;)
<VSpike> How can I prevent a dynamic interface updating from updating /etc/resolv.conf while still getting other IP info from DHCP?
<VSpike> aha.. figured it out
<lullabud> VSpike: what's the solution?
<VSpike> lullabud: add 'supersede domain-name-servers 127.0.0.1;' to /etc/dhcp3/dhclient.conf ... in my case, inside 'interface "eth0" { ... }' although that's not required
<lullabud> very handy.  i was expecting something more along the lines of /etc/resolve.d, which seems to be the direction most things are going.
<VSpike> lullabud: yeah.. another way would have been a post-up command or script to just overwrite /etc/resolv.conf but that would have been more kludgy
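The same supersede stanza, appended to a scratch copy of the config (the real path on releases of that era is /etc/dhcp3/dhclient.conf):

```shell
# dhclient.conf "supersede" overrides a single DHCP option while the
# rest of the lease is still taken from the server. Temp file used
# here so the sketch is safe to run.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
interface "eth0" {
    supersede domain-name-servers 127.0.0.1;
}
EOF
grep -c supersede "$conf"
```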
#ubuntu-server 2010-03-20
<cbrowne> I'm having immense trouble setting up my Ubuntu 9.10 server as a Wireless Access Point; I've followed the various howtos and even tried compiling 2.6.32-wl (the rt2x00 project's kernel), all to absolutely no avail; chipset (according to lspci): "03:0b.0 Network controller: RaLink RT2561/RT61 802.11g PCI"; lsmod confirms I have the rt61pci module installed.  Any help would be greatly appreciated (sorry for the uber long message)
<cbrowne> ping...
<Hypnoz> I had problems with wireless in 9.10, which worked perfectly in 9.04
<Hypnoz> so i finally went back to 9.04
<Hypnoz> you might consider trying your setup on a 9.04 install
<Hypnoz> if you get desperate i guess
<ruben23> hi are there ways to optimize ubuntu-server default..
<qman__> ruben23, you're going to have to be more specific -- ubuntu server is optimized for typical situations out of the box
<ruben23> qman__: what are ways i can enhance and optimize ubuntu-server. my application need extensive processing...
<qman__> ruben23, ubuntu server has no superfluous or performance-hindering configurations by default
<qman__> there are some things you may be able to remove, but they would also remove functionality
<qman__> if a base ubuntu server install is not fast enough, you probably need faster hardware
<ruben23> qman__: ill take that... i just heard from others about modifying the kernel stack to prevent memory leaks. i think thats wrong.
<qman__> a memory leak is a different problem
<qman__> if your kernel is leaking memory, it needs to be fixed, probably by updating it
<qman__> if your program is leaking, you need to complain to whoever wrote it
<ruben23> qman__: thank you for clarity.
<MTecknology> qman__: there's ways to speed it up...
<MTecknology> I figure it's the perfect balance between instant usability, security, and speed
<MTecknology> although I wouldn't mind sacrificing a little usability for the sake of more security
<ruben23>  MTecknology:hows that..?
 * genii sacrifices a goat to MTecknology's cause
<MTecknology> giovani: :P
<MTecknology> genii: *
<MTecknology> ruben23: I tend to remove a lot of cruft from a basic server install, but overall it's not bad
<MTecknology> I kinda wish we offered a hardened kernel
<MTecknology> genii: you should see my kernel config :P
<ruben23> MTecknology:you re compile the kernel.
<MTecknology> ruben23: ya, for my own laptop (only system) - not the servers I run
<ruben23> ow ok, im just worried about my ubuntu-server; i had it keep getting high loadavg - up to 12.0 :-(
<MTecknology> you check top output?
<ruben23> im using htop
<Pici> ruben23: how many cpus?
<MTecknology> I don't trust htop, never seems completely accurate
<ruben23> i got dual core xeon-intel.
<ruben23> 2.4Ghz
<Pici> Check iotop then
<ruben23> Pici: i think the application itself are putting the high load
<bogeyd6> MTecknology, how is htop not accurate?
<ruben23> whats the difference betwwen buffer memory to cached memory..
<MTecknology> apache under any normal use seems to add a pretty high load
<bogeyd6> buffer is in the processor
<bogeyd6> MTecknology, im interested in the specifics of htop being inaccurate
<ruben23> how about cached..?
<MTecknology> cached is held in memory but not in active use, so it can be recalled quickly
<bogeyd6> MTecknology, i ask because a) i assume you are not a troll b) i use htop and rely on it for my consulting
<bogeyd6> cached is the ram
<bogeyd6> Cache Memory can be accessed more quickly by the computer microprocessor than it can be accessed by regular RAM
<bogeyd6> well cached is usually in the ram
<MTecknology> bogeyd6: there are times where I see top and htop disagree
<bogeyd6> run on the same box?
<MTecknology> ya
<MTecknology> same time
<MTecknology> buffer = being moved around; used = ready for instant use; cached = ready to become ready for instant use; free means nothing there to be used
<MTecknology> best as I know how to explain it very simply unless i'm wrong
<bogeyd6> i know about the memory usage calculation
<MTecknology> bogeyd6: I could be wrong and there could be bugs that were addressed in the time since I used it last - ~4yr
<bogeyd6> but running htop and top at the same time makes it very easy to see the disagreement and why
<Pici> I've not had any issues running htop myself
<bogeyd6> Pici, you did on the memory calculation and didnt know it
<bogeyd6> Pici, fear not it was just the buffer usage
<pmatulis> ruben23: you have a cpu load of 12.0 ?  for how long?
<ruben23> pmatulis: actually it starts at 0.89 then rises. eventually it gets to 12.0 at full usage, then my apps crash and i need to reboot again
<ruben23> to correct.
<bogeyd6> ruben23, which app?
<pmatulis> ruben23: geez, change your h/w and/or your app
<ruben23> bogeyd6: its a dialer system..for voice calling-asterisk, and he do recordings
<bogeyd6> ruben23, i read your past convo and it seems no one told you about the ubuntu linux rt kernel
<bogeyd6> <ruben23> qman__: what are ways i can enhance and optimize ubuntu-server. my application need extensive processing...
<ruben23> bogeyd6:  yes whats rt kernel.?
<bogeyd6> real time processing
<bogeyd6> linux has an inherent delay in processing
<pmatulis> ruben23: even a consistent load of 2.0 is terrible
<bogeyd6> !realtime
<MTecknology> bogeyd6: so what happens with it? had to run
<ruben23> pmatulis: yeah its true..worst is 12.0
<bogeyd6> !rt | ruben23
<ubottu> ruben23: The RT kernel is the Ubuntu kernel with a realtime preemption patch applied. It is included in Ubuntu Studio by default. For more information please see: https://wiki.ubuntu.com/RealTime/
<bogeyd6> ruben23, https://rt.wiki.kernel.org/index.php/Main_Page
<bogeyd6> ruben23, just to help even further Real-time applications have operational deadlines between some triggering event and the application's response to that event. To meet these operational deadlines, programmers use real-time operating systems (RTOS) on which the maximum response time can be calculated or measured reliably for the given application and environment.
<ruben23> the recommended distro version for my apps is ubuntu-server 8.10 LTS
<bogeyd6> 8.04 lts
<ruben23> bogeyd6: does this process involve recompiling the kernel and patching it up..?
<ruben23> sorry
<MTecknology> !lts
<ubottu> LTS means Long Term Support. LTS versions of Ubuntu will be supported for 3 years on the desktop, and 5 years on the server. The current LTS version of Ubuntu is !Hardy (Hardy Heron 8.04).  The next LTS release is scheduled to be !Lucid (Lucid Lynx 10.04)
<bogeyd6> ruben23, as simple as apt-get install linux-rt
<ruben23> bogeyd6:ow , wow this is a big help..
<bogeyd6> reboot and try it
<bogeyd6> if you dont like, remove it and reboot
<MTecknology> I wonder how hard it is to write a useful OS from scratch...
<bogeyd6> upgrading to karmic will be even more beneficial but who knows if your app is supported
<MTecknology> gnu hasn't even been able to do it yet - obviously i couldn't, i'm just wondering what kind of work goes into it..
<MTecknology> ruben23: what app?
<bogeyd6> MTecknology, debian sarge had over 230 million lines of code
<MTecknology> that's with the kernel and everything?
<ruben23>  MTecknology: asterisk, with a couple of record apps, like sox and lame.
<bogeyd6> MTecknology, the whole release
<bogeyd6> ruben23, linux-rt is included in ubuntu studio because of the audio and video processing
<MTecknology> bogeyd6: if you have 10 million devs that's only 23 lines each :P
<bogeyd6> ruben23, asterisk is available on karmic
<bogeyd6> MTecknology, http://www.dwheeler.com/sloc/
<bogeyd6> !asterisk
<bogeyd6> !voip | ruben23
<ubottu> ruben23: VoIP is Voice over IP. The default VoIP client for Ubuntu is !Ekiga. There is also an xmpp voice component in !Empathy. Kubuntu Clients include Kphone and Twinkle. Proprietary Clients include !Skype and Gizmo5. VoIP server applications include Asterisk and Yate ( both in repositories ), FreePBX, and SipX.
<ruben23> bogeyd6: yes, it's the backend of my apps, using vicidial - created through a perl script.
<bogeyd6> i would try the linux-rt kernel which is made for sound processing and see how far tht gets you
<ruben23> http://astguiclient.sourceforge.net/vicidial.html  <-------see ubuntu is recommended..best
<MTecknology> bogeyd6: I have a hard time reading that article beyond the title, but it sounds interesting
 * bogeyd6 feels dirty for helping a telemarketer
<ruben23> bogeyd6:i will also try it to karmic- the goal is to lessen up loadavg generated..
<bogeyd6> !ymmv | ruben23
<ubottu> ruben23: ymmv is short for "Your mileage may vary". It means that someone else's experience with compatibility, performance etc. may not necessarily match yours. Also see !wfm
<ruben23> bogeyd6: this is a big help, thanks so much, il test it on production..
<bogeyd6> :)
<bogeyd6> we aim to please here at ubuntu
<ruben23>  bogeyd6:does karmic already have the rt kernel..?
<bogeyd6> ruben23, no it doesnt. you install and you "sudo apt-get install linux-rt" and reboot the server
<ruben23>  bogeyd6: got one more clarification
<bogeyd6> ruben23,  k
<ruben23> i got this process to run before installing my apps. where can i incorporate the rt kernel in this part..? ---------> http://pastebin.com/H9MLXRKp  -----> do i need to omit a process here..?
<bogeyd6> hmm
<ruben23> bogeyd6: what you think..?
<bogeyd6> ruben23, http://pastebin.com/Lj2YrMh9
<bogeyd6> im not sure about why you need the linux-source but it cant hurt
<ruben23> ok i will omit linux source-im not needing it actually.
<ruben23> ill replace linux source with rt linux install
<bogeyd6> kk
<ruben23> thanks
<ruben23>  bogeyd6:if i install linux rt- do i need to install linux header again..? or no need
<bogeyd6> not if you followed the order i did it in
<MTecknology> ruben23: install kernel/header; reboot
<MTecknology> !info harden
<ubottu> harden (source: harden): Makes your system hardened. In component universe, is extra. Version 0.1.35 (karmic), package size 9 kB, installed size 60 kB
<MTecknology> ruben23: you ever mess with that?
<MTecknology> bogeyd6: *
<ruben23> bogeyd6: how do i check if the rt-kernel is installed properly
<ruben23> i got this---> Linux vicidial 2.6.24-27-server #1 SMP Fri Mar 12 01:45:06 UTC 2010 i686 GNU/Linux
<bogeyd6> cat /etc/lsb-release
<bogeyd6> wait sorry im dumb
<ruben23> ok
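The quick check here is uname, not lsb-release (which reports the distro release, not the running kernel):

```shell
# Report the kernel actually booted; after installing and booting
# linux-rt the string should end in "-rt" rather than "-server".
uname -r
```

ruben23's "2.6.24-27-server" output above shows the stock server kernel is still running, i.e. the rt kernel either isn't installed or wasn't selected at boot.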
<sekyourbox> I need help with wpa_supplicant. When i run the command it says the following drivers are installed: wext, nl80211, atmel, wired... I need to enable madwifi drivers, and i believe they are already installed by default on 9.10. Network says im using auth5. Any help please
<twb> sekyourbox: that doesn't sound like a server issue.
<sekyourbox> I need to do this for my server
<sekyourbox> I cant set up a server with network manager
<MTecknology> sekyourbox: hm?
<sekyourbox> MTecknology
<MTecknology> sekyourbox: vim /etc/network/interfaces
<sekyourbox> I know how to set it up, i just need to know how to get wpa_supplicant to recognise the driver.
<twb> sekyourbox: you generally want the generic wext driver these days.
<MTecknology> you just need to have the driver loaded, the rest should be magic
<sekyourbox> Wext isn't working for me
<sekyourbox> it connects but wont grab a DHCP address
<sekyourbox> I dont want to set up a static address
<sekyourbox> AND i want to use madwifi
<MTecknology> why don't yyou want a static ip for a server?
<sekyourbox> because the wpa part of the setup has nothing to do with my server. I need to set up a PXE server for a separate subnet in my lan
<sekyourbox> so its actually a workstation/server
<sekyourbox> I will also eventually be setting up other things that need network manager to be disabled
<sekyourbox> looks like I need to re-compile wpa_supplicant
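For reference, the NetworkManager-free setup being discussed is a wpa-* stanza in /etc/network/interfaces, handled by the wpasupplicant ifupdown hooks; wpa-driver is where the driver choice (wext vs madwifi etc.) goes. A sketch against a scratch file; the ssid and psk are placeholders:

```shell
# Minimal WPA stanza for ifupdown + wpa_supplicant, written to a temp
# file here; the real file is /etc/network/interfaces.
f=$(mktemp)
cat > "$f" <<'EOF'
auto wlan0
iface wlan0 inet dhcp
    wpa-driver wext
    wpa-ssid   MyNetwork
    wpa-psk    secretpassphrase
EOF
grep wpa-driver "$f"
```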
<Carbon_Monoxide> hi! i have an Ubuntu Server 9.04 with Error 16: Inconsistent filesystem structure during boot up
<Carbon_Monoxide> I've used a LiveCD to fsck the ext2 /boot partition
<Carbon_Monoxide> and it is marked as clean
<Carbon_Monoxide> what else do i need to do?
<sherr> Carbon_Monoxide: A google brings up some hits - looks like a serious issue and not solved by many ... filesystem issue/corruption
<sherr> http://www.google.co.uk/search?q=%22Inconsistent+filesystem%22&ie=utf-8&oe=utf-8&aq=t&rls=com.ubuntu:en-GB:official&client=firefox-a
<sherr> One person backed up the kernel and made a copy of it for boot :
<sherr> http://bbs.archlinux.org/viewtopic.php?id=72686
<sherr> Maybe you will get lucky. Good luck.
<ricdanger1> hi there
<ricdanger1> I'm testing ubuntu 10.04 and find something really annoying
<ricdanger1> no verbose output on boot
<ricdanger1> is there a way to fix this?
<twb> Remove "quiet" from the kernel boot parameters.
<twb> Depending on what you mean by "verbose".
<ricdanger1> nop
<twb> I think there's further output if you add "debug" or something.
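On 10.04's grub2 that kernel-parameter edit lives in GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, followed by update-grub as root. Sketched on a scratch copy so nothing real is touched:

```shell
# Strip "quiet" (and "splash") from the default kernel command line.
# Temp file stands in for /etc/default/grub.
f=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' > "$f"
sed -i 's/quiet *//; s/splash *//' "$f"
cat "$f"
# then, for real:  sudo update-grub
```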
<ricdanger1> kernel messages show fine
<twb> ricdanger1: did you install from the server CD?
<ricdanger1> but everything from init-bottom is just black
<ricdanger1> sure
<ricdanger1> both minimal and regular
<ricdanger1> btw, regular installs plymouth.
<twb> Oh, it's THAT
<twb> Hit Ctrl+Alt+F1
<ricdanger1> so there is a splash, but still no boot messages
<twb> There's a vt switching issue.
<ricdanger1> Ctrl+Alt+F1 shows the caret scrolling vertically but not a single output from init scripts
<ricdanger1> it's like the messages are being printed black in black
<twb> OK, at this point I'm just going to say that I don't know.
<ricdanger1> this is really annoying
<twb> I can probably fix it from first principles, but I don't have hardware that can run plymouth to test a 10.04 image against.
<ricdanger1> hope it gets fixed
<ricdanger1> do you know where I can file a bug for this?
<ricdanger1> twb: without plymouth (virtual or minimal install), you only get a black space without any output
<twb> Dunno
<ricdanger1> so, the problem is there
<twb> The usual ubuntu bug reporting procedures should apply
<ricdanger1> sure
<twb> !malone
<ubottu> Launchpad is a collection of development services for Open Source projects. It's Ubuntu's bug tracker, and much more; see https://launchpad.net/
<ricdanger1> but what package should this be related?
<ricdanger1> I filed it on upstart
<twb> ricdanger1: best guess, and they'll reassign it if you're wrong
<twb> upstart or plymouth are reasonable guesses IMO
<ricdanger1> btw
<ricdanger1> http://launchpadlibrarian.net/41417613/LucidBoot.png
<ricdanger1> here is the boot result
<ricdanger1> the more services I add, the more scrolled down the prompt is
<twb> Hmm, what provides the LSB script library
<twb> ricdanger1: do you have lsb-base installed?  Check its bug list.
<ricdanger1> https://launchpad.net/ubuntu/+source/upstart/0.6.5-5
<ricdanger1> they have here something about foreground color
<ricdanger1> fixed on 0.6.5-5 (the one I'm running)
<ricdanger1> maybe this is why the text is black now
<jmazaredo> can you forward 192.168.0.15:80 to 192.168.0.16:80?
<jmazaredo> so that if 192.168.0.17 request webpage on 192.168.0.15 will go to 192.168.0.16
<RoyK^> jmazaredo: there are several ways to do that :)
<jmazaredo> i need!
<RoyK^> how?
<RoyK^> proxying?
<RoyK^> iptables can help you, but a simple redirect is easier
<jmazaredo> http://ubuntuforums.org/showthread.php?t=1434586
<jmazaredo> thats my problem
<RoyK^> <?php header("Location: http://newbox"); ?>
<jmazaredo> posted in the thread
<jmazaredo> iptables perhaps
<jmazaredo> iptables -A PREROUTING -t nat -i wlan0 -p tcp --dport 80 -j DNAT --to 192.168.0.155:80
<jmazaredo> iptables -A INPUT -p tcp -m state --state NEW --dport 80 -i wlan0 -j ACCEPT
<jmazaredo> the problem is the other box is in virtualbox
<RoyK^> why would that be a problem?
<RoyK^> if using NAT in vbox, you'll need their own port forward
<jmazaredo> the adapter is bridged
<jmazaredo> so they are on same network
<RoyK^> sorry. fell out
<jmazaredo> :)
<jmazaredo> in the internet i only see forwarding external to internal
<RoyK^> there isn't really a big difference
<RoyK^> just two sides
<jmazaredo> so this command what will this do? iptables -A PREROUTING -t nat -i wlan0 -p tcp --dport 80 -j DNAT --to 192.168.0.155:80
<jmazaredo> all request from that pc will go to 192.168.0.155:80?
<RoyK^> yes
<jmazaredo> iptables -A FORWARD -p tcp -m state --state NEW --dport 80 -i wlan0 -j ACCEPT
<RoyK^> that is, it'll forward the request
<jmazaredo> that one?
<RoyK^> that one will accept routing in the FORWARD chain if that is already blocked by policy
<RoyK^> if the policy is ACCEPT, it won't do much
<jmazaredo> ok will try to sabotage my pc now :)
<jmazaredo> i will run the command
<RoyK^> you'll have to make sure ip forwarding is enabled in the kernel, though
<RoyK^> well, no, I don't think you will need that
<jmazaredo> think all kernels can forward
<RoyK^> they _can_
<jmazaredo> using 8.04 desktop
<RoyK^> I can jump out of the window, but that doesn't make it likely for me to want to
<jmazaredo> 0_0
<RoyK^> see sysctl -a | grep forward
<RoyK^> there's a setting in the kernel for whether or not to do ip forwarding
<jmazaredo> its all = 0?
<RoyK^> then the kernel won't forward packets
<jmazaredo> why is that
<RoyK^> well, just set it to 1
<RoyK^> /etc/sysctl.conf
<RoyK^> you don't necessarily want your box to work as a router
<jmazaredo> firestarter makes your box like a router right?
<RoyK^> just enable ipv4 routing
<RoyK^> in sysctl.conf
<jmazaredo> got it
<RoyK^> it's the kernel doing the routing after all
<RoyK^> not some fancy userspace software
<jmazaredo> :)
<jmazaredo> will restart the network dont jump on the window ok
<RoyK^> hehe
<RoyK^> jmazaredo: there's no need to restart anything to make those changes work
<RoyK^> just sysctl -p
<jmazaredo> still aint workin
<jmazaredo> will try again
<RoyK^> telnet to ip:port to see what's happening
<jmazaredo> nothing
<jmazaredo> will try to flush
<RoyK^> flush what?
<jmazaredo> still not working
<jmazaredo> 0_0
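Assembled from the thread above into one sketch (interface and addresses as given in the chat; must be run as root). The final MASQUERADE rule is an addition not mentioned in the discussion, but it is usually the missing piece when the client, the forwarding box, and the target all sit on the same subnet: without it the target replies to the client directly, the client sees packets from an unexpected address, and the connection stalls.

```shell
# 1. Let the kernel forward packets (persist with net.ipv4.ip_forward=1
#    in /etc/sysctl.conf, then `sysctl -p`).
sysctl -w net.ipv4.ip_forward=1

# 2. Rewrite the destination of incoming web traffic.
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.0.155:80

# 3. Let the rewritten traffic through the FORWARD chain (a no-op if the
#    chain's policy is already ACCEPT, as noted above).
iptables -A FORWARD -p tcp -d 192.168.0.155 --dport 80 \
         -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

# 4. Hairpin case: client and target on the same LAN segment. Masquerade
#    so replies return through this box instead of going direct.
iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.155 --dport 80 \
         -j MASQUERADE
```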
<ChrisRut> so there is the 'time' command which will output the time taken to run a process (i.e. time ls), is there a command I can run that will tell me the cpu usage (min,max,average) during a process (for that given process) ?
<ChrisRut> for benchmarking
<pwnguin> ChrisRut: not as easy as time, but bootchart can probably be hacked to do what you want
<pwnguin> however, min, max and average by themselves are meaningless
<pwnguin> the min will be 0 and max will be 100
<pwnguin> average will just be user time / wall clock time
<ChrisRut> right about min & max... but time's man says it can output cpu utilization(http://unixhelp.ed.ac.uk/CGI/man-cgi?time) using 'P' ?
<pwnguin> P      Percentage of the CPU that this job got. This is just
<pwnguin>        user + system times divided by the total running time. It
<pwnguin>        also prints a percentage sign.
<pwnguin> all this really tells you is whether the bottleneck is CPU or not
<ChrisRut> gotcha, Thanks pwnguin
<pwnguin> i think the manpage for time and the binary don't match
<pwnguin> yea,
<pwnguin> Users of the bash shell need to use an explicit path in order to run the external time command and not the shell builtin variant.
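On the builtin-vs-external point above: bash's builtin time honours the TIMEFORMAT variable, which can print the same %P figure without installing the external GNU time binary. A minimal sketch:

```shell
# bash's builtin time honours TIMEFORMAT; %P is the CPU percentage,
# (user + system) / wall-clock, i.e. the figure quoted from the manpage
# above. %% prints a literal percent sign.
bash -c 'TIMEFORMAT="cpu=%P%%"; time ls > /dev/null'
# With the separate GNU time package installed, the external equivalent
# (note the explicit path) would be:
#   /usr/bin/time -f "cpu=%P" ls > /dev/null
```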
<chocamo> I am installing server edition  10.04 beta and I can get a dhcp address during the install, but after installation and reboot I cannot get an IP on either my wired or by hooking up a wireless device
<MTecknology> How can I reset all permissions on a home directory to their default settings?
<sherr> MTecknology: if you know the perms you want, you can use find/exec/chmod to change all file (f) perms, and dir (d) perms etc.
<MTecknology> sherr: what should I use for files/dirs with chmod?
<sherr> Something like : find . -type f -exec chmod 755 {} \;
<sherr> man find (and test/take care)
<MTecknology> 755/744 sound good enough?
<sherr> oops - that example is for "d"
<sherr> f -> 644
<sherr> d -> 755 (maybe)
<MTecknology> ok, thanks
<sherr> Don't know defaults
<MTecknology> i don't either, 755/644 sounds great
<MTecknology> alrighty - time to get cleaned up
<guntbert> MTecknology: if you don't want anybody your files you might want to use rather 750 or 640 ..
<MTecknology> guntbert: sherr: thanks
<guntbert> MTecknology: you're good if you figured out my sentence ... :-)
<MTecknology> guntbert: made perfect sense
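The find/chmod approach from this thread, spelled out as a sketch. It operates on a throwaway scratch tree so the behaviour can be checked before pointing it at a real home directory; swap in 750/640 if, per guntbert's suggestion, the files should not be world-readable.

```shell
#!/bin/sh
# Demonstrate on a throwaway tree first; point TARGET at the real home
# directory once the result looks right.
TARGET=$(mktemp -d)
mkdir -p "$TARGET/docs"
touch "$TARGET/docs/note.txt"

# Directories need the execute bit to be traversable; plain files don't.
find "$TARGET" -type d -exec chmod 755 {} \;
find "$TARGET" -type f -exec chmod 644 {} \;

ls -lR "$TARGET"
rm -rf "$TARGET"
```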
<ruben23> hi how do i remove the start script of an application on my ubuntu-server
<mealstrom> man update-rc.d
<mealstrom> there is a remove section
<ruben23> hi guys modprobe load an application right..? how about to remove..?
<PcCowboy2005> Hi
<guntbert> ruben23: it loads a module, you can unload it with sudo modprobe -r <modulename>
<ruben23> FATAL: Module dahdi is in use.
<ruben23> guntbert: thats the message
<PcCowboy2005> hello. can you answer a most likey stupid/easy question for me?
<guntbert> ruben23: of course - a module that is currently used cannot be unloaded - try to see what it is with lsmod | grep dahdi
<guntbert> PcCowboy2005: not if we don't hear it :-)
<PcCowboy2005> how would you set up a subdomain? (eg. forum.mysite.me goes to root/forum)
<guntbert> PcCowboy2005: here I have to pass ... sorry
<ruben23>  guntbert: its empty
<guntbert> ruben23: please !pastebin the output of lsmod
<guntbert> !pastebin | ruben23
<ubottu> ruben23: For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://tinyurl.com/imagebin | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<PcCowboy2005> oh...ok lol maybe not so easy.  i know on a normal web host i just make a pointer.  but i'm thinking for running my own server from home and was wondering how to do it.
<ruben23> http://pastebin.com/uqzkgWS7
<guntbert> ruben23: strange - and you are certain that the modules name is dahdi ?
<rohanroy> Hey all, I've set up a VPN on a remote server using PPTP and CHAP Authentication on an Ubuntu 8.04 Server. Here in California, I can connect to my VPN on my mac, and Windows XP machines. I'm trying to set up a virtual machine on my home server running Ubuntu 9.10 via VirtualBox, VM is running: Ubuntu 8.04 JeOS. I would like this virtual machine to have a persistent connection to the PPTP VPN. I've managed to set it up so it successfully joined the
<rohanroy> VPN, I can ping and traceroute properly to the VPN server and other clients on the remote LAN. However, I would like to have it so that ALL TRAFFIC is routed through the VPN tunnel. I'm following the information here: http://pptpclient.sourceforge.net/routing.phtml#automatic-setup However, I'm having difficulty getting it to work. Is there anyone here who can provide any help?
<ruben23>  guntbert: thanks i thnk i have unloaded the dahdi startup script.
<guntbert> ruben23: you're welcome :-)
<rohanroy> Anyone know anything about routing all trafic through a VPN tunnell?
<rohanroy> Everything I've found online describes how to do it with a gui, but I have no gui...
<darkpixel> Anyone know what's up with nz.archive.ubuntu.com?  I'm getting 'no route to host' on about 5 boxes I manage, and timeouts on another 4.  They are spread out across multiple hosting providers and small business connections (Comcast, Qwest, Sprint/Embarq, etc...) around Washington and Oregon state.
<darkpixel> (This has been going on all day)
<RoAkSoAx> kirkland`, ping
#ubuntu-server 2010-03-21
<MTecknology> !nickspam | Guest55786
<ubottu> Guest55786: You should avoid changing your nick in a busy channel like #ubuntu, or other Ubuntu channels; it causes excessive scrolling which is unfair to new users. Please set your preferred nick in your client's settings instead. See also « /msg ubottu Guidelines »
<bogeyd6> chan sure is quiet
<bogeyd6> everyone must be off work
<psyferre> hey folks, does anyone know of a way to disable the automatic installation of security updates?  I chose that without thinking during my installation and would like to change that...
<psyferre> my googling has so far been foiled by a million results for the desktop edition =-/
<MTecknology> !nickspam > Guest97070
<ubottu> Guest97070, please see my private message
<draik> Hello everyone. I'm trying to install tomcat on the Ubuntu server, but it doesn't seem to find java. I have downloaded/installed it on the server from the java.com site.
<pda-> php5-cli in lucid seems to have lost readline support.. where I should start to look into or report this?
<kuros> Hi, I'm trying to install Ubuntu 9.10 on a PowerEdge 2500 server via PXE. It works fine until near the end, right after it installs extra packages, and it suddenly stops and displays a grey screen with a bunch of text on it.
<kuros> It has things like /usr/lib/locale/en_us_utf/LC_IDENTIFICATION and /lib/ld-2.9.so written on it. Does anyone know what is going wrong?
<MTecknology> Why isn't the default umask 0027 instead of 022?
<pda-> (re php5-cli/readline: I've submitted LP bug #543212)
<sekyourbox> Is anyone awake?  I have been fighting with wpa_supplicant for over a week now. I can successfully connect to my router, and then get an ip address with dhcpcd wlan0, but I cannot get DNS.. I can access anything from its ip address without any problems....
<sekyourbox> this makes no sense
<jronnblom> New servers from DELL can't install Lucid 10.04 Beta since the drivers for the RAID cards are still missing
<RoyK^> jronnblom: is it in the upstream kernel?
<jronnblom> yes it's in the kernel but not in the kernel that is used on the install dvds
<jronnblom> What I mean is that the driver is in the 2.6.32-16-server kernel which is used after install but the 2.6.32-16-generic that is used for installation doesn't have the driver
<sherr> jronnblom: But are you using the server ISO? Surely the Lucid server beta/ISO uses the server kernel?
<jronnblom> sherr: yes I am using the server ISO but the installer uses the -generic kernel so no new DELL servers work with Lucid.
<RoyK^> wtf
<RoyK^> my servers on 8.04 have -server kernel without any manual hassle
<jronnblom> Old server won't be affected only new ones using the Perc H200/H700/H800 series of RAID controllers
<RoyK^> well, sure, but why would the server install use -generic kernel?
<RoyK^> ah
<RoyK^> oc
<RoyK^> ic
<sherr> I have the server ISO and have had a look :
<sherr> install/vmlinuz contains :
<RoyK^> but doesn't the generic kernel have all drivers?
<sherr> 2.6.32-16-generic
<sherr> So - I guess it boots the generic but lets you install the -server after initial setup/update.
<RoyK^> the drivers aren't really related to -server or -generic
<sherr> jronnblom: what RAID card? You are installing on RAID?
<jronnblom> sherr: I have the H200 RAID card in my DELLS
<jronnblom> and using RAID-1
<RoyK^> those are mostly for tuning
<sherr> And installing OS on RAID-1 mirror?
<sherr> What driver is the H200?
<jronnblom> yepp, OS is on RAID-1
<jronnblom> mpt2sas
<sherr> OK, I didn't know the initial server boot used a -generic with a different set of modules to -server.
<jronnblom> neither did I until recently :)
<RoyK^> perhaps someone forgot to add that module to the generic set
<sherr> jronnblom: maybe log a bug (check it doesn't exist already)
<jronnblom> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/530361
<RoyK^> to install now, copy the driver out to a usb drive or something and load it manually
<sherr> Thanks - good luck!
<jronnblom> I tried that and the USB key is detected as /dev/sda and when I insmod the driver the disks show up as /dev/sdb... So the installer writes the grub to the USB key.
<RoyK^> heh - remove the usb key after copying the driver to /tmp ;)
<RoyK^> that is, insert the usb key after booting up
<RoyK^> that might help
<jronnblom> great idea :)
<jronnblom> That would make it work and on the first boot you boot the installer and go into rescue mode so you can mount the partition and add the mpt2sas driver there (if the boot-image that is generated doesn't contain the driver)
<RoyK^> I don't think you'll need 'rescue mode'
<RoyK^> just press alt+left after choosing language/keyboard, and there's a console there
<RoyK^> alt+f-something
<RoyK^> or alt+left/right
<jaypur> hi
<jaypur> does anyone use scp
<jaypur> ?
<RoyK^> well of course
<bogeyd6> !anyone | jaypur
<ubottu> jaypur: A large amount of the first questions asked in this channel start with "Does anyone/anybody..."  Why not ask your next question (the real one) and find out?
<RoyK^> hehe
<RoyK^> jaypur: what about scp?
<jaypur> RoyK, i'm trying to move some files and it says can't connect to port 22, but i can't open port 22, so i'd like to change the port for scp....
<RoyK^> heh
<RoyK^> scp uses ssh
<RoyK^> if you can't use ssh into the box, you can't use scp
<RoyK^> apt-get install ssh
<RoyK^> on the box you're trying to transfer the data to
<jaypur> RoyK, my ssh is configured to use port 1010
<jaypur> should i change another port at ssh file???
<RoyK^> scp -p
<RoyK^> man scp
<RoyK^> the are and the tee and the eff and the emm joins up
<jaypur> let me try
<jaypur> RoyK,  scp -r -p 1010 /mydirectory user@myserver: serverdirectory
<RoyK^> remove the space after :
<RoyK^> better yet, use rsync
<jaypur> i'm trying to configure something to upload files to my server
<RoyK^> rsync -avP somedir somehost:/somedir
<jaypur> i'm trying ftp too
<jaypur> scp -r -p 1010 /home/jan/Imagens/tux.gif jan@jan-server:/var/www
<RoyK^> why do you have ssh on another port?
<jaypur> RoyK, because my internet provider blocks a lot of ports
<RoyK^> oh. even ssh?
<RoyK^> that's evil
<jaypur> yeah
<jaypur> llook
<jaypur> i've posted it my block
<jaypur> blog
<jaypur> sorry
<RoyK^> scp -p someport -r somedir somehost:newdir
<RoyK^> should work
<RoyK^> check first if you can ssh -p to that port
<jaypur> http://linuxjay.blogspot.com/2009/07/portas-bloqueadas-pelo-velox.html
<jaypur> RoyK, i can use ssh with port 1010, and i'm using nautilus -X on it... but i can't transfer files from nautilus_desktop to nautilus_server
<RoyK^> I don't understand much of your language - is it portugese?
<jaypur> yeah brazilian portuguese
<jaypur> you can translate it by google
<jaypur> it has an app at the right top
<jaypur> i installed vsftpd
<jaypur> but i don't know how to add a ftp user or how to connect to it =//
<RoyK^> just use scp
<RoyK^> or rsync
<jaypur> =/
<RoyK^> less hassle
<RoyK^> rsync uses ssh as transport
<jaypur> but do you know how to set vsftpd?
<RoyK^> yes
<RoyK^> I use it a lot
<jaypur> i think its better for me
<RoyK^> but the manual is easier than asking me
<jaypur> use ftp...
<jaypur> where can i find the manual?
<RoyK^> I mostly use rsync these days
<RoyK^> man vsftpd
<jaypur> i've already installed and configured
<RoyK^> that's a start
<RoyK^> the config file has a lot of docs too
<jaypur> https://help.ubuntu.com/6.06/ubuntu/serverguide/C/ftp-server.html
<RoyK^> just read the manual before asking, please
<jaypur> aaaargh
<jaypur> but what's happening to scp?
<RoyK^> you still haven't given us an error message
<jaypur> aah sorry
<jaypur> ssh: connect to host jan-server port 22: Connection refused
<jaypur> lost connection
<jaypur> my port 22 is blocked and i can't get it open
<RoyK^> man scp
<RoyK^> what does that tell you?
<jaypur> ssh: connect to host jan-server port 22: Connection refused
<jaypur> lost connection
<RoyK^> still port 22
<RoyK^> didn't you say you were using another?
<jaypur> yup
<jaypur> scp -r -p 1010 /home/jan/Imagens/tux.gif jan@jan-server:/var/www
<RoyK^> then why do you try port 22?
<jaypur> is that right???
<RoyK^> you didn't RTFM, did you?
<jaypur> i've read something...
<RoyK^> try the top line
<RoyK^> about the options
<RoyK^> it says something about port
<jaypur> what? i know the problem is the port
<jaypur> but i don't know how to connect it on a different port.... 1010 is my ssh port
<RoyK^> what does the manual say about port?
<jaypur> i think it uses the same port of ssh?
<klaas> I think its -P if I remember right :)
<klaas> but if you use the site more often you should consider a bookmark in ssh config
<klaas> that one is used by scp aswell
<jaypur> klaas, right
<jaypur> klaas, connected
<jaypur> klaas, let me try, now it's right is -P
 * RoyK^ frowns
<RoyK^> jaypur: please RTFM next time and don't ask others to do it for you
<jaypur> now i need to set the right directory :)
<jaypur> thanks
<jaypur> klaas, :)
<viezerd> man pages are SO handy, I wonder why nobody reads them
<jaypur> yeaaah i got it!
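The fix klaas spotted, written out: for scp the port flag is capital -P, while lowercase -p means "preserve times and modes" and is silently accepted, which is why the attempts with `-p 1010` still connected to port 22 (and treated "1010" as a file to copy). klaas's ssh config "bookmark" makes the flag unnecessary for ssh, scp and rsync alike; a sketch of the entry, using the host and port from the chat:

```shell
# ~/.ssh/config (create if missing; chmod 600)
Host jan-server
    Port 1010

# With the bookmark in place these all use port 1010 automatically:
#   ssh jan-server
#   scp -r /home/jan/Imagens/tux.gif jan@jan-server:/var/www
#   rsync -avP -e ssh /home/jan/Imagens/ jan@jan-server:/var/www/
```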
<RoAk> kirkland`, heya!
<kirkland`> RoAk: yo
<RoAk> kirkland`, hey are you gonna do changes to testdrive in this upcoming days?
<kirkland> RoAk: i uploaded about 2 minutes ago :-)
<kirkland> you have a merge, RoAk ?
<RoAk> kirkland, well I already started with a first modularization, i'm 70% done right now
<RoAk> so I was wondering if you could freeze the code for a few days so that I can finish
<RoAk> so that I work on up to date code
<RoAk> instead of having to merge your changes in my modularization :)
<kirkland> RoAk: heh, sure
<kirkland> RoAk: i should make any major changes between now and Lucid release
<kirkland> RoAk: "shouldn't"
<kirkland> RoAk: ie, it's basically frozen now
<RoAk> kirkland, awesome then. I'm gonna merge your changes into my changes then and bomb you with questions later today since I already have a few :)
<kirkland> RoAk: cool
<kirkland> RoAk: i'm not going to be around much; drop me an email with your questions
<RoAk> kirkland, ok if not we can just discuss them tomorrow. Anyways, Btw... I'll also probably change the way the config is loaded and remove the code that defines the ISO list from there and put it on the main code
<kirkland> RoAk: okay ... is there a better way of sourcing files in python?
<RoAk> kirkland, that's what I was trying to investigate last night but I decided to first do an initial modularization and deal with that later. But yes, I was planning on reading the file and returning a list with all the variables and values
<RoAk> instead of executing the file
<kirkland> RoAk: hrm, why?
<kirkland> RoAk: there's actually logic in there
<kirkland> RoAk: i kind of like exec'ing the file
<kirkland> unless there's a clearly better way
<RoAk> kirkland, yeah me too, but the problem is that if I exec the file inside a function and not in the global namespace, the variables no longer exist
<RoAk> i mean, i tried executing the file in the function, and then tried using the ISO variable, and it wasn't recognized. It only works when it is executed in the global namespace
<hggdh> yeee testdrive is getting a *lot* of attention :-)
<RoAk> hggdh, that's my GSoC project :)
<hggdh> RoAk, cool, good to know... I will wait for your code, then (also was considering factoring all the VM calls)
<hggdh> Dustin just added a way to call a previously created VM (this is the upload he just did); in doing so he preempted me...
<RoAk> hggdh, VM calls are actually on my todays TODO list
<hggdh> RoAk, cool. I will just wait, then :-)
<gbhambri> Does anyone have an idea about putting filters in the syslog.conf file?
<kirkland> RoAk: interesting
<kirkland> hggdh: \o/
<kirkland> RoAk: hggdh: there's only one more change i really need to make before lucid releases
<kirkland> RoAk: hggdh: and that's a good way of determining the *next* release's code name
<kirkland> RoAk: hggdh: such that 10.04 users will be able to testdrive 10.10, 11.04, 11.10, 12.04, etc.
<kirkland> RoAk: hggdh: I'm going to ask the IS guys to create a file at http://cdimage.ubuntu.com/iso.list which is just a find . -name "*.iso" or something
<kirkland> RoAk: hggdh: but i'm open to other ideas ....
<RoAk> kirkland, wouldn't it just be easier to change that in the settings?
<RoAk> i mean in the config file?
<kirkland> RoAk: ah, but 10.04 will be frozen; don't want to SRU that
<kirkland> RoAk: but yes, that's why i put that in the config file a few weeks ago
<kirkland> RoAk: it was hardcoded in the script :-o
<RoAk> kirkland, just leave it like it is then, since otherwise the user will have to change that in testdriverc every time
<kirkland> RoAk: i was thinking something like this:
<kirkland> wget -q -O- http://cdimage.ubuntu.com/daily/current/MD5SUMS | head -n1 | sed "s/^.*\*//" | sed "s/\-.*$//"
<kirkland> RoAk: could you translate that into python, and have it set it to lucid on exception? :-)
<RoAk> kirkland, ok will do it today
<kirkland> RoAk: thanks, don't you think that would be a bit more dynamic?
<kirkland> RoAk: so that Lucid's testdrive version will work with MM, NN, OO, PP .. ?
<kirkland> RoAk: without us having to change the code?
<RoAk> kirkland, indeed
<kirkland> RoAk: great, thanks; okay, i gotta run
<RoAk> kirkland, ok enjoy the day :)
<kirkland> RoAk: cheers ;-)
<kirkland> RoAk: drop that one merge in before your refactor, if you don't mind ;-)
<kirkland> RoAk: adios
<RoAk> kirkland, will do
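The codename lookup kirkland sketches above, wrapped with the requested fall-back to lucid; shown in shell here (inside testdrive it would be Python), with the URL and pipeline exactly as given in the chat:

```shell
#!/bin/sh
# Take the first MD5SUMS entry from the daily image tree and strip it
# down to the release codename; if the fetch fails or yields nothing,
# default to "lucid" as requested above.
codename=$(wget -q -O- http://cdimage.ubuntu.com/daily/current/MD5SUMS 2>/dev/null \
    | head -n1 | sed -e 's/^.*\*//' -e 's/-.*$//')
[ -n "$codename" ] || codename=lucid
echo "$codename"
```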
<LinuxAdmin> hi guys
<LinuxAdmin> I've got 2 servers for virtualization purposes
<LinuxAdmin> I want to have the possibility to migrate VMs between physical hosts
<hggdh> kirkland, I guess having a +1 seed file somewhere is a good idea
<LinuxAdmin> but now I'm not sure if I should use NFS or DRBD
<LinuxAdmin> which do you advise?
<hggdh> LinuxAdmin, you mean running VM1 on either hostA or hostB?
<hggdh> and *not* copying the images back and forth?
<jronnblom> Got Lucid to install to my new DELL by manually loading the driver from an USB stick. After the install I had to add the -server kernel instead of the -generic that was installed by default
<sherr> jronnblom: That's good stuff. Did you have to mess with the initrd at all?
<sherr> It seems strange that the server install starts up with the -generic kernel though?
<jronnblom> nope, it detected the card without problem
<jronnblom> mm, I agree
<docta_v> is there a good open source job scheduler for linux?
<dougsk> you mean like cron or at ?
<VSpike> Does ubuntu server by default save and restores iptables?
<klaas> VSpike reboot & look? :)
<docta_v> dougsk: like a batch job manager
<docta_v> when you have a bunch of different systems and need to coordinate and pause or retry failed jobs
<dougsk> docta_v: yeah I can see that, .... uhmm tivoli :D nah teasing, not sure, if you find something let me know.
<dougsk> docta_v: you might get rancid to do something like you want.
<dougsk> docta_v: designed more for pulling switch configs inputting into cvs and implementing change tracking but you could modify the connectors to do at 2 minutes telinit 6 or something like that
<sherr> VSpike: Ubuntu has a firewall system called "ufw" - see : man ufw (can save/load etc.)
<docta_v> VSpike: firehol is pretty cool too if you need something more robust
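To the original question: no, a stock Ubuntu server does not save and restore iptables rules across reboots by itself; ufw, mentioned above, is the packaged way to get persistence. The classic manual approach is to dump the rules once and reload them when the interface comes up; a sketch (the file path is a common convention, not a requirement):

```shell
# Dump the live ruleset once (as root):
sudo sh -c 'iptables-save > /etc/iptables.rules'

# Reload at boot: add to the relevant iface stanza in
# /etc/network/interfaces:
#
#   pre-up iptables-restore < /etc/iptables.rules
```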
<sherr> docta_v: I have heard good things about gearman (http://gearman.org/index.php). In universe.
<docta_v> yeah gearman is pretty cool
<docta_v> i think we're going to have to do some in house stuff.. it's just a big mess of batch jobs right now
<sherr> docta_v: what are the jobs doing?
<docta_v> some of them use gearman now actually
<dougsk> sherr: docta_v wow, that's quite cool, thanks for the tip.
<docta_v> database stuff mainly.. this is a medium sized web company i'm dealing with
<docta_v> some of it is fcron, some new stuff is gearman
<docta_v> it would be nice to have some kind of dashboard where i could see everything but it's probably not feasible with the current state of things
<sherr> Probably something you'll need to build :-)
<sherr> fcron .. not heard of that before.
<docta_v> it's like a slightly better cron
<VSpike> klaas: that's an option, true :) It's my gateway machine so it's disruptive, but yeah, that's plan B.
<VSpike> sherr/ docta_v : thx, I have iptables doing most of what I want - was just trying to get linux-igd working
<VSpike> It would be nice if imap used http as a transport
<VSpike> imap seems like something that would be handled better by a REST api
<VSpike> I'm on a high latency link, and all the imap clients seem very cavalier about dropping connections and then re-opening when needed
<VSpike> The whole login/check capability/get folder status can take 10 seconds or so
<VSpike> And they do it every time I switch folder
<docta_v> switch to gmail... that's my advice
<docta_v> or google apps
<docta_v> email is way too much of a pain imho
<brontosaurusrex> is there something i can use as virus checker?
<jpds> brontosaurusrex: clamscan
<brontosaurusrex> mainly to check for win viruses?
<jpds> brontosaurusrex: Yes.
<brontosaurusrex> jpds: thanks, pain to setup or? (i would just like to scan few file folders once or twice a day)
<jpds> brontosaurusrex: sudo apt-get install clamav clamav-data; clamscan <file-to-check>
<jpds> That's about it.
 * jpds → supper.
<brontosaurusrex> i'd do a cron?
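On the cron question: yes. freshclam (part of the clamav package set) keeps the signature database updated on its own, so the job only needs to run the scan. A sketch of a root crontab entry (sudo crontab -e); the directory, times and log path are placeholders:

```shell
# m h dom mon dow  command
# Scan the folder recursively twice a day; -i prints infected files only.
0 6,18 * * *  clamscan -ri /home/shared >> /var/log/clamscan.log 2>&1
```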
<VSpike> docta_v: mm yeah, i do wonder about that. I quite like having control over my server though,
<qman__> VSpike, pop3 has less overhead, but has the downside that it's a "deliver once and delete" system
<qman__> even though it's slow, I like imap because I like to get my email on my desktop and my phone
<qman__> and with it I can get all my messages with either, not just whichever happened to download the message first
<qman__> but, if you only have one client downloading the messages for any one mailbox, pop3 will get the job done quicker
<nosse1> I have a U. server running karmic, but it has been upgraded from a previous version. I find "scrollkeeper" among the obsolete packages in aptitude. However, if I try to remove it I need to remove a real bunch (79!) of packages, including update-manager.
<nosse1> I'm sceptical about removing the lot, but I cannot understand why scrollkeeper is among the obsoleted ones if it is req'd by the system
<nosse1> Got it... rarian-compat is a replacement for scrollkeeper
<sherr> nosse1: it's probably just an old "ubuntu-desktop" requirement. Aptitude can sometimes tell you why (aptitude why <package>)
<nosse1> BTW: After the reinstall, the server lets the users log in before e2fsck is done checking the user's home drive
<DormantOden> hey, just wondering if having about 50 pop3-login processes is normal...?  xD
<RoAkSoAx> kirkland, ping
#ubuntu-server 2011-03-14
<tohuw> I just installed mysql. How do I run "mysql" if the user mysql has no password?
<tohuw> oh, mysql -u root -p
<tohuw> nevermind
<uvirtbot> New bug: #734645 in openssh (main) "package openssh-client 1:5.3p1-3ubuntu5 failed to install/upgrade: " [Undecided,New] https://launchpad.net/bugs/734645
<Cherish> Why can't I log in to my web site when I use auth_digest?
<overrider> I am having trouble with my 10.04 samba installation. How can i completely remove it, then reinstall it? apt-get purge does not do it, because it leaves /etc/samba and other files in place. Manually removing those also bad, because then a reinstall will not recreate those default files. Any clue? Thanks
<whalesalad> Anyone know how to get the latest nginx package running on karmic? I need at least 0.8.4 and would rather not build from source (I enjoy using apt-get, not from ease of use, but for portability)
<EvilPhoenix> whalesalad:  no backports?
<whalesalad> EvilPhoenix: ?
<EvilPhoenix> s/no/did you check the/
<whalesalad> ah, not sure what those are
<EvilPhoenix> they're not-completely-supported updates
<whalesalad> where can I look for something like that?
<EvilPhoenix> pushed to an older distribution from a newer build
<EvilPhoenix> uhm...
<EvilPhoenix> afaik, it'd be added to your sources.list file
<EvilPhoenix> you were looking for nginx 8.4 or greater?
<whalesalad> yes
<whalesalad> im doing an "apt-get upgrade" right now
<EvilPhoenix> iio
<EvilPhoenix> oop*
<EvilPhoenix> uhm...
<EvilPhoenix> hate to say it, but according to packages.ubuntu.com (the site for the package lists), there's no nginx >= 0.7.x available in anything
<EvilPhoenix> cept the natty repos and natty isnt released
<whalesalad> yea that's retarded to be honest
<whalesalad> 8.54 is the current nginx stable
<EvilPhoenix> well let me explain it like the QA people explained it to me about backports
<whalesalad> okay
<EvilPhoenix> "there's nobody willing to put the time nor the effort into making something that's proven stable on the newer builds compatible with the older builds"
<EvilPhoenix> s/nobody/almost/
<EvilPhoenix> and also
<EvilPhoenix> from experience
<EvilPhoenix> "Something proven to work on a newer build might just completely fubar your computer"
<EvilPhoenix> ^  true statement
<whalesalad> okay
<EvilPhoenix> so yeah, you might be stuck compiling from source ;/
<EvilPhoenix> :/ *
<uvirtbot> EvilPhoenix: Error: "true" is not a valid command.
<whalesalad> okay
<EvilPhoenix> shut up bot
<whalesalad> hah
<EvilPhoenix> god, this bot's almost as bad as my loco team's bot >.>
<EvilPhoenix> but yeah, that's one reason backports arent really commonly done
<EvilPhoenix> only time i've seen a DECENT backport is with the 2.6.35 kernel backport from maverick => lucid
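For the record, enabling the backports pocket EvilPhoenix describes is one sources.list line; a sketch for whalesalad's karmic box (the standard archive mirror is assumed, substitute your own):

```shell
# Add the backports pocket (often already present but commented out)
echo 'deb http://archive.ubuntu.com/ubuntu karmic-backports main restricted universe multiverse' \
  | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt-get update
# Packages from backports must be requested explicitly:
sudo apt-get install -t karmic-backports nginx
```

As EvilPhoenix found on packages.ubuntu.com, karmic-backports doesn't actually carry a newer nginx, so this shows the mechanism only.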
<whalesalad> how painless is it to do full distribution upgrades btw?
<whalesalad> like karmic -> lucid
<EvilPhoenix> um...
<whalesalad> not that it helps in this specific case..
<EvilPhoenix> do you want my brutally honest opinion, or a somewhat honest opinion?
<EvilPhoenix> or both
<whalesalad> well
<whalesalad> I have a production server in sweden that I am in control of
<whalesalad> I can't afford a ton of crap, because I only have one to play with (they only have one prod server)
<EvilPhoenix> so...
<whalesalad> so I can't stage everything (updates)
<EvilPhoenix> basically you cant fubar it
<whalesalad> yeah
<whalesalad> but at the same time, I have zero patience
<whalesalad> :<
<EvilPhoenix> well...
<whalesalad> I frequently ignore major updates to linux boxes
<EvilPhoenix> lets put it this way
<whalesalad> and I'd like to get into the habit of keeping them fresh
<EvilPhoenix> in my honest opinion...
<EvilPhoenix> the best upgrade is a fresh install and you restore data from backups
<EvilPhoenix> BUT
<EvilPhoenix> that's because there's ******ALWAYS****** the chance something goes wrong during upgrading
<whalesalad> ah, okay
<whalesalad> that makes sense
<EvilPhoenix> i've successfully done the jaunty -> karmic upgrade
<EvilPhoenix> then the subsequent karmic -> lucid upgrade failed badly
<EvilPhoenix> hundreds of broken things
<EvilPhoenix> ended up fresh installing anyways :P
<whalesalad> right on
<whalesalad> I need to start my own hosting company
<EvilPhoenix> so yeah, you could do dist-upgrade if ya want
<EvilPhoenix> but be aware: you should   A L W A Y S   have a backup copy of everything
<whalesalad> yeah
<whalesalad> well the db server is external, and the code is in git
<EvilPhoenix> JUST IN CASE the upgrade fails miserably
<whalesalad> only thing I need is config info
<whalesalad> do you happen to have any kindof backup in place for your /etc dir or anything like that?
<EvilPhoenix> for my boxes?
<whalesalad> also, hmm. this was my fail. I tried to install the 8.53 deb directly from natty or whatever
<whalesalad> ran into some issues... so tried to sorta undo all that now nginx appears as uninstalled
<whalesalad> but nginx is still installed (the karmic one), and it's still running
<EvilPhoenix> RAID6 across 6 disks + local-net rsync on a weekly basis + monthly physical cloning of the drive
<EvilPhoenix> that's my setup ;P
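A minimal version of the /etc backup whalesalad asks about, far short of EvilPhoenix's RAID-plus-rsync setup but enough before a dist-upgrade (paths are illustrative):

```shell
# Archive /etc and record the installed-package list before upgrading
sudo tar czf /root/etc-$(date +%F).tar.gz /etc
dpkg --get-selections > /root/selections-$(date +%F).txt
# After a fresh install, the selections file can seed a reinstall:
#   sudo dpkg --set-selections < selections-DATE.txt
#   sudo apt-get dselect-upgrade
```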
<whalesalad> sounds pretty tasty... I haven't graduated from VPS hosting yet
<EvilPhoenix> (yes, i've got some mean hardware here :P)
<EvilPhoenix> yeah, i've got 2 VPSes for my business
<EvilPhoenix> this box here though with the crazy setup like that...
<EvilPhoenix> no, that box is my personal server box :P
<EvilPhoenix> that thing's too good for hosting things cept the stuff i personally need >:)
<EvilPhoenix> but yeah, VPS hosting is decent
<EvilPhoenix> i personally cba to set it up
<EvilPhoenix> (hence why I dont have a dedi box somewhere :P)
<lifeless> kirkland: hi
<lifeless> kirkland: (i realise its nuts-am for you). I'll ping you tomorrow - I want to chat about a lp bug on blueprints you filed
<Cherish> apache2-mpm-worker doesn't support php5 ?
<Cherish> I must use apache2-mpm-prefork ?
<_ruben> iirc, yes
<soren> Cherish: It's sort of the other way around, but the effect is the same.
<soren> Cherish: It's not apache-mpm-worker that doesn't support php5. It's php5 that doesn't support apache-mpm-worker. php on UNIX isn't guaranteed to be thread safe, so you're stuck with prefork.
<Cherish> soren: maybe can use fastcgi
<soren> Cherish: Sure.
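To see which MPM an existing Apache is actually running (the crux of soren's explanation), something like:

```shell
# Prints "Server MPM: Prefork" or "Worker"; mod_php requires prefork
apache2ctl -V | grep -i 'Server MPM'
# The PHP module's package dependencies enforce this at install time:
apt-cache depends libapache2-mod-php5 | grep mpm
```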
<jamespage> ttx: ping
<ttx> jamespage: pong
<HackeMate> hello
<HackeMate> i can access my webdav server but i cant write on it
<Tophat> anyone mind helping me setup an SNTP server?
<jetole> Hey guys. I have a problem with libvirt failing to autostart some domains in 10.04.2 on x86_64. I traced both issues to one common factor: it seems libvirt is starting before networking is set up. Does anyone know how I can change when it boots using the new-ish /etc/init (since it no longer uses the init.d folder)?
<soren> jetole: start on network-device-up IFACE=eth0
<soren> jetole: For instance.
<soren> jetole: Eerr..
<soren> jetole: net-device-up
<soren> jetole: Not network.
<jetole> so I replace start on runlevel [2345] with start on network-device-up IFACE=br0 ?
<soren> jetole: More likely you want IFACE=br0
<soren> net-device-up, but yes.
 * jetole gives it a try
<jetole> can I specify two nics?
<soren> jetole: You might want to do "start on runlevel [2345] and net-device-up IFACE=br0"
<jetole> I have NFS on SAN and DNS on WAN and both are relating to two different VMs
<soren> jetole: You might want to do "start on runlevel [2345] and net-device-up IFACE=br0 and net-device-up IFACE=br1"
<soren> Err..
<jetole> can I add another and?
<soren> Sorry, that came out weird.
<jetole> I knew what you meant
<soren> Yeah, just like I did in the last one.
<jetole> start on runlevel [2345] and net-device-up IFACE=br0 and net-device-up IFACE=br1
<jetole> ?
<jetole> that should run?... ah lemme reboot and see
<jetole> *let me
<soren> Looks ok to me.
<jetole> rebooting the server now
 * jetole thanks god for KVM over IP
<Tophat> needing to setup SNTP server to serve my routers, wanting to use ubuntu-server -- want to advertise local machine time instead of relaying public SNTP servers.  is this possible?
<jetole> After I get this running I need to go over to the data center and pluck a failed disk on a RAID 5 that used to have a hot spare
<jetole> Tophat: don't know about local machine time but have you considered NTP vs. SNTP (afaik they are different protocols)
<jetole> soren: well it worked though I still have one VM which didn't boot. Looking into the _why_ now
<jetole> soren: Do you know if there is a way to run it after either a) all mounts have finished or b) NFS mounts have finished?
<RoyK> Tophat: apt-get install ntp
<jetole> though I have _netdev as an option for the NFS mount in fstab, it seems like libvirt is still starting too soon
<RoyK> jetole: sntp is a subset of ntp, an sntp client can query an ntp server
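For Tophat's goal of advertising the local machine clock instead of relaying public servers, ntpd's classic (if discouraged) answer is the undisciplined local clock driver; a sketch for /etc/ntp.conf, with an invented LAN range:

```
# Undisciplined local clock: serve this machine's own time.
# Stratum 10 marks it as a low-quality, last-resort source.
fudge stratum 10
# Allow the routers' subnet (example range) to query but not modify
restrict mask nomodify notrap
```

SNTP clients such as routers can query this like any NTP server, per RoyK's point that SNTP is a subset of NTP.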
 * jetole chose openntp since it doesn't rely on every NIC the way ntp does
<RoyK> rely on every nic?
<jetole> plus openntp was written by Theo from obsd and the obsd guys and they write good and secure software
<soren> jetole: Sure.
<soren> jetole: All mounts: filesystems
<soren> jetole: All nfs mounts: remote-filesystems
<jetole> soren: I appreciate it but don't get it. How would I re-write the following: start on runlevel [2345] and net-device-up IFACE=br0 and net-device-up IFACE=br1
<jetole> ?
<soren> jetole: start on net-device-up IFACE=br0 and net-device-up IFACE=br1 and filesystems
<jetole> soren: thanks again
<jetole> RoyK: ntp seems linked to every nic on a system and it starts / stops on every nic etc and is honestly messy on systems with a lot of nics, though I hear that has been sorta fixed if you edit the ntp.conf; it's still the default
<jetole> RoyK: openntp doesn't seem to have that issue
<jetole> soren: I now understand what your comment meant after you showed the example
<zul_> good morning....everyone except for soren in which case good day
<jetole> lol
<jetole> soren: I used the following string and libvirt didn't start. I only added " and filesystems". Do you know what might have happened?: start on runlevel [2345] and net-device-up IFACE=br0 and net-device-up IFACE=br1 and filesystems
<pmatulis> jetole: yes, /etc/network/if-up.d/ntpdate is strange to have
<jetole> Does anyone have a good doc on upstart?
<pmatulis> jetole: i believe the upstream docs are best right now
<jetole> soren: crap. nevermind. it did start. it was just a late start but it seems like everything is working fine
<jetole> soren: thanks again for the help
<jetole> pmatulis: don't know what "the upstream docs" is referring to but I am specifically looking for start on conditions
<soren> jetole: Sure.
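Pulling the thread together, jetole's final start condition in /etc/init/libvirt-bin.conf (assuming that is the job file name on 10.04; the bridge names are his) reads:

```
# /etc/init/libvirt-bin.conf — only the "start on" line changes;
# the rest of the job file stays as shipped
start on runlevel [2345] and net-device-up IFACE=br0 and net-device-up IFACE=br1 and filesystems
```

Per soren, `filesystems` waits for all mounts and `remote-filesystems` for just the NFS ones; jetole reported the job then started late but correctly.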
<jetole> Ah I am going to the data center for a couple of hours. bbl and thanks for all the help
<patdk-wk> upstream would mean, debian, or the manainter of the program itself
<patdk-wk> !upstream
<patdk-wk> stupid bot
<uvirtbot> New bug: #734804 in squid (main) "package squid 2.7.STABLE7-1ubuntu12.2 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/734804
<RoAkSoAx> morning all
<kirkland> lifeless: howdy, here now
<polyvisual> Hello
<polyvisual> Can anyone suggest how I can monitor the amount of traffic between my web server and a range of IP addresses?
<polyvisual> I more lightweight the better. I looked at vnstat but that's 'per interface'.
<RoyK> awstats?
<RoyK> I don't think that allows for filtering per IP address or range, though
<polyvisual> Cheers, but awstats is a bit too heavyweight.
<polyvisual> The server connects to a bunch of proxy servers, pulls a bunch of data, then closes the connection. All I need is the bandwidth for the range of ip addresses.
<kim0> hallyn: hi there .. kvm-spice is crashing consistently with "kvm-spice: /build/buildd/qemu-kvm-spice-0.14.0+noroms/qemu-kvm.c:1724: kvm_mutex_unlock: Assertion `!cpu_single_env' failed"
<speakman> Not quite the right channel maybe, but do any of you have any idea how to trace what's causing this: http://pastebin.ubuntu.com/580065/ ?
<Caribou1> speakman, MCE are most often caused by hardware issues. If the problem repeats itself frequently, you could enable crash dump capture to get a vmcore for analysis
<screen-x> Hi all. Installed lucid on new server, adding an nfs mount to fstab causes boot to fail. I booted with init=/bin/bash, and commented out the nfs line. Server then boots fine including networking.
<Caribou1> You could also check with your hardware vendor if you have any kind of hardware support
<geekbri> screen-x: is it possible it just needs an option like nobootwait?
<screen-x> geekbri: very possible :)
<screen-x> I haven't had to use that option before, new requirement in lucid?
<geekbri> screen-x: its just an option that says the system does not have to report an error if the partition isn't found.
<screen-x> geekbri: thing is, its a necessary partition, so really I do want it to be present and mounted before  the system considers itself booted.
<screen-x> s/partition/nfs mount/
<geekbri> screen-x: hrm yes i see
<patdk-wk> use _netdev instead of nobootwait
<kirkland> lifeless: saw your comment on https://bugs.launchpad.net/bugs/487337
<uvirtbot> Launchpad bug 487337 in launchpad "bug status for bugs linked in a blueprint is not shown" [Critical,In progress]
<kirkland> lifeless: your reading is fine;  that would be nice
<kirkland> lifeless: thanks
<screen-x> patdk-wk: can't see _netdev in man page for fstab or mountall, use it as an option in fstab?
<patdk-wk> yep
<patdk-wk> it's even in the howto: https://help.ubuntu.com/community/SettingUpNFSHowTo
<geekbri> screen-x: it basically says "this device needs networking"
<screen-x> geekbri: isnt that implicit in type=nfs?
<uvirtbot> New bug: #565578 in chkrootkit (main) "chkutmp crashed with SIGSEGV in _Unwind_Backtrace()" [Medium,Incomplete] https://launchpad.net/bugs/565578
<screen-x> I've put _netdev in fstab, rebooting now to see if it works.
<geekbri> screen-x: I guess not :)
<screen-x> still failing with "mount.nfs: DNS resolution failed for blah.blah.blah"
<speakman> Caribou1: It's completely new stuff with a fresh install of Ubuntu 10.10. Intel Xeon W3680 with 24GB Kingston ECC RAM on an Asus WS Revolution motherboard, 850W Cooler Master PSU and a CPU temp not above 45 degrees Celsius.
<geekbri> screen-x:  hrm well thats strange.  just out of curiosity could you temporarily use the IP instead of the DNS address
<speakman> Caribou1: I have got mcelog installed now, but what do you suggest for further investigation? It happens randomly and has nothing to do with any load (yesterday it happened while I was logged out and there are no server daemons or such running == completely idling)
<patdk-wk> or add the dns name and ip to /etc/hosts
<Caribou1> By hardware issue, I did not mean hardware failure. I recently investigated an MCE which was caused by the F/W of the 10 GbE NIC
<geekbri> yup thats only dangerous if you forget to remove it eventually :(
<speakman> Caribou1: I'm all ears! And since all parts are new, there should be no problems returning anything dysfunctional!
<patdk-wk> geekbri, remove it? don't update it :)
<screen-x> I'll try adding a hosts entry.
<geekbri> patdk-wk: well if its something that has an actual DNS entry you dont want to have it always be overwritten or manually edited, especially if its a file system critical to the system :(  what if somebody changes the DNS entry but the hosts entry is still there!
<Caribou1> Well, you might want to configure crash dump capture (aptitude install linux-crashdump I think) then, once you get a valid vmcore, have a look at it with the crash utility so see if there is any more information in the kernel ring buffer
 * patdk-wk just doesn't use dns at all for nfs
<patdk-wk> or iscsi
<geekbri> yeah i dont either
<patdk-wk> but for vmware it pretty much forced me too
<geekbri> i started using KVM and im hoping we will no longer need vmware ;)
<patdk-wk> I can't grow enough with kvm
<patdk-wk> plus, the whole xen -> kvm switch really didn't sit with me well
<patdk-wk> and I am not going go through another switch, kvm -> x
<BCS-Satori> Is there a way to refresh my partitions on /dev/md0 (mdadm drive) without rebooting so that the OS sees the drive?  If I do "parted /dev/md0" and print I can see the partition but the disk is missing when I do "ls /dev/md0*"
<geekbri> hey fair enough, we aren't too big of an operation and the KVM stuff is only internal so hopefully it will do for us
<patdk-wk> ya, kvm just doesn't support memory overcommit well enough yet
<screen-x> BCS-Satori: partprobe
<BCS-Satori> screen-x: I tried already, it did not detect the new partition
<BCS-Satori> screen-x: I think its a mdadm refresh issue maybe?
<screen-x> :(
<hallyn> kim0: still working on setting up a test env.  does every guest os do that for you?  And only since version -ppa1?
<screen-x> BCS-Satori: I'm not sure what else to do, maybe could restart the array if you arent using any of the other partitions?
<BCS-Satori> screen-x: ya I have iscsitarget running on several partitions on /dev/md0.  Its in a cluster I could reboot figured there has to be a way to not reboot
<BCS-Satori> screen-x: seems blockdev and hdparm give this error "BLKRRPART: Device or resource busy" which is probably why partprobe doesn't work
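One workaround worth knowing when BLKRRPART is refused on a busy device (not suggested in the channel, so treat it as an assumption): kpartx creates device-mapper entries for the new partitions without asking the kernel to reread the partition table at all:

```shell
sudo apt-get install kpartx
# Maps each partition found on /dev/md0 to a /dev/mapper node
# (typically /dev/mapper/md0pN; names may vary)
sudo kpartx -a -v /dev/md0
ls /dev/mapper/md0p*
```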
<piken> Does anyone know of a way to get a list of all package dependencies including the dependencies of those dependencies?
<piken> for example, build-essential package list and their dependencies.
<piken> I am trying to build a vmware studio image but it has no access to outside repos and I must have all the deps on the VMWare studio server.
<screen-x> geekbri: patdk-wk my server boots ok, with the nfs mount as an IP, rather than DNS name.
<screen-x> Seems like a bug
<geekbri> screen-x: i honestly would avoid DNS names for NFS mounts always anyway... it just adds another possible layer for failure :)
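The configuration screen-x ends up with looks roughly like this (server IP and paths invented for illustration):

```
# /etc/fstab — mount by IP; _netdev holds the mount until networking is up
192.168.1.10:/export/home  /srv/data  nfs  defaults,_netdev  0  0

# Or keep a hostname but pin it locally in /etc/hosts:
#   192.168.1.10  nfsserver.example.com
```

As geekbri warns, a pinned hosts entry silently overrides later DNS changes, so it trades one failure mode for another.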
<uvirtbot> New bug: #668295 in bind9 (main) "I just insert the CD of Ubuntu and waited for him to load the system and the error appeared." [Medium,Incomplete] https://launchpad.net/bugs/668295
<patdk-wk> that sounds like it could be a *fun* bug :)
<screen-x> heh, in bind9??
<raubvogel> upstart question: if I have "start on" in the config file for a given service with nothing after it, would I then have stopped this service  from  starting up?
<geekbri> screen-x: see just another reason to avoid DNS for nfs right ;)
<screen-x> geekbri: hmm ;-)
<uvirtbot> New bug: #633309 in bind9 (main) "host crashed with SIGSEGV" [Medium,Incomplete] https://launchpad.net/bugs/633309
<uvirtbot> New bug: #566697 in bind9 (main) "host crashed with SIGSEGV in lh_doall_arg()" [Medium,Incomplete] https://launchpad.net/bugs/566697
<nimrod10> piken, the closest that I can think of your use case is to use          apt-cache rdepends  ~Dpackage_name
<kim0> hallyn: I'm only testing the win7 guest. AFAIK, that bug is related to qxl driver. It seemed to have started with kvm-spice separate binary
<nimrod10> piken, also see if you can find something useful here : http://www.debian.org/doc/manuals/reference/ch02.en.html#_basic_package_management_operations
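For piken's recursive-dependency question, apt-cache can walk the whole closure directly (nimrod10's `rdepends` goes the other direction — reverse dependencies). Flags as in current apt-cache; older releases may lack some of the `--no-*` options:

```shell
# Every package build-essential transitively depends on, names only
apt-cache depends --recurse --no-recommends --no-suggests \
    --no-conflicts --no-breaks --no-replaces --no-enhances \
    build-essential | grep '^\w' | sort -u
```

The resulting list can be fed back to `apt-get install -d` (download-only) to collect the .debs for an offline repo such as the VMware Studio build; `apt-rdepends` (a separate package) offers similar output.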
<madmax_x> so i have a race condition here with my network ...after a power failure.... my ubuntu box boots before the dhcp server does...is there a way to have nm-applet or /etc/networking auto retry dhcp after a timed delay?
<laen> lol?
<patdk-wk> dhcp attempts for several min
<patdk-wk> maybe adjust the dhcp timeout in dhcp?
<madmax_x> im using nm-applet to manage the network connection from with X....it tries fails and sits there
<madmax_x> if i wait for the server to come up its as simeple as telling it to use "auto eth0"
<patdk-wk> maybe try #ubuntu?
<patdk-wk> network-manager isn't part of server
<madmax_x> sorry to bug you...thnx
<nimrod10> madmax_x, just restart the  network-manager service
<madmax_x> the problem is w/o an ip i cant get to the box...i have to go to the physical machine and reset the network
<nimrod10> ah, I didn't realize that!
<madmax_x> over the weekend the power company decided we didnt need power for 8 hours here at the university ... look at all the cool new 'features'...my system has a built in exercise function
<madmax_x> i tried the retry setting in dhclient.conf but it doesnt seem to apply to my nm-applet setup
<madmax_x> /etc/dhcp3/dhclient.conf
<hallyn> kim0: well, i don't think i can spend time on that bug.  There are much higher prio ones to be dealt with.  like grub2.vs.-vga-std.  We should keep talking to dev-zero about packaging into universe.  We can deal with the win7 client bug later
<dschuett> Could anyone point me towards a tutorial that explains how to configure ssl for multiple virtual hosts on the same ip in ubuntu server 10.04 LTS?
<mandad06> hi
<joschi> dschuett: http://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI
<mandad06> if someone can help me to install the terminal of ubuntu in my win7
<dschuett> thanks joschi! - just what i was looking for!
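The wiki page joschi links boils down to something like this (hostnames and cert paths invented; SNI needs Apache ≥ 2.2.12 and OpenSSL ≥ 0.9.8f, both satisfied on 10.04):

```
# /etc/apache2/ports.conf
NameVirtualHost *:443
Listen 443

# Two SSL vhosts on one IP, selected via SNI (the TLS server name)
<VirtualHost *:443>
    ServerName            site1.example.com
    SSLEngine             on
    SSLCertificateFile    /etc/ssl/certs/site1.pem
    SSLCertificateKeyFile /etc/ssl/private/site1.key
</VirtualHost>
<VirtualHost *:443>
    ServerName            site2.example.com
    SSLEngine             on
    SSLCertificateFile    /etc/ssl/certs/site2.pem
    SSLCertificateKeyFile /etc/ssl/private/site2.key
</VirtualHost>
```

Clients without SNI support (e.g. IE on Windows XP) always receive the first vhost's certificate.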
<Kimb> if someone can help me to install the terminal of ubuntu in my win7
<raubvogel> If I am using update-rc.d to disable, not remove, a script, why does it leave a startup script in rc1.d?
<BCS-Satori> I have several volumes on /dev/md0 carved which are offered via iSCSI to VMware, so the volumes contain VMFS Partition.  Is it possible to grow the volume in parted (or something else) by adding raw disk space on the end of the volume so that ESX can increase the partition table?
<anilhw> hi
<uvirtbot> New bug: #728385 in php5 (main) "apache2 crashed with SIGSEGV in xdebug_stack_element_dtor()" [Low,Incomplete] https://launchpad.net/bugs/728385
<RoAkSoAx> kirkland: howdy!! quick question :). For orchestra, is it planned to do automatic key exchange between puppetmaster and client?
<SpamapS> zul: "Opinion"?! come on.. thats at least Confirmed/Wishlist . ;)
<kirkland> RoAkSoAx: yeah, that's the idea;  i don't think we have an agreed upon solution for that yet
<kirkland> RoAkSoAx: i think that might be an oneiric target
<zul> SpamapS: what are you talking about
<RoAkSoAx> kirkland: I see, cause I'm just setting up puppet for the first time here and the whole signing process is indeed tedious
<joschi> RoAkSoAx: look into autosigning
<kirkland> RoAkSoAx: right;  the end goal is for Orchestra-deployed Ubuntu systems would be automatically linked to the puppetmaster, and vice versa
<joschi> kirkland: what's orchestra?
<SpamapS> zul: bug 705691
<uvirtbot> Launchpad bug 705691 in cobbler "cobbler-web should include a working configuration and a README file detailing the steps necessary" [Wishlist,Opinion] https://launchpad.net/bugs/705691
<RoAkSoAx> joschi: Ubuntu's installation and infrastructure service
<joschi> RoAkSoAx: any URL on that thing available for the public? it's kind of hard to google for it ;)
<zul> SpamapS: heh yeah
<RoAkSoAx> kirkland: right, so will the orchestra package pull puppet and set custom config files, or will puppet itself be modified for such purpose?
<RoAkSoAx> joschi: note that it is just the integration of different technologies to achieve our purpose :) https://launchpad.net/ubuntu/+spec/cloud-server-n-install-service
<joschi> RoAkSoAx: just interested in it. it sounds a lot like foreman
<hggdh> jamespage, got a problem with hudson-slave
<kman> Hello.  Anyone have a fool (that's me) proof way of resetting a user password?  I can't remember the login password to log on.  I've tried the esc at grub loader > recovery but it is always asking me to login in and I have forgotten the password.
<patdk-wk> mount the fs on a different computer
<patdk-wk> or use like a livecd
<kman> if I use ubuntu live do I then edit some file to allow the reset?
<joschi> kman: https://help.ubuntu.com/community/LostPassword
<RoAkSoAx> kirkland: so i was thinking and something like what was done with euca can be done in this case using puppet's master-side client certificate generation
<RoyK> hm... I have an issue here with Lucid and nfs/nis - a user can't chgrp a directory to another group which he also is a member of. Locally on the NFS server, this works, and testing with Maverick as the NFS server works
<RoyK> http://pastebin.com/WNwNz8La
<kman> Thanks will try the second way.  The first did not work for me.
<jamespage> hggdh - whats the issue?
<kman> I have tried all three suggested solutions in the link and none of them are working for me.  Can't find /bin/bash.  I think I will have to go with the live cd method.
<hggdh> jamespage, http://paste.ubuntu.com/580189/
<hggdh> jamespage, this was after a server restart
<hggdh> up-to-date natty
<jamespage> hggdh - looks like something might not be right on the hudson instance itself
<hggdh> jamespage, hum. Reboot as an option?
<jamespage> hggdh: might be a good first step
<hggdh> jamespage, will do it now
<jamespage>  hggdh: around but on and off for the next hour or so - ping me if this does not work
<hggdh> jamespage, ack
<jamespage> hggdh: full filesystem - now that I can see the log....
<hggdh> oh bloody hell
<jamespage> hggdh: suggest we shutdown hudson and clear up some of the older d-i logs
<kirkland> SpamapS: re: https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/707479
<uvirtbot> Launchpad bug 707479 in upstart "service <service> restart does not use an updated job configuration" [Medium,Triaged]
<kirkland> SpamapS: i was just about to patchpilot that one
<kirkland> SpamapS: i see cjwatson's comment from a week ago, though
<kirkland> SpamapS: and i also saw that jhunt has a big merge for natty too
<SpamapS> kirkland: yeah, jhunt is almost done.. I'd hold off a bit longer.
<kirkland> okay, thanks
<SpamapS> kirkland: tho I definitely want that in natty.
<SpamapS> I understand why restart works how it works.. but I fail to see where its actually ever useful that way. :-P
<SpamapS> keybuk has had a lot more time to think this through and has a few use cases I'm sure.
<kirkland> SpamapS: is he just going to commit your change upstream?
<kirkland> SpamapS: when ready?
<SpamapS> kirkland: no upstart-job doesn't exist upstream IIRC
<SpamapS> upstream all thats going to happen is the restart command will be documented more completely
<kirkland> SpamapS: got it, thanks
<jamespage> hggdh: hmmm - looks like a lot of the jobs don't clear down the workspace before they start
<jamespage> so they just get bigger and bigger over time.
<Orfeous> having some issues with SSL and apache2, i have generated a self-signed certificate but got some problems when restarting apache2, CN does not match servername...
<Orfeous> what configs can i show to help resolve this?
<SpamapS> kirkland: and thanks for looking into it.
<Orfeous> i followed the certificate making guide here: https://help.ubuntu.com/10.10/serverguide/C/certificates-and-security.html
<hggdh> jamespage, yes. I am considering only leaving the last -- say -- 10 runs
<jamespage> hggdh: thats not really the problem
<jamespage> hggdh: http://pastebin.com/0dyNu4hm
<jamespage> hggdh: that is for job 31 - it contains the archive for all previous jobs as well, as they were present in the workspace.
<hggdh> jamespage, oh
<jamespage> hggdh: unless the job clears its workspace first you get this incremental build up
<jamespage> lemme check which ones are doing this - I did fix a load automagically
<jamespage> hggdh: looks like the TBench and DBench (for natty) tests did not have the updated script; all others looked OK to me.
<hggdh> jamespage, indeed, the D|Tbench were old, I wrote them very early in the process
<hggdh> jamespage, seems they are all set now. I will monitor and see if we missed any
<uvirtbot> New bug: #303059 in samba (main) "Samba segfaults four-five times a day on Ubuntu Server (both hardy and intrepid)" [Undecided,Invalid] https://launchpad.net/bugs/303059
<axisys> !ipv6
<ubottu> For an introduction to IPv6 and information on tunneling IPv6 through IPv4 connections, see https://wiki.ubuntu.com/IPv6 | To disable IPv6 see https://help.ubuntu.com/community/WebBrowsingSlowIPv6IPv4
<axisys> i can ping my ipv4 ip using ping .. if i ping6 ipv6 ip i get "invalid argument" .. any idea why?
<lifeless> kirkland: hi
<lifeless> kirkland: so you're saying on https://bugs.launchpad.net/bugs/487337 that it wasn't the freeform text you cared about, it was the list of explicitly linked bugs?
<uvirtbot> Launchpad bug 487337 in launchpad "bug status for bugs linked in a blueprint is not shown" [Critical,In progress]
<axisys> lsmod | grep ipv6 does not show anything
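ping6's "invalid argument" is most often a link-local address given without a scope, and an empty lsmod can simply mean IPv6 is built into the kernel rather than a module. A sketch (address and interface invented):

```shell
# IPv6 support present even when no module is loaded?
test -d /proc/sys/net/ipv6 && echo "ipv6 is built in"
# Link-local (fe80::/10) targets need the interface spelled out:
ping6 -I eth0 fe80::21b:21ff:fe22:e865
# equivalently, using the scope-id suffix:
ping6 fe80::21b:21ff:fe22:e865%eth0
```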
<kman> OK I just tried doing the password change by esc at grub menu then recovery but I get the prompt for a root password.  then I tried the edit mode and still get prompted.  I just tried the LiveCD method where it suggests one changes the shadow file and the password is already set to the passwordless simple hash.  What to do now?
<kman> I have forgotten the password.  When I went looking in the home directory I expected to see more than just a Ubuntu account. but that is all I can see.
<kman> Anyone have any further suggestions before I reinstall?
<uvirtbot> New bug: #735017 in php5 (main) "package php5-fpm 5.3.5-1ubuntu2 failed to install/upgrade: subprocess new pre-removal script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/735017
<crohakon> What is a good, free, alternative to cPanel to manage shared hosting?
<Muuuh> Hi
<Muuuh> We'd like to setup a Ubuntu server box between our LAN and existing Sonicwall box to do some packet filtering/logging. I'm gathering information at this stage so I can convince the IT manager to let me use Ubuntu. Has anyone done such a setup ?
<b0gatyr> Hi, can someone shed some light: I have a monitor plugged into an ubuntu box whose max res is 1280x1024. I plugged it into a KVM switch and now the resolution is 1024x768 and it gives me a max res of 1360x768, which is incorrect.. can anyone help
<b0gatyr> how can i make it detect the right res?
<pmatulis> Muuuh: i thought sonicwall *was* a packet filtering thing
<Muuuh> pmatulis: indeed, but its reporting facilities are not giving us what we want in terms of detailed traffic to the internet
<RoyK> Muuuh: try snort
<Muuuh> pmatulis: thus the idea to use a linux box in between
<speakman> If anyone new has arrived: I'd like help tracing what's causing this: http://pastebin.ubuntu.com/580065/ on my Xeon W3680 + 6x4GB ECC RAM (QVD) on an Asus WS Revolution
<RoyK> Muuuh: just install snort on another machine connected to a sniffer port or hub on the internet - don't set an IP address on the NIC connected there, and you're in business
<Muuuh> RoyK: ok, googling snort... you mean it will "snort" without being actually in the "pipeline" ?
<RoyK> Muuuh: with a hub or a port configured as a sniffer/monitor port on the switch (most managed switches support that), all the data will be sent to that port
<RoyK> speakman: either a mobo or cpu failure, at least that's my guess
<pmatulis> Muuuh: what kind of information are you after?
<RoyK> mcelog has saved me some hours :P
<RoyK> pmatulis: I would guess some sort of IDS
<Muuuh> pmatulis: trying to nail down internet abusers
<pmatulis> Muuuh: and abuse means what?
<pmatulis> Muuuh: content?
<RoyK> what's that? am I abusing the internet if I use the bandwidth I pay for? :)
<b0gatyr> anyone
<Muuuh> pmatulis: content is one aspect, yes
<Muuuh> RoyK: well, you are if you're doing it on the job without your boss' authorization ;-)
<bigbang> what you guys think of zimbra ? worth looking into it ?..
<RoyK> Muuuh: spying on people's use of the web at work is bad, it's just BAD
<Muuuh> since when ?
<RoyK> bigbang: been using it for a couple of years - works splendidly :)
<pmatulis> Muuuh: well we can't recommend a solution if we don't know all the criteria.  but snort can filter on pretty much anything
<bigbang> awesome thanks RoyK .. :)
<Muuuh> pmatulis: well, we're basically trying to gather evidence as to people wasting their time on internet instead of working
<RoyK> Muuuh: if it's internal web traffic you're trying to monitor, just setup a proxy, squid works well
<RoyK> bigbang: most hard-core ubuntu freaks don't like zimbra, since it's not really very open, but I do, even so
<Muuuh> Management doesn't want to lock it down tight. It's just that they suspect some users of abuse, and wants to gather evidence before accusing
<pmatulis> Muuuh: but i think there must be an easier tool like a kind of passive content filter
<pmatulis> (monitoring instead of blocking)
<Muuuh> as I said, we don't want to block, just gather usage data
<RoyK> Muuuh: using squid as a transparent proxy will help you log these things quite easily
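A sketch of the transparent-squid logging setup RoyK suggests, using squid 2.7 syntax as shipped on lucid (interface and port are illustrative):

```shell
# In /etc/squid/squid.conf (squid 2.7), mark the listening port:
#   http_port 3128 transparent
# On the gateway box, push LAN web traffic through the proxy:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3128
# Per-client URLs and byte counts then accumulate in
# /var/log/squid/access.log
```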
<speakman> RoyK: based on..? All parts are completely new, with a fresh Ubuntu 10.10 install. The failures are random, usually a couple of days apart (but some days it happens a couple of times a day).
<RoyK> speakman: sorry, you lost me - what are you talking about?
<speakman> RoyK: Another issue which might be related, is the CPU temperature not showing on some sessions. Sometimes it does, sometimes it doesn't.
<speakman> RoyK: me MCE above
<speakman> my
<RoyK> ah
<RoyK> speakman: according to the MCE dump, it seems to be a CPU issue, but I cannot say for sure - it may be the mobo as well. Are the logs showing different addresses for the issue?
<pmatulis> Muuuh: see #24 here: http://contentfilter.futuragts.com/wiki/doku.php?id=faq#general_faq
<RoyK> I have a similar issue on a server - mcelog logs every now and then - it used to hang the server during memory check, but doesn't after we got a new mobo, but still, errors keep ticking in. The supplier told me to swap the CPUs to see if the address was the same on the next error
<speakman> RoyK: I have only got one log so far :(
<speakman> RoyK: I have been running memtest86 for two hours (at least one complete pass) and no complaints
<RoyK> speakman: didn't you say it was crashing repeatedly?
<RoyK> if so, use syslog, and setup remote logging to another box
<speakman> RoyK: My problem is that the CPU and mobo are from two different suppliers. But there's probably no problem changing both of them.
<RoyK> mcelog/syslog can't write to the disks if the system is panicking
<speakman> RoyK: yes, but I didn't know what caused it earlier. Since I'm on X, I had to set up a netconsole to grab that message.
<speakman> RoyK: Now I've setup mcelog as well
<speakman> netconsole is the s*it :)
<RoyK> just configure syslog to log to another box as well
<RoyK> *.* @some.ip.in.house
<RoyK> and remember to allow remote logs on that box
<speakman> why really?
<RoyK> because then you can catch logs that can't be written to disk if the system is panicking and i/o is blocked
<speakman> I can read /var/log/syslog afterwards, and still I haven't found anything there yet.
 * RoyK always uses a separate syslog server
<speakman> RoyK: aren't kernel messages enough, you think..?
<speakman> I can of course...
<speakman> http://www.mjmwired.net/kernel/Documentation/networking/netconsole.txt btw
<RoyK> speakman: probably, but it won't hurt to send it all to another box with sufficient storage
<RoyK> it's not like it's a lot - a few megs or so
<speakman> I'll just send it to the same machine receiving netconsole logs
<RoyK> speakman: a good old serial cable will do well too :)
<speakman> RoyK: if there were only a good old serial connector on the mobo ;)
<speakman> Not even an internal one available.. :/
<speakman> is Ubuntu running rsyslog instead of plain old syslogd nowadays?
<RoyK> rsyslog iirc
<ScottK> Yes.  It's been the default for a few releases.
<RoyK> but opening for remote syslog is trivial
<speakman> but setting up syslog for sending?
<RoyK> speakman: just add *.* @somehost in /etc/syslog.conf
<speakman> thanks :)
<RoyK> then restart syslog and see if something is written to the remote host
<speakman> yes, but can I set port?
<RoyK> dunno - usually it uses the standard (514/udp)
<speakman> I'm already listening on another port, I'll just RTFM ;)
<RoyK> if you're worried about other hosts flooding, setup a ufw filter
<RoyK> speakman: adding :port might help
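A sketch of the remote-logging setup RoyK outlines, using rsyslog's legacy directives; the host, port and filename here are example assumptions:

```text
# On the box that crashes -- /etc/rsyslog.d/60-remote.conf
*.*  @192.168.1.5:10514        # one @ = UDP; use @@ for TCP

# On the receiving box -- /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 10514            # listen for remote syslog on this UDP port
```

Restart rsyslog on both ends; messages then reach the remote box even when local disk I/O is blocked during a panic.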
<speakman> Up and running...
<speakman> now netconsole again
<KB1JWQ> What's the polite way to tell apt-get to go with either "no config" or "local only" for postfix programmatically?
<KB1JWQ> I'm trying to script a template creation and that's breaking my automation unfortunately.
<david5345> If I run the "kvm-ok" command and the output is "INFO: your CPU does not support KVM extension". Does this mean I can't use KVM or does it mean I can use KVM but it'll work slowly ?
<EvilPhoenix> i don't think you can use KVM if your CPU doesn't support it
<EvilPhoenix> hence the error message
<hallyn> david5345: 'kvm' will just end up running qemu without kvm extensions.  so like you said, slowly
<david5345> Thanks :D
<david5345> I'll update the Wiki
<speakman> ...and now I can't even get netconsole working :/
<lifeless> kirkland: thanks
<RoAkSoAx> kirkland: ping
<RoAkSoAx> Daviey: ping?
<SpamapS> RoAkSoAx: yo
<RoAkSoAx> SpamapS: yo!! what's up
<SpamapS> RoAkSoAx: writing upstart docs. You?
<RoAkSoAx> SpamapS: testing powernap's new changes for eucalyptus xD
<RoAkSoAx> how was the weekend?
<SpamapS> RoAkSoAx: busy as usual. work is so much more relaxed. ;)
<RoAkSoAx> SpamapS: indeed!! I finally got the chance to go to the beach yesterday after more than 6 months :)
<RoAkSoAx> SpamapS: too bad I'm not in spring break though as everybody else
<Daviey> RoAkSoAx, o/
<RoAkSoAx> Daviey: thought u were sleeping man
<RoAkSoAx> Daviey: anyways, I committed the changes to powernap for eucalyptus. I simplified it to only Monitor related changes
<SpamapS> Daviey never sleeps .. he is always there.. watching.. waiting.. for the next eucalyptus bug report.
<RoAkSoAx> hehe
<RoAkSoAx> Daviey: so anyways, I guess we can make a config for powernap to be shipped with eucalyptus
<Daviey> RoAkSoAx, i never sleep!
<RoAkSoAx> SpamapS: btw... upstart's restart is not intended to load the config again then?
<Daviey> RoAkSoAx, That sounds great!  Have you documented the syntax for the file?
<RoAkSoAx> Daviey: well it is the same syntax as the regular config file i.e.
<RoAkSoAx> [ProcessMonitor]\n init = "^/sbin/init"
<Daviey> RoAkSoAx, Ahh!
<SpamapS> RoAkSoAx: it should stop/start the daemon without re-loading the job configuration.
<SpamapS> RoAkSoAx: in essence, it forces a respawn.
<Daviey> RoAkSoAx, Please can i have a config for mythtv, to monitor if i'm watching something or if it is idle :)
<Daviey> (actually, could probably make a good guess based on load avg)
<RoAkSoAx> Daviey: it will just override the default config for the Monitors (if any) or add new monitors. I'll first think which one would be best
<RoAkSoAx> Daviey: yeah either that or ProcessMonitor
<RoAkSoAx> i think kirkland had similar for ProcessMonitor
<Daviey> IIRC, kirkland's just monitored if mythfrontend was running
<Daviey> I never exit out, so that is not a win for me.
<kirkland> RoAkSoAx: hey there
<kirkland> RoAkSoAx: okay, i have time to talk about powernap/eucalyptus now, if you like :-)
<RoAkSoAx> SpamapS: ahh I see now, so still the pre-start issue for restarting remains
<RoAkSoAx> kirkland: cool :). I just committed changes
<RoAkSoAx> kirkland: that will allow loading configs from /etc/powernap/config.d/
<Daviey> RoAkSoAx, Did you carry the priority idea?
<Daviey> So i can nullify or extent settings in a higher value one, for example?
<SpamapS> RoAkSoAx: restart also doesn't work if there is a pre-stop
<RoAkSoAx> that will override the config from /etc/powernap/config, but *only* for the monitors
<RoAkSoAx> Daviey: yeah, python handles that
<Daviey> RoAkSoAx, Splendid!
<RoAkSoAx> SpamapS: only pre-stop or pre-start too
<SpamapS> RoAkSoAx: I'm not aware of any pre-start issues
<RoAkSoAx> oh ok so only pre-stop
<SpamapS> teag
<SpamapS> wow
<SpamapS> epic home-row fail
<SpamapS> yeah
<BCS-Satori> How would I go about making an apache2 website be only accessible by its servername or serveralias?  I don't want the website to appear if someone types in an IP address.
<RoAkSoAx> Daviey: yeah so if you place 80etc.whatever and you have [LoadMonitor] load1 = 1 and then you have a 90test.whatever with [LoadMonitor] load1 = 2 , then the 90 file has greater priority and overrides the 80 one
<RoAkSoAx> (i was on the phone, sry :) )
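The priority scheme RoAkSoAx describes can be sketched in shell: files in config.d/ are read in lexical order, so a later-sorted file wins. The paths and key below are example assumptions for illustration, not powernap's actual code:

```shell
# Simulate /etc/powernap/config.d/ priority: the last value read wins.
mkdir -p /tmp/powernap-demo/config.d
printf '[LoadMonitor]\nload1 = 1\n' > /tmp/powernap-demo/config.d/80-default
printf '[LoadMonitor]\nload1 = 2\n' > /tmp/powernap-demo/config.d/90-override
load1=""
for f in /tmp/powernap-demo/config.d/*; do   # shell globs expand in sorted order
  v=$(awk -F' *= *' '$1 == "load1" {print $2}' "$f")
  if [ -n "$v" ]; then load1="$v"; fi
done
echo "effective load1 = $load1"              # the 90 file overrides the 80 one
```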
<SpamapS> BCS-Satori: you just need to set NameVirtualHost ip:port
<SpamapS> BCS-Satori: and then wrap the site in <VirtualHost ip:port> with a ServerName / ServerAlias
<BCS-Satori> SpamapS: internal or external ip or both?
<SpamapS> BCS-Satori: If you can, *:port works too
<SpamapS> BCS-Satori: you may need to create a "default" virtual host that is empty for people who reach the site via the IP directly.
<RoAkSoAx> kirkland: so yeah the idea is to ship a config with eucalyptus for powernap in config.d/ that adds monitors or overrides the ones already defined, so that it works out of the box and we don't have to tweak anything because of the new changes in PowerNap. Though the override is only for the monitors, I'm not touching the general config
<SpamapS> IIRC apache uses the first virtualhost definition as the default
<BCS-Satori> SpamapS: I have all of that but the default...which may be why it doesn't work
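A sketch of the vhost layout SpamapS describes; the directives are standard Apache 2.2 name-based virtual hosting, but the hostnames and paths are example assumptions:

```text
# Apache 2.2 sites-available fragment
NameVirtualHost *:80

# The first matching vhost is the default: it catches requests made by bare IP.
<VirtualHost *:80>
    ServerName default.invalid
    DocumentRoot /var/www/empty
</VirtualHost>

# The real site only answers to its ServerName/ServerAlias.
<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com
    DocumentRoot /var/www/example
</VirtualHost>
```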
<kirkland> Daviey: mine use to look for a set of processes, like mplayer, vlc, etc.  not mythfrontend
<Daviey> kirkland, I was sure you had a myth entry there :/
<kirkland> RoAkSoAx: yeah, i liked that idea when Daviey or whoever suggested it
<kirkland> Daviey: i don't think so
<kirkland> Daviey: but that's long since been replaced with RoAkSoAx's magical new powernap :-)
<Daviey> kirkland, Your blog says otherwise :P
<kirkland> Daviey: heh
<kirkland> Daviey: what's the date on that entry?  :-)
<RoAkSoAx> kirkland: indeed Daviey.  So I guess it could pass as a bug fix for bug #711587 or shall I request a FFe? Changes are these: http://bazaar.launchpad.net/~powernap/powernap/trunk/revision/220
<uvirtbot> Launchpad bug 711587 in powernap "powernap and Eucalyptus seem unable to reach an understanding" [Medium,Fix committed] https://launchpad.net/bugs/711587
<RoAkSoAx> Daviey: I blogposted a week ago or so about the new changes :) :P
<kirkland> RoAkSoAx: i don't think this needs an ffe, imho
<kirkland> RoAkSoAx: this is the correct fix to a known bug, a regression that affects eucalyptus
 * kirkland reviews the code
<kirkland> Daviey: what's your opinion?  needs an ffe, or not?
<kirkland> RoAkSoAx: we should convert powernap's debian/rules to a dh7 style rules file, after this cycle :-)
<Daviey> kirkland, Hmm, it fixes a bug :)
<RoAkSoAx> kirkland: yeah wanted to do that too
<kirkland> RoAkSoAx: diff looks nice and clean to me
<kirkland> RoAkSoAx: low impact
<kirkland> RoAkSoAx: let's get this merged and uploaded
<kirkland> RoAkSoAx: i'm happy to sponsor, if you like
<RoAkSoAx> kirkland: alrighty then. And yes please, go ahead and upload it :)
<Daviey> kirkland, It's not a new feature, it's handling a change in one of its reverse depends :)
<kirkland> RoAkSoAx: speaking of, you should apply for upload rights for powernap :-)
<RoAkSoAx> kirkland: yeah I want this cycle to finish and I'll apply either for PPU for cluster stack+powernap or the Ubuntu Server package set
<kirkland> RoAkSoAx: cool :-)
<Daviey> RoAkSoAx, Server set sounds good... that needs more members. :)
<RoAkSoAx> Daviey: indeed!! I'm just too lazy to pass the results from the script to the wikipage :P
<RoAkSoAx> kirkland: btw.. how was London/England?? I always wanted to go there since I was a kid.... hope I get the chance to go there soon :)
<kirkland> RoAkSoAx: uploaded
<kirkland> RoAkSoAx: i'm glad to be back in Austin, that's for sure :-)
<kirkland> RoAkSoAx: i'm working outside, in the sun, 75 degrees right now
<kirkland> RoAkSoAx: it was 33F when i got off of the plane in London :-)
<Daviey> RoAkSoAx, :P
<RoAkSoAx> kirkland: haha yeah I bet!! it is nice here too... I went to the beach yest after probably 8 months
<Daviey> kirkland, I think we need an Austin sprint. :)
<RoAkSoAx> Daviey: we need a sprint in Miami >P
<kirkland> Daviey: yeah, we do :-)
<hallyn> Daviey: hey - how is the open-vm-tools merge review going?
<RoAkSoAx> Daviey: from a beach facing hotel
<hallyn> miami sprint - +1
<kirkland> Daviey: robbiew should get on that :-)
 * RoAkSoAx wishes he was working from the beach...
<Daviey> :)
<Daviey> hallyn, How did we last leave it?
<Daviey> hallyn, You raised a FFe, right?
<Daviey> hallyn, Hmm.. i thought you had.. seems not?
<hallyn> Daviey: no, i was waiting for your review
<hallyn> actually,
<Daviey> hallyn, Oh, sorry.
<hallyn> i thought it was decided an ffe wasn't needed?
<hallyn> since this is not a feature?
<Daviey> hallyn, It was a MASSIVE diff iirc.. not really easy to review :/
<hallyn> i could be wrong
<Daviey> 294925 lines (+100634/-169192) 750 files modified
<hallyn> well, maybe we should find someone in community who actually uses/cares about this to review and consider maintaining the package?
<hallyn> i would think you would want to focus on the changes under debian/
<hallyn> the rest is just the result of upstream churn
<Daviey> That is what i am diffing atm :)
<hallyn> Daviey: i wasn't asking you to do it right now :)  Just wanted to make sure you weren't waiting on some action from me
<Daviey> hallyn, I'm doing that now... but if you can go through the FFe dance, that would be good.. The size of the diff alone, makes me think we should go through that.
<Daviey> hallyn, $(KVERS) really does == $(shell uname -r)?  You checked that?
<Daviey> Oh forgive me, that is a dropped patch
<Daviey> hallyn, Actually, debian/patches/01-kvers.patch - seems to be converted to a bzr diff style patch... did it need regenerating - looks like it would have cleanly applied?
<Daviey> RoAkSoAx, bug #732759, do you have a build log, a copy and paste of apt-get install handling, and a comment on the testing you have done showing that it works?
<uvirtbot> Launchpad bug 732759 in ubuntu "[FFe] [needs-packaging] python-ethtool" [Wishlist,New] https://launchpad.net/bugs/732759
<jiboumans> hey folks, hudson has an apt-repo here: http://hudson-ci.org/debian/binary/ but it only has a Packages file, not Packages.gz and that makes apt-get update claim '404' -- what's the right way to use this repository?
<RoAkSoAx> Daviey: yeah will copy it in a bit, gotta run for a while
<Daviey> jiboumans, Firstly, have you tried our PPA?  jamespage has been working hard making that usable
<jiboumans> Daviey: no i haven't yet - i was told by folks that hudson-ci was the right way to go
<jiboumans> 'folks' being.. well... you know, 'them' :)
<SpamapS> jiboumans: isn't it jenkins now? ;)
<jiboumans> Daviey: i'm happy to give that a look as well, but right now I do need to get this repo working first. any thoughts?
<Daviey> jiboumans, well, i know we would appreciate additional testing... :)
<Daviey> jiboumans, I have an apt repo without a gz'd Packages, and that seems to work.. :/
<jiboumans> odd
<jiboumans> Daviey, here's what I'm using:
<jiboumans> curl http://hudson-ci.org/debian/hudson-ci.org.key | apt-key add -; echo "deb http://hudson-ci.org/debian binary/" > /etc/apt/sources.list.d/hudson.list; apt-get update
<Daviey> jiboumans, You might as well just wget the deb :/
<Daviey> http://hudson-ci.org/debian/binary/
<jiboumans> i'd like puppet to manage the updates and dependencies
<jiboumans> Daviey: ^
<Daviey> ah
<SpamapS> You mean you'd like hudson-ci.org to manage the upgrades ;)
<SpamapS> man.. I was just curious.. there's no good place to find "the apt repository format"
<Daviey> jiboumans, I'm not sure i trust them.... based on http://issues.hudson-ci.org/browse/HUDSON-2472?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel
#ubuntu-server 2011-03-15
<jiboumans> Daviey: what's the PPA url? i can take a look now
<soren> hudson's deprecated anyway.
<soren> deb http://pkg.jenkins-ci.org/debian binary/
<soren> is what I use.
<Daviey> jiboumans, https://launchpad.net/~hudson-ubuntu/+archive/testing
<Daviey> actually, ~ubuntu-hudson is based on Natty... so soren's suggestion is probably better for LTS.
<jiboumans> Daviey: you know me well :) of course i'm on LTS (smoser++)
<Daviey> jiboumans, What?!  You aren't putting production loads on the development release?!
<jiboumans> Daviey: indeed, I'm not hitting the crack pipe :)
<Daviey> :)
<hallyn> Daviey: ok, i'll try ffe process
<hallyn> Daviey: I do seem to recall that debuild was failing until i refreshed patch 01-kvers.patch
<hallyn> Daviey: but, hopefully you're off for the night and will read this in the morning :)
<Daviey> hallyn, Make sure you note that you spent significant time trying to cherry pick a fix etc.  And new upstream snapshot seemed both more reliable, and less time - and more supportable rather than a massive patchset
<Daviey> Lay it on thick :)
<Daviey> hallyn, Ah, debuild will crap out if the patch contains fuzz... when quilt push doesn't... really annoying variation of defaults!
<SpamapS> Daviey: 3.0 quilt or --with quilt ?
<SpamapS> I think the 3.0 (quilt) thing is a bit broken. We really need 3.0 (bzr)
<Daviey> SpamapS, "quilt pop -a ; quilt push -a" works.. with default setup, mentions fuzz and offset... but the build process will only accept offset but not fuzz.
<Daviey> SpamapS, I recently pondered the idea of 3.0 (bzr), but can you imagine converting all of debian? :).... Really, i think bzr needs to have native knowledge of .pc
<sbeattie> Daviey: there is an experimental 3.0 (bzr) somewhere, 3.0 (git) got bumped from experimental to approved recently, I thought.
<Daviey> sbeattie, Yeah - but how useful is that for /everything/... Other than a few DM that use bzr and Ubuntu native packages?
<sbeattie> Daviey: useful? when has that ever been a criteria? :-)
<Daviey> lol
<Daviey> Has everyone else had puppet adverts following them across the internet today?
<ScottK> Do nightmares about clowns count?
<Daviey> Depends if you have the nightmare whilst awake :)
<RoAkSoAx> kirkland: dude could you also please publish the tarball for powernap at launchpad.net/powernap?
<Tohuw> Is there any compelling reason to leave usergroups=yes in adduser.conf if I'm configuring an Ubuntu LAMP server? Granted, I need to watch my perms (e.g. 644 not 664), but that's just paying attention, not a real argument against it.
<twb> There are two cases where the group permission is relevant: /home/fred (private personal files) and /srv/share (shared files that need group write access).
<twb> IME you cannot handle both correctly with plain POSIX permissions, so it is better to err towards privacy and leave usergroups=yes.
<twb> This allows users to opt-in to a shared group on specific files (i.e. /srv/share) rather than having to opt-out of same for /home/fred.
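One conventional way to build the opt-in shared area twb mentions is a setgid group directory; the path and group below are example stand-ins, not a real /srv/share setup:

```shell
# A shared directory: group-writable, with the setgid bit so files
# created inside inherit the directory's group rather than the creator's.
mkdir -p /tmp/srv-share-demo            # stand-in for /srv/share
chgrp "$(id -gn)" /tmp/srv-share-demo   # stand-in for a dedicated shared group
chmod 2775 /tmp/srv-share-demo          # 2 = setgid, 775 = rwxrwxr-x
stat -c %a /tmp/srv-share-demo          # prints 2775
```

With usergroups=yes, users keep private files under $HOME and only files placed here become group-writable.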
<Tohuw> twb: good point. thank you
<twb> Obviously if you value security over convenience, you should study one or more LSM MACs, like grsecurity, selinux or apparmor.
<Tohuw> twb: I'm going to learn apparmor last, but I do plan to incorporate it to some extent. I figure, get it working, then harden it. :)
<twb> Sure.
<twb> You're running PHP so you're pretty much screwed anyway
<Tohuw> twb: why are the default directory perms in adduser.conf 0755 if you're not supposed to execute stuff in /home?
<twb> Tohuw: the default behaviour is to give world read access to files in $HOME
<twb> On the basis that sharing is useful, e.g. so you can read alice's .bashrc for handy things to put in your own
<Tohuw> twb: "You're running PHP so you're pretty much screwed anyway" <-- it depends, I would say. You can harden php decently through proper php.ini and phpSuExec configuration.
<twb> http://en.wikipedia.org/wiki/PHP#Security
<twb> One quarter to one third of ALL vulnerabilities announced each year are PHP related.
<Tohuw> twb: I was more asking about the "7" part. Doesn't that allow the user to execute from their home?
<twb> Tohuw: no, it's a umask.
<twb> Actually, no, it isn't.
<twb> That's the actual permission that /home/fred will be set to, so 7 means rwx
<Tohuw> So, why allow execution? I thought executables were supposed to be in */bin dirs?
<twb> Tohuw: execute permission on a directory means that you're allowed to cd into it, IIRC
<twb> Something like that.  Anyway, you need it
<Tohuw> oh. Hm!
<Tohuw> I'll be darned; you're right
<twb> Run info coreutils 'File permissions'
<Tohuw> Revoking the executable bit prevents cd to the dir.
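Tohuw's experiment can be reproduced with `test -x`, which checks the search bit the conversation is about; the path is an example assumption:

```shell
mkdir -p /tmp/noexec-demo/dir
chmod 644 /tmp/noexec-demo/dir          # rw-r--r--: readable, but no search (x) bit
[ -x /tmp/noexec-demo/dir ] && echo "can enter" || echo "cannot enter"
chmod 755 /tmp/noexec-demo/dir          # restore the search bit
[ -x /tmp/noexec-demo/dir ] && echo "can enter" || echo "cannot enter"
```

For a normal user the first check prints "cannot enter" and the second "can enter": without the x bit, `cd` into the directory fails, even though the read bit may still let you list the filenames in it.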
<twb> Unfortunately there's only info page, not manpage
<twb> `x' is the permission the USERS have to execute the file, or to search it if it is a directory.
<Tohuw> the whole "oh let's only make info pages because info lolrulez" attitude is frankly annoying.
<twb> info foo | less
<twb> Now you have a big manpage
<uvirtbot> New bug: #735239 in postfix (main) "package postfix 2.7.0-1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 75" [Undecided,New] https://launchpad.net/bugs/735239
<Tohuw> twb: haha. It's more about telling users "just use man to find out about stuff", only some things don't have a man, or even worse, an outdated/very sparse one.
<Tohuw> Though I suppose telling users, "just info it" is valid as all man pages are read by info, AFAIK
<aarcane_> anyone here using a SAS Expander, and willing to recommend it for use in a new system?  i'm shopping for a SAS Expander for an Ubuntu-based system, and would love to hear from anyone running one now.
<twb> By "expander" do you mean a chassis that houses the disks, or a card that has SATA ports, or what?
<aarcane_> a SAS expander is a circuit board, often housed on a PCI(E) shaped card or in a 5.25" bay that goes inside one of the chassis that houses the drives (or a main system) that allows multiple drives to be connected to one controller.  Most of the expanders I've seen take one SFF-8088 or 8087 cable from a controller card, and "expand" it to connect 16 or 24 drives plus 2 additional expanders.
<aarcane_> I've also seen SAS expanders built directly into a backplane, but those are too expensive for me.
<Tohuw> Okay, I've RTFM, but I still don't understand what format the quota for quotauser should be in...
<Tohuw> (quotauser as set in adduser.conf)
 * twb looks for a picture
<twb> Hm, OK
<twb> I can't help; I mainly deal with SATA
<Tohuw> Is there a strict 1:1 relationship between inodes and files? I.e., every file has an inode.
<Patrickdk> everything on a disk has an inode
<Patrickdk> even directories, symlinks, ...
<Patrickdk> not all inodes are files
<twb> Tohuw: no, two files can have the same inode (i.e. a hard link)
<jmarsden> Patrickdk: well, multiple hard links often fit inside one inode, too... touch foo; ln foo bar ; ls -i foo bar  # both have same inode number :)
<Patrickdk> ya, I wasn't thinking about that
<twb> Similarly, if you open a file, delete it, then create a new file, both files will be /tmp/x, but they will have different inodes
<Patrickdk> and wasn't even thinking about it that way
<Patrickdk> I was thinking stuff (files,dir,...) -> inodes -> blocks
<Patrickdk> inodes is a collection of blocks, basically
<Patrickdk> or, an inode
<Tohuw> twb: thanks, I received the answer in another channel. I'm clear on it now. Is dumpe2fs the "standard" tool for viewing free inodes?
<twb> By contrast, FAT has no inodes, so you cannot delete an open file
<Patrickdk> tune2fs?
<Patrickdk> really df -i
<Tohuw> oh
<Tohuw> good ol df
<twb> Tohuw: I don't know why you would ever need to do that
<twb> df -i lists the percentage of inodes used/free
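The inode facts above are easy to verify from the shell; the file names here are throwaway examples:

```shell
touch /tmp/foo-demo
ln -f /tmp/foo-demo /tmp/bar-demo       # hard link: two directory entries, one inode
stat -c %i /tmp/foo-demo /tmp/bar-demo  # prints the same inode number twice
df -i /tmp                              # inode total/used/free for that filesystem
```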
<Tohuw> twb: setting rational quotas for my partitions
<twb> Tohuw: number of files is a rough guide to current inode usage within a specific directory tree
<Tohuw> Given that I could theoretically run out of inodes before running out of blocks
<Patrickdk> then for quota purposes, it's pretty much dir+files=inodes
<twb> Tohuw: try find /home/ -user fred -xdev | wc -l
<Patrickdk> hopefully user fred is a standard user, and doesn't have only maildir files :)
<Tohuw> twb: right, but I was more interested in total free inodes, to try and devise a policy for limiting them.
<Tohuw> fred is my mta AND sql log rotation holder! :D
<Patrickdk> I'm doing one inode per 32kb currently
<Tohuw> Patrickdk: what brought you  to that decision?
<Patrickdk> it was the ratio setup before me, and I didn't see any point in changing it
<Patrickdk> 4k is just normally overkill
<Patrickdk> 64k is too big though :(
<Patrickdk> 16k would probably be good
<Patrickdk> but some users abuse inodes, not too many
<twb> Look at what users are using, then halve
<Patrickdk> I'm probably going to lower the inode limit pretty soon
<Patrickdk> once I convert all the email
<aarcane> Hi, Me again, still wondering which SAS Expanders people have used in here with a Ubuntu system
<patdk-lap> how can we answer that when you don't stick around for an answer?
<jmarsden> patdk-lap: You could try answering using memoserv, if you really want to :) :)
<patdk-lap> I could
<patdk-lap> but I seriously don't even think the os knows about sas expanders
<patdk-lap> it just sees drives connected to a scsi card
<patdk-lap> as long as the expanders and sas card get along, all should be good
<smoser> jamespage, awake ?
<uvirtbot> New bug: #735296 in samba (main) "package winbind 2:3.4.7~dfsg-1ubuntu3.4 failed to install/upgrade: subprocess installed post-installation script returned exit status 127" [Undecided,New] https://launchpad.net/bugs/735296
<luckymurali> Hi all
<luckymurali> I installed ubuntu server 10.04 64 bit and there is no desktop on it
<luckymurali> i want to install desktop on that machine but there is no internet
<TheInfinity> luckymurali: of course there's no desktop environment - servers usually don't have xorg / gnome
<TheInfinity> luckymurali: and - then you have to set up your network connection :)
<_ruben> luckymurali: if you want a desktop, install the desktop edition .. it's obvious i'd say
<luckymurali> yes i know that, my concern is how to install a desktop on the server without internet
<_ruben> you'd use the desktop cd/dvd to install it, as it's not on the server cd/dvd
<TheInfinity> luckymurali: get an internet connection. set up your wireless connection via cli.
<TheInfinity> or whatever you need for internet
<luckymurali> TheInfinity, I need a desktop without internet
<luckymurali> but i don't know where the desktop package is
<luckymurali> the desktop I need to use is for a specific reason
 * _ruben points to #ubuntu
 * TheInfinity would also say #ubuntu because i dont see the relation to server topics
<luckymurali> any way thanks for your suggestions
<uvirtbot> New bug: #735369 in bacula "Bacula Admin Tool (bat) does not work well with Qt 4.6.2 in Lucid" [Undecided,New] https://launchpad.net/bugs/735369
<jdepi> hallyn: Hi hallyn; could you please post a link to more info about updating the apparmor profile (re mail on ubuntu-server list); i found some info on https://apparmor.wiki.kernel.org/index.php/Libvirt but I must say I have no experience with apparmor...
<uvirtbot> New bug: #735402 in squid (main) "package squid 2.7.STABLE7-1ubuntu12.2 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/735402
<uvirtbot> New bug: #735409 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 (not installed) failed to install/upgrade: subprocess new pre-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/735409
<RoAkSoAx> morning all
<lynxman> morning RoAkSoAx
<hallyn> jdepi: jdstrand may know of an actual link about updating apparmor profiles for libvirt instances, but please send reply in email to whatever msg i sent yesterday, and we'll figure it out
<jdstrand> hallyn, jdepi: that link and the /usr/share/doc/libvirt-bin/README.Debian.gz should have everything needed
<jdepi> hallyn: i did :-)
<hallyn> jdstrand: thanks, I should've thought of that one
<jdepi> jdstrand: thanks, i'll have a look
<ivoks> idea for UDS:
<ivoks> no... make that
<ivoks> request for UDS: ACL by default
<ivoks> why are we still in the 20th century? :)
<zul> heh
<patdk-wk> acl by default on what?
<ivoks> filesystems
<patdk-wk> never used acl's
<soren> ivoks: Last we had that discussion, I think Adam Conrad (infinity) said something about Samba changing its behaviour if the filesystem had ACL enabled... and then we decided not to enable it. I think it's ripe for another discussion.
<ivoks> makes life easier in some situations... /me thinks about samba and nfs
<ivoks> soren: hm... i can't remember what it was
<soren> ivoks: Anyways, it was almost 5 years ago. :)
<soren> ivoks: Hmm... Or was it in Boston? Anyways, several years ago for sure.
<ivoks> http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/AccessControls.html#id2614541
<ivoks> i think it was prague
 * soren tries to envision the Prague session rooms
<soren> I also forget when infinity left
<ivoks> i didn't know he left :)
<zul> like 2 years ago i think
<ivoks> ah
<zul> ivoks: its on my list to bring up
<ivoks> thanks :)
<ivoks> ok, maybe pushing it by default is over the line
<ivoks> but, lots of people are unaware of it and we could probably make better docs by mentioning how to utilize acl
<ahs3> absolutely; better docs are a minimum.  simple tools to enable acl's would also help, imo
<RoAkSoAx> ivoks: pacemaker now also has support for acls
<hallyn> Daviey: do i need to create a new bug for the open-vm-tools FFE?
<RoAkSoAx> hallyn: is there a merge bug opened already? If so, nope
<hallyn> there is a bug open which the merge is needed for
<hallyn> (bug 727342)
<uvirtbot> Launchpad bug 727342 in open-vm-tools "package open-vm-dkms 2010.06.16-268169-3ubuntu1 failed to install/upgrade: open-vm-tools kernel module failed to build" [Critical,In progress] https://launchpad.net/bugs/727342
<hallyn> can i just mark that one 'new' and put the info in there?
<ivoks> RoAkSoAx: hm... i don't see how's that relevant for pacemaker :)
<RoAkSoAx> hallyn: yeah, add [FFe] at the beginning of the bug title. You can also modify the title, and of course add the description for the FFe
<ivoks> RoAkSoAx: pacemaker doesn't do anything on the filesystem... ACLs are of no use for it
<ivoks> RoAkSoAx: maybe you are referring to pacemaker's internal ACL
<RoAkSoAx> ivoks: "S.U.S.E. has implemented support for ACLs. This means that you can now delegate permission to control parts of the cluster (as defined by you) to non-root users."
<ivoks> right, so internal ACL
<ivoks> ACL = access control list
<ivoks> what i was referring to is acl on filesystem
<RoAkSoAx> ivoks: yeah yeah but I mean, your conversation of FS ACLs reminded me of pacemaker's
<ivoks> right... that's a nice feature
<RoAkSoAx> indeed
<RoAkSoAx> kirkland: howdy!! I just uploaded python-ethtool which is required by koan, could you please take a look at it when you have the time since it's in the NEW queue? thanks
<Daviey> hallyn, no
<Daviey> RoAkSoAx, Has it had FFe
<RoAkSoAx> Daviey: python-ethtool? yes
<RoAkSoAx> hallyn: yeah just use the same bug. I usually put [FFe] before the title of the bug and enhance description, etc etc  (sorry for the delay, I thought I replied :) )
<lynxman> p/whois soren
<lynxman> soren: hey, saw your presentation at fosdem about Openstack, great one
<lynxman> kirkland ping
<kirkland> lynxman: hi
<kirkland> RoAkSoAx: sure, i'm on a call at the moment
<RoAkSoAx> kirkland: no worries, whenever you have the time is fine ;) Thanks
<lynxman> kirkland: hey *waves* I had some unexpected troubles with the upstart file for mcollective and I modified it on my 1.1.3 packaging to get it closer to what SpamapS suggested (for mcollective-server-provisioner, which kinda is the same category) just wanted your opinion/ideas on it
<lynxman> kirkland: http://pastebin.ubuntu.com/580609/
<lynxman> kirkland: no hurries though
<kirkland> lynxman: np, i'm on the phone for the next ~1 hour
<lynxman> kirkland: can easily wait :)
<orudie> how can I check mail for root user ?
<nimrod10> orudie, 2 ways: either log in as root and type mail, or as your user (assuming you have sudo access) type sudo mail
<zul> isnt there a meeting today?
<RoAkSoAx> zul: daylight saving
<RoAkSoAx> is in effect
<zul> damn farmers
<RoAkSoAx> lol
<lynxman> zul: blame the sun
<zul> lynxman: farmers are more tangable
<lynxman> zul: true true
 * SpamapS <heart> daylight savings time
<SpamapS> the sun will actually be up for the meeting.. woot
<orudie> how can I view sshd log ?
<lynxman> SpamapS: I resent daylight savings time for stealing 1 hour of my life
<twb> I resent it for wasting my time and everyone elses
<twb> I resent it for being a government-controlled resource
 * RoAkSoAx doesn't really care
<twb> I especially resent when said government changes the start/stop times with only TWO WEEKS NOTICE because of some goddamn sporting event
 * RoAkSoAx doesn't really care about daylight savings
<ivoks> orudie: mail for root should be forwarded to the user
<lynxman> orudie: depends on what you're looking for, mostly in /var/log/auth.log
<lynxman> RoAkSoAx: inside a datacenter there's no daylight to save :D
<twb> Anyway, edison and friends obviated the need for a daystar
<RoAkSoAx> lynxman: hehe indeed. In my case it's nonsense since I have light till 8pm or so
<RoAkSoAx> lol
<SpamapS> lynxman: it only borrows it ;)
<lynxman> RoAkSoAx: lucky you
<RoAkSoAx> lynxman: not really... because of that I don't sleep till late nite
<lynxman> SpamapS: yeah it's given back in October when I don't need it anymore to enjoy summertime :)
<lynxman> RoAkSoAx: me neither, but I just do it as an anti-farmer protest ;)
 * lynxman joins together memes craftfully
<SpamapS> lynxman: except in summer time when would you rather have more daylight.. when you've just awakened, or near sunset ...
<lynxman> SpamapS: hmm fair point
<hallyn> hm?  I prefer light in the morning, thank you
<hallyn> night time should be dark.  Lit by xmas lights.
<lynxman> hallyn: even if it wakes you up?
 * hallyn for a moment relives college :)
<hallyn> lynxman: because it wakes me up
<hallyn> you know.  at 8am.  feh
<twb> You're talking to someone who doesn't get out of bed before 10AM
<lynxman> hallyn: that's too soon
<hallyn> lol
<SpamapS> Unfortunately... my child/wife/alarm clock wake me up long before the sun threatens to do so naturally
<lynxman> twb: how does that feel?
<hallyn> all right, ubuntu desktop cd is still hanging on partman
<twb> Now, OK, it is 2:30AM and I am in the office
<twb> But that is not the point
<hallyn> vmbuilder to the rescue
<SpamapS> "The datacenter!?, what is it?!" ... "Its a dank dark dungeon filled with servers and star wars fans, but thats not important right now."
<hallyn> it crowd?
<SpamapS> Adaptation of Airplane!
<hallyn> (sounds like leslie nielsen)
<hallyn> ok
<twb> I liked when they gave her "the internet" in a box
<hallyn> i figured it crowd had an airplane moment
<hallyn> 'DONT DROP IT'
<ua> hello
<twb> Airplane was awesome
<lynxman> SpamapS: I've heard people get lost in datacenters... then they come back as deformed creatures after years of roaming through the cabling...
<twb> It's rebadged in .au, so when it was last on the telly I was watching for an hour going "this seems a lot like Airplane..."
<SpamapS> twb: haha doh!
<hallyn> hm, how does one have vmbuilder write to an lvm partition...
<ua> need help please, i can not install virtual box
<twb> Admittedly at that point I had only read the script on usenet, not actually seen the film
<ua> can you help please?
<twb> ua: that's not part of Ubuntu; it would be better to talk to your vendor (Oracle)...
<SpamapS> virtualbox isn't part of Ubuntu?
<SpamapS> virtualbox-ose - x86 virtualization solution - base binaries
 * SpamapS has never tried to run it on Ubuntu so I'm actually curious
<ua> http://paste.debian.net/110739/
<ua> spamaps>sorry i didn't understand what you mean
<twb> Ah, mea culpa
<twb> I forgot there was an open source "edition"
<twb> ua: you probably want virtualbox-ose, which is 4.0 in natty, but not before
<ua> twb>yes thank you, i tested now, it seems working
<RoAkSoAx> Daviey: ok so I was thinking we can ship two powernap-eucalyptus configs. One for when in powersave and the other for when in any other action method. The one for powersave would be default and will only have a [ProcessMonitor] for kvm, while the one for any other action method will have a [ProcessMonitor] for /sbin/init
<ua> may be i've to restart
<RoAkSoAx> Daviey: they can be installed in /usr/share/eucalyptus/ and the default either symlinked or copied to /etc/powernap/config.d/
<ua> twb>can you tell me how could i increase the size of /boot? thank you
<twb> ua: you probably can't
<twb> Usually if /boot is its own partition, it'll be a simple primary partition, with another partition immediately after it
<twb> In theory it could be on LVM, in which case you can lvextend
<twb> If /boot is just part of your normal root filesystem, you can delete other stuff, and/or extend that partition.  But if you're complaining, this is probably not the case
<twb> Note that vbox images probably don't belong in /boot -- /srv/vm would make more sense
<ua> twb>i've given 32Mb for /boot which is its own partition, but now it's full, that's why i'd like to increase it or purge some files ...
<uvirtbot> New bug: #734984 in debian-installer (main) "Ubuntu server Natty listening to port 953 in tcp and tcp6" [Undecided,New] https://launchpad.net/bugs/734984
<ua> i've given 32Mb for ( /boot) which is its own partition, for now the boot partition is full, that's why i'd like to increase it or purge some files in...
<ua> twb>are you there friend?
<ua> where are you?
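ua's question went unanswered here; the usual reason a 32MB /boot fills up is accumulated old kernel images, and the usual fix is purging the ones you no longer boot. A sketch of computing the purge candidates from sample data (the package names are illustrative; on a real server you'd take current from uname -r and the list from dpkg -l 'linux-image-*'):

```shell
# Keep the running kernel; everything older is a purge candidate.
# Sample values below are made up -- on a real box:
#   current=$(uname -r)
#   installed=$(dpkg -l 'linux-image-*' | awk '/^ii/{print $2}')
current="2.6.32-30-server"
installed="linux-image-2.6.32-28-server
linux-image-2.6.32-29-server
linux-image-2.6.32-30-server"
purgeable=$(printf '%s\n' "$installed" | grep -v "$current")
printf '%s\n' "$purgeable"
# then, after double-checking the list:
#   sudo apt-get purge $purgeable
```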
<kpettit> I just setup a new ubuntu 10.04 server and I'm trying to move over some apache virtualhosts.  I'm trying to use the default setup where you a2ensite and such.
<remix_tj> so what is your problem?
<kpettit> My sites show up in etc/apache2/sites-enabled but when I try to go to their url's I only get the "default"
<remix_tj> kpettit: paste an example of the virtualhost
<remix_tj> to http://paste.ubuntu.com
<kpettit> if I remove the "default" site I can see my first virtualhost and only my first one.  It doesn't seem to be paying attention to the  ServerName
<kpettit> it's frustrating because the same config works on an identical ubuntu server.  but I think that one is ubuntu 9.10.  I'll paste...
<kpettit> http://paste.ubuntu.com/580658/
<remix_tj> maybe you copied something that is connected to the old machine
<remix_tj> only this?
<kpettit> It's odd because if I remove the default it will show this domain's settings, but only this one.  Like it's only going to show the first virtualhost or something
<remix_tj> and other virtualhosts are set as?
<remix_tj> yes, it serves the first working match
<kpettit> exactly the same as the one I pasted except the domain name and directory are different.  I've got 3-4 of them on there
<remix_tj> kpettit: and you are trying to access to http://www.setuplinux.com
<remix_tj> ?
<kpettit> yes.
<remix_tj> uhm
<kpettit> Right now if you go there it goes to the "default" which is a hello world.   It doesn't pay attention to my virtualhost file unless I remove "default".
<kpettit> If I remove the "default" then it only pays attention to the next virtualhost in line for all the other virtualdomains.
<kpettit> http://paste.ubuntu.com/580663/
<kpettit> this is a copy of the ports.conf which has the "NameVirtualHost" directive and such.  I also put in the "default" virtualhost file.
<remix_tj> everything is correct...
<remix_tj> uhm
<kpettit> yeah that's what I figured.  Haven't had an issue where it only ever paid attention to one virtual host unless I hadn't set the NameVirtualHost directive or something, but it's there so I'm confused.
<remix_tj> let's verify something
<kpettit> It's a default brandnew install with updates as of this morning.
<remix_tj> ls -la /etc/apache2/sites-enabled/
<patdk-wk> in mine, I deleted the default one
<remix_tj> ls -la /etc/apache2/sites-available/
<remix_tj> so we can check
<kpettit> It shows 000-default and my www.setuplinux.com and other sites.
<kpettit> I just verified it's not a permission thing.
<remix_tj> tail -n 50 /var/log/apache2/error.log
<remix_tj> ?
<kpettit> it's something to do with the namevirtualhost directive not working right.
<remix_tj> maybe you do not need to add NameVirtualHost *:80 a second time
<kpettit> I did and one of the virtualhosts worked.
<kpettit> but another one didn't so I'm checking on that.
<remix_tj> i've some vhosts with namevirtualhost specified, other not
<kpettit> it should be a generic thing that you only need to specify once
<remix_tj> kpettit: why did you specify on ServerAlias the same thing that's on ServerName?
<kpettit> i got it working.
<kpettit> for whatever reason it didn't like everything being in different files.  I copied and pasted all my virtualhosts in the one "default" config and it works
<kpettit> I have no idea why it works in one config rather than having them in the separate configs like is supposed to be the default
<remix_tj> the default is with different configs
<patdk-wk> I have mine in 12 different files
<remix_tj> because it includes sites-enabled/*
<patdk-wk> some files have more than one vhost line, most have one
<kpettit> what I mean by "default" is the file /etc/apache/sites-available/default
<patdk-wk> you shouldn't use sites-available
<patdk-wk> but sites-enabled
<kpettit> for whatever reason on this one my different virtualhosts config files weren't looked at.  But I put them all in the one file and it works fine
<patdk-wk> sites-available is ignored :)
<kpettit> that same line is still in there and it works fine.
<kpettit> so it wasn't causing it to fail.
<patdk-wk> cause default is symlinked to sites-enabled
<kpettit> odd
<kpettit> patdk-wk, yeah I had them all in sites-available and they were symlinked to sites-enabled using the a2ensite commandline
<patdk-wk> ah
<kpettit> it would only look at one file.  If I removed default it would only look at the next virtualhost and not any others
<patdk-wk> I just ignore sites-available completely
<kpettit> yeah, I normally do that.  I was just trying to follow more the standard on Ubuntu.
<patdk-wk> seems way too confusing for what I do, to have them doubled up in two directories
<kpettit> no biggie.
<kpettit> I like the idea behind it.  It's nice when having lots of virtualhosts so you can turn on/off easy.  But didn't work for me this time around
<remix_tj> kpettit: the apache2.conf is set to default? no customizations at all?
<kpettit> remix_tj, yes.  It was a default install as of 2 hours ago
<NidHelp6092> hi all
<NidHelp6092> anyone feel like helping a noob?
<remix_tj> NidHelp6092: as far as i can see you're doing a pretty UTF-8 test case.
<NidHelp6092> ((((
<remix_tj> kpettit: putting everything in a file works?
<kpettit> yes
<remix_tj> kpettit: the name of that file?
<kpettit>  /sites-available/default
<kpettit> I just put all my stuff in the "default" file.
<kpettit> my 3 domains are up and running now.   All seems well.  not sure why having them in the different files didn't work.
<remix_tj> kpettit: try grep sites-enabled /etc/apache2/apache2.conf
<remix_tj> let's take a look to the config
<kpettit> Include /etc/apache2/sites-enabled/
<remix_tj> with * at the end?
<kpettit> I know it was looking in the directory because if I removed "default" and added my own it worked as well.  But it only ever looked at the first file it found.  Not any others
<kpettit> no, it's just like I pasted
<remix_tj> kpettit: try replacing with this
<kpettit> Should it have a * at the end?
<remix_tj> Include /etc/apache2/sites-enabled/[^.#]*
<patdk-wk> mine doesn't
<kpettit> my old server doesn't either.  I'll give it a try here in a bit though.
<remix_tj> kpettit: also my lucid server does not
<remix_tj> kpettit: this is an example of my working vhost on a single file
<remix_tj> http://paste.ubuntu.com/580673/
<remix_tj> is not much different than yours..
<kpettit> that hosed me up.  I'm not sure what's happening with that.  Have to roll back.
<kpettit> usually apache is very straight forward for me.  Oh well, it's all working now.
<kpettit> thanks for the help remix_tj
<remix_tj> :-)
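For the record, the split-file layout kpettit abandoned does work when each file carries its own ServerName and NameVirtualHost *:80 appears exactly once (Ubuntu keeps it in ports.conf). A minimal sketch of one such file; the hostname and paths are placeholders:

```apache
# /etc/apache2/sites-available/www.example.com
# enable with:  sudo a2ensite www.example.com && sudo /etc/init.d/apache2 reload
<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com
    DocumentRoot /var/www/example
</VirtualHost>
```

`apache2ctl -S` dumps the parsed vhost table, which makes "only the first vhost ever matches" problems like this one visible immediately.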
<deucep> Hello
 * RoAkSoAx is off to lunch
<TKsw> I'm having some problems with a static ip on an ubuntu server. I have configured the network in /etc/network/interfaces and double checked it against other servers on the same subnet (but with another IP of course). When I run "/etc/init.d/networking restart" I get an SIOCADDRT: No such process and the interface fails to come up
<TKsw> Any idea what could be causing this error?
<TKsw> dhcp works just fine
<pmatulis> TKsw: maybe pastebin your interfaces file
<remix_tj> TKsw: ifconfig -a and cat /etc/network/interfaces, please :-)
<pmatulis> do *not* paste that stuff in here
<TKsw> no no
<TKsw> Just have to get a dhcp address and then ssh :P
<TKsw> http://pastebin.com/CDDmHbPp
<deucep> Anyone here configured a samba share to have password authentication?
<patdk-wk> per user? or per share?
<TKsw> Anyone has any clue what my network problem could be? :P
<remix_tj> TKsw: remove these two lines
<remix_tj>  network xxx.xxx.102.0
<remix_tj>         broadcast xxx.xxx.103.255
<TKsw> Nope, still getting the error
<remix_tj> uhm
<remix_tj> are you sure gateway, netmask and address are matching?
<remix_tj> (i see also ipv6 configured on your eth0...)
<TKsw> haha, found the error
<TKsw> After one hour of looking at the interfaces file I had typed an IP address wrong ;P
<TKsw> We were even two persons looking really hard at it and still didn't discover the error
<TKsw> Anyway, thanks for the help :)
<remix_tj> :-)
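TKsw's SIOCADDRT: No such process is what ifup reports when the gateway line doesn't fall inside the network described by address + netmask, here because of a one-character typo. A known-good static stanza for /etc/network/interfaces (addresses are documentation placeholders):

```
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.254.0
    gateway 192.0.2.1
# network/broadcast lines are optional, as remix_tj noted:
# ifup derives them from address + netmask
```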
<zul> Daviey: ping did you upload bind sru yet?
<format120> hey, im turning an old laptop into an ubuntu server, hopefully should be ssh, ftp, email-(I have a gmail that i want to use for now) and web-(the domain name for this will be my new email pretty soon, i hope) will someone please point me to a few guides? initial googling didn't turn anything up
<Daviey> zul, No, doing that today
<RoAkSoAx> Daviey: they can be installed in /usr/share/eucalyptus/ and the default either symlinked or copied to /etc/powernap/config.d/ --> ideas?
<Daviey> RoAkSoAx, hmm.. either/or i think...
<format120> oh, and an IRC server, if thats possible, havent googled it yet
<Daviey> RoAkSoAx, it's a config file, that people should be able to edit IMO... so a real file in /etc/ is better IMO.
<Daviey> RoAkSoAx, if it's /usr/share/, it should be static.
<RoAkSoAx> Daviey: right, but in /usr/share there's "examples" and one of these configs should be default and the other is a choice or backup or additional
<format120> yep, I would like to include IRC into that list
<RoAkSoAx> Daviey: and since /etc/powernap is not eucalyptus config it should not go by default there IMHO
<Daviey> RoAkSoAx, I don't think it matters either way tbh... Personally, adding complexity of a symlink seems unnecessary.
<Daviey> RoAkSoAx, Well fair point, if we consider something like phpmyadmin and apache2... that uses the symlink approach.
<Daviey> for a conf.d.
<RoAkSoAx> Daviey: indeed
<Daviey> RoAkSoAx, go with that!
<Daviey> make it so.
<format120> anyone want to help?
<RoAkSoAx> Daviey: alright
<Daviey> format120, I'm really sorry, people aren't ignoring you - we are just all really busy at the moment.  Please hang around, and hopefully someone will help soon.
<smoser> jamespage, or anyone else...
<smoser> wondering if anyone is aware of a "generic native launcher" applet
<format120> ok Daviey
<smoser> we're using this nx web runner applet. but it's basically just an applet that downloads native binaries, puts them into a user directory and then launches them.
<format120> thanks, and im not just waiting for someone to hold my hand, I'm still googling
<smoser> it seems like there might be a more generic (open source/modifiable) applet that does that
<bobg> i rebuilt a  package (asterisk-mp3) and made it available on a local repository but my client ignores this version and downloads it from the official repo.  How do I get my clients to use my local repo with higher priority than the official repo?
<jmedina> pin
<jmedina> google for apt pinning
<Pici> !pinning | bobg
<ubottu> bobg: pinning is an advanced feature that APT can use to prefer particular packages over others. See https://help.ubuntu.com/community/PinningHowto
<bobg> ubottu, Pici, thanks.  it seems that pinning controls which release a package is taken from. I have the exact version of the file on my local repo (it's only rebuilt to fix an apparent build mistake in the official version (https://bugs.launchpad.net/ubuntu/+source/asterisk-addons/+bug/560656)).  How does apt decide where to get a file if it exists in more than one repo? I know I could bump the version number of my build but I would rather not if I don't need to
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<uvirtbot> Launchpad bug 560656 in asterisk-addons "The mysql add-on module is out of sync with asterisk" [Undecided,Fix released]
<Pici> bobg: It grabs the most recent version number unless the pinning setting say differently.
<RoAkSoAx> Daviey: you haven't yet merged the branch that creates the new eucalyptus-admin-tools package right?
<Daviey> RoAkSoAx, don't /think/ i have
<Daviey> RoAkSoAx, I merged the adding of it, but not the new package
<RoAkSoAx> Daviey: ubuntu2 was the one you merged, which was fixing the installing of the missing euca_admin python module
<format120> is there an automatic way to get ubuntu server on a USB? all the guides I found are outdated, or give you a .exe to do it, and Im on Arch linux
<bobg> Pici, do you know what it does if the exact same filename (package version) exists on more than one repository?  Is it arbitrary which one it chooses?  is the order in the sources.list file significant?
<RoAkSoAx> Daviey: don't yet merge it though. I'm gonna  do the powernap configs on top of that
<Pici> bobg: I suppose it would be arbitrary.
<bobg> format120, when I did it about 6 months ago it was surprisingly difficult. I think that the method I used was to mount the usb stick as a virtual drive on a xen virtual machine and then install Ubuntu into the VM -- i followed some how to  (sorry i don't have any details)
<bobg> Pici, thanks
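When the same package version sits in two repos, apt's pick is effectively arbitrary, as Pici says. To make bobg's local repo win deterministically, a pin by origin with a priority above 500 does it; the hostname below is a placeholder for the local repo's:

```
# /etc/apt/preferences.d/local-repo
Package: *
Pin: origin repo.example.lan
Pin-Priority: 700
```

`apt-cache policy asterisk-mp3` then shows which repo wins and at what priority.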
<format120> ok, ill do some googling, thanks
<Daviey> RoAkSoAx, ack
<kirkland> Daviey: RoAkSoAx: i prefer the /usr/share + symlink approach
<RoAkSoAx> kirkland: yeah that's best IMO
<kirkland> RoAkSoAx: \o/
<RoAkSoAx> kirkland: Daviey where shall we document these changes though?
<kirkland> RoAkSoAx: hmm
<kirkland> RoAkSoAx: at least in http://help.ubuntu.com/community/UEC
<kirkland> RoAkSoAx: possibly a release note, if the admin needs to take action
<RoAkSoAx> kirkland: yeah I think a release note is definitely required as this will affect anyone who's running powernap on the NC's
<kirkland> RoAkSoAx: okay
<kirkland> RoAkSoAx: i can help you with that, if you like
<RoAkSoAx> kirkland: sure :)
<format120> hey guys, I'm trying to setup an ubuntu server, but don't have access to a cd drive.  I could use a USB key, but that looks hard, can someone point me to a guide on installing over the network?  I'm in Arch linux on this box, by the way
<geekbri> is it possible somebody might have a clue why my following cron line spits out an error about "end of file unexpected, expecting )": */2 * * * * . /etc/web/conf.d/reporting.env && cd /home/reporting/trunk/crons/ && echo "---------------------------------- `date`" >> /var/log/fc-crons/KPI_build.log && /usr/local/bin/php /home/reporting/trunk/crons/freecause_daily_KPI_build.php $(date -d yesterday +%Y-%m-%d) >> /var/log/fc-crons/KPI_build.log 2>&1
<format120> wait, nevermind, I didn't bother to check the ubuntu website, and forums said that it was really complicated
<genii-around> format120: You might want to look at https://help.ubuntu.com/10.04/installation-guide/i386/linux-upgrade.html it shows a method using debootstrap
<_ruben> geekbri: i'd put all those commands in a script and have cron call that .. much cleaner imo, might ease up the debugging too
<RoAkSoAx> kirkland: should the symlinking be handled in eucalyptus-nc.postinst or shall I just tweak the packaging for that?
<geekbri> _ruben: i may just very well do that, as it seems it runs fine as a bash script
<_ruben> geekbri: i try to keep my cron entries as clean as possible, so i basically write a script whenever it needs to do more than 1 thing ;)
<geekbri> _ruben: i was just trying to avoid the internal dilemma of where do i keep my scripts that run as crons ;)
<_ruben> i tend to just shove 'em in /usr/local/(s)bin/
<soren> lynxman: Thanks!
<_ruben> or depending on the "frequency", one of the cron.{hourly,daily,weekly} dirs
<geekbri> i still am dying to know what i've done wrong however :). I noticed a couple things. one, since im using crontab -e i didn't need the users name, and two, i used . to source the .env file instead of source
<kirkland> RoAkSoAx: it should be in a debhelper .links file
<_ruben> geekbri: you might be using bashisms and cron might be using sh .. just guessing here though
<kirkland> RoAkSoAx: unless logic is required
<kirkland> RoAkSoAx: in which case, postinst would have to be used
<_ruben> and i tend to avoid crontab -e as well ;)
<geekbri> _ruben: im pretty sure that is the problem.  I think it's the entry  $(/bin/date -d yesterday +%Y-%m-%d)
<_ruben> /etc/cron.* is already scattered enough ;)
<RoAkSoAx> kirkland: yeah i'm just using dh_link in debian/rules... btw... I think I'm just gonna ship 1 conf that has kvm-euca = "kvm" and init = "^/sbin/init". Though kvm-euca will only be enabled
<RoAkSoAx> so that if admin changes from the default action of powersave to any other one, they'll have to use the init ProcessMonitor rather than kvm-euca
<geekbri> _ruben: i think im just going to go ahead and put this one in a bash script :)
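One likely cause of geekbri's "unexpected end of file" that never came up in channel: crontab(5) treats an unescaped % in the command field as a newline, which truncates the command inside $( ... date +%Y-%m-%d ... ) and leaves the shell waiting for a ). Moving the one-liner into a script, as _ruben suggests, sidesteps that entirely. A sketch that just writes such a wrapper (the /tmp path is a stand-in for /usr/local/bin; the inner paths come from the original one-liner):

```shell
# Write the wrapper script (sketch only -- this creates the file, it does
# not run the cron job itself, since the paths inside it are site-specific).
cat > /tmp/kpi-build.sh <<'EOF'
#!/bin/sh
# Wrapper so the crontab entry stays trivial. In a crontab, an unescaped %
# is turned into a newline (see crontab(5)); inside a script it is just a
# character, so $(date +%Y-%m-%d) works unmodified.
set -e
. /etc/web/conf.d/reporting.env
cd /home/reporting/trunk/crons
yesterday=$(date -d yesterday +%Y-%m-%d)
{
  echo "---------------------------------- $(date)"
  /usr/local/bin/php freecause_daily_KPI_build.php "$yesterday"
} >> /var/log/fc-crons/KPI_build.log 2>&1
EOF
chmod +x /tmp/kpi-build.sh
```

The crontab entry then shrinks to: */2 * * * * /usr/local/bin/kpi-build.sh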
<bencc> with l7 filters, can I port forward http:// requests to port 8000 and ws:// requests to port 9000?
<bencc> I found good iptables resources but not l7 filter resources
<_ruben> bencc: no
<bencc> _ruben: so what do l7 filters do?
<_ruben> bencc: NAT happens at the very first (SYN) packet, which doesn't have the URI in it
<bencc> _ruben: so there is nothing I can do?
<_ruben> there's a lot of things you can do, just not what you want to ;)
<bencc> except for a proxy?
<_ruben> proxy would be best solution for this, yes
<bencc> _ruben: what about using port 8080, can I count on it to be open for most users?
<bencc> for http of course
<_ruben> for certain values of "most", sure ;)
<_ruben> corporate businesses filter whatever they feel like, though i think 8080 would be kinda safe
<bencc> cool. 8080 it is :)
<bencc> if my users will be blocked, I'll send them here to complain
<_ruben> or just use ipv6 and don't have to deal with the ipv4 depletion issues and port overloading
<bencc> what do you mean?
<_ruben> sounds to me like you're trying to offer 2 services on the same port on a single ipv4 address ... which is a result of the ipv4 depletion (no more ipv4 addresses "soon") ... ipv6 is one of the solutions to that
<bencc> don't I depend on the hosting service?
<_ruben> usually you are, tho any self-respecting hosting service would already offer ipv6 or atleast be able to offer it in the very near future
<_ruben> then again, i'm an optimist from time to time
<bencc> they usually charge for additional ip addresses
<bencc> it doesn't cost much but it is another manual step in the deployment
<jkg> of course a service accessible only over ipv6 is currently not accessible to many, many people...
<_ruben> make sure the service is "interesting" enough, put it on non-standard port on ipv4, and standard port on ipv6 ;)
<_ruben> "forcing" others to bite the bullet as well
<bencc> convince fb to switch to ipv6
 * Patrickdk has switched fully to ipv6 awhile ago
<Patrickdk> the only thing I have that doesn't do ipv6 is fail2ban and mysql
<_ruben> bencc: fb as in facebook? they're on ipv6 atleast partially
<bencc> _ruben: so when I go to facebook.com I see ipv6 ip
<bencc> ?
<_ruben> nah, there's www.v6.facebook.com
<_ruben> still fetches most of its resources over ipv4 though i think
<Patrickdk> fail
<bencc> server not found
<Patrickdk> www.v6.facebook.com.	3600	IN	AAAA	2620:0:1cfe:face:b00c::3
<Patrickdk> works here
<_ruben> same here
<Patrickdk> it freaking redirects me to ipv4 though
<Patrickdk> HTTP/1.0 302 Found
<Patrickdk> Location: http://www.facebook.com/common/browser.php
<Patrickdk> X-FB-Server: 10.43.78.53
<Patrickdk> extra fail, they must be doing a ipv6 http proxy -> ipv4 backends
<shauno> odd, I get sent to Location: http://www.v6.facebook.com/common/browser.php [following]
<Patrickdk> You are using an incompatible web browser. (firefox 3.6)
<Patrickdk> man facebook failing all over the place
<Patrickdk> ya, this time I got redirected to that also
<SpamapS> turbotax didn't want to run on chromium for linux.. changed the useragent to windows.. finished taxes in 20 minutes. Suck it Intuit.
<Patrickdk> facebook must be having ipv6 routing issues
<format120> speaking of ipv6, can you set that up as a service? like ipv6 routing, then PAT onto an ipv4 external network?
<Patrickdk> set it up as a service?
<Patrickdk> you mean, nat64?
<remix_tj> format120: no no no!
<Patrickdk> nat64 + dns64
<remix_tj> nat to external ipv4 is pretty useless
<Patrickdk> not if your running an ipv6 only internal network
<remix_tj> what is the utility of running ipv6 internally if you are not connected to ipv6 public network?
<Patrickdk> who said it wasn't?
<Patrickdk> but not the whole world is ipv6 yet, so
<format120> I did Patrickdk, and I know its pointless, but its fun
<Patrickdk> oh? I didn't see the, only ipv4 external
<remix_tj> you can simply use a dual stack network, i did it in my office
<remix_tj> or you can try messing around with totd and ptrtd
<shauno> if you're going to go that far, why not just find a tunnel broker? with that 1 more step, you could make the 6 actually useful
<format120> it sounds like fun.  I have fios, and might be getting a new server, so i might add a fiberoptic nic, and just have fios directly to the server, and use it as my router, that would be sweet
<remix_tj> you can build in very few minutes an ipv6 router with a linux machine connecting your network to the ipv6 internet using a tunnelbroker like HE.net
<remix_tj> (as i did at work)
<format120> cool, well, I'm working on installing my first server, ever, right now, so it might be a while, with many dumb questions inbetween
<Fredrick> hey, what's landscape? it seems like its for managing large clouds of servers. do i want it to manage my updates? this is my only, and first server
<RoyK> Fredrick: landscape works well, but it isn't free
<Fredrick> oh, lol, so i don't want it.
<RoyK> pkgsync or similar systems exist for doing the same
<Fredrick> im building the first 100% free network, gifted crappy PC's, linux OS's, and free use of my rich friend's solar panels, when hes not using them, and if I do all of his yard work. lol
<Fredrick> so what would be the best choice for a single server?
<RoyK> well, all that depends on the load etc
<Fredrick> probably small, server is for DHCP and other router stuff, maybe a little ssh, ftp, and if im lucky, webserver
<Fredrick> oh, and email
<Fredrick> thats why im making it, the rest are just bonuses
<Fredrick> im thinking, no automatic updates? and just ssh in, and do sudo apt-get update every now and again?
<RoyK> and apt-get dist-ugprade
<RoyK> dist-upgrade, even
<RoyK> apt-get update will only update the index
<Fredrick> oh, how is that diffrent
<RoyK> security updates will happen automatically if chosen so in the setup
<Fredrick> so pick that one?
<RoyK> Fredrick: apt-get update updates the package index, but doesn't download anything, use apt-get dist-upgrade (or perhaps apt-get upgrade if you're paranoid) to upgrade the system
<Fredrick> yea, I went with that one.
<Fredrick> ok, why would apt-get upgrade be more secure than apt-get dist-upgrade
<air^> AFAIK upgrade doesn't do kernel upgrades.
<RoyK> not really - dist-upgrade will only upgrade to the latest sub-release (as in from 10.04.2 to 10.04.3 when that eventually arrives)
<RoyK> air^: even dist-upgrade doesn't upgrade to 2.6.35, which is recommended
<Fredrick> oh, ok
<Fredrick> so how do I get up there?
<RoyK> apt-get install ...
<Fredrick> my install disk is kinda old, so i might be back at .1
<Fredrick> 10.4.1 that is, not 10.0
<Fredrick> 1
<RoyK> there is no 10.0
<RoyK> there is 9.10 and 10.04
<RoyK> year.month
<Fredrick> so just apt-get install is more preferred than apt-get install upgrade?
<Fredrick> I meant 10.01, but i hit enter too early
<RoyK> 10.01 doesn't exist either
<RoyK> it's year.month, with 6 months between each release
<Fredrick> and its barely preferred over apt-get install dist-upgrade?
<Fredrick> oh, ok
<RoyK> there is no such thing as "apt-get install dist-upgrade"
<Fredrick> then why so long in between 10.04, and 10.10,
<RoyK> apt-get install somepackage
<RoyK> or apt-get dist-upgrade
<RoyK> Fredrick: how long between april 2010 and october 2010?
<RoyK> it's quite simple
<Fredrick> err i mean why are we still on 10.04, and not 10.10
<RoyK> !lts
<ubottu> LTS means Long Term Support. LTS versions of Ubuntu will be supported for 3 years on the desktop, and 5 years on the server. The current LTS version of Ubuntu is !Lucid (Lucid Lynx 10.04)
<air^> 10.04 is LTS.
<Fredrick> 4, plus 6 is ten
<Fredrick> you said it was 6 months,
<RoyK> between releases, yes, but two years between LTS releases
<Fredrick> oh,
<Fredrick> ok then
<RoyK> for a home setup, any release will probably do
<RoyK> but for production setups, I use LTS
<Fredrick> thanks, i only have one keyboard, so i have to switch it back to the server, but ill stay in the channel, and switch back when i have another question
<Fredrick> thanks
<RoyK> Fredrick: isn't your server networked? ;)
<air^> :D
<Aison> hello, where do I set the network settings and nfs mount settings so that nfs is mounted AFTER the network is up?
<Aison> currently nfs tries to mount the fstab entries before the network is up and so it fails
<Patrickdk> what nfs version?
<Patrickdk> normally, _netdev
<Aison> 4
<Patrickdk> nfs4 ignores _netdev
<Patrickdk> have to do it manually
<Aison> very nice....
<Patrickdk> https://help.ubuntu.com/community/NFSv4Howto
<Aison> thx
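Since _netdev is ignored for nfs4, one common workaround (an assumption on my part, not a quote from the Howto Patrickdk linked) is to mark the mount noauto in fstab and mount it from a hook that runs once the network is actually up, e.g. an executable script in /etc/network/if-up.d/ that just calls mount /mnt/export. The fstab side, with placeholder server and paths:

```
# /etc/fstab -- don't attempt the mount at boot, before the network exists
server.example.lan:/export  /mnt/export  nfs4  noauto  0  0
```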
<Fredrick> ok, im at Software selection, whats a LAMP server?
<Pici> Fredrick: Apache, mysql, php
<Fredrick> ok, why would I want that?
<Fredrick> what would a user be doing, in order to require a Lamp server? arent apache, and php programming languages? and mysql a database?
<Fredrick> ok, well what about a Tomcat Java server?
<shauno> apache's a web server.  php is a programming language, and mysql a database.  together they make the stereotypical linux-based webserver
<Fredrick> oh, thanks
<Fredrick> what about Tomcat Java?
#ubuntu-server 2011-03-16
<uvirtbot> New bug: #735830 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.37-1ubuntu5.5 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/735830
<twb> I'm translating /etc/bind to /etc/nsd3.
<twb> What's the nsd equivalent of "auth-nxdomain no"?  Do I even need it?
<twb> In this named snippet: http://paste.debian.net/110811/
<twb> Is "notify yes" applying to the master, or the (presumably downstream) zonedit nameservers?
<^Mike> How can I run SMART tests on a HDD?
<twb> apt-get install smartmontools
<twb> smartctl <something to say "start a short test">
<twb> smartctl <something to say "show self-test results">
<twb> RTFM; I can't remember what the options are
<Datz> something like smartcl -A /dev/sda1 I thought
<Datz> er
<twb> IIRC -A reports all info
<Datz> smartctl
<twb> smartctl -l selftest /dev/sda
<twb> smartctl -t short /dev/sda # or long, if you have a couple hours
<Datz> -A looks like it reports all smart attributes
<twb> You may also need -d ata for SATA disks, if you're on an old system.
<twb> Datz: right, but most of those are misleadingly scary
<Datz> inaccurate I've noticed for some things
<Datz> but right, not tests
<^Mike> thanks
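twb's half-remembered options are right: smartctl -t short /dev/sda starts a self-test and smartctl -l selftest /dev/sda shows the results later. A sketch of reading such results, run against text merely shaped like smartctl's self-test log (the sample is fabricated, not from a real disk):

```shell
# The self-test log lists one row per test with a Status column; a healthy
# disk shows "Completed without error". Sample text below is made up.
log=$(mktemp)
cat > "$log" <<'EOF'
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%       1234        -
# 2  Extended offline    Completed: read failure       90%       1100        52412
EOF
grep -c 'Completed without error' "$log"      # clean runs
grep 'read failure' "$log" && echo "disk is suspect"
rm -f "$log"
```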
<^Mike> bah
<^Mike> "Device does not support SMART"
<^Mike> Well, if you can make any sense of these logs, I'd appreciate the help: http://sprunge.us/bBGh?txt
<twb> ^Mike: maybe you're behind a RAID card
<uvirtbot> twb: Error: "Mike:" is not a valid command.
<^Mike> I doubt it, it's just a cheap external HDD connected to the machine by USB
<twb> ^Mike: "unable to read inode block" I reckon you've made the filesystem slightly larger than the partition -- did you dd it from one to another?
<uvirtbot> twb: Error: "Mike:" is not a valid command.
<twb> Also your name is stupid :-/
<twb> \bMike\b: hmm, you're getting it for several partitions, so more likely is just that either the copy is hosed or the disk is dying
<\bMike\b> no, but I could well have messed up the partition/filesystem sizes anyways
<twb> Oh, and you can't do SMART to a USB enclosure -- if you take the disk out and put it in a SATA slot, then SMART will work
<\bMike\b> oh, ok
<\bMike\b> huh, that disk also reports "Partition table entries are not in disk order"
<`3Shades> Anyone any good at configuring virtual networks? I am having problems remoting into my virtual machine. I can ssh but not remote in.
<`3Shades> I am running ubuntu server 10.10 and I have VM's set up and I would like to be able to remote into those as well from outside my network. I have a standard linksys wireless router.
<`3Shades> I have googled to no avail, or at least I'm not understanding what I'm doing wrong.
<twb> What does "remote in" mean
<`3Shades> vnc
<twb> Are you running a VNC server in the VM?
<`3Shades> I was using remote desktop so...
<`3Shades> I set up remote desktop
<twb> "remote desktop" often means RDP, not VNC.  Make sure you know which you mean
<`3Shades> I installed gnome gui on the server then set up remote desktop
<`3Shades> I went into admin and clicked the allow remote desktop connections...
<twb> `3Shades: last time I looked that will display a prompt on the VM's local display saying "accept connection? [Yes|no]"
<`3Shades> allow users to take control of this computer
<`3Shades> yes I clicked accpet connections
<`3Shades> well first I"m just tring to get into the host
<twb> I don't know
<`3Shades> See it's weird, because i can ssh in.
<twb> If you were using a simple X VNC server, I could help.  I'm not familiar with this GNOME stuff
<`3Shades> my dhcp table is set to have the mac address go to an ip address yet when I look up the address you should remote into it says some weird address, but either address I try I cannot remote in.
<`3Shades> I know it has something to do with the VM bridge and settings in my linkxys router, but I am lost.
<`3Shades> what do you recommend I install then
<`3Shades> just vncserver
<`3Shades> lol
<twb> xvncserver4 IIRC
<`3Shades> can I apt-get that
<twb> Or just tunnel X over your ssh connection
<twb> tightvncserver, on my system, appears to be a simple headless VNC X server
<twb> What you have is probably exporting an existing display over VNC, which is more stupider
<`3Shades> tightvncserver is already installed
<`3Shades> It seems stuck at security type requested
<twb> Oh yeah, IIRC the vinagre/vino stuff in GNOME implements some non-standard encryption option
<twb> So if you checked "encrypt connection" or whatever, then you can't connect to it except from the GNOME VNC client
<twb> And your client will probably just hang showing a black screen
<`3Shades> vnc works thanks..
<`3Shades> still need to figure out how to route these VM's to a port though
<`3Shades> I'll wait to ask until I have another 10 hours on it lol
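twb's tunnelling suggestion, sketched as commands (user@server and the display number are placeholders; VNC display :1 conventionally listens on TCP 5901):

```
# On your desktop: forward a local port to the VNC display over the
# ssh connection that already works, then point a viewer at it.
ssh -L 5901:localhost:5901 user@server
vncviewer localhost:5901

# Or, for single applications, just forward X itself over ssh:
ssh -X user@server xterm
```

The -L form also sidesteps the GNOME/vino encryption incompatibility mentioned above, since the VNC traffic only ever travels inside the ssh tunnel.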
<uvirtbot> New bug: #735950 in mysql-dfsg-5.1 (main) "LOCK TABLE <table> WRITE; does not obtain an exclusive table lock " [Undecided,New] https://launchpad.net/bugs/735950
<acegrimm> bonjour
<xperia_> hello to all. i have a strange problem with memcached. it give me allways this error here => DataMemCache Error, Purge Old File from Cache Dir
<xperia_> where is this Cache Dir of MemCached ? i have searched it but could not find it
<twb> xperia_: caches usually go in /var/cache, occasionally /var/lib
<whoaski> hello all
<whoaski> I just installed ubuntu server w/ apache2, mysql, and rails. tring to configure a developmentserver to bounce code off of
<whoaski> I'm new to this and I want to know where i can get information navigating the command line
<whoaski> I have the ubuntu server guide
<whoaski> it's all installed I just don't know enough to get into it to configure it
<xperia_> twb okay thanks will just look at it. /var/cache i checked allready but could not find anything related to memcached
<xperia_> twb hmm strange even in /var/lib i can not find memcached ! any other help
<twb> xperia_: read the config file?
<xperia_> not till yet i will look at it just right now
<fakhir> I seem to have a IO bottleneck issue but i dont quite understand what it is telling me.  -> http://pastebin.com/Vz1btnbn
<_ruben> fakhir: i assume sdb is a single sata disk?
<fakhir> _ruben, yes
<fakhir> what i am seeing is high load average but low CPU usage on a MySQL server.
<_ruben> then having ~40 random IOs per second is pretty consistent with the 50% utilization
<_ruben> a sata disk can do about 100 of 'em per second
<_ruben> and a (busy) (my)sql server tends to do a fair bit random IOs
<fakhir> any idea why i might be seeing what i am seeing if io is fine?
<_ruben> how high is the load average?
<fakhir> _ruben,  always above 1 typically ~1.5 . CPU percentage is nearly always 0.
<_ruben> iowait is only 6% btw .. or does it go up a lot?
<_ruben> and a load of 1 on a 8way system isn't really that much of problem
<_ruben> 1 core is "occupied" while the other 7 are idle (on average)
<fakhir> ohh i see
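_ruben's arithmetic as a runnable sketch (Linux-only: it reads /proc/loadavg and uses nproc from coreutils):

```shell
#!/bin/sh
# Normalize the 1-minute load average by core count, per _ruben's point:
# a load of ~1 on an 8-way box means one core busy, seven idle on average.
cores=$(nproc)
load=$(cut -d' ' -f1 /proc/loadavg)
percore=$(awk -v l="$load" -v c="$cores" 'BEGIN { printf "%.2f", l/c }')
echo "load $load over $cores cores = $percore per core"
```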
<Daviey> twb, Interesting that you prefer to maintain your own netfilter rules.
<Daviey> Why is that?
<twb> Daviey: you mean with respect to blacklisting?
<twb> It's a static ruleset
<twb> http://paste.debian.net/110844/ is the active ingredient
<Daviey> twb, I have a call now, but in an hour or so - i'd like to talk to you further about it.
<twb> It'll have to be tomorrow, or trentbuck@gmail.com -- I'm going home shortly
<soren> twb: I've never worked out what the difference is between --state and --ctstate.
<soren> I guess I never tried to work it out either. :-/
<twb> soren: ctstate is "better"
<Daviey> twb, ok, thanks!
<twb> I *strongly* recommend anyone writing netfilter rules read http://jengelh.medozas.de/documents/Perfect_Ruleset.pdf
<twb> ...which is cited by #netfilter
<soren> twb: Yeah, the good mr. Engelhardt knows what he's talking about.
<soren> twb: The only thing in that doc that I didn't know was the ctstate thing. I wonder why the --state thing isn't just replaced entirely by --ctstate.
<twb> Probably hysteria
<soren> Yeah, there's a lot of that going around.
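For reference, a minimal stateful ruleset in iptables-restore format, loosely in the spirit of the Engelhardt paper cited above (a sketch, not the ruleset from twb's paste; the SSH port is an example):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Match established flows first, using -m conntrack rather than -m state
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
COMMIT
```

Load with `iptables-restore < rules`, which applies the whole set atomically instead of rule-by-rule.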
 * soren is excited about ipset being included in 2.6.39, by the way.
<twb> Eh, DKMS xtables makes that a non-issue
<twb> I need xtables for something else anyway, I forget what
<twb> Maybe just -j CHAOS
<soren> twb: There's a dkms variant?
<twb> It's new in Debian
<soren> twb: Or are you thinking of the module-assistant thing?
<twb> Maybe you suckers don't have it yet
<soren> twb: Oh, shiny.
 * soren files sync request
<soren> done
<soren> \o/
<soren> twb: Thanks for the hint.
<twb> np
 * soren smells an openstack patch brewing
<twb> Meanwhile flipping nsd3 definitely *will not* DTRT WRT upstart
<soren> twb: I didn't fully understand your problem with that.
<twb> one moment
<twb> soren: http://paste.debian.net/110845/
<soren> twb: It gets a new PID when you HUP it? That sounds *incredibly* broken.
<twb> YES
<twb> soren: it *is* smart enough to update its own pidfile
<twb> But even so
<twb> If you don't specify an allow-transfers {} list, does bind9 default to allowing AXFRs from 0/0 (i.e. everyone)?
<twb> It sure looks like it, which explains a lot
<twb> Fuck, I missed tea-time
<twb> Looks like I'm having ramen for supper again
<soren> "ramen"?
<soren> Ah, noodle stuff.
<whoaski> good morning
<ikonia> twb: easy on the language please
<twb> Sorry.
<azizLIGHTS> how do i run dropbox as a service? this guide is outdated because start-stop-daemon doesnot exist on 10.04? http://wiki.dropbox.com/TipsAndTricks/TextBasedLinuxInstall/UbuntuStartup
<azizLIGHTS> /etc/init.d/dropbox: 60: start-stop-daemon: not found
<joschi> azizLIGHTS: it does. it's part of the 'dpkg' package
<joschi> azizLIGHTS: you just need either a proper $PATH variable set or use the absolute path to the binary
<azizLIGHTS> oh hm
<azizLIGHTS> do i have to undo step 2 listed in the guide to edit /etc/init.d/dropbox ?
<azizLIGHTS> specifically this line: sudo update-rc.d dropbox defaults
<joschi> azizLIGHTS: why would you?
<joschi> azizLIGHTS: the "init script" is also lacking a proper shebang
<azizLIGHTS> i dont know im new to linux?
<azizLIGHTS> what is that
<azizLIGHTS> what is the proper shebang i mean
<joschi> azizLIGHTS: the first line of a script should contain a pointer to the interpreter which will run the script, e. g. #!/bin/sh to use /bin/sh
<azizLIGHTS> oh
<joschi> azizLIGHTS: you could also easily use the upstart-script from the dropbox wiki
<azizLIGHTS> ugh
<azizLIGHTS> so this is wrong?
<azizLIGHTS> joschi: ok well putting full path /sbin/start-stop-daemon, and adding #!/bin/sh to first line, now the script works with service dropbox start, confirmed it with ./dropbox.py status
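The two fixes joschi describes, as a hypothetical skeleton of the init script under discussion: a shebang on line 1, plus an absolute path to start-stop-daemon since /sbin is often not on $PATH when init scripts run. The daemon path here is an example, not the one from the wiki guide.

```shell
#!/bin/sh
# Hypothetical init-script skeleton; paths are examples.
DAEMON=/opt/dropbox/dropboxd        # example location; use your install path
SSD=/sbin/start-stop-daemon         # absolute path instead of a bare name

case "${1:-}" in
  start) "$SSD" --start --background --exec "$DAEMON" ;;
  stop)  "$SSD" --stop --exec "$DAEMON" ;;
  *)     echo "Usage: $0 {start|stop}" ;;
esac
```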
<Wise_> I'm trying to create a disk image of my ubuntu install, with clonezilla... but it's telling me that the partitiont able in the disk is illegal/invalid, not supported by parted, with the error "can't have overlapping partitions", anyone know how I deal with this?
<Wise_> err, partition table* -_-
<azizLIGHTS> ok i edited the dropbox wiki for it, ty for input joschi
<MarkAng> How can I see and change the port used by my Ubuntu server to access the internet?
<MarkAng> We're planning on using it to create and distribute images to computers using PXE.
<MarkAng> But it isn't able to reach the ubuntu archives, like the ones specified in the sources.list file.
<joschi> MarkAng: that'll be on destination port 80/tcp (http). the source port is random
<MarkAng> The reason I'm asking is that we're operating under a company-wide network that has restrictions on certain ports. For instance, the port used by uTorrent is blocked. Is it possible to instruct my server to use or not use a certain port?
<MarkAng> Anyone?
<_ruben> if they're blocking based on source port, hit 'em with a cluebat, a lot
<_ruben> if they're blocking port 80, you should probably use the proxy they provide
<MarkAng> They're not blocking port 80, as I'm able to access web pages. \
<shauno> do you have a specific error from trying to reach the archives?
<_ruben> what does your sources.list look like then? those tend to list http urls
<MarkAng> They do. But whenever we issue a command that should use the repositories inside to update and/or install programs (in this case, a GUI for the text-based server)
<MarkAng> t says it cannot reach the destination files.
<MarkAng> Cannot reach/cannot find.
<pmatulis> "a GUI for the text-based server"?
<MarkAng> Yes. It's command-line as you know, and we know there are GUI skins available.
<MarkAng> We try to install one to make life a little easier for us, but so far no good.
 * Pici scratches his head
<Pici> Which package are you trying to install?
<MarkAng> Hang on, let me check
<MarkAng> ..
<pmatulis> MarkAng: you most probably need to configure a proxy for apt-get
<MarkAng> We use either of the following commands, but neither one works:
<MarkAng> sudo apt-get install xubuntu-desktop
<MarkAng> sudo aptitude install --no-install-recommends ubuntu-desktop
<MarkAng> Both commands should fetch a desktop-like environment and overlay it over the command-line.
<MarkAng> But like said before, they don't work.
<MarkAng> Guys?
<pmatulis> MarkAng: well.  is your company using a proxy or not?
<MarkAng> I think they are, yes. Im an IT intern, so I'm not sure what it is.
<pmatulis> https://help.ubuntu.com/community/AptGet/Howto#Setting%20up%20apt-get%20to%20use%20a%20http-proxy
<Caribou> you would need a statement like "Acquire::http::proxy "{proxy URL}"; in /etc/apt/apt.conf file
<MarkAng> It's a large organisation, spanning multiple cities and with hundreds of schools. The only persons that could tell me what the proxy is are out of reach.
 * _ruben puts a sticker MarkAng's server: Warranty void if desktop packages are installed
<Caribou> MarkAng: don't you have a web browser somewhere that would be configured with the same proxy ?
<MarkAng> I just asked an admin that returned to the office, and we have no proxy.
<pmatulis> MarkAng: confirm that your server can reach the internet in general?
<Caribou> you might want to try 'wget http://www.google.com' to see if you can hit the Internet
<pmatulis> Caribou: are you dumming down my comments or do you not see what i'm writing?
<Pici> w3m is installed by default I believe.
<Caribou> no, I'm just adding a suggestion to your comment : he could confirm that he can reach the internet by doing that. Sorry if it seems rude
<pmatulis> Caribou: alright
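Caribou's statement above has an unbalanced quote as typed; the working form of the apt.conf line is (proxy host and port are placeholders):

```
# /etc/apt/apt.conf (or a file under /etc/apt/apt.conf.d/)
# proxy.example.com:3128 is a placeholder -- substitute your site's proxy
Acquire::http::Proxy "http://proxy.example.com:3128";
```

Dropping it into a file under /etc/apt/apt.conf.d/ also works and keeps the setting separate from anything package upgrades touch.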
<RoAkSoAx> morning all
<SpamapS> RoAkSoAx: howdy
<RoAkSoAx> SpamapS: how's it going man, up early today huh?
<SpamapS> RoAkSoAx: always.. the baby knows just when to wake me up so I can't go back to sleep :p
<RoAkSoAx> SpamapS: look the bright side.. you can now say you wake up earlier than everybody else here in the US >P
<RoAkSoAx> (everybody else on the team)
<zul> SpamapS: its genetic ;)
<medberry> Caribou, howdy
 * Caribou waves back
<yann2> hello! what is the most appropriate package to provide the command "mail" - mailutils?
<yann2> there is also bsd-mailx...
<hallyn> i think i usually install mailutils
<hallyn> dunno why.  works, though.
<soren> I used to use mailx.
<genii-around> yann2: It used to be something like bsd-mailutils but I think just the mailutils now
<soren> In acient times, at least.
<soren> For the last decade, I've used mutt, though.
<soren> ..but if I needed "mail", I'd use mailx. No particular reason.
<yann2> soren, so used I, but http://pastealacon.com/27001
<soren> yann2: bsd-mailx == mailx, IIRC.
<patdk-wk> well, mailx == rh mail, and most other peoples mail command
<yann2> ok, I ll go for that
<yann2> thanks
<uvirtbot> New bug: #736149 in bind9 (main) "Ubuntu 10.10 host command ignores all conf files" [Undecided,New] https://launchpad.net/bugs/736149
<bricas> I set up some logrotate config files for my web server logs (monthly rotation), it's been running well over the last 6 months -- except for some reason it decided to rotate things yesterday (15th). Anyone else experience this? Any hints as to how I might diagnose why?
<Daviey> bricas, Depends on the rotate rule.... Did it rotate on filesize, rather than date?
<bricas> Daviey: http://nopaste.snit.ch/37851 # changed real domain to example.com for the paste, but everything else is straight copy+paste.
<Daviey> bricas, how odd... Nothing obvious is jumping out at me.
<bricas> Daviey: yeah, i didn't think my config was off -- like i said, it's been running well for the last 6 months. on the 1st of each month, it seems to fire.
<bricas> Daviey: there's no obvious place that would log something like "logrotate running .. rotating log $foo because of $bar" ?
<Daviey> not really..
<bricas> of course not :)
 * patdk-wk teachs bricas to use symlinks :)
<bricas> patdk-wk: ...? rather than a copy of the log, perhaps? :)
<patdk-wk> ya :)
<bricas> patdk-wk: i only did a copy because i was afraid someone was going to mess that file up, honestly. paranoid am i. :)
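One way to attack bricas's "why did it rotate" question: logrotate's -d flag dry-runs the config and prints, per log, which criterion it would trip, and the last-rotation dates it compares against live in /var/lib/logrotate/status on Ubuntu. For context, a monthly stanza of the general shape under discussion (paths are examples, not bricas's real config from the paste):

```
# /etc/logrotate.d/example -- general shape only
/var/log/apache2/example.com-access.log {
    monthly
    rotate 12
    compress
    missingok
    notifempty
}
```

Run `logrotate -d /etc/logrotate.conf` to see the decision for every log without rotating anything; a state file whose date line was rewritten (e.g. by a forced run) is one possible cause of an off-schedule rotation.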
<hallyn> Daviey: if you'd like to chat sometime about https://blueprints.launchpad.net/ubuntu/+spec/ubuntutheproject-community-n-server, please ping me.
<RoAkSoAx> hallyn: hallyn include me on your conversation ^^
<Daviey> hallyn / RoAkSoAx: Yes!  That would be wonderful.. I'm so pleased you are both enthusiastic about that one. :)
<RoAkSoAx> Daviey: :)
<uvirtbot> New bug: #736205 in squid (main) "package squid 2.7.STABLE7-1ubuntu12.2 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1 [title translated from Russian]" [Undecided,New] https://launchpad.net/bugs/736205
<SpamapS> http://wiki.spamaps.org/
<patdk-wk> 1.7sec page load time :(
<jiboumans> soren, ttx, are you guys likely to head the celery way for messaging? http://wiki.openstack.org/UseCeleryAsQueueManager
 * RoAkSoAx goes out for lunch
<SpamapS> patdk-wk: no surprise when your mysql db is on a t1.micro ;)
<patdk-wk> SpamapS, actually, it's more of an issue of only using 1 dns name for the server :)
<patdk-wk> 500ms per request, and they are getting delayed :)
<SpamapS> so its not 1 dns name, but that you didn't have the dns name cached
<SpamapS> my resolver responds in 50 - 100 ms once its cached..
<SpamapS> your browser shouldn't be requesting it multiple times
<_ruben> hm, a dig +trace wiki.spamaps.org doesn't go beyond the .org servers
<zul> jiboumans: probably not but check on #openstack
<jiboumans> zul: will do
<hallyn> kirkland: around ?
<hallyn> kirkland: the vgabios package's links file is still sending links to /usr/share/kvm.  is tehre a reason for that?  or can i fix that to /usr/share/qemu?
<hallyn> kirkland: bc otherwise if you isntall vgabios first and then qemu-kvm, then qemu-kvm doesn't find vgabios
<kirkland> hallyn: howdy
<kirkland> hallyn: please put it in /usr/share/qemu
<orudie> what is the minimum ram required for ubuntu server 64 bit ?
<genii-around> orudie: I have a box here with 192Mb it runs on
<cloakable> I've seen a minimal install use 32Mb
<orudie> so 64 bit is recommended ?
<genii-around> if your cpu is 64 bit
<_ruben> 128MB seems to be the documented minimum
<kirkland> hallyn: you've been on a bug triaging tear :-)
<kirkland> hallyn: my bug-mail inbox is *full*  :-)
<kirkland> hallyn: also, i see the vgabios merge proposal ... don't you have upload rights to vgabios?
<kirkland> hallyn: or do you just want a sanity check?
<geekbri> question, is there something lightweight that can be installed if i only want to do outgoing mail from my server and dont care about incoming mail?
<SpamapS> geekbri: smail might be what you're looking for
<geekbri> SpamapS: thanks
<jsemar> I need to set up an smtp server to act as a relay for my ec2 instances, can one of you guys point me to a decent tutorial for that?
<cloakable> apt-get install postfix
<cloakable> Follow directions.
<jsemar> thanks cloakable
<jsemar> thats awesome
<cloakable> jsemar: ubuntu is pretty easy to use :)
<jsemar> yeah i know, but your answer is useless, obviously i can install postfix, but i need for the ec2 instance to have its postfix .cnf file send mail to my postfix installation on probably linode
<jsemar> so theres authentication, and all sorts of things besides 'sudo apt-get install postfix'
<cloakable> By default ubuntu postfix uses system accounts. You want smarthosting.
<_ruben> well, you didn't mention authentication in the first place
<cloakable> Also, does your linode have a static ip? Use that for authentication.
<_ruben> no experience with ec2, but if the ips are static, you can just use those as restrictions
<cloakable> yea
<jsemar> you wouldnt recommend using sasl/smpt auth
<_ruben> depends on the scenario
<geekbri> damnit, is there anyway to find out why another mail server is refusing your connection... I know i must be missing something that is causing it to make me marked as spam and refusing my connection but they just say error 550 connection refused
<_ruben> contact its postmaster
<Patrickdk> heh, ip's for ec2 are static per instance
<Patrickdk> so if you use ebs fs's, and stop/start, ip will change
<Patrickdk> kind of like a cable modem :)
<Patrickdk> as long as it doesn't go down, your likely to have the same ip forever
<Patrickdk> but no guarrentee :)
<geekbri> well yeah that i know
<geekbri> but i have an elastic IP assigned to the instnace
<SpamapS> jsemar: given that your instances will be ephemeral, you'll probably want to use sasl+tls
<SpamapS> jsemar: Its similar to the config I use for my laptop to relay through a smart host.
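A client-side smarthost configuration of the sasl+tls flavour SpamapS describes looks roughly like this (relay host is a placeholder; these are the usual postfix parameter names, to my knowledge):

```
# /etc/postfix/main.cf on the sending instance -- relay everything
# through a smarthost with SASL auth over TLS; host is a placeholder
relayhost = [smtp.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

/etc/postfix/sasl_passwd then maps `[smtp.example.com]:587` to `user:password` and is compiled with `postmap` before postfix will read it.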
<RoyK> natty installation was quite nice, btw, at least on the desktop - it starts installing in the background before asking stupid questions like username/timezone etc :)
<SpamapS> geekbri: EC2 ips, even the elastic IPs, are pretty much banned by now
<SpamapS> too much abuse out there.
<SpamapS> RoyK: I think Maverick did that too..
<geekbri> i guess that makes sense... its just unfortunate when you want to send automated emails to yourself :)
<SpamapS> does.. shouldn't refer to the stable release in the past tense.. ;)
<RoyK> SpamapS: oh - dunno - haven't installed a maverick desktop yet - only upgraded to it...
<geekbri> you know whats nice?
<geekbri> ubuntu on ec2, i haven't found a server distro with as much support as it yet.
<ccm> Hi there
<hallyn> kirkland: I don't yet have the upload rights
<hallyn> (I test with a vmbuilder push that I want to do every morning :)
<kirkland> hallyn: still around?
<hallyn> yup
<hallyn> kirkland: what's up?
<kirkland> hallyn: just sent you the kvm auto install magic
<hallyn> ah, thanks.
<kirkland> hallyn: essentially, it's:
<kirkland> boot an ubuntu server ISO and append "priority=critical locale=en_US url=http://bit.ly/ubuntu-nqa" to the kernel  command line
<kirkland> hallyn: noting that we might need to tweak the preseed file which is pointed to by url=http://bit.ly/ubuntu-nqa"
<kirkland> nqa is my acronym for "no questions asked"
<hallyn> and so i should test it and send it on to the kvm-autotest folks?
 * hallyn is watching grub compile and his memory is failing him
<kirkland> hallyn: right
<kirkland> hallyn: what i've never quite been clear on
<kirkland> hallyn: is what/how they want to call this
<kirkland> hallyn: note that the preseed pointed to by that bit.ly link should probably go into kvm-autotest's source control
<kirkland> hallyn: and it'll need to be web-retrievable from somewhere, perhaps a raw git print
<kirkland> hallyn: and you can shorten that with bit.ly or the like
<kirkland> hallyn: does that make sense?
<hallyn> are bit.ly links permanent?
<oraqol> hey yall, quick question, what exactly is Ubuntu Enterprise Cloud?  Is it a virtualization platform?  I've got a quadcore amd 6 gig rig, is that enough to support it?
<hallyn> (that is, until .ly goes down :)
<kirkland> hallyn: yeah
<kirkland> hallyn: sure
<kirkland> hallyn: full urls are fine too in there
<kirkland> hallyn: and if it's called by a script, there's no need to obfuscate it behind a url shortener
<hallyn> kirkland: ok, i'll take a look at that next week.  (i'm out rest of this week).  thanks
<kirkland> hallyn: if you're typing it in by hand (which i do often do to quickly get a vm installed, hands off)
<kirkland> hallyn: it's nice to have a bit.ly link :-)
<kirkland> hallyn: sure
<hallyn> you control your dns don't you  so you could shorten p.c.c/~kirkland/preseed.cfg
<kirkland> hallyn: heh
<kirkland> hallyn: i reckon i could
<kirkland> hallyn: i'm at austin java, at the moment though
<kirkland> hallyn: and i haven't pwned their dns....yet.... :-)
<hallyn> ah   been back to thunderbird?
<kirkland> hallyn: i haven't, actually
<kirkland> hallyn: i just finished the bag of beans i bought there, though, so i was thinking about it yesterday
<kirkland> hallyn: i didn't pick you to be one for march madness
<hallyn> huh?
<hallyn> you'd have been right
<kirkland> *robbiew is taking the rest of the week off to watch basketball :-P
<RoAkSoAx> kirkland: maybe magic can be put into cobbler to install images with koan using that presseed
<hallyn> oh :)
<kirkland> RoAkSoAx: either there, or in orchestra
<hallyn> unrelated :)
<kirkland> hallyn: i know, i was kidding
<kirkland> hallyn: i assume you care none whatsoever for NCAA basketball :-)
<hallyn> right, but apparently our esteemed prez does
<oraqol> no love?
<hallyn> so i found 'conboy' for the n900 today.  it does tomboy format, and syncs to ubuntu one!  yay
<hallyn> oraqol: kirkland could answer you best i think :)
<RoAkSoAx> kirkland: definitely in orchestra. But with koan we can easily install KVM images using it on the host, but right now the use of koan to install Ubuntu images is broken :)
<kirkland> oraqol: what's your question?
<hallyn> (just hit up-arrow twice i think)
<oraqol> is it a virtualization platform?
<hallyn> uec
<kirkland> oraqol: UEC = Ubuntu Enterprise Cloud
<kirkland> oraqol: i suppose you could call it a virtualization platform; your hardware is enough to support it, yes
<hallyn> oraqol: it's intended for making clusters.  what is your intended use?
<kirkland> oraqol: https://help.ubuntu.com/community/UEC/CDInstall#STEP%201:%20Prerequisites
<oraqol> right now im running two vms on vanilla ubuntu server, one of which is a pbx, should i just be using uec as the host os?
<kirkland> hallyn: i'll commit/upload vgabios now, if you like
<kirkland> oraqol: do you often need to spin new vm's up and down?
<oraqol> no, they tend to be always running
<hallyn> kirkland: please do, thanks
<kirkland> oraqol: probably not necessary in your case
<kirkland> oraqol: kvm + libvirt is probably enough
<oraqol> ok cool, thanks guys, much more responsive than the #ubuntu chan
<oraqol> ttyl!
<hallyn> all right one more attempt at building grub2 in my vm.  then i give
<hallyn> up
<kirkland> hallyn: oh
<kirkland> hallyn: actually, we need one more thing to make this vgabios work
<hallyn> ?
<kirkland> $ dpkg -S /usr/share/qemu/vgabios.bin
<kirkland> qemu-common: /usr/share/qemu/vgabios.bin
<kirkland> hallyn: qemu-common puts symlinks in that particular path
<kirkland> hallyn: so we'll have dpkg conflicts
<hallyn> hold on - maybe not (checking)
<hallyn> oh
<hallyn> so why does qemu-common create that link?  if it doesn't provide the file?
<kirkland> hallyn: so actually, i think we just need to remove vgabios/debian/links entirely
<kirkland> hallyn: okay, here's the logic ...
<hallyn> well then qemu-common needs to add a bunch more
<hallyn> (vgabios.qxl.bin, etc)
<kirkland> hallyn: vgabios puts its bios files in a path that it "owns", /usr/share/vgabios/*
<kirkland> hallyn: qemu-common depends on vgabios to be functional
<kirkland> hallyn: and qemu, itself, expects those vgabios files to be in /usr/share/qemu/*
<kirkland> hallyn: so it should be up to qemu to put symlinks in its owned path to the data it needs
<kirkland> hallyn: and if qemu needs to add more links, then so be it
<kirkland> hallyn: but it should be up to qemu package to do that
<kirkland> hallyn: makes sense?
<hallyn> sure
<hallyn> it does make sense
<hallyn> so scratch that merge request :)
<kirkland> hallyn: okay, i'm going to fix that in the branch and upload
<hallyn> which branch?
<hallyn> both should be done at once, probably, so if you wanted to wait until monday...
<hallyn> i.e. you can't remove the links from vgabios without fixing qemu-common, without biting some people, right?
<kirkland> hallyn: hmm, no, i don't think that's a problem
<kirkland> hallyn: what extra links are needed in qemu-common?
<kirkland> hallyn: vgabios uploaded
<kirkland> hallyn: i'm looking at qemu now
<hallyn> Well, vgabios has all those oterh bios names now...
<kirkland> hallyn: ah, yes, i see
<kirkland> hallyn: i'll get those fixed now
<hallyn> kirkland: thanks!
<kirkland> hallyn: gotta run now, but i'll get it uploaded tonight
<kirkland> hallyn: bzr is really slow over public wifi :-(
<hallyn> kirkland: s/over.$//
<hallyn> oh no did i say that?
<hallyn> kirkland: thanks, ttyl
<hallyn> kirkland: btw, it helps to preseed (if you will) your laptop...  I have one shared bzr repo which makes subsequent re-fetches of a tree pretty fast no matter what
#ubuntu-server 2011-03-17
<EvilPhoenix> <EvilPhoenix> how can i remove a package that has a failed prerm and postinst script?
<EvilPhoenix> it cant be removed through the standard methods, hence why i'm asking
<twb> EvilPhoenix: when I get into that situation, I usually edit the prerm/postinst
<EvilPhoenix> twb:  the problem is...
<twb> EvilPhoenix: they're in /var/lib/dpkg/info/.  Do be careful, though
<EvilPhoenix> twb:  i only need to remove one line to fix it i think
<EvilPhoenix> twb:  the issue is i cant remove the file nor can i install the updates due to failed scripts
<EvilPhoenix> twb:  also, the package is a package I built, i screwed up the scripts :P
<EvilPhoenix> twb:  so i know the changes i need to make.  thanks
<EvilPhoenix> twb:  the package still fails, any MANUAL removal processes you can point me to for failed packages?
<twb> EvilPhoenix: nope
<twb> EvilPhoenix: pastebin the transcript of dpkg -P foo
<EvilPhoenix> http://pastebin.com/A4QzZqvi  <-- ignore the bash highlighting
<telive> hello everyone . i have just installed  apache2 in ubuntu and found that there is no apache2.conf file in /etc/init dir . so apache do not stat at the boot time .
<EvilPhoenix> twb:  i mean, I could just manually go through and erase the files, including the stuff in dpkg relating to the package, no?  would that not fix it?
<EvilPhoenix> twb:  again, considering I originally built the package so its my fail that's the issue
<twb> EvilPhoenix: I really wouldn't do that
<twb> EvilPhoenix: just remove the "set -e" from the broken pre/post inst/rm
<twb> And ideally fix your new version to not given an error if bbot isn't running at the time
<telive> can anyone help ?
<EvilPhoenix> twb:  i'd need the bash script to test if the process is actually running
<EvilPhoenix> twb:  i'm not pro at bash scripting
<twb> EvilPhoenix: pastebin it
<EvilPhoenix> twb:  that's the thing, i dont have code for such a script...
<EvilPhoenix> twb:  as i said, i'm not pro at bash scripting
<EvilPhoenix> twb:  i just have `kill -9 bbot` in the script
<EvilPhoenix> which is why it'd be triggering
<EvilPhoenix> :/
<EvilPhoenix> ah there we go
<EvilPhoenix> the "being careful" with the dpkg/ files fixed :P
<EvilPhoenix> s/fixed/worked/
<twb> EvilPhoenix: uh, kill takes a pid
<EvilPhoenix> twb:  killall, not kill
<EvilPhoenix> sorry
<EvilPhoenix> i'm semi tired :P
<twb> You ought to have more like "invoke-rc.d bbot stop"
<EvilPhoenix> problem is its a .py file, the executable is just a shell script that calls the .py file :/
<EvilPhoenix> i'll remove the kill part, hope it all works :P
<twb> "killall -9 bbot" is totally wrong, but you could try "killall -9 bbot || :" to ignore errors from that one command.
<EvilPhoenix> i'll test it in the next build i do, which will be tomorrowishi :P
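The fix twb suggests, as a sketch of the prerm fragment (bbot is the daemon from EvilPhoenix's own package): under "set -e", a failing killall aborts the whole removal, so "|| :" is appended to swallow its exit status.

```shell
#!/bin/sh
set -e
# prerm sketch: stop the daemon, but don't let a failure abort removal.
# With "|| :" the script survives even when bbot isn't running.
killall -9 bbot 2>/dev/null || :
result="removal can proceed"
echo "$result"
```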
<hallyn> telive: is this on lucid?  natty?
<telive> hallyn, it is lucid
<hallyn> telive: what happens when you do '/etc/init.d/apache2 start' ?
<EvilPhoenix> it starts the apache2 webserver
<EvilPhoenix> unless its already started
<EvilPhoenix> oh wait a sec nevermind
<EvilPhoenix> i misread hallyn's message :P
<hallyn> :)
<EvilPhoenix> hallyn:  such is the nature of skimming scrollbacks :p
<telive> hallyn, it starts the apache2 as expected  , but it didn't start apache2 at boot time , i have checked in sysc-rc-conf and the apache2 is on .
<hallyn> telive: hm.  Perhaps you'd best file a bug.  After boot, type 'ubuntu-bug apache2'.
<JanC> finding the reason why it doesn't start would be nice though  ;)
<hallyn> yeah but i think we need to look at various log and config file contents...
<hallyn> JanC: do you have an idea offhand?
<JanC> I already proposed to look into logs in another channel  ☺
<JanC> (before i saw telive moved here)
<hallyn> JanC: sorry, didn't mean to interfere!  carry on :)
<uvirtbot> New bug: #736351 in qemu-kvm (main) "natty vgabios installs links into /usr/share/kvm" [Medium,In progress] https://launchpad.net/bugs/736351
<telive> JanC, hallyn , actually i have no idea which log to watch . as there is nothing wrong in /var/log/apache2/error.log
<JanC> hallyn: you were already answering here before I answered in the other channel, so you don't really interfere ☺
<JanC> telive: is this with a default apache2 config, or did you change anything?
<telive> JanC, i just install from apt-get , nothing has been changed .
<uvirtbot> New bug: #736567 in apache2 (main) "apache2 didn't start at boot time . but it is checked on in sysc-rc-conf" [Undecided,New] https://launchpad.net/bugs/736567
<telive> JanC, hallyn , nobody is interfere , and i am really appreciate for your help , thanks you two .
<JanC> you could try increasing the apache log level, but that's useless if apache is never run...   ☺
<JanC> telive: do other sysvinit scripts get run?
<JanC> especially ones that ony get started at the same runlevel as apache?
<JanC> only
<KB1JWQ> telive: This isn't a virtualized box by any chance?
<Godfather> I'm looking to run a pop3/smtp on 10.10 any sugestions on the dameon?
<KB1JWQ> Dovecot.
<JanC> dovecot for POP3 (& IMAP), postfix for SMTP
<hallyn> telive: say, what does 'runlevel' show, and what is in /etc/rc2.d/ ?
<telive> KB1JWQ, it is a physical machine , not a virtual one
<telive> JanC,  the vsftpd doesn't work too .seems that sysv doesn't work in my machine
<JanC> telive: sounds like we are on the right track there  ☺
<telive> hallyn, ls /etc/rc2.d | grep apache K01apache2
<telive> hallyn, it is stop in runlevel 2  . i think u mean rc5.d ?
<telive> hallyn, ls | grep apache S91apache2
<telive> JanC, means sysv doesn't work ?
<JanC> telive: eh?
<JanC> default runlevel on Debian & Ubuntu is 2
<telive> JanC, not 5?
<JanC> no ☺
<JanC> unless you changed something of course
<telive> JanC, 5 means the the graphic mode , it is ?
<JanC> no
<JanC> default runlevel on Debian/ubuntu is always 2, 3-5 are reserved for the sysadmin to do with whatever he/she likes
<JanC> it's not Red Hat  ;)
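The runlevel discussion above can be checked directly on the box; a sketch using the standard sysv-rc tools (the K01apache2 link telive pasted is the telling detail, since K links mean "stop at this runlevel"):

```shell
# Print previous/current runlevel; Debian/Ubuntu default to 2:
runlevel
# S## links start a service at that runlevel, K## links stop it,
# so K01apache2 here means apache2 is disabled at boot:
ls /etc/rc2.d | grep apache2
# Re-create the standard start/stop links for apache2:
sudo update-rc.d -f apache2 remove
sudo update-rc.d apache2 defaults
```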
<telive> JanC, god , jesus . i was always told that level 5 means graphic mode in the red hat series
<twb> And the stuff in upstart doesn't understand runlevels much, either
<JanC> it is, in red hat
<telive> JanC, thanks for your time . i am so ashamed . i thought the run level is the same in the two distribution .
<twb> No need to be ashamed
<JanC> ... unless you make the same error again in the future ;-)
<JanC> well, "error", more like a misunderstanding
<hallyn> telive: phew  :)  have a good night
<telive> hallyn, it is day time here. have a good night to u .
<telive> JanC, thanks a lot . hehe .
<Doonz> I just got a VPS box with ubuntu 10.04 installed. I created 2 new users. How do i get them to have the same terminal feel as the original user (sorry i dont know what to call it)
<kaushal> hi
<kaushal> Just trying to understand about any server which gets freezed.
<kaushal> I have checked dmesg, syslog and various logs
<kaushal> is there a way to trouble shoot it ?
<kaushal> dont see any issues
<kaushal> How do i know whats causing the issue
<kaushal> the only way to set it right is to reboot the server
<kaushal> Please help me understand
<Tohuw> kaushal: The first question is always: what was happening with the server when it stopped working?
<kaushal> Tohuw: its not accessible at all
<Tohuw> kaushal: if it isn't accessible, you can't troubleshoot it until you make it accessible again.
<kaushal> yeah
<kaushal> since its located in Data Center
<Tohuw> kaushal: they provide you no facility to remotely restart your server?
<kaushal> yeah
<kaushal> so we had IPMI too, it did not respond
<Tohuw> Well, until you can access the server, you cannot troubleshoot it. If it is impossible to boot the server, perhaps /var/log (or better all of the files) can be recovered.
<chilli0> Hello I installed ubuntu 10.10 ( server and I'm having a few issues with samba)
<chilli0> I can't seem to restart samba, I'm not sure why though. I did install samba4 and then  uninstall it but it is still there. It's confusing.
<jmarsden> kaushal: If you have a separate IPMI NIC and its associated control processor setup, and *it* does not respond, then I would guess that you either have a network issue reaching it, or else a hardware issue that is killing both the main server board/CPU *and* the IPMI board... which I think would probably mean a power-related issue of some sort?
<jmarsden> IPMI stuff is *supposed* to run completely independently of the main server hardware.  If you (and the data center tech staff) cannot get to the IPMI board, they should probably swap out the whole dedicated server chassis for you.  (If it is coloed -- your hardware, so your problem!)
<ttx> jiboumans: yes, there is a plan in this direction. I have no clue how mature it is or how reactive they are, though.
<ttx> jiboumans: the perceived gains for us are detailed in http://wiki.openstack.org/UseCeleryAsQueueManager
<ttx> jiboumans: not sure how much this was compared to other equivalent solutions... I don't know of any in Python that would be so advanced, and we are Python-oriented.
<RoyK> Happy Paddy's day :)
<uvirtbot> New bug: #736685 in augeas (main) "unmatched [ causes segfault in augtool.c:63" [Undecided,New] https://launchpad.net/bugs/736685
<uvirtbot> New bug: #736694 in squid (main) "package squid 2.7.STABLE7-1ubuntu12.2 failed to install/upgrade: ErrorMessage: Unterprozess installiertes post-installation-Skript gab den Fehlerwert 1 zurück" [Undecided,New] https://launchpad.net/bugs/736694
<eagles0513875> hey guys i created a user and for some reason tab completion is not enabled as well as i cant backspace if i make a mistake on a command any ideas how i can fix it
<ikonia> eagles0513875: how did you create the user ?
<eagles0513875> useradd jaquilina -d /home/jaquilina -m
<ikonia> interestingly basic
<eagles0513875> i found that command from googling
<ikonia> has it copied all the default files into her home directory
<ikonia> eagles0513875: you're running a business server and you had to google on how to add a user ?
<eagles0513875> yes the .bashrc , .bash_profile
<eagles0513875> ikonia: this is a new vps im migrating slowly to it
<ikonia> that doesn't change what I've said
<eagles0513875> i know how to add a user just not have it generate a home dir for that user that is what i keep forgetting
<ikonia> if your running a business server as you say you are and you can't add a user without googling you are in a lot of trouble and I suggest you install the desktop gui
<eagles0513875> cant afford to with what will be on it
<ikonia> why ?
<eagles0513875> dont have that much ram available
<ikonia> how much do you have ?
<eagles0513875> 2gb
<ikonia> that's tons
<ikonia> the desktop with no effects won't touch that
<eagles0513875> but once i get my sites migrated over here the dbs will eat that up pretty quick
<ikonia> eagles0513875: I very very much doubt your database will eat 2GB of ram
<eagles0513875> my server before i upgraded had only 1 and it got eaten up rather badly
<ikonia> then your server is missconfigured
 * eagles0513875 goes back to googling
<ikonia> how big is the database and how many TPS does it do ?
<ikonia> (or databases I should say as you suggested you have more than 1)
<eagles0513875> its 3 databases for 3 websites the db for my crm then for cacti then database for shopping cart
<ikonia> ok, what sort of usage are they getting ?
<ikonia> (roughly, I'm not looking for detailed figures)
<eagles0513875> detailed figures i dont have but they will eventually be getting high usage
<ikonia> define high usage
<ikonia> roughly
<eagles0513875> in terms of queries
<ikonia> well, queries, is a good start, or anything that you think defines high usage
<ikonia> just what in your mind do you class as high usage?
<ikonia> however you want to summarise/define it
<eagles0513875> couple of thousand queries bout 5,000 queries or more
<ikonia> per day/minute/hour
<eagles0513875> atm per day
<eagles0513875> which then again wont be that high usage but i am expecting it to climb
<ikonia> ok, what about when you think it's busy
<eagles0513875> ?
<ikonia> it's currently doing roughly 5000 per day, what sort of numbers are you expecting it to climb to
<ikonia> the reason I'm asking this is I've just picked one of my mysql database machines randomly, with a 800 meg DB, it's doing over 1800 transactions per hour, with 3 of those regular queries quite large and complex, and it's using under 38 meg of ram
<eagles0513875> O_o
<eagles0513875> i think some tweaking is due on my end then
<ikonia> and I'd consider that very light use
<eagles0513875> probably poor optimization on my part
<eagles0513875> in regards to my user issue what do i need to do to remedy that issue
<ikonia> hence why I'm querying it and suggesting that 2GB of ram is ample to stick the gui on and make your life easier
<eagles0513875> tbh i prefer command line and would like to get familiar with it
<eagles0513875> work with it more then i have in the past
<ikonia> you've been learning it for more than 2 years and you're not progressing, if you're genuinely trying to run a serious server, use the tools to help you do it
<ikonia> having the gui on there doesn't mean you can't use the command line at the same time
<ikonia> it's just a useful set of tools
<eagles0513875> how much space does gnome take up outa curiosity
<ikonia> I think it's about 600 meg for a full install,
<ikonia> (roughly)
<eagles0513875> O_O from what im seeing its showing 2,144MB (2.1gb) O_o
<eagles0513875> can always purge the desktop environment when im fedup of it
<RoyK> eagles0513875: 2GB for a server install???
<RoyK> Filesystem            Size  Used Avail Use% Mounted on
<RoyK> /dev/mapper/idun-root 124G  1.7G  116G   2% /
<eagles0513875> gnome
<RoyK> oh
<eagles0513875> ya big time travesty
<RoyK> on a server...
 * RoyK once setup linux servers with 32MB RAM drives
<ikonia> eagles0513875: with regard to your user issue, setup another user, see if it has the same issues
<eagles0513875> ok
<eagles0513875> i just realized i wont be able to access the desktop lol if i put one on there
<ikonia> why can't you access the desktop ?
<eagles0513875> i have never had much luck with forwarding x to another machine like this
<ikonia> that's easy to setup, we can help you with that
<eagles0513875> i would prefer to stay on commandline
<ikonia> why ?
<ikonia> you can't use it
<eagles0513875> yes i can
<ikonia> having the desktop will make it easy for you and you can still use the command line
<ikonia> eagles0513875: you can't use it, lets be honest,
<eagles0513875> i can use it
<ikonia> and you can still use it even with the gui, but for serious tasks that are important you won't have the worry of messing up
<eagles0513875> this isnt on my server now this is on a vps
<ikonia> eagles0513875: come on, you've just had to google how to add a user, I'm not putting you down for that, but that is basic stuff for running a server on the command line
<eagles0513875> ikonia: i know how to add a user
<ikonia> it doesn't matter if it's a vps or a physical server
<ikonia> eagles0513875: you had to google it
<eagles0513875> i just dont remember when adding a user hwo to add a user with a home directory
<ikonia> eagles0513875: ok, so if you knew how to use the command line you could do man useradd read the man page for the -d and -m option and add it, you wouldn't need to google it and copy and paste a command
<ikonia> (not that there is anything wrong with googling)
<ikonia> eagles0513875: have you created another user to test yet ?
<eagles0513875> doing so now
<eagles0513875> same issue
<ikonia> try adding the user like this
<ikonia> sudo useradd -g users -d /home/test -s /bin/bash -m -k /etc/skel test
<eagles0513875> what exactly does the -k /etc/skel test do?
<_Techie_> eagles0513875, sets the skeleton directory
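A quick way to see what the skeleton mechanism does; a sketch assuming stock defaults:

```shell
# useradd's built-in defaults; the SKEL= line names the directory whose
# contents are copied into a new user's home when -m is given:
useradd -D
# A stock skeleton carries the dotfiles that make the shell usable
# (tab completion, backspace, prompt):
ls -a /etc/skel    # typically .bashrc, .profile, .bash_logout
```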
<ikonia> test is the username
<eagles0513875> ok
<eagles0513875> its not working saying group doesnt exist
<ikonia> the users group doesn't exist ?
<ikonia> show me the output of uname -a please
<eagles0513875> im on 10.04.2
<eagles0513875> Linux EagleEyeT 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC 2011 x86_64 GNU/Linux
<eagles0513875> i changed up that command a bit to try and create the user i would like to use
<ikonia> eagles0513875: why did you change it
<ikonia> it is a test
<_Techie_> ikonia, maybe he didnt want to have to redo it if it worked
<eagles0513875> that worked actually
<_Techie_> why do you suggest stupid things?
<ikonia> ?
<ikonia> what  stupid thing did I suggest ?
<_Techie_> naming it test because its a test when its clearly better to use a proper username, and if it works just migrate
<ikonia> _Techie_: not at all, as those settings may not have been right for him, I wanted to do a test to see if it worked,
<eagles0513875> it seems to look fine now
<ikonia> eagles0513875: so it works with the auto completion ?
<eagles0513875> yes
<ikonia> but the two users you made without the -k option didn't work
<eagles0513875> nope
<_Techie_> ikonia, i hear that you suggested that eagles0513875 install a GUI because he wasnt that well versed with all the commands?
<ikonia> for some reason your machine is not defaulting to /etc/skel
<ikonia> _Techie_: correct yes
<eagles0513875> ikonia: wouldnt kubuntu-offtopic be more appropriate for this offtopic chit chat
<ikonia> it's not offtopic chit chat
<_Techie_> ikonia, i dont care if i get banned for this or not, but its shitheads like you suggesting people withought perfect CLI knowledge use a GUI, they will never learn if you say that, so next time dont try scapegoat and help them the fuck out
<ikonia> _Techie_: just tone down the language please
<_Techie_> and with that being said
<_Techie_> i wish you a good day
<ikonia> _Techie_: I'm not criticising someone for not having perfect command knowledge, however if you are trying to run a server and you've not picked up the basics in over 2 years of learning, using the gui as a fall back is a good tool
<ikonia> ooh, why did it underline him ?
<jkg> ikonia: because <underscore><string><underscore>
<ikonia> jkg: my client normally doesn't do it
<jkg> oh huh, yeah, mine either a few lines prior
<jkg> guess it doesn't do it for nicks in channel
<\sh> hmmm...what needs to be done in an upstart pre-start script to tell upstart to not actually exec a job? is an exit code of != 0 enough to give upstart the order to not "exec <service daemon>" ?
<\sh> forget about my question...
<uvirtbot> New bug: #736757 in samba (main) "PANIC (pid 18442): internal error in winbindd" [Undecided,New] https://launchpad.net/bugs/736757
<Daviey> \sh, What was the answer?
<airtonix> Daviey: isn't that awesome?
 * airtonix rages
<Daviey> airtonix, Need more words... :)
<airtonix> Daviey: when someone asks for help then neglects to pass on the solution
<Daviey> ahh!
<Daviey> I'm sure \sh will when he comes back to irc.. :)
<airtonix> always the optimist
<Daviey> But generally yes, i 'love' forum threads where it just says "fixed it!"
<\sh> Daviey: the solution is: script \n # doing some checks \n if [ <whatever is true> ]; then \n exec <service> \n fi \n end script.. I just was too complicated with my thinking of pre-start script .. end script exec ...
<\sh> I should always think twice before I ask a question...the answer is sometimes too obvious ;)
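Spelled out, \sh's one-liner is an ordinary upstart `script` stanza; a sketch of a hypothetical job file (job name and daemon path are placeholders):

```
# /etc/init/mydaemon.conf -- hypothetical job
start on runlevel [2345]
stop on runlevel [!2345]

script
    # do the checks here; if they fail, fall through without exec'ing,
    # so upstart never starts the daemon
    if [ -x /usr/sbin/mydaemon ]; then
        exec /usr/sbin/mydaemon
    fi
end script
```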
<Daviey> \sh, ah!  Yes - i see
<Daviey> airtonix, told ya so :P
<jamespage> Morning all - can someone give me some help understanding how the library path works in Ubuntu?
 * jamespage thinks things have moved on since LD_LIBRARY_PATH was used....
<Daviey> jamespage, You can overide using LD_LIBRARY_PATH
<Daviey> LD_LIBRARY_PATH=/home/jamespage/something/ ./somebin
<jamespage> But ldconfig with files in /etc/ld.so.conf.d is the 'proper' way to do it?
<Daviey> dave@voodoo:~$ cat /etc/ld.so.conf
<Daviey> include /etc/ld.so.conf.d/*.conf
<jamespage> Daviey: thats the one :-)
<jamespage> so LD_LIBRARY_PATH could be used to localise a change to support say a Non-Ubuntu installed application?
<Daviey> jamespage, Well that is a funny question... I used LD_LIBRARY_PATH on lucid just the other day, and it seemed to work... but just doing some quick searching people are saying that it doesn't..
<Daviey> I understood setting the env variable was good for local usage, (doesn't require root!), but /etc/ld.so.conf.d was the place to do it for packaging etc.
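The two mechanisms being compared, side by side; a sketch with hypothetical paths:

```shell
# Per-invocation override: no root needed, affects only that process:
#   LD_LIBRARY_PATH=/opt/myapp/lib ./somebin
# System-wide: drop a snippet into /etc/ld.so.conf.d/ and rebuild the cache:
#   echo '/opt/myapp/lib' | sudo tee /etc/ld.so.conf.d/myapp.conf
#   sudo ldconfig
# Inspect what the dynamic linker cache currently knows (no root needed):
ldconfig -p | head -n 3
```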
<jamespage> Daviey: so the context to my question was about bug 735296 - it looks like a local PostgreSQL install is getting included at the system level which kills winbindd
<uvirtbot> Launchpad bug 735296 in samba "package winbind 2:3.4.7~dfsg-1ubuntu3.4 failed to install/upgrade: underproces installed post-installation script returnerede afslutningsstatus 127" [Undecided,Incomplete] https://launchpad.net/bugs/735296
<jamespage> wanted a nice solution to allow the reported to continue with the local install without conflicting with system installed components.
<Daviey> jamespage, Yeah... LD_LIBRARY_PATH i would think would be the correct fix for that... but i'm not certain, as i'm now in doubt that still works!
<Daviey> jamespage, I wonder why he is using non-packaged postgres?
<jamespage> Daviey: good question
<Daviey> Still, i'm intrigued to know the answer now :)
<jamespage> Daviey: not sure its a full installs - looks like it may be a locally compiled version of http://pgfoundry.org/projects/psqlodbc/
<zul> morning
<Daviey> late-morning zul o/
<zul> Daviey: no really early
<Daviey> jamespage, Ah, that makes more sense
<Daviey> zul, not on real o'clock... ie, UTC. :)
<zul> Daviey: heh my biological clock says its really early
<Daviey> heh
<zul> SpamapS: you didnt know about augeas?
<zul> nxvl: ping when you are around
<\sh> hmmm...lucid -> rabbitmq-server failed to install ? no bugs open? grmpf
<zul> SpamapS: yes i know about php 5.3.6 ;P
<Doonz> I just got a VPS box with ubuntu 10.04 installed. I created 2 new users. How do i get them to have the same terminal feel as the original user (sorry i dont know what to call it)
<ikonia> Doonz: please don't cross-post you've already asking in #ubuntu
<RoAkSoAx> morning all
<\sh> zul: somehow something is wrong with rabbitmq-server on lucid: 1. it warns me that I'm upgrading from an old version (which is not true, because it's a fresh server) and it fails to start during "configure", but it's running already
<zul> \sh: known bug
<\sh> zul: bug #653495 eventually?
<daxroc> Hey all
<zul> \sh: its fixed in maverick and above we need to do an sru for it
<daxroc> with 10.10 , More often than not when downloading files speeds drop below 10Kb ps , this is both for local and external network requests , i've tried disabling ipv6 both through sysctl and through grub to no avail, ( also tried seperate switches, ports, cables. Has any one seen this problem or have any suggestions ?
<daxroc> Local is Cat5e, downloads are external http, ftp and apt is also slow
<daxroc> Internal http,ftp  are fine upto 25Mb ( roughly 1second ) then slow to < 10Kb
<n1md4> hello.  i've installed uec on a cc server, with 2 network interfaces.  network/interfaces has both configured for dhcp, surely 1 should be a public address, and the other static for dhcp to the rest of the cluster.  I'm following https://help.ubuntu.com/10.04/serverguide/C/uec.html but it's not clear what should be configured for multiple interfaces.  Can someone pastebin their network/interfaces and dhcp.conf files please.
<n1md4> https://help.ubuntu.com/community/UEC/CDInstall guide even.
<daxroc> box is a dell precision T1500 with NetLink BCM57780 Gigabit Ethernet PCIe (rev 01) Nic
<daxroc> consistent over other hardware too Dell 2850 with bonded nics
<RoAkSoAx> zul: howdy! Do we have to subscribe needs-packaging bugs to ubuntu-archive once the package gets uploading and its sitting in the new queue?
<RoAkSoAx> s/uploading/uploaded
<zul> RoAkSoAx: no
<RoAkSoAx> zul: ok thanks. python-ethtool is there now so I guess it's just matter of time
<zul> RoAkSoAx: yep
<Daviey> RoAkSoAx, If it's not been looked at before Tuesday, i'd start asking ~ubuntu-archive directly
<Daviey> \sh, Regarding the rabbitmq-server bug, you can preseed that value to go away.
<RoAkSoAx> Daviey: will do ;)
<Daviey> \sh, rabbitmq-server rabbitmq-server/upgrade_previous note
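Daviey's hint written out as a command; the debconf question name is quoted verbatim from the line above, so treat it as unverified:

```shell
# Pre-answer the "upgrading from an old version" debconf note so it
# never fires during install/configure:
echo 'rabbitmq-server rabbitmq-server/upgrade_previous note' | sudo debconf-set-selections
```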
<orudie> for some reason I don't get a ping reply from some domains, for ex. I can ping yahoo.com but no ping reply from google.com
<Tricks> alright guys our server went down earlier so I've forwarded all web traffic to our backup server
<Tricks> it was running fine but now it's serving pages REALLY slowly and I can see about 10 processes (/usr/sbin/apache2 -k start) open with ps -aux
<Tricks> anybody know why these processes are running?
<patdk-wk> cause you told it to?
<Tricks> nope
<Tricks> just started happening
<patdk-wk> what do you mean just started happening?
<Tricks> tried apache2ctl -stop and start to clear them and they come back :S
<patdk-wk> apache always runs several copies of itself
<Tricks> patdk-wk, surely not the start command though?
<patdk-wk> heh?
<patdk-wk> what part of apache starts several copies of itself ==1?
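What patdk-wk is pointing at: with the default prefork MPM, one parent process keeps a pool of identical children, all showing the same `-k start` command line, so ~10 processes is normal. A sketch for inspecting the pool (the directive values shown are the stock Ubuntu defaults of the era; verify against your apache2.conf):

```shell
# One parent (root) plus N children (www-data), same command line:
ps -C apache2 -o pid,ppid,user,cmd
# Pool size is governed by the prefork directives, e.g.:
#   StartServers          5
#   MinSpareServers       5
#   MaxSpareServers      10
#   MaxClients          150
```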
<resno> having hdd issues errors pop-up: http://dpaste.com/510513/. not sure how to fix
<uvirtbot> New bug: #736904 in squid (main) "package squid 2.7.STABLE7-1ubuntu12.2 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/736904
<SpamapS> zul: no I just discovered augeas a bit ago. Very cool.
<zul> SpamapS: ah ok
<SpamapS> zul: seems like it would be very useful when paired w/ debconf
<SpamapS> zul: I'm using it in ensemble formulas to tweak config files.
<zul> SpamapS: agreed
<iggi> Can anyone help me troubleshoot an apache issue? I had a virtual host setup and it worked fine, but ever since I rebooted all of the pages state "The server is taking to long to respond"
<RoAkSoAx> hallyn: has anything from today's updates might have broken something with KVM/VNC?
<RoAkSoAx> hallyn: even though while trying to upgrade It shows that qemu-kvm has been kept back, when trying to view it with virt-viewer or virt-manager it just shows a black screen
<jamespage> hggdh - time to move to Jenkins?
<hggdh> heh
<hggdh> jamespage, certainly
<jamespage> hggdh: so I'm pretty happy with the state of the current from source packaging in the testing PPA.
<jamespage> I'm just pushing through a bit of refactoring; and I have polished up the jenkins-slave package to replace the existing hudson-slave package from usit (although it will still work with Jenkins)
<jamespage> I guess the question is how and when.
<jamespage> I'd also like to get the ec2 testing stuff I've been working on in a Jenkins instances as well - maybe the same one.
<jamespage> (oh - and where - do we still want to be cloud based)
<hggdh> jamespage, on the last question: we will need a lot more of preparation for getting out of the cloud -- we will have to check with IS how to get it running, and still be able to use the slaves we have now
<hggdh> so I do not think we will be able to do it *right* now
<hggdh> jamespage, about when... right now would be a good time, the ISOs are broken
<jamespage> doh!
<hggdh> jamespage, and finally, on how: IDK ;-)
<jamespage> Well we can either have a nice clean install and move over the jobs we want to keep;
<jamespage> or we can snapshot the current system and move over in-place - it should be fairly clean
<RoAkSoAx> Daviey: just proposed a eucalyptus branch for you to review
<hggdh> jamespage, which one would be easier? Sounds like snapshot would work, correct?
<jamespage> hggdh: yes but I will test it first :-)
<jamespage> is there anything else we want to change at the same time - for example adding some extra storage etc..  which would be easier on a clean install
<jamespage> migrating from one server to the other is a quick scp of /var/lib/hudson -> tgt:/var/lib/jenkins
<jamespage> And we can move the elastic IP when we are happy with the new server.
<hggdh> jamespage, we might want a bit more storage, just in case
 * patdk-wk borrows the tgt
<hggdh> let me check how much we use right now
<jamespage> I reckon a new server is best bet then; we also need to discuss ec2 automated testing
<jamespage> as that can be run on the master server as it has no special hardware requirements.
<jamespage> However it does require more resources - I tried to run more than 2 in parallel on a t1.micro and it ran out of memory.
<hggdh> heh
<hggdh> now, how much more memory you think?
<hggdh> and the t1.micro is really, er, micro...
 * RoAkSoAx off to lunch
<iggi> I had a virtual host setup and it worked fine, but ever since I rebooted all of the pages state "The server is taking to long to respond"
<RoAkSoAx> Daviey never mind, uploaded wrong branch, gonna fix after lunch
<iggi> So... How can a website have a viewable website, but not a complete A record? i.e.- bandcommunications.com (this is obviously not legit)
<jkg> your browser is probably auto-adding the www subdomain (which in this case does have an A record)
<iggi> How did I miss that? I'm really off today lol
<n1md4> hello.  Is this the right place to ask about uec?
<iggi> n1md4: You can ask, depends if people are around that can answer.
<brosseaub> hello
<brosseaub> anyone know how to install php 5.3.3 on ubuntu 10.04 ?
<n1md4> I'm following this guide http://cssoss.files.wordpress.com/2010/12/eucabookv2-0.pdf I'm on page 9/10.  I've a 2 server model, 1 controller and 1 node.  The node is behind the controller, but I can't connect to the internet; i.e. aptitude fails.
<brosseaub> i could compile this by hand but there must be a PPA or something to automate the process
<b0ot> I'm new to snmp... is there an easy way to install net-snmp or opennms on ubuntu
<b0ot> 10.04
<iggi> b0ot: sudo apt-get install net-snmp net-snmp-utils I believe, could be wrong, that might be for CentOS though.
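Those package names do look like the CentOS ones; on Ubuntu the net-snmp bits ship under different names. A hedged sketch of the likely 10.04 equivalents:

```shell
# Likely Ubuntu equivalents of CentOS's net-snmp / net-snmp-utils:
sudo apt-get install snmpd   # the net-snmp agent daemon
sudo apt-get install snmp    # command-line tools (snmpget, snmpwalk, ...)
```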
<zul> nxvl: ping again
<nxvl> again?
<nxvl> when was the 1st one?
<zul> this morning
<zul> so augeas
<nxvl> really
<zul> really
<zul> check the logs ;)
<zul> so agueas
<zul> is it possible we can get 0.8?
 * nxvl checked and no ping
<nxvl> sure
<nxvl> i can update it liek now
<nxvl> like*
<nxvl> one sec
<zul> ill ask for FFE then or I can get Spamaps to :)
<iggi> nxvl: I see a ping (but didn't have timestamp on) was about 10:45-10:50 EDT
<nxvl> 0.8 is already in natty
<nxvl> https://launchpad.net/ubuntu/+source/augeas/0.8.0-0ubuntu1
<uvirtbot> New bug: #737027 in multipath-tools (main) "kpartx udev rule is broken" [Undecided,New] https://launchpad.net/bugs/737027
<nxvl> zul: ^^
<zul> seriously?
<nxvl> and it builded
<zul> nxvl: sorry about that then
<zul> nxvl: you might want to update it in debian then :)
<nxvl> nope, we are waiting for doc something
<SpamapS> 0.8 in natty works beautifully
<SpamapS> 0.7 in lucid seems confusing and a bit broken. :-/
<SpamapS> but I may be abusing it
<SpamapS> just the -s flag makes augtool a lot simpler to use from the cmdline. :)
<SpamapS> nxvl: are you extremely familiar w/ augeas? I could use a couple of pointers on a common scenario..
<nxvl> used to be
<nxvl> haven't touch it in a long time
<SpamapS> What were you going to use it for?
<nxvl> i was going to write a central managment/configuration tool
<nxvl> but i have no time left
<SpamapS> ahh...
<SpamapS> well I am trying to figure out how to do this all in one augtool command..
<SpamapS> mysqldpath=`augtool ls /files/etc/mysql/my.cnf | grep ' = mysqld$'|head -1`
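One possible consolidation of that pipeline, assuming the stock MySQL lens models sections as `target` nodes (which is what the `grep ' = mysqld$'` implies); an untested sketch:

```shell
# Let augtool do the matching itself instead of ls + grep:
augtool match '/files/etc/mysql/my.cnf/target[. = "mysqld"]'
# With -s, a set against that path parses, edits and saves in one shot
# (key_buffer and the value are illustrative):
augtool -s set '/files/etc/mysql/my.cnf/target[. = "mysqld"]/key_buffer' 16M
```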
<Hashyri> Hi, I installed ubuntu server, and I'm trying to setup a SQL database on it, is it recommendable ?
<Hashyri> It's for personal use only, for learning.
<SpamapS> Hashyri: very. Which SQL database did you want to use?
<Hashyri> 10g
<SpamapS> Oh
<SpamapS> Oracle doesn't support Ubuntu IIRC
<SpamapS> Hashyri: I assume you meant Oracle 10g
<Hashyri> yes
<SpamapS> you know.. there are other SQL databases. ;)
<patdk-wk> 10g is old
<nxvl> SpamapS: yeah, no, i'ven't use it that far, but raphink_ might know
<SpamapS> raphink: ping? around? want to answer an augeas question?
<raphink> I'm around ;-)
<raphink> the question might be better asked on #augeas though
<Hashyri> I had the Oracle 10g installed on windows server 2003, before... since I use in school I guess I should use the same version in home, don't ?
<SpamapS> good point. :)
<raphink> I'm about to leave so in case it lasts a bit ;-)
<Hashyri> augeas, ok, will look around
<Hashyri> thx
<SpamapS> Hashyri: no, not augeas
<Hashyri> oO
<SpamapS> Hashyri: thats something I'm working on. ;)
<Hashyri> ahh
<Hashyri> Ok
<SpamapS> Hashyri: You should try PostgreSQL .. its free and has similar features to oracle
<Hashyri> hmm, ok will give a try.
<Hashyri> Can I have different versions of SQL databases in one linux server ? Will it generate conflicts ?
<uvirtbot> New bug: #712695 in openssh (main) "sshd crashed with SIGSEGV" [Low,Incomplete] https://launchpad.net/bugs/712695
<RoAkSoAx> DavidLevin: ok the eucalyptus branch should be ready, proposed the merge into the wrong branch
<RoAkSoAx> argh
<RoAkSoAx> DavidLevin: sorry >P
<RoAkSoAx> Daviey: ^^
<DavidLevin> no worries
<Daviey> RoAkSoAx, ok great
<Daviey> RoAkSoAx, What was the reasoning for switching from pycentral to dh_python2?
<RoAkSoAx> Daviey: Debian is kinda enforcing it for what I can see, so I just decided to go ahead with the change, that doesn't really affect anything http://wiki.debian.org/Python/PyCentral2DhPython2
<Daviey> yeah, it seems to be the direction all the cool kids move in.
<Daviey> RoAkSoAx, Okay, it looks good to me, i'll merge and upload it first thing in my morning.  However, can you also add bug #697753 to it?
<uvirtbot> Launchpad bug 697753 in eucalyptus "bittorrent recommendation prevents bittorrent demotion" [Medium,Confirmed] https://launchpad.net/bugs/697753
<RoAkSoAx> Daviey: sure
<zul> hallyn: ping
<RoAkSoAx> Daviey: will do it before you wake up :)
<Daviey> RoAkSoAx, rocking
<Daviey> (you forget i never sleep)
<RoAkSoAx> Daviey: I was changing that to "I'll do it before UK Morning" :P
<zul> thats why he is slightly nuts
<RoAkSoAx> zul: *slightly*
<zul> Daviey: the stories i heard would send chills down your spine
<Daviey> heh
<zul> hallyn: what do you think of bug 737027?
<uvirtbot> Launchpad bug 737027 in multipath-tools "kpartx udev rule is broken" [Undecided,New] https://launchpad.net/bugs/737027
<ivoks> urgh...
<ivoks> advice: avoid btrfs at any cost
<ivoks> rather use ntfs :(
<KB1JWQ> Hating upstart at the moment.
<netritious> Does anyone here know anything about SunFire E25K?
<KB1JWQ> Specifically its half implemented brokenness.
<uvirtbot> New bug: #736762 in libvirt (main) "libvirtd assert failure: *** glibc detected *** /usr/sbin/libvirtd: double free or corruption (!prev): 0x094c5688 ***" [Medium,Incomplete] https://launchpad.net/bugs/736762
<KB1JWQ> netritious: Not if you don't ask a real question. :-)
<netritious> KB1JWQ well stated :)
<netritious> I don't know much about them, but in a meeting today I found out there is one in a Memphis TN warehouse just released from probate being sold for $35K
<netritious> I thought if anyone knew anything about that server they might just be hanging out here :)
<KB1JWQ> netritious: And you intend to run *Ubuntu* on this?!
<netritious> uh no
<netritious> although I would try if I could lol
<netritious> I have seen some nice Sun workstations run Ubuntu though.
<KB1JWQ> netritious: #solaris may be a better fit for the question. :-)
<netritious> KB1JWQ: good point...will do thx
<iggi> Does anyone know why before a reboot my virtual host in apache2 worked fine, but after a reboot it gives me a "The Server is taking to long to respond"
<_jws> hi all - quick question regarding user-data: I have an ec2 image I've prepped and rolled out an AMI. The user-data I try passing to my AMI isn't getting run - I assume it's b/c it was already run once. Can I turn it back on? (sorry for the crap terminology)
<KB1JWQ> _jws: user-data in what context?
<_jws> KB1JWQ: sorry - the user-data script I have (eg changing hostname), isn't being run on the AMI I  created from an ubuntu image (which ran the user-data script) the first time (I then futher tweaked the instance and rolled an ami from it)
<KB1JWQ> _jws: The only user-data stuff I have is "running puppet." :-)
<KB1JWQ> Kicking the can to something that isn't EC2 specific tends to lead to better infrastructure down the line.
<_jws> KB1JWQ: yeah, I'd like to get puppet up and running, but can't convince the boss of the extra time to get things running (I know, f'in stupid. i've tried pushing) - maybe I should just impl. pupppet and say the hell with it
<jiboumans> _jws: i'm a bit late to the conversation, but mathiaz has a great blog post on using puppet + cloudinit from the comfort of your shell
<jiboumans> _jws: read oldest -> newest of course: http://ubuntumathiaz.wordpress.com/category/puppet/
<_jws> jiboumans: thanks. this will help
<jiboumans> _jws: our infrastructure is implemented along those lines with several extensions, so feel free to shoot a question here and i'll do my best to answer
<_jws> jiboumans: excellent. thank y'all so much. should have this up and running in no time :)
<jiboumans> _jws: be sure to run the latest versions of puppet though. and you're welcome
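_jws's guess is right: cloud-init runs a user-data script once per instance and records that fact on disk. A hedged sketch, since the exact state paths vary between cloud-init versions, inspect before deleting anything:

```shell
# cloud-init keeps its "already ran" state under /var/lib/cloud;
# clearing it before bundling the new AMI makes the next boot behave
# like a first boot and re-run user-data:
ls /var/lib/cloud
# e.g. (version-dependent, verify first):
#   sudo rm -rf /var/lib/cloud/sem /var/lib/cloud/instance*
```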
<orudie> can someone please help me? I installed vmware tools and at boot I get error: no such disk , I posted this on vmware forums but i'm not getting any responses http://communities.vmware.com/message/1718892#1718892
<guntbert> orudie: this is the support channel for ubuntu server, are you certain that you want to ask here?
<orudie> guntbert, the boys over at #vmware are telling me its not vmware related
<guntbert> orudie: I was just asking - your guest is ubuntu server I guess?
<orudie> guntbert, yup
<orudie> guntbert, it starts after the disk checker, and everything seems to work
<orudie> guntbert, but there is no grub loader during boot, and those errors there
<guntbert> orudie: no grub loader? you mean no grub menu? press left <shift> during boot to get it
<orudie> guntbert, my other VM which is also ubuntu 10.04 shows "grub loader" right after post
<guntbert> orudie: there is (with ubuntu) *always* grub - so please try the left <shift> key
<orudie> guntbert, i tried it, didn't come up
<orudie> guntbert, can you help me check to see if my disk is properly mounted ?
<guntbert> orudie: strange. As far as I remember there is only one way to mount a virtual HD in vmware
<guntbert> orudie: you could boot your system from a live CD (iso) and see if there is anything gone wrong
<shlm> hello
<shlm> there anyone
<shlm> anyone know what is postfix server ?
<oCean> !mta
<ubottu> A Mail Transfer Agent (MTA) is the server software that sends and queues mail. The default MTA (and !MDA) on Ubuntu is !postfix ("exim" is also officially supported). See also !MailServer and !MUA
<orudie> can anyone help me with my problem please ?
<shlm> postfix, is it an MTA ?
<oCean> :)
<oCean> correct
<oCean> !postfix
<shlm> well
<ubottu> postfix is the default !MTA and !MDA on Ubuntu. For help, read https://help.ubuntu.com/community/Postfix and https://help.ubuntu.com/community/PostfixBasicSetupHowto - See also !MailServer
<shlm> i'm wondering... i'm using ubuntu
<shlm> with one IP i've got several vhosts
<shlm> apache vhosts
<shlm> i installed postfix
<shlm> i got 3 domain names: domain1.tld, domain2.tld & domain3.tld
<shlm> working with one ip
<guntbert> orudie: I tried but you didn't respond
<shlm> my question is: with postfix is it possible to create several email IDs tied to the domain names ?
<orudie> guntbert, I really do not see how booting from live CD will help me
<shlm> for example : info@domain1.com, contact@domain2.com, pingme@domain3.com
<shlm> is that possible ?
<oCean> sure
<shlm> all with one IP
<oCean> postfix can handle multiple domains. You have to set up proper MX records for each domain; they can have the same ip (your vps)
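[editor's note: oCean's answer, sketched as postfix config. The domain names are shlm's from the conversation; the local users on the right-hand side are made-up placeholders, and the exact parameters depend on whether the domains share a mailbox namespace.]

```
# /etc/postfix/main.cf (fragment)
virtual_alias_domains = domain1.tld, domain2.tld, domain3.tld
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual -- one line per address
#   info@domain1.tld     user1
#   contact@domain2.tld  user2
#   pingme@domain3.tld   user3
# afterwards: postmap /etc/postfix/virtual && postfix reload
```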
<guntbert> orudie: if the virtual HD is properly mounted you will see it in the live session too, and you can work on configuration problems too
<KB1JWQ> shlm: You probably want to outsource the mail to a provider.  Running a mailserver isn't for the faint of heart.
<guntbert> KB1JWQ: ++1
<oCean> shlm: start here: https://help.ubuntu.com/community/PostfixBasicSetupHowto, though I agree with KB1JWQ that it might not be very easy
<linuxtech> I have been running mail since 1995, and now I use exim4.
<KB1JWQ> Not to mention that the internet doesn't need more badly run mailservers. :-p
<KB1JWQ> linuxtech: I've run that in the past, I tend to prefer postfix.
<KB1JWQ> The exim root exploit late last year / early this year wasn't exactly endearing. :-)
<shlm> I'm ready for the challenges
<linuxtech> Yes, which is why I am packaging the latest exim for my servers.
<linuxtech> Bind is already done, mailman is also on the todo list.
<guntbert> shlm: try/train first in a completely private network - open mail relays are very much frowned upon
<KB1JWQ> linuxtech: I guess I'm not sure why you'd go to that level, but okay. :-)
<shlm> i simply wanna know the basics and the essential things
<shlm> the rest i can manage it myself
<KB1JWQ> #postfix may be a better resource, but you're expected to read.
<shlm> i'm already there but the people there aren't as helpful as here
<guntbert> shlm: if you are beginner with mail servers: don't connect it to the internet!!
<TheInfinity> guntbert: +1
<shlm> as for outsourcing, that's out of the question as i often have problems
<shlm> why ? guntbert
<TheInfinity> shlm: it's very easy to make an open relay -> other ppl send spam over your server
<TheInfinity> shlm: -> my poor little server has more to filter
<guntbert> shlm: you can get yourself into trouble - providers don't like open mail relays either - it can even get expensive (not to mention criminals using it...)
<shlm> so what are you trying to tell me ?
<TheInfinity> shlm: test your server at home
<shlm> do not install postfix and just use gmail as it is free ?
<shlm> how they can send spam over my server ?
<TheInfinity> shlm: at home means nobody can reach it from outside. behind a router or something like that.
<TheInfinity> shlm: they can when you get the config wrong. which happens quite easily as a beginner.
<shlm> without giving access to my cli, how can they use my server for spam ?
<TheInfinity> shlm: bad config options, no smtp auth for example
<guntbert> shlm: they don't  need access to your CLI for that
<TheInfinity> shlm: my first mailserver installation took 2 weeks, every evening. just plan some time until everything works safe and secure :)
<guntbert> shlm: we *are* trying to scare you, yes, but for a very good reason
<TheInfinity> the reason is the spam:nonspam ratio = around 100:1 @ my server *g*
<shlm> i don't really see the utility of using a mail server at home
<shlm> i mean the necessity of using a mail server at home
<guntbert> shlm: then don't run it at all - but if you want to learn then run it at home
<TheInfinity> shlm: you dont need one at home (in my opinion) unless you want to learn / test or unless you have several clients
<shlm> ok let 's talk about cms
<shlm> CMS
<shlm> Content management system
<TheInfinity> shlm: but server security is more than just a good mail server config. for example: ssh with password is baaaad. ssh with pubkey auth is fine. *g*
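[editor's note: TheInfinity's ssh aside, as an sshd_config fragment. This is a common hardening sketch, not anyone's actual config; verify a key login works from a second session before restarting sshd.]

```
# /etc/ssh/sshd_config (fragment)
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
# then: sudo service ssh restart
```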
<shlm> we can see there are lots of cms providing a mail server module
<shlm> which do everything for you
<shlm> without you doing anything
<guntbert> shlm: my absolutely last word in this regard: forget the mail server thing
<shlm> what is your point of view on it ?
<shlm> is that secure to use that one or not ?
<shlm> ok leave it
<shlm> what can raid software do ?
<RoyK> http://www.gnu.org/fun/jokes/c+-.html
<deedubb843> Hello. I am having a problem with "preseed" installation of ubuntu 10.04.2 server. if I use ks=<pathtofileonhttpserver> which contains url --url "<directory of ubuntu 10.04.2 installation media>" it works perfectly. If I use a preseed file and preseed/url=<pathtopreseedfileonhttp> it gets to the Checking Ubuntu archive mirror step and errors out. I believe I need to tell the preseed where to find the installation files but I have been unable to find t
<deedubb843> anyone? I don't understand how it's supposed to know where it's installing from
<deedubb843> I tried setting the local0/repository line, no love
#ubuntu-server 2011-03-18
<ScottK> kirkland: When server people said they wanted an improved boot experience, I don't think it was the colors they were concerned with.
<twb> haha
<kirkland> ScottK: thank you for the constructive comment
<twb> I would like the boot process not to hang if I make an LVM LV read-only and it has a fsck pass defined in /etc/fstab
<ScottK> kirkland: I'm serious.  I'm sure it's lovely and all, but ^^^ is what's important.
<twb> Just as an example
<Saturn2888> hi all. How do I remove sit0 and sit1 adapters? I had them in there a long time ago and haven't ever been able to remove them. I was wondering if anyone else knows. They were Link encap:IPv6-in-IPv4 and should be pretty easy to delete for someone that knows better.
<fxhp> I want to specify the disk image path in vmbuilder
<fxhp> And I cannot find any docs that give examples
<eagles0513875> any webmin users in here out of curiosity
<greppy> eagles0513875: probably not, it's not really supported on ubuntu
<eagles0513875> greppy: then why is it in the repo O_o
<greppy> eagles0513875: which repo?
<greppy> I don't find it in mine
<jmarsden> eagles0513875: rmadison webmin shows nothing at all... i.e. it is not there.
<eagles0513875> ya
<eagles0513875> my apologies
<eagles0513875> now how to uninstall it from source :(
<_ruben> some makefiles offer a 'make uninstall' option
<_ruben> if not, then it'll be a manual process
<_ruben> and if you're lucky, all its files reside under, for instance, /usr/local/webmin/
<eagles0513875> found it
<eagles0513875> the uninstall script was in /etc/webmin
<eagles0513875> another question for you guys
<eagles0513875> sudo: unable to resolve host EagleEyeT <---- i have that name in the /etc/hostname file yet it's still complaining
<eagles0513875> what have i set incorrectly in that file
<greppy> is it in /etc/hosts?
<eagles0513875> let me look
<eagles0513875> greppy: http://pastebin.com/F9aPs6aV
<greppy> then no you don't.
<eagles0513875> wait
<greppy> you need to add EagleEyeT to /etc/hosts
<eagles0513875> problem fixed
<eagles0513875> the hostname was something different before - it was called bmit-internal
<eagles0513875> and i had changed that but didn't change it in hosts
<eagles0513875> thanks greppy :)
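[editor's note: for reference, greppy's fix follows Ubuntu's usual /etc/hosts layout, pairing the name in /etc/hostname with a 127.0.1.1 entry; EagleEyeT is the hostname from the conversation.]

```
# /etc/hosts (fragment)
127.0.0.1   localhost
127.0.1.1   EagleEyeT
```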
<maxillusionist> hi ilea
<tuxinator> hi all
<tuxinator> how do I get snmpd to monitor interface traffic?
<_ruben> what?
<tuxinator> my snmpwalk output: http://pastebin.com/8dgPdH8P
<_ruben> oh, change the public community from paranoid to ehm, the other one
<_ruben> in /etc/snmp/snmpd.conf
<tuxinator> i plan to use cacti later, but first i have to ensure that it gets interface statistics from snmpd
<tuxinator> _ruben: thanks man, looks better :D it's been a long time since i last worked with snmpd :D
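[editor's note: from memory of the stock Debian/Ubuntu snmpd.conf of that era (lines are approximate, treat this as a sketch), _ruben's change swaps which com2sec mapping is active so that snmpwalk can see the IF-MIB interface tables.]

```
# /etc/snmp/snmpd.conf (sketch)
#       sec.name   source    community
#com2sec paranoid  default   public
com2sec  readonly  default   public
# then: sudo service snmpd restart
```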
<GeneralK> So I have an issue where I am unable to mount an NTFS partition on a drive that uses a gpt partition table... It's like the partition doesn't exist outside of parted or whatever tool I use to look at it in... eg. I can't mount it from /dev/sda or /dev/dm-0. Any ideas would be appreciated
<iclebyte_work> anyone noticed any BGP issues with British Telecom?
<DavidB> I have a file sharing problem on our Ubuntu file server, can someone please help me out?  We have 15 stations running Windows XP and Windows 7 and our file server running Ubuntu. I configured 3 shares with full access and restarted Samba, but some of the stations can not access the server
<DavidB> Can someone please help?
<DavidB> Hello?
<DavidB> Is there someone willing to help?
<Aison> maybe you should provide some more information
<DavidB> Like?
<Aison> eg. pastebin your samba config
<DavidB> ok
<Aison> that doesn't mean that I can help, but maybe
<DavidB> #======================= Global Settings =======================  [global] 	workgroup = COTTONTEXDOM ;	netbios name = GrafUBU 	server string = %h server (Server) ;	wins support = no ;   wins server = w.x.y.z 	dns proxy = no ;   name resolve order = lmhosts host wins bcast  #### Debugging/Accounting ####  	log file = /var/log/samba/log.%m 	max log size = 1000 ;	syslog only = no 	syslog = 0 	panic action = /usr/share/samba/panic
<RoyK> !pastebin
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<DavidB> sorry for that
<DavidB> my first time here
<DavidB> http://paste.ubuntu.com/582045/
<RoyK> np
<Aison> can you do
<Aison> testparm -s smb.conf > smb.conf.tested
<Aison> and paste this?
<DavidB> ok
<RoyK> DavidB: can you connect to the server via IP address from the clients that can't see the server - as in \\x.x.x.x ?
<DavidB> @RoyK I can see the shares, access some of them but not others
<DavidB> http://paste.ubuntu.com/582047/
<Aison> quite strange
<DavidB> i also need to specify that the computer that can't browse one share is in a domain
<Aison> well, I never used this security = SHARE option
<DavidB> I need open access to all shares
<DavidB> its an internal server so dont need security higher than local
<DavidB> Does it conflict maybe with the Main Domain server rules?
<DavidB> brb
<Aison> i've got a problem also here: some of my ubuntu servers have got a higher resolution in console than others, why? where can I change that? some framebuffer thingie?
<DavidB> Aison I added a virtual device with a resolution setting on mine http://paste.ubuntu.com/582053/
<DavidB> I dont have a monitor attached to the server
<DavidB> The link is my custom xorg.conf
<Aison> well, here i've got 6 servers with a switchbox connected to one screen
<Aison> I haven't got X here ;)
<DavidB> :)
<Aison> but somehow my consoles have got different resolutions
<DavidB> Maybe you have different resolutions in grub's menu.lst
<Aison> yeah, but on ubuntu 10.10 I can't find the menu.lst
<Aison> it's defined somewhere else
<DavidB> http://lifealgorithms.wordpress.com/2009/11/11/customising-console-and-grub2-resolution/
<DavidB> But be careful, it says it might fail :)
<DavidB> Any ideas on my problem? :P
<zul> morning
<DavidB> morning
<Aison> DavidB, I guess your problem is located at the client (windows) side
<Aison> because everything else looks ok
<DavidB> well I have 2 similar shares on the same server but both are accessible
<DavidB> Aison I found the culprit
<DavidB> I needed a Read Only = no on the last share that was not accessible
<Aison> oh :)
<RoAkSoAx> morning all
<orudie> At boot of 10.04 I get error: disk not found, followed by a disk check telling me that the disk was not properly mounted/unmounted, then the system starts and everything seems normal. Any help on how to fix this?
<patdk-wk> don't yank the power cable out?
<mdeslaur> kirkland: FYI, qemu-kvm isn't installable right now because it FTBFS on i386...are you looking into that?
<kirkland> mdeslaur: yes
<mdeslaur> kirkland: ok, cool....thanks
<kirkland> mdeslaur: jdstrand reported in ubuntu-devel
<compdoc> orudie, type:  'smartctl -a /dev/sda'  (or whatever device your hard drive is)
<mdeslaur> kirkland: ah, cool
<orudie> compdoc, smartctl not recognized
<compdoc> sorry, I thought smartmontools was included with the OS
<orudie> compdoc, i'll install it
<compdoc> you need to read the SMART data on the drive. look for reallocated sectors and pending sector count
<compdoc> see if either are greater than zero
<patdk-wk> and don't do it over usb cable
<RoAkSoAx> kirkland: /win 2
<RoAkSoAx> arrgh
<RoAkSoAx> kirkland: howdy!! please don't forget to review python-ethtool for the new queue when you have the time :)
<kirkland> RoAkSoAx: aha, sure, thanks
<zul> time to do some deweeding again
<compdoc> just say no! to deweed
<iclebyte_work> anyone running puppet?
<zul> RoAkSoAx: where are we with 619712?
<RoAkSoAx> zul: huh?
<RoAkSoAx> oh bug#
<RoAkSoAx> zul: well I was never able to reproduce the "race" as specified in the bug #619712
<uvirtbot> Launchpad bug 619712 in keepalived "keepalived vrrp race condition and fix (versions 1.1.17 and 1.2.0 but perhaps all?)" [High,Won't fix] https://launchpad.net/bugs/619712
<RoAkSoAx> zul: however, it is supposed to be "fixed" upstream
<RoAkSoAx> in the latest release
<zul> RoAkSoAx: can you add the info to the bug and do the appropriate thing please? :)
<RoAkSoAx> zul: sure. We can also try to get the latest into natty, but that will also mean that we'd need to get ipvsadm2.6
<zul> RoAkSoAx: wouldnt it be easier to backport the supposed fix from upstream?
<RoAkSoAx> zul: that too, though the newest releases have better support for IPV6
<zul> RoAkSoAx: any known regressions with it?
<RoAkSoAx> zul: none so far. I'm gonna evaluate whether to patch it or get FFes over this weekend
<RoAkSoAx> and by monday I should have a clearer picture of what to do
<uvirtbot> New bug: #370874 in exim4 (main) "package exim4-base 4.69-5ubuntu2 failed to install/upgrade: short read in buffer_copy (backend dpkg-deb during `./usr/share/doc/exim4-base/spec.txt.gz')" [Undecided,Invalid] https://launchpad.net/bugs/370874
<zul> RoAkSoAx: cool thanks
<bbeck> Does anyone have experience creating their own EC2 AMI of ubuntu server that is preconfigured with certain packages?  I'm trying to build an image for some software that I've written and having trouble figuring out how to get started.
<patdk-wk> hmm, I thought that was pretty simple
<patdk-wk> install ubuntu into ec2
<patdk-wk> install software you want, configure it how you want
<patdk-wk> then back it up to s3
<patdk-wk> publish backup in s3 as ami, done
<bbeck> well that certainly does sound easy.
<patdk-wk> I'm sure there are a few catches :)
<patdk-wk> like first-run scripts that configure the ip and machine name when it's installed from the ami - those would be lost doing it that way
<patdk-wk> but otherwise, yep
<bbeck> why would that matter?  i'm assuming EC2 has a DHCP server to provide IPs
<patdk-wk> I didn't think it used dhcp
<patdk-wk> been awhile since I last used ec2 though
<xuru> what's a good channel to ask some fiber channel/multipath questions on ubuntu?
<patdk-wk> this one?
<xuru> ok, I'm getting a lot of these errors:  sd 3:0:2:5: reservation conflict
<xuru> I have 5 ESX servers hooked up to a Pillar 300
<xuru> and I'm hooking up this backup server to it as well
<xuru> Running ubuntu 10.04 lts
<xuru> everything looks good running multipath -v3 and multipath -ll
<xuru> but the paths seem to go down and then become active again
<xuru> I'm guessing this is because the esx servers are issuing a scsi reserve?
<patdk-wk> not sure :(
<patdk-wk> I haven't attached any linux machines to the fiber esx stores
<patdk-wk> only done that on windows, and haven't bothered to see if it complains
<xuru> yeah, it's kind of weird...   I've been trying to get vmware's vixDiskLib SDK to work over SAN...  been a long road so far
<owen1> i try to change my timezone to utc but can't find it in tzselect. any tips?
<genii-around> owen1: Pick 11 and then put maybe just GST    ?
<owen1> genii-around: `GST' is not a conforming Posix time zone string.
<genii-around> GMT perhaps then. I think it takes the "GST-#" and converts it to the 3 letter codes
<genii-around> GST-0  seems to work
<owen1> genii-around: ok. thanks!
<genii-around> np
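[editor's note: what owen1 was after can also be done without tzselect; "Etc/UTC" / "UTC" are the canonical tzdata names, and the persistent commands below are a sketch for Ubuntu of that era.]

```shell
# One-off: run a single command in UTC
TZ=UTC date
# Persistent (needs root):
# echo "Etc/UTC" | sudo tee /etc/timezone
# sudo dpkg-reconfigure -f noninteractive tzdata
```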
<zul> sorry about the spam
 * RoAkSoAx off to lunch
<Frosh> I have set up some of our Ubuntu servers using KInit, Winbind, and Samba (following a guide i read) with authentication against our Active Directory; this has (seemingly) worked fine for several months, but newly created AD users are unable to login (they get "access denied") - I was wondering if anyone could help me troubleshoot (turn on appropriate logging, debugging, etc..) - if this is the wrong venue, can someone point 
<RoAkSoAx> kirkland: ping
<RoAkSoAx> kirkland: is the setvtrgb /etc/vtrgb supposed to change background colors too?
<kirkland> RoAkSoAx: yo
<kirkland> RoAkSoAx: if i understand your question correctly, yes
<RoAkSoAx> kirkland: cause the only thing I get change in both my laptops is the letters from grey to white
<kirkland> RoAkSoAx: are you on the console, ctrl-alt-f1?
<RoAkSoAx> kirkland: yes
<kirkland> RoAkSoAx: the background color change is not noticeable
<kirkland> RoAkSoAx: it goes from 0,0,0 black to 1,1,1 black
<kirkland> RoAkSoAx: goto http://tldp.org/HOWTO/Bash-Prompt-HOWTO/x329.html
<kirkland> RoAkSoAx: grab the bash script at the end of that page
<kirkland> RoAkSoAx: and run it before and after you do:
<kirkland> sudo setvtrgb vga
<kirkland> sudo setvtrgb /etc/vtrgb
<RoAkSoAx> let's see
<RoAkSoAx> kirkland: i thought it would change the color to aubergine though :)
<kirkland> RoAkSoAx: run debconf
<kirkland> RoAkSoAx: dpkg-reconfigure <something>
<kirkland> RoAkSoAx: or press F9 in byobu
<RoAkSoAx> kirkland: ahh I see it now!! cool!!
<RoAkSoAx> kirkland: thanks ;)
<kirkland> RoAkSoAx: you could, however, easily change your background to aubergine :-)
<kirkland> RoAkSoAx: do this ....
<kirkland> RoAkSoAx: sudo vi /etc/vtrgb
<kirkland> RoAkSoAx: and change the first number in the 3 lines to:
<kirkland> 61
<kirkland> 0
<kirkland> 31
<kirkland> RoAkSoAx: and then sudo start setvtrgb
 * RoAkSoAx tries
<RoAkSoAx> kirkland: much better! I think that should be default :)
<kirkland> RoAkSoAx: the beauty is that you can now configure this easily
<RoAkSoAx> kirkland: indeed!! But personally I was expecting to see it by default :)
<kirkland> RoAkSoAx: heh.  that's up to the design team, not me :-)
<RoAkSoAx> kirkland: as long as we can change it with no hassle it's all good :)
<RoAkSoAx> kirkland: though, idk if this is because of the new color changes, but byobu's color scheme has changed
<RoAkSoAx> kirkland: dude, these new colors (and the console with aubergine) are just amazing!! it's awesome that you've done this!!
<koltroll> RoAkSoAx, url? (just joined but you sound so excited that I got interested)
<RoAkSoAx> koltroll: http://fossplanet.com/f10/call-testing-aubergine-love-server-folks-114677/
<koltroll> cheers
<kirkland> RoAkSoAx: thanks, man;  it's nice to hear something kind from someone every once in a while :-)
<kirkland> RoAkSoAx: b/c there are some gripey people out there!
<RoAkSoAx> kirkland: thanks to you!! :) (and indeed!!)
<RoyK> http://transport.nilu.no/products/fukushima <-- seems California is getting some .jp air soon
<Patrickdk> never did need that state anyways
<david5345> I ran nmap on my ubuntu box. I did the different scans: default, xmas tree, FIN, null and UDP
<david5345> Every time it said "1000 scanned ports are open|filtered". Except on the UDP scan, it caught ntp as being "closed".
<david5345> Are my iptables secure enough ? or is there another test I should run ?
<RoyK> david5345: iptables is secure enough for L3/L4
<david5345> For the sake of this conversation, I am only talking about TCP, UDP and IP
<RoyK> for that, iptables is secure
<david5345> ok, I guess my question was if my rules were strong enough.
<RoyK> pastebin them
<RoyK> !pastebin
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<david5345> sorry ubottu, I am used to msn messenger
<david5345> Do I need to worry about security lower than L3 ?
<RoyK> rarely
<RoyK> unless you're afraid of attacks on the LAN
<RoyK> but then, arptables can do filtering as well
<david5345> It's in a data centre, I assume they will protect against local attacks
<RoyK> for any sysadmin, a pinch of paranoia is good, but don't add too much of it
<david5345> I am beyond a pinch :D
<RoyK> david5345: I'm quite sure you will be sharing the network with others in the same datacentre
<david5345> Yes, but would any of them launch an attack against me ?
<RoyK> david5345: you never know - one can be hacked and try to do so, but if they do, they would probably try on IP, L3, so it shouldn't be much of a problem
<david5345> Do I need to worry about 5, 6, and 7 ? Does 7 include installing secure programs and keeping them patched ?
<RoyK> L2 and down is pretty well secured already, unless there's a bad driver somewhere
<RoyK> L7 is application protocols - they should be fairly secure - but "L8+" (no such thing, but still) is usually the problem - some bad code and suddenly you have an XSS or DoS or even local exploit
<RoyK> HTTP is a good example of a L7 protocol
<david5345> Wonderful!
<david5345> HTTP is only as secure as the backend such as apache or PHP
<RoyK> HTTP is secure, and so is apache, but adding PHP or CGI or something to that (all of which uses HTTP for transport) can open doors
<guntbert> RoyK: L8  = you and me ;-)   so what might L8+ really be?
<RoyK> I'd call php and the gang L8, you and me comes a wee bit further up :)
<RoyK> L>7 is highly subjective :)
<guntbert> indeed :-)
<david5345> apache = L8
<david5345> PHP = L9, MySQL = L10 :D
<RoyK> apache = L5-7
<RoyK> apache doesn't do much >L7
<david5345> I dislike apache, I now use lighttpd. It uses up fewer resources :)
<RoyK> but it can call PHP etc, which is on a higher level
<RoyK> lighty is nice, but Apache works too
<RoyK> I guess that's just a matter of taste
<david5345> I also wrote an apparmor.d profile for my PHP code, I figure that should help me keep things secure
<david5345> Actually, I really like apparmor.d now that I understand it.
<bezao> how do i enable iptables on my vps? i dont have eth0, ifconfig shows me "lo" "venet0" "venet0:0". i tried using venet0 and venet0:0 on iptables, but it doesnt work, it blocks everything! i'm trying this "iptables -N TRUSTED" and "iptables -A TRUSTED -i venet0 -p tcp -m tcp --dport 22 -j ACCEPT" but it blocks me.
<ehw> bezao: a) define "block everything!" do you get an error message? and b) this sounds like an OpenVZ container; have you asked the company you're renting it from about the error?
<bezao> ehw i cant access it from my home. i can only access it by the openvz console at my hosting website to reset iptables, then i can access it again from my home.
<bezao> i enabled port 22 to ssh, but i cant connect, ping timeout/connection refused
<bezao> ehw i have an ubuntu vbox at home where i tested it, but at home i have eth0, and the iptables rules work there! on the vps i dont have eth0, so i searched ifconfig and tested the possibilities and got nothing! :/
<ehw> bezao: OpenVZ uses a different name for the interfaces.  What you might want to check is if your rules ever get to TRUSTED; if you don't have a jump to TRUSTED at the end of e.g. INPUT, then it won't be reached
<bezao> sorry i did not understand.. let me pastebin the file
<bezao> please can you check ehw http://pastebin.com/idinWvT4 ?
<ehw> bezao: also pastebin the output of iptables-save after you've run the script?
<bezao> 1min
<bezao> need to change venet0:0 to venet0
<bezao> it was saying it was wrong
<bezao> ehw http://pastebin.com/K1Mj4Yb6
<bezao> after i run, i cant ping anymore my vps ip, and cant connect to ssh, to i go the vps-url and tell it to disable iptables. then i can use ssh and ping again
<bezao> any ideas ehw ?
<ehw> having a look, bezao
<azizLIGHTS> how do i undo the command "adduser dropbox ubuntu" ?
<bezao> ok
<ehw> bezao: if i just run the script, it works fine in a normal vm (using KVM); you can't use aliases like eth0:0 or venet0:0 for interface names, that's for sure
<bezao> ehw i pasted you before i change, i'm now using venet0
<ehw> bezao: it's been a while since I saw openvz in action; doesn't venet0 have an internal address?
<bezao> ehw venet0 doesnt have an ipv4, venet0:0 has an ipv4
<ehw> bezao: can you put venet+ as the interface name and give it a try?
<bezao> on the iptables file?
<bezao> sure, let me try
<ehw> right, after -i in the iptables commands
<bezao> it works, i have access to ssh but i cant ping, even if i comment the line about ping
<bezao> comment or uncomment, ping doesnt work.
<ehw> bezao: might want to check if /proc/sys/net/ipv4/icmp_ech_ignore_all is set to 1; should be 0 if you want to be able to piing
<bezao> i dont have that file ehw
<bezao> i think you mean ech = echo
<bezao> that i have
<bezao> it's 0.
<ehw> bezao: sorry yes, that was a type
<ehw> typo (typing on ipad is hit-or-miss)
<bezao> hehe, np.
<bezao> i have 0, it should be able to ping?
<ehw> bezao: should do normally...
<bezao> it doesnt, hehe, np ehw iptables works;) thanks dude
<ehw> bezao: glad at least ssh works :-D
<bezao> it's everything that i need :)
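[editor's note: for reference, the sysctl ehw pointed at (with his typo fixed) can be checked as below; bezao's remaining ping failure despite a value of 0 may simply be the provider filtering ICMP upstream of the container.]

```shell
# 0 = kernel answers ICMP echo requests, 1 = it ignores them
cat /proc/sys/net/ipv4/icmp_echo_ignore_all
# to flip it (needs root):
# sudo sysctl -w net.ipv4.icmp_echo_ignore_all=0
```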
<nzfish> hi chan, anyone using openpanel?
<erichammond> Is it possible to create an upstart task that only runs after the file system has been mounted and before mysql starts?  Or would I have to modify the mysql upstart configuration to tell it to wait for my new task to complete?
#ubuntu-server 2011-03-19
<marrusl> erichammond, for the start after filesystem, your custom job could be (for example):  start on local-filesystems.
<erichammond> marrusl: Yep, got that part.  Mainly wondering if I can hold off MySQL until I complete setting up the temporary directory for it.
<marrusl> erichammond, to make sure that mysql waits for it, sure you could change the mysql config but you don't want to do that.  instead create a "wait job" to block mysql until your job is done/started.
<marrusl> something like:  custom-job-wait.conf
<marrusl> erichammond, wait, let me create and pastebin something.  1 min.
<erichammond> marrusl: cool, thanks.
<marrusl> erichammond, http://pastebin.com/fPtv6kaW
<marrusl> erichammond, now if your custom job is not a service that goes to "running" (rather it's a task that runs and is done) you might have to rework that.
<erichammond> marrusl: Thanks, I'll check it out. and yes, it's just a simple task.
<marrusl> By doing "start on starting mysql" as soon as mysql signals that it's *going* to start, it will wait until custom-job-wait is done.  And custom-job-wait will block until custom-job has entered running.
<marrusl> actually it might work just as well, because the wait job terminates on (started custom or stopped custom)
<marrusl> once the wait job dies, mysql is free to continue.
<marrusl> Yeah probably some tweaking, but that gives you the method for blocking another job *without* having to modify it (which of course is a *bad* idea with stock packages, your changes will get clobbered on the next update of mysql).
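[editor's note: marrusl's pastebin is gone, but the shape of the wait job he describes is roughly as below. The job names are placeholders ("custom-job" stands for erichammond's setup task), and the details vary by upstart version; treat this as a sketch, not the actual paste.]

```
# /etc/init/custom-job-wait.conf
# Blocks mysql's "starting" phase until custom-job has run.
start on starting mysql
stop on started custom-job or stopped custom-job
task
script
    # a task triggered by "starting mysql" holds mysql until it exits;
    # the "stop on" stanza above kills it once custom-job has finished
    sleep 3600
end script
```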
<iggi> What's the easiest way to check what program is listening on a specific port?
<erichammond> marrusl: nice.  I think we could simply use this technique to do our task directly in that config (create directory and mount).
<marrusl> erichammond, aha.  great idea.
<marrusl> erichammond, fwiw, nice to "meet" you.  big fan of yours in the ec2ubuntu world.  :)
<erichammond> marrusl: It's cool when I'm able to contribute back a bit to the community whose work I've depended on for so long.
<erichammond> marrusl: Thanks for the help.
<marrusl> erichammond, happy to.  take care.
<jmarsden> iggi: sudo netstat -ntlp | grep :1234     # will show you the process/program listening on TCP port 1234
<iggi> jmarsden: I tried that, the problem is that it just shows "-" for the program
<jmarsden> iggi: That usually means you didn't do the sudo, so you ran netstat as an unprivileged user?
<iggi> Good point /facepalm
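[editor's note: jmarsden's recipe shown with ss from iproute2 instead of netstat; the python listener and port 8080 are just for demonstration. Without sudo the process column is only filled in for sockets you own, which is exactly why iggi saw "-" for root-owned apache.]

```shell
# start a throwaway listener, then ask ss who owns the port
python3 -m http.server 8080 >/dev/null 2>&1 &
srv=$!
sleep 1
ss -ltnp | grep ':8080'
kill "$srv"
```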
<iggi> still doesn't explain why apache is listening on port 80 but doesn't load the page :(
<jmarsden> iggi: So now you read your apache logs under /var/log/apache2 to find out what it did.
<iggi> jmarsden: I had apache set up and running fine, but once I restarted it stopped working and gives "the server is taking too long to respond". Also, the logs don't have anything interesting in them
<jmarsden> iggi: Sounds odd.  Maybe try stopping it and then starting it, instead of restarting??  Do you at least see your attempts to get pages in the log files?
<iggi> jmarsden: I tried that too, no luck
<iggi> jmarsden: If it makes a difference I am running a virtual host, but nothing has changed since then
<jmarsden> I need to go out to eat soon, but test some more... what does sudo /usr/sbin/apache2ctl status  show?  Does it think apache is running?
<iggi> jmarsden: It says all connections are waiting for connection and one is "sending reply"
<edugonch> Hello, I'm installing ubuntu server 10.10 and when I get to the disk config and the iscsi I get this error = No iscsi targets were discovered on xxxxxx:3260, what can I do?
<Tohuw> edugonch: what's the make/model of your iscsi controller? Have you checked its compatibility with Ubuntu Server?
<edugonch> Please, I need help on this
<edugonch> I don't know, let me see
<trimeta> I ssh'ed into my server and ran "aptitude safe-upgrade," but then forgot about it and the ssh session timed out...it was at the "here are the updates you need, do you want them? [Y/n]" question when the timeout happened.
<trimeta> How do I reset apt so it can be used again?
<trimeta> Since no actual updating was happening when the ssh session timed out?
<patdk-lap> reset apt?
<patdk-lap> it should have reset itself
<patdk-lap> unless your server doesn't know the session timed out, and it's still going
<patdk-lap> then you need to kill it
<patdk-lap> and when you run apt, it should tell you what to do, or clean it itself
<trimeta> Yeah, pgrep -l aptitude shows one process still running.
<patdk-lap> turn on ssh keepalive, so it notices from now on :)
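The keepalive patdk-lap suggests is a client-side OpenSSH setting. A minimal sketch (written to a scratch file here so it is easy to inspect; on a real machine the target would be ~/.ssh/config):

```shell
# client-side keepalive: the ssh client sends a probe every 60 s and gives up
# after 3 missed replies, so a dead connection is noticed instead of hanging
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
EOF
cat "$cfg"
```

With this in place the client tears the session down itself, rather than leaving the remote aptitude orphaned behind a silently dead TCP connection.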
<rigved> trimeta: just login again and run the upgrade again... in case it tells you that aptitude is already running, then you
<rigved> need to stop it
<trimeta> I can kill that safely, right? It'll wipe the lockfile properly and everything?
<trimeta> Or do I then need to manually delete /var/lib/dpkg/lock?
<patdk-lap> the next apt run should notice the lock is stale
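The pgrep/kill sequence trimeta used can be sketched like this, demonstrated on a stand-in process (a sleep) since killing a real aptitude here would be unwise:

```shell
# start a placeholder long-running process, standing in for the orphaned aptitude
sleep 300 &
pid=$!
pgrep -f 'sleep 300' >/dev/null && echo "still running"
kill "$pid"                      # plain SIGTERM, so the process can clean up its locks
wait "$pid" 2>/dev/null || true  # reap it; wait returns non-zero after a kill
# after this, the next apt/aptitude run notices the stale /var/lib/dpkg/lock
```

The point of SIGTERM (rather than `kill -9`) is exactly what trimeta was worried about: it gives aptitude a chance to release its lock files cleanly.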
<trimeta> Huh, that did work. Cool.
<trimeta> I was paranoid about killing it, but it did the job.
<trimeta> Especially since this new upgrade is the kernel...I didn't want to mess around.
<patdk-lap> kernel is probably one of the safest things :)
<trimeta> Speaking of which, I should restart this in a screen...anyway, thanks for your help guys.
<trimeta> More than once I've run out of space on my /boot partition and had a hard time making it redo the install correctly.
<trimeta> Or forgotten to mount the /boot partition at all, so it can't find /boot/grub...
<patdk-lap> ya, those are annoying to fix, but no real damage :)
<trimeta> True, I suppose.
<cavefish> Hey people
<cavefish> I've got problems installing sound on Ubuntu Server 10.10 with the latest updates / kernel
<cavefish> I've installed all the Alsa packages and nothing seems to work with the program Cmus
<cavefish> I've got an error saying Error: opening audio device: internal error
<Luchi_Miranda> ?
<lvh> Hello. I'm trying to use virt-manager to access a remote virtual machine over SSH. By default, virt-manager seems intent on logging in as root. How can I use my normal user? (Normal user has the correct permissions for creating and running VMs locally on that machine)
<uvirtbot> New bug: #738209 in autofs (main) "autofs appears to be started before it is truly functional" [Undecided,New] https://launchpad.net/bugs/738209
<uvirtbot> New bug: #738219 in autofs (main) "autofs should always start before at and cron" [Undecided,New] https://launchpad.net/bugs/738219
<neko> Hi
<neko> I'd like to configure a LDC with Samba & LDAP or Mysql... Could anyone tell me what advantages LDAP has over Mysql? I am more comfortable with Mysql and I think that, having personal accounts on it, I could use it for more things, like php-based programs, etc
<neko> I need it for a network with about 50-100 computers
<nigelb> LDAP probably is *the* authentication standard.
<nigelb> You can have all your machines use pam and login via ldap sharing the same /home directory over nfs
<nigelb> so, you can have a 1000 ubuntu computers set up with ldap, and administering them would be very easy.
<neko> nigelb, but it also can be possible  with Mysql, I think
<nigelb> neko: how do you get mysql to talk to ubuntu login?
<neko> nigelb, with Samba is possible
<nigelb> that, I don't know.
<nigelb> I know LDAP is certainly possible.
<nigelb> and that's the reason why its there
<neko> but LDAP seems so much more complex... isn't it?
<nigelb> Yeah, it is :|
<neko> I've never worked with it
<nigelb> I recently set it up, I know its complicated like crazy.
<neko> heheh
<neko> I'm used to working with Mysql, and I imagine that a user account DB in Mysql could be much easier to maintain than LDAP
<sako> hey all, you guys think ubuntu is ready for production webserver?
<Patrickdk> I sure hope so, been using it that way for years now
<sako> i just got aws (ec2) and seems ubuntu is the best option as there are official amis from canonical ;)
<compdoc> heh
<sako> been debating between ubuntu and debian
<compdoc> I use ubuntu desktop as a server - its great
<jkg> sako: just a data point, but all my production web servers run Ubuntu LTS releases.
<sako> ye
<sako> i guess if wikimedia can make ubuntu work, no reason i can't :P
<jkg> not sure I'd want the excitement of 6 monthly upgrades, but LTS is a nice balance for me :-)
<sako> my purposes are a lot less mission critical than theirs :P
<sako> ya LTS would def be what i go with
<sako> hmm looks like ubuntu doesnt have #includedir /etc/sudoers.d by default
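Once `#includedir /etc/sudoers.d` is present in /etc/sudoers, per-user rules go in drop-in files. A sketch (written to a scratch dir for inspection; on a real box the target is /etc/sudoers.d, the `deploy` user and the rule itself are hypothetical, and editing should go through visudo):

```shell
# create a sudoers-style drop-in; /etc/sudoers must contain the line
#   #includedir /etc/sudoers.d
# for files placed there to be read at all
dir=$(mktemp -d)
printf '%s\n' 'deploy ALL=(root) NOPASSWD: /usr/sbin/service apache2 restart' > "$dir/deploy"
chmod 0440 "$dir/deploy"   # sudo refuses drop-ins with looser permissions
cat "$dir/deploy"
```

Keeping rules in sudoers.d files means package upgrades can replace /etc/sudoers without clobbering local policy.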
<uvirtbot> New bug: #738308 in apache2 (main) "Backport proxy connect SSL support" [Undecided,New] https://launchpad.net/bugs/738308
<qwebirc26475> hello
<qwebirc26475> how can I disable user login at boot?
<qwebirc26475> i just want to boot the computer without typing passwords..
<qwebirc26475> -:ALL EXCEPT root:LOCAL ?
<gbsd-gord> qwebirc26475: well as a server, all "server" services will run without a login, I assume you want some normal user to be able to monitor things locally at the console
<qwebirc26475> gbsd-gord: I want ubuntu-server to boot without asking a password
<qwebirc26475> bigbang: except for root of course
<qwebirc26475> and I have setup a SSH server
<gbsd-gord> but it will boot without a password, into a running state, is what I was really trying to say
<qwebirc26475> yes
<qwebirc26475> how can I do that
<gbsd-gord> ah it does it anyways,
<gbsd-gord> what you see is the login:   for the next user that walks by, but even if that user logs in 3 months later, all the time, the services for network users have been running normally
<qwebirc26475> gbsd-gord: but I want to get to the console without login in
<gbsd-gord> hmm ok
<qwebirc26475> yeah :p but i want access the shell too
<gbsd-gord> from a security point of view, I'd say that was very bad, because there would be no accounting of who did what
<gbsd-gord> I do understand what you want though
<qwebirc26475> hm
<qwebirc26475> yeah
<qwebirc26475> maybe I'll leave it
<qwebirc26475> let me test one thing
<gbsd-gord> what I would do, is make a user called "user", and either a blank password, or a password like "user"
<gbsd-gord> but its VITAL that the "user" user is not allowed to be a sudoer
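For the record: on 10.10 the virtual consoles are upstart jobs, so automatic console login for the locked-down `user` account gbsd-gord describes can be sketched by editing /etc/init/tty1.conf (the `-a` autologin flag is agetty's; the account name is hypothetical):

```text
# /etc/init/tty1.conf -- replace the stock line:
#   exec /sbin/getty -8 38400 tty1
# with agetty's autologin option:
exec /sbin/getty -8 -a user 38400 tty1
```

This only affects tty1; the other ttys still prompt for a login, which limits the damage somewhat.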
<qman__> at the very least
<qman__> even more measures should be taken to lock that user down
<qman__> no remote access, for one
<gbsd-gord> that's a huge risk you see, which is why I don't really want you to do that, no way :)
<gbsd-gord> ^ like he says, it's scary
<uvirtbot> gbsd-gord: Error: "like" is not a valid command.
<qwebirc26475> k
<qwebirc26475> done
<qwebirc26475> I just setup the ssh server because I don't have a monitor for that computer hehe
<gbsd-gord> that's the way most servers are run
<qwebirc26475> :
<gbsd-gord> headless
<qwebirc26475> :) tks for the tips
<qwebirc26475> need to go
<qwebirc26475> []
<gbsd-gord> ok, be careful ;)
<gbsd-gord> scary
<qman__> what's even more scary is the millions of users that run that way
<gbsd-gord> yeah, don't depress me haha
<qman__> some people I have to twist arms to use a password at all, much less implement a secure password policy
<gbsd-gord> some peopel are beyond help, which is normally what I put in writing in reports
<gbsd-gord> but ^ I spell it correctly
<qman__> hah
<gbsd-gord> 10 minutes later I still can't think _why_ he wanted to do that, which means it time for a break here
<Aison> hello
<Aison> as I understand, nfs4 shares are not auto mounted at bootup
<Aison> I placed now post-up mount /home || true into the interfaces config files, so that the drives are mounted after the interface is up
<Aison> but somehow this is not working either
<Aison> what do I have to consider?
<Guest86001> ei
<Guest86001> How can I make sshd to start on boot?
<Guest86001> I installed but it's not starting..
<Guest86001> k
<Guest86001> i changed the port it's working again
<Guest86001> weird
<Guest86001> hm
<Guest86001> the problem was the listenaddres
<Xpistos> Is there any reason why I shouldn't be able to boot a 10.04 server that has a 2tb drive in it?
<Xpistos> When I try and go into either BIOS or RaidBIOS I get to see the drive but it doesn't show how big it is or move from where it is
<_ruben> 2tb drives tend to get the installer to use gpt (even tho the actual limit is a bit higher than most 2tb drives) .. and grub doesn't like gpt
<_ruben> nor do a lot of partition tools
<Xpistos> what is gpt?
<_ruben> guid partition table, dos partition tables only go upto 2tb
<_ruben> !gpt
<qwebirc70881> hello again
<qwebirc70881> How do I connect 2 computers on lan
<qwebirc70881> I edited /etc/network/interfaces but I cannot ping the machine
<Devo-Kun> ]\
<dhimes> following guidelines for 10.04 jeos-and-vmbuilder.html with two worries:  apt-get install apt-proxy gives deprecation warning for twisted.protocols.telnet
<dhimes> ... and...
<dhimes> libvirtxml.tmpl looked to have conditional logic not described in the how-to.  (I'm new at KVM- just playing with it )
<Aison> what do I have to install so that post-up, pre-down, is executed in /etc/network/interfaces
<Aison> somehow it's not working here :(
<nonotza> for some reason php on my ubuntu lamp server isn't working. it was working before, and now it isn't. can someone offer tips on troubleshooting?
<nonotza> I can use it from the CLI but php pages being served from apache2 aren't parsed
<dhimes> Did you reinstall apache?
<dhimes> Sounds like the apache config got borked so won't run the php
<dhimes> Perhaps could happen if put in a new apache, but didn't tell it about php?
#ubuntu-server 2011-03-20
<cred> I'm trying to upgrade to the newest LTS and it's hung on mysql spewing a bunch of "audit" errors......................what can I do?
<cred> why is there no /etc/default/grub in 10.04 ???
<cred> how do I turn off the graphics on boot ???
<RoyK> that file is there
<RoyK> google for it
<cred> the file isn't there
<cred> I'm ssh'd into the system now
<cred> there is no /etc/default/grub
<cred> this upgrade leaves the system broken
<cred> mysql won't start at all, my entire company is down
<ScottK> This is probably one of those times when having backups and a redundant server would be nice.
<jmarsden> cred: Did you move your mysql databases to some totally non-standard location?
<jmarsden> cred: My guess is that newer apparmor rules for mysql in the upgraded package are not allowing it to access whatever non-standard place you put your databases... but I'm just guessing at this point.
<cred> jmarsden: you guess correctly
<jmarsden> OK, so the quick fix is put the databases back where they should be, or else configure apparmor to be in warning only mode.
<cred> jmarsden: but I replaced the new rules with the old ones, and still no luck
<cred> jmarsden: I've turned off apparmour and it still won't start
<cred> jmarsden: how do I get it into warning mode only ?
<cn1209> What's the best offsite backup solution. I was trying to use crashplan.com since it allows me to copy from one computer to another (offsite). Any ideas?
<jmarsden> cred: something like   sudo complain mysql
<jmarsden> cred: or maybe sudo complain mysqld    maybe?
<jmarsden> cred: I am not an apparmor expert at all.
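The command jmarsden is reaching for is `aa-complain` from the apparmor-utils package. A sketch (not run here, since it needs root and an installed mysqld profile):

```shell
sudo apt-get install apparmor-utils
sudo aa-complain /usr/sbin/mysqld   # profile logs denials but no longer blocks them
# to go back to enforcement later:
# sudo aa-enforce /usr/sbin/mysqld
```

In complain mode the audit lines still appear in the logs, which makes it possible to see exactly which non-standard path the profile would have denied.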
<cred> that didn't work
<jmarsden> !doesnt work | cred
<ubottu> cred: Doesn't work is a strong statement. Does it sit on the couch all day? Does it want more money? Is it on IRC all the time? Please be specific! Examples of what doesn't work tend to help too.
<GeekyAdam> hi all
<lifeless> can uec do something more like a regular kvm setup - with a persistent img file ?
<lifeless> if so, how does one set it up ?
<uvirtbot> New bug: #738527 in openssh (main) "package openssh-server 1:5.1p1-5ubuntu1 failed to install/upgrade: subprocess post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/738527
<cred> what does this mean: type=1502 audit(1300588611.956:20):  operation="open" pid=1433 parent=1 profile="/usr/sbin/mysqld" requested_mask="r::" denied_mask="r::" fsuid=0 ouid=0 name="/sys/devices/system/cpu/"
<cred> I'm getting this error when I try to start mysql
<cred> I've disabled apparmor, but I'm still getting this crap
<cred> what does this mean?:    type=1502 audit(1300588611.956:20):  operation="open" pid=1433 parent=1 profile="/usr/sbin/mysqld" requested_mask="r::" denied_mask="r::" fsuid=0 ouid=0 name="/sys/devices/system/cpu/"
<pmatulis> cred: how did you disable apparmor?
<cred> pmatulis: I followed the instructions on the wiki, it kept running though so I purged it
<cred> mysql still won't start
<cred> I just removed mysql and re-installed it and now it's hung on the config phase
<pmatulis> cred: how did you disable apparmor?  what wiki?
<cred> pmatulis: the ubuntu wiki
<cred> pmatulis:  update-rc.d -f apparmor remove
<cred> now mysql-server won't install at all
<cred> the config process is hung, I'll have to reboot
<cred> I guess I'll have to save my config file and purge mysql and see if it will install
<cred> ghhhaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!!!!!!!!!!!!!!!1
<cred> it still won't start
<cred> I can't believe this
<cred> how can mysql not install and run?
<pmatulis> cred: i heard to completely disable apparmor you need to pass a kernel option b/c there is something lurking in the initramfs
<pmatulis> cred: 'apparmor=0'
<cred> pmatulis: ok, thanks
<cred> pmatulis: I guess that goes into the non-existant /etc/default/grub file ?
<cred> pmatulis: do you know the command to turn off the graphical boot ????  I have "nomodeset" but it's not working
<pmatulis> cred: get rid of 'splash'
<cred> pmatulis: splash isn't listed
<cred> well, I see to NEVER, NEVER use do-release-upgrade
<cred> this system is hosed
<pmatulis> cred: i would just pass the option during boot and see if it works
<cred> pmatulis: I'm trying it now
<cred> well, nothing is working
<cred> I'm going to install centos, thanks for all your help guys
<dschuett> anyone good with bind9?
<airtonix> depends, not really
<kaushal> Hi
<kaushal> Does Ubuntu 10.04 has asterisk binary packages ?
<kaushal> I mean Ubuntu Lucid Server
<greppy> apt-cache search asterisk should show them.
<kaushal> is it better to go with 10.04 or 10.10 as server ?
<greppy> I go with LTS releases
<greppy> unless I need something specifically from a "normal" release.
<kaushal> ok
<kaushal> greppy: Thanks
<bigbang> hey i followed the mail filtering how-to on help.ubuntu.com for postfix.. adding spamassassin and clamav.. is it normal now for it to take so long to send email out and in ?
<KurtKraut> bigbang, so long means how many minutes?
<bigbang> well, about a whole minute. i try using webmin to do other things and it just slows down the whole pc...
<Egbert9e9> whenever i start screen on a remote ssh, detach, disconnect and later reattach it, it somehow forgets my home directory and opens each window as "user@server:/$" instead of "user@server:~$"
<RoyK> Egbert9e9: screen will open new screens with the current directory you had when starting screen in the first place
<Egbert9e9> RoyK: i start it on ~
<Egbert9e9> something is weird
<uvirtbot> New bug: #738742 in bacula (main) "package bacula-director-mysql (not installed) failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/738742
<airtonix> greppy: having any problems with suspend on your desktops lately ? this is me raging at pm_suspend : http://dpaste.com/521335/
<greppy> airtonix: the only problem I am having is the vga monitor doesn't come back from a suspend or if I power it off for a while, have to reboot.  the dvi monitor comes back fine.
<airtonix> greppy: this guy explains my scenario pretty much verbatim : http://forums.debian.net/viewtopic.php?f=5&t=52581
<airtonix> sshfs might be the cause
<greppy> ah
<airtonix> except it's on both server and desktop that it happens
<edge> i got my server locked in a dependency problem after trying to upgrade. how can i resolve this. http://pastebin.com/hPGW5pm1
<hydraD> hi
<hydraD> anybody from romania here?
<hydraD> thanks
<RoyK> hydraD: why?
<gbsd-gord> RoyK: I assumed he/she needs Romanian language support
<gbsd-gord> hydraD: is that ^ correct?
<wilsotc> I'm looking for advice on an HP proliant server model DL360 G7. Are there known issues with this system? The DL360 G6 *is* officially supported, and I would purchase this instead if there are problems.
<wilsotc> (with the G7)
<jpds> Only one way to find out.
<jpds> Chances are everything will just work.
<wilsotc> I'm fairly confident, just making sure. Also, is the Smart Array P410i controller supported by the updated kernel in 10.10 through the hpsa driver?
<wilsotc> I've seen that the hpsa driver is now included in the latest kernels.
<wilsotc> Anyone using SSD drives?
<wilsotc> This system will include battery backed write cache, and I need lots of IOs, not space so I'm looking at SSD
<Andy80> hi all
<Andy80> I'm running an Ubuntu server on a VPS (a Karmic version). I've currently installed ntpd to keep time updated, but it looks like something is not working, this is what I get when I run "date"  command: Mon Mar 21 16:43:38 CET 2011
<Andy80> the timezone is Europe/Rome based and it should be Mar 20, 21:01
<Andy80> how can I fix this?
<raphink_> Andy80: did you run ntpdate?
<wilsotc> Your clock should be close to correct before NTPD starts, or it won't sync
<wilsotc> just update it (like with ntpdate) and it should work fine
<wilsotc> ntpq -c peers
<Andy80> if I run ntpdate I get: root@wifi:~# ntpdate
<Andy80> 21 Mar 20:36:28 ntpdate[28004]: no servers can be used, exiting
<wilsotc> will show the status of your NTPD process
<Andy80> let me check the other command...
<Andy80> root@wifi:~# ntpq -c peers
<Andy80>      remote           refid      st t when poll reach   delay   offset  jitter
<Andy80> ==============================================================================
<Andy80>  europium.canoni 193.79.237.14    2 u    -   64  377    9.953  -841279 145836.
<Andy80>  bds2.bds-bayern 192.53.103.108   2 u   27   64  377    7.812  -841134 143860.
<Andy80> (sorry for the copy/paste)
<wilsotc> The time should be closer than one hour, or it won't sync
<wilsotc> your offset is too large
<Andy80> wilsotc: ok... so how can I change the offset? how can I fix it? Do I have to configure another ntp server in ntpd.conf?
<wilsotc> you can do that with the date command OR in the BIOS when you turn the machine on
<wilsotc> OR with ntpdate
<Andy80> I cannot access the bios... it's a VPS
<wilsotc> ntpdate server
<wilsotc> hold I will get the exact command,
<Andy80> root@wifi:~# ntpdate ntp.ubuntu.com
<Andy80> 21 Mar 20:43:39 ntpdate[28064]: the NTP socket is in use, exiting
<Andy80> mmm...
<RoyK> ntpdate pool.ntp.org
<Andy80> wait, let me stop ntpd
<wilsotc> try ntpdate ntp-s1.cise.ufl.edu
<Andy80> ok, now it was set correctly:
<RoyK> pool.ntp.org is very reliable and close to the source
<wilsotc> yes
<Andy80> root@wifi:~# ntpdate ntp.ubuntu.com
<Andy80> 20 Mar 21:19:13 ntpdate[28087]: step time server 91.189.94.4 offset -84330.344231 sec
<Andy80> ok, I'll use it then
<wilsotc> now /etc/init.d/ntpd restart
<wilsotc> and you should be golden in a few mins
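Pulling wilsotc's steps together (a sketch; note that on Ubuntu the ntp package's init script is /etc/init.d/ntp, not ntpd):

```shell
sudo /etc/init.d/ntp stop     # ntpd holds the NTP socket, so stop it first
sudo ntpdate pool.ntp.org     # step the clock so the offset is small enough to sync
sudo /etc/init.d/ntp start    # ntpd can now keep the clock disciplined
ntpq -c peers                 # offsets should shrink over the next few polls
```

ntpd refuses to slew a clock that is too far off (as the huge offsets in the peers output above show), which is why the one-shot ntpdate step is needed first.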
<Andy80> let's see.... if I remember well. I had already used ntpdate once.... then I installed ntpd and "someting" messed with time/date again during night....
<Andy80> let's see if it resists
<wilsotc> is this a dual boot system?
<Andy80> this is an Ubuntu server installation on a Virtual Private Server hosting
<Andy80> by the way... it looks like ntpd is messing again
<wilsotc> it is likely changing upon restarting your system
<Andy80> it already set 3-4 minutes more than my local machine
<wilsotc> the hardware clock in your machine is likely a different timezone that your server OS is expecting
<Andy80> server: Sun Mar 20 21:26:19 CET 2011 - local PC: dom 20 mar 2011, 21.23.20, CET
<Andy80> but.... if the system is virtualized, Ubuntu cannot set the real hardware clock...
<wilsotc> OR the VM is resetting the virtual or REAL hardware clock
<wilsotc> in the VM there's a simulated hardware clock that is possibly getting changed
<Andy80> mmm... I'll have to ask to VPS support then...
<wilsotc> that sounds like your VM is hosted on a system with a flaky hardware clock? I've seen cheap crystals so bad that NTPD couldn't run
<Andy80> for now I've stopped ntpd and re-set the time again using: ntpdate pool.ntp.org
<Andy80> let's see in 5-10 minutes if/how changes
<wilsotc> yeah, a cron job every minute will be okay, but you might see lost seconds and weird effects if the clock is drifting that quickly
<Andy80> with ntpd it has been worse: it changed +24 hours in less than half a day :\
<RoyK> Andy80: then ntpd probably gave up - is this on a VM?
<Andy80> RoyK: yes, it's on a VM
<Andy80> wilsotc: uhm... even with ntpd stopped, the clock is getting forward too fast.... I already have 4 minutes difference between my machine and remote server
<wilsotc> A bad physical clock in the host machine is possible
<wilsotc> what does your /var/lib/ntp/ntp.drift
<wilsotc> file show?
<Andy80> root@wifi:~# cat /var/lib/ntp/ntp.drift
<Andy80> 329.743
<wilsotc> It usually shouldn't be much over +-20
<wilsotc> if you look at the 4th column of /var/log/ntpstats/loopstats, does that number fluctuate wildly?
<Andy80> let me check
<RoyK> Andy80: there are some issues with some virtualisation systems that may make the clock drift quite badly
<RoyK> Andy80: it's _not_ an ntp issue, though, it's the emulated BIOS clock that drifts
<Andy80> wilsotc: cat: /var/log/ntpstats/loopstats: No such file or directory
<wilsotc> I suspect the VM also
<wilsotc> you could look here for further info: http://support.ntp.org/bin/view/Support/KnownOsIssues#Section_9.2.2.
<Andy80> wow, lot of info to read... give me some minutes :)
<wilsotc> See if your hosting service uses NTPD or equivalent, then just use the host VMs clock seems to be the answer
<Andy80> I don't know what they use to virtualize :\
<Andy80> I'll ask a friend of mine who works in that service, as soon as I get him online.
<wilsotc> ok
<Andy80> uff :\ the clock is completely fu*** it's already 18 minutes ahead of my local time...
<RoyK> Andy80: what sort of virtualisation system are you on?
<Andy80> RoyK: I don't know....
<gbsd-gord> sounds like H.G.Wells-ware :)
<wilsotc> I've seen a bad hardware clock drift that much
<Andy80> RoyK: it's not something I'm managing directly... it's a service I'm buying from a friend
<RoyK> k
<RoyK> Andy80: not that I think it will help, but can you check if you're running a "server" or "generic" or "virtual" kernel?
<Andy80> yep..
<RoyK> uname -a
<Andy80> root@wifi:~# uname -a
<Andy80> Linux wifi 2.6.31-20-server #58-Ubuntu SMP Fri Mar 12 05:40:05 UTC 2010 x86_64 GNU/Linux
<RoyK> which distro is this?
<RoyK> distro version, I mean
<Andy80> ubuntu 9.10
<RoyK> upgrade to 10.04 for a start
<RoyK> 9.10 is theoretically supported for another month, but 10.04 will be supported until april 2015, and I doubt much bugs will be fixed in 9.10
<gbsd-gord> agreed - 1st priority, then if time problem goes away, you're sorted ;)
<gbsd-gord> once 9.10 "drops off", it will be a little difficult to upgrade smoothly
<RoyK> also, after upgrading to 10.04, install 2.6.35 as your kernel - the default, 2.6.32 IIRC, has some issues that has been fixed in .35
<Andy80> mmm.... I'm running a semi-critical service on this machine.... are we sure everything will be ok?
<RoyK> gbsd-gord: really?
<RoyK> gbsd-gord: I've upgraded rather old machines way after their support went out
<RoyK> Andy80: do you have console access to the machine?
<gbsd-gord> after it drops off the mirrors (at some unknown, unspecified time I think) then a CDrom upgrade is needed
<RoyK> Andy80: usually, an upgrade to a new release is trivial, but then...
<Andy80> RoyK: I've ssh/root access to it
<RoyK> gbsd-gord: I guess that's highly theoretical - it's only a few weeks since I upgraded a 9.04 machine
<gbsd-gord> Royk: that could be at least 7-19 months from now, best not to take too long because it's easy to forget
<gbsd-gord> RoyK: oh yes, not urgent, but a very good idea and might fix the time trouble
<RoyK> Andy80: well, ask whoever set up the system in the first place - an upgrade should be trivial, but if their infrastructure is weird, well, you never know
<Andy80> can you point me to some upgrading information/instructions that I can read? usually on old debian machines I used to manually change the source.list file to reflect the new version, then apt-get dist-upgrade...
<RoyK> Andy80: apt-get update && do-release-upgrade
<RoyK> no reason to change sources.list
<gbsd-gord> http://www.ubuntu.com/desktop/get-ubuntu/upgrade    covers it
<Andy80> ok, let's try....
<gbsd-gord> upgrading over ssh is in theory "more risky" (in case you lose the SSH connection mid-upgrade), but if you write down the extra ssh port it opens for you, you should be pretty safe
<gbsd-gord> I've never had problems, ymmv
<Andy80> ok, let's try
 * Andy80 cross fingers....
<uvirtbot> New bug: #738949 in dovecot (main) "package dovecot-common 1:1.2.9-1ubuntu6.3 failed to install/upgrade:" [Undecided,New] https://launchpad.net/bugs/738949
<Andy80> is the configuration of apache, php, postgresql etc. kept? yes?
<RoyK> Andy80: yes
<Andy80> well
<Andy80> RoyK: upgrade finished! system restarted and running. The hosted service seems to run fine. The problem with date/time remained :\
<Andy80> RoyK: what I don't understand is: why the kernel version is the same?
<Andy80> andy80@wifi:~$ uname -a
<Andy80> Linux wifi 2.6.31-20-server #58-Ubuntu SMP Fri Mar 12 05:40:05 UTC 2010 x86_64 GNU/Linux
<Andy80> and I cannot find /boot/grub/menu.lst nor /boot/grub/grub.cfg
<RoAkSoAx> kirkland: ping
<Daviey> Andy80, It sounds like the kernel is not part of your image... This isn't that uncommon for some virtualisation methods.
<Andy80> btw... I've my friend on Skype now... he says they use xen
<Daviey> it's /usual/ for the kernel not to be part of the DomU... there are hacks that allow it to seem to be inside.
<Daviey> Andy80, So, unless your provider supports pvgrub or similar - you are stuck on the kernel they provide.
<RoAkSoAx> Daviey: o/
<gwynddyllyd> hey. i just installed a minimal natty system and would like to add X to it. i installed xorg but failed to find a package to configure it from the command-line. do i have to fiddle with the xorg.conf file or is there some package (maybe curses based) to do it?
<gwynddyllyd> sorry if this is the wrong place to ask
<gwynddyllyd> the system will function as a small LAN file server
<Daviey> hey RoAkSoAx
<Andy80> RoyK: we've fixed it :)
<Andy80> RoyK: echo xen > /sys/devices/system/clocksource/clocksource0/current_clocksource
<Andy80> yeah ;)
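One caveat with that fix: the echo does not survive a reboot. A hedged way to persist it on that Xen guest (assuming the sysfs path is the same after reboot) is to add the line to /etc/rc.local before the final exit:

```text
# /etc/rc.local (runs as root at the end of boot)
echo xen > /sys/devices/system/clocksource/clocksource0/current_clocksource
exit 0
```

Switching the clocksource to the hypervisor-provided one stops the guest's emulated clock from drifting, which is why ntpd alone could never keep up.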
<Theravadan> i am trying to start a m2.4xlarge in any availability zone using an ubuntu server ami and it goes from pending->terminated, anyone know what i'm doing wrong or recommend any ubuntu ami that might work?
<lifeless> on UEC or EC2 ?
<Theravadan> ec2
<Theravadan> ec2-run-instances ami-548c783d -n 1 -t m2.4xlarge -z us-east-1c -b /dev/sdb=:500:false -k 'afkeypair' is the cmd
<zul> http://uec-images.ubuntu.com and the get-console command should be able to tell you what is happening hopefully
#ubuntu-server 2012-03-12
<Super_Dog> Dinking around with AjaxPlorer.  Install instructions say "Make sure that the « data » sub-folder is writeable by the webserver."
<Super_Dog> Anybody have recommendations on the proper "sudo chmod -R" command for that?  I'm crappy with permissions...
<patdk-lap> would be more than that
<patdk-lap> unless you make it writable by all
<Super_Dog> Writable by all would be a little bad wouldn't it.... :-(
<patdk-lap> depends on many things, but generally, yes
<Super_Dog> Correct me if I'm wrong, but can you assign 755 to just the "www-data" user....  Wouldn't that do the trick in Ubuntu?
<patdk-lap> yes, and kill whoever needed to edit/modify/... it
<patdk-lap> plus means the web user has more permissions than probably needed, like making changes to that program itself
<patdk-lap> normally you just set the group to www-data
<patdk-lap> and give group write access to what is needed
<airtonix> Super_Dog: i usually add my user to the www-data group and set the permissions to 775
<Super_Dog> Do you mean 775 or 755?
<airtonix> then chown www-data:www-data /var/www -R
<airtonix> Super_Dog: i mean 775
<patdk-lap> that is way overkill
<airtonix> how is it overkill?
<patdk-lap> just like I said above
<patdk-lap> the webuser can EDIT and MODIFY anything in the program
<airtonix> and yourself?
<airtonix> nothing?
<patdk-lap> leave user alone, why change it?
<airtonix> because it needs to actually do stuff?
<patdk-lap> heh?
<Super_Dog> Am trying to just install AjaxPlorer....
<patdk-lap> since when does the user have anything to do with that?
<airtonix> it's pretty awesome having a /var/www that you can't even setup stuff in
<Super_Dog> Am a moderately experienced Ubuntu desktop user and relatively new to the Ubuntu Server platform
<airtonix> patdk-lap: oh i don't know, perhaps when you need to : git clone repourl:project-name ./project
<patdk-lap> hmm?
<airtonix> but hey, maybe the "webuser" magically does this with ESP
<patdk-lap> why am I git cloning a website?
<airtonix> if you need to ask that question, then i am at a loss for words
<patdk-lap> I wouldn't do it into /var/www
<Super_Dog> Guys... I didn't understand a single thing you are talking about... :-)
<patdk-lap> I keep the websites owned by a user
<patdk-lap> and let the webserver access it
<patdk-lap> no reason for the webserver to own it
<airtonix> patdk-lap: and then the www-data user can't write... more awesome
<patdk-lap> why can't it write?
<airtonix> because it doesn't own it?
<airtonix> because you left it at 755
<patdk-lap> as I said, that is what I use group permissions for
<patdk-lap> who said I left it at 755?
<patdk-lap> did you not read ANYTHING I said?
<patdk-lap> <patdk-lap> yes, and kill whoever needed to edit/modify/... it
<patdk-lap>  plus means the web user has more permissions than probably needed, like making changes to that program itself
<patdk-lap>  normally you just set the group to www-data
<patdk-lap>  and give group write access to what is needed
<Super_Dog> So let's say I'm a normal human being that doesn't know Ubuntu from his elbow....  When they tell me in the install manual to, "Make sure that the Â« data Â» sub-folder is writeable by the webserver" what do I do when it appears I have successfully installed the LAMP stack in Ubuntu?
<Super_Dog> I have a www-data group it appears....
<Super_Dog> I presume that is the user that has privileges in the /var/www directory....
<airtonix> apparently you only let "A Group" write to "A Folder", then when your application fails because it can't write you come back here
<Super_Dog> Airtonix:  You said "I usually add my user to the www-data group and set the permissions to 775"...  Pray what command do you issue to make that happen?
<airtonix> Super_Dog: ls -al /var/www will show you what user and group have ownership
<airtonix> Super_Dog: you add users to groups with : sudo adduser USERNAME GROUPNAME
<airtonix> Super_Dog: you set permission bits with : sudo chmod 775 PATHNAME [ -R ] << where -R is optionally recursive
<Super_Dog> ok  so if I have a user joseph I could issue:  sudo adduser joseph www-data       ?
<airtonix> Super_Dog: apparently you should be making a group called "AwesomeDevelopersGroupThatHaveWriteAccessToThisOneFolder"
<airtonix> and adding yourself to that group, then change ownership of your virtualhost folder under /var/www to be owned by that group with : sudo chown :AwesomeDevelopersGroupThatHaveWriteAccessToThisOneFolder /var/www/VIRTUALHOST_FOLDER -R
<Super_Dog> It says that user 'joseph' is already a member of `www-data'.
<Super_Dog> sudo chmod 775 /var/www/ajaxplorer/data -R    - Does this look right?
<Super_Dog> I've added the users who should have access to this to the "www-data" group it appears....
<Super_Dog> Guys....  Only works when I do the following:
<Super_Dog> sudo chmod 777 /var/www/ajaxplorer/data -R
<Super_Dog> I know that's probably bad.... (Sorry, permissions have been the bane of my existence and I am an admitted retarded idiot when it comes to Ubuntu Server permissions.)
<Super_Dog> Any ideas?
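The group-write scheme airtonix described can be sketched end to end. The chown/adduser steps need root on a real server, but the permission bits themselves can be verified without root in a scratch directory; the 2775 setgid variant shown here is an illustration, not something from the chat:

```shell
# On a real server you would first run (as root):
#   adduser joseph www-data
#   chown -R :www-data /var/www/ajaxplorer/data
# The mode bits can be demonstrated without root:
tmp=$(mktemp -d)
mkdir "$tmp/data"
chmod 2775 "$tmp/data"    # 775 plus setgid: group keeps write access,
                          # and new files inherit the directory's group
stat -c '%a' "$tmp/data"  # prints 2775
rm -rf "$tmp"
```

With the setgid bit set, files the webserver creates stay group-owned, so 777 should never be needed.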
<taipres> there any issues
<taipres> with latest ubuntu server edition(went 32bit to save memory)
<airtonix> save memory?
<taipres> yeah
<taipres> 64bit version naturally uses more
<taipres> I choose ubuntu over centos because ubuntu just easier for me to compile stuff
<taipres> but I need to be kind on memory usage
<patdk-lap> hmm, 10.04 32bit installs with only 23megs ram usage
<taipres> sudo apt-get install lighttpd php5-cgi
<taipres> says it can't find either package, any ideas?
<qman__> have you enabled universe?
<qman__> I don't think those are in main, though I could be wrong
<taipres> qman no
<taipres> how do I do that
<qman__> sudo vi /etc/apt/sources.list, uncomment the lines in the universe section
<qman__> sudo apt-get update
<taipres> this isn't gonna install a bunch of intense apps right? my only constraint is memory
<taipres> 256MB
<qman__> no, it will only allow you to install more software
<qman__> it doesn't install more software by itself
<taipres> thanks
<patdk-lap> you won't be doing large downloads from php will you?
<qman__> it will use slightly more disk space to cache the available packages
<qman__> but we're talking kilobytes
<taipres> don't see any lines commented
<taipres> I see
<taipres> deb http://archive.ubuntu.com/ubuntu natty main restricted universe
<taipres> deb http://archive.ubuntu.com/ubuntu natty-updates main restricted universe
<taipres> deb http://archive.ubuntu.com/ubuntu natty-security main restricted universe
<taipres> last one is deb http://archive.canonical.com/ubuntu natty partner
<taipres> patdk and no, no php big downloads
<taipres> although my VPS is on a 1Gbps hehe, and cpu is good
<taipres> php probably uses lot of mem for big downloads
<patdk-lap> no, lighttpd does
<patdk-lap> lighttpd buffers anything external into ram
<taipres> really? I thought that was the point
<taipres> low memory consumption
<patdk-lap> so if you send a large file via php/proxy/...
<patdk-lap> no
<patdk-lap> lighttpd isn't about low memory consumption
<patdk-lap> it's about being fast
<taipres> "With a small memory footprint compared to other web-servers, effective management of the cpu-load, and advanced feature set (FastCGI, SCGI, Auth, Output-Compression, URL-Rewriting and many more)"
<patdk-lap> it normally uses small memory
<patdk-lap> but it buffers first
<taipres> herm, well do you recommend a different web server then?
<patdk-lap> if you send a 1gig file from php, lighttpd will allocate 1gig of ram to hold it
<taipres> if I go over 256 it goes into vswap
<taipres> have another 256 there too
<patdk-lap> well, normally you don't have to worry about it, don't send large files, or if you must, use x-sendfile
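The x-sendfile suggestion maps to a mod_fastcgi option in the lighttpd 1.4 series of that era; a hedged sketch, where the socket path and PHP binary location are typical defaults rather than anything from the chat:

```
# lighttpd.conf fragment: allow PHP to hand files back to lighttpd
# via the X-Sendfile header instead of streaming them through the
# FastCGI buffer (which is what eats RAM on large downloads)
fastcgi.server = ( ".php" => ((
    "bin-path"          => "/usr/bin/php-cgi",
    "socket"            => "/var/run/lighttpd/php.socket",
    "allow-x-send-file" => "enable"
)))
```

The PHP script then emits a `header('X-Sendfile: /path/to/big/file');` and exits instead of echoing the file contents itself.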
<qman__> FYI, I run apache on real hardware with 256MB
<qman__> along with plenty of other stuff
<qman__> without any issues
<taipres> apache dies easy though
<taipres> is the reason i'm using alternative web server
 * patdk-lap has never seen apache die
<qman__> same
<taipres> http://www.kalzumeus.com/2010/06/19/running-apache-on-a-memory-constrained-vps/
<patdk-lap> most likely you use bad apache modules
<taipres> this guy has
<taipres> claimed it was lot of traffic and apache's keep-alive default time
<qman__> that's all adjustable
<patdk-lap> apache died for him cause of bad config
<qman__> and more likely a problem with the VPS software
<patdk-lap> and he was hitting the oom killer
<patdk-lap> not apaches fault
<qman__> not saying don't try other things, just that apache is not the culprit here
<taipres> so you think apache is better at mem than lighttpd?
<patdk-lap> I never said that
<taipres> i've never really heard apache described as anything besides bloated and complicated config
<patdk-lap> I use lighttpd, and love it, and it is low mem for me
<taipres> I use it with XAMPP on my regular server but it's pre-configured, so
<patdk-lap> but it does BUFFER first, then attempt to stay low mem
<taipres> ok
<patdk-lap> so it is possible for lighttpd to use gigabytes of ram, easily, if you are stupid enough to let it
<qman__> configuring apache is not very complicated
<taipres> so any ideas why I can't get this download to work? when config file seems ok?
<qman__> it's bloated compared to minimalist servers, but not compared to other featureful servers
<taipres> i'm gonna google regardless but if you know i'd rather not waste hours
<qman__> have you done apt-get update?
<taipres> doing that right now
<patdk-lap> heh, you always update, before attempting to install
<patdk-lap> package versions change often
<taipres> ;D
<qman__> for example, IME, apache blows IIS7 out of the water in terms of performance
<qman__> it's all relative
<taipres> is IIS supposed to be good?
<qman__> if you ask microsoft it is
<taipres> I just remember in the 90's all the flood of exploits for it
<patdk-lap> heh for iis7?
<taipres> never read about performance
<taipres> nah older iis i'm sure
<patdk-lap> iis5 wasn't nice
<patdk-lap> iis6 is ok
<patdk-lap> iis7 has been good so far
<qman__> yeah, iis5 was the disaster area
<taipres> the update fixed it, thanks
<qman__> they learned the lesson for the most part and have tried to be secure by default
<taipres> nice :D
<taipres> I wish windows were more linux-like, with everything only a few commands away
 * patdk-lap still puts apache + mod_security infront of iis
<taipres> that's only thing i really like about linux
<patdk-lap> start using powershell? :)
<qman__> in my opinion, powershell tried to fix that, but failed miserably
<qman__> it's so obtuse and awkward, it just doesn't work
<patdk-lap> ya, I made some mixed powershell + vbs scripts, that work good though
<taipres> that's unfortunate
<taipres> I love me some VBS scripts
<patdk-lap> confusing as hell, that you talk both languages in one file, but you can't mix them
<taipres> COM rocks
<qman__> powershell is useful but it's not smooth or cohesive
<qman__> nothing like bash
<patdk-lap> man, I made a private key, and I can't locate it anywhere
<taipres> microsofts one framework is really nice too, the one to get all the info
<qman__> also, you can cause a BSOD by running ps | kill
<taipres> can't think of the name, wrote enough code to use it, you'd think i'd know
<taipres> vbs can access it too
<taipres> oh WMI
<taipres> can even control CPU speed with it, with some computers
<taipres> comes on all windows that I know of
<taipres> or CPU fan speed rather
<taipres> linux memory management also threw me for a loop when I spent hours trying to figure out why I couldn't free memory allocated with a pthread stack
<taipres> like I could on windows, turns out linux sees memory differently, if it's not used it's considered wasted
<taipres> but then again even at max mem swap was never touched, so I guess it's really just cached mem, not the actual stats
<qman__> yeah, you have to count less buffers and cache
<taipres> http://www.slideshare.net/haish/linux-memory-consumption good overview
<qman__> the kernel is very good at utilizing free memory to speed up the system
<taipres> qman makes sense
<taipres> just makes me uncomfortable when I see 90% used
<taipres> but i'll get used to it
<taipres> if it bothers me that much may just switch from pthreads to forks, because when the processes are killed they free
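The scary 90% figure usually comes from reading `free` without subtracting cache; a quick check (column names vary with the procps version):

```shell
# "used" in the first line of free includes page cache, which the
# kernel hands back the moment applications need it; the -/+
# buffers/cache line (older procps) or the "available" column
# (newer procps) is the real headroom
free -m
```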
<linocisco> who is using vbox with ubuntu server?
<linocisco> I can't mount vbox guest addition cdrom on ubuntu guest server
<patdk-lap> no one?
<linocisco> no one
<qman__> that's going to be a question for virtualbox people
<qman__> unless they included a deb for it, it's probably not going to work
<linocisco> i was asking there
<linocisco> no one is responsive after a while.
<taipres> they're a pretty active bunch; at times they may just not know
<taipres> you try googling?
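Two routes worth trying for the guest additions, sketched from memory rather than from anything confirmed in the chat (the device node and the package name vary by release):

```shell
# Option 1: mount the ISO by hand after choosing "Devices > Install
# Guest Additions" in the VirtualBox window (needs root)
sudo mount /dev/cdrom /mnt
sudo /mnt/VBoxLinuxAdditions.run

# Option 2: skip the ISO and use the packaged guest utilities
sudo apt-get install virtualbox-guest-utils
```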
<taipres> got lighttpd and php going well, fastcgi enabled too
<taipres> ubuntu rocks, now need to go after mysql, heard it's a memory pig
<linocisco> how to clear all iptables entries?
<patdk-lap> mysql is hardly a memory hog
<taipres> patdk-lap that's good then :)
<linocisco> how to clear all iptables entries? I know how to list them with "iptables -L" but I don't know how to clear.
<linocisco> hi all , mine is similar to this "http://askubuntu.com/questions/58817/squid-proxy-not-working-using-ubuntu-server-10-04-and-two-nics-one-is-connecte". but no answer yet
<SpamapS> linocisco: iptables -F
<SpamapS> linocisco: note that this only flushes the *filter* table.. there are others... 'man iptables' will explain them
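Flushing the other tables SpamapS mentions can be sketched as follows (needs root; best run from the console rather than over ssh, in case a policy change cuts you off):

```shell
for t in filter nat mangle; do
    sudo iptables -t "$t" -F   # flush all rules in this table
    sudo iptables -t "$t" -X   # delete user-defined chains
done
# reset default policies so an empty ruleset doesn't lock you out
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
```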
<linocisco> SpamapS, thanks. I would try. btw, could you also look at http://askubuntu.com/questions/58817/squid-proxy-not-working-using-ubuntu-server-10-04-and-two-nics-one-is-connecte
<linocisco> SpamapS, i dont know why mine is not successful like that
<SpamapS> linocisco: I think I have an answer for that one.
<airtonix> why does /etc/resolv.conf have to be a symlink?
<SpamapS> airtonix: it doesn't!
<airtonix> SpamapS: apparently it does!
<SpamapS> airtonix: but, without resolvconf, you won't adapt to network changes properly
<airtonix> SpamapS: who said anything about not having a resolvconf?
<SpamapS> airtonix: the package resolvconf
<airtonix> 1. install bind 2. rm /etc/resolv.conf 3. echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf 4. sudo service bind restart 5. receive error 6. slow clap
<airtonix> so again...
<SpamapS> airtonix: apt-get remove resolvconf
<airtonix> why would i do that on a server ? i didn't install it
<SpamapS> its part of ubuntu-minimal
<airtonix> shouldn't ubuntu server require the software it's installed with by default?
<airtonix> since you know... that's the image i installed.. ubuntu-server
<SpamapS> but its only Priority: important , so you can remove it
<SpamapS> airtonix: no, a lot of that stuff is there as a convenience.
<airtonix> could have fooled me
<SpamapS> airtonix: resolvconf is actually quite handy for dealing with a dynamic environment. And for static environments, it should leave your stuff alone.
<airtonix> i'm on ubuntu-server... i install bind.. i expect to not have problems like this
<SpamapS> airtonix: I can see that you've been inconvenienced by this, and you have an idea of how you'd like things to work. Now is a great time to file a bug report. Perhaps against 'ubuntu-minimal' with 'apport-bug ubuntu-minimal'
<airtonix> so here is what will happen. i'll take your word for it that resolvconf isn't required by anything else, i'll remove it and my server will die.
<Super_Dog> Anybody run across this before?  http://pastebin.com/LqvtaJQK
<airtonix> large flames will consume it
<Super_Dog> set locale problem?
<SpamapS> airtonix: if your server dies because you remove resolvconf, then something did not declare the dependency properly and that is most *definitely* a bug.
<SpamapS> airtonix: btw, if you read 'man resolvconf' it explains how to make sure your static interfaces push their configs into resolv.conf
 * SpamapS notices that 'man interfaces' doesn't explain this, nor does 'man resolvconf' .. filing bugs
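For the static case SpamapS mentions, the resolvconf hooks read dns-* options out of /etc/network/interfaces; a sketch, with all addresses as examples only:

```
# /etc/network/interfaces fragment: with the resolvconf package
# installed, the dns-* lines below are pushed into /etc/resolv.conf
# whenever eth0 comes up
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
    dns-search example.com
```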
<jayjay> hey
<jayjay> is anyone online :)
<SpamapS> airtonix: bug was already reported almost a month ago btw: bug #934237
<uvirtbot> Launchpad bug 934237 in ifupdown "Manual pages should mention dns-nameserver and dns-search" [Undecided,Invalid] https://launchpad.net/bugs/934237
<jayjay> cool
<SpamapS> airtonix: did your server immolate itself yet?
<jayjay> hey I'm having some issues transferring my ssh key
<jayjay> i keep getting a connection timed out error
<airtonix> yes it's currently a hazard zone now
<jayjay> any tips? I'm kinda scared to keep ssh running right now :S
<jayjay> there have been 4 attempts to hack it with multiple ip's today
<SpamapS> jayjay: sorry, I don't understand the question. How are you trying to "transfer your ssh key" ?
<jayjay> Sorry i was not clear. I am trying to transfer my ssh key from my ubuntu 10.04 server onto my laptop with this command ssh-copy-id <username>@<host>
<jayjay> i keep getting this error ssh: connect to host XXX.XXX.XX.XX port 22: Connection timed out
<qman__> then you either have the wrong address or it's firewalled
<jayjay> its firewalled
<jayjay> i have shorewall installed
<jayjay> but i used the shorewall stop command
<jayjay> do i need to remove shorewall to be able to do this?
<qman__> not likely
<qman__> you need to add an exception
<jayjay> i can connect to ssh via my laptop
<jayjay> i just can't transfer the key
<jayjay> i added an exception for ssh
<jayjay> SSH/ACCEPT	net		$FW
<qman__> wait, are you trying to connect to SSH on the server, or on the laptop?
<qman__> because if you're trying to connect to the server from the laptop, you need to also transfer the key to the server from the laptop
<qman__> the client's key needs to be placed on the ssh server
<jayjay> oh?
<jayjay> the laptop is the client
<jayjay> the desktop is the server
<qman__> then you need to run ssh-copy-id on the laptop
<jayjay> yes i just ran that command from the laptop as ssh-copy-id SERVER IP
<jayjay> it worked this time =D
<jayjay> test
<qman__> yeah, it always goes that way
<jayjay> ok after the command, it asked me to enter my password
<qman__> I was unsure because all my linux boxes run sshd
<jayjay> can i disable the password in the ssh config file and still be able to connect to the ssh server?
<qman__> the way ssh-copy-id works, it logs in over SSH to copy the key
<qman__> after that, you should be able to log in without entering an SSH password
<qman__> though you may have to enter a key password, depending on how you created the key
<jayjay> no i did not
<jayjay> when i made the key i left the password option blank
<qman__> ok, in addition to that, your client has to actually look for and use the key
<qman__> I'm guessing since ssh-copy-id worked, it'll use it
<qman__> default is ~/.ssh/id_rsa
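What ssh-copy-id automates is small enough to show in full; a sketch using a throwaway key in a temp directory, with the file names being the defaults qman__ mentions:

```shell
tmp=$(mktemp -d)
# client side: a passphrase-less RSA key pair (-N '' = empty passphrase)
ssh-keygen -q -t rsa -N '' -f "$tmp/id_rsa"
# server side: ssh-copy-id logs in once with your password and
# effectively appends the public key like this
mkdir -p "$tmp/.ssh" && chmod 700 "$tmp/.ssh"
cat "$tmp/id_rsa.pub" >> "$tmp/.ssh/authorized_keys"
chmod 600 "$tmp/.ssh/authorized_keys"
grep -c 'ssh-rsa' "$tmp/.ssh/authorized_keys"   # prints 1
rm -rf "$tmp"
```

After that, `ssh user@server` picks up ~/.ssh/id_rsa automatically and no password prompt appears.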
<jayjay> Uhgggg now my laptop will not connect to it at all :I
<jayjay> :(
<jayjay> i give up at this ssh key thing
<jayjay> LOL wtf i just started my firewall back up and now its letting me connect to it but still requires a password
<airtonix> jayjay: if you want to auto login with pub/priv key from laptop to server then you send the public version of your key set on the laptop to the servers ~/.sssh/authorized_keys (or just paste it in there and chmod it and the directory ~/.ssh/ 700)
<airtonix> *i mean ~/.ssh
<airtonix> jayjay: another unfortunate and undocumented side effect of using pub/priv keys is that when you have more than 7, the ssh-agent on the machine you are sitting at will only check if the first 7 (you never know which 7)  are valid for logging into the remote machine.
<airtonix> more than 7 local ssh keys
<airtonix> jayjay: even setting a shortcut in ~/.ssh/config on your local machine will not circumvent this. you unfortunately have to: SSH_AGENT= ssh user@remote -i ~/.ssh/id_rsa
<airtonix> which is to mean that you have to set SSH_AGENT= (to nothing)
<airtonix> at least i think it's SSH_AGENT
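The "first 7 keys" behaviour airtonix describes comes from the agent offering every key it holds until the server's MaxAuthTries limit (6 by default in OpenSSH) runs out; the usual fix is per-host client config rather than unsetting an agent variable. A sketch, with the host name and address made up:

```
# ~/.ssh/config fragment: IdentitiesOnly makes ssh offer only the
# key named here instead of everything ssh-agent holds, so the
# server's MaxAuthTries limit is never exhausted
Host myserver
    HostName 203.0.113.10
    User jayjay
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes
```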
<taipres> finally got phpmyadmin working, lighttpd needed alias edit
<taipres> it's a hideous version though
<taipres> ugh
<taipres> i'm hoping ubuntu repository has the latest
<taipres> uh oh, says it is the latest
<taipres> it's not.
<taipres> i'll try main ubuntu channel
<taipres> isn't the bleeding edge version of ubuntu, so I have to install phpmyadmin from source
<taipres> have a good one gentlemen
<linocisco> squid proxy with two NIC (one for WAN and one for LAN) possible?
<linocisco> squid proxy with two NIC (one for WAN and one for LAN) possible?
<linocisco> squid proxy with two NIC (one for WAN and one for LAN) possible?
<kraut> there is no reason for repeating yourself
<kraut> and yes, why not? if the connectivity is fine, squid would be fine.
<RoyK> linocisco: I don't see why not...
<linocisco> i never had internet through squid on vbox
<linocisco> that is why now I am testing zentyal
<_ruben> 2+ legged squids are much more common than single legged squids i'd say
<linocisco> RoyK, I made iptables and i configured squid with just http_access 3128
<linocisco> _ruben, I've long waited to see internet through squid. never worked
<linocisco> _ruben, but all my testing was on Vbox
<RoyK> linocisco: it should work - if it doesn't, you've probably made an error somewhere. don't start messing around with iptables until you know everything else is working
<linocisco> RoyK, to allow internal network internat access, i need iptables NAT rule to open ports, right? and then ipv4 forward=1
<_ruben> you're combining all kinds of stuff that are not directly related .. how do you want to use squid? as a manually configured proxy in clients, or as a transparent proxy, or ..?
<linocisco> _ruben, first of all, i want manually
<_ruben> in case, iptables won't be needed at all
<_ruben> in that case*
<lynxman> morning o/
<io> lynxman: hi!
<lynxman> io: ello :)
<linocisco> _ruben, how could I route between two NIC cards, the WAN and LAN card?
<_ruben> squid's a proxy, not a router
<_ruben> a proxy works on a higher (osi layer) level than a router, basically
<_ruben> eg: http://tldp.org/HOWTO/IP-Masquerade-HOWTO/what-is-masq.html
<linocisco> I want to know exactly how to handle squid with two LAN cards(one is WAN and one is LAN). that is so clear that all LAN clients should get internet from that proxy
<_ruben> the number of network cards really has zero influence on how squid works and/or needs to be configured
<_ruben> the squid box itself has properly working internet access i assume?
<jibel> jamespage, SpamapS  do you know why lamp_reboot post-install test keeps failing  ? It seems that php code is not interpreted. Is it a bug in the test or real issue ?
<jamespage> jibel: not sure - I'll try to take a look today
<rye_> Hello, I am curious about the status of bug #935585 - I am experiencing this issue, the bug report has a branch attached but no status changes, is it being tracked for release?
<uvirtbot> Launchpad bug 935585 in upstart "[kernel panic] init: log.c:786: Assertion failed in log_clear_unflushed: log->remote_closed" [High,Confirmed] https://launchpad.net/bugs/935585
<jamespage> lynxman, reviewing the rabbitmq-server debdiff (again)
<jamespage> morning Daviey
<lynxman> jamespage: cool beans :)
<koolhead17> hello all
<RoyK> linocisco: yes
<linocisco> yes for what? sorry i cleared previous msgs
<RoyK> 09:15 < linocisco> RoyK, to allow internal network internat access, i need iptables NAT rule to open ports, right? and then ipv4  forward=1
<linocisco> RoyK, yes. so i need iptables anyway
<RoyK> linocisco: adding that masq rule won't hurt
<linocisco> is there any precise guide?
<RoyK> and you'll need another to do transparent proxying…
<RoyK> guide to what?
<linocisco> guide to (squid with two NICs(wan+LAN))
<RoyK> google ip masquerading
<RoyK> it's rather simple, really
<linocisco> it is easier said than done.
<RoyK> linocisco: yes, most things are, but google it, please, it really is simple
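The masquerading RoyK points to boils down to two lines, plus an optional redirect for transparent proxying (needs root; eth0 = WAN and eth1 = LAN are assumptions about the box, not facts from the chat):

```shell
sudo sysctl -w net.ipv4.ip_forward=1            # let the box forward at all
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# only needed for *transparent* proxying; clients configured to use
# the proxy manually talk to port 3128 directly and skip this rule
sudo iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
     -j REDIRECT --to-port 3128
```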
<linocisco> RoyK, yes. I can only blame myself as I really could not get it, and it's making me feel useless
 * RoyK goes to do something useful
<jjohansen> stgraber: I replicated the rbind mount error in isolation, and fixed it. you can try the parser at people.canonical.com/~jj/apparmor_parser, just drop it over the one in /sbin/
<jjohansen> /me will work with jdstrand to get it into the archive today
<jjohansen> /me hasn't tried it in a container yet, my container is failing with an error before it hits the apparmor confinement, and I haven't had time to look into it yet.
<jjohansen> Sorry it's been one of those weekends (son + accident => hospital kind)
<uvirtbot> New bug: #952893 in zookeeper (universe) "Zookeeper script contains invalid check for cygwin" [Undecided,New] https://launchpad.net/bugs/952893
<koolhead17> nijaba: around
<nijaba> koolhead17: hello
<koolhead17> nijaba: what was the client you suggested instead postfix.
<nijaba> koolhead17: msmtp-mta
<koolhead17> nijaba: thanks
<nijaba> np
<koolhead17> nijaba: BTW looking fwd to meet you on 4th apr :)
<nijaba> koolhead17: unfortunately not sure yet if I'll be able to come
<koolhead17> ooh. :( ok
<hallyn> morning!
<stgraber> morning!
<hallyn> stgraber: say, is the kernel with the apparmor mounts fix in the archive?
<hallyn> i'm going to push the utmp fix, guess i should look through the (long) bug list and see what else i should squash in there
<uksysadmin> hey all
<stgraber> hallyn: I'm grabbing jjohansen's fixed apparmor_parser and I'm running a test kernel, if I can get it to work here, we should have it working in the archive later today
<hallyn> oh the fix was userspace
<stgraber> hallyn: well, jjohansen mentioned a few issues on the kernel side too, so I guess it's a bit of both
<stgraber> jjohansen: btw, no need to apologize for work you couldn't do over your weekend ;) thanks for all the help debugging this
<uksysadmin> Doing a fresh install of Precise Server Beta 1 always boots into tty7 - first time I didn't realise - thought my box had hung.  Is there a bug filed against this? I can only find this one which seems to be there for a while: https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/761830
<uvirtbot> Launchpad bug 761830 in grub2 "grub2 still hands off to blank tty7 on non-Server command-line-only systems and some Server systems" [Low,Confirmed]
<stgraber> hallyn: so I'm testing http://people.canonical.com/~jj/apparmor_parser + http://people.canonical.com/~jj/linux-image-3.2.0-18-generic_3.2.0-18.29_amd64.deb now
<hallyn> stgraber: ok i'm trying to figure out how to stage things.  do you know when that might hit the archive?
<stgraber> hallyn: apparmor_parser should hit the archive later today once I confirm it works and jjohansen and jdstrand can prepare a new package
<hallyn> ok.  but so the apparmor policy in lxc won't be re-enabled until tomorrow right?
<stgraber> hallyn: right, my plan is to upload LXC with the updated apparmor profile as soon as the new apparmor hits the archive, so likely late tonight or tomorrow
<stgraber> wow, apparmor works! well, the first rule matched and worked, now I need to write the next ones ;)
<hallyn> it's all sugar and spice from here on out
<hallyn> stgraber: ok i'm going to push a fix in an hour or 3 for bugs 948623, 951150, and 948481
<uvirtbot> Launchpad bug 948623 in lxc "Can't start more than 35 containers on my machine" [High,In progress] https://launchpad.net/bugs/948623
<uvirtbot> Launchpad bug 951150 in lxc "lxc-start-ephemeral is not all ephemeral" [High,New] https://launchpad.net/bugs/951150
<stgraber> bug 948481
<uvirtbot> Launchpad bug 948481 in lxc "adjust Build-Depends to include dh-apparmor" [Low,In progress] https://launchpad.net/bugs/948481
<hallyn> trivial one...  just rolling it in
<stgraber> ok. While you're touching lxc-start-ephemeral would it be possible to add a "-d" flag to it to start the container in the background?
<hallyn> you use lxc-start-ephemeral?
<hallyn> sure, will do
<hallyn> in that case, if you don't mind, i'll push a staging tree for you to test with?
<stgraber> I used lxc-start-ephemeral to do some LXC stress testing yesterday and ended up doing "for i in $(seq 1 2000); do (lxc-start-ephemeral -o template&); sleep 2; done"
<hallyn> that's pretty enough... :)
<hallyn> now the script hangs around to clean up...  i guess i should just have the script itself fork and exit so the forked thread cleans up
<stgraber> hallyn: yeah, I think it'd be the easiest, longer term I think we should add pre/post scripts to the LXC config
<stgraber> so we can have lxc-start run pre/post scripts outside the container and inside the container. OpenVZ has that and I remember using it quite often (though mostly for ugly workarounds ;))
<hallyn> yeah - like an initrd
<stgraber> right
<hallyn> that would help the fedora case too
<hallyn> if you want to open a bug on that...
<hallyn> or we can just discuss at uds i guess
<stgraber> jjohansen: any reason "/** -> /usr/lib/lxc/root/**" wouldn't work? (it doesn't match)
<stgraber> hallyn: I can add it to my UDS notes
<hallyn> along with the rewrite in go
<stgraber> right ;)
<smb`> hallyn, While I see you around and before I forget: I had a little improvement for libvirt last week which may be something to feed back upstream (bug 949028). How would that best/simplest be done?
<uvirtbot> Launchpad bug 949028 in ubuntu "libvirt: xen: never use type=ioemu for vif definitions" [Medium,Triaged] https://launchpad.net/bugs/949028
<smb`> hm... though that also was uploaded already (to precise)
<smb`> bah, forgot the closes... :/
<hallyn> gary_poster: hey, are you around?
<gary_poster> hey hallyn, yes
<hallyn> gary_poster: i'm just looking at the lxc-start-ephemeral patch...
<gary_poster> thanks, yes
<hallyn> gary_poster: only thing i noticed was that you seem to put the original LXC_BIND entry in the container fstab at the end
<hallyn> IIUC, won't that make your other bind mounts be overmounted?
<hallyn> oh, wait
<gary_poster> I do compare for equality
<gary_poster> I don't look for containment
<gary_poster> I was taking the tack that if they do an explicit -b, they mean it
<hallyn> no, you mount them under $rootfs, so that'll DTRT. sorry :)
<gary_poster> :-) cool, np
<hallyn> gary_poster: per stgraber's suggestion I'm also going to add a '-d' option, then i will push
<gary_poster> cool hallyn, thank you
<Xethron> I am looking for a way to dynamically control internet speed based on the amount of internet a user consumed relative to another.
<kirkland> smoser: howdy
<smoser> hy
<kirkland> smoser: do we have 64bit ami's for m1small yet?
<smoser> they just work
<smoser> amis are all the same. they just changed the hardware
<Xethron> For example. If only one user is online, he gets 100% of the line speed. However, if two users are online, the one who consumed less internet should get preference as to allow him/her to "catch up"
<smoser> er... the [virtual] hardware available to run on
<kirkland> smoser: oh?
<kirkland> smoser: you didn't have to rebuild the ami's using 64 bit kernels and packages?
<smoser> no. we have 32 bit amis and 64 bit amis.
<Xethron> Is there something that can control something like that?
<smoser> all they did was now allow you to run the 64 bit amis on m1.small size
<smoser> previously they would just deny your request to do that.
<smoser> the interesting thing this means is that they basically have no more 32 bit only hardware.
<smoser> (or they can selectively filter the available nodes to run an m1.small on to ones that have 64 bit hardware in the hypervisor)
<smoser> i guess its more likely that they just have an arch aware scheduler.
<lynxman> jamespage: ping
<stgraber> jjohansen: [61634.549605] apparmor_parser[16338]: segfault at 8 ip 000000000040fcc5 sp 00007fffef35cd20 error 4 in apparmor_parser[400000+be000]
<stgraber> jjohansen: that's when adding pivotroot
<stgraber> hallyn: so I guess we'll be running without apparmor for a few more days ;)
<hallyn> stgraber: dang.
<a_ok> does it matter if I add a mount option in fstab that is not supported?
<hallyn> stgraber: well i think i'll whip up a set of apparmor mount rules anyway, and pretend i tested it
<stgraber> hallyn: the mount rules for the /usr/bin/lxc-start can be tested but anything after that can't as pivotroot is denied and can't be allowed without apparmor_parser crashing :)
<hallyn> stgraber: i wasn't going to restrict mounts for lxc-start itself i don't think
<hallyn> maybe i should...
<hallyn> i guess i should.
<hallyn> no sense allowing mounts with target outside of $rootfs
<stgraber> hallyn: also you seem to have the choice between a generic "mount" allowing everything or allowing each of them individually but can't make something like "/** -> /usr/lib/lxc/root/**" as I'd have done for /usr/bin/lxc-start
<hallyn> right
<stgraber> but that's a bug, the documented syntax allows it
<hallyn> oh i misread
<hallyn> i'm going to go based on the docs at the wiki.  and file bugs :)
<stgraber> ok :)
<hallyn> but first i have some other testing to do.  bbl
<stgraber> hallyn: I think "/** -> /usr/lib/lxc/root**" is what we want for usr.bin.lxc, we also want "pivotroot /usr/lib/lxc/root" and then look into the second profile
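For reference, the rule shapes under discussion written out per the AppArmor wiki syntax — untested, which is precisely the bug being chased above (note the documented keyword is spelled pivot_root):

```
# sketch of the usr.bin.lxc-start profile rules being debugged
profile lxc-start {
  # allow mounts only when the target lands under the pivot dir
  mount options=(rw, bind) /** -> /usr/lib/lxc/root/**,
  # then allow pivoting into it
  pivot_root /usr/lib/lxc/root/,
}
```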
<stgraber> hallyn: the tricky part being nested containers
<jamespage> lynxman, pong
<hallyn> stgraber: my hope is that nested containers will just DTRT with pathnames...
<hallyn> stgraber: I'm more worried about cgroups with nested containers.
<hallyn> bc i'd like to restrict writing to them
<hallyn> stgraber: did i cc: you on the email about that?
<uvirtbot> New bug: #953081 in php5 (main) "libapache2-mod-php5 postinst script does not enable module" [Undecided,New] https://launchpad.net/bugs/953081
<stgraber> hallyn: I think you did, at least I remember you mentioning it (basically it'd be great to allow /sys/fs/cgroup/*/lxc/<container name>/** in apparmor)
<hallyn> right
<hallyn> q is do we want to risk the complication of per-container profiles this late in cycle
<pabelanger> So I'm getting ready to upload my patches for bug 953093, glance will still use sqlite3 by default.
<uvirtbot> Launchpad bug 953093 in glance "Add dbconfig-common support to glance" [Undecided,New] https://launchpad.net/bugs/953093
<stgraber> hallyn: I'd usually say it's too risky but on the other end I'm guessing LTS users (me included) will want it and so it'd be easier to fix any bug post-release than to explain how to implement the feature on their own until the next LTS
<hallyn> stgraber: do you think this requires FFE?
<hallyn> or is it a bug that users can't specify per-container policy?
<stgraber> hallyn: I think it'd need a FFe but explaining the current limitation and the problem it'd be for our users
<hallyn> ok
<hallyn> stgraber: the ugliest part of this is the complication/slowdown at lxc-create, lxc-clone, and lxc-destroy
<stgraber> hallyn: yeah, though if only building and loading the new profile it should be fairly quick
<stgraber> hallyn: (as long as we don't call a full reload of all the profiles)
<hallyn> stgraber: certainly won't be doing that :)
<gary_poster> hallyn, we are seeing tmpfs be very unhappy now (it has errors trying to whiteout files).  The only thing I know I did in a crazy way is that we are using an overlayfs as an upper for another overlayfs in my patch.  We can switch to just making a new tmpfs instead and see if that is less insane
<gary_poster> that's what I'm going to try to do in a bit
<gary_poster> need to step away for now
<bencer> hi huats, jamespage
<bencer> we have uploaded new packages of zentyal fixing the issues you raised
<huats> bencer: ok great
<jamespage> bencer: great!
<bencer> when do you think you can have a look at them?
<jamespage> bencer, tomorrow realistically
<bencer> jamespage: ok cool, i will ping you again tomorrow :)
<bencer> let me know if we can do anything else in the meanwhile
<pabelanger> Anybody know where the glance debian packaging branch is?  Or which one is master?
<pabelanger> Trying to create a merge request
<bencer> jamespage: huats could also upload them? do you want to have a look at them again? or can i push him to upload them asap?
<pabelanger> I am assuming lp:ubuntu/nova
<pabelanger> sorry, lp:ubuntu/glance
<jamespage> bencer: we should not upload until a FFe has been granted for these packages - https://wiki.ubuntu.com/FreezeExceptionProcess
<hallyn> gary_poster: my understanding is that the rationale/justification for overlayfs's simplicity is precisely that you can overlay on top of an overlay
<hallyn> gary_poster: so doing what you suggest is good for verifying that that's the problem, but if there's a problem then it's a bug
<jamespage> bencer: preparing the information for that in the bug report is something that can happen in between now and review completion
<jamespage> I note from the bug report that there is still some debate as to whether these packages should be native or not.
<huats> bencer: i cannot upload that since the FFe is needed + the new process
<bencer> jamespage: huats should we fill the bug for FFe now?
<jamespage> bencer: yes - it can be done in the existing bug report
<bencer> jamespage: do we have to do something else than subscribing ubuntu-release team?
<jamespage> bencer, yes - the Bug Description needs to be populated with the information detailed in https://wiki.ubuntu.com/FreezeExceptionProcess
<hallyn> stgraber: help!
<hallyn> stgraber: my attempt to dput lxc failed with -eperm
<jamespage> bencer: I think it just needs re-shuffling to line up with the required information to support the release team approval
<stgraber> hallyn: again? :)
<jamespage> please don't subscribe ubuntu-release until myself/huats has had time to review again; packaged really need to be good-to-go
<stgraber> hallyn: should be good now
<hallyn> stgraber: i don't *think* i did anything stupid...
<hallyn> thanks, retrying
<bencer> jamespage: huats ok thanks, i will ping you both tomorrow, going to rewrite the bug description before then, thanks again
<jamespage> bencer: no problem
<huats> great
<pabelanger> Okay, just created my first merge request using launchpad.  I _think_ I did it right
<hallyn> stgraber: push worked this time, thanks :)
<stgraber> hallyn: np. cjwatson hardcoded the ACL in his script so it shouldn't be dropped anymore ;)
<stgraber> hallyn: you merged that one https://code.launchpad.net/~gary/ubuntu/precise/lxc/bug-951150/+merge/97021 right?
<hallyn> stgraber: yup
<hallyn> hopefully the right one (i didn't realize he was going to make more changes)
<hallyn> stgraber: and -d works for me :)
<stgraber> hallyn: cool. Marking the bug fix released and the branch merged then.
<hallyn> lp email told me it was marked fix released
<stgraber> bug 951150
<uvirtbot> Launchpad bug 951150 in lxc "lxc-start-ephemeral is not all ephemeral" [High,New] https://launchpad.net/bugs/951150
<stgraber> ^ doesn't agree :)
<uvirtbot> stgraber: Error: "doesn't" is not a valid command.
<stgraber> anyway marked fix released manuaally
<stgraber> *manually
<hallyn> thanks
<hallyn> i wonder if the mp was done after the push, and that re-opened the bug
<pabelanger> Now, on to adding dbconfig-common support into nova
<koolhead17|away> wa00 is nova going to be completely automates ?  /o.0\
<gary_poster> hallyn, I have confirmation that overlayfs + upper overlayfs does not allow removal of files; overlayfs + upper tempfs does
<gary_poster> it may be that overlayfs requires xattrs but does not implement it?
<gary_poster> hallyn, I modified my branch: http://bazaar.launchpad.net/~gary/ubuntu/precise/lxc/bug-951150/revision/79
<gary_poster> should I make a new MP?
<hallyn> gary_poster: yes pls
<gary_poster> ack hallyn
<hallyn> gary_poster: by overlayfs upper,
<hallyn> gary_poster: that means the fs changes will be written to?
<hallyn> that seems reasonable to refuse
<gary_poster> hallyn, yes
<hallyn> so you're working around that?
<gary_poster> hallyn, yeah, I just make another tempfs
<gary_poster> mount that in ephemeralbind
<gary_poster> and it is ok
<hallyn> gary_poster: should you create a single tmpfs for all overalys for a container,
<hallyn> and create subdirs under that to use as the overlays?
<gary_poster> that's what I do hallyn, yeah
<hallyn> gary_poster: awesome, thanks :)
<gary_poster> cool
<gary_poster> hallyn, MP is here: https://code.launchpad.net/~gary/ubuntu/precise/lxc/bug-951150-2/+merge/97077 .  Would you like me to file a separate bug, or do anything else?
<hallyn> gary_poster: no thanks, i'll take it from here
<gary_poster> cool, thanks hallyn
 * gary_poster goes to have a bit of lunch
<hallyn> np - ttyl
<hallyn> gary_poster: to be sure, you ran this and it cleaned up fine?  (wondering whether i can just push blindly, or should setup an env to test in)
<mgw> any suggestions on avoiding clock skew on qemu-kvm guests? Is there a demonstrable reason why ntp shouldn't be used?
<gary_poster> hallyn, I tested it quickly and it seemed fine.  I had used a version without the -d changes for my initial tests.  If you haven't already run a couple of tests, I'm happy to
<gary_poster> happy to run a few more tests, I mean
<hallyn> gary_poster: -d shouldn't affect it, i'll push, thanks
<gary_poster> cool thank you
<hallyn> thanks you for the patch :
<gary_poster> :-) welcome
<Xethron> I wish to setup a server that will dynamically control internet speed based on the amount of internet a user consumed relative to another. For example. If only one user is online, he gets 100% of the line speed. However, if two users are online, the one who consumed less internet should get preference as to allow him/her to "catch up". Does anyone know of software with the ability to do that?
<uvirtbot> New bug: #953289 in unixodbc (main) "package odbcinst1debian2 2.2.14p2-5ubuntu2 failed to install/upgrade: ErrorMessage: dependency problems - leaving unconfigured" [Undecided,New] https://launchpad.net/bugs/953289
<axisys> i have to pick disk 8 while booting (ctrl+p and then select disk 8) to boot the OS.. do I need to change the disk order in bios to automate this?
<axisys> can I just install the boot loader on every disk instead ?
<Evansdny> hello
<Evansdny> I just setup my first home server using ubuntu 11.10 server of course coming from kubuntu and ubuntu on most of my machines i feel very mich at home. i am however having a horrible time configuring aliases and sub domains in apache either manually or using webmin's interface. i just cant seems to get the sub domain to come up. i have a top level domain and a static IP address at home through my ISP
<Evansdny> any help or direction to some good information sites would be greatly appreciated
<axisys> Evansdny: you will have lot better luck in getting support if you pastebin some error from your apache either here or #httpd channel
<Evansdny> ok....   well i guess i'm not getting any errors it just goes directly to the top level domain and dousnt even see the sub domain. even though all my config files are correct
<Evansdny>  i will try #httpd
<Evansdny> thanks
<axisys> how do I catch the grub at boot on lucid ? shift is not catching it
<axisys> is it esc+shift?
<axisys> esc worked
<axisys> this x4270 fails to boot all the way http://paste.ubuntu.com/880895/
<axisys> what gives?
<_KaszpiR_> GPT: Use GNU Parted to correct GPT errors.
<axisys> hmm.. in recovery mode it boots all the way
<axisys> _KaszpiR_: so boot in that mode and try to fix it, may be?
<axisys> not sure what is the fix
<axisys> GPT:585937498 != 585937499 <-- so change the block size or something?
<axisys> sdi and sdj are part of md and boot disk
<axisys> no GPT error for those two
<axisys> recovery mode works perfect.. it is really single user mode .. and I enabled ssh and ssh in it.. all lvm+md looks good and installed new pkg.. works perfectr
<axisys> perfect*
<axisys> http://paste.ubuntu.com/880960/ <-- boots perfect in dmesg
<uvirtbot> New bug: #953453 in lxc (universe) "[FFE] use per-container apparmor profiles" [Undecided,New] https://launchpad.net/bugs/953453
<akgraner> mmm, thought you'd like to see the table of contents for Issue 12 of Ubuntu User Magazine  - http://www.ubuntu-user.com/Magazine/Archive/2012/12
<HaltingState> hey; is ssh enabled/running by default on ubuntu server?
<HaltingState> if i install from iso and hook up to network can I ssh in or do i have to do something else also
<pabelanger> HaltingState: when you install openssh-server it is
<pabelanger> you will be prompted during the installation process to add it
<uvirtbot> New bug: #953533 in groovy (universe) "groovysh crashes instead of starting" [Undecided,New] https://launchpad.net/bugs/953533
<Xethron> I wish to setup a server that will dynamically control internet speed based on the amount of internet a user consumed relative to another. For example. If only one user is online, he gets 100% of the line speed. However, if two users are online, the one who consumed less internet should get preference as to allow him/her to "catch up". Does anyone know of software with the ability to do that?
#ubuntu-server 2012-03-13
<philipballew> Im trying to use EXPECT to write a auto-login script to a telnet session(dont ask me why my work uses telnet). WHat would be wrong with this script
<philipballew> http://paste.ubuntu.com/881214/
<greppy> philipballew: you may be better served with the perl Net::Telnet module.
<greppy> I used expect to do that stuff until I learned about Net::Telnet :)
<philipballew> greppy, Can you help me with a script for that?
<philipballew> I dont  speak perl
<greppy> ah, that could make life a little more interesting, but there are good examples in the docs.
<greppy> http://search.cpan.org/~jrogers/Net-Telnet-3.03/lib/Net/Telnet.pm
<greppy> philipballew: you can install the module using 'sudo apt-get install libnet-telnet-perl'
<philipballew> does it matter that I dont know perl greppy
<Rallias> I've got a bit of a predicament...
<Rallias> every hour on the 40 my server stops responding for ~30 seconds, and I can't figure out why.
<greppy> philipballew: take a look at the docs, there's an example script right at the beginning.
<Rallias> What, besides cron, should I look at?
<philipballew> alright. Might be fun to play with
<matthew-parlette> Rallias: have you taken a look at the logs after the 30 second stall?
<twb> Rallias: it's a cron job
<Rallias> twb but there's no cron job set to run on the 40 minute mark
<twb> Rallias: you checked /var/spool/cron and /etc/cron* ?
<twb> It may still trigger on the o'clock but take 40 minutes of thinking before its I/O starts
<Patrickdk> doesn't have to be cron, but very likely
<Rallias> oh...
<Patrickdk> mysql supports events, so it could be like a cron inside mysql :)
<Rallias> well wait... I have php fastcgi starting on the 9 and 39, but the download doesn't happen on the 9...
<Patrickdk> or anything else that does that, I know wordpress has it's own built in cron too
<Patrickdk> by stop responding, you mean web?
<Rallias> yeah
<Patrickdk> then it's probably apache stopping
<Patrickdk> but there is still a session in progress
<Rallias> I run nginx
<Patrickdk> well, whatever
<Patrickdk> and till that session finishs, it won't stop, so you can start it again
<Patrickdk> normally there is a better way to restart that
<Patrickdk> but why in the hell would you cron a restart?
<Rallias> but it knocks out everything as you saw with me disconnecting and reconnecting.
<twb> If shell is still working, you can just ps auxf when the event occurs, and look for D state procs.
<Rallias> but ssh gets knocked out, so no it doesn't
<twb> Then set up a nohup'd typescript to capture the output every ten seconds, and start running it at half-past the hour
<twb> Then after the event, go in and look at it
<twb> nohup sh -c 'while ps auxf; do sleep 60; done >/tmp/psauxf.log' &
<twb> ...or whatever
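A slightly tidied version of twb's one-liner, assuming /tmp is writable and that you kill the loop by hand once the hang has been captured:

```shell
# Snapshot the process forest once a minute; nohup keeps the loop alive
# after logout. D-state processes in the log point at I/O stalls.
nohup sh -c 'while ps auxf >> /tmp/psauxf.log; do
    date >> /tmp/psauxf.log   # timestamp each snapshot
    sleep 60
done' >/dev/null 2>&1 &
echo $! > /tmp/psauxf.pid     # later: kill "$(cat /tmp/psauxf.pid)"
```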
<Rallias> *sigh*
<Rallias> I hate this thing sometimes :D
<Rallias> so how long will that script run for?
<twb> Forever
<twb> Obviously once you are past the affected window, you go back and kill it
<Rallias> so if it is a mysql cron, what should I do to debug it?
<Patrickdk> none of this should cause ssh to die
<Patrickdk> unless this is a very very low memory server, and somehow causing everything to oom
<twb> Rallias: turn it off and see if the problem goes away
<twb> Patrickdk: I've seen I/O thrash, too
<Rallias> I'm running at 250 MB under full most the time
<twb> Rallias: pastebin output of free -m
<Patrickdk> i/o thrash won't cause his network connection to drop though
<twb> Patrickdk: well, unless all of apache is in D state
<twb> Patrickdk: but yeah I guess I wouldn't expect it to hang ssh also
<Patrickdk> apache yes, but not ssh
<Rallias> how do I copy with putty?
<Patrickdk> never used putty
<Rallias> twb it even hangs quassel so
<twb> Rallias: NFI
<Rallias> http://pastebin.ubuntu.com/881267/
<Rallias> As far as I can narrow it down, its either nginx, php, or mysql...
<twb> OK so it's probably swap-thrashing rather than OOMing
<twb> Which could conceivably account for stuff like ssh hanging, too
<twb> IME when 2.6 starts to swap you've basically fucked and have to hard-cycle the power to get it to respond
<uvirtbot> New bug: #953646 in unixodbc (main) "package odbcinst1debian2 2.2.14p2-5ubuntu3 failed to install/upgrade: ErrorMessage: dependency problems - leaving unconfigured" [Undecided,New] https://launchpad.net/bugs/953646
<BuenGenio> hellop
<BuenGenio> for some reason my system boots up without any nameservers in resolv.conf after every reboot
<BuenGenio> how do I fix this?
<BuenGenio> IP is static
<BuenGenio> can I add nameserver under interface config in /etc/network/interfaces ?
<stgraber> BuenGenio: dns-nameservers 8.8.8.8 8.8.4.4
<stgraber> BuenGenio: dns-search mydomaina.com mydomainb.com
<BuenGenio> where do I put that?
<stgraber> BuenGenio: http://www.stgraber.org/2012/02/24/dns-in-ubuntu-12-04/ for more details
<stgraber> BuenGenio: in /etc/network/interfaces as interface option
<stgraber> BuenGenio: you can also check "man resolvconf" for some examples
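Putting stgraber's two options together, a minimal static stanza in /etc/network/interfaces might look like this (the interface name, addresses, and domains are placeholders):

```
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
    dns-search mydomaina.com mydomainb.com
```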
<BuenGenio> even after I modify resolv.conf after every reboot it's clean
<stgraber> yes
<stgraber> in 12.04 /etc/resolv.conf is managed by resolvconf so if you don't put the right values in /etc/network/interfaces or get your DNS config from a dynamic source, any change to /etc/resolv.conf won't persist after reboot
<stgraber> and can be lost at any point even if done after boot
<twb> stgraber: resolvconf has been around a long time, and it's optional
<twb> (If precise has forced everyone to install resolvconf, I will be VERY ANNOYED.)
<stgraber> twb: resolvconf is in ubuntu-minimal in Precise
<twb> Well, FUCK
<Patrickdk> ya, changed from optional to required
<stgraber> twb: per https://blueprints.launchpad.net/ubuntu/+spec/foundations-p-dns-resolving it's possible to turn it off though we recommend people simply use it properly and report bugs so we can fix them (though it looks like we covered most of them already)
<Patrickdk> or atleast, could be annoying to remove :)
<stgraber> Patrickdk: well, removing it is indeed a bit tricky, disabling it is just a matter of making /etc/resolv.conf a regular text file instead of a symlink, though we obviously don't recommend doing that
<twb> stgraber: my problem is I understand how resolv.conf works and I don't want to have some silly extra script layer getting in the way and forcing me to understand it
<twb> Which is incidentally the same objection I had for NM
<stgraber> twb: then just make it a regular text file and you'll have the old behaviour
<twb> OK, no worries then
<twb> My memory was that it was much more complex than that
<Patrickdk> that becomes lots more fun, when you have all kinds of ppp/openvpn/ipsec/... connections going up and down per min on a server :)
<twb> Patrickdk: I do have that
<stgraber> the problem is, we discovered a lot of scripts and software all of them trying to mess with /etc/resolv.conf most of them overwritting it entirely
<twb> I basically go into (all except one of) their config files and set them to "do not fuck with resolv.conf"
<stgraber> the only sane way of fixing all that is by forcing them all through a single code path that can deal with merging the duplicate entries and generate a conf file that respects what the libc expects
<twb> stgraber: yeah I understand the purpose of resolvconf
<twb> stgraber: I just didn't want to have to deal with it until I absolutely had to
<stgraber> the biggest issue that triggered the switch to resolvconf was dhclient's ipv4/ipv6 modes that each fight with the other over who writes /etc/resolv.conf ... that one and read-only boot that was made impossible by having stuff trying to write /etc/resolv.conf :)
<stgraber> twb: yeah, we made sure that the conversion of /etc/resolv.conf to a symlink only happens the first time resolvconf gets installed, not at boot time and not on upgrades. So converting it back to a regular file basically tells resolvconf to stop doing anything (and it should also prevent anything that's resolvconf-aware from touching your /etc/resolv.conf)
<twb> Ah, dual-stack dhclient would be an issue, yeah
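For the record, the opt-out stgraber describes amounts to just this (as root; the nameserver address is a placeholder):

```shell
# Replace the resolvconf-managed symlink with an ordinary file;
# resolvconf then leaves /etc/resolv.conf alone.
rm /etc/resolv.conf
printf 'nameserver 8.8.8.8\n' > /etc/resolv.conf
```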
<Rallias> I've got a quick question about cron
<Rallias> how would I get a job to run once every 25 hours?
<BuenGenio> anyone using spamassassin here?
<twb> Rallias: run it every hour, check if the hours since epoch is divisible by 25, if not, exit 0
<Rallias> programiming has never been my strong suit...
<twb> That's not programming, that's basic sysadmin work
<twb> Something like $(($(date +%s) / 3600 % 25))
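Spelled out as a wrapper script, run from a `0 * * * *` crontab line (script path and the job itself are hypothetical):

```shell
#!/bin/sh
# /usr/local/bin/every-25h.sh -- twb's trick: invoked hourly by cron,
# but the body only fires when hours-since-epoch is divisible by 25.
hours=$(( $(date +%s) / 3600 ))
if [ $(( hours % 25 )) -ne 0 ]; then
    exit 0
fi
# ... the actual job goes here ...
```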
<linocisco> hi
<linocisco> how to see loaded modules in startup?
<linocisco> i found one error but I dont know how to scroll up
<linocisco> i found one error but I dont know how to scroll up
<linocisco> i found one error but I dont know how to scroll up
<qman__> linocisco, to scroll through the screen buffer on the console, use shift+pgup and shift+pgdn
<linocisco> qman__, not working
<twb> qman__: that will scroll through video buffer, and if he has chvt'd it will be empty
<twb> scrolling in screen is achieved with ^A^[ to enter copy mode, then vi movement commands (^B,^F,hjkl).  Leave it with ^G.
<Corey> seekwill: What have you broken now?
<seekwill> I didn't break anything!
<seekwill> Not yet, at least
<wooter> what happens when support for the ubuntu distro im using expires...just no more updates ?
<twb> wooter: you get nagged about it every time you log in
<wooter> ah thanks
<twb> Note that different packages expire at different times
<wooter> will things like apache still update ?
<twb> So even though you theoretically have five years of support, most of your packages only get 18mo or 3y
<twb> wooter: no, once the release is EOLd, none of it gets security updates
<twb> Ideally you should plan migration to the next LTS release several months before your current release is EOLd
<wooter> ah k..my server is 10.10, is up next month
<wooter> LTS comes out as same time as this expires
<twb> THat is because 10.10 is not an LTS release
<twb> You should probably sit on 10.10 until 12.04.1 is release, then migrate to that.
<wooter> yeah, i wasn't thinking when i installed it
<twb> Since it's only a month or two, it's probably not worth the grief of trying to migrate to 11.04 until then
<twb> Unless this is a front-line server, in which case security might be a bigger issue
<wooter> nah its not front line
<wooter> still, it took alot of man hours to get it all configured how it is now
<wooter> dopnt want to do it again for 5 years
<twb> wooter: upgrading doesn't mean reinstalling from scratch
<twb> This is not a cowboy outfit like RHEL
<wooter> yeah, ive got a few hardware and platform specific scripts going on, everytime it updates, stuff happens like entire sound sub system changes or something like that
<wooter> ill try upgrade first for a laugh
<twb> !upgrade
<ubottu> For upgrading, see the instructions at https://help.ubuntu.com/community/UpgradeNotes - see also http://www.ubuntu.com/desktop/get-ubuntu/upgrade
<twb> Please read the documentation first
<wooter> thanks twb
<linocisco> qman__, shift pgup and shiftpgdwn not working
<rye_> Hello, on current precise grub does not set a timer when the previous boot failed, is it intended?
<twb> linocisco: 16:14 <twb> scrolling in screen is achieved with ^A^[ to enter copy mode, then vi movement commands (^B,^F,hjkl).  Leave it with ^G.
<linocisco> twb, i m now talking about on CLI . not on VI
<twb> linocisco: I'm talking about GNU Screen.
<linocisco> twb, it also doesn't work
<linocisco> twb, my server is on VM
<linocisco> twb, as guest
<twb> linocisco: dunno then, sorry
<linocisco> twb, thanks anyway
<linocisco> hi all
<linocisco> regarding squid, i have WAN card with DHCP with authentication to get internet and, LAN card which has static IP. when I run "nmap WAN DHCP's IP", I got 3128 squid-http only
<linocisco> other ports are not open
<linocisco> is that the reason why my client does not get internet through squid?
<linocisco> hi all, i m in the middle of tasksel, now installing package 1%, I want to stop it with Ctrl+C, so how could I do to properly stop?
<linocisco> hi all
<linocisco> how to interrupt of installation package using tasksel?
<a_ok> what do I need to do to increse the "max open files" limit for mysql (I don't mean the user but mysqld as I have no idea where its imposed limits come from)
 * bluefrog test
<bluefrog> pouet
<jamespage> a_ok: take a look at the upstart configuration in /etc/init/mysql.conf
<jamespage> a_ok: and man 5 init
<jamespage> you can set the max files limit with an override
<rye> jodh: hello, I am stalking you regarding https://launchpadlibrarian.net/96606883/lp935585.jpg (bug #935585) which is currently being displayed on my screen and I don't want to boot it until I find out how to debug the reason
<uvirtbot> Launchpad bug 935585 in upstart "[kernel panic] init: log.c:786: Assertion failed in log_clear_unflushed: log->remote_closed" [High,Confirmed] https://launchpad.net/bugs/935585
<jodh> rye: the work around is documented on the bug here: https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/935585/comments/4. I've made the code changes for a proper fix and will soon be advertising an updated Upstart package (in a PPA) for initial testing. As such, we would welcome feedback from yourself and as many others as possible. I'll update the bug when the PPA version is available.
<uvirtbot> Launchpad bug 935585 in upstart "[kernel panic] init: log.c:786: Assertion failed in log_clear_unflushed: log->remote_closed" [High,Confirmed]
<rye> jodh: umm, yes, i am aware of the workaround. Since this is not a production machine i thought I can assist in debugging it, since as I understand the reason for such behavior was not found
<uvirtbot> New bug: #953876 in exim4 (main) "Can not specify different DNSBLs for IPv4/IPv6" [Undecided,New] https://launchpad.net/bugs/953876
<whalesalad> c7usi
<whalesalad> hey guys, is there a simple way to determine which flags the apt-get version of nginx is compiled with?
<_ruben> unless nginx can provide that info itself, you'll likely have to inspect the source package
<whalesalad> cool, looks like I was on the right track. am downloading it now and will inspect the `debian/rules` file
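Worth noting that nginx can report this itself: `nginx -V` prints the version plus the configure arguments the binary was built with (written to stderr), so no source download is needed:

```shell
# List the build-time configure flags of the installed nginx binary,
# one per line.
nginx -V 2>&1 | tr ' ' '\n' | grep '^--'
```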
<memoryleak> Hi
<bencer> jamespage: just saw your comment on #928501, going to prepare it for the FFe now
<memoryleak> Can I take MySQL Debian package to install 5.5.21 version on Ubuntu 11.10 ? Or is it recommended to build from source?
<rye> jodh: ok, so in order to move along - there is no need to debug the kernel panic issue on startup, right? I.e. I can proceed with booting it normally?
<uvirtbot> New bug: #953899 in unixodbc (main) "package odbcinst1debian2 2.2.14p2-5ubuntu3 failed to install/upgrade: ErrorMessage: dependency problems - leaving unconfigured" [Undecided,New] https://launchpad.net/bugs/953899
<uvirtbot> New bug: #953901 in unixodbc (main) "package odbcinst1debian2 2.2.14p2-5ubuntu3 failed to install/upgrade: ErrorMessage: dependency problems - leaving unconfigured" [Undecided,New] https://launchpad.net/bugs/953901
<jodh> rye: I'm not sure I understand what you are saying. What would be very useful is if you would try the PPA in #11 on that bug and comment on the bug.
<jodh> rye: We were not able to reproduce the problem as reported, but I was able to identify the fix for what is essentially a timing issue (how quickly the log partition becomes writeable).
<rye> jodh: i meant that I have the server with the error right here, so i might have altered the packages to add some debugging output that might have helped. But the "how quickly the log partition becomes writeable" seems correct, however I am still not sure why this can be 100% triggered when fsck is forced to run. Ok, will add the ppa then
<timkuijsten> it looks like the mysql upgrade from last night is breaking permissions.. anyone else having troubles?
<rye> ok, ppa version for upstart works properly
<jamespage> bencer: I'll see if I can track down an archive admin re native packaging but don't let that hold you up on the FFe
<bencer> should i remove that from the ffe then?
<uvirtbot> New bug: #953915 in cloud-init (main) "Missing .deb dependency on python-oauth" [Undecided,New] https://launchpad.net/bugs/953915
<RoyK> hrmf!
<RoyK> anyone here using automount?
<RoyK> I've setup auto.master pointing to auto.home and when logging in, nothing happens, automount doesn't output a single line of debug info
<RoyK> cd /home/rsk works - logging in doesn't trigger mounting
<RoyK> (anyone?)
<ikonia> RoyK: that sounds most odd
<ikonia> RoyK: when you login and it doesn't mount, is your expected home dir there ?
<uksysadmin> do openstack issues get discussed on a particular ubuntu mailing list, or is it left to the main openstack mailing list?
<RoyK> ikonia: sec
<RoyK> hm... it seems if the user is in nis, the homedir isn't fetched on login and automount doesn't do much
<RoyK> if user exists in local files, it works (somewhat)
<patdk-wk> hmm
<patdk-wk> been awhile since I used automount, it was always unreliable for me
<patdk-wk> pam_mount worked well for login mounts though
<uvirtbot> New bug: #953985 in keystone (universe) "keystone does not attempt to reconnect to sql server" [Undecided,New] https://launchpad.net/bugs/953985
<ikonia> RoyK: sorry for the delay, but do you have nis setup to point at the automounter, eg: auto.master (auto_master) and auto.home (auto_home)
<RoyK> ikonia: it worked out - lots of mess in this setup
<ikonia> RoyK: care to expand ?
<ikonia> was it the nis auto.home map (for example)
<jamespage> bencer: just got an ack on the FFe and an archive admin to review once they land in NEW
<uvirtbot> New bug: #954068 in euca2ools (main) "describe-images omits empty fields platform and root_device_type" [Undecided,New] https://launchpad.net/bugs/954068
<RoyK> ikonia: no I don't care to explain ;)
<ikonia> fair enough
<pabelanger> Preference? nova-database-mysql or nova-backend-mysql; this is for dbconfig-common support
<chmac> I'm looking for something rsync-like which runs as a daemon and pushes changes automatically, any advice?
<chmac> As I understand it, drbd waits until the write has been confirmed on both hosts before considering the file "written", which is not what I want.
<pabelanger> chmac: rsync can run as a daemon
<pabelanger> guess, you want the automatic part
<chmac> pabelanger: Does it monitor files and push those changes automatically? Might well do what I'm after...
<pabelanger> chmac: no, I don't think so
<pabelanger> you'd have to read up on it
<chmac> pabelanger: Yeah, I think it acts as a daemon to receive rsync connections from other hosts, rather than monitoring the filesystem.
<chmac> pabelanger: I found unison and lsyncd which might do what I'm after
<chmac> pabelanger: I essentially want something like rsync running ever 5 minutes from cron, but slightly faster and with less overhead :-)
<chmac> pabelanger: Thanks for your input btw
<pabelanger> np
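The lsyncd-style approach chmac found can be sketched by hand with inotify-tools, assuming `inotifywait` is installed; the source tree and destination host are placeholders:

```shell
#!/bin/sh
# Watch a tree for changes and push each batch with rsync -- roughly
# what lsyncd automates. Blocks in inotifywait until something changes.
SRC=/srv/data/
DEST=backup-host:/srv/data/
while inotifywait -r -e modify,create,delete,move "$SRC"; do
    rsync -az --delete "$SRC" "$DEST"
done
```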
<jamespage> bencer: around?  need to talk about ebox->zentyal transition
<uvirtbot> New bug: #954127 in dhcp3 (main) "Network configuration fails during installation of 12.04" [Undecided,New] https://launchpad.net/bugs/954127
<iclebyte> I need a kernel module which only seems to exist in 2.6.35-28-generic - where can I download just the module to compile it against my current 2.6.32-28 kernel?
<jdstrand> SpamapS: hi! I see you did some stuff with the txzookeeper test suite. what do I need to do to run it? I did: cp -a /usr/share/pyshared/txzookeeper /tmp ; cd /tmp/txzookeeper ; trial txzookeeper
<xranby> iclebyte: look inside the 2.6.35-28-generic  kernel sourcecode
<iclebyte> the ubuntu specific source or direct from kernel.org?
<jdstrand> SpamapS: that mostly works except for: test_session [ERROR]
<jdstrand> exceptions.AssertionError: ZOOKEEPER_PATH environment variable must be defined
<xranby> iclebyte: the ubuntu specific source
<SpamapS> jdstrand: you need a running zookeeper
<jdstrand> SpamapS: I installed zookeeperd
<uvirtbot> New bug: #912508 in txzookeeper "test suite can wipe out existing zookeeper on local box: should Configure zk server as part of test fixture" [Medium,Confirmed] https://launchpad.net/bugs/912508
<SpamapS> jdstrand: wait no, I think that one starts zookeeper up.. ZOOKEEPER_PATH=/usr/share/java/zookeeper.jar might work
<SpamapS> jdstrand: hard to recall.. one of the test suites trashes a running zookeeper
<jdstrand> SpamapS: I saw that bug and actually just commented in it. I am writing a qrt script so I can remember how to do the test suite in case it doesn't get enabled in the build. these run in vms so I don't care about trashing things
<xranby> iclebyte: you can also google for the module name and try to locate any upstream source repository
<xranby> and then build the module standalone
<iclebyte> yea it's a Qlogic one. having a hard time finding it
<iclebyte> it's called qlcnic
<iclebyte> 10gigE fibre card
<jdstrand> SpamapS: this is it: ZOOKEEPER_PATH=/usr/share/java/ trial txzookeeper
<jdstrand> SpamapS: perfect, thanks!
<jdstrand> PASSED (successes=136)
<SpamapS> jdstrand: the reason I didn't want to run the test suite in the build if it trashes zookeeper is that some people build in chroots on their machines.. would be a shame to accidentally hose somebody's zookeeper.
<SpamapS> jdstrand: I think it will be relatively easy to modify the suite to start a bastion zookeeper and run the tests against that
<jdstrand> SpamapS: no, I get that. It would be nice to start zookeeperd on a different port or something
 * jdstrand nods
<SpamapS> jdstrand: we do that for juju.. so we can even steal that code, maybe even push it back into python-zookeeper as a utility class.
<jdstrand> cool, yeah
<xranby> iclebyte: the module source got merged into the upstream linux kernel and sent in by qlogic themself. its maintained in the git tree at kernel.org
<SpamapS> jdstrand: thanks for the zookeeper review. Next step is: update man pages to mention network security options. Anything else?
<xranby> iclebyte: this was the initial commit http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=af19b49152bdb68fda894183e88096d6d1aa5c3d
<jdstrand> SpamapS: nope, I think that should be it
<xranby> i have since then moved around inside the kernel tree
<iclebyte> xranby, can this git interface generate a tarball with all the contents or do I need to download each file manually?
<jdstrand> SpamapS: oh, you might check out the deprecation warning
<xranby> iclebyte: take this patch http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=patch;h=af19b49152bdb68fda894183e88096d6d1aa5c3d
<xranby> iclebyte: and apply it on your kernel tree
<jdstrand> SpamapS: it isn't required for main inclusion; just fyi
<SpamapS> jdstrand: I will.. I'd be surprised if upstream hasn't addressed it.. they have been pretty on top of such things which is why there aren't many warnings.
<iclebyte> does it not need to be for a specific kernel source version?
<xranby> iclebyte: that all depends on if the code itself uses features only found in newer kernels
<xranby> you have to read and test the code yourself on your old kernel
<xranby> the code itself do not care which kernel version its used on
<iclebyte> I thought diff would because it references specific lines in the code
<xranby> this are the initial commit so its all new files
<xranby>  8 files changed, 8728 insertions(+), 0 deletions(-)
<SpamapS> jdstrand: fyi: https://issues.apache.org/jira/browse/ZOOKEEPER-909 .. will open a bug report which we can close when 3.4.0 is uploaded
<SpamapS> jdstrand: (thats the deprecation warnings) :)
<jdstrand> SpamapS: oh, we are getting 3.4.0 in precise?
 * jdstrand was hoping we would for the kerberos bits
<SpamapS> jdstrand: no
<SpamapS> jdstrand: btw, that brings up a different issue. PHP 5.4 .. it looks like Suhosin has not shipped for 5.4.0 yet
<SpamapS> mdeslaur: ^^
<SpamapS> micahg: ^^
<SpamapS> jdstrand: perhaps we should discuss this at your team meeting, but I feel that its more your team's call than mine. For server users, 5.4.0 is by far the preferred choice, but w/o Suhosin.. it puts a larger burden on you guys
<mdeslaur> SpamapS: well then, looks like we aren't getting 5.4 after all :)
<jamespage> jdstrand, 3.4.x is still considered a beta release by upstream
<jdstrand> I see
<SpamapS> I asked suhosin's authors for an update.. but have not heard anything
<uvirtbot> New bug: #953258 in juju "juju executing hooks with umask of 0077" [Undecided,Fix released] https://launchpad.net/bugs/953258
<iclebyte> how do I apply this patch to a kernel source tree?
<mdeslaur> SpamapS: ah, yeah, I was about to ask that
<iclebyte> headers are in usr/src/linux-headers-2.6.32-28
<iclebyte> ?
<alex88> hello guys, i'm on 11.04 server, i've tried to change hostname by editing /etc/hostname & /etc/hosts but after reboot it's still the same..any idea?
<iclebyte> alex88, did it save okay?
<uvirtbot> New bug: #954160 in zookeeper (universe) "deprecation warnings during build: org.jboss.netty.channel.ChannelPipelineCoverage" [Low,Triaged] https://launchpad.net/bugs/954160
<alex88> iclebyte: yup.. wait.. if it's important, it's a openvz container
<iclebyte> not played with openvz so I can't comment.
<alex88> let me check also that…
<alex88> iclebyte: yup, that was the problem..thank you anyway man ;)
<iclebyte> welcome.
<jcastro> smoser, no hvm ami's anywhere but east-1?
<iclebyte> jesus, running Maverick is going to be easier than this.
<smoser> no hvm anywhere but us-east-1, jcastro
<cwillu_at_work> iclebyte, what are you doing?
<iclebyte> cwillu_at_work, I need a driver which only exists in 2.6.35 on my ubuntu 10.04lts server system
<iclebyte> I have the patch http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=patch;h=af19b49152bdb68fda894183e88096d6d1aa5c3d
<cwillu_at_work> iclebyte, you aware that newer kernels will work fine on it?
<cwillu_at_work> i.e, you should be able to install a 11.10 kernel deb without issue
<iclebyte> I was thinking that maybe there is another repo I can add then just upgrade my kernel. exactly where to find such a repo i'm not sure
<jdstrand> smoser: hey-- I saw that you packaged txaws. what is aws-status supposed to actually do?
<smoser> its a gui i think , right ?
<smoser> i ran it at once
<smoser> s/at once/once/
<smoser> it was long ago
<iclebyte> there must be a backported kernel package somewhere?
<jdstrand> smoser: well, it prompted me for some credentials, then didn't do anything
 * jdstrand moves along
<bencer> jamespage: i'm around, let me know what you need about the transition
<jamespage> bencer, hey - so I updated the bug report as well
<bencer> let me check first then
<jamespage> bencer: as the packages stand now if someone upgrades a system with ebox it won't automatically get replaced with the equiv zentyal packages
<jamespage> this was pointed out to me by an AA
<bencer> jamespage: let me put you in context
<bencer> current ebox packages are totally broken in oneiric, and in a pretty bad shape in lucid
<bencer> so it's unlikely anybody is using them :)
<bencer> also the configuration backend changed
<jamespage> lucid->precise is quite a likely upgrade path tho
<bencer> so you need a migration tool to change the configuration format
<bencer> anyway, the configuration paths are different
<bencer> so no configuration is overwritten
<jamespage> SpamapS, around - would appreciate your input into this conversation ^^
<bencer> jamespage: the story is that ebox packages in the past were not really maintained, they were uploaded and forgotten
<bencer> now it is one of the goals of the zentyal project to have working packages in ubuntu
<bencer> so we will put efforts on their maintenance
<SpamapS> jamespage: looking
<SpamapS> bencer: we should still make sure that zentyal causes the removal of ebox so there are no problems.
<jamespage> SpamapS, bencer: well the packages as is cause a removal - but its not automatic - you have to switch to zentyal
<bencer> yes, if you do apt-get install zentyal, ebox packages are removed
<bencer> how should we make the upgrade path then?
<bencer> with a dummy package?
<SpamapS> Ah, there needs to be a transitional, empty ebox package
<jamespage> bencer, see http://wiki.debian.org/Renaming_a_Package
<SpamapS> yes exactly, just an ebox package, which depends on zentyal
<SpamapS> it can be removed in 12.10
<SpamapS> bencer: we agree that having old broken ebox around is unlikely.. but if it was around, and causing problems for the user.. they'll be quite happy when its replaced w/ Zentyal
<bencer> yup, i agree
<bencer> and should be easy to do
<bencer> jamespage: SpamapS where should we do the dummy package
<jamespage> bencer: it forms part of the source package that will replace it
<bencer> on ebox or on libebox, so we can set it to oldlibs and be automatically suggested to be removed?
<jamespage> bencer: zentyal-common replaces libebox so the dummy package should be provided by it
<bencer> jamespage: currently we have the replaces header in all packages
<jamespage> bencer: yes - but it needs a transitional package as well
<bencer> the way to go is to still maintain the replaces and add the dummy package for libebox, right?
<jamespage> yep - each package should have a Breaks/Replaces and a transitional package for the package it replaces.
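What jamespage is describing could look roughly like this in debian/control, using the libebox → zentyal-common rename from the discussion. A sketch only: the version bound and descriptions are illustrative, not taken from the actual packages.

```
Package: zentyal-common
Breaks: libebox (<< 2.3)
Replaces: libebox (<< 2.3)
Depends: ${misc:Depends}, ...
Description: Zentyal common library

Package: libebox
Architecture: all
Section: oldlibs
Priority: extra
Depends: zentyal-common, ${misc:Depends}
Description: transitional dummy package for zentyal-common
 This empty package eases upgrades from ebox and can be safely
 removed afterwards.
```

The Section: oldlibs placement is what lets package managers suggest removing the dummy automatically, as bencer mentions above.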
<bencer> cool, where do you want us to push these changes?
<SpamapS> jamespage: a provides isn't strong enough..
<SpamapS> you need a dummy package
<bencer> on our ppa? or in the bzr import you created?
<SpamapS> oh
<SpamapS> sorry I misunderstood
<jamespage> PPA is fine
<jamespage> bencer: I really don't want to make any changes to the packages you provide if possible
<bencer> fair enough
<bencer> going to talk with jacalvo and we will push those changes and i'll update you when ready
<jamespage> bencer: do you want me to check before you upload to PPA?
<jamespage> that way we avoid another cycle of review if there are any issues
<bencer> jamespage: i'll send you a link to our git changeset when ready
<jamespage> bencer: great!
<smoser> SpamapS, utlemming says that you said you would do:
<smoser> https://bugs.launchpad.net/ubuntu/+source/apt/+bug/948461
<uvirtbot> Launchpad bug 948461 in apt "apt-get hashsum/size mismatch because s3 mirrors don't support http pipelining correctly" [High,Confirmed]
<smoser> as SRUL
<jamespage> SpamapS, reboot testing?
<SpamapS> jamespage: yes!
<SpamapS> jamespage: whats the word?
<jamespage> SpamapS, so - I did some digging as gema asked me to take a look at the failures
<SpamapS> smoser: I'll take a look at it yes
<jamespage> and I knew you were at charm school
<jamespage> I discovered three different bugs!
<SpamapS> smoser: best if somebody else uploads, so I can just do it as an SRU review
<SpamapS> jamespage: only 3? :)
<jamespage> SpamapS, bug 953081
<uvirtbot> Launchpad bug 953081 in php5 "libapache2-mod-php5 postinst script does not enable module" [Undecided,New] https://launchpad.net/bugs/953081
<smoser> ah. yeah. thats fine. i thought utlemming suggested that you were going to backport fix and upload ...
<smoser> utlemming, ^
<jamespage> SpamapS, bug 953144
<uvirtbot> Launchpad bug 953144 in ubuntu-server-iso-testing "Post-reboot tests are not correctly injected into test-case" [Undecided,New] https://launchpad.net/bugs/953144
<jamespage> and one in Jenkins
<jamespage> SpamapS, basically the standard lamp test suite is being run twice - it fails pre-reboot because of that first bug
<SpamapS> haha nice
<utlemming> smoser: yeah, I'll do the backporting work
<jamespage> SpamapS: it then runs it again due to the second bug post reboot
<jamespage> but it does pass
<jamespage> but Jenkins then borks as it gets confused because the test suite has the same name
<jamespage> and reports a double failure from the first run
<jamespage> I have a local fix for it that I'm working on ATM - had to put it down today for other stuff
<SpamapS> jamespage: ok, so that makes sense because I never actually ran the test as part of jenkins
<jamespage> but the post-reboot test does not run at the moment as it can't find boottests.py package
<jamespage> its just a pathing issue
<SpamapS> jamespage: ok, so what can I do? :)
<jamespage> SpamapS, leave it with me - I'll get it fixed up tomorrow
<SpamapS> jamespage: want me to fix the PHP bug? Or did you already get that handled? Because I believe it *must* do a restart, not reload
<jamespage> if you could check bug 953081 that would be great
<uvirtbot> Launchpad bug 953081 in php5 "libapache2-mod-php5 postinst script does not enable module" [Undecided,New] https://launchpad.net/bugs/953081
<jamespage> SpamapS, I am right in thinking that graceful restart does not load new modules right?
<jamespage> ah - I guess so
<SpamapS> jamespage: correct
<SpamapS> LoadModule is only processed at start time, so restart is needed
<jamespage> I'm happy to fix that bug as well tomorrow
<jamespage> SpamapS, yikes - that means i have to touch php :-)
<SpamapS> jamespage: hold off on that
<SpamapS> jamespage: still pending a potential move to 5.4
<jamespage> SpamapS, OK - I'll leave alone then
<jamespage> SpamapS, FYI I normally install the ubuntu-iso-testing package from the team PPA
<jamespage> that sorts out the deps and configures everything up for you
<SpamapS> ohhh coool did not know that
<jamespage> and then you can hack locally on the branch as much as you like
<jamespage> PPA is based off trunk - daily builds ....
<jamespage> I need to get it uploaded for precise as well
<jamespage> will do that this week
<ha1dfo> hi, I don't know if this is the appropriate channel, but I want to preseed debconf for mysql-server-5.1, but having trouble: I can see my vars in debconf db, but mysql still asks for root password
<SpamapS> ha1dfo: simplest way to make sure you get the right values set is to manually install and answer the questions, then do 'debconf-get-selections | grep mysql'
<ha1dfo> SpamapS, automation is the whole point in this one...
<ha1dfo> ohh, you mean to dump it, then put them back? I've done it with the password fields
<ha1dfo> SpamapS, just tried it, still asks for it
<ha1dfo> maybe because mysql-server-5.1 config starts with an unconditional 'db_input high mysql-server/root_password || true' ? Can this function be tricked?
<SpamapS> ha1dfo: if its already answered, db_input will take the value
<ha1dfo> SpamapS, installed mysql-server-5.1, given passwords, debconf-get-selections | grep mysql; removed mysql, purged mysql-server-5.1, loaded selected values, reinstalled, and still i got the question :(
<SpamapS> ha1dfo: hrm, might be a package bug then
<ha1dfo> SpamapS, with this preseed method i was able to preconfig ldap-auth-config easily, but i am having a lot of trouble with this mysql
<ha1dfo> i think then i'll have to dig around in debconf's scripts
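For reference, the preseed ha1dfo is trying to load would normally look something like this. The root_password template name is the one quoted from the package's config script above; root_password_again is assumed to be its companion question, and the password value is a placeholder.

```
# hypothetical preseed file: mysql.seed
mysql-server-5.1 mysql-server/root_password password s3kr1t
mysql-server-5.1 mysql-server/root_password_again password s3kr1t
```

Loaded with `debconf-set-selections mysql.seed` before installing the package. Note that purging the package may clear the stored answers from the debconf database, which would fit the reinstall-after-purge behaviour described here.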
<bencer> jamespage: a quick question, the versioning for this should be zentyal-common 2.3.3 or zentyal-common 2.3.2ubuntu1 ?
<jamespage> bencer: keep the native versioning
<jamespage> 2.3.3
<bencer> ok
<utlemming> smoser: https://code.launchpad.net/~utlemming/cloud-init/cloud-init.apt_debug/+merge/97268
<smoser> would you consider adding that to apt_update_upgrade ?
<smoser> rather than another one ?
<pabelanger> Success!  nova converted to use dbconfig-common now.  Just finishing up the testing and will setup a merge request
<stgraber> jjohansen: any luck with apparmor_parser?
<jjohansen> stgraber: sure, I have fixed a few more bugs, and am chasing another atm.  I did notice something about your failure though.  Your profile rule is wrong, it needs a trailing / to indicate the target is a directory
<jjohansen> the trailing / is significant in profile rules
<jjohansen> stgraber: with the trailing / I think what I gave you should actually work, but I didn't test that as I was already testing a compiler with some additional fixes
<ha1dfo> SpamapS, back to the mysql preseed problem. If it is a package bug, who can I report it to? the package only states a mailing list for contact
<stgraber> jjohansen: oh, ok, I can give it a quick try here
<utlemming> smoser: sure
<stgraber> jjohansen: [127565.282734] type=1400 audit(1331663966.960:723): apparmor="DENIED" operation="pivotroot" parent=17255 profile="/usr/bin/lxc-start" name="/usr/lib/lxc/root/" pid=17272 comm="lxc-start" srcname="/usr/lib/lxc/root/mnt/"
<stgraber> jjohansen: with pivot_root /usr/lib/lxc/root/,
<jjohansen> stgraber: hrmm, okay, it looks like it will need those other fixes after all, I'll ping you as soon as I have figured out the current one
<stgraber> jjohansen: ok, thanks
<bencer> jamespage: http://git.zentyal.org/zentyal.git/commitdiff/bd2f1eb44ce911937f0279e4db3e353cacdc1c3f
<axisys_> i have 10 disks.. how do I boot the system from disk 8 ? currently i manually pick disk 8 from console.. yikes
<jamespage> bencer: I'd add ${misc:Depends} to all of the transitional packages
<jamespage> bencer: some of the packages are missing Breaks: fields - would be best to add those as well.
<axisys_> should I need to create a small partition on all the disks and install grub on them ?
<gary_poster> hallyn, are you around?  I'd love to get https://code.launchpad.net/~gary/ubuntu/precise/lxc/bug-951150-2/+merge/97077 landed soon if possible
<hallyn> hm?  i thought i had pushed that
<hallyn> gary_poster: can you check the archive and see if it does what you need?  I'm pretty sure I merged that
<gary_poster> hallyn, oh, sorry!  I expected to see the MP change.  Looking
<hallyn> yeah sorry i need to be better about that.  since the mp itself does squat wrt the archive i tend to ignore it once i've pushed the pkg
<hallyn> and bzr merges now are not working for me bc of quilt cruft, so i do them by hand anyway
<gary_poster> heh (and argh for merges), I can understand that
<hallyn> frankly i wonder if we should try doing lxc natively, using git branches
<hallyn> (would say bzr, but upstream is git)
<gary_poster> sure
<gary_poster> if this were a Launchpad build recipe, you could have the coding in git, imported into Launchpad, with a bzr branch for the debian bits
<gary_poster> I don't know how that corresponds to your needs though :-/
<hallyn> of course there's the complication that debian is using quilt for its patches
<hallyn> but yes, that would work for me i think.
<hallyn> maybe our recipe could convert the debian patches to a bzr branch...
<gary_poster> I'll mention it to my manager this afternoon; maybe he has a brilliant idea
<webmonkey> If someone has a minute, I'm having some trouble getting fail2ban jails to start properly
<webmonkey> 2012-03-13 19:41:29,561 fail2ban.jail   : INFO   Jail 'ssh' started
<webmonkey> 2012-03-13 19:41:29,581 fail2ban.actions.action: ERROR  iptables -N fail2ban-ssh
<webmonkey> iptables -A fail2ban-ssh -j RETURN
<webmonkey> iptables -I INPUT -p tcp -m multiport --dports all -j fail2ban-ssh returned 200
<hallyn> gary_poster: cool
<webmonkey> running Ubuntu 11.10 and fail2ban v0.8.6
<webmonkey> I've googled around and tried the various flock and sleep patching methods I found, to no avail :-( Also this is running on a Linode VPS
<uvirtbot> New bug: #954368 in puppet (main) "Upstart script puppet agent" [Undecided,New] https://launchpad.net/bugs/954368
<bencer> jamespage: fixed that too on http://git.zentyal.org/zentyal.git/commitdiff/cd397b18f4c91bee75a7e8692ac961633432d188
<bencer> should be ready to build now?
<hallyn> stupid q.  in precise, tasksel adds 'basic ubuntu server' option at top.  does enabling that add, or reduce, packages installed?
<jetole> Hey guys, don't know if this is the right room to ask but I just recompiled my linux kernel (Added IMQ support for traffic shaping) from the source of the server image using the dpkg-buildpackage tool and when it finished, as expected, it created .deb files to install but there are also a bunch of .udeb files which I have never seen before and wondering what they are and if I need them
<jetole> for example nic-modules-2.6.28-19-generic-di_2.6.28-19.66_i386.udeb is one. scsi-modules-2.6.28-19-generic-di_2.6.28-19.66_i386.udeb is another
<hallyn> just the debs
<jetole> hallyn: what do I do with them?
<Jeeves_> Oi
<Jeeves_> Is it correct that /var/run symlinks to /run in Precise?
<hallyn> Jeeves_: yes
<jetole> Jeeves_: try asking in #ubuntu+1
<jetole> or not. nevermind
<Jeeves_> Hmm, that's not very cool for the symlinks inside /var/run :/
<hallyn> jetole: see https://en.wikipedia.org/wiki/Deb_(file_format)
<hallyn> microdeb
<jetole> hallyn: I did but I'm still not sure if I should have interest in these files or what they're for. I was hoping for someone who has done this before or knows the udebs better and can share some experience
<hallyn> jetole: dpkg -x nic-modules-2.6.28-19-generic-di_2.6.28-19.66_i386.udeb x; find x; and see if it has all you need.  If so, you might be able to save space using it instead of the deb.
<hallyn> but i always just take the .deb
<hallyn> (really i just 'fakeroot debian/rules binary-generic' which i don't think even gives me .udebs)
<harry__> currently i have ubuntu 11.10, i want to upgrade my machine to ubuntu server, are there any additional features or is it a totally different operating system?
<jetole> hallyn: this is for the kernel on 9.04 (please don't remind me 9.04 isn't supported. I know) and I ran apt-get source linux-image-2.6.28-19-server, dpkg-source -x linux_2.6.28-19.66.dsc and then AUTOBUILD=1 NOEXTRAS=1 skipabi=true dpkg-buildpackage -rfakeroot -b
<jetole> harry__: it's not so much a totally different operating system, the kernel is different and it's more optimized for functioning as a server but I think the primary question is why do you want to switch?
<harry__> i just want to learn more on ubuntu server by installing, can i keep my data intact after upgrading
<harry__> do i have the same user interface as ubuntu 11.10
<hallyn> jetole: wasn't saying what you did was wrong :)  i generally follow https://wiki.ubuntu.com/SergeHallyn_ppakernels
<hallyn> jetole: but again, i think it's worth it for you to check if the .udebs happen to have everything you need, if you want a minimal custom install anyway.  otherwise, i'm saying the udebs are alternatives to the debs for installers.  you don't need them if you're using debs
<hallyn> bbl
<harry__> i have lot of data and applications on ubuntu 11.10, all of them will on ubuntu server?
<harry__> all of them will work on ubuntu server?
<hallyn> harry__: yes.  in the past i've installed ubuntu server and later done apt-get install ubuntu-desktop...
<hallyn> if there's anything missing you'll be able to install it.  ubuntu server is ubuntu.
<harry__> nice
<harry__> what is the easiest way to upgrade?
<harry__> apt-get install command?
<hallyn> sorry - what do you mean by upgrade?
<hallyn> install packages, yes
<hallyn> upgrade, 'apt-get update && apt-get -y dist-upgrade'
<utlemming> smoser: for your consideration: https://code.launchpad.net/~utlemming/cloud-init/cloud-init.apt_debug/+merge/97268
<harry__> upgrade i mean , from ubuntu  to ubuntu server
<koolhead11> harry__, what exactly you want ?
<hallyn> I'm afraid that's a bit of a non sequitur.
<hallyn> you'd have to remove things like network-manager, and manually config network.
<hallyn> like koolhead11 says, i'm not sure that's what you want
<harry__> i have ubuntu 11.10, i want to install ubuntu server
<koolhead11> harry__, sudo tasksel
<harry__> i will be having the same user interface, any major changes, just want to confirm?
<koolhead11> harry__, everything will be the same. Don't worry. No one can take away your GUI. go ahead :)
<harry__> thanks much koolhead11 and hallyn
<hallyn> harry__: are you using wireless?
<harry__> yes i am using wireless
<harry__> you mean the internet connection right
<jamespage> bencer: lgtm
<Daviey> jamespage: go home.
<jamespage> Daviey: no - you go home
<jamespage> :-)
<Daviey> :)
<jamespage> oh - I am home - how did that happen?
<Daviey> mystery of the world.
<hallyn> harry__: don't switch to server then.  server doesn't support wireless out of the box.
<harry__> oh thanks
<harry__> why doesnt it support?
<harry__> wireless
<hallyn> the drivers aren't shipped, and userspace tools not installed.  you can of course install them.  you need network cable to do so :)
<hallyn> (have done it quite often, but it's never pretty)
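The "install them over cable" route hallyn alludes to would typically mean pulling in wireless-tools and wpasupplicant over the wired link, then a stanza like the following. A sketch only, assuming the card shows up as wlan0; the SSID and passphrase are placeholders.

```
# /etc/network/interfaces -- WPA via the wpasupplicant ifupdown hooks
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid HomeSSID
    wpa-psk  passphrase-here
```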
<harry__> oh ok , anyway good thought of you..i will wait then ..wanted to just try out the server
<Xethron> I wish to setup a server that will dynamically control internet speed based on the amount of internet a user consumed relative to another. For example. If only one user is online, he gets 100% of the line speed. However, if two users are online, the one who consumed less internet should get preference as to allow him/her to "catch up". Does anyone know of software with the ability to do that?
<RoyK> Xethron: pfsense might have some of it
<RoyK> dunno about linux firewalls with that, though
<koolhead11> harry__, is there any app you want to run?
<Xethron> RoyK, thanks, will look into it.
<koolhead11> Daviey, got few mins
<harry__> nothing important
<harry__> just wanted to try it out
<harry__> but when i did sudo tasksel i found packages like mathematics and other packages
<harry__>  i installed mathematics
<harry__> was just trying it out
<harry__> i love ubuntu :)
<koolhead11> harry__, grab a copy of ubuntu server guide and install virtualbox and try to learn/understand
<Daviey> koolhead11: a few
<harry__> cool yes will try
<koolhead11> hallyn, https://help.ubuntu.com/10.04/serverguide/C/
<koolhead11> and google 4 virtualbox
<hallyn> ?
<hallyn> no thanks
<uvirtbot> New bug: #954492 in juju (universe) "dummy ensemble package should be dropped" [Low,In progress] https://launchpad.net/bugs/954492
<variant> hi all, my local dhcp server is a little on the flakey side. is it possible to stop dhcpcd from deconfiguring the interface if the dhcp server dissapears for a few minutes?
#ubuntu-server 2012-03-14
<Zx432> Hi
<Zx432> I tried to set up a server but now I have a loopback. I have a Local loopback. The server is plugged in my router, since I only want it for internal use it shouldn't be a problem.
<Zx432> Anyone here? :)
<l0n> Zx432, Not sure what you mean, what are you trying to do?
<Zx432> I was setting up a home server
<Zx432> It should be basically my first server, it is plugged in my router and has no internet access or any access at all because of the loopback.
<twb> l0n: I *think* he's trying to do something relating to networking, and is getting a 169.254/16 address from avahi.  Or possibly he's looking at "scope local" on an iface address.
<twb> Zx432: what you're saying doesn't make any sense.
<Zx432> ifconfig -a gives me:
<l0n> twb, that does indeed sound reasonable
<twb> Zx432: do not use ifconfig.
<Zx432> ok
<twb> Zx432: pastebin the output of "ip r", "ip a" and /etc/network/interfaces.
<qman__> Zx432, there is always a loopback adapter
<qman__> it has nothing to do with your ability to get to the internet
<Zx432> ok
<Zx432> but I still can't ping anything.
<Zx432> I hope it is clear I am talking about another computer.
<qman__> that's fine, but it has nothing to do with the loopback adapter, you need to configure your interface, and verify that one is even loading in ifconfig -a or ip a
<Zx432> ok
<Zx432> how?
<qman__> in ifconfig -a, you will see blocks of information for each adapter which is configured
<qman__> there will always be a lo0
<qman__> beyond that, you should have an eth0, or eth1, or ath0, or wlan0, etc
<qman__> if you don't, that means you've got driver issues or hardware issues
<qman__> if you do, it's simply not configured
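For the "present but not configured" case qman__ describes, a minimal /etc/network/interfaces stanza would be enough (assuming the adapter shows up as eth0, and a DHCP server on the router):

```
# /etc/network/interfaces -- bring the NIC up with DHCP at boot
auto eth0
iface eth0 inet dhcp
```

After saving, `sudo ifup eth0` applies it without a reboot. As it turns out below, this card never shows up at all, which points at a driver problem instead.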
<Zx432> what is lo?
<l0n> lo = loopback
<Zx432> It is te only thing showing
<Zx432> So it is a driver problem?
<twb> Zx432: lspci -nn | grp net
<twb> *grep net
<Zx432> It tells me the name of the card.
<Zx432> What do the numbers mean?
<twb> That's the PCI ID.
<twb> You can enter it into the kmuto.jp HCL page to find out if it's supported
<twb> If it isn't showing up in "ip a" then it probably isn't
<qman__> yeah, a google search of it will also help
<qman__> if it's not supported, you'll likely find lots of other people having the same problem
<twb> And lots of unhelpful "I cargo-culted this and it works OKish" comments...
<uvirtbot> New bug: #954477 in swift (main) "swift config doesn't match upstart expectations" [Undecided,New] https://launchpad.net/bugs/954477
<Zx432> Winbond electronics corp w89c940 first searches don't fill me with hope...
<uvirtbot> New bug: #954632 in lxc (universe) "The -- option to lxc-start-ephemeral no longer works" [Undecided,New] https://launchpad.net/bugs/954632
<trimeta> I just realized that my 10.04 LTS server's been sitting around with the 2.6.32-35 kernel for a while, when USN-1389-1 noted a number of serious security bugs that should have made me reboot.
<trimeta> However, whenever aptitude told me to upgrade my kernel, I always checked the changelog to see if I actually needed to reboot into the server...all it says there is "Bump ABI."
<trimeta> Shouldn't the changelog make some mention of "security bugs fixed," and set the urgency to something other than "low"?
<patdk-lap> dunno about aptitude, but apt-get tells me those things
<twb> trimeta: you hit C in aptitude to see the changelog?
<trimeta> I mean, landscape (which is presumably fueled by aptitude) gives me a message saying "N packages can be updated, M updates are security updates." when I log in, but it doesn't say which ones are which.
<trimeta> twb: I typed "aptitude changelog linux-server"
<twb> Either 1) you are looking at the meta package; or 2) there are multiple changelog entries since the current version, and you're only looking at the latest (first) one.
<twb> FYI, there is an extra package you can install that lets aptitude boldify all the new changelog entries
<trimeta> twb: I may be looking at the meta package...let me see if typing the full package name helps.
<twb> Also apt-listchanges and apt-listbugs
 * patdk-lap just doesn't bother, and updates on every kernel
<twb> trimeta: the metapackage is built from a separate stub source package than the actual kernel itself -- this is a kernel-specific peculiarity, normal packages don't have that issue
<trimeta> OK, aptitude changelog linux-image-2.6.32-39-server does give a more meaningful changelog; I'll need to remember that in the future.
<trimeta> It still could be clearer here which of these updates are security-related.
<twb> trimeta: in aptitude's GUI, it's a separate section
<trimeta> I'm accessing the server over ssh; in fact, I don't have X installed on this thing.
<twb> aptitude's GUI is an ncurses GUI
<twb> (Well, it also has a GTK GUI now.  Of course, you can tunnel X applications as long as there is an X *server* installed on your SSH *client* side.)
<trimeta> I've never used aptitude's ncurses interface, actually...hmm, let's see what's here.
<twb> http://paste.debian.net/
<twb> Gah
<twb> That pastebin hates screenshots with box chars in them :-/
<twb> http://cyber.com.au/~twb/tmp.txt <-- screenshot
<twb> http://paste.debian.net/159636/
<trimeta> OK, so if I have security updates, they'd be there?
<trimeta> I usually just update things using a script that runs "aptitude update && aptitude safe-upgrade".
<twb> You might have unattended-upgrades installed and enabled
<twb> In which case you wouldn't notice most of the time
<trimeta> It's definitely installed...
<trimeta> But I think it's disabled (no mention in /etc/apt/apt.conf.d/10periodic).
<bearly230> Hello all. I was hoping you all could point me to a good guide for setting up an internet gateway using ubuntu 11.10.
<uvirtbot> New bug: #954692 in nova (main) "cannot detach volume from terminated instance" [Undecided,New] https://launchpad.net/bugs/954692
<uvirtbot> New bug: #923426 in samba (main) "smbd crashed with SIGABRT in store_inheritance_attributes() (dup-of: 911680)" [Medium,New] https://launchpad.net/bugs/923426
<uvirtbot> New bug: #927891 in samba (main) "smbd crashed with SIGABRT in store_inheritance_attributes() (dup-of: 911680)" [Medium,New] https://launchpad.net/bugs/927891
<uvirtbot> New bug: #954721 in cloud-init (main) "90_dpkg_maas.cfg is world readable" [Undecided,New] https://launchpad.net/bugs/954721
<variant> hi all, my local dhcp server is a little on the flakey side. is it possible to stop dhcpcd from deconfiguring the interface if the dhcp server dissapears for a few minutes?
<brando753> what is the ideal way to load balance a drupal site between three servers, I feel rsync might be too slow if a new user signs up and then gets redirected to a non-updated server
<SpamapS> brando753: rsync doesn't do load balancing
<SpamapS> brando753: you mean how do you keep shared files in sync?
<brando753> no rsync to update content
<brando753> between the servers and using round robin
<SpamapS> the content should be in a database server
<brando753> but that is not an ideal situation for me
<brando753> SpamapS, video content
<SpamapS> brando753: for video you want an object storage solution. I like MogileFS, CEPH, and Swift
<SpamapS> dunno if drupal has built in support for any of those
<brando753> Im using red5
<brando753> but Im just trying to setup the main servers for load balancing
<brando753> I dont want the database in one server in case something happens
<twb> I thought all the cool kids used S3/EC2 for load balancing shite
<twb> I know drbd is a huge fucking pain
<SpamapS> drbd is easy.. its the cluster managers like corosync/pacemaker that are a PITA
<SpamapS> but manually switching drbds in a catastrophic failure scenario is easy
<SpamapS> brando753: mysql has several ways to make it HA so you can put it on all 3 servers
<SpamapS> brando753: I really like Galera for that..
<SpamapS> brando753: http://codership.com/products/mysql_galera
<twb> SpamapS: if you have a NOC monkey that isn't an idiot, I guess
<twb> SpamapS: oh did I mention the customer also wanted to drbd the root fs
<brando753> thanks I will check that out, how about for actual file synchronization? is rsync the best choice?
<twb> SpamapS: so I was estimating how hard it would be to manage drbd from within the initrd...
<twb> brando753: rsync is good at moving arbitrary data from one place to another.
<twb> brando753: for code, you should be using VCS not rsync
<twb> brando753: for RDBMSs, you should be using database-specific sync mechanisms
<SpamapS> brando753: no, for videos it would be a bad choice
<SpamapS> brando753: since they could take quite a while, and be missing for a long time
<SpamapS> twb: drbd root is a bit silly
<twb> SpamapS: yes well, customer was more than a bit silly
<brando753> thanks I will check some of this out
<SpamapS> brando753: use a distributed object store.. looks like red5 has some massively complicated way to do that.. http://trac.red5.org/wiki/Documentation/Clustering/EdgeOriginSolutiononTerracotta
<koolhead17> adam_g: hi there
<mrintegrity> hi all, my local dhcp server is a little on the flakey side. is it possible to stop dhcpcd from deconfiguring the interface if the dhcp server dissapears for a few minutes?
<_ruben> a dhcp server disappearing for a few minutes shouldn't cause any issues, unless you have insanely small lease expiration times
<mrintegrity> _ruben: hmm, guess that could be the problem actually
<mrintegrity> a typ
<mrintegrity> a typo
<twb> _ruben: or it happens to be when that least expires
<twb> *lease
<_ruben> that's why it should work like most ipsec implementations (not sure if that's the case currently with dhcp implementations): attempt to renew long before expiry and only deconfig when it fails to do so at expiry time
<twb> Maybe it does; I am only guessing
<_ruben> makes 2 of us then :)
<_ruben> then again, the most obvious solution would be to fix the dhcp server
<twb> Pfft, fix things?  You must be new here
<_ruben> :)
<_ruben> breaking stuff does tend to be way easier
<mrintegrity> _ruben: to be honest, I did fix the dhcp server and this is no longer a problem. my only concern is future dhcp server breakage causing unplanned downtime on the whole network because of this feature/misconfiguration :)
<mrintegrity> 08:14 < _ruben> that's why it should work like most ipsec implementations  ... I will test this
<_ruben> i wonder if dhcp has separate renew and expire lifetimes (like ipsec does), then you could set it to renew like every hour, but expire only after a day for instance. probably isn't the case though
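For ISC dhclient at least, dhclient.conf(5) lets the client override the timing options it got from the server; a sketch, with illustrative values (whether this helps depends on the client actually honouring them):

```
# /etc/dhcp/dhclient.conf -- ISC dhclient only; numbers are illustrative
supersede dhcp-lease-time 86400;     # treat leases as lasting a day
supersede dhcp-renewal-time 3600;    # but start trying to renew hourly
```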
<diplo> Morning all
<a_ok> I can stop mysql with service mysql stop. It gives an error: stop: Method "Get" with signature "ss" on interface "org.freedesktop.DBus.Properties" doesn't exist. After that upstart is all messed up: start: Unknown job: mysql
<twb> a_ok: why are you telling us?
<a_ok> twb: how do I get to start mysql?
<twb> Oh sorry, I thought this was #networking
<twb> a_ok: it looks like it's pissed that dbus isn't running or some dbus backend magic isn't installed
<twb> Or wait, are you root?
<a_ok> yes i am
<twb> "sudo stop mysql"
<a_ok> twb: unknown job mysql...
<twb> Is there an /etc/init/mysql.conf ?
<a_ok> yes there is
<twb> I am not convinced you're actually root
<twb> It sure feels like you're not, from the symptoms
<a_ok> twb: I am convinced I am root however I seem to have corrupted config. Weird that it would stop the first time though
<twb> upstart might not reread the config file while the job is running
<a_ok> twb: ah. that might explain it
<a_ok> twb: I am trying to get a ulimit -n in there. it's in the pre-start now but not doing anything
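For what a_ok is attempting: a ulimit in pre-start only affects the pre-start shell, not the daemon itself. Upstart has a native stanza for this; a sketch (the numbers are illustrative):

```
# in /etc/init/mysql.conf, at the top level of the job:
limit nofile 32768 32768      # soft and hard open-file limits

# then make upstart reread job definitions:
#   sudo initctl reload-configuration
```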
<Jeeves_> A core dumping init... Should I worry? :)
<lynxman> morning o/
<koolhead17> hello lynxman
<Jeeves_> http://imgur.com/8cePA
<lynxman> koolhead17: ello!
<jamespage> Daviey: poke rabbitmq
<jamespage> morning BTW :-)
<Daviey> jamespage: poke received
<jamespage> gah
<jamespage> bencer: around? lots of empty packages...
<bencer> jamespage: yup
<bencer> tell me
 * jamespage hates cdbs for this
<jamespage> so
<jamespage> side effect of adding the transitional packages I'm afraid
<bencer> didnt check the launchpad build, jacalvo uploaded that
<jamespage> when cdbs builds a source package with a single binary package
<jamespage> $(DEB_DESTDIR) points to debian/<BINARY_PKG_NAME>
<jamespage> when it has multiple binary targets it points to debian/tmp
<bencer> jamespage: what do you suggest to do in this case then?
<jamespage> just thinking about it
<jamespage> bencer, http://build-common.alioth.debian.org/cdbs-doc.html#id2550863
<jamespage> Alternatively, write a packagename.install file listing: debian/tmp/*
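The .install variant jamespage mentions would look something like this (the package name is a placeholder):

```
# debian/mypackage.install -- with multiple binary packages, cdbs installs
# into debian/tmp, so list what this package should take from there:
debian/tmp/*
```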
<pabelanger> Well, after testing yesterday.  I'm happy with how nova and dbconfig-common are working together; uploading the patch to launchpad
<jamespage> bencer: thats one way around it
<jamespage> OR zbuildtools could be updated to target the right directory for the package.
<bencer> ok, let me have a look at it
<_ruben> Jeeves_: who needs init anyways ? :p
<jamespage> bencer: I've not looked yet but there could be a problem with upstart configuration getting installed into the transitional packages as well
<bencer> ok, going to have a look at that too
<jamespage> bencer: I'll do a quick local build to check now
<jamespage> bencer: confirmed - both packages get all upstart configurations.
<bencer> arr :-/ then maybe the easiest thing is to change zbuildtools?
<Jeeves_> _ruben: Indeed. init is sooooooooo overrated :)
<bencer> jamespage: going to have a look on a change on zbuildtools
<jamespage> bencer: I think so
<uvirtbot> New bug: #954915 in nova (main) "Add dbconfig-common support to nova" [Undecided,New] https://launchpad.net/bugs/954915
<pabelanger> ok, I have 2 merge requests up, 1 for the glance package and the other is for the nova package.  Both add support for dbconfig-common.  Reviewers wanted / welcome :)
<uvirtbot> New bug: #954759 in php5 (main) "php5 10.04 LTS critical bug fixes" [Undecided,Invalid] https://launchpad.net/bugs/954759
<chmac> logwatch is run from /etc/cron.daily/00logwatch on 3 hosts. On 2 hosts it runs at 5:25 and on one at 6:25, any idea why it's different on one host?
<chmac> I've checked the time and timezone on all three hosts, the output of `date` is the same on all.
<_ruben> chmac: have a look at /etc/crontab
<chmac> _ruben: Strange, they're all identical. I did change the timezone after setting up the servers, I wonder if it needs to reboot to take effect. I just restarted the cron service, maybe that will resolve it.
<chmac> The two hosts at 5:25 are actually running wrong, it's meant to run at 6:25, so I'll keep an eye on it, maybe reboot those nodes if necessary.
<chmac> _ruben: Thanks for the feedback btw
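One likely explanation, offered as a guess: cron reads the timezone when it starts, so a later timezone change shifts nothing until the cron service is restarted (no reboot needed). A sketch:

```shell
sudo dpkg-reconfigure tzdata     # set the zone interactively
sudo service cron restart        # cron picks up the new timezone on restart
grep cron.daily /etc/crontab     # confirm every host runs daily jobs at the same minute
```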
<rsajdok> what are the minimum requirements for the ubuntu 10.04.4 server? I have a vps with 128MB ram. Is that enough memory?
<patdk-wk> non-vps it uses like 26megs
<patdk-wk> so with vps, it should use less
<rsajdok> patdk-wk: thanks
<uvirtbot> New bug: #955070 in juju (universe) "charm getall stderr output" [Undecided,New] https://launchpad.net/bugs/955070
<Cryp71c> Does ubuntu server still store the default runlevel in /etc/inittab ?
<lynxman> I'm trying to run libvirt in precise but it's spewing all kinds of errors http://pastebin.ubuntu.com/883276/
<lynxman> Any idea where to start looking at?
<Cryp71c> lynxman, it's not much, but it looks like a process which is supposed to pull hardware info is failing terribly. I'm unsure as to why it's doing this or what a fix may be; a quick google turned this up as the page of the process in question: http://www.nongnu.org/dmidecode/
<uvirtbot> New bug: #700146 in eucalyptus "The default configuration of httpd-cc.conf on the cluster controller is causing apache2 to write the /var/log/eucalyptus/cc.log* files incorrectly. Versions 1.62, 2.02 of eucalyptus-cc." [Undecided,New] https://launchpad.net/bugs/700146
<lynxman> jamespage: so what would be better, reverse patch 2.7.11 or go to 2.7.12? Isn't it too late at this point in the cycle to jump one version up?
<jamespage> lynxman, hmm
 * jamespage scratches his chin
<lynxman> Daviey: ^^^ ??
<jamespage> lynxman, bearing in mind the last few releases from upstream how do we feel about taking a whole new point release?
<jamespage> has 2.7.11 proven a bit more stable?
<jamespage> lynxman, anyone from puppetlabs/puppet devs around to help us make this decision?
<lynxman> jamespage: I can ask stahnma later today
<lynxman> jamespage: at this point we're pretty far along in the release cycle, the version should be stable
<lynxman> *should*
<jamespage> I agree
<jamespage> so the patch makes more sense
<jamespage> lynxman, whats the impact of not taking that patch
<jamespage> I did not ask that in the MP
<lynxman> jamespage: it reverts a newly introduced lock file mechanism
<lynxman> jamespage: looks like it broke existing platforms running 2.7.x so it was decided to reintroduce it in 3.0 and drop it from 2.7.x
<jamespage> lynxman, ack - fix it up and I'll upload
<lynxman> jamespage: thanks :)
<lynxman> jamespage: will fix it right away
<jamespage> lynxman, you might want to run 'update-maintainer' on that branch as well
<jamespage> saves me typing it :-)
<lynxman> jamespage: hehe will do
<Daviey> lynxman: yeah, we can resolve that in euca
<Daviey> Have a patch
<lynxman> Daviey: cool
<uvirtbot> New bug: #955110 in juju (universe) "juju should tell me that I'm not in libvirtd group when running juju bootstrap" [Undecided,Confirmed] https://launchpad.net/bugs/955110
<uvirtbot> New bug: #955168 in lxc (universe) "lxc-start seems to reset properties of input devices" [Undecided,New] https://launchpad.net/bugs/955168
<hallyn> stgraber: did you have any complaints about my manpages patch?  if not i'll roll it into the package right now (along with a patch from gary_poster)
<stgraber> hallyn: nope, I only briefly looked at it but it looked good
<hallyn> ok, thanks.
<stgraber> hallyn: btw, I have apparmor working fine here now but I'm not too sure what we should do with the profile to make it block what we want (moving /proc and /sys) while allowing everything else so we don't need to patch it every other week for other distros/versions/...
<stgraber> hallyn: http://paste.ubuntu.com/883431/ is what I have now
<hallyn> stgraber: well i think we want to allow tmpfs anywhere,
<hallyn> restrict /proc and /sys to those locations,
<hallyn> and refuse securityfs
<hallyn> (and debugfs)
<hallyn> and maybe add a comment (or a conditional if possible?) to just allow all mounts
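Roughly what those rules might look like in apparmor mount-rule syntax (a sketch only; the real profile is the one in stgraber's paste, and the exact paths depend on how lxc pivots the container root):

```
mount fstype=tmpfs,              # tmpfs anywhere
mount fstype=proc -> /proc/,     # proc only at its expected mountpoint
mount fstype=sysfs -> /sys/,     # likewise sysfs
deny mount fstype=securityfs,
deny mount fstype=debugfs,
```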
<hallyn> stgraber: what you have now works with current kernel and apparmor userspace?
<hallyn> or still waiting on pushes?
<stgraber> hallyn: you need a new parser and kernel
<hallyn> if it works, i can just stage my changes in ubuntu:lxc and let you update?
<hallyn> ok
<stgraber> 12:44 <@jjohansen> people.canonical.com/~jj/linux-image-3.2.0-18-generic_3.2.0-18.29_amd64.deb
<stgraber> 12:44 <@jjohansen> people.canonical.com/~jj/apparmor_parser
<stgraber> hallyn: ^
<stgraber> hallyn: blocking debugfs and securityfs caused mountall to fail in an interesting way for me, we might need to patch it to deal with apparmor denying the mount
<stgraber> hallyn: or allow these fs only at the "right" location and then deny access to the content
<hallyn> i guess the latter is ok
<hallyn> can we let lxc-start mount it, and deny the container mounting it?
<hallyn> (i.e. if it's mounted will mountall ignore it)
<stgraber> yeah, if it's mounted mountall won't touch it
<hallyn> I guess our existing restrictions already refuse writes to those, so it's not a big deal either way
<hallyn> but the advantage of having the host mount it is that it keeps the container policy easier to understand
<hallyn> disadvantage, we need to update container fstab in templates
<hallyn> stgraber: meanwhile, should i push the two fixes i have, or do you have something else to stage today?
<stgraber> hallyn: push what you have, the apparmor changes still depend on having some other bug fixed in apparmor and new userspace + kernelspace upload
<hallyn> kthx
<stgraber> hallyn: what was the reason for blocking /sys/fs/fuse/connections/ ?
<hallyn> <shrug>  fuse is scary?  :)
<hallyn> i don't know the fuse api.  i just don't want the containers being able to muck with the kernel through the fuse stuff
<hallyn> if everything under /sys is safe, then we can remove that.
<hallyn> (but i have my doubts)
<stgraber> hallyn: don't we allow /dev/fuse already?
<hallyn> yes but /dev/fuse is world writeable
<hallyn> looking at /sys/fs/fuse on my laptop i guess it looks ok
<hallyn> how do you register a new fuse handler?
<hallyn> (that's what i want a container to be refused)
<hallyn> does that require a modprobe?
<stgraber> apparently when creating a new fuse mount through /dev/fuse a new entry appears in /sys/fs/fuse/connections/
<stgraber> owned by the user owning the mount
<hallyn> so the only problem is a privacy one with root in container seeing info for user on host
<stgraber> so in my case I see an entry in /sys/fs/fuse/connections/ that's owned by my user (500 for directory and some files at 600 in there)
<hallyn> well i did an sshfs mount, and don't see anything bad there
<stgraber> hallyn: well, currently it can access it, it just can't write to it
<hallyn> so container could do a DOS with max
<hallyn> max_connections
<hallyn> but that's it
<hallyn> and i don't think it needs to write to it :)  but near as I can tell it's not bad if it can
<hallyn> you know, compared to other stuff it can do :)
<stgraber> I guess I'm fine keeping it read-only for now and allowing read/write when we have a bug report, does that sound good?
<hallyn> yup
<hallyn> stgraber: i'm about to turn back to qemu and libvirt bugs.  do you want me to be testing the apparmor stuff right now instead, or are you ok fleshing that out still?
<hallyn> (i wonder if anyone would notice that there isn't a lxc-clone manpage...)
<stgraber> hallyn: I'm working a bit on it now, will need jjohansen's help on a mount weirdness though
<stgraber> hallyn: one more question, shouldn't we deny /proc/sys/fs/**?
<stgraber> hallyn: mostly thinking of /proc/sys/fs/binfmt_misc/
<stgraber> hallyn: oh, we do, sorry, I'm blind :)
<stgraber> hallyn: nice side-effect of apparmor: [ 9736.120788] type=1400 audit(1331742393.327:650): apparmor="DENIED" operation="mount" info="failed type match" error=-13 parent=8952 profile="/usr/bin/lxc-start//lxc_container" name="/" pid=8964 comm="mount" flags="ro, remount"
<hallyn> ah yes.  i was hoping for that, and have bugged poor jjohansen like 20x over the last 2 months asking whether that would happen :)
<hallyn> \o/
<hallyn> heck now we could drop the rootfs.pin file...  so ppl could stop asking me to put text in there
<hallyn> (but as i'm sure ppl will be disabling apparmor in some cases, better not)
<stgraber> hallyn: http://paste.ubuntu.com/883514/
<stgraber> hallyn: with the FIXMEs fixed, I think I'll be happy to use that for precise
<lynxman> hallyn: hey hallyn, I see you uploaded a new version of libvirt that solves a couple issues, it's not still on the archive but installing from scratch the previous one I have this problem http://pastebin.ubuntu.com/883276/ have you seen this before?
<pabelanger> Quick question, I've posted a few merge requests for nova and glance (OpenStack). I'm unsure if I have set them up to notify the proper people; I have not heard much feedback yet.  It's only been a day, but I want to make sure I did the review properly
<adam_g> pabelanger: where did you propose the merges?
<uvirtbot> New bug: #955231 in mysql-dfsg-5.1 (main) "Unable to install sympa during debian-installer" [Undecided,New] https://launchpad.net/bugs/955231
<pabelanger> adam_g: bug 954915 and bug 953093 have each being linked to the merge request
<uvirtbot> Launchpad bug 954915 in nova "Add dbconfig-common support to nova" [Undecided,New] https://launchpad.net/bugs/954915
<uvirtbot> Launchpad bug 953093 in glance "Add dbconfig-common support to glance" [Undecided,New] https://launchpad.net/bugs/953093
<azertyu> hello
<azertyu> my system reboots 3 times per day
<azertyu> i got this error : [kern.info] kernel Copyright (C) 2004 MontaVista Software - IPMI Powerdown via sys_reboot.
<azertyu> on my log
<EvilResistance> azertyu:  is your system a VPS?
<azertyu> correct
<EvilResistance> i'd assume the hardware node went down for updates/upgrades
<EvilResistance> sys_reboot might be being called from the hardware node
<adam_g> pabelanger: oh nice. would you mind redirecting the proposals to lp:~ubuntu-server-dev/nova/essex and lp:~ubuntu-server-dev/glance/essex ?
<EvilResistance> thereby signaling to the VPSes to shutdown, and come back up after the upgrades
<pabelanger> adam_g: Okay sure, did I link to the wrong branches?
<hallyn> stgraber: hm.  should all regular block devices be allowed to be mounted in containers you think?
<hallyn> or should we make that configurable?
<adam_g> pabelanger: well, we do all the packaging work in the ubuntu-server-dev branches, and then merge into the lp:ubuntu/precise/ branches. I'm not sure if there's a way to make that more obvious, so sorry if it's unclear.
<hallyn> I think ppl will complain about it being denied.  But certainly a bad FS risks corrupting the host
<pabelanger> adam_g: Okay
<hallyn> lynxman: looking
<adam_g> pabelanger: proposals to the ubuntu-server-dev get run thru the openstack CI testing before uploading to ubuntu
<hallyn> lynxman: no, that's new to me
<stgraber> hallyn: I'm guessing most people would do that through the fstab outside of the container to avoid giving access to the block device
<stgraber> hallyn: doing so will be allowed by my current profile
<stgraber> (as lxc-start can mount anything it wants to /usr/lib/lxc/root/**)
<hallyn> stgraber: ok.  perhaps the server guide should give guidance on that
<stgraber> hallyn: probably a good idea indeed
<hallyn> (but i've sent the merge request so that'll have to wait)
<stgraber> hallyn: we definitely shouldn't recommend allowing block devices in the lxc config, it's much easier to have lxc-start mount them
<hallyn> stgraber: thanks for getting the policy working!
<stgraber> hallyn: np, just need to nag jjohansen some more, have the parser fixed (or my profile) and get all that in the archive
<lynxman> hallyn: it's very weird indeed
<azertyu> anyone ?
<azertyu> i got this error : [kern.info] kernel Copyright (C) 2004 MontaVista Software - IPMI Powerdown via sys_reboot.
<hallyn> lynxman: i've seen it before apparently https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/922628
<uvirtbot> Launchpad bug 922628 in libvirt "virbr0 not created at startup" [Medium,Invalid]
<lynxman> hallyn: ooh :)
<lynxman> hallyn: so it just happened on a new VM here
<lynxman> hallyn: interesting... anything you want me to check on the system? I rebooted it twice with the same result
<hallyn> lynxman: still the internal error Child process (/usr/sbin/dmidecode -q -t 0,1,4,17) status unexpected: fatal signal 15  has me wondering if something else is causing the root problem
<hallyn> lynxman: what does '/sbin/iptables -L -t nat' show?
<lynxman> hallyn: http://paste.ubuntu.com/883568/
<hallyn> lynxman: is that with libvirt sucessfully restarted?
<hallyn> lynxman: what confuses me is that virPidWait is supposed to ignore -EINTR
<lynxman> hallyn: restarted it yeah, but still complains
<lynxman> hallyn: also can confirm that /dev/virbr0 doesn't exist
<hallyn> lynxman: but that you can reproduce it is good news :)  can you set the log level to 1 in /etc/libvirt/libvirt.d and restart libvirtd?
<hallyn> eh.  maybe not worth it.
<hallyn> lynxman: your iptables output shows 192.168.122.0 redirects existing.  what created those?
<lynxman> hallyn: I reckon libvirt did
<lynxman> hallyn: http://paste.ubuntu.com/883574/
<lynxman> hallyn: also the interface virbr0 exists with the correct IP, but /dev doesn't :)
<lynxman> hallyn: *duh*
<hallyn> ?
<hallyn> what do you mean by /dev doesn't?
<lynxman> hallyn: nah, me being sleepy, nevermind
<lynxman> hallyn: it created the network interface properly
<lynxman> hallyn: still log says error
<hallyn> and vms have a good working network?
<hallyn> but on your first start of libvirt, they didn't?
<lynxman> hallyn: it does
<lynxman> hallyn: ah no, nova-compute doesn't start because libvirt errors
<lynxman> hallyn: it waits forever to connect to libvirt and doesn't show up in nova-manage service list
<lynxman> hallyn: can do a reverse ssh tunnel for you to connect to the VM
<hallyn> lynxman: ah. i see.  i'm confusing errno and waitpid status
<hallyn> lynxman: would it be possible for you to just reboot and not try to restart libvirt?
<hallyn> i want to know if the network still comes up right
<lynxman> hallyn: let me do that right now
<lynxman> hallyn: same result: compute waiting forever on libvirtd, virbr0 is there, so is libvirtd, it's just sitting there
<lynxman> hallyn: http://paste.ubuntu.com/883591/
<lynxman> hallyn: iptables http://paste.ubuntu.com/883594/
<hallyn> lynxman: this is annoying.  -4 is not supposed to be a valid exit code for iptables.  and the rule *is* being added.
<hallyn> I don't want to go and blindly patch the code to be quiet though
<lynxman> hallyn: :/
<pabelanger> adam_g: okay, I think I got it. Thanks for the pointer
<lynxman> hallyn: anything else I can test? If not as said, I can do a reverse tunnel so you can connect in
<hallyn> lynxman: i'm afraid if i do that my afternoon will be sucked up by it :)
<hallyn> but i guess we should
<lynxman> hallyn: heh :)
<lynxman> heidar: be my guest
<hallyn> lynxman: how do you want to do it?  ssh-import-id serge-hallyn and give me an ip addr to go to?
<adam_g> pabelanger: cool, ill hopefully give it a look today
<adam_g> pabelanger: thanks for the help, btw
<lynxman> hallyn: no need to, PMing you the details
<hallyn> thx
<pabelanger> adam_g: no problems, I needed them locally, that's why I did it
<Kiall> Heya, I'm trying to install nagios3, but I don't want apache to come in as a dependency. According to the package deps, apache is a recommends.. Yet, using --no-install-recommends still brings it in.. How can I find out where the hard "Depends" rather than "Recommends" is coming from?
<cwillu_at_work> Kiall, works fine here
<Kiall> Actually, Looks like I found it.. But would still like to know an easier way than tracing through the deps manually :)
<cwillu_at_work> Kiall, are you seeing apache2-utils and thinking that it's installing apache?
<Kiall> No, it was depending on libapache2-mod-php OR php-cgi ..
<Kiall> adding php-cgi let it work correctly..
<pabelanger> Kiall: apt-get install nagios3-core
<cwillu_at_work> Kiall, huh, it doesn't attempt to install php-cgi nor libapache2-mod-php here
<Kiall> But, I would still love to know a better way to list a dependency tree for what apt is selecting..
<Kiall> cwillu_at_work: http://packages.ubuntu.com/oneiric/nagios3-cgi
<cwillu_at_work> Kiall, nagios3-cgi is not nagios3
<cwillu_at_work> hmm, although it does seem to depend on it too
<Kiall> Yes but, nagios3 Depends on nagios3-cgi, which Depends on libapache2-mod-php5|php5-cgi|php5..
<cwillu_at_work> deps must have changed since 10.04
<Kiall> Anyway.. All sorted, but is there an easier way to have apt list a tree of what it's about to install? I'm sure I've seen a method of doing that before.. But can't find it!
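Two ways to make apt explain itself rather than tracing the chain by hand (package names as in this example):

```shell
# shortest dependency chain starting from nagios3 that pulls in php5-cgi:
aptitude why nagios3 php5-cgi

# full recursive dependency listing, recommends/suggests excluded:
apt-cache depends --recurse --no-recommends --no-suggests nagios3
```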
<cwillu_at_work> (on a tangential note: not requiring php is a hard requirement for any software I use :p)
<Kiall> lol
<smoser> SpamapS, ping
<smoser> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/615545
<uvirtbot> Launchpad bug 615545 in cloud-init "Instances launched in a VPC cannot access ec2.archive.ubuntu.com" [Undecided,Fix committed]
<SpamapS> smoser: pong
<SpamapS> smoser: sup?
<smoser> that bug...
<smoser> there has been an update sitting unverified in lucid-proposed for quite some time
<smoser> and it appears that it wouldn't even fix the issue any more
<smoser> that... and i'm looking to fix https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/615545
<uvirtbot> Launchpad bug 615545 in cloud-init "Instances launched in a VPC cannot access ec2.archive.ubuntu.com" [Undecided,Fix committed]
<smoser> oops
<smoser> https://bugs.launchpad.net/ubuntu/+source/apt/+bug/948461
<uvirtbot> Launchpad bug 948461 in apt "apt-get hashsum/size mismatch because s3 mirrors don't support http pipelining correctly" [High,Confirmed]
<SpamapS> smoser: how do we test cloud-init from proposed? do we build AMI's for proposed updates?
<smoser> so i'm thinking i should just get the lucid-proposed dropped.
<smoser> SpamapS, well you can build with -proposed, but the easiest thing really is to just launch an instance
<smoser> then
<smoser> upgrade
<smoser> rm -Rf /var/lib/cloud
<smoser> reboot
<smoser> it's reasonable for most things
<SpamapS> ah ok
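smoser's recipe above, collected into one sketch (run inside a disposable instance, assuming -proposed is already enabled in sources.list):

```shell
sudo apt-get update
sudo apt-get install cloud-init    # pulls the -proposed version
sudo rm -rf /var/lib/cloud         # forget first-boot state
sudo reboot                        # cloud-init reruns as if on first boot
```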
<smoser> but for this, i think i just need to drop that fix, and we need to pave the way for the S3 mirrors
<smoser> drop as in forget
<smoser> so, thats why i pinged you
<smoser> i think you can pull it from -proposed
<smoser> right?
<SpamapS> smoser: Probably, but I wouldn't know how to do that.
<SpamapS> I only know how to let things in. :)
<konradb> hi, how can i install oidentd on ubuntuserver?
<Daviey> smoser: seen, https://wiki.ubuntu.com/Testing/EnableProposed ?
<smoser> Daviey, i'm confused.
<smoser> i was asking how i can get something removed from proposed.
<konradb> i have just installed package "oidentd", i typed "/etc/init.d/oidentd start", i changed conf like here: http://is.gd/Z3Z3C3
<konradb> and its not working
<konradb> :/
<konradb> i added iptables -A INPUT --protocol tcp --destination-port 113 -j ACCEPT
<konradb> and output too
<konradb> whats wrong?
<Daviey> smoser: Oh.. sorry
<Daviey> smoser: I don't think you need to bother?
<smoser> well, i'm gonna do another upload
<Daviey> smoser: it's a published source, which you can declare as failed verification.. so just ignore it.
<Daviey> smoser: well you need to bump the version regardless.
<smoser> "declare as failed verification"
<Daviey> smoser: as in, treat it as a failed verification
<SpamapS> smoser: if you have another change for cloud-init.. just revert the failed one in the next upload.
<lynxman> hallyn: machine won't reboot until you manually kill process 23846
<uvirtbot> New bug: #948461 in apt (main) "apt-get hashsum/size mismatch because s3 mirrors don't support http pipelining correctly" [High,Confirmed] https://launchpad.net/bugs/948461
<hallyn> lynxman: actually i killed the dmidecode and that did it.  what IS that dmidecode and why does it fail?
<hallyn> (but, i'm afk doing lunch - bbl)
<patdk-wk> don't you need dvi/hdmi/displayport for dvidecode to work?
<patdk-wk> dmidecode :)
<foo> If it takes about 15 seconds to finish doing something like "mkdir folder" (where I create a folder) - what does that typically mean?
<patdk-wk> foo, using dm
<patdk-wk> another issue is if you're running out of diskspace/inodes
<foo> patdk-wk: hm, I don't have dm - what is that? CLI only. I don't see a problem with diskspace, how would I know if I'm running out of inodes? I wonder if I need an fsck.
<patdk-wk> df -h, and df -hi
<patdk-wk> dm would be if you're using encryption, non-mdadm software raid, lvm, luks
<cwillu_at_work> patdk-wk, I don't believe btrfs has an inode limit of that nature
<patdk-wk> I didn't see him say btrfs at all
<foo> Thanks, this is the output: /dev/sdc1                59M    646K     58M    2% /public
<cwillu_at_work> er, sorry
<cwillu_at_work> wrong channel
<foo> looks ok to me. hmm. is it possible I just need a fsck?
<patdk-wk> unlikely
<foo> weird. in that case, drive maybe going bad?
<patdk-wk> my system used to take up to 5min to do a mkdir on ext3
<patdk-wk> could be
<patdk-wk> do a smartctl test on it
<patdk-wk> smartctl -t long /dev/....
<patdk-wk> then check the results in a few hours
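Spelled out, with a placeholder device name:

```shell
sudo smartctl -t long /dev/sdc      # start the long self-test in the background
# ...a few hours later:
sudo smartctl -l selftest /dev/sdc  # self-test log, including any failing LBA
sudo smartctl -H /dev/sdc           # overall health assessment
```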
<foo> patdk-wk: I am on ext3. This is a small home based server, nothing too crazy
<hallyn> lynxman: i now can't reproduce it
<hallyn> lynxman: eh, but reverting my debug patch lets me reproduce.  heisenbug?
<smoser> hallyn, did you test IO at all in vms ?
<hallyn> smoser: no
<smoser> what am i paying you for
<smoser> oh
<hallyn> smoser: i started to set up an oneiric vm on marula, but it failed to boot
<smoser> wait
<smoser> never mind.
<hallyn> smoser: i really do want to know if it's regressed...
<smoser> yeah. i know.
<hallyn> i think that is third in line, after lynxman's bug, and daviey's patch for nested vmx
<Harald__> Hi. what do I need to do to make my USB WLAN-dongle "listen" at boot-up? Currently, when I boot and the ethernet cable is NOT connected, I cannot access the system. If I connect the ethernet cable to eth0, then all of a sudden the wlan0 becomes active and I can ssh into the box. Both are configured with static IPs, wlan0 is listed first in /etc/network/interfaces (not that I think that matters), and I'm running 11.10 Server
<Harald__> Help much appreciated!!
<esuave> is there any way i can tell how or when an IP changed on my server?
<esuave> the machine was rebooted.. and the IP is assigned statically.. but somehow it still changed..
<esuave> im thinking it may have been a user.. but is there a sure way i can tell? maybe a log or something?
<esuave> *correction.. the machine was never rebooted.. been up for 133 days
<Harald__> Doesn't anybody have an idea?
<SpamapS> Harald__: can you pastebin your /etc/network/interfaces file?
<SpamapS> Harald__: (hint: apt-get install pastebinit && pastebinit < /etc/network/interfaces)
<Harald__> auto lo
<Harald__> iface lo inet loopback
<Harald__> # The primary network interface
<Harald__> auto wlan0
<Harald__> iface wlan0 inet static
<Harald__> address 192.168.xxx.6
<Harald__> gateway 192.168.xxx.1
<Harald__> netmask 255.255.255.0
<Harald__> wpa-ssid <my-essid>
<Harald__> wpa-psk <my-psk>
<Harald__> auto eth0
<Harald__> iface eth0 inet static
<Harald__> address 192.168.xxx.5
<Harald__> gateway 192.168.xxx.1
<Harald__> the hint about pastebinit came too late, sorry...
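One thing that stands out in the config pasted above: both wlan0 and eth0 declare a `gateway`, and ifupdown cannot install two default routes, so bringing up the second interface can fail partway. A sketch with a single gateway (addresses as pasted; eth0 also appears to be missing its netmask):

```
auto wlan0
iface wlan0 inet static
    address 192.168.xxx.6
    netmask 255.255.255.0
    gateway 192.168.xxx.1
    wpa-ssid <my-essid>
    wpa-psk <my-psk>

auto eth0
iface eth0 inet static
    address 192.168.xxx.5
    netmask 255.255.255.0
    # no second gateway line
```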
<Harald__> would it help to put a 'sudo ifup wlan0' in my /etc/rc.local?
<Harald__> at least this answer (http://askubuntu.com/questions/3677/disable-wireless-on-startup) would lead me to believe that could work
<Harald__> though there the person wants to disable it...
<ses1984> hey, i had a question not sure if this is the place to ask or #ubuntu+1, anyway i just installed precise x64 in a VM and noticed that python-django* packages were not in the repos,
<ses1984> i was wondering if those were intentionally left out, or they will be added in for the final release
<ses1984> i've been using the last LTS and it included django and associated packages, and a lot more python packages
<SpamapS> ses1984: which ones are missing?
<SpamapS> python-django - High-level Python web development framework
<SpamapS> ses1984: I see python-django-* in my precise system
<ses1984> i just did "aptitude search django" and got nothing
<SpamapS> Harald__: please *do not* paste text in the channel
<SpamapS> Harald__: when I said "pastebin" I meant paste it into a pastebin website. http://paste.ubuntu.com/
<ses1984> i didn't do apt-get update yet that's probably it...
<ses1984> yup
<SpamapS> Harald__: so, the ifup is a bit confusing. After udev is done detecting devices, 'ifup -a' is run, so wlan0 should be brought up then.
<Harald__> putting that into the r.local had no effect anyway
<SpamapS> yeah I didn't read the askubuntu page, but I doubt that would change much
<SpamapS> I don't know how the wpa-* bits are handled..
<SpamapS> but I've never had any luck using WIFI on Ubuntu w/o network manager.
<Harald__> and what's better is, when I disconnect the ethernet cable then wlan0 goes down, too
<Harald__> by 'going down' I mean I lose my ssh on that interface
<Harald__> and here you go: http://paste.ubuntu.com/883847/
<hallyn> lynxman: libvirt libvirt0_0.9.8-2ubuntu13 is the only one that gives me those errors.
<hallyn> i find it hard to believe that enabling numa fixed it :)
<lynxman> hallyn: heh :)
<lynxman> hallyn: so it's a last minute bug
<hallyn> my guess is it's so timing-related that a new build rearranges the code enough to hide it
<hallyn> in which case, the bug is still there and real
<hallyn> maybe worth looking at iptables src for clues
<nancy--> how to ssh with a key?
<guntbert> nancy--: see https://help.ubuntu.com/10.04/serverguide/C/openssh-server.html
<uvirtbot> New bug: #955510 in nova (main) "failed attach leaves stale iSCSI session on compute host" [Undecided,New] https://launchpad.net/bugs/955510
<nancy--> i just did sudo apt-get install apache2. did that install php and mysql too, and how do i start these? with httpd?
<qman__> nancy--, no, those must be installed in addition, apache is just a web server
<bitmonk> hey guys i'm trying to set up a basic samba share based on the default config that ships with lucid, but my auth fails (share works if i allow guest)
<bitmonk> anyone know if there's anything about the default config that's wonky?
<bitmonk> in the log i pretty much just get create_connection_server_info failed: NT_STATUS_ACCESS_DENIED
<qman__> bitmonk, with the default config, you have to set up users in tdbsam
<qman__> sudo smbpasswd -a username
<bitmonk> ah
<qman__> libpam-smbpass supposedly keeps them in sync, and users added hereafter should get smb passwords
<qman__> but for existing users, you have to do that
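The commands in question, with a placeholder username (the account normally has to exist as a unix user first):

```shell
sudo smbpasswd -a alice    # add/set a samba password for user "alice"
sudo pdbedit -L            # list the accounts now in the tdbsam backend
```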
<bitmonk> ah. yeah our users are written out by chef, i don't think any sort of pam trigger will work on them ever.
<bitmonk> but i only really need two users.
<bitmonk> and they aren't unix users anyway. this'll do.
 * bitmonk hasn't touched samba in so many years.
<bitmonk> how is access controlled for these users? wow it really has been a while.
<nancy--> how to install mysql after i have installed apache2 and php5 ?
<SpamapS> bitmonk: the default setup will map them to unix users of the same name I THINK
<SpamapS> nancy--: apt-get install mysql-server
<uvirtbot> New bug: #955540 in juju (universe) "juju-create hard-coded to use 192.168.122.1" [High,New] https://launchpad.net/bugs/955540
<nancy--> thx
<nancy--> SpamapS,  how to give ftp access to /var/www folder ?
<SpamapS> nancy--: do not use FTP :)
<SpamapS> nancy--: ssh/scp/sftp is what you want
<SpamapS> nancy--: I'd recommend putting your website in /srv/www rather than /var/www .. as some packages may put files in /var/www even though they shouldn't. :)
<nancy--> really?
<nancy--> ok.. so how to setup sftp?
<SpamapS> nancy--: install openssh-server
<nancy--> i already have ssh access. i dont know what is installed. i am using amazon free tier
<SpamapS> nancy--: ahh ok. So you can scp files from your local machine to your instance then.
<SpamapS> nancy--: or use 'sftp' for the same purpose
<SpamapS> nancy--: you can also use 'rsync' to copy a whole directory tree
<nancy--> hmm
<nancy--> how to install sftp
<nancy--> SpamapS,
<SpamapS> nancy--: its built in with ssh
<nancy--> hm
<nancy--> thx guys
<uvirtbot> New bug: #955576 in juju (universe) "'local:' services not started on reboot" [Undecided,New] https://launchpad.net/bugs/955576
#ubuntu-server 2012-03-15
<hallyn> lynxman: hey, any chance i can still get to that machine?
<hallyn> i figure i'll try rebuilding v13 to see if it's the build itself
<hallyn> will try in another instance meanwhile
<uvirtbot> New bug: #955681 in php5 (main) "php5-common's /etc/cron.d/php5 session expiration can go crazy" [Undecided,New] https://launchpad.net/bugs/955681
<uvirtbot> New bug: #955720 in apache2 (main) "package apache2-utils 2.2.14-5ubuntu8.8 failed to install/upgrade: ErrorMessage: package apache2-utils is already installed and configured" [Undecided,New] https://launchpad.net/bugs/955720
<linocisco> hi all
<SpamapS> :q
<SpamapS> ahh.. the dreaded :q
<twb> It's a smilie that says "oops I accidentally the vim"
<linocisco> suppose i have my company email with domain abc.org, but the email port is blocked by the ISP we are using. everyone has username@abc.org accounts. so i want to set up an email server to allow or configure those accounts
<SpamapS> linocisco: so your ISP won't let you take incoming email on port 25?
<SpamapS> linocisco: I bet they'll be happy to host your MX for you.
<linocisco> SpamapS, our Novell server use port 1914. that is blocked by ISP
<SpamapS> wtf?
<taipres> incomming mail isn't 25 anyway
<twb> Yes it is.
<taipres> that's smtp port, outgoing
<twb> Incoming mail from roaming users might be on 587, but mail from random other MTAs will come in on 25
<taipres> use imap with ssl
<taipres> twb ahh good point
<twb> taipres: um, if I subscribe to a mailing list, say on sf.net, and it sends me mail, sf.net will connect to the 25 of my MX (or A)
<taipres> yeah I got it
<taipres> was thinking purely client-side, but if the client's outgoing connection is to 25, that must mean the server is listening on 25
<taipres> linocisco what are you using as a host? shared, cloud, vps, dedi?
<twb> taipres: uh, no.  If my MTA talks to your MTA, my MTA doesn't need 25 open
<twb> I connect your 25 to my <random high port>
<linocisco> taipres, my email accounts are in my office Novell Server but internet traffic is going through HQ hub
<twb> it's only if you want to give ME mail that MY 25 must be open
<taipres> twb that's what I said
<twb> At no point is my 25 connected to your 25 in a single TCP connection
<twb> taipres: OK, I misread, nm
<taipres> linocisco sounds like a big setup, they have no reason to block common ports like that
<koolhead17> hi all
<linocisco> taipres, they dont block us, because HQ's internet is open to all offices in the world. we are connected using a VSAT link to HQ. but in some remote offices we can't have a bigger VSAT overseas satellite dish, so we are still using a local ISP which blocks the port we need
<taipres> yeah your ISP has no reason to block port 25, that's ridiculous and extremely unprofessional
<linocisco> taipres, i'm thinking only of easy email access for those offices with slow connections and limited ports via the local ISP
<linocisco> taipres, yes. but local ISP here do the same way country wide
<twb> taipres: it is (and should be!) done by default on residential DSL plans
<twb> taipres: this reduces the amount of spam generated in general.  If you want to operate a mailserver, a good ISP will allow you to opt out of 25 blocking.
<linocisco> taipres, that is why I am thinking about how their email accounts created at HQ can be accessible in those offices rather than using webmail
<twb> ISPs that don't have that policy tend to get added to DNS RBLs
<taipres> twb glad I don't use DSL then, ISP should not block any port, for any reason
<taipres> and anyway he said the ISP his company is using
<taipres> am assuming that's not residential
<twb> IIRC linocisco is in burma, so it's probably a junta-owned ISP
<linocisco> taipres, here is our gov owned ISP does
<linocisco> taipres, as a workaround, i m thinking about port mapping or something like that
<taipres> i'm really sorry to hear that
<taipres> linocisco port mapping may work but the clients would need to know of the new port
<linocisco> taipres, I am not so smart in linux and ubuntu config. Just checking something possible
<taipres> linocisco if your local ISP is able to block ports and stop your emails, that means you must control the domain(assuming you're not handing out isps)
<taipres> so just rent a vps or dedicated server in another country and toss your email server on there
<taipres> http://cheapvpsdeals.info/feature-search/ can find plenty using that for $10 or less a month, usually around $3
<linocisco> taipres, to have a dedicated server in other country is not also possible
<taipres> what about VPS?
<taipres> I'm using a VPS right now 1Gbps
<taipres> allows everything
<taipres> < $5
<greppy> taipres: lots of ISPs block port 25 as a way of mitigating abuse issues, especially on residential plans.
<linocisco> taipres, how could I communicate from those remote offices? there, we can only use that local ISP line
<taipres> http://cheapvpsdeals.info/vps6-net-4-76-openvz-vps-256mb-1gbps-in-chicagous-los-angelesus-germany-turkey/
<taipres> linocisco you'll use the VPS's ISP, just set up a dns server or use a free one and point the dns (hostname yourcompany.com) to the ip of your VPS
<taipres> where your mail server is sitting
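Editor's note: "pointing the dns to the VPS" for mail means an MX record plus an A record for the mail host. A zone-file sketch (the names and address are placeholders):

```
; mail for abc.org is delivered to the VPS
abc.org.        IN  MX  10  mail.abc.org.
mail.abc.org.   IN  A   203.0.113.10    ; the VPS's public IP
```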
<linocisco> taipres, in HQ
<linocisco> taipres, the thing is how the link will be established
<taipres> where is the hostname pointing, to HQ servers?
<taipres> either way I don't see why the local ISP is even in the picture. if they want to be zealots and try and control things, you can cut them out: use a VPN to securely connect to the VPS, where your stuff is forwarded from HQ or received directly
<taipres> btw greppy ISP's that block port 25 are lame
<taipres> the days of email bouncers are over
<taipres> and most large email servers will reject residential anyway unless you login with credentials
<taipres> like gmail, yahoo.com etc...they'll all reject
<linocisco> taipres, our main office has bigger VSAT dish and using that, we are connected to HQ, so our main office is fine. but in remote offices, they are to rely on local ISP which is slow and required ports are blocked
<SpamapS> linocisco: if this is a corporate/sattelite situation.. you probably should be using a VPN anyway
<linocisco> taipres, previously, mail accounts from those difficult remote offices were on our main office server, but later, to reduce one segment in communication, we told HQ to create those accounts at HQ; now they are using webmail only
<taipres> linocisco yeah I think VPS can solve your problem
<taipres> is cheap solution, and no more local ISP non-sense
<linocisco> taipres, ok. If I go with VPS, how can I get that link using which kind ? ADSL or VSAT or Fiber or type of media?
<taipres> linocisco I think a VPN connection to hq servers via TCP would work, I don't know anything about VSAT, but if it's the only way you can communicate with HQ
<taipres> then VPS won't help
<taipres> at least it doesn't appear so
<taipres> linocisco if you don't get any solutions in here from these fine folks, #networking may bear fruit
<linocisco> taipres, ths
<greppy> taipres: sorry, I disagree, and if they were over, the counters on the rules wouldn't be incrementing :)
<RoyK> hi all. anyoen here using kvm with oneiric or precise?
<RoyK> s/oen/one/
<henkjan> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<RoyK> henkjan: yeah
<RoyK> I was curious if anyone had used sanlock
<RoyK> or managed to get it to work with current packages at all
<henkjan> i'm using kvm on oneiric. but not sanlock
<RoyK> because running kvm on a shared storage (currently nfs) is nice, but it's not very nice if a VM is started on two hosts at the same time...
<RoyK> because that makes filesystem integrity rather interesting
<jibel> jamespage, lamp-reboot test failed
<jibel> jamespage, File "/usr/bin/run-test", line 727, in <module>
<jibel>     post_reboot_results = test['post-reboot-results']
<jibel> KeyError: 'post-reboot-results'
<jamespage> jibel: wip
<jamespage> I'll fix that up today
<jamespage> jibel: hmm - on it now
<SpamapS> jamespage: btw, I'm up for a few more minutes.. :)
<jamespage> hey SpamapS!
<jamespage> nearly have that sorted
<jamespage> that test actually timed out - but the handling on timeout does not work so well
<SpamapS> jamespage: ahh, a common thread amongst unit testing frameworks that are being bent into doing functional/integration tests
<Daviey> SpamapS: you have no idea of that pain... :) .. Do you think it's proper that a unit testing framework is used to do integration testing, by git cloning the client of the tool the tests belong to? :)
<SpamapS> Daviey: if your goal was to send me off muttering curses to bed.. its working. ;)
<SpamapS> Got a bug in my bonnet to get the aws-status tray-icon-thingy ported to app indicators..
<SpamapS> nearly done
<jamespage> morning Daviey!
<Daviey> SpamapS: nice
<Daviey> morning jamespage.. i haven't forgotten!
<jamespage> Daviey, lol
<jamespage> SpamapS, zentyal is nearly ready to go - adding the transitional packages broke all of them due to the way zbuildtools uses CDBS
<jamespage> bencer is working on a fix ATM
<SpamapS> jamespage: yeah, the CDBS insanity that they have embraced needs to go. :-/
<SpamapS> jamespage: thanks for tracking that.. I've had almost no time to spend on it
<jamespage> SpamapS, yeah - I've been caught out by that issue twice in the last week
<jamespage> lynxman's changes to rabbitmq-server had the same problem
<lynxman> jamespage: *gah*
<lynxman> jamespage: going to the doc now to get blood drawn, will review afterwards if that's okay ;)
<SpamapS> jamespage: ok, well good luck on fixing the lamp stuff.. ttyl!
 * SpamapS passes out
<jamespage> jibel: I've made the collection of test results a bit more rugged - https://code.launchpad.net/~james-page/ubuntu-server-iso-testing/reboot-testing-fixes/+merge/97598
<tdn> I have encrypted rootfs with lvm/luks as chosen during install. How do I change the keys? I am aware that I can normally change luks keys with cryptsetup luksAddKey /dev/sdX, however, I am not sure whereto change the key, when using luks with LVM?
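Editor's note: with the installer's LVM-on-LUKS layout, LUKS wraps the raw partition and the LVM PV lives inside it, so key operations target the partition, not a logical volume. A sketch (the device name is an assumption; check /etc/crypttab for yours, and note these commands need root):

```shell
# Find the underlying LUKS device
cat /etc/crypttab          # e.g.: sda5_crypt /dev/sda5 none luks

# Replace an existing passphrase in one step...
sudo cryptsetup luksChangeKey /dev/sda5

# ...or add a new key first and remove the old one afterwards
sudo cryptsetup luksAddKey /dev/sda5
sudo cryptsetup luksRemoveKey /dev/sda5
```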
<jibel> jamespage, qck, I'll review your merge request today
<jibel> s/q/a
<jamespage> jibel: ta
<smb> lynxman, just as I am not completely confident that subscribing people would make them aware (well, it may or may not for me): may I point you to bug 948323, which I believe you could have an opinion about. :)
<uvirtbot> Launchpad bug 948323 in ipxe "Rom images for e1000 and ne2k missing vendor and device id" [Low,Confirmed] https://launchpad.net/bugs/948323
<uvirtbot> New bug: #955883 in squid3 (main) "Squid3 Crash assertion failed: comm.cc:349" [Undecided,New] https://launchpad.net/bugs/955883
<azertyu> hello
<azertyu> there
<azertyu> i try to install ipmitool
<azertyu> i got this error :
<azertyu> Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory
<azertyu> what does that mean ?
<_ruben> azertyu: you need to load the various ipmi kernel modules
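Editor's note: a sketch of what "load the various ipmi kernel modules" means in practice (these are the standard OpenIPMI driver names; the commands need root and BMC hardware, so treat this as illustrative):

```shell
# Load the IPMI driver stack so /dev/ipmi0 is created
sudo modprobe ipmi_msghandler
sudo modprobe ipmi_devintf
sudo modprobe ipmi_si

# Make it persistent across reboots
printf 'ipmi_msghandler\nipmi_devintf\nipmi_si\n' | sudo tee -a /etc/modules

# Then retry, e.g.
sudo ipmitool sensor list
```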
<chmac> `netstat -lp` shows a process listening on localhost:domain but under pid / name it lists only "-". How do I find out what process it is that's listening on port 53?
<chmac> Never mind, needed sudo netstat instead :-)
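Editor's note: without root, netstat cannot read other users' /proc entries, which is why the PID/Program column shows "-". Equivalent invocations:

```shell
# Classic net-tools: listening sockets with owning process
sudo netstat -lpn

# iproute2 equivalent
sudo ss -ltnp

# Narrow it down to port 53
sudo ss -ltnp 'sport = :53'
```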
<uvirtbot> New bug: #955938 in lxc (universe) "Unable to create lxc instances with ubuntu-cloud template for release other than precise" [Undecided,New] https://launchpad.net/bugs/955938
<lynxman> jamespage: back!
<lynxman> jamespage: so what do I need to fix from rabbitmq-server?
<jamespage> lynxman, only what was in the merge proposal
<uvirtbot> New bug: #955935 in lxc (universe) "Creating lxc instances using ubuntu-cloud template throws warning" [Undecided,New] https://launchpad.net/bugs/955935
<lynxman> jamespage: hmm rabbitmq-server is only a debdiff afaict
<lynxman> jamespage: puppet is the merge proposal, going to fix that now
<jamespage> lynxman, you are quite correct
<jamespage> still waiting on FFe approval for rabbit
<jamespage> I was commenting on the CDBS behaviour you saw when you switched rabbitmq-server to be multi-binary
<lynxman> jamespage: aaah okay, got confused then
<jamespage> lynxman, how do you feel about upstartifying puppet?
<jamespage> something for next release?
<lynxman> jamespage: I can do that now if we might
<lynxman> jamespage: I fear no upstart :)
<jamespage> bug 954368
<uvirtbot> Launchpad bug 954368 in puppet "Upstart script puppet agent" [Low,New] https://launchpad.net/bugs/954368
<jamespage> lynxman, I think it's not a bad idea
<lynxman> jamespage: taking that one :)
<jamespage> however I'm reticent to do it this late in the cycle
<lynxman> jamespage: well, if we do it properly it should be fine
<jamespage> lynxman, I also think the entire package should be done - it has three init scripts.
<lynxman> jamespage: yeah, the puppetmaster ones and the puppet client one
<lynxman> jamespage: shouldn't be a problem
<jamespage> lynxman, we really need to be restricting changes to bugfixes now
<jamespage> I think this is a new feature
<lynxman> jamespage: your call, you're wiser than me ;)
<lynxman> jamespage: I think it would be a nice feat for an LTS
<jamespage> lynxman, hmm
<jamespage> lynxman, personally I don't think we should now; lets park it for early next cycle
<jamespage> I'll comment on that bug
<lynxman> jamespage: your call :)
<Daviey> lynxman: Is there a bug with how it's currently handled ?
<Daviey> lynxman: What will it do to improve things?
<lynxman> Daviey: it will just make things the same, in an upstart fashion, that's why I'm neither for nor against
<Daviey> lynxman: nah, hold off.
<lynxman> Daviey: will do :)
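Editor's note: for readers wondering what "upstartifying" the agent would involve, a minimal job might look like this. This is a sketch, not the configuration that eventually shipped; the exec flags are assumptions.

```
# /etc/init/puppet.conf -- hypothetical upstart job for the puppet agent
description "puppet agent"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
exec /usr/bin/puppet agent --no-daemonize --logdest syslog
```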
<uvirtbot> New bug: #955999 in unixodbc (main) "package odbcinst1debian2 2.2.14p2-5ubuntu3 failed to install/upgrade: ErrorMessage: dependency problems - leaving unconfigured" [Undecided,New] https://launchpad.net/bugs/955999
<uvirtbot> New bug: #956000 in juju (universe) "'juju' with no arguments gives confusing message" [Undecided,New] https://launchpad.net/bugs/956000
<benji> I have an LXC question: after creating and starting (in daemon mode) a container I use lxc-console to log in.  I get a command prompt and when I issue my first command (no matter what that command is) I am then sent back to the login prompt.  Thoughts?
<uvirtbot> New bug: #956019 in keystone (universe) "keystone doesn't install the copyright file" [High,New] https://launchpad.net/bugs/956019
<uvirtbot> New bug: #950935 in glance (main) "Allow adm group to read log files" [Medium,Fix released] https://launchpad.net/bugs/950935
<hallyn> benji: thought:  "that's whack"
<benji> heh
<hallyn> actually it sounds like a problem with libraries.  or OOM.
<hallyn> anything in syslog about SEGVs or OOM?
<benji> hallyn: ooh:
<benji> [ 6448.889458] zsh[5057]: segfault at 10000 ip b758b914 sp bff1dd64 error 4 in libc-2.11.1.so[b755d000+153000]
 * benji disables his .zshrc and tries to log in.
<benji> ok, that is insane: something in my .zshrc is causing zsh to segfault (but only in a container, not on the host)
<benji> but, it mean's I'm at least working again; thanks hallyn!
 * benji wonders how that stray apostrophe got in here.
<maswan> wtf, postgresql-8.3 was removed from lucid, so you no longer have an upgrade path for a hardy postgresql server? it used to work a year or two ago
<jamespage> lynxman, around? have a question about the swift upstart configs that I think you authored
<cwillu_at_work> maswan, the upgrade path is likely http://www.postgresql.org/docs/current/static/pgupgrade.html
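Editor's note: on Debian/Ubuntu the postgresql-common wrappers are the usual front end to that migration. A sketch ("main" is the default cluster name; take a backup first):

```shell
# Install the new major version alongside the old one
sudo apt-get install postgresql-8.4

# Dump the 8.3 "main" cluster and reload it into a new 8.4 cluster
sudo pg_upgradecluster 8.3 main

# Once the new cluster checks out, remove the old one
sudo pg_dropcluster 8.3 main
```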
<bencer_> jamespage: i've fixed zbuildtools so now we dont get empty packages, uploaded here, going to copy now the other packages from previous ppa to force the rebuild
<bencer_> and everything should be ready: https://launchpad.net/~bencer/+archive/zentyal-2.3-precise
<pabelanger> ogra_: thanks for the upload
<ogra_> welcome
<lynxman> jamespage: otp, 1 sec :)
<jamespage> bencer: nice one
<jamespage> lynxman, np
<maxon> hello ubuntu-server. can i boot ubuntu-server from usb-hdd?
<bencer_> jamespage: do you need something else? should we wait until all packages are built?
<jamespage> bencer_, that should be enough
<jamespage> I'll take a look later today/early tomorrow.
<bencer_> anyway, i'll copy the other packages to that ppa when zbuildtools is published
<jamespage> yp
<jamespage> yes please
<bencer_> jamespage:  ok thanks!
<jamespage> bencer_, no problem!
<lynxman> jamespage: I'm back, shoot! :)
<jamespage> lynxman, OK - so I just picked up a bug re the swift upstart configurations
<lynxman> hallyn: sorry, went to bed yesterday, the machine is available again at the same location as before
<jamespage> lynxman, http://paste.ubuntu.com/884885/
<jamespage> I don't understand the "find" bit
<qman__> I need some suggestions for bandwidth monitoring, $boss wants to know how much bandwidth certain people are using on the network, so I want to get totals per host over time, preferably with nice graphs
<lynxman> jamespage: it is to check that we have at least one configuration file available in the configuration directory, because it comes with none by default
<pabelanger> maxon: sure, why not.  Assuming your BIOS supports it
<qman__> ntop is the only thing I know of which does this, though it doesn't do it well, and it keeps crashing, and every time it does, restarting it loses all the data collected
<jamespage> lynxman, bug 954477
<nbozo> Hi. In Ubuntu server, cluster and ctdb support is not compiled at build time anymore since years. How do you - Ubuntu server users - use samba in an Ubuntu cluster?
<uvirtbot> Launchpad bug 954477 in swift "swift config doesn't match upstart expectations, required upstart configurations not installed." [High,In progress] https://launchpad.net/bugs/954477
<jamespage> I guess the config locates have moved around a bit
<jamespage> locations....
<lynxman> jamespage: could really be, I already patched one of them :)
<lynxman> jamespage: Mind these scripts were created for diablo
<jamespage> lynxman, OK - I just wanted to check I was not missing anything - I'll fix it up now
<lynxman> jamespage: thank you very much ;)
<maswan> cwillu_at_work: ok, will try that then
<cwillu_at_work> maswan, I believe the problem is that 8.3 will be eol before lucid is eol, so it had to happen eventually
<cwillu_at_work> and an automatic migration from 8.3 to 8.4 would be silly
<cwillu_at_work> (where "silly" is defined to mean "would work fine for people who use postgres as a toy, and cause massive homicide-inducing disruption for those who don't")
<maswan> cwillu_at_work: but upgrades from hardy need to work until hardy is EOS
<cwillu_at_work> maswan, upgrades work fine
<cwillu_at_work> nobody ever made a promise that upgrade will not require any manual intervention, just that you'll have support in doing so
<cwillu_at_work> and migrating from 8.3 to 8.4 is supported by postgres
<cwillu_at_work> (in the "it should work, and people won't say it's unsupported if it breaks")
<cwillu_at_work> (...sense)
<maswan> cwillu_at_work: sure, I know how it worked when I upgraded the previous batch of db servers
<jamespage> lynxman, OK have enough to work with now
<andygraybeal> hey guys, i got a 10.04 kvm/libvirt server. often when i go to build a new machine, it fails. it seems this happens after an automatic update; after i reboot, all is well. i don't have any debug information, i just wonder if anyone has an idea of what might be going on from this simple description.
<koolhead17> lynxman: gosh!!
<raubvogel> If I have two encrypted partitions with the same passphrase, is there a way to enter the passphrase only once during bootup?
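Editor's note: one answer (not confirmed in the channel) is the decrypt_derived keyscript shipped with cryptsetup on Ubuntu: unlock the first volume interactively and key the second from it. An /etc/crypttab sketch (device and mapping names are assumptions):

```
# first volume: unlocked with the passphrase at boot
crypt1  /dev/sda5  none    luks

# second volume: key derived from crypt1's master key; that key must
# first be added with something like
#   /lib/cryptsetup/scripts/decrypt_derived crypt1 > /tmp/key
#   sudo cryptsetup luksAddKey /dev/sda6 /tmp/key
crypt2  /dev/sda6  crypt1  luks,keyscript=decrypt_derived
```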
<uvirtbot> New bug: #956128 in multipath-tools (main) "find_multipaths feature missing from upstream" [Undecided,New] https://launchpad.net/bugs/956128
<hallyn> jamespage: you're coredev?
<jamespage> hallyn, yep
<hallyn> jamespage: can you plz push lp:~serge-hallyn/ubuntu/lucid/procps/procps-e to lucid-proposed?
<jamespage> hallyn, lemme take a look
<hallyn> thx
<hallyn> maybe i should send you a debdiff...
<hallyn> i can't push to lp:~serge-hallyn/ubuntu/lucid-updates/procps/procps-e, so i'm not convinced lp did the right thing...
<hallyn> eh it should
<hallyn> jamespage: the debdiff is just http://people.canonical.com/~serge/procps-e.debdiff
<jamespage> hallyn: on it now
<jamespage> hallyn, uploaded
<hallyn> jamespage: thanks
<hallyn> i think that bug pre-dates my familiarity with SRU process
<uvirtbot> New bug: #956177 in nova (main) "python-nova should include openssh-client as a dependency" [Undecided,New] https://launchpad.net/bugs/956177
<rockets> How stable is the server version of 12.04 right now
<rockets> relatively speaking
<jamespage> bencer_, I still see upstart configurations in the transitional packages?
<bencer_> jamespage: uhm i checked that but maybe only on packages without upstart scripts?
<bencer_> damm it, let me check again
<bencer_> jamespage: you are right
<bencer_> let me see how i can fix that
<jamespage> bencer_, this works - http://paste.ubuntu.com/885126/
<bencer_> jamespage: yup
<bencer_> that's what i was testing :)
<jamespage> bencer_, longer term it would be nice to move these packages away from CDBS
<jamespage> and use debhelper instead
<bencer_> jamespage: now comes the unfriendly part of launchpad: i have to increase the version just to fix that small thing that was only a PoC
<jamespage> bencer_, I think you can delete the package from the PPA now and re-upload a fixed version
<stgraber> hallyn: I "think" we should allow /lib/init/fstab.lxc to be bind mounted over /lib/init/fstab in the new apparmor profile ;)
<stgraber> hallyn: just had my machine crash because I forgot that on
<stgraber> *one
<stgraber> (as in, my container started messing with my host's ttys until X and everything else crashed)
<stgraber> jjohansen: how's the apparmor bugfix release coming along?
<jjohansen> stgraber: it should be going up soon
<jdstrand> stgraber: patches are under review. we are hopeful it will be uploaded in a few hours
<stgraber> jjohansen, jdstrand: that's good news, thanks
<stgraber> jjohansen: will we need a matching kernel upload or is the kernel I'm running currently only adding a few debug statements?
<jjohansen> stgraber: you will need the matching kernel for the full pivot_root /blah, but pivot_root, will work until the new kernel goes up
<stgraber> jjohansen: ok, will that be in the next kernel upload? I think there's one planned tomorrow
<jjohansen> stgraber: yeah, I think so. I am test building a kernel from scratch just to make sure on the patches, and the patch request is going out after that, so they should go in this afternoon
<stgraber> cool
<adam_g> smoser: dont suppose you know why euca2ools/euca-authorize uses the 'authorize_security_group_deprecated' method instead of 'authorize_security_group'
<smoser> from ?
<smoser> boto ?
<adam_g> smoser: yeah, i think
<adam_g> smoser: i cant seem to create self-referential security groups with the _deprecated method its calling, using the non-_deprecated call seems to work tho.
<smoser> adam_g, i'm not sure.
<koolhead17|away> hi all
<koolhead17|away> adam_g, https://answers.launchpad.net/keystone/+question/190793 can you help me with it?
<adam_g> koolhead17|away: not atm, sorry
<koolhead17|away> adam_g, np
<stgraber> hallyn: mind if I share the link to your lxc doc pdf? I had a few comments on blog posts/g+ lately about lack of LXC documentation, so even if not really ready, I think it'd be useful to point them to it
<uvirtbot> New bug: #956366 in nova (main) "self-referential security groups can not be deleted" [Undecided,New] https://launchpad.net/bugs/956366
<med_> There's a new(er) OpenStack quantum source package ~e4 available but only ~e2 packages are built. Anyone know why? ... I guess I should know why or be able to find out.
<Daviey> roaksoax: what issues are you seeing with ubuntu-cobbler-import?
<brendan0powers> jamespage: Hi, I see that the rds source package has made it into universe
<brendan0powers> jamespage: How long does it usually take for binary packages to be built?
<raubvogel> How do I set ntp.conf in ubuntu to listen to multicast?
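Editor's note: the question went unanswered in the channel; the relevant ntpd directives look roughly like this (a sketch; unauthenticated multicast time is a risk, so configuring keys is preferable):

```
# /etc/ntp.conf -- listen for NTP multicast on the standard group
multicastclient 224.0.1.1
restrict 224.0.1.1 nomodify notrap
```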
<Daviey> adam_g: quantum IIRC had a binary app that was moved from one package to the other or something.. without appropriate Breaks/Replaces.. do you know if it was fixored?
<adam_g> Daviey: you might be thinking of keystone? im not sure, quantum is all chuck shork
<adam_g> i even typo'd the typo
<Daviey> adam_g: lol
<Daviey> adam_g: Okay, well, the last upload the binary was rejected for that reason.
<Daviey> adam_g: so it needs fixoring before the next upload, if not already
 * RoyK  
 * Daviey 
<adam_g> Daviey: the trunk builds in jenkins are fine.
<Daviey> adam_g: right, it's not a build issue.. it's an upgrade issue
<Daviey> dpkg needs to know to remove the old binary before installing the new one to avoid trying to overwrite a file.
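Editor's note: the dpkg mechanism Daviey describes is declared in debian/control on the package that now owns the moved file. A fragment (package names and version are placeholders):

```
Package: quantum-common
Breaks: quantum-server (<< 2012.1~e4)
Replaces: quantum-server (<< 2012.1~e4)
```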
<adam_g> Daviey: i cant make any promises that ill be getting to that today, sorry
<Daviey> adam_g: no, i don't expect you to.. but i'm saying it needs to be resolved before the next glance upload. :)
<Daviey> Doesn't seem glance is uploaded weekly.
<adam_g> Daviey: which is tomorrow, no?
<Daviey> So perhaps leave it
<adam_g> glance *should* be uploaded weekly with the rest
<Daviey> adam_g: last uploaded on the 5th
<Daviey> Oh bug.
<adam_g> great
<Daviey> Not glance, quantum!
<Daviey> sorry.
<adac> On my server i got a kernel panic, supposedly because of the disk space on /boot. any ideas how to solve this? can I just delete data from the boot device?
<Daviey> utlemming: Hey, are you around?
<utlemming> Daviey: for you, yup
<Daviey> adac: well, not something you care about, right... :)
<Daviey> utlemming: Great!  Any news on the locale issue?
<Daviey> with cloud images?
<adac> Daviey, hehe. Well i'm not even sure what uses that much diskspace on it. i guess it has something to do with new kernels
<utlemming> Daviey: not really...cloud-init gets in the way by setting /etc/default/locale and then pam prevents you changing it via ssh
<Daviey> utlemming: right...
<Daviey> utlemming: it's a really, poor experience atm
<Daviey> utlemming: pam *prevents* ?
<utlemming> Daviey: I whole-heartedly agree on that point.
<Daviey> utlemming: if i $ LC_ALL=C ssh ubuntu@foo , the locale of C is passed through
<Daviey> How is pam preventing that?
<Daviey> or rather, pam isn't preventing that
<Daviey> Isn't the issue more that the instance doesn't have non-US locales?
<utlemming> no, its not. Run "locale"
<Daviey> http://pb.daviey.com/cm
<utlemming> Daviey: "Internal server error"
<Daviey> utlemming: refresh?
<utlemming> no dice
<Daviey> utlemming: if i use my locale, and don't override: http://pb.daviey.com/gwLc/
<utlemming> the problem is "pam_env.so envfile=/etc/default/locale"
<Daviey> utlemming: hang on..
<Daviey> LANG is hard coded to /etc/default/locale
<utlemming> yes, but cloud-init
<Daviey> the rest are passed through the ssh connection from the local machine, right
<utlemming> yes
<Daviey> Sooo... if LC_ALL is set to en_US.UTF-8 in /etc/default/locale.. we should be ok?
<utlemming> yummm....yeah...I'll submit a patch to smoser on that
<stgraber> hallyn: I posted the link to the .pdf and made it clear that it was a work in progress.
<stgraber> hallyn: I also proposed a plenary for UDS on LXC to cover all the cool stuff we have in 12.04, some unusual use of containers and what are the next big steps. If it's accepted, you're welcome to contribute :)
<Daviey> utlemming: So.. the locale issue seems to also cause problems with certain package installs.
<utlemming> Daviey: that is _new_. Before I've seen errors, but never a package failure
<Daviey> dbconfig'ing packages craps out.. so it's really not just a cosmetic issue, but a Critical one
<utlemming> Daviey: agreed
<jdstrand> stgraber, hallyn: fyi, new apparmor uploaded
<stgraber> jdstrand: yeah! thanks
<jdstrand> stgraber: sure thing-- be sure to thank jjohansen :)
<jdstrand> jjohansen: thanks!
<CheckIn> Hello. Does Ubuntu server come with a set of firewall rules on by default?
<SpamapS> CheckIn: no but it does not open any ports by default
<SpamapS> CheckIn: and ufw is available, which is the "Uncomplicated FireWall"
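Editor's note: enabling ufw on a fresh server is a few commands (a sketch; it needs root, and allow SSH before enabling or you can lock yourself out):

```shell
sudo ufw allow OpenSSH      # or: sudo ufw allow 22/tcp
sudo ufw allow 80/tcp       # web server
sudo ufw enable
sudo ufw status verbose
```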
<CheckIn> SpamapS: How does it close the ports?
<CheckIn> Just not install any server software ?
<SpamapS> CheckIn: right
<CheckIn> Hmm
<SpamapS> CheckIn: you can optionally install the firewall and server software during the installer of course.
<CheckIn> I don't know where these firewall rules came from then
<CheckIn> I've installed mysql apache freeradius and php5
<CheckIn> and I have a list of firewall rules
<CheckIn> SpamapS: http://imgur.com/UPvUG is what I get on boot
<SpamapS> CheckIn: looks like ufw.. perhaps I was wrong and it is turned on by default
<CheckIn> SpamapS: Ok thanks
#ubuntu-server 2012-03-16
<jdstrand> ufw is not enabled by default
<jdstrand> those are libvirt rules
<jdstrand> CheckIn: ^
 * jdstrand wanders off
<uvirtbot> New bug: #956607 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.61-0ubuntu0.10.04.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/956607
<CheckIn> jdstrand: umm hmm
<CheckIn> it knows that it's in a virtual machine?
<qman__> CheckIn, it does, but that's not why
<qman__> it must have been created by a vm builder of some kind, or has libvirt stuff installed on it
<qman__> in any case, those rules don't really affect anything you're trying to do
<qman__> it simply added some accept rules to the input chain, which is an accept policy already
<qman__> and configured forwarding, probably to get the VM online
<qman__> that would only affect you if you're trying to configure as a router
<CheckIn> qman__: So why can't I ping the machine before I drop the rules?
<qman__> plenty of reasons, but the rules showing there aren't it
<qman__> not directly anyway
<qman__> it could be that your VM networking is messed up out of the box
<qman__> and that those rules it creates cause it to be unpingable
<CheckIn> qman__: dropping the rules makes it pingable but it's not the rules?
<qman__> not _just_ those rules
<qman__> check the nat table, and check the VM config
<qman__> they might in combination be breaking it
<CheckIn> hmm ok thanks
<CheckIn> qman__: What would libvirt be used for?
<CheckIn> It's installed I'm wondering if I can just remove it
<CheckIn> Is that what is giving me a virbr0 network card?
<pabelanger> Friendly reminder bug 953093 and bug 954915 both have branches ready for review (merge request).
<uvirtbot> Launchpad bug 953093 in glance "Add dbconfig-common support to glance" [Low,New] https://launchpad.net/bugs/953093
<uvirtbot> Launchpad bug 954915 in nova "Add dbconfig-common support to nova" [Low,New] https://launchpad.net/bugs/954915
<qman__> CheckIn, my guess is yes, but I don't know enough about libvirt to say for certain
<CheckIn> qman__: Yes as in I can remove it or yes as in it's providing things for the computer ?
<qman__> yes as in it's providing the interface
<CheckIn> qman__: Ah well I don't use it and it doesn't seem to do anything
<uvirtbot> New bug: #956657 in samba (main) "package samba (not installed) failed to install/upgrade: write error on <standard output>: Input/output error" [Undecided,New] https://launchpad.net/bugs/956657
<ChmEarl> CheckIn, you never made a virtual machine in KVM or Xen? that's what libvirt helps with
<CheckIn> ChmEarl: Nope this is an install within a virtual machine which is VirtualBox based
<qman__> that may explain things
<qman__> did you install the virtualbox extensions in the VM?
<qman__> IME, virtualbox's networking is hokey and hard to get working right
<larry> i wonder if somebody is trying to attack my website, i got visit from anonymous proxy, with the IP address 65.49.2.185, a whois shows this belongs to sophidea, and searches show bad things about it, like this: www.wilderssecurity.com/showthread.php?t=286100 and this: http://www.wilderssecurity.com/showthread.php?t=302402   and this: http://thewhiterace.com/showthread.php?1720-Whitenewsnow-com-attacked-by-Anonymous/page2
<larry> anybody awake?
<qman__> not for long, but you haven't stated why you think you were attacked
<qman__> a hit doesn't mean an attack
<qman__> most attacks will leave you with bizarre looking requests in the apache log
<larry> well i found that IP was browsing my website
<larry> maybe they are preparing to attack?
<larry> so they can take money from customers later?
<endra> What do you guys make of this: http://pastebin.com/XqSQGqS9
<Maleko> larry: cloudflare it
<Maleko> :p
<larry> is it necessary?
<qman__> doubtful
<qman__> a hit's a hit, doesn't necessarily mean anything
<larry> well, any idea what this thing was?
<qman__> if you're worried, check for CVEs on the software you run
<larry> just random?
<qman__> and patch your server
<qman__> and of course, check your backups
<larry> if it's hosted by godaddy, will godaddy do any security or is it all up to me?
<qman__> depends
<qman__> VPS, it's up to you
<qman__> they have certain measures in place to cover themselves, but not you
<larry> CVE = common vulnerabilities?
<qman__> point is, a random hit from an anonymous proxy doesn't make you any more or less vulnerable or at-risk than normal
<qman__> make sure your software is patched and reasonably secured
<qman__> and that your backups are good, because stuff happens
<qman__> now, if you're seeing strange behavior from that IP, then you might have something to look into, but so far, I don't see any reason to investigate it much further
<larry> ok thank you
<timebox> need help: installed ubuntu server 64bit 11.10 on an HP ProLiant ML150 G6 but it's as if I can't connect to the net, I cannot even ping. anyone had the same experience?
<timebox> my bad wrong port used thanks!
<koolhead17> hi all
<lynxman> morning o/
<lynxman> jamespage: will be tackling the puppet merge this morning
<jamespage> lynxman, you read my mind :-)
<Maleko> hello
<Maleko> how do you add route?
<lynxman> jamespage: I'm like that ;)
<lynxman> jamespage: I had to do it yesterday, but I didn't find the time finally :/
<koolhead17> Maleko: route add
<koolhead17> Maleko: man route <--
<_ruben> eew
<_ruben> man ip
<_ruben> ip route add ...
<lynxman> koolhead17: route is obsolete, ip route add is how it should be done
<lynxman> !man
<ubottu> The "man" command brings up the Linux manual pages for the command you're interested in. Try "man intro" at the  command line, or see https://help.ubuntu.com/community/UsingTheTerminal | Manpages online: http://manpages.ubuntu.com/
<koolhead17> sorry Maleko
<koolhead17> thanks lynxman
<Maleko> ahh, people replied to me. thought this place was dead
<Maleko> i actually want to ask why "route add" adds an extra entry in routing
<Maleko> http://pastebin.com/auxBfhV8
<Maleko> why ip route doesn't do that?
<_ruben> because the route command is broken? use ip route to list the routing table instead
<lynxman> Maleko: it does
<lynxman> Maleko: the thing being, with ip route you're adding a default gateway
<lynxman> Maleko: then with route add a gateway to THAT network
<lynxman> Maleko: I'd suggest getting a bit more familiarised with routing basics
<lynxman> Maleko: you don't need a gateway for a network that you're part of, it's completely redundant and not desired
<Maleko> im lost when you said i dont need a gateway for a net im part of :\
<maxb> gateways take you other places. why would you go through a gateway to get to where you already are? that makes no sense
<Maleko> so your point is i don't need to specify a gateway if i only want to talk to other computers in the same net i am?
<lynxman> Maleko: exactly
<lynxman> Maleko: gateways are only to be able to talk to computers in other networks
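maxb's point can be sketched as a tiny on-link test. This assumes /24 masks purely for brevity (real routing compares against the full netmask), and the addresses are made up:

```shell
# on_link NET HOST -> succeeds when HOST shares NET's /24 prefix,
# i.e. traffic is delivered directly with no gateway involved
on_link() {
  [ "${1%.*}" = "${2%.*}" ]   # compare the first three octets
}
if on_link 192.168.1.0 192.168.1.50; then
  echo "192.168.1.50: delivered directly (on-link), no gateway needed"
fi
if ! on_link 192.168.1.0 10.0.0.7; then
  echo "10.0.0.7: different network, reached via the gateway"
fi
# with iproute2 the off-network case is what a route like
#   ip route add default via 192.168.1.1
# covers; the deprecated `route add` form is what confused Maleko above
```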
<jamespage> lynxman, whilst you are working on puppet you should credit glenn's work in the changelog entry (working through the sponsors queue ATM and noticed he did some debdiff's)
<lynxman> jamespage: hmm sure, didn't use any of his code though
<jamespage> lynxman, sorry - you are quite correct - just noticed you added the debdiff
 * jamespage faceplants
<lynxman> jamespage: ;)
<Zx432> How do I enable the ethernet card on Ubuntu server? I searched on the net but I was unsuccessful. I must warn you I am a noob and this is my first server.
<koolhead17> https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/935585
<uvirtbot> Launchpad bug 935585 in upstart "[kernel panic] init: log.c:786: Assertion failed in log_clear_unflushed: log->remote_closed" [High,Confirmed]
<koolhead17> is the fix for this getting merged today?
<Jeeves_> koolhead17: I fixed it by moving a file in /etc/init
<Maleko> Zx432: ip link set eth0 up/down
<Jeeves_> Maleko: Well, that is a bit too basic, I suspect :)
<Jeeves_> Zx432: See /etc/network/interfaces
<Jeeves_> And 'man interfaces'
<Zx432> well now I get no picture... :)
<Zx432> Monitor is showing: Not optimum mode
<Zx432> I have done nothing.
<Zx432> My server doesn't display a picture. Monitor displays: "Not optimum mode Recommended mode 1280x1024 60 hz"
<soren> Daviey: I've worked out how to create Keystone stuff (roles, services, endpoints, etc.) on install.
<Daviey> soren: oh?
<soren> Daviey: http://bazaar.launchpad.net/~ciscosystems/keystone/ubuntu/revision/37
<soren> Daviey: Specifically, see the upstart job changes and the postinst.
<Daviey> soren: should they be medium or low priority ?
<soren> Daviey: Which ones?
<soren> Daviey: The create_* questions?
<Daviey> soren: creating the user
<Daviey> Ie, should we default to prompting it, or make it a main target for preseeding?
<soren> The basic assumption is:
<soren> If you're doing a large deployment, you'll be using preseeding.
<Daviey> soren: what is the point of /var/lib/keystone/run_on_start.bak ?
<soren> If you're not using preseeding, you're fairly likely just setting up a simple test environment.
<Daviey> ahh
<Daviey> it's a run once operation
<soren> Daviey: Yeah.
<soren> Daviey: I think it's rude to do things like that and then remove the evidence.
<soren> Daviey: It's nice to have around for me for debugging, but it's also nice to anyone else to be able to see what "I" did on their behalf.
<soren> Daviey: E.g. the rightscale init script used to delete itself. I hated that.
<Daviey> soren: I think, if you can document that in the d/changelog.. lets get it included in the upload today
<Daviey> (would like a QA run of it first.)
<soren> Daviey: It's not entirely appropriate for Ubuntu right now.
<Daviey> soren: oh?
<soren> Daviey: It renames the database.
<soren> Well..
<soren> Rather: It doesn't.
<soren> ...but expects to find it elsewhere.
<Daviey> 24: SQL_CONNECTION="sqlite:///$dbc_basepath/$dbc_dbname.db"  ->  24: SQL_CONNECTION="sqlite:///$dbc_basepath/$dbc_dbname"
<soren> dbconfig-common lets you specify a dir and a db name.
<Daviey> ?
<soren> Yeah, but dbconfig creates $dbc_basepath/$dbc_dbname
<Daviey> soren: you dropped the .db?
<soren> Anyway, a bit of care needs to be applied there and I haven't worried about that just yet. It's minor.
<soren> Daviey: dbconfig-common does.
<soren> Daviey: dbconfig-common creates the db.
<soren> Daviey: ...but names it just $dbc_basepath/$dbc_dbname
<soren> Daviey: Not $dbc_basepath/$dbc_dbname.db
<Daviey> oh
<Daviey> soren: Well, regardless - this would be nice to get into precise.
<soren> Daviey: ...and if you set dbc_dbname to keystone.db by default, the mysql and postgresql backends get upset.
<Daviey> right!
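The path mismatch soren describes can be made concrete. The variable names are dbconfig-common's own; the values are hypothetical:

```shell
# dbconfig-common creates $dbc_basepath/$dbc_dbname with no extension,
# while the old keystone code appended .db to the connection URL
dbc_basepath=/var/lib/keystone
dbc_dbname=keystone
echo "created by dbconfig-common: $dbc_basepath/$dbc_dbname"
echo "old SQL_CONNECTION pointed at: sqlite:///$dbc_basepath/$dbc_dbname.db"
echo "fixed SQL_CONNECTION:          sqlite:///$dbc_basepath/$dbc_dbname"
```

The last line also shows why defaulting dbc_dbname to "keystone.db" is wrong: the same name is fed to the mysql and postgresql backends, which reject it.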
<KM0201> is there a walkthrough to install ubuntu server from USB?  i fail at the cd rom detection, and i don't have a cd drive.
<soren> Daviey: I'd be happy to get it into Ubuntu, I just haven't thought through all the migrations paths yet at all.
<Daviey> soren: Are you committed to doing that?
<koolhead17> !unetbootin
<ubottu> For information about installing Ubuntu from USB flash drives, see https://help.ubuntu.com/community/Installation/FromUSBStick - For a persistent live USB install, see: https://wiki.ubuntu.com/LiveUsbPendrivePersistent
<KM0201> koolhead17: no kidding?... why don't you try reading that.
<KM0201> i've done what it says there, it doesn't seem to work
<soren> Daviey: The migration stuff?
<Daviey> soren: yes
<soren> Daviey: I'm not sure, really. I have no clue what state keystone was in in Oneiric.
<koolhead17> KM0201: because i use it.
<KM0201> not working for me, i tried doing what is said
<soren> Daviey: What I *can* do, though, is handle the upgrade from the version immediately before the one where these changes land.
<Daviey> soren: No, i'm not sure we can migrate from ks->ksl nicely
<Daviey> adam has more info.
<soren> Daviey: Oh, we can.
<soren> Daviey: Well, at least we're supposed to.
<koolhead17> KM0201: tell me where are you stuck?
<Daviey> soren: I thought there was an issue?
<Daviey> soren: markmc also mentioned concerns for them as well.
<KM0201> detecting the cdrom.. says it couldn't be mounted
<Daviey> Maybe i'm missing something.
<soren> Daviey: Code has been written that's meant to handle that. Whether it works or not... I don't know.
<soren> Daviey: It's just a rather awkward procedure.
<KM0201> i get the unetbootin screen, boot the usb, choose keyboard, language, etc..
<Daviey> soren: oh cool, my views might have predated that then :)
<KM0201> but it fails to mount the cd
<soren> Daviey: You have to create a new database, point a migration script at the old one as well as at your old Nova install..
<soren> Daviey: It's very, very non-trivial.
<Daviey> *awesome*
<soren> Daviey: See http://keystone.openstack.org/
<Daviey> soren: If i were a production admin, i'd not want you to do that much fiddling without permission :)
<soren> Daviey: Let me qualify that a bit: It looks pretty easy to do *manually*. Driving it automatically in a postinst scenario... Not much fun at all.
<Daviey> right!
<Daviey> I think we should provide a debconf warning, linking to steps to follow
<Daviey> Handling a seamless, automatic upgrade seems risk-prone.. and not that beneficial imo
<soren> Daviey: Yeah, something like that.
<soren> Daviey: It's a lot of work for sure, and not anywhere near the top of my list of priorities.
<Zx432> I have a graphics card, a Gigabyte GV R925128T. All of a sudden I get "Not optimum mode Recommended mode 1280x1024 60 hz" on my monitor, but I can still see the boot sequence normally.
<soren> Daviey: But handling the upgrade from a very recent keystone should be quite easy.
<soren> Daviey: I just haven't looked at it yet. I'm focused on making a fresh install work.
<Daviey> soren: agreed
<Daviey> soren: If you are happy to put the work into this, i think it would be a great addition
<patdk-wk> zx432, that is because you're using an LCD monitor
<patdk-wk> and the resolution doesn't match the lcd display
<patdk-wk> either ignore it, or change your screen size
<Zx432> The problem is I can't do a thing.
<Zx432> And the monitor is new
<patdk-wk> I don't know what that means, can't do a thing
<Zx432> The screen is black and only the warning is dancing around.
<patdk-wk> hmm, probably need to use the nomodeset kernel option on boot
<patdk-wk> and then either fix the screen resolution, or just make that perm
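patdk-wk's two options map to a one-off boot-menu edit (hold Shift at boot, press `e`, append `nomodeset` to the kernel line) or making it permanent. A sketch of the permanent route, assuming GRUB 2 as shipped on 11.x-era Ubuntu:

```shell
# /etc/default/grub (fragment)
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
# then regenerate the boot configuration:
#   sudo update-grub
```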
<Zx432> Never did that, checking the internet
<Zx432> The funny thing is it worked a week ago.
<jamespage> smoser: https://code.launchpad.net/~james-page/swift/essex-fixup-upstart/+merge/97690
<Zx432> I must be doing something wrong, I can't even get the nomodeset option to show
<bencer_> jamespage: i saw you changed into Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
<bencer_> do we need to do this in all packages?
<lynxman> jamespage: around?
<jamespage> bencer_, no - that was only because I did the delta for testing that build fix
<jamespage> lynxman, yep
<bencer_> jamespage: ok thx
<lynxman> jamespage: nah, nevermind it was a git issue :)
<xpistos> hello everyone.
<xpistos> are there any utilities that i should be running on my Ubuntu server to keep it healthy? Like htop or something?
<bencer_> jamespage: btw, even you delete the packages, you are not allowed to upload again the same version :-/
<jamespage> bencer_, that's what I thought, but I think I managed to do it the other day
<jamespage> hmm - maybe not.
<Zx432> ok i sort of fixed the monitor problem.
<Zx432> but I still can't get the internet to work.
<Zx432> Is there a way to restart the ethernet card and force it to connect using a dynamic ip?
<michael_tn> good day all
<Zx432> Hi
<michael_tn> i'm looking for a little help with using d-i and preseed to format 4 disks in LVM during an auto install via pxe
<michael_tn> i can have partman do a single disk  with no issue
<michael_tn> if someone has a preseed example that would fantastic
<michael_tn> all this is on 11.10 server builds
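For guided LVM via preseed, a minimal fragment looks like the sketch below. The d-i keys are the documented partman-auto ones of that era, but the device list and recipe are assumptions; whether partman-auto will span all four disks into a single VG this way is exactly the hard part, and a custom expert_recipe is often needed, so test in a VM first:

```text
# hypothetical preseed fragment for guided LVM (11.10-era keys)
d-i partman-auto/method string lvm
d-i partman-auto/disk string /dev/sda /dev/sdb /dev/sdc /dev/sdd
d-i partman-auto-lvm/guided_size string max
d-i partman-auto/choose_recipe select atomic
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-lvm/confirm boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
```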
<lynxman> jamespage: question, in order to ensure a proper sync, I've patched the local fs, which would be the most appropriate command to push that to a patch file in debian/patches ? I normally do run debuild but I'm sure there's a better way
<jamespage> lynxman, I'd use quilt
<lynxman> jamespage: quilt itself? gotcha
 * lynxman trying to do things right
<jamespage> lynxman, quilt new <patchname>
<jamespage> quilt add filea
<jamespage> quilt add fileb
<jamespage> quilt refresh
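The quilt new/add/refresh sequence jamespage gives can be sketched with plain diff and patch, which is essentially what quilt drives underneath (quilt snapshots each file on `quilt add`, and `quilt refresh` writes the accumulated diff into debian/patches). All names here are hypothetical, and this is the mechanics only, not a quilt replacement:

```shell
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p debian/patches
printf 'old line\n' > filea
cp filea filea.orig                          # what `quilt add filea` snapshots
printf 'new line\n' > filea                  # edit the working copy
# what `quilt refresh` produces (diff exits 1 when files differ, hence || true):
diff -u filea.orig filea > debian/patches/fix-filea.patch || true
cp filea.orig filea                          # back to a pristine tree
patch filea debian/patches/fix-filea.patch   # what applying the series does
cat filea                                    # -> new line
```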
<lynxman> jamespage: it's a big load of files, but yeah, will do :)
<jamespage> of course that does require that you start by doing it that way :-)
<jamespage> lynxman, is it an upstream patch?
<lynxman> jamespage: quilt supports file deletion as well?
<lynxman> jamespage: yes but it requires some fine tuning, won't patch cleanly as it is
<jamespage> lynxman, hrm it does quilt add the file and then remove it - pretty ugly
<lynxman> jamespage: it removes a couple files
<lynxman> jamespage: apart from patching around ~20 files
<lynxman> jamespage: that's why I was wondering if it's supported to delete a file in a patch
<jamespage> lynxman, well it is - by removing every line in the file
<jamespage> its brittle tho
<lynxman> jamespage: yeah, I saw that going wrong a couple of times, this patch is a bit of a pita
<lynxman> jamespage: upstream patch has line differences and a couple patches that require manual editing
<jamespage> lynxman, is this still the one that reverts something that's dropped in 2.7.12?
<lynxman> jamespage: yessir
<lynxman> jamespage: just trying to redo the patch as it should
<lynxman> jamespage: otherwise when I do debuild it always complains about something
<jamespage> lynxman, you might be best to rebase the patch in github based on applying it to 2.7.11 to generate a clean patch
<jamespage> i.e. use git to do that
<lynxman> jamespage: been looking that way, unfortunately I didn't find a way to do it cleaner than the patch I've been provided
<lynxman> jamespage: so I'll have to make do with that one :)
<hallyn> stgraber: can you explain bug 956655?
<uvirtbot> Launchpad bug 956655 in launchpad "libvirt dnsmasq causes runaway chain reaction" [Undecided,New] https://launchpad.net/bugs/956655
<lynxman> hallyn: still need my VM?
<hallyn> lynxman: oh, if possible.
<Zx432> How to restart ethernet card?
<pabelanger> jamespage, re: bug 907152. I think 'start on runlevel [2345] or started libvirt-bin' should be good for upstart.  Since using 'and' is not recommended
<uvirtbot> Launchpad bug 907152 in nova "Error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory" [Medium,Triaged] https://launchpad.net/bugs/907152
<lynxman> hallyn: of course, let me fire it up
<jamespage> pabelanger, I don't think it avoids the race condition tho
<pabelanger> I know
<stgraber> hallyn: looking
<lynxman> hallyn: done, port 65222 as usual
<jamespage> pabelanger, I wish upstart could check to see if an event could ever be generated and use it conditionally
<jamespage> jodh, feature for next release ^^ ?
<jamespage> :-)
<pabelanger> jamespage, initially I had 'start on (filesystem and net-device-up IFACE!=lo) and started libvirt-bin' but that got rejected
<jamespage> pabelanger, yeah - I guess we're slowly deprecating (filesystem and net-device-up IFACE!=lo)
<stgraber> hallyn: right, it's his fault ;) people shouldn't create loops between their dnsmasq servers
<stgraber> hallyn: I've seen a few other people get into the same problem with the libvirt one ;)
<jamespage> pabelanger, I'm not sure there is a good fix for this in upstart
<pabelanger> jamespage, I have to admit, I still don't know why 'start on runlevel [2345] and started libvirt-bin' would not work; my upstart foo is weak
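On the 'and' vs 'or' question: `and` requires both events before the job starts, so on a host where libvirt-bin is not installed its `started` event never fires and the job never comes up; `or` starts the job on whichever event arrives first. A sketch of the stanza pabelanger proposes (the job filename is an assumption):

```text
# /etc/init/nova-compute.conf (fragment)
start on runlevel [2345] or started libvirt-bin
stop on runlevel [!2345]
```

As jamespage notes, this still leaves a race: with `or`, the job may start on the runlevel event before libvirtd's socket exists.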
<stgraber> hallyn: one trick that might work is putting the LXC dns server in /etc/resolvconf/resolv.conf.d/tail instead
<stgraber> hallyn: that would make the libc use it only if dnsmasq can't resolve
<uvirtbot> New bug: #941874 in dmraid (main) "(fakeraid) root device not activated during boot" [High,Fix released] https://launchpad.net/bugs/941874
<hallyn> stgraber: tail instead of head?
<hallyn> lynxman: thanks
<stgraber> hallyn: head would put you in an infinite loop again as the libc would take the first entry, use the lxc dnsmasq, which would want to resolve through the external resolver and so use itself as it's the first entry in /etc/resolv.conf ;)
<lynxman> hallyn: np :)
<stgraber> hallyn: tail still has the potential of looping but it should be much reduced as you'll need to have the libc fallback to it, have it be a record that the lxc dnsmasq doesn't know how to resolve and have it for some reason forward it to itself
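stgraber's tail trick is a one-line file; resolvconf appends entries from `tail` after the dynamically discovered resolvers in the generated /etc/resolv.conf, so the libc only reaches the LXC dnsmasq as a fallback. The address is an assumption (10.0.3.1 was the usual lxcbr0 dnsmasq of that era):

```text
# /etc/resolvconf/resolv.conf.d/tail
nameserver 10.0.3.1
```

As noted above, this shrinks the loop window but does not eliminate it.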
<koolhead17> seems like i found a bug with glance :( failing to upload an image. need to recheck it though
<hallyn> stgraber: should i just drop that suggestion from the server guide?
<hallyn> it was based on a suggestion on the security team's wiki page for testing with qemu/libvirt
<stgraber> hallyn: I think that'd be the safest, yes
<hallyn> sigh
<hallyn> too bad.  it's such a nice feature
<hallyn> stgraber: how's the mounts stuff?  I saw tgardner applied some new kernel patches this morning...
<hallyn> are you pushing anything today?
<stgraber> hallyn: probably not today, I need to wait for everyone to be on the new kernel
<stgraber> hallyn: so probably on Monday when linux-meta has been uploaded and the new kernel built on all the architectures
<stgraber> pivot_root fails with our current kernel so we need to keep apparmor off until the new kernel lands, then deal with the bugs asking everyone to reboot to get LXC working ;)
<hallyn> ok
<hallyn> what did we ever decide about per-container policies
<smoser> roaksoax, ok. so runparts late command stuff...
<smoser>  http://paste.ubuntu.com/886406/ is the perl script that we'd run
<hallyn> ah yes.  waiting on ffe for bug 953453
<uvirtbot> Launchpad bug 953453 in lxc "[FFE] use per-container apparmor profiles" [Undecided,Confirmed] https://launchpad.net/bugs/953453
<smoser> and http://paste.ubuntu.com/886407/ is an example of what debconf content would look like
<smoser> but i can't figure out how to remove the values from the database
<smoser> or, have given up on that at the moment.
<smoser> utlemming, ping
<utlemming> smoser: pong
<smoser> so, 2 things.
<smoser> your 2 merges
<smoser> first the locale one
<smoser> how safe is that?
<smoser> i'm kind of confused on it (https://code.launchpad.net/~utlemming/cloud-init/cloud-init.lc_all/+merge/97768)
<hallyn> smoser: for ubuntu-cloudimg-query, should i allow any streams other than 'released' or 'daily'?
<hallyn> (the code is ..  not 100% obvious)
<smoser> you could, but there are only those 2
<smoser> and i really can't imagine another one.
<smoser> sub-daily
<smoser> :
<utlemming> what do you mean by
<utlemming> "safe"?
<smoser> how likely is it to break someone who had been using these happily before, or otherwise wreak havoc on an image.
<hallyn> smoser: thanks
<utlemming> I think it is safe, from that standpoint. It will get rid of all the error messages about invalid locales
<utlemming> if a user, say Davey, who exports "LC_ALL=en_GB" (which is invalid) logs in, they will get the default
<stgraber> hallyn: jodh added your upstart branch for SIGPWR to the list of things to merge and upload with the next upstart release (by Thursday if all goes well)
<utlemming> so in reality, I would argue that this enhances the user experience by disallowing the setting of invalid or unsupported locales
<smoser> so, does that happen only if the locale is not installed ?
<utlemming> the bug with db-common? yes. If the locale is valid (i.e. installed/generated), then db-common works fine.
<utlemming> if not, then it errors out and refuses to install
<smoser> i dont know of the bug with db-common
<utlemming> that was the impetus...Davey discovered that db-common in precise will fail the installation dialog if the LC_* are invalid.
<hallyn> lynxman: complete rebuild of v13 does not reproduce.  it really appears v13 was a bad build.  which is scary
<smoser> utlemming, so the presence of the locale file forces locale
<hallyn> i'll try 3 more builds to see if i can randomly get a bad one
<smoser> right?
<lynxman> hallyn: oops
<hallyn> but maybe it was just a bad toolchain lineup
<lynxman> hallyn: okay! :)
<hallyn> lynxman: is my building slowing down your laptop?
<smoser> so as far as I can see, the images have basically never allowed someone with a different locale to actually use their other locale
<smoser> utlemming, ^ Daviey ^ ?
<smoser> is this true ?
<smoser> ie, it seems to me that the content in that file (/etc/default/locale) *forces* those settings onto the user (ie, they are not then passed through ssh)
<lynxman> hallyn: nah, it's an i7, has enough juice :)
<smoser> utlemming, and if we put LC_ALL in there, all it does is force *that* LC_ALL also.
<smoser> utlemming, ?
<smoser> and then, i'm not sure about this (probably a bug in cloud-init), but LANG != LC_ALL.
<barbo91> hi guys, i need help, i need to censor some sites for my clients' pcs... I want to configure a server to do that, and i don't know well how i can do it, i'm new on linux, tbh i'm a junior... i thought about a DNS server with some kind of restriction... can someone help me? clients' pcs are windows and have a limited user, and i want to force them to use my server
<barbo91> i already got some services up like bind, apache and squid+guard
<utlemming> smoser: sorry, phone call
<utlemming> smoser: it only forces the LANG LC_* variables, not things like byobu
<smoser> well, it forces LC_ALL
<smoser> (which in behavior forces other things)
<smoser> so cloud-init is generally broken
<smoser> utlemming, so i'm generally very unqualified to have this conversation
<smoser> but to me it looks like, cloud-init has a cloud-config setting 'locale', which defaults to a value 'en_US.UTF-8'
<smoser> it then runs locale-gen with that (which is probably correct)
<smoser> but then, it sets 'LANG' to that value
<utlemming> smoser: the real issue is that SSH sends the LC_* values, which are then incorrect
<smoser> i'm not sure if that is right at all.
<smoser> no, thats useful.
<smoser> and if we populate templates/default-locale.tmp, we break that useful functionality
<utlemming> it is...but the values may not be valid (or generated), which breaks the userland experience
<smoser> so we shouldn't be doing that at all
<utlemming> then we need to generate all the locales
<smoser> but getting messages saying "you have an invalid locale", which a quick google would show you how to fix,
<smoser> is much better than taking away, from anyone who would know how to fix this situation, the ability to fix it
<smoser> ie, our /etc/default/locale file means you cannot 'locale-gen your-locale' and then start using it via ssh.
<smoser> basically, cloud-init is insisting that all users of this system use the system locale when they ssh in.
<utlemming> yes
<smoser> instead of the default behavior on a non-cloud server, where it would work or they would get warned.
<utlemming> on a non-cloud, the locale would be chosen at installation time
<smoser> yes (it is here too)
<smoser> and on a non-cloud, you might ssh to a system that did not have your locales installed
<smoser> (just as here)
<smoser> the difference is that on our cloud images, you cannot use your locale!
<smoser> (without rm /etc/default/locale)
<smoser> isnt that right?
<utlemming> I spoke with one of the locale folks a while back about coming up with a dynamic way to handle the installation of locales...and the consensus is that it would take a lot of work.
<utlemming> yes, it's the same situation between non-cloud and cloud, just felt more on cloud
<smoser> actually wait.
<smoser> this is the same as it is on my desktop system
<smoser> the install populated /etc/default/locale with LANG="en_US.UTF-8"
<smoser> so we're doing what normal ubuntu server would do
<smoser> (although that is generally sucky, i think)
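The file under discussion is tiny, which makes the disagreement easy to see. As smoser says, a stock install writes only a LANG default; the contested change would add LC_ALL, which overrides every LC_* category and LANG for all users, so an ssh user's own locale (even one they locale-gen themselves) could never win. A sketch:

```text
# /etc/default/locale as a stock installer writes it (per smoser's desktop)
LANG="en_US.UTF-8"
# the proposed change would additionally pin every category:
# LC_ALL="en_US.UTF-8"
```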
<utlemming> one could argue that if your locale is hu_HU on the client, you'll likely have hu_HU on the server, so its not a problem there
<steakknife> After --set-selections on lucid, is there a standard way to trigger dselect without an upgrade?
<utlemming> and if you track the problem to the roots, the reason you can't override LANG is because /etc/pam.d/sshd uses pam_env to force LANG anyway.
<steakknife> utlemming: sounds like something that could be pushed out like ssh keys with puppet, but that would be work
<smoser> utlemming, so... right now, my limited understanding of this...
<utlemming> steakknife: perhaps, we need to solve the problem at a lower level than puppet since not all cloud users use puppet
<smoser> in my opinion, i'd like to not touch /etc/default/locale (remove it)
<smoser> yeah
<smoser> this is not right
<smoser> because ssh allows LC_* but not LANG
<smoser> so you set default LANG because there was none
<smoser> that makes sense.
<utlemming> actually, in testing, LANG is passed
<smoser> oh wait. it does
<smoser> yeah
<smoser> you're right
<smoser> so really.. the way i think would be best to fix it would be to remove that file and not touch it at all
<smoser> which would suck
<smoser> but at least then, 'locale-gen $LANG' would make you happy
<smoser> as it is, that won't work
<smoser> oh.
<utlemming> oh?
<smoser> i guess outside of dynamically installing locales... the best thing to do would be for ssh to look at the list of locales available, and muck your environment only if your settings didn't match the system you were ssh'ing to
<lynxman> jamespage: puppet new branch ready, what was the command you told me I should run? :)
<smoser> utlemming, so heres what i think right now:
<jamespage> lynxman, just push it to the merge proposal branch
<lynxman> jamespage: doing so
<utlemming> smoser: I'll have to look at the debian bug...but that _is_ a bug upstream
<smoser>  * ubuntu cloud images default install is basically the same as a stock ubuntu server install
<smoser>  * adding LC_ALL to /etc/default/locale actually makes us not like stock ubuntu install, and makes it more difficult to use LC_ALL after installation of locales on the cloud image.
<smoser> i agree that it sucks
<smoser> and i will admit that i probably care less because i have the "proper" LC_ALL in my environment right now :)
<smoser> do you see what i'm saying though?
<smoser> doing this actually makes things different and worse.
<smoser> in some way
<utlemming> alternatively, we could force LC_ALL=C
<smoser> right, but that would be no better.
<smoser> its just a different default
<smoser> that is also pre-installed
<utlemming> except that it is now internationally compatible
<smoser> well, changing a default, and making it worse for some people and better for no people is not really progress.
<smoser> :)
<lynxman> jamespage: overwrote branch, all yours
<smoser> utlemming, do you agree with that ? or not?
<steakknife> Btw, see if I get the use case... is the goal to provide users with their locale, which may not be yet installed on a given box?
<utlemming> this is a rock and a hard place, really. Fixing it in the cloud-images is the wrong place. I did mock up an /etc/profile.d script that magically detected and generated the right locale, but fixing it there is the wrong place too.
<utlemming> steakknife: the goal is to ensure that the SSH locale is generated and usable, to prevent userland problems due to invalid locale settings
<smoser> utlemming, ok. now more quickly... i hope
<smoser> on your other proposal
<jamespage> lynxman, looking now
<lynxman> jamespage: cool :)
<Captain_Proton> I know this is not an ubuntu ? but I am running it on an ubuntu server. I need some help with an spf record. here is my record: v=spf1 ip4:173.162.32.1 ip4:75.98.169.177 include:_spf.google.com -all
<jamespage> bencer_, hows it going?
<Captain_Proton> but I still get a lot of spam hits on the 173.162.32.1
<Captain_Proton> did I type it right
<steakknife> Captain_Proton: i usually set an spf and a txt record, just to be sure.
<Captain_Proton> is the syntax right?
<steakknife> Captain_Proton: try http://www.kitterman.com/spf/validate.html
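Captain_Proton's record is syntactically fine; a trivial shape check of the string can be sketched in shell, though the kitterman.com validator linked above does the real semantic checks:

```shell
# quick shape check only: version prefix plus a trailing all-mechanism
spf='v=spf1 ip4:173.162.32.1 ip4:75.98.169.177 include:_spf.google.com -all'
case "$spf" in
  'v=spf1 '*' -all'|'v=spf1 '*' ~all')
    echo "starts with v=spf1 and ends with an all mechanism" ;;
  *)
    echo "malformed: missing version prefix or trailing all mechanism" ;;
esac
```

Worth noting for the underlying question: SPF only declares which hosts may send mail claiming to be your domain; it does nothing to stop inbound spam arriving at 173.162.32.1, which is likely why the hits continue.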
<smoser> utlemming, Daviey i'd appreciate your input on my comments at https://code.launchpad.net/~utlemming/cloud-init/cloud-init.lc_all/+merge/97768
<hallyn> lynxman: this is quite the heisenbug.  I can't reproduce it today.  I'm going to mark my bug as confirmed, and comment about all the ways i couldn't reproduce it, and email the m-l
<hallyn> lynxman: thanks!
<lynxman> hallyn: no problem, glad to be of help :)
<hallyn> lynxman: let's see if any discussion gets spawned: https://www.redhat.com/archives/libvir-list/2012-March/msg00733.html
<lynxman> hallyn: *crossing fingers*
<Zx432> I tried to set a static ip on my server to finally make it work (connect to internet) but it does not work. Anyone willing to help a noob out?
<Captain_Proton> Zx432, did you set dns in /etc/resolv.conf
<genii-around> And possibly gateway
<Captain_Proton> Zx432, it should be set by your dhcp server but if not  - nano /etc/resolv.conf and type this -- nameserver 8.8.8.8
<Captain_Proton> gateway should be set in interfaces
<genii-around> Yes, /etc/network/interfaces
<Captain_Proton> guess he's sleeping :D
<Zx432> Captain_Proton, yes I did.
<Captain_Proton> can you pastebin your interfaces?
<Captain_Proton> did you restart networking?
<genii-around> Zx432: Did you set IP to a valid IP ( not for instance ending in .0 or .255 )
<Zx432> Network won't restart.
<Captain_Proton> Zx432, /etc/init.d/networking restart
<Captain_Proton> then you have a typo somewhere
<Zx432> System is plugged into the router for now, all IPs should work.
<Captain_Proton> let us see your interfaces. it is something simple I am sure
 * genii-around prepares a new pot of coffee
<Zx432> ? How do I do that? Restarting gives me an error.
<Captain_Proton> cat /etc/network/interfaces then copy and paste it in http://pastebin.com/ and give us the link
<genii-around> Captain_Proton: If he can't connect, pastebin is probably not an option ;)
<Zx432> First I need a graphical interface.
<Zx432> And yes. No connection might be a problem.
<Captain_Proton> Zx432, are you sitting in front of the computer that has the problem?
<Zx432> Captain_proton: Yes. The problematic one has no connection. So I am naturally using another one to talk here.
<Captain_Proton> Zx432, ok then you type what you have in interfaces here so we can see where the problem is
<Zx432> Ok
<Zx432> /etc/network/interfaces
<Zx432> #This file describes the network interfaces available on your system
<Zx432> #and how to activate them. For more information, see interfaces(5).
<Zx432> #The loopback network interface
<Zx432> auto lo
<Zx432> iface lo inet loopback
<Captain_Proton> that's all that's in there?
<Zx432> #the primary network interface
<Zx432> auto eth0
<Zx432> iface eth0 inet static
<Zx432> adress 192.168.1.110
<Zx432> netmask 255.255.255.0
<Zx432> network 192.168.1.225
<Zx432> gateway 192.168.1.1
<Zx432> That is all.
<Captain_Proton> well address is missing a "d"
<Zx432> Anything else?
<Captain_Proton> network you do not really need but whatever
<Zx432> Or do I fix that and try again?
<Captain_Proton> fixing the d will most likely make it work
<Captain_Proton> Zx432, did that work
<genii-around> I'm not sure if "network 192.168.1.225" is valid
<Captain_Proton> you're right, that would be broadcast; network should be 192.168.1.0
<genii-around> broadcast should end in 255 not 225 ...
<Captain_Proton> broadcast is 255
<Zx432> File is read only?
<Captain_Proton> sudo before your command
<Captain_Proton> sudo nano/vim /etc/network/interfaces
<Zx432> Ok. Fixed address and changed brodcast to 192.168.1.255
<Zx432> Did I forget something?
<A-KO> How does one install the isc versions of dhcp4 and dhcp6 in Ubuntu?
<genii-around> Zx432: Did you spell it "brodcast" or "broadcast" ?
<genii-around> ( eg: avoid another typo ! )
<Zx432> brodcast
<genii-around> Zx432: It should be "broadcast"
<genii-around> Zx432: ( in the file )
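(Putting the fixes from this exchange together, the corrected stanza would look roughly like this; addresses are the ones Zx432 pasted, assuming a 192.168.1.0/24 LAN:)

```
# /etc/network/interfaces -- corrected static configuration
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.1.110      # "address", not "adress"
    netmask 255.255.255.0
    network 192.168.1.0        # network address, not .225
    broadcast 192.168.1.255    # "broadcast", not "brodcast"
    gateway 192.168.1.1
```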
<Zx432> Sorry english is not my native language.:)
<Zx432> Ok now restart?
<Zx432> Sudo /etc/init.d/networking restart?
<Captain_Proton> no capital S just sudo
<Zx432> Yes sorry. My phone did that automatically. Looks good, now i ping something?
<Captain_Proton> sweet
<Captain_Proton> sudo apt-get update see if that works
<Zx432> No
<Zx432> I must have done something wrong.
<Captain_Proton> can you ping ggole.com
<Captain_Proton> google.com
<Zx432> Yes?!@:-D
<Zx432> I can ping!
<Captain_Proton> :)
<Zx432> Now, why doesn't the update work?
<Captain_Proton> what error did you get?
<Zx432> Sec. How to stop pinging? Isn't it ctrl+c?
<Captain_Proton> yup
 * Captain_Proton be right back
<Zx432> It won't stop.
<uvirtbot> New bug: #956421 in openssh (main) "error in /var/log/syslog: "ssh[12586]: segfault at 0 ip 001c60e0 sp bfbf2508 error 4 in libc-2.13.so"" [Undecided,Incomplete] https://launchpad.net/bugs/956421
<Zx432> There are pages of the same error. eth0: mismatched read page pointers 4c vs fa.
<Zx432> Well it stopped now.
<stgraber> hallyn: do we have a bug report for clean shutdown of LXC containers?
<stgraber> (I just merged your branch in upstart and will upload in a few minutes)
<Captain_Proton> can you reboot it?
<Zx432> Whole system?
<Captain_Proton> yes
<Zx432> Ok.
<Zx432> It is alive!
<Captain_Proton> :D
<Captain_Proton> Zx432, all better now
<Zx432> sudo apt-get update it clearly gets the updates but does it install them? It finished so fast.
<Zx432> Almost no download.
<Captain_Proton> nope then you do apt-get upgrade to update the system
<Captain_Proton> update downloads the new package list
<Zx432> Ah. Should i do that?
<Captain_Proton> yes
<Pici> dist-upgrade will pull in all the upgrades.
<Pici> !dist-upgrade
<ubottu> A dist-upgrade will install new dependencies for packages already installed and may remove packages if they are no longer needed. This will not bring you to a new release of Ubuntu, see !upgrade if that is your intention.
<Zx432> There seems to be a lot of upgrade/update commands...
<patdk-wk> Zx432, it's easy
<patdk-wk> update = get current list of what is available
<patdk-wk> upgrade = update anything that is currently installed
<patdk-wk> dist-upgrade = update anything that is installed, that requires new stuff to also be installed
<patdk-wk> kernel upgrades are almost always going to be dist-upgrade
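(patdk-wk's three commands in a typical session; run with sudo, and `-s` can be added to any of them first for a dry run:)

```
# Refresh the package lists (no packages are changed yet).
sudo apt-get update
# Upgrade everything already installed, without adding or removing packages.
sudo apt-get upgrade
# Also allow new dependencies (and removals) -- needed for e.g. kernel updates.
sudo apt-get dist-upgrade
```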
<ninjix> any orchestra + juju users in channel?
<SpamapS> ninjix: you're more likely to find juju folks in #juju
<ninjix> SpamapS: tnx
<adam_g> dv310p3r: ping
<dv310p3r> adam_g, ping
<adam_g> dv310p3r: oh jeez, sorry, pinged the wrong person!
<dv310p3r> adam_g, lol, figured so.
<koolhead17|away> :)
<cr3> is there a way to set the group of an existing ec2 instance?
<hallyn> stgraber: woohoo, just saw the power-changed upstart uploaded :)  now i need to consider how best to use it.  copy the way debian does it through alternatives?
<stgraber> hallyn: not sure what Debian does exactly, but I guess we should send SIGPWR to all containers, then enter a loop where every second we check if they're all dead. After let's say 45s, give up and kill them all
<stgraber> I'm guessing Debian must be doing something similar
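(stgraber's signal-then-poll-then-kill idea can be sketched as a small shell function. `graceful_kill` and the 5-second demo timeout are made up for illustration; a real implementation would send PWR to each container's init and use the 45s window he mentions:)

```shell
# Send SIGNAL to PID, poll once a second, and SIGKILL it if it is
# still alive after TIMEOUT seconds.
graceful_kill() {
    pid=$1; sig=$2; timeout=$3
    kill -s "$sig" "$pid" 2>/dev/null
    i=0
    while kill -0 "$pid" 2>/dev/null; do
        i=$((i + 1))
        if [ "$i" -ge "$timeout" ]; then
            kill -KILL "$pid" 2>/dev/null
            break
        fi
        sleep 1
    done
}

# Demo on an ordinary process (reparented to init so it leaves no zombie);
# a container's init would get PWR instead of TERM.
demo_pid=$( (sleep 300 >/dev/null 2>&1 & echo $!) )
graceful_kill "$demo_pid" TERM 5
```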
<smoser> cr3 set the group ?
<smoser> adam_g, is there a bug associated with the postsint change at https://code.launchpad.net/~openstack-ubuntu-testing/glance/precise-essex-proposed/+merge/97966
<adam_g> smoser: yeah, thats the change that was applied to the ubuntu branch and cherrypicked into that one
<adam_g> smoser: i left the changelog entry out because its already present in the ubuntu branch, tho ill add it if it should be there
<smoser> ah. ok.
<adam_g> smoser: https://code.launchpad.net/~openstack-ubuntu-testing/horizon/precise-essex-proposed/+merge/97979  theres the final one. im going to hold off on melange, its building locally fine but failing in the CI
<smoser> adam_g, ok.
<smoser> so with the glance, what would be best i think is if you would have grabbed the precise change and changelog entry
<smoser> and committed that
<smoser> and then added yours
<smoser> but i'll just insert the precise.dist one
<adam_g> smoser: ahh.. so we're good on that or you want me to update?
<smoser> i've got it.
<smoser> we just dont want to delete the changelog entry
<smoser> adam_g, python-quantumclient seems to be missing some stuff also
<adam_g> smoser: what kinda stuff
<smoser> looking more
<smoser> http://paste.ubuntu.com/886876/
<smoser> adam_g, ^
<smoser> thats diff of precise source (lp:ubuntu/precise/python-quantumclient) to lp:~ubuntu-server-dev/quantum/python-quantumclient/
<adam_g> yeha
<adam_g> it looks like the ubuntu-server-dev is not in sync
<adam_g> smoser: we should create a lp:~ubuntu-server-dev/python-quantumclient/essex like all the others, not sure why that one differs
<smoser> right.
<adam_g> smoser: can you push a clone of lp:ubuntu/precise/python-quantumclient  there, ill create a -proposed and resolve the diff
<smoser> well, this is where it gets hairy
<smoser> the lp:ubuntu/precise/python-quantumclient are full branches
<smoser> but the ones on ubuntu-server-dev are packaging only
<smoser> i'll try to sort it out and get something to ubuntu-server-dev.
<adam_g> smoser: oh i see, so that package isn't setup for this kinda workflow yet?
<smoser> i guess not. do you know who initially did that ?
<smoser> bah, adam_g last pastebinit was garbage
<smoser> http://paste.ubuntu.com/886884/
<smoser> is better.
<smoser> but i'll resolve
<adam_g> thanks
<cr3> smoser: I can start an instance with a security group but what if I want to change that group for an existing instance later?
<smoser> hm... maybe you can take a hike
<smoser> http://serverfault.com/questions/237557/how-to-change-an-ec2-instances-security-group
<smoser> yeah, looks like you can take a hike, cr3
<cr3> smoser: I suspect there's a play on words there but it's totally lost on me; I blame it on my 300-word vocabulary
<smoser> go jump in a lake?
<smoser> hm...
<smoser> cr3, i think this is correct:
<smoser> Security Groups are assigned to an instance when it is created. Once the instance is running, you cannot assign more groups nor can you remove groups from a running instance. You can only change individual rules in security groups.
<smoser> http://blog.taggesell.de/index.php?/archives/68-Managing-Amazon-EC2-Networking-In-The-Cloud.html
<smoser> adam_g, it looks like zul must have synced from debian on that.
<adam_g> smoser: hmph, okay
<smoser> adam_g, would you be opposed to waiting for chuck on this ?
<adam_g> smoser: not at all
<smoser> it seems mostly resolvable.
<smoser> but the debian/copyright has changes..
<adam_g> smoser: yeah, quantum has been largely chucks deal anyway. first time touching those packages for me
<smoser> well, i'm gonna leave that one.
<smoser> ok.
<smoser> i have
<smoser> glance  keystone  nova  python-keystoneclient  python-novaclient
<smoser> done
<smoser> what else was there?
 * med_ perks up his ears on the quantum stuff and reads scrollback.
<adam_g> smoser: horizon
<smoser> k. i'll get horizon and swift too.
<adam_g> yeah, swift... just merge james' proposal, im not going to do a new upstream release till we know what's going in there
<adam_g> smoser: are we good on the rest? i need to run an errand downtown and probably wont be back till your EOD
<smoser> i have swift and horizon i should be able to get.
<Daviey> smoser: last quantum upload introduced a new binary package, which contained a moved bin from another package.  It didn't have appropriate Breaks.. so it was rejected.
<KM0201> whats the command to move a folder (with data) to another location?   mv /path/to/folder /path/to/destination   keeps telling me "the directory is not empty" so i'm assuming i'm missing a switch
<stgraber> hallyn: can you push your latest lxc upload to the udd branch?
<stgraber> hallyn: did it the old school way :)
#ubuntu-server 2012-03-17
<stgraber> hallyn: pushed ubuntu40 with apparmor turned back on for the bits that only needed the new userspace
<stgraber> hallyn: there are 3 FIXMEs left, one that's fixed with the new kernel so should be safe to do later this week and the two others need some apparmor work
<stgraber> jjohansen: any ETA for the remaining fixes?
<smoser> SpamapS, if you feel like it, SRU sponsorship waiting at
<smoser> https://bugs.launchpad.net/ubuntu/+source/apt/+bug/948461
<uvirtbot> Launchpad bug 948461 in apt "apt-get hashsum/size mismatch because s3 mirrors don't support http pipelining correctly" [High,Confirmed]
<hallyn> stgraber: sorry didn't see that. the UDD branch seemed to get out of sync, but i import-dsc'd and pushed, seems ok now
<hallyn> stgraber: then I did lp:~serge-hallyn/ubuntu/precise/lxc/lxc-shutdown.  haven't tested it yet :)  i think i'm done for the night
<hallyn> stgraber: thanks for doing the apparmor stuff
<hallyn> ttyl
<smoser> hallyn, still awake ?
<smoser> must have missed you.
<smoser> if you see this, https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/957622
<uvirtbot> Launchpad bug 957622 in qemu-kvm "kvm -kernel with grub multiboot kernel dumps core or exits" [Undecided,New]
<hallyn> smoser: i did see it hit my inbox
<hallyn> i'll try to reproduce and repr with qemu.git
<smoser> fix attached to bug
<smoser> hallyn,
<smoser> sent upstream, it doesnt crash anymore
<hallyn> yay!
<hallyn> thanks
<smoser> will test more on monday. haven't actually tried loading a working disk image
<smoser> only loading and seeing no crash
<jjohansen> stgraber: I think they are fixed and am starting to do some testing so, say Monday
<hallyn> stgraber: all right, now lxc-shutdown is tested, and working nicely
<hallyn> gnight
<drounse> does ubuntu server have dhcp?
<taipres> it should, if not you can apt-get install dhcp probably
<ChmEarl> dhcp3-server
<taipres> apt-get install dhcp3-server
<drounse> ok thank you, what if i want to use pfsense too? does that have dhcp?
<onekenthomas> lmgtfy.com?
<p3rsist> Hi guys. Im trying to append another public key (from a second host) to the remote host's authorized_keys, but it doesn't work. Usually, you just append the new public key to authorized_keys right?
<onekenthomas> y,  but...
<onekenthomas> there can be some other aspects
<onekenthomas> which I only vaguely remember ATM from setting up gitosis,  which is pretty picky about the key format
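(What p3rsist describes is usually just an append plus strict permissions; a common reason it "doesn't work" is a group- or world-readable ~/.ssh. A sketch of the steps, simulated against a local temp directory standing in for the remote home; the key strings are placeholders:)

```shell
# REMOTE_HOME is a local stand-in for the remote user's home directory;
# on a real host you would run the same steps over ssh, or use ssh-copy-id.
REMOTE_HOME=$(mktemp -d)
mkdir -p "$REMOTE_HOME/.ssh"

# Append one public key per line (placeholder key material).
echo "ssh-rsa AAAAB3...key1 user@host1" >> "$REMOTE_HOME/.ssh/authorized_keys"
echo "ssh-rsa AAAAB3...key2 user@host2" >> "$REMOTE_HOME/.ssh/authorized_keys"

# sshd refuses keys when these are too permissive.
chmod 700 "$REMOTE_HOME/.ssh"
chmod 600 "$REMOTE_HOME/.ssh/authorized_keys"
```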
<bencer_> jamespage: everything should be ready here: https://launchpad.net/~bencer/+archive/zentyal-2.3-p/+packages
<uvirtbot> New bug: #957727 in apr (main) "apr: update to 1.4.6 to fix svn fsfs repository corruption" [Undecided,New] https://launchpad.net/bugs/957727
<uvirtbot> New bug: #957820 in tomcat6 (main) "package tomcat6 6.0.28-10ubuntu2.3 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/957820
<gnuskool> i got a WAP on which I have turned off all routing, and want to use ubuntu as the routing/dhcp server, but cant get this to function - do i have to connect the server via PPPoE to the modem, or can a LAN port on the AP be enough to get a connection to the internet?
<stgraber> jjohansen: perfect, Monday sounds good
<arooni-mobile> according to http://www.php2s.com/linux/installing-lamp-linux-apache-mysql-php-on-ubuntu-11-10.html ; there should be /etc/apache2/available-apps on ubuntu 11.10; but i have the apache2 package installed and that directory isnt there
<arooni-mobile> nevermind; it wasnt installed correctly
<koolhead17> !apache2
<koolhead17> !lamp
<ubottu> LAMP is an acronym for Linux-Apache-MySQL-PHP. However, the term is often used for setups using alternative but different software, such as Perl or Python instead of PHP, and Postgres instead of MySQL. For help with setting up LAMP on Ubuntu, see  https://help.ubuntu.com/community/ApacheMySQLPHP - See also the Server CD installation process (different in Edgy+)
<koolhead17> arooni-mobile, probably this will help you
<RoyK>  19:31:25 up 12 days,  4:02,  1 user,  load average: 49.00, 48.88, 48.78
<RoyK> (seems someone has started a wee bit too many run jobs on that box...)
<patdk-lap> ya, I do that sometimes
<TimR> does anybody know why I am getting this error: http://pastebin.com/j3UqEeeG
<TimR> when I try to install horde
<nancy--> i have installed apache. how can i point my www.domain.com to the  /var/www folder?     i mean manage dns ?
<RoyK> TimR: did you try to read the file the error message tells you to read?
<TimR> where would I read the error at?
<RoyK> you just pastebinned it
<TimR> ya I read it a half million times
<TimR> it sounds like the setup when you installed it is all messed up
<RoyK> from your pastebin 'Read /usr/share/doc/horde3/README.Debian.gz on how to allow access.'
<TimR> nevermind I got it to work
<patdk-lap> heh
<TimR> I didnt remove the stupid echo in the config file
<nancy--> i have installed apache. how can i point my www.domain.com to the  /var/www folder?     i mean manage dns ?
<TimR> you need a to change your apache2 sites-available to match your /var/www/ setup
<RoyK> TimR: that's not related to dns...
<RoyK> !apache
<ubottu> LAMP is an acronym for Linux-Apache-MySQL-PHP. However, the term is often used for setups using alternative but different software, such as Perl or Python instead of PHP, and Postgres instead of MySQL. For help with setting up LAMP on Ubuntu, see  https://help.ubuntu.com/community/ApacheMySQLPHP - See also the Server CD installation process (different in Edgy+)
<TimR> then nancy--'s question isnt very clear
<RoyK> hm... anyway - nancy-- - take a look in /etc/apache2/sites-available
<RoyK> and sites-enabled
<RoyK> that's where   sites are configured
<RoyK> (symlinked to sites-enabled)
<TimR> see, you're pointing at the same thing I just said
<nancy--> hm..
<RoyK> nancy--: don't just put anything in /var/www, rather create a subdir per site
<nancy--> hmm. ya.
<RoyK> or only one even if you only have one site (for future growth)
<nancy--> so /var/www/mysite/index.php
<nancy--> ?
<RoyK> yeah
<nancy--> ok.
<RoyK> just, index.php is a file, not a dir
<RoyK> so the virtualhost should point to the dir
<nancy--> hm
<RoyK> !virtualhost
<RoyK> google apache and virtualhost - it's rather trivial
<RoyK> the only non-trivial bit, is the default website sometimes getting in the way, so you may have to remove that
<RoyK> that'll be /etc/apache2/sites-enabled/default
<nancy--> hmm
<RoyK> nancy--: but just try to configure a new vhost first
<RoyK> it usually works
<nancy--> vhost?
<nancy--> how
<sako> how do i see what files a package installs?
<sako> i am interested in locating some .conf files
<RoyK> sako: dpkg -L pkgname
<sako> thanks
<RoyK> nancy--: Apache VirtualHost
<sako> RoyK: how can i see all installed packages?
<nancy--> RoyK,  i just saw the sites-available and enabled files.  but i cant understand how to point /var/www/mysite/    to    www.mysite.com
<RoyK> sako: dpkg -l
<RoyK> or
<RoyK> dpkg -l | grep ^ii
<RoyK> nancy--: you create a file, /etc/apache2/sites-enabled/yoursite.conf, in that you give the path
<patdk-lap> I think he wants to know how to configure dns :)
<patdk-lap> not the webserver
<RoyK> then you enable it with some command (don't remember) or just symlink it to /etc/apache2/sites-enabled (which is what that command does)
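(The command RoyK couldn't remember is a2ensite, with a2dissite to undo it. A minimal vhost file for nancy--'s case might look like this sketch; the ServerName and docroot are examples:)

```apache
# /etc/apache2/sites-available/mysite -- minimal name-based virtual host
<VirtualHost *:80>
    ServerName  www.mysite.com
    ServerAlias mysite.com
    DocumentRoot /var/www/mysite
    ErrorLog  ${APACHE_LOG_DIR}/mysite-error.log
    CustomLog ${APACHE_LOG_DIR}/mysite-access.log combined
</VirtualHost>
# Enable it and reload:  sudo a2ensite mysite && sudo service apache2 reload
```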
<patdk-lap> but maybe both :)
<RoyK> if you need to configure dns, well, that varies where you have the dns server, some web-based, local bind, dunno
<patdk-lap> looks like my mysql 5.5.20 on lucid with sphinx plugin is working ok
<nancy--> patdk-lap,  yes
<patdk-lap> up-porting the sphinx patch was a lot of work
<nancy--> patdk-lap,  both. dns and webserver
<RoyK> patdk-lap: sphinx?
<patdk-lap> http://sphinxsearch.com/
<RoyK> nancy--: usually the dns server is configured from whereever you bought the domain
<patdk-lap> been using sphinx before it went public
<RoyK> patdk-lap: ah
<patdk-lap> I really should upgrade sphinx sometime, but would take major rewrites to the apps
<nancy--> hmm. RoyK  i think i will configure the dns at my domain registrar . but having problem doing the webserver config. so i just need to make a file named        mysitename.conf  in   /etc/apache2/sites-enabled      ?
<RoyK> nancy--: yes, and that file should hold a VirtualHost directive and its settings
<RoyK> nancy--: google for apache virtualhost
<nancy--> hm.. so i have to include some text in that file too?     virtual host directive and settings . that i need to learn
<nancy--> RoyK,
<patdk-lap> well, if you just want all websites pointed to your server to use /var/www, you don't need to do anything really
<patdk-lap> I would make that my first test
<patdk-lap> after that works
<patdk-lap> then work on breaking it up per site
<hilarie> Drupal7 is mad at my php 5.4, how can I bring it to 5.3?
<RoyK> nancy--: http://paste.ubuntu.com/888232/ <-- an example from my server
<patdk-lap> hilarie, didn't know ubuntu released 5.4 yet
<nancy--> patdk-lap,  i have different sites and need to point them to different folders.
<nancy--> RoyK,  thx!
<hilarie> patdk-lap I don't know what version I am running, I just found this bug report for what is happening to me http://drupal.org/node/1483986
<RoyK> nancy--: that includes a few bits you don't need, though
<patdk-lap> hilarie, there is no php 5.4 in ubuntu
<patdk-lap> that report is useless
<patdk-lap> WHAT is happening to YOU?
 * RoyK blames canada
<hilarie> patdk-lap http://paste.ubuntu.com/888224/
<RoyK> hilarie: create a file, phpinfo.php, with '<?php phpinfo(); ?>' and check what it says about versions
<nancy--> RoyK,  i cant skip them. i just need docroot
<hilarie> okay
<patdk-lap> odd
<RoyK> nancy--: and ServerName (and perhaps ServerAlias (and ErrorLog (and maybe CustomLog)))
<hilarie> Somehow, I am running 5.4
<hilarie> PHP Version 5.4.0-2
<patdk-lap> heh, it didn't come from ubuntu
<patdk-lap> cause no version of php in ubuntu, backports, updates, ... has 5.4 yet, not even unreleased
<hilarie> http://packages.debian.org/sid/all/drupal7/download probably from following that then?
<patdk-lap> well, that isn't ubuntu
<hilarie> https://help.ubuntu.com/community/Drupal told me to!
<RoyK> debian Sid - well - Sid is the kid that breaks toys...
<patdk-lap> hilarie, heh
<patdk-lap> you were supposed to use that link for drupal7 ONLY
<patdk-lap> not to download everything else, like php, apache, and who knows what else
<patdk-lap> you probably have a very unstable system now
<nancy--> RoyK,  hmm. now i need to point the dns at domain registrar and see if it works
<hilarie> patdk-lap what do I do from here?
<nancy--> RoyK,  i just need to put the server ip address in the mx record?
<RoyK> nancy--: mx is mail exchange
<RoyK> nancy--: for a web server, set the right address for the A record
<nancy--> mm... i should make an A record of?
<patdk-lap> hilarie, have 3 choices
<patdk-lap> reinstall :(
<RoyK> nancy--: and ServerName is the name sent by the browser, taken from the url line, like "www.wtf.com"
<patdk-lap> just make drupal work, and trust it's stable enough
<patdk-lap> uninstall php, and reinstall php (after removing that debian package line from apt)
<patdk-lap> I would remove that apt line anyways
<RoyK> nancy--: a dns "a record" is an address record
<RoyK> pointing something like "www.something.tld" to an ip address
<nancy--> RoyK,  hmm. ok
<RoyK> mx is for pointing a domain name to an ip address for mail routing purposes
<RoyK> somehow
<patdk-lap> mx is for pointing to a dns name that receives email, NEVER point to ip address
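(The distinction RoyK and patdk-lap are drawing, in zone-file form; names and addresses here are examples:)

```zone
; A records map names to IP addresses.
www.example.com.    3600  IN  A    203.0.113.10
example.com.        3600  IN  A    203.0.113.10
; An MX record points at a NAME that receives mail, never at an IP.
example.com.        3600  IN  MX   10 mail.example.com.
mail.example.com.   3600  IN  A    203.0.113.20
```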
<hilarie> patdk-lap, should I remove everything that was put on? http://paste.ubuntu.com/888252/
<RoyK> patdk-lap: well, yes, sorry
<nancy--> patdk-lap,  hm
<patdk-lap> hilarie, that would get you back to ubuntu and should be good and stable again, yes
 * RoyK wouldn't use a Sid source for anything in production, and would rather build from sourcecode instead
<nancy--> RoyK,  so what exactly should be in the A record.          58.15.15.511    (server ip)?
<patdk-lap> hopefully none of that stuff did any like db updates
<RoyK> nancy--: yeah
<patdk-lap> royk, heh, I use enough sid stuff, but I always download the source and repackage
<nancy--> RoyK,  just the ip... thats all
<RoyK> patdk-lap: my point
<hilarie> apt-get remove <extra packages installed> <new packages> and <upgraded package> then remove sid on list, then reinstall the removed upgraded ones?
<patdk-lap> you can remove sid now :)
<RoyK> nancy--: the IP and a name for the record, which should be either the domain name (or '@' for short) or something like "www" (with the implicit domain name)
<patdk-lap> from the list
<patdk-lap> make sure you do an update after you remove it
<nancy--> hm
<hilarie> I should have known something was wrong when I had to import a gpg key
<nancy--> RoyK,  i am adding "www" in host       and the server ip in "points to"
<nancy--> RoyK,  i had some previously added mx
<nancy--> 66.96.142.52
<nancy--> pop
<nancy--> 66.96.160.139
<nancy--> ftp
<nancy--> 66.96.160.139
<nancy--> *
<nancy--> 66.96.160.139
<nancy--> mail
<patdk-lap> nancy--, pastebin :)
<nancy--> 66.96.160.139
<nancy--> sorry
<hilarie> pastebin...
<patdk-lap> or use the program, pastebinit :)
<nancy--> ya :)
<hilarie> What is a segmentation fault? http://paste.ubuntu.com/888259/
<nancy--> patdk-lap,  RoyK  what should i write in nameservers?
<RoyK> nancy--: do you have your own name server? or is it at a registrar?
<RoyK> hilarie: it generally means something is pretty messed up - either bad libs or in worst case bad memory
<nancy--> RoyK,  i just bought my domain from fastdomain.com
<RoyK> nancy--: then I guess they have a web service for it - and I guess that what you have pasted here, are A records for various stuff, like pop, ftp etc
<nancy--> RoyK,  now i have a lot of options... mx, ns record, dns (i put www and the ip as an A record), name servers, etc.  i dont know what to put in what
<hilarie> RoyK call the person whose server I was playing with and ask him if he has backups, then re-install you say?
<RoyK> the * would match anything
<nancy--> RoyK, yes
<RoyK> nancy--: just change the IP there
<nancy--> all the ips for mail. ftp * etc ?
<RoyK> hilarie: if you have a bunch of things running, a backup would be nice, otherwise just reinstall
<nancy--> and what to put in the "nameserver"
<RoyK> nancy--: yes
<nancy--> ok
<RoyK> nancy--: probably no need to change the nameserver parts
<nancy--> RoyK,  what should i put there
<RoyK> which domain name is this?
<RoyK> nancy--: this is an ubuntu channel, and generally, we try to help out as well as possible, but newbie questions about dns config may be easier to answer for your registrar...
<nancy--> RoyK,  Priority 	Host 	Points To:
<nancy--> 30               *         mx.open-pages.com
<nancy--> 30               @	mx.open-pages.com
<RoyK> that's the mx
<nancy--> the domain name is open-pages
<RoyK> not relevant
<nancy--> ok
<nancy--> what should be in name servers?
<nancy--> if the mx record is fine.. ?
<nancy--> nameserver and ns record
<RoyK> I get no reply from the registered nameservers (dig mx open-pages.com, host open-pages.com etc)
<RoyK> nancy--: so please ask your registrar - it's not really about ubuntu server
<nancy--> ok. no problem... but it was my last question
<orionsonofneptun> hey
<guntbert> orionsonofneptun: welcome
<orionsonofneptun> im downloading ubuntu server now
<orionsonofneptun> does it have a try before install feature on disc
<KM0201> orionsonofneptun: no
<KM0201> there's not a lot to try though, it's all command line..lol
<KM0201> orionsonofneptun: what do you plan to use it for?
<orionsonofneptun> starting out i will host websites and operate irc bots
<KM0201> better get ready to do a lot of reading.
<orionsonofneptun> getting my network set up
<orionsonofneptun> really, what are the best forums and sites to learn ubuntu server? ive never even used anything but a little cpanel before
<orionsonofneptun> i have wonderful ideas but not the real world knowledge required to achieve my goals, i need all the ubuntu support i can get
<guntbert> !serverguide | orionsonofneptun
<ubottu> orionsonofneptun: The Ubuntu server guide may be found at http://help.ubuntu.com/10.04/serverguide/C/
<KM0201> well, IRC is a good start, ubuntuforums and the server guide.
<orionsonofneptun> ohh ubuntu forums has a server forum
<orionsonofneptun> ok i got a message saying to replace the disc with a supported dvd or cd
<acidflash_> i am having a small problem with rsync
<orionsonofneptun> how do i verify disk is supported
<guntbert> orionsonofneptun: I've never seen such a message
<KM0201> what do you mean disk is supported?
<orionsonofneptun> it says my disk is not supported
<orionsonofneptun> its a weird disk
<acidflash_> it is stopping at almost the exact same place every time, pausing.  i am passing the following command: rsync -avHP --ignore-existing -e ssh root@whatever.com:/videos/youtube /videos/
<acidflash_> --> youtube/1o6T1a--vY8
<acidflash_> 3423460 100%   44.72MB/s    0:00:00 (xfer#902, to-check=823092/896132), youtube/1oBrGpbCGqs,          0   0%    0.00kB/s    0:00:00, its then getting to a certain place, the mentioned place, and stopping. even if i exit the program and pass the same argument, rsync does nothing... if i tell rsync to start ALL over again, it will again count up to that same transfer 902, and stop.  rsync version 3.0.7, protocol version 30 <-- that's the version of rsync i am using
<guntbert> orionsonofneptun: "it"??
<orionsonofneptun> sony handycam 1.4 gb  dvd rw
<orionsonofneptun> its a small disc
<guntbert> !paste | acidflash_
<ubottu> acidflash_: For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<acidflash_> guntbert: thanks
<orionsonofneptun> i did not paste a bunch of stuff, what do you see?
<acidflash_> orionsonofneptun: he meant me
<guntbert> orionsonofneptun: are you talking about the download / burning?
<orionsonofneptun> yup
<guntbert> orionsonofneptun: before burning make sure to check if the download is ok - use !md5sum
<orionsonofneptun> http://www.facebook.com/photo.php?fbid=342419105795452&set=a.167406903296674.30380.100000820959861&type=1&theater
<orionsonofneptun> that is the disk
<orionsonofneptun> is it ok to burn ubuntu server
<KM0201> if your machine can boot a USB, you can also make a bootable usb
<guntbert> orionsonofneptun: are you sure that your machine can boot from a DVD?
<orionsonofneptun> yup it does
<orionsonofneptun> or i would not be here now
<guntbert> how should we know?
<orionsonofneptun> the ubuntu burner says that disk is not supported
<orionsonofneptun> i really want the disk to work
<guntbert> orionsonofneptun: what did you really download? did you check for errors?
<KM0201> orionsonofneptun: why not just boot a usb?
<orionsonofneptun> no not yet how do i check for errors im not understanding  !md5sum
<guntbert> !md5sum | orionsonofneptun
<ubottu> orionsonofneptun: To verify your Ubuntu ISO image (or other files for which an MD5 checksum is provided), see http://help.ubuntu.com/community/HowToMD5SUM or http://www.linuxquestions.org/linux/answers/LQ_ISO/Checking_the_md5sum_in_Windows
<orionsonofneptun> km0201  id rather have disk cause the only thumb drive i got is from clearwire
<KM0201> ok
<orionsonofneptun> are you only supposed to have one bootable program per drive??
<orionsonofneptun> on a usb stick??
<qman__> most installers can only do one
<qman__> you can use a tool like YUMI to create multiboot flash drives, but it's not a universal thing
<orionsonofneptun> qman__ is that to me???
<qman__> yes
<orionsonofneptun> ok then so i need to configure the usb drive to keep clear connection manager and ubuntu server
<orionsonofneptun> im not able to see the storage space of the thumb drive either
<orionsonofneptun> i dont know if there is room
<qman__> orionsonofneptun, recommend you don't use it, multibooting on a flash drive is difficult to set up and doesn't always work, can trash your data
<orionsonofneptun> well im not needing clear that bad anymore
<orionsonofneptun> maybe ill erase the clear files
<orionsonofneptun> how to see capacity of device
<orionsonofneptun> well now you right click the file and there is no delete file option, dangit, what to do????
<orionsonofneptun> ubuntu 11.10
<orionsonofneptun> hello
<stgraber> hallyn: looking at your branch now
<stgraber> hallyn: what's the reason for calling lxc-shutdown twice? wouldn't just calling it once with -t $LXC_SHUTDOWN_TIMEOUT (you're missing the $ in your branch, btw) essentially do the same thing?
<stgraber> hallyn: before looking at your branch, I was actually wondering if we shouldn't extend lxc-stop to do the SIGPWR + timeout part, but now seeing that you made a separate script, I think that's fine like that
<uvirtbot> New bug: #958210 in squid (main) "squid ABRT due to storeLocateVaryRead()" [Undecided,New] https://launchpad.net/bugs/958210
#ubuntu-server 2012-03-18
<hilarie> How can I unmount a USB drive if its busy (trying to format it so it doesn't matter whats on it or anything)
<RoyK> umount -l ?
<hilarie> :)
<hilarie> WARNING: Re-reading the partition table failed with error 22: Invalid argument.
<hilarie> The kernel still uses the old table. The new table will be used at
<hilarie> the next reboot or after you run partprobe(8) or kpartx(8)
<hilarie> sorry, thought that was 1 line
<RoyK> yank the usb cable
<hilarie> Bleh, thats a half hour drive
<qman__> find the process that's accessing it and kill it
<qman__> lsof should be of help
<hilarie> I'd pastebin that but crap thats alot of lines
<hilarie> be back in an hour :)
<qman__> that's what grep's for
<qman__> hilarie, lsof | grep /where/it/was/mounted
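qman__'s `lsof | grep` tip can be sketched end-to-end. The script below recovers the same "who is holding this directory" answer by scanning `/proc` directly, so it runs even where `lsof` isn't installed; `/tmp/busy-demo` stands in for the real mount point, and the background `sleep` only simulates the offending process:

```shell
#!/bin/sh
# Find processes whose working directory sits under a mount point --
# the same answer `lsof | grep /where/it/was/mounted` would give.
MP=/tmp/busy-demo               # stand-in for the busy mount point
mkdir -p "$MP"
( cd "$MP" && sleep 30 ) &      # simulate a process keeping the dir busy
HOLDER=$!
sleep 1
for p in /proc/[0-9]*; do
    cwd=$(readlink "$p/cwd" 2>/dev/null) || continue
    case "$cwd" in
        "$MP"*) echo "busy: pid ${p#/proc/} (cwd $cwd)" ;;  # candidates to kill
    esac
done
kill "$HOLDER" 2>/dev/null      # the real fix: stop the holder, then umount
```

On a real system you would kill (or cleanly stop) each reported PID and then retry the `umount`.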
<hilarie>  I don't understand, do you perform these steps on the server that has the stuff, or the client that wants the things? https://help.ubuntu.com/community/rsync#Rsync_Daemon
<hallyn> stgraber: yeah i was being silly with the double loop, just didn't want to wait 10 seconds for each container in serial.  fixed in the tree
<hallyn> stgraber: I'm ok with extending lxc-stop instead.  not sure what is preferable
<hallyn> hilarie: well, you run an rsync (or these days more likely an sshd) server on the server, then run rsync from the client
<hallyn> long as sshd is running on the server you can rsync to/from it
<hilarie> I am able to rsync from it now
<hilarie> I'd just like to be able to do it automatically
<hilarie> hallyn so if I can grab from the server already, its set up on that side, how can I set my *local* server to pull from it on a regular basis
<hallyn> cron
<hallyn> (but i do it by hand, as i switch back+forth between laptops)
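hallyn's "cron" answer, made concrete: a crontab entry (edited with `crontab -e`) that pulls from the remote nightly over ssh. The host, remote path, and local path below are made up for illustration:

```shell
# m h dom mon dow  command -- pull /srv/data from the remote at 03:15 daily
15 3 * * * rsync -a -e ssh user@fileserver:/srv/data/ /backup/data/
```

This relies on passwordless ssh (a key without a passphrase, or an agent available to cron) so the job can run unattended.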
<hallyn> gnight
<hilarie> thank you hallyn
<stgraber> hallyn: extending lxc-stop would make it more like openvz where "vzctl stop" asks init to shutdown the container, then after a timeout (1 or 2 minutes I think), kill the container
<stgraber> I think if it's not too much effort, it'd make sense to have that part of lxc-stop and be pushed upstream, having lxc-stop + lxc-shutdown might confuse some people
<randomDude> any ideas on how i would make my freeradius authorise based on ldap but authenticate based on kerberos ? (ie samba4)
<randomDude> i have the ldap part working, but freeradius rejects since samba4 doesn't keep passwords in the ldap, it keeps them in the kerberos database i think
<hallyn> stgraber: i'll see how upstream feels about a patch to taht effect
<hallyn> (now i'm off, gnight :)
<m-Ali> hello guys
<uvirtbot> New bug: #958447 in unixodbc (main) "package odbcinst1debian2 2.2.14p2-5ubuntu3 failed to install/upgrade: ErrorMessage: dependency problems - leaving unconfigured" [Undecided,New] https://launchpad.net/bugs/958447
<uvirtbot> New bug: #958448 in unixodbc (main) "package unixodbc 2.2.14p2-5ubuntu3 failed to install/upgrade: ErrorMessage: dependency problems - leaving unconfigured" [Undecided,New] https://launchpad.net/bugs/958448
<acidflash> hello all
<acidflash> i am fscking my computer, and on a particular drive, i was asked probably half a million times to fix the inodes, and after all that, my screen is now being flooded with numbers, and the cpu is 100% and its been like that for a while,
<acidflash> any idea why this is happening, and if its normal or not  (while trying to repair file system)
<nancy--> any good tutorial to configure my domain dns with my linux server ? how to set A records , nameservers, ns record, mx record. etc. i dont know a crap..?
<bluefrog> nancy--, http://doc.ubuntu.com/ubuntu/serverguide/C/
<nancy--> bluefrog,
<acidflash> nancy--: thats daemon specific
<acidflash> nancy--: the truth is, you should specify which DNS daemon you are using, in order to get that kind of answer
<StevenR_> nancy--: in fact, let's keep everything here, rather than in two channels
<nancy--> i have registered  a domain.com and i have a vps with apache2 installed (ubuntu) . i want to set up A records , namesservers, ns reocrds and other. and on the vps i need a seperate forlder in /var/www/site  for each domain. how can i do it. is there a tutorial?
<nancy--> k
<bluefrog> nancy--, http://doc.ubuntu.com/ubuntu/serverguide/C/
<nancy--> bluefrog,  ok. thats for vps. what about domain
<StevenR_> nancy--: so, does your registrar host your domain for you?
<bluefrog> serverguide is for you. talking about dns apache virtual host and so on
<nancy--> StevenR_,  no . as i told you . i have a vps
<Nafallo> I suspect the nice registrar has an exciting web interface to make dns configuration easy
<nancy--> bluefrog,  i dont know how to put ips and names in dns control panell of domain
<StevenR_> (having a vps doesn't mean your domain is or isn't hosted with your registrar)
<nancy--> Nafallo,  ^
<nancy--> my dns would be hosted by the registrar i gues.
<nancy--> its from fastdomains
<Nafallo> oh dear.
<StevenR_> Nafallo: ?
<Nafallo> I've had very bad experience with them.
<nancy--> like what
<Nafallo> such as pulling the NS records for a domain without asking and not being able to give them back.
<Nafallo> the domain has been down for a year by now.
<Nafallo> so yeah; oh dear
<StevenR_> ok, so use their control panel to set up the domain hosting (not the same as web hosting) and create the records that you need. Then setup the appropriate virtual hosts on your vps.
<StevenR_> nancy--: ^^
<nancy--> i have control pannel by which i can chage A records. nameservers etc. thats all
 * StevenR_ lunch &
<nancy--> what should i put in ns record?
<nancy--> acidflash,  so that relates to the linux server? no the domain. i mean i have to setup the bind server first to put something like ns1.ip here.   ?
<acidflash> nancy--: yes
<acidflash> nancy--: I personally use unbound, not bind
<nancy--> k
<acidflash> unbound is forwarding resolving caching dns
<acidflash> bind is not
<nancy--> k
<nancy-->  if i need to setup my domains with my vps. what things do i need to setup?
<nancy-->  i have a .com domain. and a vps. how to host my site?
<uvirtbot> New bug: #958534 in samba (main) "Update to version 3.6.3-2" [Undecided,In progress] https://launchpad.net/bugs/958534
<oCean> nancy--: don't crosspost please
<wonderman> hi, can someone tell me how to check what scripts will run on reboot of my server, previously apache has modules loaded that i had disabled, and i want to know if ubuntu's default config is telling it to do this
<qman__> did you use a2dismod to disable them?
<wonderman> yes
<wonderman> it may not happen this time around, but it did previously, and i want to be sure before i reboot
<qman__> check for links in /etc/apache2/mods-enabled/
<wonderman> what is the cleanest way to reboot btw? never really do it but ive upgraded a bunch of packages
<wonderman> kernel also
<qman__> if it's in there, it gets loaded, if not, it doesn't
<qman__> sudo reboot
<qman__> you can also press ctrl alt delete at a local console
<wonderman> okies
<nancy-->  i have set it up. virtual host is set and A record is set too. but the domain is not opening the index.htm on my vps. why is that?
<jcastro> smoser, is it possible to move an s3 bucket from one person to another?
<jcastro> or copy from one to another without downloading first and then reuploading?
<koolhead17> second option is possible jcastro :)
<jcastro> ok
<wonderman> nancy--: thats hardly any info
<nancy--> wonderman,  http://pastebin.com/EfFLn92z any help?
<nancy--> http://pastebin.com/EfFLn92z any help with permission for apache ?
<oCean> nancy--: STOP posting in 2 channels
<koolhead17> nancy--, have you tried installing apache before?
<nancy--> yes but never configured bind
<koolhead17> nancy--, what exactly your trying to install
<koolhead17> i see phpmyadmin
<koolhead17> webmin
<koolhead17> and other stuff there
<nancy--> koolhead17,  i installed apache. php, mysql and bind. was trying to configure bind . restarted. and got errors
<oCean> nancy--: I removed you from #ubuntu. You can ask here, but not in 2 channels
<ikonia> too late
<ikonia> I'd already removed him (lag)
<koolhead17> ikonia, :P
<uvirtbot> New bug: #958704 in autofs (main) "ubuntu 12.04 boot with nis(ypbind) autofs (nfs) not working. " [Undecided,New] https://launchpad.net/bugs/958704
<wonderman> nancy you obv have no idea what you are doing
<wonderman> start by reading some docs
<dsirijus> so, i have this java server (smartfox)... where should i place it (it's extract and run thing)?
<lickalott> anyone know of a good C channel?
<guntbert> lickalott: ##c   :-)
<KBentley57> hey guys, upon each reboot, does ubuntu server do a fsk?
<RoyK> fsk fsk fsk
<hallyn> stgraber: despite the changelog msg, ubuntu:upstart does *not* have debian/conf/power-status-changed.conf
<stgraber> hallyn: look for /etc/init/shutdown.conf instead
<stgraber> hallyn: I renamed it to avoid potential event name conflict between the main event (power-status-changed) and the start/stop events emited by the job with the same name
<hallyn> ah i see
<hallyn> in that case, i'm trying to figure out why 'stop on starting rc RUNLEVEL=[016]' is not triggering
<hallyn> does shutdown not change the runlevel?
<hallyn> seems unsafe
<hallyn> in particular, libvirt won't stop its vms if the host gets SIGPWR, it seems
<stgraber> shutdown not changing runlevel sounds like a bug, it definitely should do it which should trigger upstart's rc job
<hallyn> no i misunderstood
<hallyn> the problem is that pre-stop isn't being done
<hallyn> post-stop is
<hallyn> (i think that's what's going on)
<KBentley57> hey, Just coming back for a few minutes.  I've installed ubuntu server on a pc with 5 hard disks.  Everything works pretty much fine, except whenever I reboot, sometimes it thrashes for hours, and I can't login to see what is happening (headless).  I know that sometimes it does a file system check, and sometimes it re-syncs the softraid, could it be anything else?
#ubuntu-server 2013-03-11
<fabioval> someone could direct me to a good tutorial on drbd + gfs?
<james1980> where does dovecot store its error logs?
<ikonia> james1980: /var/log - probably in /var/log/mail
<james1980> both empty "new file"
<ikonia> they are directories
<Syria> hi
<Syria> I have deleted apache but I am still getting the "It works" in my broswer !
<Syria> This did not help as well sudo update-rc.d -f apache2 remove
<Syria> Hi, I have deleted apache but I am still getting the "It works!" in my browser.
<sarnold> Syria: if you force a refresh iwth something like shift+reload, is it still there?
<sarnold> Syria: if you test with a simpler browser such as w3m or lynx is it still there?
<Syria> sarnold: Yes, Also using another browser did not change anything.
<sarnold> Syria: does netstat -lnp | grep :80 show apache running?
<Syria> sarnold: http://paste.ubuntu.com/5604194/
<sarnold> Syria: re-run with 'sudo', the last line of output sure looks like a webserver
<Syria> sarnold: tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      3798/apache2
<sarnold> yay :)
<sarnold> Syria: kill 3798
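The `netstat -lnp` line Syria pasted already contains the PID sarnold told him to kill; extracting it mechanically looks like this (the input line is copied from the conversation, the parsing is plain `awk`/`cut`):

```shell
#!/bin/sh
# Pull the PID out of a `sudo netstat -lnp` line; killing that PID (with
# sudo on a real system) is what finally stops the leftover apache2.
line='tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3798/apache2'
pid=$(echo "$line" | awk '{print $NF}' | cut -d/ -f1)
echo "$pid"            # prints 3798
# sudo kill "$pid"     # commented out: needs root, and only makes sense live
```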
<sarnold> Syria: how did you delete apache? I'm surprised a process is left over...
<Syria> sarnold:  apt-get remove the sudo update-rc.d -f apache2 remove
<Syria> *then
<sarnold> Syria: I must admit I don't delete apache often :) but I would have expected that to kill apache.
<sarnold> okay bed time :)
<Syria> sarnold:  sudo apt-get remove apache2 >>> is thic correct ?
<Syria> Anybody knows why I am still getting the "It works!" after removing apache2 please?
<Syria> Lost the connection.
<Syria> I was asking about apache, After removing apache2 I am still getting the "It works!"
<jamespage> rbasak, around? I have a favor to ask
<rbasak> jamespage: morning! What's up?
<histo> favors??
<jamespage> rbasak, morning!
<jamespage> rbasak, so..... mongodb
 * rbasak grabs an ARM machine from the lab :-P
<jamespage> rbasak, I have a couple of issues on ARM and I'm well out of my depth and I wondered whether you had time to take a look?
<rbasak> Sure!
<jamespage> rbasak, OK - so issue one relates to the arm-support patch that is in the current raring packaging
<jamespage> rbasak, I picked this from the fedora-arm ML
<jamespage> rbasak, but on arm mongod just spins on startup in atomic_intrinsics_gcc.h
<jamespage> rbasak, so I'm not convinced its really working right :-)
<jamespage> i had a hack using gcc __sync builtins and got it to start but then hit issue 2)
<jamespage> rbasak, issue 2 is I think something todo with alignment
<psivaa> hallyn: ack, thanks
 * rbasak looks
<jamespage> rbasak, you can see this by running the mongo command
<jamespage> it SIGBUS's straight away
<jamespage> rbasak, src/mongo/bson/bsonelement.h:581
<jamespage> rbasak, OH - btw I enabled ddeb's for mongodb - thats already inthe archive
<koolhead17> Daviey: jamespage around?
<koolhead17> needed confirmation if i should cloudrepo 4 swift installation as well if am trying folsom on precise?
<Daviey> koolhead17: you should.
<Daviey> cloud archive
<koolhead17> Daviey: thanks!!
<NaGeL_Work> !webmin
<ubottu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<SystemParadox> morning all. Please can someone help me out with environment variables and cron. I'm fairly experienced, but cannot figure this out. Even if crontab has SHELL=/bin/bash and is executing a script with #!/bin/bash, the PATH is *still* wrong- it's not using /etc/environment, but then I can't find any documentation for this file either- is this bash or ubuntu specific? Thanks
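SystemParadox's question goes unanswered in the log; the usual explanation is that `/etc/environment` is applied by PAM at login, so cron jobs never see it, and cron supplies its own minimal PATH regardless of the script's shebang (`SHELL=` only selects the interpreter, not the environment). The standard fix is to declare PATH in the crontab itself; the job path below is illustrative:

```shell
# crontab -e -- cron ignores /etc/environment (PAM reads it at login),
# so set PATH here; SHELL picks the interpreter, not the environment.
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 3 * * * /home/user/bin/nightly-job.sh
```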
<NaGeL_Work> is there any alternatives for webmin?
<NaGeL_Work> that is supported by ubuntu?
<NaGeL_Work> or is there any guide how to set up your web server? and secure it?
<NaGeL_Work> is virtualmin supported by ubuntu-server?
<jpds> N: Unable to locate package virtualmin
<jpds> No.
<NaGeL_Work> any alternatives to that? or webmin?
<jpds> !webmin | NaGeL_Work
<ubottu> NaGeL_Work: webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<NaGeL_Work> Alternatives?
<Patrickdk> ssh?
<NaGeL_Work> something with a GUI. shh isa back up for me due that i don't really trust my command line knowledge
<koolhead17> Daviey: planning to give grizzly try from tomorrow :D
<koolhead17> deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main  <-- this is what i will use
<Diegonat> hi guys! I have got a huge problem. I have a luks partition /dev/mapper/archive-vmbackup but when I try to close it, "cryptsetup --verbose luksClose /dev/mapper/archive-vmbackup"
<Diegonat> "Command failed with code 19: No such device" I have got this output... Do you have any idea?
<zenzio> "I'm glad Sony decided to go with 8gb RAM because it means that the PS4 will out-power most PC's for years to come."... say what?
<psivaa> jamespage: just seen this one time post install failure with postgresql raring smoke tests: https://jenkins.qa.ubuntu.com/view/Raring/view/Smoke%20Testing/job/raring-server-amd64-smoke-postgresql-server/115/
<psivaa> jamespage: not sure what's causing and it only happened once. dont see much difference from the previous passing install logs.
<zenzio> er... that was the wrong channel, sorry
<psivaa> jamespage: it would help if you could shed some light when you have some time please. thanks
<NaGeL_Work> I goat question which would be better configurable/ more useful : zend server CE or plain apache2 and installing APC to it(and other useful stuffs)
<rbasak> jamespage: looks like there are already arrangements in place to avoid the unaligned access for the double on ARM. Evidently it's not working though. I'm still looking at it.
<jamespage> rbasak, yeah - some of that came from work in ubuntu for the 2.0.x series
<jamespage> yolanda, any chance you can take a look as to why ceilometer is ftbfs in the lab please?
<yolanda> sure
<hallyn> stgraber: 1. patch hit libvirt m-l for userns support for libvirt-lxc
<hallyn> stgraber: 2. i intend to push my hacky fix for lxc-execute to raring today, so that the utah tests can pass (or go on to the next failure :)
<stgraber> hallyn: ok. I'm about to send an e-mail to lxc-devel saying that the rc1 pull request will go out on Wednesday
<yolanda> jamespage, i'm having a problem with README.md in docs
<yolanda> looking at git it's there but i have this error: dh_installdocs: cp -a README.md debian/python-ceilometer/usr/share/doc/python-ceilometer returned exit code 1
<Xaro> Hey
<jamespage> yolanda, OK - I'll take a look
<jamespage> yolanda, in the meantime - https://code.launchpad.net/~james-page/keystone/fixup-test-overrides/+merge/152680
<yolanda> i tried to move file from docs to ceilometer-common.docs but still same problem, let me send you the MP anyway because i fixed other problems
<yolanda> jamespage, found that ceilometer isn't following the other packages in .docs, i'll take a look at nova, glance... and try to do the same
<jamespage> yolanda, hmm
<jamespage> I don't see that issue
<yolanda> apart from removing patches, it builds for you?
<yolanda> if i remove docs file it works, but as soon as i add it, i have this problem
<jamespage> yolanda, yep
<yolanda> that fails for me :(
<jamespage> yolanda, all I did was rm the patches folder
<yolanda> yes, i did the same
<yolanda> but it doesn't build due to that docs problem
<yolanda> how do you generate the source? i pull from git, then i do an sdist, and use that
<jamespage> yolanda, I have a hack
<jamespage> yolanda, http://paste.ubuntu.com/5604865/
<yolanda> maybe is the way that i do that, that it causes problem? it always worked for me
<yolanda> jamespage, but it shouldn't be the same as doing a python sdist?
<jamespage> yolanda, I think so yes
<yolanda> let me try with that script anyway
<yolanda> because i cannot build
<yolanda> jamespage, seems that code shouldn't be the same, i can build with your tool to download source, and not with sdist
<yolanda> i'll submit the MP
<Quest> hi, I have question about google serach
<eutheria> i've setup ubuntu to do some routing for me, i have a weird issue where the traceroute going in one direction takes a long time with some timeouts
<Quest> my domain is new, I have a link to sub.mysite.com  in the mysite.com/index.html. mysite.com is visible in google search but sub.mysite.com is not(not by keywords of its contents nor by its name . what could be the reason?
<YamakasY2> guys is it doable to install a server with 5GB for usr and var ?
<yolanda> jamespage https://code.launchpad.net/~yolanda.robla/ceilometer/grizzly/+merge/152686
<jamespage> yolanda, did you mean to leave that change in ceilometer-common.install
<yolanda> jamespage, no, sorry, it was due to my problems with building, it was also giving problems with that file
<yolanda> i'll correct it
<yolanda> i didn't commit the file :(
<yolanda> jamespage, recheck
<YamakasY2> guys, do you seperate var and usr ?
<Quest>  disconnected..
<Quest>  so i do need site maps and robots.txt ?  just linking pages with each other by hyperlinks is not suffice for subdomains?
<jamespage> yolanda, conditional approve - please update the changelog and merge
<yolanda> ok
<YamakasY2> I think it's still wise to seperare /usr and /var but what should be the size ? wil 5GB per partition do ?
<stgraber> hallyn: can you review my lxc-lenny removal patch on lxc-devel?
<hallyn> didn't i ack that already?
<hallyn> guess not
<stgraber> hallyn: not that I can remember
<stgraber> hallyn: I also just sent another batch of 3 changes. Adding get_version to the API and tweaking the python binding. I think that's mostly it for changes I have for rc1 here.
<hallyn> stgraber: should we take the kmsg patch?  (if so we should add that option to lxc-ubuntu template's boilerplate)
<hallyn> stgraber: got one of your 3 patches so far :)
<hallyn> ah there they are
<stgraber> hallyn: oh, I meant to comment on this one. I'm fine with it but I'd prefer the default to be "1".
<hallyn> stgraber: agreed :)
<stgraber> hallyn: the problem with changing the templates is that it won't upgrade existing containers
<hallyn> true
<hallyn> stgraber: of course there is one argument in favor of getting rid of kmsg handling -
<hallyn> it'll annoy users so they ask for proper syslog ns :)
<hallyn> stgraber: patch 2 - did you forget to add version.c to the index?
<hallyn> can you resend?
<stgraber> hallyn: version.c is already upstream, it was just missing version.h
<stgraber> (and wasn't really used as far as I can see)
<hallyn> d'oh
<hallyn> i never knew that was there :(
<hallyn> uh, i meant :)
<syncsys_> http://stackoverflow.com/questions/15342665/google-seo-indexing-subdomains
<jamespage> yolanda, review pretty please :-) https://code.launchpad.net/~james-page/quantum/iputils-arping/+merge/152716
<yolanda> done
<syncsys_> can anyone answer http://stackoverflow.com/questions/15342665/google-seo-indexing-subdomains
<lunaphyte_> hi.  my network interfaces are being named things like emo, p1p1, p1p2, etc, instead of eth0, eth1, etc.  how can i turn this behavior off?
<lunaphyte_> *em0
<sarnold> lunaphyte_: I think the answer is to fiddle with udev's 70-persistent-net.rules file
<sarnold> (see https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/appe-Consistent_Network_Device_Naming.html for details about the change)
<melmoth> http://mickvaites.com/2009/06/ubuntu-changing-network-device-id-udevrules-d/ (looking for "ubuntu udev network interface name" )
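For reference, a single entry in the `70-persistent-net.rules` file those links describe looks like the line below — it pins the NIC with a given MAC address to a fixed name (the MAC here is illustrative, not from the conversation):

```
# /etc/udev/rules.d/70-persistent-net.rules -- one rule per interface
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```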
<lunaphyte_> great, thank you.  i had found a number of references to udev, but not those two
<lunaphyte_> it's sort of odd too though - i have installed 12.10 quite a few times on a number of different computers, and this is the first time i've seen it use this different device nomenclature
<sarnold> hehe, I haven't seen it yet myself either..
<tedski> yeah, i haven't seen driver-based naming on ubuntu... are you sure you didn't install centos6 :P
<acalvo> Hello
<acalvo> Preseeding seems to fail to load the preseed file using the alternate install cd and setting up isolinux to load it from the cdrom
<acalvo> however, passing parameters (such as locales) do work
<acalvo> I'm not using ubiquity as an unattended installation may not have a graphical frontend runnig
<xnox> acalvo: too many negatives. Are you preseeding? Which image are you using? (pxe, mini, server, alternate, desktop)? And which release?
<acalvo> I'm using an LTS (12.04) alternate image (x86_64)
<ScottK> stgraber: I think we're about at the time where migration freezes should go in place.  Since you volunteered to take care of that ....  If you look in the bzr history for the block I put up for Alpha 1/2, please just use that one again.
<ScottK> Oops.
<ScottK> Wrong channel.
<jamespage> smoser, utlemming: are there any know issues with raring with regards to virtualized performance?
<jamespage> (before I go dig into something I saw end of last week)
<smoser> jamespage, not that i'm aware of.
<dragoonis> command not found "do-release-upgrade"
<smoser> what does "virtualized performance" mean ?
<dragoonis> I'm on Ubuntu 10.10, trying to get up to 12.x
<jamespage> smoser, so I'm sure I saw a much higher host load average when deploying openstack-ha on virtual maas when compared to precise
<jamespage> smoser, like I said I need to dig deeper I think
<dragoonis> got it.
<jamespage> yolanda, oh great - ceilometer has grown hbase support - https://github.com/wbolster/happybase
<yolanda> i saw some comments about that in the channel last week
<yolanda> they have already deployed it?
<jamespage> yolanda, its in the codebase; we might have to disable that bit for grizzly
<jamespage> yolanda, its a bit late to be uploading new packages and happybase is not packaged
<yolanda> ok, i can take a look
<yolanda> we are recommending mongo in our package
<jamespage> yolanda, yes
<jamespage> hbase is not even packaged
<jamespage> charmed - yes
<jamespage> packaged - no
<jamespage> yolanda, (found that whilst trying your charm btw :-))
<yolanda> ok, i'll take a look tomorrow morning
<yolanda> how is my charm anyway?  :)
 * tgm4883 scowls at Daviey 
<Daviey> tgm4883: ?
<tgm4883> http://mythbuntu.org/
<tgm4883> Daviey, did you try adding the DNS entries as provided by Google?
<Daviey> hmm
<Daviey> -> taken to a different channel
<Vrga> Hello folks, question for anyone who knows, under which user should the /var/www and folders therein be? www-data or?
<Vrga> (apache2 related question)
<RoyK> usually root
<RoyK> you don't want www-data to be able to write to stuff like php files
<RoyK> if you need uploads, create a separate dir writable by www-data
<RoyK> that is - no need for it to be owned by root, just not www-data
<Vrga> ah, ok.
<RoyK> Vrga: thing is, if you have a php script that is somewhat flakey, write permission for www-data may allow the php scripts to be rewritten by code injection
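RoyK's layout, sketched in a scratch directory so it runs without root: code directory not writable, one uploads directory that is. On a real vhost the tree would live under `/var/www` and `uploads/` would be `chown`'d to www-data as he describes; all paths here are illustrative:

```shell
#!/bin/sh
# Web code read-only, one writable uploads/ dir -- done in /tmp so no
# sudo is needed; on a real server uploads/ is chown'd to www-data
# instead of merely being left group-writable.
SITE=/tmp/www-demo/mysite
mkdir -p "$SITE/uploads"
chmod 775 "$SITE/uploads"   # real server: chown www-data: "$SITE/uploads"
chmod 555 "$SITE"           # code dir: readable/traversable, not writable
stat -c '%a %n' "$SITE" "$SITE/uploads"
```

This way a flaky PHP script running as www-data can accept uploads but cannot rewrite its own code, which is the injection scenario RoyK is warning about.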
<Vrga> currently, i'm trying to figure out why my virtual hosts are being iffy
<Vrga> thought it was iffy permissions, but its something else i think...
<RoyK> what's iffy about them?
<sarnold> Vrga: check your logs for details?
<RoyK> if it's successful code injection, it probably won't be logged (at least not if using POST)
<Vrga> RoyK, too long to explain at the moment, need to collect all the details and thoughts on the whole issue. And if it were as trivial as tailing logs, i wouldnt be ere :D
<sarnold> Vrga: it's all too often overlooked :(
<sarnold> RoyK: too true :)
<Vrga> i know, i know :D
<Vrga> btw, what about wsgi scripts? do they need to be under the www-data user or root user?
<lunaphyte_> sarnold, tedski: yeah, same here.  what's also a little odd is that /etc/udev/rules.d/70-persistent-net.rules is not being regenerated when i reboot.
<sarnold> lunaphyte_: I _think_ it is rebuilt only on special occasions, and probably leaves alone things you have configured yourself. (i've forgotten the conditions now, and I'm not entirely sure where to look...)
<lunaphyte_> hmm, ok.
<lunaphyte_> aha!  i found the culprit :)
<lunaphyte_> http://packages.ubuntu.com/quantal/biosdevname
<lunaphyte_> apt-get --purge autoremove biosdevname makes it go back to the "traditional" behavior
<lunaphyte_> https://lists.ubuntu.com/archives/ubuntu-devel/2012-January/034687.html
<lunaphyte_> this particular computer happens to be an optiplex 990.
<lunaphyte_> so although obviously not a poweredge server, certainly a dell.
<lunaphyte_> and 70-persistent-net.rules was regenerated this time.
<lunaphyte_> it's also mildly interesting that the disk this computer is booting from came out an optiplex 760, where i did not see this behavior.
<stgraber> hallyn: do you agree with Dwight's proposed change in syntax for the userns uid mapping? I don't mind changing it now, but ideally it should stay that way from now on ;)
<lunitik> lunaphyte_, I really recommend leaving it around, it actually makes more sense than the traditional naming, especially if you have multiple nics or something like that
<lunitik> lunaphyte_, Usually, for instance, whichever nic comes up first is eth0... in the new naming, it is by physical location, so you always know which is which
<lunaphyte_> lunitik: yeah, i can appreciate that.  it's an interesting concept, and one i'm not empirically discarding, but for the moment, i just want the familiar behavior.
<lunaphyte_> abrupt changes in device nomenclature have the obviously spectacular potential to break a *ton* of stuff.
<lunaphyte_> i can't even begin to list the litany of things which expect ethernet interfaces to be ethn
<lunitik> lunaphyte_, Well, theoretically, you would take steps to handle the change within the update process.
<lunaphyte_> oh, of course.  ultimately, it's a non issue.  but that doesn't absolve the identification mechanism.
<lunitik> lunaphyte_, I am not actually sure why anything would expect ethn... it isn't like its an actual /dev entry, but I'm not familiar with how that works from a programming perspective.
<lunitik> lunaphyte_, Seems the change to sd from hd would have caused more problems, but I don't recall anything
<lunaphyte_> fair point.
<lunaphyte_> it did cause human confusion, of course, just as this did for me at first :)
<lunitik> lunaphyte_, Oh, I looked into biosdevname exactly because I was confused about it, too, but it really does make a lot of sense
<lunaphyte_> a lot of that sort of thing that would expect ethn is truly just poorly written software which is making assumptions that shouldn't be made, but it's been such an axiomatic characteristic of linux for so long, it's almost become gospel.
<lunaphyte_> yeah, i can appreciate the theory, for sure.  i just have to give some thought to my opinion of the practical implications of the implementation.
<hallyn> stgraber: haven't seen that email but if it's what he's mentioned to me before then yes
<stgraber> hallyn: [lxc-devel] [PATCH] make [ug]id map ordering consistent with /proc/<nr>/[ug]id_map
<hallyn> yup that sounds good
<stgraber> ok, I'll pull that in staging then. The diff itself looks good to me. I just wanted to make sure you agreed with the syntax change.
<hallyn> stgraber: for the record i'll also ack it on the list
<stgraber> hallyn: ok
<hallyn> stgraber: for his other patch, i'm checking right nwo - i'm still not sure he's correct about the kernel limtiation
<daguz> I cannot seem to get a Ubuntu server to install as a paravirtualized guest under xen.  Does anyone have any hints?
<daguz> I never see a /proc/sys/xen directory.  So that is why I assume it is not PV.
<daguz> Even though I'm trying to do quantal I'm following (basically) these instructions: http://www.mmacleod.ca/blog/2011/05/ubuntu-natty-narwhal-and-xen/
<hallyn> well whaddyaknow, i'm just plum wrong
<sarnold> o_O
<sarnold> check again
<wpbiggs> hi, can someone tell me if i can use bind in ubuntu server to host my own domain name, or if i misunderstand the technology?
<sarnold> wpbiggs: you sure can, see this guide for one way to do that: https://help.ubuntu.com/12.04/serverguide/dns.html
<wpbiggs> thanks for your reply, so if i follow this guide, i would not have to use a service like godaddy or anything else to regster the domain?
<sarnold> wpbiggs: that is different :)
<wpbiggs> that's what i was afraid of is there a way to do that myself?
<sarnold> wpbiggs: you need a domain registrar such as godaddy or gandi or namecheap; they will typically require you to have two or more nameservers registered with them (aka "in glue") and those nameservers do the DNS...
<wpbiggs> oh, thanks, i had tried to host my domain before a few months ago, but i guess that was the step that i missed.
#ubuntu-server 2013-03-12
<maxped> does anyone use lynx browser? do you know what the blue bar near the bottom of the page is called?
<billy_ran_away> I'm trying to setup a replacement for Gmail since they're no longer doing the whole Exchange ActiveSync thing, can anyone recommend anything?
<billy_ran_away> I was thinking dovecot and z-push, anyone use those?
<holstein> maxped: did you try http://linux.die.net/man/1/lynx to see if hwat you are looking at is mentioned?
<ScottK> dovecot works well.  I didn't try it with z-push.
<billy_ran_away> ScottK: You able to push to iOS?
<maxped> holstein: i sure did. i know what is displayed in the bar can be changed via keyboard shortcut, but i cant find it anywhere. thought maybe if i knew exactly what lynx called that bar/area i could google a bit better
<ScottK> billy_ran_away: No.  No need to for me.
<billy_ran_away> ScottK: : )
<zvm-vlam> \Can anyone help with getting a instance running on kvm ontop of Ubuntu to be able to see my tape drives (instance is a Linux Net Backup server)
<zvm-vlam> Is there anyone alive in here or are you all sleeping??
<histo> zzzZZZzzz
<histo> zvm-vlam: Haven't played with kvm too much so sorry i'm of no help.
<zvm-vlam> k
<YamakasY2> morning
<histo> *yawn*
<YamakasY2> guys I'm figuring out how I should partition my VMs for servers. Normally I had /boot, /, /usr, /var, /tmp and /home separated, but I wonder why I should do this when my VM is already 30GB in this case without any storage in it
<histo> YamakasY2: How is the VM 30GB with no storage? What the hell do you have installed?
<histo> I personally wouldn't be worried about partitioning inside of a VM. The Virtual disks are just all stored in one big file on the host
<zvm-vlam> how can I assign tape drives to a VM
<YamakasY2> histo: no just make sure I can't have a lockup
<YamakasY2> histo: my usr and var are mostly 10GB
<YamakasY2> I'm figuring out if I shall make them 5 GB
<histo> YamakasY2: It's your system do what ever you want.
<histo> zvm-vlam: I'd assume it would be the same way as sharing any other devices like cdrom etc... Tape should be no different
<YamakasY2> histo: that is kinda simple to say... is that the ubuntu way these days ?
<YamakasY2> histo: the question is if it's wise
<histo> !best | YamakasY2
<ubottu> YamakasY2: Usually, there is no single "best" application to perform a given task. It's up to you to choose, depending on your preferences, features you require, and other factors. Do NOT take polls in the channel. If you insist on getting people's opinions, ask BestBot in #ubuntu-bots.
<histo> YamakasY2: I told you my opinion already which you obviously didn't want to hear. So i'm not sure why you are asking.
<YamakasY2> histo: I'm not sure why you answering ?
<YamakasY2> *are
<histo> You're worried about non existent lockups
<histo> Don't run a VM then.
<histo> Run bare metal with backup or load balancing
<YamakasY2> histo: huh ? it doesn't have anything to do with a VM... but you want to keep VMs as small as possible
<histo> YamakasY2: What lockup are you worried about? I asked why you are creating all these separate partitions and you said you are worried about lockups.
<YamakasY2> histo: filling up log, some application that decided to store in /usr, etc
<histo> YamakasY2: So how is putting it on a separate partition going to help solve that issue?
<YamakasY2> histo: it will not lock up the rest
<histo> That application will crash correct?
<YamakasY2> histo: you never had a failing mailserver because /var/log was full and your queue was not able to spool ?
<histo> Yes but I wouldn't use a 5GB partition for spool
<histo> You are the one doing that.
<histo> You're creating your own problems. IMO
<YamakasY2> histo: no I'm thinking about what happens when you do or don't
<histo> So what happens when your 5GB spool partition fills?
<histo> Same result
<histo> Use quota and other means to fix the issue. Don't put a hardware limitation on the problem. You are putting a bandaid on open heart surgery.
<YamakasY2> histo: yeah agree, but what would you do for server partition ?
<YamakasY2> I wonder, one / partition, a /tmp /boot and swap only ?
<jamespage> yolanda, I saw the ceilometer failure as well; fixed up the backports in the lab so is OK now
<yolanda> ok
<jamespage> yolanda, I also updated pecan on raring as the package we had fails to install
<jamespage> bug in the source code
<yolanda> you are always faster than me :)
<jamespage> yolanda, I updated the list to include what I did so we can cover it all for the CA
<yolanda> was it giving a bug? i was normally using that pecan package to install it manually
<ruben231> hi, i have a folder ruben23 in /home and another ruben23 in /var/www. the one in /var/www has new changes and added files that need to be retained. how do i merge these folders into one on this server so that none of the new files are lost and all the existing ones are kept as well?
<ruben231> any idea guys..?
<ruben231> merging them intelligently, so it does not omit the newly added files and also retains the duplicates
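No one answered ruben231 in the log; one common way to do this kind of merge is cp's no-clobber mode, sketched here with scratch directories standing in for /home/ruben23 and /var/www/ruben23:

```shell
# Scratch dirs stand in for the two ruben23 folders from the question.
mkdir -p src dst
echo "existing"  > dst/a.txt    # already present at destination - must survive
echo "new file"  > src/b.txt    # newly added at source - must be copied over
echo "other a"   > src/a.txt    # duplicate name - destination's copy is kept
cp -rn src/. dst/               # -r recurse, -n never overwrite existing files
ls dst                          # dst now contains both a.txt and b.txt
```

rsync -a --ignore-existing src/ dst/ does the same merge and also works across hosts; swap src and dst depending on which copy's duplicates should win.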
<jotterbot1234> Hello guys, has anyone had any experience troubleshooting an AWStats installation?
<jamespage> rbasak, hows mongodb looking?
<rbasak> jamespage: I had an idea, but it took hours to build and then it didn't work :-/
<rbasak> What kind of timescale are you hoping to get this fixed in? I would like to keep digging but I don't have an ETA now.
<jamespage> rbasak, please keep digging; raring would be nice :-)
<rbasak> It seems to me that the compiler should dtrt if it can be told that this particular operation could need manual alignment
<rbasak> And it also looks like it is already being told how to do this. It was in a way that I thought wasn't technically correct ("packed" isn't exactly the same thing as "aligned"), but telling it "aligned(1)" didn't seem to help
<rbasak> I want to reduce this to a test case and then experiment with options and look at the assembly generated
<jamespage> rbasak, sounds like a good approach
<jamespage> working with mongodb on arm is slow...
<rbasak> Indeed!
<rbasak> Also "nocheck parallel=4" didn't' work. I've patched debian/rules and submitted a bug
<rbasak> (to Debian)
<jamespage> yolanda, hmm - the mongodb sync in the ceilometer packages is not working well for me in the charms
<jamespage> yolanda, its a packaging issue rather than a charm issue; ceilometer-common does the db sync
<jamespage> but does not have a depends on MongoDB
<jamespage> maybe we need to re-think that a bit
<jamespage> I also think that MongoDB should be a suggests
<jamespage> so that I can deploy mongodb elsewhere
<jamespage> just like we do with nova
<yolanda> but then, we cannot do the sync in the package?
<jamespage> yolanda, hmm
<jamespage> yolanda, OK - so I think we have to take the sync out of the packaging - not ideal
<jamespage> yolanda, also the way the configuration file is being manipulated in the postinst script is not aligned to policy
<jamespage> as it does not take into account fully whether the ceilometer configuration file has actually been changed
<lifeless> doesn't ceilometer have the ability to run on postgresql too ?
 * jamespage looks
<lifeless> http://doughellmann.com/2012/10/grizzly-sessions-on-metering-openstack-with-ceilomter.html
<jamespage> lifeless, indeed it does - and mysql as well
<yolanda> and sqlite
<jamespage> yolanda, in which case the MongoDB stuff has to come out
<jamespage> its completely not safe
<yolanda> they have support for several databases
<yolanda> so i remove the mongo recommends, and the db sync?
<yolanda> user should have to do it manually?
<jamespage> yolanda, all of the other openstack packages default to using sqlite
<yolanda> yes, but due to the nature of data it's better to have mongo
<jamespage> yolanda, yes - but that is a charm thing, not a packaging thing
<jamespage> yolanda, sqlite is an 'install and it works' option, not an 'install and get huge scale' option
<jamespage> yolanda, sqlite is not the 'best' option for any of the openstack packages, but its a sane default.
<yolanda> so we have to depend on sqlite and do the sync using sqlite?
<jamespage> yolanda, broadly yes
<jamespage> yolanda, the sync should not happen in ceilometer-common  either
<yolanda> no? where should it be?
<jamespage> yolanda, it should happen in the agent package that writes to it
<yolanda> ok
<yolanda> and then, in the charm, we should do the sync for mongo?
<jamespage> yolanda, actually ignore that comment
<yolanda> which one?
<jamespage> the bit about not doing it in ceilometer-common
<jamespage> yolanda, please use cinder-common.postinst as a reference
<jamespage> yolanda, it broadly does the right things
<yolanda> ok
<jamespage> yolanda, the charm should reconfigure ceilometer when mongodb is related to it and then do a db-sync
<yolanda> ok, i can work on it
<jamespage> yolanda, great - ping me when you have a MP - I don't always notice email
<jamespage> lifeless, thanks for expanding my knowledge of ceilometer
 * jamespage is running to catchup
<yolanda> i need a while to finish something with the panels
<zetheroo> I have been spending 2 whole days now trying to get 12.04 to run on a server here ... one of three identical servers ... I performed an install on one of the three a couple weeks ago with no issues ... but now with the second one I am installing for the 5th time now
<zetheroo> basically no matter how I partition the RAID1 device, after the Ubuntu install completes successfully, the system reboots and then leaves me at "error: no such device" and grub rescue prompt
<zetheroo> this is a basic install that worked perfectly on one server but is not happening on this one!!
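For the record, the usual way to dig out of that prompt by hand is a sketch like the one below — the (md/0) device name is only a guess for a RAID1 /boot; use ls to find the real one:

```
grub rescue> ls                        # list devices/partitions grub can see
grub rescue> ls (md/0)/boot/grub       # probe each until one holds /boot/grub
grub rescue> set prefix=(md/0)/boot/grub
grub rescue> set root=(md/0)
grub rescue> insmod normal
grub rescue> normal                    # hand off to the normal boot menu
```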
<gucki> hi there
<Abu> hey, i have a problem with latest ubuntu server edition, if i leave it on its own the screen goes black and it stops responding
<gucki> i have eth0 defined as dhcp and eth0:0 as static. whenever the dhcp client (isc-dhcp-client) renews its configuration it removes/ brings down eth0:0. How can I fix this?
<gucki> Abu: energy saving enabled? :)
<Abu> well having hard time figuring that out :D
<Abu> on default does the ubuntu come with ready configuration?
<gucki> Abu: sry I don't know... :(
<Abu> its annoying problem
<Abu> it locks up in hour or something
<Abu> but now i have stuff on keyboard so that it presses a button all the time and it has been successfully on for 6 hours
<soren> smoser: What is the current best practice for building customised Ubuntu OpenStack images?
<smoser> soren, my usual suggestion is http://cloud-images.ubuntu.com
<smoser> ie, i wouldn't recommend it any more than i'd recommend you build your own kernel or glibc. let your distro do it.
<soren> smoser: I fire up a bunch of instances. Their average lifetime is 8 minutes.
<soren> smoser: 5 of those are spent on installing the exact same stuff. Every time.
<soren> smoser: I'm pretty tired of that extra wait.
<smoser> well, one thing i've recently been doing which makes that particular pain less painful.
<smoser> is 'fast_apt' at http://paste.ubuntu.com/5607702/
<soren> smoser: Hm.
<smoser> but that wont help your network pipe
<soren> smoser: Yeah, I guess that might have an impact.
<soren> smoser: I don't actually think I'm spending that much time on actual downloads.
<soren> It's quite possible that eatmydata would make a difference.
<smoser> its a huge impact.
<soren> This is a recent addition to cloud-init, yeah?
<smoser> but if you are interested in building your own images, i'd at least suggest that you start with ours.
<soren> Right, that was certainly the intent.
<smoser> well, thats just a bootcommand that dpkg-diverts
<soren> Sure.
<smoser> so its not really cloud-init itself. i just feed that as user-data
<soren> Oh, I see what you're saying.
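The paste link above has since expired; a guess at the shape of the trick smoser describes — everything here is a hypothetical reconstruction, not his actual fast_apt script, and it assumes the eatmydata package is already in the image:

```
#cloud-config
# Divert dpkg and wrap it in eatmydata so package installs skip fsync()
# while the image is being customised (hypothetical sketch).
bootcmd:
 - dpkg-divert --local --rename --add /usr/bin/dpkg
 - printf '#!/bin/sh\nexec eatmydata /usr/bin/dpkg.distrib "$@"\n' > /usr/bin/dpkg
 - chmod 755 /usr/bin/dpkg
```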
<smoser> then, i'd recommend either mounting and chrooting (mount-callback-umount http://bazaar.launchpad.net/~smoser/+junk/backdoor-image/files makes that part easy)
<smoser> or, since that is sometimes finicky with grub (which i'd love for you to just figure out the issue and get it fixed "right")
<smoser> the other way to do it is just to boot a cloud image in kvm 'cloud-localds' and given user-data that preps it the way you want.
<soren> What I wonder is what sort of cleanup I ought to do after doing that?
<smoser> well, you can do 'rm -Rf /var/lib/cloud /var/log/cloud-init*'
<soren> I could probably work it out, but if you have a list already, I'd love to have it.
<smoser> and 'dd if=/dev/zero of=/tmp/zero.img ; rm -f /tmp/zero.img'
<smoser> but realistically, cloud-init *should* re-run everything that is needed "per-instance"
<smoser> and when you boot it for real, it will have a new instance-id.
<soren> I'll try the eatmydata thing first. That's a great suggestion.
<smoser> i think you will be very surprised with how much time storing data takes.
<smoser> soren, if you have a better suggestion for the zero image thing, i'd love to know.  i know that there should be more high-tech route for that.
<soren> smoser: There is one for ext2 images.
<soren> smoser: I'm trying to recall its name.
<smoser> yeah. i think that actually with ext4 and kvm it should be able to poke the 'trim' requests down through virtio into the qcow image
<smoser> i'm not sure though.
<smoser> but at some point.
<smoser> i think that "should work"
<soren> smoser: zerofree
<soren> smoser: That's not the one I was thinking of, but it looks to be the same thing.
<rbasak> I didn't know about zerofree. Thanks soren! I wonder if there's some kind of convergence with fstrim possible here?
<smoser> http://intgat.tigress.co.uk/rmy/uml/index.html says "The utility also works on ext3 or ext4 filesystems. "
<rbasak> (fstrim works online)
<smoser> rbasak, but does it correctly poke through to qcow2 disk format?
<rbasak> smoser: I doubt it. So I was thinking of some hack. But what would be really nice if it did pass through, and qemu could be configured to dtrt for thinly provisioned stuff and optionally zero out for directly mapped stuff.
<smoser> rbasak, there has been work done in that area
<rbasak> Though for cloud image preparation perhaps offline operation would actually be better. More consistent for delivery.
<smoser> i know i've read of that before.
<smoser> rbasak, http://comments.gmane.org/gmane.comp.emulators.kvm.devel/66551
<smoser> at least a reference.
<smoser> http://www.outflux.net/blog/archives/2012/02/15/discard-hole-punching-and-trim/
<rbasak> As an aside, while we're talking about this, there's an argument here that says that cronned fstrim is preferable over a discard mount option for SSDs: http://en.opensuse.org/SDB:SSD_discard_%28trim%29_support
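For reference, the pieces that make this pass-through work can be sketched as a libvirt disk stanza — hypothetical, and hedged: discard='unmap' needs a recent-enough qemu/libvirt, and at the time virtio-scsi forwarded discard requests while virtio-blk did not:

```xml
<!-- hypothetical libvirt guest disk: discard='unmap' lets the guest's
     fstrim punch holes back down into the qcow2 file on the host -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

Inside the guest, sudo fstrim -v / then reports how many bytes were discarded.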
<zetheroo> I finally got Ubuntu Server 12.04 booting up on a server here ... but now when I reboot the server it just hangs with a black screen ... I have to power it off with the power button and start it up again ...
<zetheroo> how do I troubleshoot this?
<tedski> at what point does it hang with a black screen?
<tedski> after grub? before post?
<tedski> during os boot?
<tedski> when it's hung, are you sure it's not the console screen saver?
<zetheroo> not sure actually because I reboot it remotely ...
<tedski> when it's hung, is it pingable?
<zetheroo> I will reboot it directly and see where it gets to ...
<tedski> after you reboot it, what do the logs say?
<tedski> that's how you troubleshoot it... you identify the point of failure before you attempt to identify the causative factors
<zetheroo> but in syslog this is the last two lines before I manually turn the server off:
<zetheroo> Mar 12 14:51:06 saturn kernel: Kernel logging (proc) stopped.
<zetheroo> Mar 12 14:51:06 saturn rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="1411" x-info="http://www.rsyslog.com"] exiting on signal 15.
<zetheroo> which logs should I be looking at?
<tedski> syslog to start
<sw> hi, how can we install Sun JVM >=1.5 on a Ubuntu server running 12.10?
<zetheroo> well this is what syslog has in it after the last powerup: http://paste.ubuntu.com/5607757/
<tedski> zetheroo: looks fine, then
<tedski> or was that the good boot?
<tedski> sw: google shows tons of results for oracle java 6/7 on ubuntu 12.10
<sw> I have no knowledge of Java, so Sun JVM 1.5 = Oracle Java 6/7?
<tedski> java 1.5 == java 5
<tedski> java 1.6 == java 6
<tedski> silly, but get used to silly with java
<tedski> so, > 1.5 is java 6 or 7
<tedski> oracle bought sun in 2010, so oracle ~= sun
<zetheroo> tedski: that was the output from the time that I manually powered the server on
<tedski> zetheroo: now get it to hang and see if it's pingable while it's hung
<tedski> zetheroo: then see if you can access any of the services that should startup (i.e. ssh)
<zetheroo> tedski: so I did a reboot again and the last line I see on the screen is "Will now restart" ... then it goes to a black screen and hangs there ...
<zetheroo> the system is then unresponsive to everything
<zetheroo> and it never resets ...
<tedski> zetheroo: oh, then you have an acpi problem
<zetheroo> so I have to do a hard-reset
<zetheroo> acpi ... like in the BIOS or in Ubuntu?
<tedski> well... that depends :)
<tedski> is there a reboot= parameter in your grub kernel line?
<tedski> cat /proc/cmdline if the system is up
<zetheroo> BOOT_IMAGE=/boot/vmlinuz-3.2.0-23-generic root=UUID=a47c5e6b-583b-4bc3-b97e-37c8fee11df5 ro
<tedski> what type of system is this?
<zetheroo> just a rack server
<zetheroo> Intel ...
<yolanda> jamespage: https://code.launchpad.net/~yolanda.robla/ceilometer/grizzly/+merge/152916
<zetheroo> the brand is some SuperMicro thing ...
<tedski> try adding reboot=a,b,c to your kernel line
<zetheroo> how do I do that?
<tedski> default is reboot=kbd which means it will first try to reboot via keyboard, then bios, then acpi
<tedski> so, a,b,c says try acpi, then bios, then cold reboot
<tedski> zetheroo: when it's booting, interrupt the timer in the grub menu and follow the onscreen instructions for editing
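To make tedski's suggestion permanent instead of a one-off menu edit, the usual route is /etc/default/grub (a sketch — append to whatever is already quoted on that line, then run sudo update-grub):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet reboot=a,b,c"
```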
<zetheroo> ah like that .. ok will give it a shot ..
<zetheroo> but  this is not normal no!?
<tedski> it's normal for some systems
<zetheroo> I mean I have never had to do this before and we have 3 of these identical systems ... and one of them I setup with Ubuntu 12.04 a few weeks ago and it had no issues
<tedski> oh
<tedski> well, then no
<tedski> it's not normal
<tedski> and something is different in the reboot method between the working and non working system
<zetheroo> but then nothing about this particular server has been very "normal"
<tedski> go to your known good system and make sure all acpi settings are identical to this system
<zetheroo> trouble is that other server is busy running several production VM's :)
<zetheroo> so I cannot stop it to check the BIOS settings
<tedski> you can still cat /proc/cmdline
<zetheroo> ok
<tedski> and check sysctl
<zetheroo> BOOT_IMAGE=/boot/vmlinuz-3.2.0-37-generic root=UUID=34975010-34e2-477e-8b96-fca973948c74 ro
<zetheroo> looks the same
<zetheroo> check: command not found
<tedski> heh
<tedski> no, don't run "check sysctl" :)
<zetheroo> oh sorry .. :P
<tedski> sysctl can show you all of the acpi settings
<tedski> man sysctl
<zetheroo> I don't see how this could have anything to do with Ubuntu actually ... but rather something from the BIOS
<tedski> sure
<tedski> i'm not saying that it is one or the other
<tedski> i'm telling you how to troubleshoot :)
<tedski> that was, after all, your question
<zetheroo> there isn't a way to get BIOS info without rebooting the server is there!?
<zetheroo> yes thanks :D
<zetheroo> maybe I asked the wrong question  ... LOL
<zetheroo> ahh there is a difference in the BIOS versions of the two server!
<jamespage> yolanda, comments in MP
<yolanda> ok
<yolanda> ok, got it, better to do a patch to modify the default conf
<yolanda> jamespage, should i set synchronous mode for sqlite? or i leave that by default?
<zetheroo> how do I change the number of seconds that grub counts down to before booting Ubuntu ?
<jamespage> zetheroo, /etc/default/grub
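The relevant knob in that file (after editing, apply with sudo update-grub):

```
# /etc/default/grub - seconds the menu waits before booting the default entry
GRUB_TIMEOUT=2
```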
<jamespage> yolanda, leave everything else as defaults
<Ul_> hi everybody. having problems with slow performance of kvm on ubuntu 12.10. i suspect that hardware acceleration is not working. kvm-ok says that KVM can be used. after launching the virtual machine, the log file in /var/log/libvirt/VIRTNAME.log shows LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-x86_64 -name wheezy -S -M pc-1.2 -no-kvm -m 1024 -smp 1,s
<zetheroo> when I do apt-get update I get this:  http://paste.ubuntu.com/5607963/
<zetheroo> any idea why?
<jpds> zetheroo: You have a DNS problem.
<zetheroo> the machine is connected to the Internet
<yolanda> jamespage, recheck
<zetheroo> ic
<zetheroo> yes, DNS was the issue - thanks jpds
<zetheroo> I still have weird issues with rebooting ...
<jamespage> yolanda, OK - lgtm
<zetheroo> system now reboots and gets up to the GRUB menu ... waits 10 seconds ... and then seems like its going to boot the first entry in the menu ... screen goes black (but still on) and that's about as far as it gets
<yolanda> great
<sw> tedski: can you help what JDK or JRE is? in the INSTALL I'm looking at it says the requirement is JVM but I don't see this anywhere
<tedski> sw: JDK = java development kit; JRE = java runtime environment
<tedski> sw: if you're not going to develop java programs, then install the jre
<tedski> the jre has the jvm plus class libraries and some other supporting stuff
<tedski> jre is for running java programs, jdk is for developing java programs
<sw> tedski: ah ok :-)
<eagles0513875> hey guys is there a way i can tweak amavis a bit to provide some better memory usage?
<patdk-wk> amavis doesn't use much memory
<patdk-wk> something like 5-10megs
<patdk-wk> in my case, each amavis thread is using 60megs of ram, plus 80megs shared
<zetheroo> when creating a RAID1 setup should the swap be in the RAID or not?
<patdk-wk> and all of that is due to spamassassin rules
<patdk-wk> zetheroo, up to you
<patdk-wk> if a disk dies, do you want your system to crash?
<zetheroo> patdk-wk: could it be problematic ?
<xnox> zetheroo: would you like your machine to keep going, or die because something was in swap and the swap disk failed?
<billy_ran_away> Hi, I'm new to ldap and trying to setup dovecot to use ldap, this is kind of a dumb question, but I've never really read what it means to bind to an LDAP server...
<billy_ran_away> I have some idea of what I think it means… but I don't definitively know what it means.
<billy_ran_away> I think it means...
<zetheroo> so having swap in the RAID is not a bad thing ...
<xnox> zetheroo: there is no point in RAID1 at all if swap is not part of it. With RAID1 you want your machine to continue to function if half of your disks fail, hence swap must be treated same as your rootfs.
<billy_ran_away> For an authentication process to attach itself to a specific part of the LDAP schema to be able to search from the point down...
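billy_ran_away's reading is essentially right; in dovecot's LDAP setup the bind is just the DN/password dovecot authenticates with before searching. A sketch of /etc/dovecot/dovecot-ldap.conf.ext with placeholder values throughout:

```
# /etc/dovecot/dovecot-ldap.conf.ext - all hosts/DNs below are placeholders
hosts = ldap.example.com
# the "bind": authenticate to the directory as this DN before searching
dn = cn=dovecot,ou=services,dc=example,dc=com
dnpass = secret
# searches start at this base and work down the tree
base = ou=users,dc=example,dc=com
user_filter = (&(objectClass=posixAccount)(uid=%u))
# auth_bind = yes: verify a user's password by re-binding as that user's DN
auth_bind = yes
```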
<lwizardl> hello
<rbasak> xnox: true, but there is a second use case for RAID1. If you don't mind the machine failing, but do want to keep your data.
<zetheroo> xnox: ok good point :P
<rbasak> Though even in that cause I usually RAID1 my swap. There's no real cost to doing it.
<rbasak> that case
<lwizardl> so I have been fighting with a server only to find out that the isp comcast is blocking port 25. So is there a way that I can enable the server to maybe get around the block?
<xnox> rbasak: hence I said "same as rootfs" if the rootfs is not on RAID1 (as it can be trivially rebootstrapped), and only data partitions are on RAID1, having swap on raid1 improves things but not by much ;-)
<patdk-wk> lwizardl, upgrade to comcast business class
<zetheroo> I have been struggling to get Ubuntu Server working correctly on a server which is using this UEFI stuff ...
<zetheroo> should the EFI partition be in the RAID1 area or not?
<lwizardl> patdk-lap, yeah I plan to do that but i'm trying to get the server working for now
<patdk-wk> yes, and that is the *only* solution
<lwizardl> crap
<patdk-wk> unless your solutions involves not using comcast
<patdk-wk> then you can setup a vpn from somewhere that allows port 25
<lwizardl> I would love that option. but for my sucky town it is either comcast or AT&T uverse
<patdk-wk> well, there is a reason why, residential, says, no servers
<lwizardl> yeah
<zetheroo> is there anything wrong with rebooting using init 6 ?
<smoser> soren, did you try the eatmydata?
<hallyn> plars: so this is perplexing.  Your i386  lxc utah run says mounted cgroups weren't found when running lxc-execute.
<plars> hallyn: looks like the amd64 one finished too: https://jenkins.qa.ubuntu.com/view/Raring/view/Smoke%20Testing/job/raring-server-amd64-smoke-lxc/119/
<plars> hallyn: also showing cgroups not mounted
<hallyn> where do i see the actual test log then?
<hallyn> anyway that might be a utah error then.  cgroup-lite is being installed, cgroups definitely ought to be mounted.
<hallyn> plars: I run locally using:
<hallyn> sudo -u utah -i run_utah_tests.py -p lp:ubuntu-test-cases/server/preseeds/lxc.preseed -i /home/serge/raring-server-amd64.iso lp:ubuntu-test-cases/server/runlists/lxc.run
<hallyn> is that all right?  suppose i should update my raring-server-amd64.iso...  maybe there's an error in the new one
 * hallyn updates
<plars> hallyn: we seem to be running run_utah_tests.py -p lp:ubuntu-test-cases/server/preseeds/lxc.preseed -i /var/cache/utah/iso/raring-server-amd64.iso lp:ubuntu-test-cases/server/runlists/lxc.run -x /etc/utah/bridged-network-vm.xml
<plars> but I think the networking config is not necessary for you
<hallyn> right i can't use that :)
<hallyn> plars: almost have the new iso, i'll start a utah run and log in and see if everything's kosher
<plars> ok
<hallyn> plars: by gollie, this seems to be a cgroup-lite bug
<lisandrop05> hello everyone, I'm installing firehol and when I execute the command "frirehol try" I get a lot of errors like this: "iptables: Memory allocation problem." I was looking at top while it ran and there is a lot of memory free in the system
<lisandrop05> any ideas?
<holstein> you execute "firehol try" i assume
<holstein> lisandrop05: you can also pastebin your exact errors
<RoyK> any particular reason you don't use ufw?
<holstein> !paste
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<lisandrop05> yes holstein
<lisandrop05> i execute "firehol try"
<sarnold> lisandrop05: the kernel cannot allocate memory for itself in the same way that applications can allocate memory. on 32 bit systems, the kernel actually has very tight memory requirements. over the run time of a system, memory gets more and more fragmented, making it harder to allocate large blocks of kernel memory.
<sarnold> lisandrop05: check dmesg; if you see something like "order 1 allocation failures" then your best bet is reboot.
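A quick way to see the fragmentation sarnold describes (a sketch: /proc/buddyinfo lists free blocks per allocation order, so zeros in the higher-order columns mean large contiguous kernel allocations will start to fail):

```shell
# Each row: free blocks of size 2^0, 2^1, ... pages for one memory zone.
# Sparse right-hand columns = fragmented memory; higher-order allocs may fail.
cat /proc/buddyinfo
# The failure sarnold mentions shows up in the kernel log like this:
dmesg 2>/dev/null | grep -i 'allocation failure' || echo "no allocation failures logged"
```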
<hallyn> plars: and now i'm seeing it on new raring cloud instances too!
<RoyK> lisandrop05: trying again - why don't you use ufw?
<plars> hallyn: well at least it's reproducible now I guess :)
<RoyK> seems firehol hasn't been much updated the last five years
<Testtube302> Afternoon I was refered to this channel
<lisandrop05> RoyK: what's the difference
<RoyK> lisandrop05: dunno firehol
<RoyK> !ufw | lisandrop05
<ubottu> lisandrop05: Ubuntu, like any other Linux distribution, has firewall capabilities built-in. The firewall is managed using the 'ufw' command - see https://help.ubuntu.com/community/UFW | An alternative to ufw is the 'iptables' command - See https://help.ubuntu.com/community/IptablesHowTo | GUI frontends such as Gufw (GNOME) and Guarddog (KDE from Lucid onwards) also exist.
<Testtube302> I currently have Ubuntu 8.04LTS Server running a older kernel 2.6.24-21-server. I am trying to install the Kernel sources but it appears they have been deleted? I am getting Package linux-headers-2.6.24-21-server is not available
<lisandrop05> okok, I'm installing ufw
<RoyK> Testtube302: you better upgrade soon - 8.04 only have a month left of support
<Testtube302> The only reason i need the Kernel source is because i am attempting to install a backup agent that needs the source to compile a module
<Testtube302> It almost looks like the linux-header files for the kernel i am using has been wiped off the planet :/
<RoyK> Testtube302: looks like you haven't updated that machine too much - the current version for 8.04 is 2.6.24-32
<Testtube302> Royk Thats correct this machine is in production use and pretty isolated
<RoyK> there are archives around for that sort of use, but the *supported* method is to upgrade to the last stable version
<Testtube302> Any idea of where i can get the linux header files for my kernel
<Testtube302> i have searched everywhere.
<RoyK> and as mentioned, 8.04 won't be supported for much longer
<RoyK> what sort of backup system is this?
<Testtube302> The backup agent is r1soft
<RoyK> also, is there a good reason for not upgrading to the latest 8.04?
<Testtube302> RoyK as mentioned this is a production system and there is no room for downtime right now. We have some time slotted later this year to upgrade to 10.04 like the rest of our cluster
<RoyK> ok
<sarnold> it's already 2013, would it make more sense to upgade to 12.04 LTS instead?
<Abu> hey now i got error message
<Abu> mmc0 got data interrupt even though no data operation was in progress
<RoyK> Testtube302: http://packages.ubuntu.com/source/hardy-updates/linux-ubuntu-modules-2.6.24
<Abu> how do i fix that
<RoyK> Testtube302: just wondering a bit why a backup agent needs access to the kernel source
<lisandrop05> ufw only 1 minute and I get my first error: ERROR: problem running ufw-init
<RoyK> lisandrop05: pastebin command and output
<lisandrop05> ERROR: Could not load logging rules
<lisandrop05> it's short
<lisandrop05> command is shoet too
<RoyK> and the command was?
<lisandrop05> short*
<Testtube302> I dont believe that URL you provided actually has the Linux headers for my kernel
<RoyK> are you root?
<lisandrop05> ufw enable
<lisandrop05> and yes I add port 22 first
<lisandrop05> if I put it again it skip existing rule
<RoyK> lisandrop05: please *pastebin* the whole range of commands and output. and btw, which distro version is this?
<sarnold> RoyK: lisandrop05 is from cuba, probably the little iron curtain forbids getting to a pastebin site. I'm stunned lisandrop05 can get here...
<RoyK> oh
<lisandrop05> is Ubuntu precise
<RoyK> lisandrop05: sorry if you can't get to those services. first of all, are you root? can you do "ufw add 22/tcp" without issues? if so, the "ufw enable" command should work well
<lisandrop05> http://pastebin.com/Jtr9tm3J
<sarnold> ha :) color me wrong :)
<Testtube302> Royk http://packages.ubuntu.com/hardy-updates/linux-headers-lum-2.6.24-21-server  Says Package not available under depends
<lisandrop05> RoyK: yes, I'm root
<lisandrop05> I install the ufw
<lisandrop05> in the same shell
<RoyK> looks like there may be an issue on trying to add ipv6 rules on an ipv4-only system
<RoyK> https://bbs.archlinux.org/viewtopic.php?id=141008
<RoyK> not sure, though
<RoyK> lisandrop05: try setting IPV6=no in /etc/ufw/ufw.conf
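RoyK's suggestion, scripted (a sketch: the real file is /etc/ufw/ufw.conf and needs root, so a stand-in copy is used here to make the edit visible):

```shell
# Stand-in for /etc/ufw/ufw.conf. The real edit would be:
#   sudo sed -i 's/^IPV6=yes/IPV6=no/' /etc/ufw/ufw.conf
#   sudo ufw disable && sudo ufw enable   # reload with the new setting
printf 'LOGLEVEL=low\nIPV6=yes\n' > ufw.conf.example
sed -i 's/^IPV6=yes/IPV6=no/' ufw.conf.example
grep '^IPV6=' ufw.conf.example    # now reads IPV6=no
```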
<sarnold> RoyK: hrm, I thought that one had been fixed.
 * RoyK has no idea
<RoyK> lisandrop05: also, if sarnold is right - it might have been fixed - have you upgraded lately?
<lisandrop05> no I don't
<RoyK> try to disable ipv6, then
<Testtube302> Royk what exactly is Source Package: linux-ubuntu-modules-2.6.24
<RoyK> not sure
<lisandrop05> It work ok, I add IPV6=no to ufw.conf now, new question: where can I add an IP blacklist from a file
<Testtube302> you're not sure?
<RoyK> Testtube302: I think what you need is 'apt-get install linux-source'
<lisandrop05> no need to update, And i check and my system is updated
<RoyK> ok
<RoyK> lisandrop05: I normally use 'apt-get update && apt-get -y dist-upgrade && apt-get -y autoremove' - there's usually some new packages around
<mmcji> anyone have any experience working with Kamailio and kamailio-ims-modules?
<glycoknob> hi, I'm using ubuntu server 12.04 for cluster computing machines (no services, just cli). Is it safe to remove dbus-daemons and all the other desktop-related stuff? especially, does upstart still work fine without dbus? This appears to be the case but I'd like to get some more input before breaking things
<RoyK> Testtube302: still curious about why a backup solution needs to make kernel drivers - could you enlighten me on this?
<Diegonat> hi guys! I've got a problem. When I run a script from crontab it does not work, but when I run it by hand, it works. Why?
<sarnold> Diegonat: quite often that is due to directories in your PATH that are set for an interactive login but not set when executed via cron
<Diegonat> sarnold, good shot
<lisandrop05> RoyK: how do I allow all outgoing traffic in ufw??
<Diegonat> basically when it fails
<Diegonat> it when it tries to excute one python script in a for loop
<lisandrop05> in firehol is simple: client accept any, but in ufw I don't know
<jdstrand> lisandrop05: $ sudo ufw status verbose
<jdstrand> Default: deny (incoming), allow (outgoing)
<jdstrand> that's the default
<RoyK> lisandrop05: it allows all outgoing
<hallyn> plars: fix for that breakage is uploaded to raring.
<plars> hallyn: you are awesome!
<plars> psivaa: ^^ should be fixed on tomorrows run
<jdstrand> lisandrop05: as for ipsets, that isn't supported in ufw yet, but it is planned
<hallyn> plars: nm.  there's an error in how i did it.  grrr.
<lisandrop05> RoyK: if I disable the firewall ufw all my old rules disappear?
<hallyn> stgraber: grrr.  my fix was bad.  changing locale and then fixing it for real.  (you threw me by calling the mount 'cgroup', i need to detect that moutn differently :)
<hallyn> biab
<RoyK> lisandrop05: ufw replaces iptables
<lisandrop05> I disable and enable ufw
<RoyK> lisandrop05: you need to configure ufw to use existing rules if you want to use it
<lisandrop05> and it block me out
<lisandrop05> my ssh http ftp all is not working
<stgraber> hallyn: mountpoint -q /sys/fs/cgroup?
<RoyK> lisandrop05: disabling it should flush all iptables rules - enabling it should enable ufw rules
<lisandrop05> I'm locked out of my server
<lisandrop05> I'll need to call support assistence
<lisandrop05> I'm on VPS
<lisandrop05> and ssh is all access I have
<jdstrand> lisandrop05: VPS sometimes don't have everything that is needed in their kernels. if ufw is not working for you, try running: sudo /usr/share/ufw/check-requirements
<sarnold> jdstrand: that's awesome :D
<lisandrop05> jdstrand: I can't I'm blocked out of my server
<RoyK> lisandrop05: do you have kvm access?
<lisandrop05> RoyK: I'm from cuba a lot of ports are blocked here
<lisandrop05> including all cpannel ports
<RoyK> yeah, someone told me (above) that you were from cuba
<RoyK> lisandrop05: see pm
<lisandrop05> what is pm?
<hallyn> stgraber: well phooi.  never heard of that
<RoyK> lisandrop05: private message
<RoyK> lisandrop05: try "/msg RoyK hi"
<soren> smoser: I fired it off, but didn't have time to stick around to see the results. I'm checking now.
<hallyn> stgraber: but that doesn't answer my problem actually - i need to know if anything under /sys/fs/cgroup is mounted
<soren> smoser: Well, it cut off ~75 seconds.
<jdstrand> sarnold: heh, thanks :)
<soren> smoser: So significant for sure, but not enough to make me not want to build custome images.
<smoser> out of ~ 300 ?
<soren> custome.
<soren> custom.
<soren> Shite.
<smoser> right.
<soren> Err.. Yeah, something like that.
<hallyn> i guess just 'mount -t cgroup' actually works
<stgraber> hallyn: ah, I'd just have changed the current for loop to do:
<stgraber> [ ! -d /sys/fs/cgroup/$d ] && mkdir /sys/fs/cgroup/$d (or use -p)
<stgraber> mountpoint -q /sys/fs/cgroup/$d || mount -n -t cgroup ...
<hallyn> stgraber: no, not that bit
<hallyn> stgraber: if any actual cgroups have already been mounted, then i dno't want to run
<hallyn> but i'm just doing 'if [ -n "$(mount -t cgroup)" ]; then exit 1; fi"
<stgraber> hallyn: ah, well, good luck detecting that... because logind is now mounting an empty cgroup
<stgraber> so on systems with logind, you'll get:
<stgraber> systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
<stgraber> which shouldn't prevent cgroup-lite from doing its job
<hallyn> why not?
<hallyn> systemd wants to do cgroups its won way...  let it
<hallyn> /swon/own/
<hallyn> gah
<stgraber> no
<stgraber> logind will ONLY mount the systemd cgroup
<stgraber> on a system with logind, you still want cgroup-lite to mount all the others
<hallyn> sigh
<stgraber> (talking about logind here, not systemd. It's just a bit confusing because the logind cgroup is called 'systemd')
<hallyn> there are people out there with funky pre-existing setups who will want cgroup-lite to not bethe one mounting cgroups
<hallyn> i.e., they may simp0ly not want some cgroups mounted at all (for speed reasons, whatever)
<hallyn> i suppose you can say so long as they are writing their own upstart job,
<hallyn> they can set cgroup-lite tomanual,
<hallyn> long as you'll back me up on that :)
<stgraber> well, we don't hard-depend on cgroup-lite IIRC, just recommend it
<stgraber> oh, you do Depend on it from libvirt... maybe you should change that to Recommend then?
<stgraber> anyway, I think it's fair to say that if you install cgroup-lite (directly or indirectly), you want it to do its job. If not, then remove it or turn it off (with an upstart override file)
<hallyn> all right i'll do it that way - the old safeguards were really for people who still had cgconfig.conf installed
<stgraber> so I guess we want cgroup-lite to always try to mount any cgroup that's not already mounted somewhere (so on mount failure, silence the error and remove the directory)
<hallyn> eh ?  like 'mount -t cgroup -o $d /sys/fs/cgroup/$d || { rmdir /sys/fs/cgroup/$d } ?
<hallyn> i'm afraid that'll make local configuration errors look like package errors in lxc/libvirt...
<hallyn> do you think it's worth it?
<hallyn> i'd say at taht point, we're better off having cgroup-lite record that it failed,
<hallyn> and user can figure it out frmo there
<hallyn> let's see how this one goes
<hallyn> stgraber: do http://paste.ubuntu.com/5608710 and http://paste.ubuntu.com/5608711 look ok?
<hallyn> (tested ok on an instance, but that doesn't mean im' not doing something stupid)
<hallyn> whelp, this one seems to work, will wait a few mins for a NO YOU FOOL then push
<stgraber> hallyn: mountpoint -q /sys/fs/cgroup/$d || (mount -n -t cgroup -o $d cgroup /sys/fs/cgroup/$d || rmdir /sys/fs/cgroup/$d)
<stgraber> hallyn: so that if it's already mounted somewhere else we don't end up with an empty dir
<stgraber> hallyn: mountpoint -q /sys/fs/cgroup/$d || (mount -n -t cgroup -o $d cgroup /sys/fs/cgroup/$d || rmdir /sys/fs/cgroup/$d || true)
<stgraber> hallyn: actually, just to be safe ;)
<hallyn> you were supposed to say NO YOU FOOL
<hallyn> will do, thx :)
<hallyn> stgraber: ok so that leaves two potential issues (i.e. not sure if we care)
<hallyn> oh, no,j ust one
<hallyn> if someone decided to mount an empty tmpfs (for instance) under /sys/fs/cgroup/devices, cgroup-lite wont' mount it at start, but it will umjount it at stop
<hallyn> i think that's ok...
<stgraber> ah yeah, but someone would be pretty stupid to do that, so that's their own fault ;)
<hallyn> all right - pushing
<soren> smoser: I fired up a VM, logged in interactively, apt-get install'ed the relevant packages, ran "nova image-create" and am now using that snapshot. Seems to work like a charm.
<soren> smoser: I also added your eatmydata thing to apt-get as well as added it to the sbuild inside the VM. I don't have total numbers yet, but the difference for each step looks really good. Thanks for remininding me of eatmydata.
<digitalsanctum> hi, im attempting to vnc to 12.04.1 server from a mac and keep getting a black screen. any ideas?
<holstein> vnc to server?
<holstein> i would just ssh over to it
<digitalsanctum> yes, but i have a need for vnc....long story :)
<News2Me> I'm looking for help with Apache rewrite. System doesn't recognize .htaccess. I've read countless posts and tried all the proposed solutions. One clue is that my apache2.conf file won't accept "AllowOverride All", just errors with "AllowOverride not allowed here. Action 'configtest' failed.
<sarnold> News2Me: that means you put the 'AllowOverride' in the wrong location in the config file
<sarnold> News2Me: see http://httpd.apache.org/docs/2.2/mod/core.html#allowoverride
<News2Me> It has to be embedded in a <directory> tag?
<sarnold> "Only available in <Directory> sections"
<sarnold> yes
<News2Me> So even if you don't have virutal sites... just use it witout and conditions?
<News2Me> Looking at the page link... THANKS Sarnold!!!... been banging my head for days on this.
<sarnold> News2Me: that sounds about right, you've got the real apache experience now :)
<hubuntu> helllo
<hubuntu> if I am going to have some KVM managed VMs running in my desktop, which ubuntu server version should I use?
<hubuntu> is there a jeos or similar or shall I just use -server?
<hubuntu> this is to test varnish and additional tools
<RoyK> hubuntu: 12.04LTS would be a good start
<sarnold> hubuntu: -server is probably fine, but do note: http://cloud-images.ubuntu.com/
<holstein> !minimal
<ubottu> The Minimal CD image is very small in size, and it downloads most packages from the Internet during installation, allowing you to select only those you want (the installer is like the one on the !Alternate CD). See https://help.ubuntu.com/community/Installation/MinimalCD
<hubuntu> there we go, minimal 12.04 is certainly what I was looking for :)
<sarnold> oh good :) three different answers :) haha
<hubuntu> thanks holstein
<hubuntu> RoyK and sarnold
<hubuntu> :)
<sarnold> :)
<RoyK> :)
<News2Me> Sarnold: IT WORKS! ok, so there's a file for virtual servers that lives at: /etc/apache2/sites-available/default. and that one was set to AllowOverride None. It's hard to follow different flavors of Linux.. but thanks Sarnold!
<sarnold> News2Me: ah, yes, I know what you mean. :)
#ubuntu-server 2013-03-13
<gdeeble> Curiosity, I know that having software raid causes GRUB issues, that you have to repoint GRUB to the proper location, but does having the bios on an older desktop cause the same issue? Like I have 2 250gb drives that I want to run as a RAID1 for my server OS partition so I have that redundant, but want to check before starting into that project seeing everything is working and this would
<gdeeble> be an upgrade.
<patdk-lap> heh?
<patdk-lap> since when does raid cause grub issues?
<gdeeble> I was reading that. Maybe I misunderstood.
<TheLordOfTime> where did ya read that?
<patdk-lap> if you setup raid, grub should install itself on all raid disks to boot
<patdk-lap> so that if one is missing, it can boot from the other
<SpamapS> yeah
<patdk-lap> in order for that to work though, will depend on how your bios failsover from disk to disk to boot
<gdeeble> I was looking at the zentyal setup guide, when I was googling. Apparently, I need to go back and re-read it. LOL.
<gdeeble> I was going to run as RAID1 for boot/os, which should be a mirror image, meaning it should boot even if degraded due to 1 drive failure. Is this not right?
<SpamapS> gdeeble: that is right
<SpamapS> gdeeble: the point is, grub will just install itself onto both drives
<SpamapS> so there's no issue with repointing
<SpamapS> gdeeble: you're installing onto this software raid and building it during the installer, right?
<gdeeble> What I want to do, is allow the bios to raid the 2 drives then install the OS.
<patdk-lap> you want to use bios raid?
<gdeeble> This is the first time I'm working with any raids with linux, as I know on my desktop running windows, I just set it up in the bios and mounted the drive and it showed as 1.
<patdk-lap> then your stuck using dmraid
<patdk-lap> I would recommend not using a raid option in the bios, and use mdadm
<gdeeble> Wouldn't software raid use more processing power from the proc, than using bios?
<gdeeble> or are they the same
<patdk-lap> both are software raid
<patdk-lap> the bios is not hardware raid
<gdeeble> Interesting, didn't know that.
<gdeeble> I always assumed(yes bad) that if handled by the bios, it would show as 1 drive to any other software.
<patdk-lap> only if by bios you mean, add in raid card, and not the motherboard bios
<patdk-lap> you have 3 different raids currently
<patdk-lap> hardware raid, fakeraid, and softraid
<patdk-lap> fakeraid is what bios raid is
<gdeeble> fakeraid = bios
<gdeeble> Ok.
<patdk-lap> it's software raid + bios boot helper stuff
<gdeeble> It's not any better than just a standard software raid then, correct?
<patdk-lap> nope
<patdk-lap> well, it can be
<patdk-lap> but only in the case of, a disk failure
<patdk-lap> and then, it's still iffy if it's any better
<patdk-lap> the worst thing that can happen for pure softraid is if a disk fails, but doesn't completely fail
<patdk-lap> so the bios still attempts to boot it
<patdk-lap> but I believe that is also going cause fakeraid the same issues
<p7ank5te7> Sorry, dropped connection.
<p7ank5te7> So either way it could turn out bad without a true raid card
<paulorodrigues> hello
<paulorodrigues> anyone can help me?
<paulorodrigues> ls
<jotterbot1234> hey mate
<jotterbot1234> what's up?
<paulorodrigues> I am having problems with my ubuntu server
<paulorodrigues> after an reboot it doesnt boot anymore
<paulorodrigues> I can only boot using a boot cd
<paulorodrigues> I triend to install grub, but it dont fix it
<jotterbot1234> I would boot using a liveCD like you describe and repair your grub partition
<jotterbot1234> hmm
<paulorodrigues> yes
<paulorodrigues> how can i repair without losing data?
<jotterbot1234> Maybe someone else on here can contribute? Have you tried googling "grub restore"? DId you receive any error codes? any feedback at boot?
<paulorodrigues> no, it simple ask for a boot device
<paulorodrigues> its like i dont have operation system
<paulorodrigues> but I can boot with a live CD and the server works well
<jotterbot1234> Your hard drive might be dead? Do you have a spare you can try and copy it over to low level. Using dd? and then try and boot from that?
<paulorodrigues> but i can access my hard drive
<paulorodrigues> with the boot cd
<paulorodrigues> i have 2 disk in raid with the operation system
<Laogeodritt> Has anyone used darkstat before? Know if it's possible to get stats on a per-port basis for all hosts? (If not, any suggestions for packages that can do that?) I'd like to track the traffic and data transfer for different services running on my server
<JHofmann> hi
<JHofmann> apt-get error mkinitramfs... anyone experience?
<zetheroo> when configuring 2 identical drives as RAID1 do I have to set one of them to be bootable - or is it enough to make the ext partition on the RAID1 bootable ?
<JHofmann> server is installed on vmware vsphere 5 server.... "df -H"  -> /boot is 11% used
<JHofmann> here is screenshot : http://img109.imageshack.us/img109/6595/errored.jpg
<JHofmann> what can be the reason of this error? anyone have an idea?
<giaco> hello
<giaco> I need help: struggling with tomcat6 on Quantal. I'm having difficulties in running tomcat6 over java6 instead of java7
<giaco> should I forward the question to a tomcat channel?
<jamespage> giaco, are you using the tomcat6 packages from ubuntu? or an upstream tarball?
<giaco> jamespage: tomcat6 package from ubuntu
<jamespage> giaco, right
<jamespage> giaco, what issue are you seeing?
<giaco> but nevermind, I've just solved ( from JAVA_HOME=/usr/lib/jvm/java-6-openjdk to JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64 )
<jamespage> giaco, great!
<jamespage> the packaging init script should check in know locations for java
<koolhead17> jamespage: i found 1 missing line in /etc/swift/container-server.conf
<koolhead17> which comes from our cloud repo for folsom
<koolhead17> i had to simply run cat >> /etc/swift/container-server.conf << EOF [container-sync] EOF
<koolhead17> for the fix
<koolhead17> also i don`t understand why swift package by defualt does not allow logging :(
<alex88> hi guys, what are the advantages to use lvm?
<jamespage> yolanda, any chance you could ack https://code.launchpad.net/~james-page/horizon/memcache-fixup/+merge/153109 please
<yolanda> ok
<yolanda> jamespage, you are fixing cache and email settings also?
<jamespage> yolanda, no - I just use slightly different quilt settings to adam who did the last refersh
<yolanda> the diff was confusing me
<jamespage> yolanda, my ~/.quiltrc - http://paste.ubuntu.com/5610447/
<jamespage> yolanda, fyi oslo-config went 1.1.0 today - most projects are bumping pip-requires to >= 1.1.0
<jamespage> I've updated raring and backported to the PPA as well - should land ~1hr or so
<jamespage> adam_g, ^^ fyi
<yolanda> jamespage, do they pushed that changes upstream in some of the projects?
<jamespage> yolanda, yes - glance and ceilometer have already bumped
<yolanda> i can fix them
<yolanda> i'm just going crazy with the ceilometer meterings so a change of topic is good
<jamespage> yolanda, you really don't need todo anything
<yolanda> i saw it, i was confused because i think we had patched it
<jcastro> Daviey: do you know when we're supposed to hear about our talk submissions for ODS?
<ivoks> jamespage: you can reproduce this one? https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1069570
<uvirtbot> Launchpad bug 1069570 in isc-dhcp "1 MAC Address, two IPs - DNS is "out of sync" with DHCP leases databases, I think..." [Undecided,Confirmed]
<jamespage> ivoks, yes
<ivoks> within kvm?
<jamespage> ivoks, yes -  ipxe and the os present differently to the isc dhcp server so they get different IP's
<ivoks> i wonder...
<ivoks> dhcp server opens both ipv4 and ipv6 udp ports
<ivoks> ipv6 dhcp doesn't use mac address for identification
<ivoks> could it be that maas dhcpd offers ipv4 address over ipv6 udp (which is possible)
<ivoks> and then, once os boots up, it requests ipv4, since it's defined like that within /e/n/interfaces
<ivoks> if you can easily reproduce this, could you do me a favor and disable ipv6 on maas server and see if that happens again?
<Daviey> jcastro: i do not
<jamespage> ivoks, I don't think that is the case
<jamespage> ivoks, but I can try
<ivoks> jamespage: if it's something you could easily try, that would be awesome, since i'm low on resources right now to try it myself
<jcastro> Daviey: any idea who I could ping?
<Daviey> jcastro: ttx might know.
<Daviey> ivoks / jamespage: ipxe is ipv4 only, no?
<jamespage> ivoks, fwiw we have a patched version is isc-dhcp-server we use with vmaas
<jamespage> which ignores the uid that ipxe sends
<Daviey> well, makes the identifier purely MAC - rather than MAC + client-id
<ivoks> jamespage: right, but ipxe should send uuid only when ipv6 pxe booting
<ttx> jcastro: depends on the exact track, they started to tell people yesterday, I think
<Daviey> ivoks: Are you sure our ipxe supports ipv6?
<jamespage> ivoks, ah - right
<ivoks> Daviey: yes
<ivoks> well, pxe
<ivoks> not sure about ipxe :)
<Daviey> We did have this doubling up issue due to ipv4/ipv6 with UEC Avahi based registration.  Ended up having to pach out ipv6 support in the registration code.
<jcastro> ttx: that's a reasonably good enough answer! So I guess I should find out by EOW.
<ivoks> if we can easily test it, it would at least tell us that ipv6 isn't the problem
<ttx> jcastro: let me see what I can find -- what was your talk name ?
<jcastro> ttx: the juju charm workshop
<jcastro> ttx: we had another one too, something like "lessons learned from dogfooding openstack", but I can't remember the exact title.
<koolhead17> jcastro: salute
<ttx> jcastro: see pm
<Daviey> smoser: did you SRU that isc-dhcp checksum patch that seemed to impact xen?  If not, i think we should bite the bullet and carry this client-uid one aswell.
<smoser> i did not sru the isc-dhcp, no. i did say i'd follow up on that.
<smoser> and its not xen only.
<smoser> it was kvm that was reported.
<smoser> (virtuio driver)
<Daviey> smoser: er, i thought qemu carried support already?
<Daviey> smoser: http://git.qemu.org/?p=qemu.git;a=commit;h=1d41b0c .. so surely it's only on xen we are seeing this?
<smoser> it was reported on kvm.
<smb> Daviey, if that is about udp problem, libvirt installs a mangle rule for iptables
<smoser> i think it happens only if you have the virtio kernel module loaded
<smoser> which maybe (possibly) does in kernel what that qemu patch would do in user-space
<smoser> ?
<Daviey> smb: it does NOW, but didn't in earlier versions?
<smb> smoser, Daviey  The problem is with all pv network drivers
<Daviey> smb: Some of this is cross-distro issues, with RHEL.. With ubuntu as a guest.. So i'm not sure what their libvirt was doing.
<smb> Daviey, Not sure when it started, but its there in P at least
<smoser> Daviey, i seem to recall that it only occurs if you have the virtio_net driver loaded in the host
<smoser> or somethign liek that.
<smb> All depends whether "sudo iptables -t mangle -S" shows
<smb> -A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
<smb> So people that installed libvirt do not see it
<Daviey> Ah!
<Daviey> If you use the openstack xen support outside of libivrt, you would see this.
<Daviey> (openstack supports direct xen and libivrt xen)
<smb> Right
<Daviey> smoser: virtio-net can be used by xen, no?
<smb> Or people using kvm directly
<smb> Daviey, no
<smb> Daviey, They have their own pv driver
<smb> netback or was it netfront...? one of the two :-P
<koolhead17> soren: hello
<Daviey> smb: Oh, i thought you could now use qemu's virtio.. ok.
<jamespage> adam_g, yolanda: switched the trunk builds for quantum to the the milestone-proposed branch as master is now open for havana
<jamespage> Daviey, ttx: ^^ fyi
<jamespage> ttx: I'll keep one eye on openstack-dev ML but can you ping me as projects switch over from master for grizzly rc1
<Daviey> jamespage: thanks
<ttx> jamespage: I'll try to remember to ping you. You can also watch the appearance of a milestone-proposed branch
<jamespage> ttx, ta - will do
<soren> koolhead17: Ohai.
<jamespage> Daviey, can I object to a later version dependency upgrade in cinder? https://bugs.launchpad.net/cinder/+bug/1150720
<uvirtbot> Launchpad bug 1150720 in cinder "There is now a dependency on paramiko v1.9.0" [Undecided,In progress]
<Daviey> jamespage: why aren't we seeing this error?
<Daviey> jamespage: Oh, it's only visible with storwize_svc ?
<jamespage> Daviey, yep
<jamespage> Daviey, its a pita because cinder uses stevedore for extension loading; and if pip-requires is not 100% correct then #ERROR
<jamespage> Daviey, I can patch it out if need be and pick a fix for paramiko...
<Daviey> jamespage: well, it looks like a 1 line fix
<Daviey> jamespage: https://github.com/paramiko/paramiko/commit/f8239015ec427a2b5e62afa8370885894483a356
<jamespage> yes
<Daviey> jamespage: Seems annoying to bump this late stage, doesn't it
<Daviey> dosaboy: Hey, Your nova upload to precise...  i think that the outcome of our discussion a few days ago was to reject.  I have now rejected it, but can be salvaged easily if needed.
<jamespage> Daviey, yes - I've raised a task on that bug for Ubuntu explaining the situation.
<jamespage> Daviey, phew - paramiko has a test suite
<jamespage> Daviey, guh - we would need to SRU that fix
<jamespage> it does not qualify for the cloud-archive
<Daviey> jamespage: yep, find out what exactly it does.. and see if you feel it meets the SRU criteria
<dosaboy> Daviey: sorry got the right channel now ;)
<Daviey> dosaboy: hola!
<Daviey> dosaboy: Hey, Your nova upload to precise...  i think that the outcome of our discussion a few days ago was to reject.  I have now rejected it, but can be salvaged easily if needed.
<dosaboy> howdi
<dosaboy> ok so that patch has been applied through a different patch set?
<dosaboy> or are we rejecting it outright?
<dosaboy> seem to remember another patch set incluing it (possible from adam_g?)
<Daviey> bah
<streulma> hey, is there a ppa for updated php 5.3 and updated mysql-server ?
<Daviey> streulma: not a supported one AFAIK.  So if you find one, check the policy on security updates
<streulma> Daviey: in Debian we have Dotdeb, I just run Debian on the server right now, for updates...
<streulma> Daviey: but, love Ubuntu :)
<Pici> PPAs are provided by the community. We can't ensure that you will be getting security updates in a timely manner.
<Daviey> streulma: I'm not entirely sure of dotdeb's security policy.
<streulma> Pici: what to have to do to get security updates for mysql and php on Ubuntu Server then ?
<Pici> streulma: Nothing. If you are running a supported release then you will be getting security updates.
<streulma> Pici: ok, then choose a VPS with Ubuntu Server 12.04 LTS then :)
<jamespage> yolanda, w00t - I have a running ceilometer
<jamespage> yolanda, spotting a couple of bugs but nothing major
<yolanda> nice!
<jamespage> (in your charms that is)
<yolanda> is that working without stevedore problems?
<jamespage> yolanda, it is now that I've got the up-to-date oslo-config package
<Daviey> dosaboy: I *think* the situation was.. you upload needed re-basing as it was trumped by a security upload.. And adam_g confirmed that as it was now in stable/essex, a refreshed snapshot would include it.
<Daviey> dosaboy: The issue now, is that nobody has created a new set of snapshots.
<Daviey> (they need doing quickly really)
<yolanda> jamespage, did you notice the "connection lost" problem, after doing the relation between nova and ceilometer? i always have to reboot nova machine to make that work
<jamespage> yolanda, I'm not running in canonistack so don;t get that issue
<yolanda> and the duplicate nodes?
<jamespage> yolanda, I have seen that tho - but I think we already discussed this
<yolanda> jamespage, but you told that with latest juju versions it should work, i use juju from lp source, and i have that problem
<dosaboy> Daviey: gotcha, let me know if I can help with anything
<jamespage> yolanda, I don't see duplicate nodes
<yolanda> jamespage, how about if you reboot nova, or stop/start it?
<fakhir> Hello. I have a server with a network interface and a virtual network interface. how can I chose to always use the vitrual network interface for outgoing connections?
<SpamapS> fakhir: that is a routing issue
<SpamapS> fakhir: your default route determines where most outgoing connections go
<fakhir> SpamapS, ok how do I set the default route?
<SpamapS> fakhir: in /etc/network/interfaces
<SpamapS> fakhir: man 5 interfaces
<hallyn> stgraber: did you pull alex's patch for lxc-archlinux?  if not lemme push it real quick before you cut
<hallyn> oh i see you did, nm :)
<hallyn> are you pulling the lxc_map_ids patch right now?
<hallyn> (that one should be in rc, it's a regression)
<stgraber> hallyn: yeah, I think I pulled it too
<stgraber> hallyn: hmm, I don't see it applied actually, let me fix that. I certainly meant to have it
<stgraber> hallyn: pushed
<hallyn> thx
<rbasak> jamespage: around? I've got mongo past the alignment fault, but mongod hangs in atomic_intrinsics_gcc.h. I presume the ARM part of that needs fixing. Did you say that you'd done something with this already?
<jamespage> rbasak, right!
<jamespage> so that was the second part of the problem
<jamespage> lemme push my patch for that bit
<jamespage> rbasak, http://paste.ubuntu.com/5611149/
<jamespage> to start with I refactored stuff to use the gcc builtins for __sync_*
<rbasak> jamespage: ah - I didn't realise that wasn't being applied already.
<jamespage> but I'm not comfortable that I have it right
<rbasak> Or is it? I'm confused. Are you saying that this is the fix, or that it doesn't work?
<jamespage> rbasak, well the original patch from fedora-arm is - but it does not work
<jamespage> that is my revised patch
<rbasak> Ah, OK
<djkorn> good day guys.
<djkorn> need info for virtualbox-server
<Fieldy> djkorn: can you be more specific?
<koolhead17> hi all
<koolhead17> seems like patch  for this https://bugs.launchpad.net/swift/+bug/1024477
<uvirtbot> Launchpad bug 1024477 in swift "SAIO: logs filling up with "exception dumping recon cache"" [Undecided,Fix released]
<koolhead17> has still not landed in Ubuntu
<koolhead17> i am using 1.7.4-0ubuntu2~cloud0   & can still see the bug
<Pest> i need help
<Pest> i need that webcam saves screenshots something like /home/pictures/screen.jpg  but not rewrite each time. like screen 1, screen 2 etc
<Pest> advices, gentlemen?
<sarnold> Pest: maybe you could convince logrotated to rotate your photos as if they were logs
<stgraber> hallyn: http://paste.ubuntu.com/5611525/ does that look reasonable to you?
<stgraber> hallyn: also, what exactly is going on with the cupset cgroup? why can't we put pid 1 in there?
<Pest> i mean, should i put in sfwebcam.conf line like /home/pictures/screen %N .jpg ?
<stgraber> hallyn: with that change, I now get the following result for /proc/1/cgroup in a "b" container running in a "a" container. http://paste.ubuntu.com/5611528/
<hallyn> stgraber: hold on...
<hallyn> what are you trying to do?
<hallyn> the hook as is was working for me,
<stgraber> hallyn: I'm trying to fix the hook failing to setup the cpuset cgroup here, making the rest of it fail (as it's set -e)
<hallyn> stgraber: i think you need a reboot
<hallyn> then the first time lxc creates /sys/fs/cgroup/xxx/cpuset it will set clone_children to 1
<hallyn> and all should just work
<hallyn> it did for me last night
<stgraber> hmm, ok. Let me try that then
<hallyn> stgraber: well, or just set clone_children to 1 by hand,
<hallyn> and make sure to fill the mems and cpus fields in the lxc directory.  after that all should go well
 * hallyn has caused trouble yet again
<stgraber> hallyn: what do you mean by "fill the mems and cpus fields in the lxc directory"?
<Pest> oh, "tnx" guys... so helpful
<hallyn> stgraber: make sure that /sys/fs/cgroup/cpuset/lxc/cpuset.{cpus,mems} both are not empty
<RoyK> Pest: cheese should do
<Pest> RoyK: ?
<RoyK> Pest: cheese is an application to use webcams or similar
<stgraber> hallyn: ok, a reboot sorted it. The only cgroup that's wrong is the systemd one but I'm not sure I care ;) http://paste.ubuntu.com/5611746/
<hallyn> cool
<kieppie> hi folks
<kieppie> I have a headless 12.04 server doing my KVM/libvirt stuff, but I'd like to drasticly scale down the power-consuption on it - just shy of hibernation/standby when not doing any heavy lifting. how can I aggressively tune the power-consumption for on-demand use?
<lisandrop05> hello, I'm join files splitted using zip -s
<lisandrop05> but there is no way
<lisandrop05> which command I need to use to join the files again?
<lostme> Hey, I work on Openvpn on Ubu Server, I tried to get a certificate signed by my own with "sudo openssl ca -config ca.cnf -in xx.req -out newcerts/xx.crt -notext", but I get only "usr/ssl/serial: no such file or directory" and "error while loading serial number". What's wrong?
<sarnold> lostme: usr/ssl/serial  ?
<sarnold> lostme: can you copy-and-paste to a pastebin somewhere the entire command and output?
<lostme> serial is a file, and I cannot copy and paste, sry
<ScottK> lostme: Use the pastebinit package to put it in a pastebin.
<stgraber> lxc-stop: failed to stop 'test-lucid': Operation not permitted
<stgraber> FAIL
<stgraber> runTest (__main__.SimpleLxcTests) ... lxc-stop: failed to stop 'test-lucid': Operation not permitted
<stgraber> hallyn: ^
<hallyn> sigh.  where is this?
 * hallyn goes to replace all lxc testsuites with "if true: pass"
<stgraber> hallyn: running integration-test on current staging
<hallyn> what is integration-test?
 * hallyn still waiting for edubuntu to install :)
<hallyn> maybe i'll take the opportunity (if it boots :) to introduce the kids to it
<stgraber> hallyn: bzr+ssh://bazaar.launchpad.net/~serge-hallyn/+junk/lxc-test/
<stgraber> root@castiana:~# lxc-stop -n test-lucid
<stgraber> lxc-stop: failed to stop 'test-lucid': Operation not permitted
<hallyn> <grimace>
<stgraber> I can't remember ever seeing that kind of thing :)
<hallyn> stgraber: so what happens if you lxc-stop it yourself?
<stgraber> well, I can't ;) that's what I just tried
<hallyn> stgraber: the lxc.stopsignal seems a likely culprit
<stgraber> hallyn: yeah... I have that problem with any container on my machine, so it's a global problem, very likely introduced by the stopsignal stuff
<hallyn> can you strace it and see what is getting the eperm?
 * hallyn waiting on his test instance to finish installing ubuntu-dev-tools
<stgraber> hallyn: kill(32498, -1)                         = -1 EINVAL (Invalid argument)
<stgraber> hallyn: so for some reason it uses -1 as the default value for the signal, instead of 9 (I guess our default is SIGKILL right?)
<hallyn> stgraber: but the patch to lxc_stop_callback() defaults to using SIGKILL which was used before as explicitly
<stgraber> hallyn: yeah, that doesn't make sense ... the -1 value should only be possible if "handler->conf->stopsignal" was set, which it isn't in my case...
<stgraber> hallyn: if stopsignal isn't set, it should just be using the standard SIGKILL value as was the case in the past...
<hallyn> right
<stgraber> hallyn: shouldn't stopsignal be initialized to 0?
<stgraber> (in conf.h)
<stgraber> hmm, no, was missing context looking at the diff
<hallyn> right, memset
<hallyn> why does rootfs get an extra / prepended?
<hallyn> stgraber: you've verified that without that patch it works?
<hallyn> great, apparently i mistyped my interfaces file
<stgraber> hallyn: I haven't tried reverting yet, but I can't think of anything else which would change the second parameter of kill() to -1
<stgraber> I'm building a version with some extra debug statements now
<hallyn> oh wait, heh, i guess my instances are hanging from the cgroup-lite bug :)
<stgraber> hallyn: so handler->conf->stopsignal is definitely -1 in my case, now to figure out why
<stgraber> hallyn: and I also confirmed that config_stopsignal isn't called, so that part works too
<hallyn> buffer overrun?
<hallyn> stgraber: on my end, i can't get a raring instance with eth0 as bridge_port to br0 to boot today.
<stgraber> hallyn: so I confirmed that at the end of lxc_conf_init stopsignal is indeed 0, so if nothing touches it afterwards and it's -1 at the time of lxc_stop it looks like something is overwriting it by accident
<hallyn> right, i'm looking over git wc -p for clues
<stgraber> added a ton more debugging here, hopefully that should show me roughly where the value changes to -1
<mkander> I have a strange problem here. After I installed plex media server on the server it is requesting and receiving multicast stream from the IPTV from the provider. Even though Plex is turned off. Any idea how I can debug? I dont want the server to get flooded.
<mkander> cant find out what is generating the request
<sarnold> mkander: as I understand IGMP, the router will periodically poll the network to see if multicast groups are still needed, and unsubscribe from ones that aren't needed any longer
<sarnold> mkander: see e.g., http://en.wikipedia.org/wiki/Internet_Group_Management_Protocol#IGMPv3_membership_query
<sarnold> mkander: you might be able to just wait a little while..
<mkander> sarnold: Thanks for the tips, read the article
<mkander> sarnold: I have waited for one week but something is still requesting with IGMP
<sarnold> mkander: hahaha
<sarnold> mkander: oh jeeeze :) that's ... wow.
<sarnold> mkander: I thought perhaps it'd been ten minutes or something.
<mkander> it was 5 :P
<mkander> but still it is requesting the stream somehow
<mkander> verified this by telneting into the router and running igmpproxy
<mkander> after some minutes it gets a request from the server ip
<stgraber> hallyn: so it doesn't appear to be anything during initialization... I'm now digging into the mainloop/command handler
<hallyn> stgraber: i gotta run, sorry.  i *would* say just push without that patch, but that's like a ticking time bomb if we're overrunning memory
<hallyn> i'd guess my monitor changes for cgroup are to blame
<hallyn> will look later tonight when i get back
 * hallyn out
<stgraber> I'll dig some more. I'd rather not push rc1 with a potential memory corruption of the conf struct ;)
<stgraber> hallyn: yay, finally found the first place where the value appears to be -1. It's in lxc_command_mainloop_add. Looking at what happens before that one is called
<hubuntu> silly question: when using virtual manager I would like to share clipboard between my ubuntu desktop and my ubuntu server in the VM
<hubuntu> is that possible? there were some tool for such things for vmware last time I used that stuff in 2007
<sarnold> hubuntu: I think the virt-manager just fires up a VNC client. I'm not sure if any vnc clients make it easy to share clipboard or the selection buffer, but perhaps one exists? (I don't know that the standard one _doesn't_ do that..)
<hubuntu> using spice
<hubuntu> i think it is the spice-vdagent package
<hubuntu> but thanks a lot sarnold
<hubuntu> :)
<sarnold> hubuntu: woo :) good luck :)
<hubuntu> aptitude in precise is always a bit weird and wants to delete a lot of stuff all the time... I end up using apt-get and I get what i want
<hubuntu> is that normal?
<ScottK> Using apt-get is the sane way to go.
#ubuntu-server 2013-03-14
<stgraber> hallyn: made any progress on your side?
<hallyn> stgraber: no.  only more confirmation that the most likely time for my machines to not boot several times per day is during effing freezes
<stgraber> hallyn: heh
<hallyn> well i see one bug immediately, but it's not the cause of your troubles
<stgraber> hallyn: it's really odd, the problem I see here is handler->conf being fine in start.c but being wrong in lxc_command_mainloop_add (commands.c)
<stgraber> hallyn: though it looks like the struct is passed fine, so I can't really explain it
<stgraber> so anything that's a result of lxc_command_mainloop has the value as -1, but even after the lxc_command_mainloop_add, the parent (in start.c) still has the correct value locally
<stgraber> hallyn: not sure if I expressed myself clearly enough, hopefully this may help :) http://paste.ubuntu.com/5612544/
<hallyn> so inside lxc_command_mainloop_add() itself is where you print it out and get -1?
<stgraber> hallyn: yep
<hallyn> stgraber: ^ also, is there any chance at all that you have libraries or binaries from an old build under /usr/local/bin/ polluting things?
<hallyn> that's whack :)
<stgraber> hallyn: nope, I only use packages and they're all the same version I'm afraid
<hallyn> stgraber: and if you print it right after the lxc_command_mainloop_add inside lxc_poll(), then it's 0 again?
<stgraber> yep
<hallyn> have you tried setting lxc.stopsignal = 9 ? :)
<hallyn> we need one of kees' super-duper memory trackers
<hallyn> wonder if you can set a watch in gdb and have it run long enough to watch the value change
<stgraber> hallyn: I haven't tried setting lxc.stopsignal, no, but I can try that
<stgraber> hallyn: just to be 100% sure, I created a completely clean raring container, installed my test packages in there and I reproduce the issue just fine, so it's not some weird library mix on my machine
<stgraber> hallyn: just tried with lxc.stopsignal, still get -1!
<stgraber> hallyn: looking at my debug statements, setting lxc-stopsignal properly replaces all my "0" in a standard run by "9", but I still get "-1" inside lxc_command_mainloop_add and so kill() still uses -1 and fails
<stgraber> hallyn: http://paste.ubuntu.com/5612581/ not terribly useful if you don't have my patched version in front of you, but that gives you an idea. Basically it's init at 0, then set to 9 and remains like that except within lxc_commands
<hallyn> boggle
<hallyn> stgraber.  one funky idea.  can you just add '#include conf.h' at commands.c?
<stgraber> hallyn: it already includes lxc/conf.h
<hallyn> yeah, can you get rid of that and change it to "conf.h"
<stgraber> ok
<hallyn> if that doesn't work, then print out handler->conf->fstab, and handler->name
<hallyn> (or do that at the same time if you haven't yet compiled :)
<stgraber> build is already running
<hallyn> k
<hallyn> oh look, 14 xfs userns patches from Eric tonight
<stgraber> hallyn: didn't make a difference. Adding the debug statements. I guess you want that in lxc_command_mainloop_add?
<hallyn> yeah
<hallyn> see if any of those structs is making sense
<stgraber>       lxc-start 1363229394.271 DEBUG    lxc_commands - stgraber: debug1: -1
<stgraber>       lxc-start 1363229394.271 DEBUG    lxc_commands - stgraber: debug1: fstab: /var/lib/lxc/tpl-raring-i386/fstab
<stgraber>       lxc-start 1363229394.271 DEBUG    lxc_commands - stgraber: debug1: name: tpl-raring-i386
<stgraber> hallyn: ^
<hallyn> stgraber: (you see where I'm going with this :) how about other fields of lxc_conf near the end?
<hallyn> i've just about got a working build for myself
<stgraber> hallyn: I could try adding another member to the struct after this one, see if that magically fixes stopsignal and breaks the next one
<hallyn> yeah
<stgraber> hmm, though it's not the last one so that wouldn't make a lot of sense
<stgraber> anyway, let me get autodev and kmsg, autodev should be 0 and kmsg should be 1 I think
<stgraber> and they are before/after stopsignal in the struct
<hallyn> i have autodev as 0, and rcfile as NULL
<hallyn> stgraber: the address of stopsignal is different at the call to lxc_command_mainloop_add, and inside of it!
<stgraber> hallyn: I have autodev as 0 and kmsg as 0
<hallyn> and rcfile is not null outside of it
<stgraber> hallyn: the latter seems weird to me as it should be 1
<hallyn> (but ull inside)
<stgraber> hallyn: hmm, wth is going on there? :)
<hallyn> null, not ull :)
<stgraber> I just added a bit more debug to make sure kmsg is indeed 1 outside of lxc_command_mainloop_add but 0 in
<stgraber> (if it's also 0 outside, then we have another bug that needs fixing)
<hallyn> stgraber: but that's accidental :)  bc even &handler->conf->autodev is different address
<stgraber> right, confirmed, kmsg is also affected. So we're just lucky when we happen to get the right value ;)
<hallyn> oh sorry, i see :)
<hallyn> read too fast
<stgraber>       lxc-start 1363230149.320 DEBUG    lxc_commands - stgraber: debug1: kmsg: 0
<stgraber>       lxc-start 1363230149.320 DEBUG    lxc_start - stgraber: debug12: kmsg: 1
<hallyn> and yet, handler and handler->conf have the right addresses
<stgraber> maybe that 0 in kmsg is the one I was supposed to have for stopsignal :P
<stgraber> anyway, I wonder what's messing the end of that struct so badly
<hallyn> it's not the end of the struct
<hallyn> the addresses are wrong.  so the compiler is messing up the struct between the two files.
<hallyn> i'm printing out 'handler->conf' and '&handler->conf->autodev' in both places - conf is the same, but autodev is off by a bit
<hallyn> AND - sizes are different
<hallyn> http://paste.ubuntu.com/5612627/
<hallyn> some -DAPPARMOR or the like is not being set for both i think
<hallyn> 8 bytes difference, one pointer is in one and not the other
<stgraber> hallyn: missing config.h include in one of the two?
<hallyn> my money is on HAVE_SCMP_FILTER_CTX
<hallyn> hm?
<stgraber> we typically get those defines through the autoconf generated config.h
<hallyn> well commands.c includes that last, that may be the problem
<hallyn> trying with that moved up
<hallyn> yeah that fixes that!  now lxc-stop just hangs
<stgraber> hallyn: so just moving config.h above the other includes?
<hallyn> yup
<hallyn> but like i say then lxc-stop hung,
<hallyn> but of course that could be one of the other .c files having the same problem.
<stgraber> I'm tempted to just patch all our .h and .c to always include config.h as the first thing they do
<hallyn> well it might be enough to have conf.h and start.h do it
<stgraber> I guess. Next time I'm bored I'll figure out how to get a nice graph of all the in-project includes then look at making them make a bit more sense ;)
<hallyn> yeah that fixes lxc-stop fully for me
<stgraber> root@castiana:~# lxc-start -n tpl-raring-i386 -o debug -l debug -d
<stgraber> root@castiana:~# lxc-stop -n tpl-raring-i386
<stgraber> root@castiana:~#
<stgraber> right, same here
<stgraber> I just added an include of config.h in conf.h and start.h as you suggested
<hallyn> phew
<stgraber> that was a fun one ...
<hallyn> hah
 * hallyn hopes to now hear that all tests pass :)
<stgraber> I sure hope so
<stgraber> hallyn: fix pushed to staging with both of us as sign-off
<stgraber> hallyn: building clean packages now and making my machine run all the tests so I can hopefully just send the pull request tomorrow morning
<stgraber> hallyn: I wonder how many other bugs we just magically fixed in the process ;)
<hallyn> or introduced
<hallyn> certainly a step in the right direction, but that doesn't mean we won't run into other glitches first :)
<hallyn> anyway - thanks!  good night
<stgraber> good night!
<koolhead17> can someone tell me if we have any issue with the cloud archive
<zetheroo> anyone with experience working with GlusterFS in Ubuntu?
<jibel> jamespage, hey
<jibel> jamespage, is there a plan to backport jenkins 1.480 to 12.04?
<eagles0513875> hey guys i need some help tweaking spamassassin and amavis. they are working as they should, but I want the subject to have ***SPAM*** added to it so the person who gets it knows it's spam. that is not happening; any ideas what i need to tweak?
<histo> eagles0513875: spamassasin configs
<eagles0513875> you want me to pastebin them for you histo
<histo> eagles0513875: I don't really have time to look right now. I believe it would be in there. but on a second thought.....
<eagles0513875> ?
<eagles0513875> should i just pastebin them
<histo> It should be in your MTA
<histo> eagles0513875: you using postfix or exim?
<eagles0513875> postfix
<eagles0513875> i followed the server guide for 12.04
<histo> eagles0513875: http://www.akadia.com/services/postfix_spamassassin.html
<eagles0513875> histo: im a bit confused though what exactly should i be looking at cuz most of the settings seem to be setup by default
<histo> eagles0513875: That page shows what settings need to be in postfix as well as local.cf for spamasassin
<histo> eagles0513875: Or you will have to wait for someone else to help I have to go sorry
<eagles0513875> histo: the ubuntu server guide has them already thanks for your help though
<histo> eagles0513875: which guide are you following exactly?
<eagles0513875> histo: https://help.ubuntu.com/12.04/serverguide/mail-filtering.html
<buengenio> guys, can somebody please recommend a calendaring server/service?
<acalvo> Has anyone successfully used preseeding using an 12.04 alternate image installer (no ubiquity)?
<acalvo> For some reason, it does not even read specified boot parameters
<xnox> acalvo: one needs to use a different preseed (ubiquity and alternate do it slightly differently)
<acalvo> xnox, I know, finally managed to get it working
<acalvo> really tricky though
<acalvo> https://coderwall.com/p/oq1xta helped me a lot
<nailora> hi there, we upgraded our vm host from ubuntu 10.04 to 12.04 and the following happened:
<nailora> http://nailor.devzero.de/oneshot/2013-03-14/iostat_ios-month.png/02fd5f8c5f1059b7/iostat_ios-month.png
<nailora> any idea what could be the reason?
<patdk-lap> problably cause of device mapper
<nailora> it definitely seems to be related to device mapper. any suggestions how we could get back the old performance (or comparable performance)?
<patdk-lap> actually, maybe not
<patdk-lap> 12.04 machines I have updated in the last week also have that issue
<patdk-lap> I've been ignoring it, as I figured it might just be cause they got rebooted
<patdk-lap> http://stats.patrickdk.com/iostat_ios-month.png
<maxagaz> hi
<maxagaz> how to change my server's address from 1.2.3.4:3000 by myservice.mydomain.com ?
<nailora> maxagaz: you want your server to be reached when someone enters myservice.mydomain.com in his browser (or other software client)? you need to set up proper DNS entries then. or am i misunderstanding what you want to do?
<maxagaz> yes, I want people to see myservice.mydomain.com in their browser instead of 1.2.3.4:3000
<maxagaz> nailora: my subdomain is created on my account at bluehost.com and the server in an Amazon EC2 server
<nwillems> hi there. I'm trying to setup a small network, and would like to install a DHCP service - I've seen some talk on dnsmasq and dhcp3-server, but what to choose?
<nailora> maxagaz: so you know the ip address of your ec2 instance. at bluehost you must configure your subdomain to point to this ip address. i do not know what interface bluehost offers to do this, but the "DNS" and configuration/entries/... should be written all over it.
<maxagaz> nailora: thank you
<maxagaz> nailora: I managed to redirect is display http://myservice.mydomain.com:3000/, but now how can I get rid of :3000 ?
<nwillems> Hello again. My connection dropped. What is the preferred DHCP service on ubuntu?
<maxagaz> is it something to do with my resolv.conf ?
<freddy__> hi- I want to give my ubuntu 12-4 nic more than one ip. How do I do that with cli?
<nailora> maxagaz: 3000 is the port you connect to. different services (e.g. apache httpd, mysql db server) listen on different ports, and you can probably configure the port in the configuration file of the corresponding service. i dont know what service uses 3000 by default
<maxagaz> nailora: redmine
<maxagaz> nailora: I don't want to change it, just to mask it
<nailora> http://serverfault.com/a/150368 briefly explains how to setup a reverse proxy with apache2 to do exactly that. i dont know if i would recommend reverse proxying to a beginner, though. alternatively you might change mongrel (or whatever webserver ruby/redmine ships) to listen on port 80 (or 443 for https) and you wont need to explicitly mention it in your browser.
<eagles0513875> hey guys I have an amavis spamassassin clamav setup and i would like the subject to display ***Spam*** yet it's not doing that. could someone please tell me why it's not displaying that tag as part of the subject?
<jacobw> eagles0513875: check $sa_spam_subject_tag in /etc/amavis/conf.d/20-debian_defaults
<eagles0513875> jacobw: this is what i have $sa_spam_subject_tag = '***SPAM*** ';
<eagles0513875> jacobw: any logs i could provide you with?
<jacobw> eagles0513875: I'm not sure, that was my only suggestion
<eagles0513875> jacobw: i followed the ubuntu 12.04 server guid the section on mail filtering
<jacobw> eagles0513875: Do you have a link to that page?
<eagles0513875> hold on
<eagles0513875> jacobw: https://help.ubuntu.com/12.04/serverguide/mail-filtering.html
<jacobw> Have you checked for the headers in the mail that goes through content filter?
<jacobw> X-Spam-Level, etc.
<eagles0513875> jacobw: in spamassassin
<jacobw> eagles0513875: No, in the mails that Postfix delivers
<eagles0513875> jacobw: in the syslog i'm seeing amavis log the hits and mark them as spammy
<jacobw> eagles0513875: Send yourself a mail, and check the headers for the X-Spam-Level field
<jacobw> eagles0513875: https://help.ubuntu.com/12.04/serverguide/mail-filtering.html#mail-filter-testing
<eagles0513875> ok
<jacobw> eagles0513875: See if the X-Spam-Level is high enough to trigger the subject tagging, $sa_tag2_level_deflt
<eagles0513875> ok
<eagles0513875> jacobw: there is no spam level in the header at all
<patdk-wk> make sure amavis knows the domain is *local*
<eagles0513875> patdk-wk: where exactly do i look for that? as well, it functions; the header shows amavis is picking up the emails
<patdk-wk> in the amavis config file
<eagles0513875> which one the debian-config one
<jacobw> Can you paste your syslog messages? And check that the mails you're looking at have the same Message ID field in the headers as the mails you see in the syslog messages.
<eagles0513875> mail id matches
<eagles0513875> jacobw: here is the syslog http://paste.ubuntu.com/5613562/
<eagles0513875> if you want me to filter it to something specific let me know
<jacobw> Can you filter it for Amavis lines only?
<eagles0513875> sure
<eagles0513875> jacobw: http://paste.ubuntu.com/5613573/
<eagles0513875> patdk-wk: if it wasn't working locally wouldn't i have errors showing up in my syslog in regards to amavis?
<patdk-wk> who said locally?
<patdk-wk> a config option that is called *local* has nothing to do with locally
<eagles0513875> patdk-wk: which file am i to be looking at as amavis has a bunch of then in its conf.d directory
<patdk-wk> any one you want
<patdk-wk> normally I put all my customizations in their own file, like 99
<jacobw> It's best to use one that isn't a conffile
<jacobw> Like 99-local.conf or something
<eagles0513875> patdk-wk: is this what you are referring to? $sa_local_tests_only = 0;    # only tests which do not require internet access
<patdk-wk> no
<patdk-wk> @local_domains_acl and it's relatives
<eagles0513875> i have nothing in my file
<eagles0513875> at least not that im seeing
<patdk-wk> or local as selected via sql
<eagles0513875> jacobw: im using what was told to me via the server docs
<patdk-wk> local_domains_*
<eagles0513875> patdk-wk: single domain setup not using virtual users or anything of that
<eagles0513875> sort
<patdk-wk> well default is to include $mydomain
<patdk-wk> so if $mydomain is set correctly
<patdk-wk> it is fine for a *single* domain then
<eagles0513875> ahh wait
<eagles0513875> that was in a different file let me find it
<patdk-wk> but amavis only modifies emails that are *local*
<eagles0513875> ok
<eagles0513875> patdk-wk: i had to set that in a file i forgot which and where
<patdk-wk> grep
<eagles0513875> patdk-wk: found it in the amavis 05-node_id file $myhostname = "mail.faa.org.mt";
<eagles0513875> that is the FQDN
<patdk-wk> hmm, that is not $mydomain
<NaGeL_Work> !webmin
<ubottu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<NaGeL_Work> hello is there any alternatives to webmin?
<eagles0513875> humm, i'm doing something wrong with the grep command to find this thing
<eagles0513875> NaGeL_Work: learn the command line :p
<NaGeL_Work> i'm trying but there area few users who wants a GUI for the server
<ogra_> NaGeL_Work, https://help.ubuntu.com/12.04/serverguide/zentyal.html
<eagles0513875> patdk-wk: i dont have it defined anywhere
<NaGeL_Work> thanks
<patdk-wk> personally I just set, @local_domains_acl = ( ".$mydomain","example.org","example.com", .... );
<NaGeL_Work> jabber?
<eagles0513875> patdk-wk: what im lost and confused with is all the config files available
<patdk-wk> that is the debian way :)
<patdk-wk> I ignore all of them, and just add my own
<patdk-wk> they are appended in numberic order, so highest number wins
<eagles0513875> patdk-wk: ahh ok so what do i add to i guess ill use the debian config
<eagles0513875> and where on earth would i speacify mydomain and how
<patdk-wk> well, default config is, 05-domain_id:chomp($mydomain = `head -n 1 /etc/mailname`);
<eagles0513875> ok so it also has the @local_domains_acl
<eagles0513875> so i just add mail.domain1.com, mail.domain2.com in quotes, comma separated?
<patdk-wk> yes, so if you use mydomain, in any other config file, it will be useless, unless you also define local_domains_acl too
<patdk-wk> not for mydomain
<patdk-wk> for local_domains_acl, sure
<eagles0513875> patdk-wk: like this @local_domains_acl = ( ".$mydomain","mail.faa.org.mt","mail.ambjentahjar.org" );
<patdk-wk> heh?
<patdk-wk> user@mail.faa.org.mt is what you use?
<eagles0513875> @faa.org.mt but for some reason delivery shows mail.faa.org.mt
<eagles0513875> let me check something
<eagles0513875> the problem is that faa.org.mt is pointing to the website on a different server and i'm a bit worried about breaking something, the main domain that is
<patdk-wk> heh?
<patdk-wk> this controls adding spam headers
<patdk-wk> if you want spam headers you MUST supply the domain there
<eagles0513875> ok
<eagles0513875> basically remove the mail.faa.org.mt what about the 2nd domain
<patdk-wk> probably has the same issue
<eagles0513875> ok they have now been removed, should i send another test email?
<eagles0513875> patdk-wk: that did it now i have a spam flag
<eagles0513875> why the heck is the server guide incomplete then
<patdk-wk> why programs have their own documentation :)
<eagles0513875> patdk-wk: which arent any better sometimes though
<patdk-wk> I haven't had an issue yet
<jamespage> hallyn, I do still see issues with the /dev/kvm permissions on a fresh raring install of openstack
<jamespage> it's owned root:root with group write permissions
<hallyn> jamespage: ok, thanks.  i'll set everything aside today to get to the bottom of that
<hallyn> unless stgraber has bad reports about the lxc tests
<jamespage> hallyn, sorry :-(
<stgraber> hallyn: everything looks good for now. I'm finishing a batch of PPA builds to make sure I didn't miss anything, but all the tests succeeded
<stgraber> hallyn: so I hope to have the pull request out in an hour or so
<stgraber> hallyn: FWIW, I plan on announcing that we'll be at Plumbers around the time rc1 is tagged and we prepare for the final 0.9. Hopefully I'll have some time to update the wiki before that.
<hallyn> cool.
<NetoAtaides> hello
<NetoAtaides> I'm setting up a LDAP Samba server here
<NetoAtaides> I've compiled samba and openldap from source
<NetoAtaides> now I'm having issues regarding creating init scripts for both
<NetoAtaides> I'm following instructions from http://www.faqs.org/docs/securing/chap29sec292.html and I cannot figure out what that file /etc/sysconfig/network means
<NetoAtaides> I've read that this file only exists on Debian GNU Linux
<NetoAtaides> what about Ubuntu?
<NetoAtaides> does anyone know?
<jacobw> NetoAtaides: It doesn't exist on either
<jacobw> NetoAtaides: /etc/sysconfig is a Redhat thing
<NetoAtaides> this /etc/sysconfig/network file is said to be the Source networking configuration
<NetoAtaides> jacobw: and is there a similar file in Ubuntu?
<jacobw> You're doing this the really hard way, follow this guide and use the Ubuntu packages that have already been built and tested
<jacobw> https://help.ubuntu.com/12.04/serverguide/samba-ldap.html
<jacobw> NetoAtaides: /etc/network/interfaces
<NetoAtaides> I've asked people from openldap channel as I was having trouble with this guide....and they told me to compile the last versions
<jacobw> NetoAtaides: Most projects will tell you something like that to fob you off
<jacobw> What problems are you having with that guide?
<jacobw> This is the right place to ask about the Ubuntu Server Guide :)
<NetoAtaides> I've tried with /etc/network/interfaces instead of the sysconfig stuff...
<NetoAtaides> and got $ sudo /etc/init.d/ldap start
<NetoAtaides> /etc/init.d/ldap: 5: /etc/network/interfaces: auto: not found
<NetoAtaides> /etc/init.d/ldap: 6: /etc/network/interfaces: iface: not found
<NetoAtaides> /etc/init.d/ldap: 9: /etc/network/interfaces: auto: not found
<NetoAtaides> /etc/init.d/ldap: 10: /etc/network/interfaces: iface: not found
<NetoAtaides> /etc/init.d/ldap: 20: [: =: unexpected operator
<jacobw> Ok, remove whatever you compiled yourself (make uninstall or similar)
<jacobw> And start from '
<jacobw> sudo apt-get install samba samba-doc smbldap-tools
<jacobw> '
<NetoAtaides> well...I don't remember the issues about samba-ldap Ubuntu package...I'll retry it then
<hallyn> grrr.  qemu64 cpu is not getting vmx.  why?
<NetoAtaides> I'll come back if problems appear
<NetoAtaides> thanks
<sliddjur> in file /etc/sysctl.conf line net.ipv4.ip_forward=1 this enable port forwarding ?
<sliddjur> or this echo "1" /proc/sys/net/ipv4/conf/eth0/forwarding
<jacobw> sliddjur: it's to enable forwarding of IP packets not destined for the local host
<jacobw> sliddjur: i.e. routing
<sliddjur> jacobw: which one?
<jacobw> sliddjur: the first one, the second one is for port forwarding
<ogra_> the second one would just output 1 though
<ogra_> :)
<ogra_> you need a > to direct that 1 into the file
<sliddjur> ogra_: yeah was type :)
<sliddjur> typo*
<sliddjur> :P
<rbasak> jamespage: pretty sure I've got to the bottom of the mongodb problem. Will try a fix tomorrow.
<jamespage> rbasak, marvellous!
<Kranker> heyho may someone help me with some server issues on ubuntu ? :P
<holstein> !ask | Kranker
<ubottu> Kranker: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<Kranker> i have the problem that my mail server is massively sending me Mail Delivery Failures. and if i run ps aux i get many smtp processes; also i have got 55k mails on my email pop... so any suggestion what happened there?
<Xaro> hey all
<sarnold> Kranker: do the reject messages say why they are being rejected?
<Kranker> w8
<Kranker> Well just saying it couldn't be delivered to the address x@x
<Kranker> if i run ps aux i have more than 40 instances of bounce -z -n defer -t unix -u -c
<sarnold> is there any more information in your logs?
<Kranker> nothing
<Kranker> i found a file in /var/mail/
<Kranker> called nobody
<Kranker> Mar 14 19:48:27 localhost postfix/smtp[7577]: warning: to prevent loss of mail, turn off command pipelining for 72.34.65.34 with the smtp_discard_ehlo_keyword_address_maps parameter
<Kranker> mail log kinda 600mb big OO
<jjl> Hi, I was wondering if I could get some help getting a tftp server up and running?  Followed ever guide I could find with no luck.  Is this the place to ask a question like that?
<kieppie> jjl - not sure. dig a bit into the PXE docco's
<resno> im having a small issue.
<resno> ive setup postfix on a host as a smarthost, so it will send email to my mailserver.
<resno> i have a app using sendmail, and the emails are sending
<lenny__> I am running a small file server in my house running ubuntu 12.04. I have been looking online but can't seem to find if a program is available that will allow me to see what files are currently being accessed on my server. thanks for any help.
<sarnold> lenny__: /proc/pid/fd/ for a running process will show you the files that process currently has open.
<lenny__> sarnold: thank you very much :)
<sarnold> lenny__: note that small files might be open for way less time than you'd reasonably sleep between polling that list....
<sarnold> lenny__: but if you're curious why your cable connection feels supremely slow, it might be useful :)
<adam_g> jamespage, Daviey http://people.canonical.com/~agandelman/folsom/glance-2012.2.1-0ubuntu1.2~cloud0/
<nailora> hi there, we upgraded our vm host from ubuntu 10.04 to 12.04 and the following happened:
<nailora> http://nailor.devzero.de/oneshot/2013-03-14/iostat_ios-month.png/02fd5f8c5f1059b7/iostat_ios-month.png
<nailora> any idea what could be the reason?
<SpamapS> nailora: IO scheduler? something else un-bottlenecked, causing busier disks?
<sw0rdfish> 5GB vps is enough to setup znc and vpn right?
<sarnold> sw0rdfish: one hopes it is quite overkill :)
<sw0rdfish> heh
<sarnold> nailora: I've got a very vague memory that I've heard about this result before. Sorry I can't recall details..
<SpamapS> 785M    irclogs
<SpamapS> thats 2 years maybe?
<SpamapS> of being in 30 - 50 channels
<SpamapS> so yeah, 5G, plenty of space :)
<miu2n> anyone know cloud server?
<sw0rdfish> sarnold, I'm in norway ... will there be much difference in ping times between getting a vps from sweden or netherlands?
<qman__> I'm experiencing a strange problem and I'm stumped on how to troubleshoot it
<qman__> I have a server with two NICs in a bond with a bridge, which works great locally
<qman__> but the KVM guests get a really delayed ARP response
<qman__> so when their ARP table doesn't have the host, it takes about 10-20 pings before the ARP comes through and it works
<sarnold> sw0rdfish: most providers can help you find ping times to their different datacenters; I'd expect nearly anything in europe to be tolerable for an irc proxy :) but if you really care, your providers probably have a host in the datacenter you can ping to measure
<qman__> pinging the guests from other hosts on the LAN does not experience this issue
<qman__> it's only when the guests try to communicate out on the LAN
<sarnold> qman__: I think I've heard the spanning tree protocol can take a very long time to stabilize routes, and a delay when starting a bridge is normal... look into it, see if that describes your problem?
<sw0rdfish> sarnold, well the vpn will also be used to transfer large files (my own speed is only 5mbps btw)
<qman__> well, it's not when starting the bridge, and I have bridge_stp off
<qman__> it's every fresh connection, once the arp table drops an entry it happens again for that host
<sarnold> qman__: dang. :)
<sarnold> sw0rdfish: latency matters more for small files; with large files, things'll move along fine with 30ms or 300ms latency..
<qman__> the bizarre thing is that it does eventually get through, and then for the duration the entry is there, it works perfect
<sw0rdfish> i see, sarnold
<nailora> SpamapS: scheduler: noop deadline [cfq]
<nailora> sarnold: could you try to remember -- like really hard?
<sarnold> nailora: hehe :)
<sarnold> nailora: I thought a google search for "io latency regression site:lwn.net" would find it eventually, but I'm not seeing anything that would fit in that timeframe :/
#ubuntu-server 2013-03-15
<qman__> I think I'm running into bug 633392
<uvirtbot> Launchpad bug 633392 in linux "Bridged Guests losing network connectivity" [High,Expired] https://launchpad.net/bugs/633392
<qman__> unfortunately it went unresolved
<qman__> I'll try unbonding and see if it solves the problem
<qman__> yep, it did
<qman__> so I guess bonding + bridging + kvm is broken
<patdk-lap> can't say I ever joined bonding + bridging
<qman__> bonding + bridging works great on the host OS
<qman__> it just breaks stuff with the KVM guests
<qman__> not sure where the fault is
<patdk-lap> bonding and bridging on the host and normal nic on kvm guest?
<qman__> yes
<qman__> both normal nic and a guest with a bridge
<qman__> same thing happens to both guests
<patdk-lap> and firewall rules on the host?
<qman__> accept all
<qman__> it just has the normal KVM stuff
<qman__> or libvirt
<qman__> or whatever puts it in there
<patdk-lap> just thinking :) I normally put a firewall on the host
<qman__> yeah, this is all externally secured
<RoyK> bonding on the host, not the guest?
<qman__> yes
<RoyK> what sort of bonding?
<qman__> balance-rr
<qman__> I don't have smart hardware
<qman__> unmanaged switch, two different types of NIC
<RoyK> bonding to a single switch?
<qman__> yes
<RoyK> then why not LACP?
<RoyK> should work well
<RoyK> perhaps not to a dumb switch, though
<qman__> yeah, I don't think I can
<qman__> I just wanted to try and squeeze some more bandwidth out of it
<qman__> it's not critical, just annoying that it's broken
<RoyK> get a good switch
<RoyK> well, file a bug report
<RoyK> if enough users/developers think it's a problem, it'll be solved
<MraAlbertina> hi. could you please help me sonve this issue; "perl: warning: Setting locale failed." Pastebin: http://pastebin.com/p3N17prX
<MraAlbertina> sonve/solve..
<sarnold> MraAlbertina: 'locale -a' will show you the installed locales on your system
<sarnold> MraAlbertina: I guess one of your locale variables there is not one of the legal values reported by locale -a
<MraAlbertina> wow... i need to discover where that is
<MraAlbertina> sarnold: i have a C and C.UTF-8 after 'locale -a' everything else seems ok (all en_**.utf8)
<MraAlbertina> i have no clue where that C is coming from. might that be the problem?
<sarnold> MraAlbertina: "C" is the safe fallback :)
<MraAlbertina> oh
<MraAlbertina> oh, another entry i have is POSIX, besides that C and all en*
<MraAlbertina> is it possible to reconfigure locale, in a quick fix, sarnold ?
<qman__> pretty sure this one is the problem: LC_ALL = (unset),
<MraAlbertina> because everything seems ok, with locale -a
<qman__> there's a dpkg-reconfigure you can do to set the locale
<qman__> I can't remember which package though
<MraAlbertina> i saw that LC_ALL = (unset) somewhere
<qman__> related: http://ubuntuforums.org/showthread.php?t=1720356
<qman__> shows three methods to fix, in the order you should try them
<MraAlbertina> oh, ya, on the first "warning" i got, in the pastebin
<MraAlbertina> LC_ALL = (unset),
<MraAlbertina> okay, thanks so much qman__
<MraAlbertina> thanks sarnold
<sarnold> MraAlbertina: what fixed it? :)
<MraAlbertina> going for a reboot after editing /etc/environment  and i'll tell you :)
<MraAlbertina> sarnold: adding: LC_ALL="en_GB.utf8" -to- /etc/environment and rebooting solved it
<sarnold> MraAlbertina: excellent :) thanks
<MraAlbertina> thanks for discovering that qman__
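The locale fix MraAlbertina landed on, plus the `dpkg-reconfigure` route qman__ half-remembered (the package is `locales`), as a sketch — all three options assume a Debian/Ubuntu system and need root:

```shell
# See what's actually installed ("C" and "POSIX" are always present fallbacks):
locale -a

# Option 1: pick and regenerate locales interactively:
sudo dpkg-reconfigure locales

# Option 2: generate and set one directly:
sudo locale-gen en_GB.UTF-8
sudo update-locale LANG=en_GB.UTF-8 LC_ALL=en_GB.UTF-8

# Option 3 (what worked here): add this line to /etc/environment and reboot:
#   LC_ALL="en_GB.utf8"
```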
<patdk-lap> isn't bond-mode balance-tlb going be better than balance-rr?
<patdk-lap> balance-rr when using a single switch, can cause out of order packets
<patdk-lap> that might be your issue
<patdk-lap> the other one, balance-a?? can cause issues with devices that depend on the mac being static (cable modems, some switchs management interface, basically anything using mac for a security cookie)
<sarnold> qman__: ^^^
<qman__> while that's possible I don't think it's the problem at hand, when watching a tcpdump, the arp requests go through the bridge and get back to my host, but simply don't get to the guests, most of the time
<qman__> and the host has no issues at all communicating with the rest of the network over the bridge on the bond
<patdk-lap> ya, I imagine the balance-rr issue will be more if you load the interfaces up good
<patdk-lap> I believe I have seen that arp issue before
<patdk-lap> but it's been awhile
<qman__> likewise, real hosts on the LAN can reach the guests just fine, it only applies to the guests trying to initiate
<patdk-lap> I've defently seen that before
<patdk-lap> but totally can't remember what it was
<patdk-lap> I don't use kvm, but used to use xen with bridges like that
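A few commands that may help pin down the kind of bond+bridge ARP problem qman__ describes; the interface names `bond0` and `br0` are assumptions, substitute your own:

```shell
# Which bonding mode is the bond actually running?
cat /proc/net/bonding/bond0

# Is STP really off, and which ports belong to the bridge?
brctl show br0
brctl showstp br0

# Watch ARP on the bridge while a guest pings out --
# requests that leave but never get an answer point at the bond:
tcpdump -ni br0 arp
```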
<autoditac> hey everyone. is this the right channel to ask questions regarding nfs on ubuntu?
<autoditac> i'd be glad if someone could give me a hint regarding posix acls and nfs4 on ubuntu.
<autoditac> question is: will posix acls be applied if i access a ext3 filesystem with heavy usage of acls using nfs 4 without using kerberos? i have the same userbase both on the client and the server (LDAP)
<xnox> yes, but they will be able to bypass it, if they have root on the client.
<autoditac> hi, xnox :)
<autoditac> xnox, users don't have root access on the client. furthermore, no_root_squash is not set
<autoditac> i was just wondering if the nfs4 acls and posix_acls map and if the acls get enforced on the server or on the client side?
<patdk-lap> enforced on the client
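One way to check the behaviour autoditac asks about empirically: set an ACL on the server, then compare what `getfacl` reports on each side. The paths and the user `alice` here are made up for the example; ext3 needs the `acl` mount option for POSIX ACLs:

```shell
# Server side (illustrative config lines):
#   /etc/fstab:    /dev/sdb1  /srv/share  ext3  defaults,acl  0 2
#   /etc/exports:  /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

# Set an ACL on the server...
setfacl -m u:alice:rwx /srv/share/project

# ...then verify what the NFSv4 client sees, and whether alice's access
# actually matches it (the on-the-wire NFSv4 ACL representation may differ
# from what getfacl shows locally):
getfacl /srv/share/project    # run on the server
getfacl /mnt/share/project    # run on the client
```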
<buengenio> guys, can somebody please recommend a shared calendar server/service?
<patdk-lap> gmail?
<resno> buengenio: google calendar?
 * resno highfives patdk-lap 
<patdk-lap> just dunno what a shared calendar server/service is
<patdk-lap> like a community calendar? a wordpress plugin?
<buengenio> no, like a caldav
<patdk-lap> or like exchange/outlook? a webmail thing?
<patdk-lap> isn't caldav a protocol?
<buengenio> exchange/outlook type of thing but that can work with Outlook/Thunderbird/Mail, etc....
<patdk-lap> heh?
<sarnold> good luck with outlook :/
<patdk-lap> outlook does its own thing
<holstein> you can use google cal with those
<buengenio> I'd love to say that to our boss
<patdk-lap> and last I knew thunderbird and that doesn't support calendars
<resno> i'm still suggesting google calendar
<patdk-lap> buengenio, install exchange
<buengenio> no thank
<buengenio> no thanks
<resno> can you even install exchange in linux?
<sarnold> buengenio: iirc there's a horrible plugin thingy for outlook to make google calendars work there. I'm sure they did their best, but I don't think outlook was meant to have plugins. so.
<buengenio> but boss is sticking with Outlook till dies irae
<patdk-lap> I run exchange 2010 currently, not a big deal
<patdk-lap> buengenio, next best thing, outlook365 :)
<buengenio> isn't there something OSS?
<buengenio> standards based
<buengenio> that works everywhere?
<patdk-lap> there are standards?
<sarnold> buengenio: it's the "works everywhere" that fails, outlook doesn't want to play that game.
<patdk-lap> outlook has no standards, at least till outlook 2013, then it can use activesync
<sarnold> buengenio: and iirc nothing else really speaks exchange
<sarnold> (client-side)
<holstein> owncloud
<patdk-lap> I guess you could install horde webmail, setup activesync, then use outlook2013
<patdk-lap> I have not *tested* that though
<resno> theres zimbra
<resno> zimbra the paid version speaks it
<patdk-lap> there is always openchange
<patdk-lap> no idea how well that works
<sarnold> buengenio: I've heard good things about http://en.wikipedia.org/wiki/Open-Xchange but never used it myself
<shauno> I'd be wary of google calendar, their caldav access is on the chopping board
<buengenio> Their biggest problem is that adding event invitations sent to a non GMail address doesn't work
<buengenio> At least in Thunderbird
<patdk-lap> heh?
<buengenio> which is what almost everyone uses at the office
<resno> if you had users comfortable with ftp, how would you allow them to upload their files?
<resno> whats a resonable alternative or a secure ftp server?
<sarnold> resno: I'd get them comfortable with sftp right quick.
<patdk-lap> there is no difference between ftp and sftp these days to a user
<sarnold> except no more baffling image vs text or pasv vs active :)
<patdk-lap> hmm, most programs hide that too :)
<patdk-lap> just when it won't work, do you have to deal with it :)
<sarnold> :D
<patdk-lap> like my friends router that messed up active ftp :)
<resno> is sftp that much improved over ftp?
<patdk-lap> resno, yes and no
<resno> im sure the "s" brings secure, but is it night and day
<patdk-lap> personally I hate sftp
<resno> i hate s/ftp
<patdk-lap> but it uses a single connection, unlike ftp, fixing nat issues
<patdk-lap> and it uses ssh
<patdk-lap> so it just works better :)
<resno> oh?
<resno> so, i wouldnt need an ftp server?
<patdk-lap> depends
<patdk-lap> most ftp servers these days support sftp too
<resno> you give me hope and then snatch it away
<patdk-lap> but give you more control than ssh will give you for sftp
<patdk-lap> all depends on what goal you have
<patdk-lap> use ssh for both
<patdk-lap> or use like proftpd for sftp
<patdk-lap> I think pure-ftp does it too now, but haven't checked
<resno> theres 3 main ftp servers right?
<patdk-lap> what is *main*?
<resno> uhm
<resno> ill look into proftpd
<resno> ive managed to avoid ftp being installed, so i want to make sure its all good and secure
<patdk-lap> there is one issue with that :)
<patdk-lap> you can't have proftpd and ssh both listening on port 22 (I believe, maybe they did a passthrough thing?)
<patdk-lap> so you would have to move normal ssh to another port
<resno> you cant have proftpd listen on another port as well?
<resno> seems like a recipe for disaster
<patdk-lap> if you have it listen on another port, it will confuse users
<patdk-lap> when they use the default port :)
<sarnold> man, i'd so much rather just rely on sshd to do sftp rather than get one of the ftpd servers involved.
<patdk-lap> sarnold, like I said it depends :)
<patdk-lap> the ftp server has more control, than ssh gives you per user
<patdk-lap> and personally, I love file upload notifications
<patdk-lap> so I can realtime scan and check files people upload
<resno> patdk-lap: do you do that even for trusted users? ie) coworkers
<sarnold> something that'd be annoying to put together with incron and sshd over a few thousand users :)
<resno> i don't know your environment
<patdk-lap> trusted users? those exist?
<patdk-lap> anyones account could be compromised
<resno> true
<resno> hmm, i didnt think of that actually
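For the per-user control patdk-lap mentions, plain sshd can get part of the way with a chrooted, sftp-only group — no ftpd needed. A minimal `/etc/ssh/sshd_config` sketch; the group name `sftponly` is invented for the example:

```shell
# /etc/ssh/sshd_config
Subsystem sftp internal-sftp

# Chroot targets must be root-owned and not group/world-writable
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Users in that group then get file transfer but no shell, with everything on the single port 22 connection that avoids the NAT trouble discussed above.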
<jamespage> yolanda_, https://code.launchpad.net/~james-page/python-quantumclient/grizzly-2.2.0/+merge/153512
<jamespage> when you get a chance please :-)
<yolanda_> jamespage, meeting
<jamespage> yolanda_, (I know :-))
<nailora> sarnold: i found this https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1008400
<uvirtbot> Launchpad bug 1008400 in linux "Ubuntu server uses CFQ scheduler instead of deadline" [Medium,In progress]
<yolanda_> jamespage, i'm looking at the code in the diff, i see that in the changelog * debian/control: Set version minimum for python-cliff (>= 1.3.1).
<yolanda_> but i don't see that reflected in the diff, is that from a previous commit?
<jamespage> yolanda_, yeah - it needed a tweak in the changelog to drop the ~
<jamespage> as changelog and change did not actually match
<yolanda_> approved it
<jamespage> bug 1155556
<uvirtbot> Launchpad bug 1155556 in maas "HP ProLiant DL380 G7 tftps kernel, but initrd tracebacks in tftp server.  DL380 G6 succeeds." [Undecided,New] https://launchpad.net/bugs/1155556
<jamespage> adam_g, reviewed and uploaded to folsom CA
<jamespage> adam_g, I swept that and the django fix through to -updates
<smoser> roaksoax, jamespage woot, isc-dhcp in raring with our no maas no-uuid patch.
<smoser> er.. whatever that patch was. thanks to stgraber
<jamespage> smoser, great!
<roaksoax> smoser: nice!!
<smoser> roaksoax: i guess actually, in raring you should modify maas to use that.
<smoser> in its default config.
<zetheroo> got a couple ubuntu servers both running 12.04.2 here  ... one looks like this when logged in "[root@mars ~]#" while the other looks like this "root@saturn:~#" ... why does mars have those brackets ?
<patdk-wk> probably cause of the shell you're using
<zetheroo> I am accessing them both through the same terminal via ssh ...
<zetheroo> all servers are using bash shell
<zetheroo> weird ... I closed the saturn session and reopened it and now saturn has those brackets ... [root@saturn ~]#
<jamespage> adam_g, http://people.canonical.com/~jamespage/ca-updates/
<jamespage> quantumclient and new version of python-django-compressor for horizon
<crankharder> what is it about this builder command that I can't ssh or telnet into the host once it's built & started?  https://gist.github.com/crankharder/c0063a365996f90b170c
<ttx> jamespage: cinder milestone-proposed cut
<jamespage> ttx, ta
 * jamespage switches configs
<soren> Daviey: There's a question for you(r team) in my post to the tb mailing list earlier today. Would you mind (having one of your minions) taking a look?
<Daviey> soren: I am the minion to the cretins. :)
<soren> Daviey: That's the spirit.
<Daviey> soren: I'll reply to that.. just not right now. Thanks for raising it
<soren> Daviey: By extension, is... err.. Rick Spencer  your minion?
<Daviey> soren: The higher you go, the less important you become.. so yes :)
<Daviey> I say jump. and he says,
<Daviey> "you're fired"
<pmatulis> hmph, on quantal (at least) i see that 'deluser --remove-home' does not remove the home directory, just the files
<Croves> Hello guys! I'm trying to install Ubuntu Server 12.04 i386 on a XenServer Virtual Machine, but when I try to install the system, I get this error: "Your installation CD-ROM couldn't be mounted. This probably means that the CD-ROM was not in the drive. "
<Croves> Any idea?
<saki`> hi ho
<saki`> i was wondering
<saki`> if i could run an ubuntu server OS off a live usb?
<melmoth> saki`, http://en.wikipedia.org/wiki/Ubuntu_Live_USB_creator
<crankharder> what is it about this builder command that I can't ssh or telnet into the host once it's built & started?  https://gist.github.com/crankharder/c0063a365996f90b170c
<Croves> Anyone here is familiar with XenServer?
<saki`> thanks melmoth
<saki`> maybe i should mention
<saki`> i'm going to be trying to run this off it: http://www.sourcefabric.org/en/airtime/download/
<melmoth> saki`, the usb creator thingy lets you have a "state" in your live usb system, which means any change you make will be there after a reboot.
<melmoth> so you can apt-get install or compile stuff you need, and they will be available on the usb stick.
<melmoth> all you need is... space on the key.
<saki`> okay cool, so a persistent install or whatever yeah?
<saki`> thanks melmoth.
<melmoth> indeed.
<melmoth> when you create the key, you have an option about "casper", that's the persistent stuff
<melmoth> basically, it asks you how much space you want to allocate to the persistent storage (if i understand correctly)
<saki`> ah ok
<melmoth> i don't know the details, i just know it "just worked" last time i needed it
<saki`> hmm this seems to require me to compile it somewhere. if this works let me try and use a windows based installer instead.
<saki`> as in, if it works anyway
<saki`> you wouldn't happen to know of one would you melmoth? the only one i know of is YUMI, and that's for multiboot stuff
<melmoth> nope, sorry.
<saki`> ah no worries
<saki`> found one
<zetheroo> in trying to setup glusterfs here I am running into this message and cannot seem to find a fix that works for me:  /mnt/gluster or a prefix of it is already part of a volume
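The "already part of a volume" error zetheroo hits usually means leftover gluster extended attributes from a previous volume on that path. A commonly cited workaround, offered here as a sketch — it destroys gluster metadata on the path, so double-check you have the right directory first:

```shell
# On the brick path that triggers the error:
sudo setfattr -x trusted.glusterfs.volume-id /mnt/gluster
sudo setfattr -x trusted.gfid /mnt/gluster
sudo rm -rf /mnt/gluster/.glusterfs

# Restart the daemon (service name varies by packaging):
sudo service glusterfs-server restart
```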
<hrenovo> so to add a rule with ufw I do something like this "ufw allow 8080" , now how can I delete this rule from showint up in "ufw status" ?
<jpds> hrenovo: That's... not supposed to be done?
<jpds> Why would you add a rule, then hide its existence?
<hrenovo> if I no longer need it
<hrenovo> not hide, just get rid of it
<hrenovo> i figured it out
<hrenovo> its ufw delete allow 8080
<hrenovo> like that
<jpds> hrenovo: Ah, right. :)
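The ufw round-trip hrenovo worked out, for reference (`ufw delete` takes the same rule specification used to add the rule):

```shell
sudo ufw allow 8080          # open port 8080 (tcp and udp)
sudo ufw status              # the rule shows up here
sudo ufw delete allow 8080   # remove it again
```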
<fabiofranco> something weird is happening with my tomcat7 and mysql server. when I add firewall rules with iptables to open the port 8080 and 3306 and add the last rule dropping everything else the communication between tomcat7 and mysql just stops... any thoughts?
<patdk-wk> fabiofranco, thoughts without seeing the rules?
<fabiofranco> sure, I add: iptables -A INPUT -j ACCEPT -p tcp --dport 8080, iptables -A INPUT -j ACCEPT -p tcp --dport 3306 and finally iptables -A INPUT -j DROP -p tcp
<fabiofranco> just those three... and after I add the last one the communication stops immediately
<fabiofranco> I add the one open ssh too of course
<qman__> you need to add rules to allow all traffic on the loopback interface
<fabiofranco> qman__ example pls?
<adam_g> jamespage, those 2 new CA updates LGTM
<qman__> iptables -A INPUT -i lo -j ACCEPT
<qman__> iptables -A OUTPUT -o lo -j ACCEPT
<fabiofranco> qman__ i see.. gonna give a try
<qman__> also, it's conventional to put the -j at the end of each line
<qman__> not sure if it affects the rules
<qman__> but if it does, your last rule could mean iptables -A INPUT -j DROP, which would certainly not be great
<qman__> also, do you have rules for established traffic?
<qman__> iptables -A INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT
<fabiofranco> no, I dont
<qman__> you definitely need that too
<fabiofranco> gonna apply it right now
<jamespage> adam_g, great - ta
<fabiofranco> qman__ do you think the last rule should be iptables -A INPUT -j DROP?
<qman__> it can but that will break all icmp and udp traffic
<qman__> so if you want any of either, make sure you allow it first
<fabiofranco> qman__ I see... gonna try it
<fabiofranco> qman__ it worked... thanks a lot
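Pulling together the rules qman__ walked fabiofranco through — loopback first, then established traffic, then the service ports, then the drop. Ordering matters, since iptables matches top-down; ports are the ones from the conversation:

```shell
#!/bin/sh
# Rebuild the INPUT chain from scratch.
iptables -F INPUT

# Allow all loopback traffic first -- local services talking over
# 127.0.0.1 (e.g. tomcat <-> mysql) break without this.
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

# Allow replies to connections this host initiated.
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

# Service ports: ssh, tomcat, mysql.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

# Drop everything else. Note: dropping all protocols (not just tcp)
# also kills ICMP and UDP -- allow those above first if you need them.
iptables -A INPUT -j DROP
```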
<jamespage> adam_g, that horizon oddness with firefox is fixed with the new version of django-compressor
<adam_g> jamespage, great
<adam_g> jamespage, which projects are we waiting on for RC1?
<jamespage> adam_g, most of them
<adam_g> ah
<jamespage> quantum and cinder have released rc1's
<jamespage> I started on quantum
<jamespage> but noticed the watch file does not work that well and got distracted....
<jamespage> adam_g, MP for quantum rc - https://code.launchpad.net/~james-page/quantum/grizzly-rc1/+merge/153606
<adam_g> jamespage, nice.
<jamespage> adam_g, I think all the required deps are in the grizzly-staging PPA now
<jamespage> there are a few catchups outstanding but nothing critical.
<adam_g> jamespage, when i got online, saw a precise+grizzly test had just failed on volume creation. hope its something transient. :)
<jamespage> adam_g, hmm - worked a few minutes ago - I'll try again
<jamespage> adam_g, btw I'm working on a tool to make backporting easier
<jamespage> ca-backport-package 'os_series' 'package' 'Comment for Changelog'
<adam_g> jamespage, hah
<jamespage> hopefully it will mean the only thing you can get wrong is the changelog comment :-)
<adam_g> jamespage,  i just did this yesterday http://paste.ubuntu.com/5617160/
<jamespage> adam_g, lol
<jamespage> great minds and all that
<jamespage> we should consolidate stuff
<jamespage> mine's a bit more hacky right now
<adam_g> jamespage, yeah, what are your thoughts on making this automated, in response to the version_drift failing?
<jamespage> adam_g, I'd be up for that - I implemented the changes we discussed in the CA archive admin tooling to help support that today
<jamespage> ca admins now get the change details so can choose to ignore things.
<adam_g> jamespage, where do you envision the bot pushing the auto-built backports? straight to the staging PPA or somewhere for a human to do that?
<jamespage> adam_g, cinder looks OK to me - http://paste.ubuntu.com/5617170/
<jamespage> adam_g, I think step one would be to put it somewhere for a human to review, sign and upload
<jamespage> adam_g, but so long as that proves reliable then full automation ++
<adam_g> jamespage, thats what i was thinking. a staging-staging-PPA so we can ensure builds, as well
<adam_g> need to step away. back in 10
<jamespage> adam_g, yeah - one that inherits of the staging PPA would be neat
<jamespage> hmm - that give me a thought
<jamespage> we could just write a tool that pulls stuff from there, signs the packages and uploads them to the true staging PPA
<jamespage> actually thats almost an extension of the tool I already wrote for syncs staging->proposed->updates
<jamespage> adam_g, other thing I have been doing is switching the build configs from master -> milestone-proposed as the branches are cut
<jamespage> done for cinder and quantum - ttx has been good at pinging me when that has happened
<jamespage> adam_g, if you agree with the approach I took in the mysql charm re openstack-charm-helpers I'll add that to the ha-helpers branch, re-sync swift-proxy and start working on keystone on monday
<jamespage> I guess the unison helper could live in charm-helpers as well.
<jamespage> And then we can write some unit tests.
<jamespage> w00t
<adam_g> jamespage, +1 to all that. do the branches that have a milestone-proposed also have havana version bump in master?
<jamespage> adam_g, yes
<jamespage> but due to the way we override the OSLO version number in the lab we don't get busted by that
<jamespage> i.e. the release created is always 2013.1
<jamespage> adam_g, OK - I have to go now
<jamespage> adam_g, weekend and all that - I will check back in a bit later +3 hr
<jamespage> ttfn
<adam_g> jamespage, k, looking at the mysql stuff now. after we merge that i'll sync the helpers branch with those changes + anything else still pending
<jamespage> adam_g, ahead of you - lp:~james-page/openstack-charm-helpers/ha-python-updates
<jamespage> feel free to merge - I added headers over the mysql versions to tell people its part of openstack-charm-helpers
<adam_g> doh! :)
<HelloWorld321> I'm looking at a resolved help-forum post that seems similar to my problem (http://boards.portforward.com/viewtopic.php?f=3&t=9910&sid=201967eaaef5c2335ed22ea5a811c5d3&start=10) and I don't know what the poster means in his resolution: "I noticed that the net:bind_ip in utorrent was set to a different internal IP address than the static one assigned my computer, so I just cleared this, so the field was blank.".  I'm not 
<HelloWorld321> What is net:bind_ip?
<sarnold> HelloWorld321: you were cut off at "I'm not"
<sw> HelloWorld321: net:bind_ip is the IP address that it binds to. from their website: net.bind_ip: If your computer setup requires that you use a specific LAN adapter for incoming connections, you may specify that adapter's IP address here.
<sarnold> HelloWorld321: when a program listens on a socket, it binds a socket to a port on an IP
<sarnold> HelloWorld321: the usual interface is, if no IP address is specified, bind that port on _all_ IPs the machine uses. If an IP is specified, then bind only on that IP, so other IPs on the machine don't expose the service -- or can run a different service.
<HelloWorld321> thanks.  so I can find that in netstat?
<HelloWorld321> (cut off at "I'm not"):  ... using utorrent, I'm trying to set up an ftp as a proof-of-concept, since I figure ftp is pretty standard (maybe a little too standard: I'll disable it for security once I figure out what's going on with my router), ...
<HelloWorld321> but I don't know what the net:bind_ip is or where to clear it
<sarnold> HelloWorld321: yikes, ftp is a pain in the butt all around :) active vs passive connections is extremely irritating.
<HelloWorld321> I was supposing that it would be easy, because it was so old and so standard
<HelloWorld321> (afk4lunch!)
<HelloWorld321> bak
<HelloWorld321> I'm able to hit the ftp server from inside the network, so I suppose that the host is set up properly
<hrenovo> hi. iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
<hrenovo> I have this rule added in iptables
<hrenovo> is there a way to add it in ufw ?
<hrenovo> if I enable ufw this rule is blocked
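hrenovo's MASQUERADE rule can't be expressed on the `ufw` command line; the documented route is to enable forwarding and put the nat rule in ufw's own rules file. Sketch below, assuming (as in the rule above) `eth0` is the outside interface and `10.8.0.0/24` the VPN subnet:

```shell
# /etc/default/ufw -- let forwarded packets through:
#   DEFAULT_FORWARD_POLICY="ACCEPT"

# /etc/ufw/sysctl.conf -- enable kernel forwarding:
#   net/ipv4/ip_forward=1

# /etc/ufw/before.rules -- add this block ABOVE the existing "*filter" section:
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
COMMIT

# Then reload ufw so the nat table is installed:
#   sudo ufw disable && sudo ufw enable
```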
<HelloWorld321> My system is listening for ftp on all ip addresses (http://pastebin.com/WjhZtzTX) and I'm having no trouble reaching the ftp server from inside my network, so I believe that my original question about checking/clearing net:bind_ip is not really the issue.  Is that correct?
<qman__> HelloWorld321, that is correct
<qman__> FTP is ancient and poorly designed, and does not play well with NAT
<qman__> http://mywiki.wooledge.org/FtpMustDie for more information
<qman__> the short of it is, don't use FTP, SFTP is in all ways superior and you probably already have it
<Pici> Also ,don't confuse FTPS is not SFTP
<Pici> Also ,don't confuse FTPS with SFTP
 * Pici confuses himself sometimes
<Pici> er, not with that, with typing the right thing.
<HelloWorld321> okay, I'll take that into account.  But for now, my point isn't actually to set up an FTP server, it's to set up my router.  I just want to hit anything inside my network from the world IP address.
<qman__> HelloWorld321, FTP is the worst possible protocol to test that with
<qman__> because FTP specifically will not work over a NAT without lots of hacking
<HelloWorld321> I used to run a Ventrilo server, but I had to stop when I got a different router, so now all my gamer friends are bummed.
<HelloWorld321> okay, what's the simplest application to test that with?
<qman__> http or ssh
<HelloWorld321> okay.  I have apache running on that same box.
<HelloWorld321> I'll go open those ports and try that.
<HelloWorld321> http is port 80, right?
<qman__> yes
<qman__> bear in mind that if you have a residential ISP, they may block it
<qman__> many block 25, 80, and 443
<HelloWorld321> I've thought of that.  I asked the support desk.  They said they didn't.  But they also sounded like they didn't know what "ports" are
<HelloWorld321> and yes, this is on a residential ISP.
<qman__> I've never seen one that blocks 22 though
<qman__> and you can always try forwarding a high port, like 8080 -> 80
<imrook> I'm trying to build the php5_5.3.10-1ubuntu3.6 source package, but getting "debian/setup-mysql.sh: 44: debian/setup-mysql.sh: USER: parameter not set" during test-results.txt
<HelloWorld321> is 22 sftp?
<qman__> ssh/sftp
<imrook> This was fixed back in 3.3 http://irclogs.ubuntu.com/2012/12/21/%23ubuntu-server.html
<imrook> Is this a known regression?
<HelloWorld321> okay, 80 doesn't work with this configuration.
<RoyK> HelloWorld321: setup ssh and tell us the ip address - unless you have a very bad password, it should be safe to post the address for some of us to test. if you have a bad password, your box will be compromised in hours anyway
<HelloWorld321> I have just tried to sftp localhost, and ssh to the internal ip, so sftp is setup.  My outside ip address is 98.148.120.187
<HelloWorld321> o:
<HelloWorld321> but I haven't opened that port yet: 22
<RoyK> hehe
<qman__> yeah, it's being dropped
<HelloWorld321> okay, I think I've opened that port.
<HelloWorld321> But I also think that that's my problem.  I'm not setting up the router properly.
<qman__> I got a response
<qman__> it's open
<HelloWorld321> you see me?  freaky!  yay!
<HelloWorld321> can you guess my password?  :P
<qman__> The authenticity of host '98.148.120.187 (98.148.120.187)' can't be established.
<qman__> RSA key fingerprint is 86:6a:1b:00:03:2c:85:bd:6e:2e:dc:31:50:47:6a:2a.
<qman__> so, that part of it works
<HelloWorld321> Hm.
<qman__> you can check if your software is listening correctly by doing `netstat -lanp | grep $port`
<HelloWorld321> That's not the same fingerprint I'm seeing
<HelloWorld321> I can hit ssh & sftp at localhost, but I can't hit them from the external ip I just gave you.  Would you mind hitting it one more time, tell me, then I'll disable it, and see if it stopped.  Just to make sure that it's me
<hallyn> jamespage: please do let me know if/when tests confirm the /dev/kvm issue is fixed - i'll wait until then to sru the fix.  (have written down to look at it again next w if nothing else)
<qman__> HelloWorld321, yes, it's still working
<HelloWorld321> This means that the box will accept ssh from anywhere?:   tcp6       0      0 :::22                   :::*                    LISTEN
<qman__> HelloWorld321, most NAT routers won't route traffic back in destined for your external IP
<qman__> you have to specifically configure it
<qman__> so, you can't reliably test the setup from inside your own network
<HelloWorld321> I have stopped forwarding port 22, see if you can hit it now.
<qman__> nope, dropped
<HelloWorld321> Nifty.  So I'm onto something here.
<HelloWorld321> and the reason I couldn't do the same with ftp was because it was the hardest example, not the simplest
<qman__> yes
<qman__> FTP requires ports 20 and 21, in addition to a range of high ports
<HelloWorld321> lemme try http now ...
<qman__> and your FTP server must be configured with those high ports, and must also be configured to hand out your internet IP
<qman__> you need a minimum of 3 open ports to handle a single connection
<HelloWorld321> I would suppose that it's generally safe to leave port 22 (ssh) open, as long as I have a strong password policy in place?
<qman__> it's actually best to disable password authentication
<qman__> but if you have strong passwords it should be ok
<qman__> it's also advisable to limit brute forcing through things like fail2ban or a rate limiting firewall
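The hardening steps qman__ lists, sketched as a sequence — install your key before turning passwords off, or you can lock yourself out (fail2ban's stock jail already watches sshd on Ubuntu):

```shell
# Put your public key on the server FIRST:
ssh-copy-id user@server

# /etc/ssh/sshd_config -- then disable password logins:
#   PasswordAuthentication no
#   PermitRootLogin no

# Rate-limit brute forcing:
sudo apt-get install fail2ban

# Apply the sshd_config changes:
sudo service ssh restart
```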
<HelloWorld321> would you mind trying to hit me at http://98.148.120.187
<qman__> squirrelmail
<HelloWorld321> that's right.  Thanks!
<HelloWorld321> This was driving me nuts!
<imrook> sshguard is also an easy and effective solution to prevent hammering on 22.
<HelloWorld321> For http, I would suppose I only need TCP open, not UDP?
<HelloWorld321> Thanks qman__, imrook, RoyK.  I was totally stuck on that.
<qman__> HelloWorld321, for all of the above, only TCP is needed
<HelloWorld321> I'ma secure my ssh in all the ways you've said:  disable password authentication, fail2ban, and sshguard.
<HelloWorld321> fail2ban was already installed and auto-configured.  I've poked about the documentation and don't understand a word, from which I'll infer that the default configuration is reasonable?
<HelloWorld321> I've installed sshguard 1.5-4 from the package, and the developer site says that post 1.5 there is zero configuration.  Is that correct?
<imrook> If you're just protecting sshd, then yes
<imrook> Aside from the bug I reported that hasn't been closed yet
<imrook> Having the string 'ssh' in your hostname causes the regex to fail and sshguard will not properly detect failed login attempts
<HelloWorld321> bummer.  k.  that won't be a problem for my hostname
<sarnold> imrook: hah :)
<HelloWorld321> I now believe I have my port 22 open, sshguard & fail2ban installed, and password authentication turned off for ssh at 98.148.120.187.   Care to verify?
<sarnold> HelloWorld321: Permission denied (publickey).
<sarnold> no password prompt. woot.
<HelloWorld321> That's good, right?  Yay, I did it.  Thanks.  That's pretty cool.
<HelloWorld321> Now I can run around opening other ports
<HelloWorld321> I tell ya: it was driving me NUTS!
<HelloWorld321> I totally thought I had a bum router
<sarnold> :)
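[Editor's note: a minimal sketch of the hardening steps discussed above, assuming stock OpenSSH paths on Ubuntu of this era; make sure a working key login exists before disabling passwords. The ufw `limit` rule is one way to get the rate limiting qman__ mentioned.]

```shell
# /etc/ssh/sshd_config -- disable password logins
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes

# reload sshd, then rate-limit new connections to port 22
sudo service ssh reload
sudo ufw limit 22/tcp
```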
<dsmythies> Hello, I am not normally on IRC, and am actually somewhat IRC challenged, but there is an issue I was hoping to get help with.
<dsmythies> .
<dsmythies> The computer is an Ubuntu Server 12.04 LTS with no GUI. To install a virtual machine (a ubuntu 12.04 server again) I am following the Ubuntu Serverguide Virtualization chapter, sub-section 1 Libvirt.
<dsmythies> The problem is that I can not figure how to complete a virtual machine installation, without either a GUI or a 2nd computer with a VNC viewer client.
<dsmythies> The serverguide sub-chapter mentions both virt-manager and virt-viewer, but both require a GUI.
<dsmythies> I finally figured out that I could use another computer with both a GUI and a VNC viewer client, if I used this command:
<dsmythies> .
<dsmythies> sudo virt-install -n virt32_01 -r 128 --disk path=/var/lib/libvirt/images/virt32_01.img,bus=virtio,size=12 -c ubuntu-12.04.2-server-i386.iso --accelerate --network network=default,model=virtio --connect=qemu:///system --graphics vnc,listen=0.0.0.0 --noautoconsole -v
<dsmythies> .
<sarnold> dsmythies: (please don't use . to try to add paragraphing to irc :)
<dsmythies> The important part of the command being: "--graphics vnc,listen=0.0.0.0"
<dsmythies> .
<dsmythies> My question: Is there a (Libvirt) way on a non-GUI server without involving other computers?
<dsmythies> .
<dsmythies> References:
<dsmythies> https://help.ubuntu.com/12.10/serverguide/libvirt.html
<dsmythies> https://bugs.launchpad.net/serverguide/+bug/1129649
<uvirtbot> Launchpad bug 1129649 in serverguide "Chapter 20 - Subsection 1 - Virtualization - Libvirt needs updating" [Undecided,In progress]
<dsmythies> http://ubuntuforums.org/showthread.php?t=2116415
<dsmythies> .
<sarnold> dsmythies: have you tried leaving off the --graphics command line option?
<sarnold> dsmythies: I use the 'uvt' wrapper to build, snapshot, and revert VMs, no VNC required: https://wiki.ubuntu.com/SecurityTeam/TestingEnvironment
<sarnold> dsmythies: .. though it is Yet Another Tool to configure.
<dsmythies> If I leave off the --graphics line, then I am unable to connect to anything.
<dsmythies> I do not know of "uvt", but will look into it. Right now I am specifically trying to use virt-install...
<dsmythies> In the end, I hope to edit the serverguide itself with better emphasis on a non-GUI server.
<sarnold> dsmythies: uvt doesn't do anything that you couldn't otherwise do, but it does make it easy to ignore the virt-* details :D
<dsmythies> Before the --graphics stuff, this is the command I was trying:
<dsmythies> sudo virt-install -n virt32_01 -r 128 --disk path=/var/lib/libvirt/images/virt32_01.img,bus=virtio,size=12 -c ubuntu-12.04.2-server-i386.iso --accelerate --network network=default,model=virtio -v
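[Editor's note: one GUI-free route, sketched under the assumption that a network install tree is acceptable in place of the ISO (`--extra-args` only works with `--location`, not `-c`): put the installer itself on a serial console and attach to it with virsh. The archive URL is illustrative.]

```shell
sudo virt-install -n virt32_01 -r 128 \
  --disk path=/var/lib/libvirt/images/virt32_01.img,bus=virtio,size=12 \
  --location http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-i386/ \
  --extra-args 'console=ttyS0,115200n8 serial' \
  --network network=default,model=virtio \
  --graphics none --console pty,target_type=serial

# attach to the text-mode installer from the same host:
sudo virsh console virt32_01
```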
<luminous> hello! what is the ubuntu solution to easily encrypting/decrypting a directory
<luminous> similar to truecrypt, but not tc
<sarnold> luminous: ecryptfs if you just want directories; dm-crypt on an entire block device if you want everything...
<luminous> the fuse encfs?
<luminous> sarnold: ^^
<tyhicks> luminous: ecryptfs and encfs are similar
<tyhicks> luminous: you get to choose :)
<luminous> but not the same
<tyhicks> no, different implementation
<luminous> interesting.. i will need to read more
<tyhicks> ecryptfs is an in-kernel filesystem, encfs is fuse based
<luminous> great!
 * luminous does not like fuse
<tyhicks> ecryptfs will get you a little better performance, encfs probably has more knobs and features (but I haven't looked at it in a while)
<luminous> all i need to be able to do is copy/store a git repo and some files
<luminous> to confirm, ecryptfs requires one to decrypt, edit/update/read, then encrypt, w/ encryption/decryption initiated manually - correct?
<tyhicks> luminous: no, it does it all transparently
<sarnold> luminous: ecryptfs is intended to be transparent -- once mouted, the decryption and encryption happen for you
<tyhicks> luminous: it is a stacked filesystem that goes on top of your existing local filesystem
<luminous> or, said another way... if in use, it is readable to all / like a normal directory
<luminous> it has to be unmounted to be 'protected'
<patdk-lap> once unlocked it's usable by the whole system, yes
<luminous> k, good to know, ty
<tyhicks> luminous: yes... it is close enough to be considered a posix compliant filesystem
<sarnold> luminous: if you're instead wanting git to store remote repositories encrypted, there's a tool for that specifically under development: https://github.com/blake2-ppc/git-remote-gcrypt
<luminous> that's cool
<luminous> i'll check it out, though i do want to feel reasonably confident in the setup
<sarnold> no doubt ecryptfs has seen more development time and more peer review :)
<luminous> yea
<luminous> thanks for your input!
<luminous> it is appreciated
<sarnold> have fun :)
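[Editor's note: a sketch of the transparent-mount behaviour described above, using ecryptfs-utils' interactive mount; the directory path is made up.]

```shell
sudo apt-get install ecryptfs-utils
mkdir -p ~/vault
# interactive: prompts for passphrase, cipher, key bytes, etc.
sudo mount -t ecryptfs ~/vault ~/vault
# ...files read/write normally while mounted; only ciphertext is on disk...
sudo umount ~/vault   # contents are unreadable again once unmounted
```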
<luminous> oh, and if interested in this stuff.. have a peek at crypton.io
<sarnold> nice :)
<luminous> yea :) still early, but very promising
<luminous> and backed by spideroak.com
#ubuntu-server 2013-03-16
<dsmythies> sarnold: Thanks for replying to my question earlier, and sorry for the "." line separators in my description.
<HelloWorld321> thanks all for your help with my router this afternoon, esp tx qman__
<{aaron}> hi guys, i fired up an "ubuntu cloud" branded image on ec2 today. i just wanted to confirm - besides Amazon EC2 charge there's no additional charge to run those images right?
<TheLordOfTime> don't think so.
 * TheLordOfTime runs two EC2 instances with Ubuntu on them *shrugs*
<sunuslee> hi, is anyone there to help me debugging a network problem? i have three servers(A,B,C) in a subnet. A,C can ping to EACH OTHER. but neither of them can ping to b. then i check the arp table. A got the correct hwaddr of b. but b got the incomplete of A.
<sunuslee> A, B has dual nics. but i think i set up right. eth0 in a subnet, eth1 in another.
<sunuslee> A send arping to b can not get reply. but B send arping to a can get reply. but it seems like B didn't received it.
<sunuslee> anyone?
<sunuslee> ip addr show dev eth1
<sunuslee> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br100 state UP qlen 1000  <-- what does "master br100" do?
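[Editor's note: `master br100` means eth1 has been enslaved to a bridge named br100 (nova-network's default name, if this is an OpenStack box), so addressing and ARP belong to the bridge, not to eth1 itself. A hedged diagnostic sketch; interface names follow the question above.]

```shell
brctl show br100            # which ports are enslaved to the bridge
ip addr show dev br100      # the IP should live here, not on eth1
tcpdump -n -e -i eth1 arp   # watch whether B's ARP replies actually hit the wire
```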
<elemay2> hi guys, i use bonding on our servers, but one server sets its link up for eth1 only with 100mbps. its an intel dualport 1000 nic, so why he does it? switch is on 1gbps
<RoyK> elemay2: does it use gigabit without bonding? and what sort of bonding is this?
<elemay2> RoyK: yep it uses gbit without. bond-mode 802.3ad
<RoyK> using LACP really shouldn't change the link speed
<elemay2> i have set this bond-lacp-rate 1
<elemay2> i have the exact same setup on another server, there it works
<RoyK> what does dmesg output say about links speed?
<elemay2> RoyK: e1000e: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx  e1000e 0000:01:00.1: eth1: 10/100 speed: disabling TSO bonding: bond0: link status definitely up for interface eth1, 100 Mbps full duplex.
<RoyK> weird
<RoyK> no idea, sorry
<RoyK> and it comes up with gigabit without LACP?
<RoyK> same switch type?
<elemay2> yep. same switch, same port. port speed is also set to automatic
<RoyK> what if you force both ends to gigabit? really shouldn't be needed, though
<elemay2> RoyK: i just tried it on another switch, same manufacturer. if i enable bonding in ubuntu i get an ip, all is well. as soon as i setup LAG on the switch, i can't reach some ip's in my network, also i have the same symptoms on the other switch. eth0 = 1gbps, eth1 = 100mbps
<RoyK> elemay2: I haven't setup LACP from linux for some time, but last I did it (two years back?) it worked well and gave me full speed
<elemay2> also it doesn't matter if i use 2 separate intel nics, or the dual port one. or one from netgear and one from intel and so on. exactly the same setup on my other machine. bonding set up, switch set up. all is fine.
<RoyK> a dual-port is just two nics on a single board
<elemay2> RoyK: i wanted to say that it can't be the nics ;)
<RoyK> that's highly unlikely anyway
<RoyK> if it works without LACP, it's software
<RoyK> which distro/kernel??
<elemay2> but i don't get it why. other machine is almost the same. same ubuntu version. 12.04 x64 3.2.0-38-generic
<RoyK> 3.2 on the other as well? 12.04.2 comes with 3.5
<RoyK> (for some moronic reason, they changed the kernel version on .2)
<elemay2> yep 3.2. i don't do dist-upgrades
<RoyK> dist-upgrade won't upgrade the kernel
 * RoyK always uses dist-upgrade
<elemay2> i could try to upgrade kernel. but it works on the other machine... strange
<RoyK> same machine make?
<RoyK> can you try the same switchports the other machine is using?
<elemay2> yup, i'll try
<elemay2> takes a moment, i have to shutdown it first
<RoyK> seems something may be broken in the bridging
<RoyK> just upgraded my precise box and bridging works now and then
<RoyK> works fine from localhost to vm's, but not always from remote
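[Editor's note: for comparison with elemay2's setup, a sketch of a 12.04-era 802.3ad bond in /etc/network/interfaces (ifenslave style); addresses and interface names are examples. `/proc/net/bonding/bond0` shows the per-slave negotiated speed that dmesg reported above.]

```shell
# /etc/network/interfaces -- LACP bond sketch
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves none

# check negotiated state after ifup:
cat /proc/net/bonding/bond0
```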
<brian_petersen> I am running Ubuntu 12.04 LTS server edition and I notice that two packages are being held back, linux-headers-generic-lts-quantal and linux-image-generic-lts-quantal. I did not explicitly set these packages to be held back and I am wondering what caused them to be held back automatically. Any insight would be appreciated!
<andol> brian_petersen: Running apt-get upgrade? Will apt-get dist-upgrade want to upgrade those packages?
<andol> brian_petersen: The difference is that apt-get upgrade won't upgrade packages which require new dependencies to be installed, and those packages are meta packages depending on the newest such kernel package.
<brian_petersen> andol: ah, I see. That's a little different from what I am used to. I'm coming from Arch's pacman and they didn't have that distinction. Thanks!
<brian_petersen> So right now it's downloading a new kernel version, that means that I will have multiple kernels installed now, correct?
<andol> Yepp
<brian_petersen> how would you recommend cleaning up older kernel versions?
<jcastro> sudo apt-get autoclean seems to catch most of them for me
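[Editor's note: for what it's worth, `autoclean` only prunes cached .deb downloads; removing installed old kernels is usually a job for `autoremove` or an explicit purge. The version string below is an example, not a recommendation.]

```shell
dpkg -l 'linux-image-[0-9]*' | grep '^ii'   # list installed kernel images
uname -r                                    # never purge the running kernel
sudo apt-get autoremove --purge             # drops auto-installed old kernels
sudo apt-get purge linux-image-3.2.0-38-generic   # or purge one explicitly
```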
<Nine_9> hello guys, do you know any webmail that will allow me to use multiple imap accounts at once? I've installed roundcube but it doesn't have this feature, squirrelmail doesn't seem to either, and horde is impossible to install
<patdk-lap> why is horde impossible to install?
<Nine_9> patdk-lap: i couldn't find a way to install it
<Nine_9> and I don't think it supports multiple accounts also
<patdk-lap> well there is a ubuntu package, but I never liked it
<patdk-lap> it doesn't
<patdk-lap> just use something like dovecot that supports shared and/or master accounts
<Nine_9> it's not my local machine, i don't think i can install it
<Nine_9> roundcube has a paid plugin
<Nine_9> it seems no such software exists.
<Nine_9> a similar question: http://stackoverflow.com/questions/2490818/webmail-client-that-can-open-multiple-email-addresses-at-once-like-a-desktop-cli
<histo> Nine_9: What do you mean multiple imap accounts at once?
<Nine_9> histo: like any desktop email client, you can 'manage' multiple emails at once.
#ubuntu-server 2013-03-17
<Nine_9> seriously, it seems there's no opensource webmail that can manage multiple accounts at once.
<Nine_9> it's a pity I'd be forced to go closed-source :(
<baymont> Can anyone provide some help with the Squid proxy
<NginUS> How do I see what I have set as an environment variable? And how do I reset it to what I want it to be?
<NginUS> How do I set the SERVICE_TOKEN environment variable to match that of a value in a config file?
<savr> hi
<savr> why is all the java software in ubuntu really out of date
<savr> like solr/tomcat even eclipse
<savr> they're still out of date for 13.04
<savr> I know java software doesn't really need to be installed from apt but it is nice to get all the system scripts
<savr> is there a ppa somewhere that I haven't found?
<histo> savr: How "out of date" are they?
<savr> eclipse is years old
<histo> savr: perhaps no one is maintaining the package anymore.
<savr> tomcat is like half a year old
<savr> solr is still on 3.6
<histo> !info eclipse
<ubottu> eclipse (source: eclipse): Extensible Tool Platform and Java IDE. In component universe, is optional. Version 3.8.0~rc4-1ubuntu1 (quantal), package size 16 kB, installed size 121 kB
<savr> 4.2 has already been released
<histo> !info tomcat
<ubottu> Package tomcat does not exist in quantal
<savr> !info tomcat7
<ubottu> tomcat7 (source: tomcat7): Servlet and JSP engine. In component main, is optional. Version 7.0.30-0ubuntu1.1 (quantal), package size 37 kB, installed size 353 kB
<savr> oh one sec
<histo> savr: which repo is eclipse in?
<histo> savr: that's why
<savr> tomcat 7.0.35 has recently been pulled in from debian experimental to 13.04
<histo> It's maintained by the community
<histo> tomcat is in main so it will get updated
<savr> yeah they seem to be pulled in from debian
<histo> If no one updates eclipse it will not be ever updated.
<savr> the latest release of tomcat is .37
<savr> so it is two releases behind
<histo> If it's something you use quite a bit you may want to package it.
<savr> hmm
<histo> savr: linux != windows if you are looking for bleeding edge system that breaks frequently you may want a different distro.
<savr> solr is the one really bothering me
<savr> I'm not... just hope the security fixes are ported back
<histo> savr: no one is stopping you from building the package from source. If you do I would recommend checkinstall. It will build a deb for you for easy removal.
<savr> if the security fixes aren't being handled I'm better off installing it myself
<histo> savr: that's why we have the security team
<savr> I like leaving my ubuntu boxes to auto update, it's what makes ubuntu so great
<savr> so I prefer to use the apt packages as much as I can
<savr> histo: is it safe to rely that security fixes are being back ported to software like tomcat or should I be managing the latest version on my boxes myself?
<yeats> savr: security fixes are applied, yes.  If you're paranoid and have the time, you can manage it manually
<savr> I don't have time :)
<savr> thanks security team :D
<yeats> savr: I would be careful with auto update - sometimes you want to review what's being installed before applying it
<yeats> especially if you're managing manual builds
<savr> yeah
<__dan__> hi there guys
<__dan__> having some kernel panic related fun with megaraid
<__dan__> using an LSI megaraid sata 300-8x card pci-x card
<__dan__> can anyone help or drum up some ideas etc?
<__dan__> compiled latest kernel (3.8.3) overnight and tested just now, still getting lockups :/
<__dan__> same card has worked without fault for years on win2003 server
<__dan__> cmon 361 ppl someone must be alive :P
<RoyK> __dan__: got a dump of that panic?
<__dan__> no mate sorry it never dumps :/ or at least i dont think it does
<__dan__> it just hangs
<__dan__> sometimes it flashes the caps and scroll lock lights
<__dan__> http://bugs.centos.org/view.php?id=5383
<__dan__> this looks promising but im not sure whether that patch has made it into any kernel i've tried yet
<__dan__> and im loath to install centos although i am comfortable with it
<__dan__> was just looking at the kernel source to see if that patch has made it into 3.8.3 but im way out of my depth tbh
<__dan__> seems like at least some of that patch made it into upstream
<__dan__> although it has been twiddled
<__dan__> so it seems to me anyway
<__dan__> been all up and down the internet and tbh other than install centos just to see if it works im out of ideas now
<__dan__> i know its an old raid card but it's all i got and was working under win2003 server for years
<histo> !enter | __dan__
<ubottu> __dan__: Please try to keep your questions/responses on one line. Don't use the "Enter" key as punctuation!
<Monotoko> hey guys, after today... is there anything I can do that would stop certain commands actually being executed on servers? One of our clients demanded sudo access to a server, and accidentally ran ... a bad command I'm not sure I'm allowed to put here
<histo> Why would you give a client sudo access
<histo> they have the keys to the kingdom then.
<histo> First you should ask what they need sudo access for in the first place.
<Monotoko> histo, he was a Linux tech himself, I wasn't there when my boss authorized it, I personally wouldn't have given them the access
<histo> Monotoko: You would need to ask why they need the access, for what particular file or command? Then just provide them access to that and not the whole system.
<Monotoko> sadly he's my boss and I can't seem to convince him that he really shouldn't hand this access out to clients....and now he's asking about this... and I can't find anything on the web about it - I'm planning to tell him when I next see him "this is the way to do what you wanted, however these things are wrong, and this would be a better way to do it"
<Monotoko> then if he wants to implement the blacklist of sorts, it's on his head next time someone screws up
<histo> Monotoko: Your plan is correct. You can't give them sudo and then restrict certain things. They will find a way around it, especially since they can just sudo -i  or su -  to get root. The proper way would be to just give them access to what they need. Or do it the boss's way and not have any access control at all.
<qman__> sudoers has functionality to grant access for specific commands
<qman__> however, it's whitelist-style, not blacklist
<__dan__> lxc is the way ahead :)
<histo> Monotoko: qman__ has it right as i've been saying you have to give them access to the commands they need whitelisting
<qman__> for example, I use it for the backuppc user to back up my system
<qman__> backuppc  ALL=NOPASSWD: /usr/bin/rsync --server --sender *
<Monotoko> qman__, could you whitelist everything, then blacklist certain things in the sudoers file? (I know this isn't the way to do it, and I have suggested he should only give them what they need already - I'm just going to give him the solution he's asked for if possible, then tell him for one last time that the better ways would be to only give them commands they need, not give them access
<Monotoko> etc etc)
<qman__> that line allows only that user to run only that command as root
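[Editor's note: expanding qman__'s backuppc example into a whitelist for a client account. All names and commands here are hypothetical; edit with `visudo -f` so a syntax error can't lock you out, and avoid wildcards, editors, or anything that can spawn a shell.]

```shell
# /etc/sudoers.d/client-webops  (sudo visudo -f /etc/sudoers.d/client-webops)
Cmnd_Alias WEBOPS = /usr/sbin/service apache2 restart, \
                    /usr/sbin/service apache2 reload
client ALL=(root) NOPASSWD: WEBOPS
```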
<sw_> hi is it possible to keep apache VirtualHosts in one single file, instead of seperate sites in /etc/apache2/sites-available?
<qman__> Monotoko, no, that is not possible
<qman__> it does not have a blacklist function
<Monotoko> qman__, okay cheers, I will have a google around to see if I can find anything with a blacklist function, but if not I will just tell him what he wants isn't feasible
<qman__> as mentioned, such a thing is pointless, because the user could write their own script or program and work around it, if not abuse the shell
<qman__> a whitelist is the only way to actually secure things
<qman__> you won't find one, but even if you do, it won't work
<qman__> it's not possible to secure a system that way
<Monotoko> qman__, I know... but work is work, and when your boss isn't being logical it's difficult to convince him... hopefully if I find something, and it breaks, he will understand
<Monotoko> personally I'm hoping a user does break it so he sees sense
<qman__> it's your job to demonstrate this without doing that
<qman__> show him a list of fork bombs, or a list of ways to delete files, make it visible that there's an infinite number of possible ways to actually achieve it
<Monotoko> qman__, I have tried... I've shown him the hex bomb, and told him there could be many many others created that this blacklist thing he wants wouldn't pick up
<qman__> then, in my opinion, it's time to dust off the resume
<Monotoko> trust me, this has turned into a bit of a heated arguement at work today... with him pointing fingers at me and me pointing fingers at him
<Monotoko> I explicitly told him not to give users root access to servers because that's what we are paid to manage
<Monotoko> and he went and did it anyway
<qman__> I won't be held responsible for systems that incompetents have root access on
<qman__> and that's exactly the position you're in
<Monotoko> I'd rather him call me at 2am to sort a problem
<Monotoko> than give the bloody user root access
<Monotoko> >.>
<sw_> hi is it possible to keep apache VirtualHosts in one single file, instead of seperate sites in /etc/apache2/sites-available?
<qman__> sw_, yes, but that's not the debian way
<__dan__> just write a script that runs for him instead of sudo
<__dan__> and says "Bad command or file name"
<__dan__> or sommet
<__dan__> eheh
<histo> lol
<sw_> qman__: it still works though?
<qman__> yes
<histo> You could make a fake sudo lmao
<qman__> as a matter of fact, the stock apache configuration is a single file
<qman__> debian splits it up to make it more manageable
<sw_> qman__: how would we do that if we wanted them all in a single file? like where should it be stored etc.?
<Monotoko> histo, that's so insane... it sounds like exactly what he'd go for
<__dan__> sometimes u gotta fight moron with moron :)
<qman__> sw_, it can go in any of the files
<__dan__> or do what i said before and play with lxc
<qman__> sw_, there isn't a right one to do that in, because that's not the right way to do it
<__dan__> make him a little sandbox you can blow away and reinstate in 30 seconds :)
<sw_> qman__: how isn't it the right way? the Debian way is the right way?
<Monotoko> "here is the new username and password for this server, whatever you do, do not ever give it to the user *hint* *cough*"
<qman__> sw_, ubuntu is debian based and uses the debian way, therefore the right way to do things on ubuntu is the debian way
<Monotoko> knowing him, il set that up, and he'll bloody not do it again
<qman__> that's the supported way, the way the system is designed to be used
<qman__> you can do whatever you like, of course
<Monotoko> hmmm, on an unrelated topic
<Monotoko> where is service located? I thought it was /sbin/service
<sw_> qman__: is there a reason why Debian does it this way?
<__dan__> still got this megaraid problem if anyone fancies taking that on
<qman__> sw_, as mentioned, it's for manageability, if each site has its own file, it's easy to turn them on and off with the a2ensite/a2dissite commands, for both troubleshooting and general management purposes
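[Editor's note: if you do want a single file anyway, a sketch that still plugs into the a2ensite/a2dissite machinery; hostnames, paths, and the filename are made up.]

```shell
# /etc/apache2/sites-available/all-vhosts  (enable once: sudo a2ensite all-vhosts)
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName other.example.org
    DocumentRoot /var/www/other
</VirtualHost>
```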
<__dan__> although i compiled latest from kernel.org and disabled pcie aspm and so far badblocks hasnt made it puke :)
<__dan__> also does anyone know how to force text mode on bootup?
<__dan__> i did it once but i forget - this machine tries setting 1280x1024 and my screen doesnt display that properly
<qman__> nomodeset
<qman__> if you mean disabling KMS
<qman__> and you can configure in grub to use a different resolution at boot time
<__dan__> kind of - i just want standard text mode
<qman__> as opposed to?
<qman__> are you using X or a high resolution console?
<__dan__> no it's text only, doesnt boot a gui
<__dan__> but it sets the mode to 1280x1024
<__dan__> i just want it to leave things alone and use whatever it gets when it boots
<qman__> ok, then if you append nomodeset to the kernel line, it will boot to an old 80x25 console instead of a high resolution framebuffer
<__dan__> cool yeah thats exactly what i want, will try next time it dies :) thx
<qman__> KMS is really good in general, that's why it's default
<qman__> works on widescreen and everything
<qman__> but some monitors report resolutions that they don't work well or at all with
<__dan__> ah this is an old ati rage onboard, its a server
<__dan__> doubt it would know what to do with kms
<__dan__> and since i installed 3.8 kernel it doesnt display at all - on 3.5 ubuntu standard kernel it cut off the left hand side
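[Editor's note: to make qman__'s nomodeset suggestion stick across reboots rather than editing the kernel line by hand each boot (stock grub2 paths assumed):]

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"

# then regenerate the boot config:
sudo update-grub
```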
<sliddjur> How do I block an IP to connect to my server? I see somebody has tried to login via ssh over 9000 (!) times
<pmatulis> sliddjur: just make sure you're using keys for authentication
<sliddjur> pmatulis: i want to block an ip
<pmatulis> sliddjur: not much sense in doing that as IPs tend to change a lot.  but use iptables
<pmatulis> sliddjur: if not familiar with iptables (linux firewall) use ufw, it's a frontend that is more user-friendly
<pmatulis> !info ufw
<ubottu> ufw (source: ufw): program for managing a Netfilter firewall. In component main, is standard. Version 0.33-0ubuntu2.1 (quantal), package size 156 kB, installed size 714 kB
<sliddjur> i have ufw
<sliddjur> but it doesnt seem to work
<sliddjur> "ufw deny proto tcp from 59.6.39.170"
<sliddjur> still get tries from that ip
<sliddjur> I also did iptables -A INPUT -s 59.6.39.170 -j DROP
<pmatulis> sliddjur: i don't have the syntax memorized.  you're prolly missing something
<pmatulis> sliddjur: a quick search show 'sudo ufw deny from 192.168.0.1 to any port 22'
<pmatulis> sliddjur: also, you need to look at all the rules.  an earlier allow rule may be triggering (first-match-wins)
<shauno> I'd look into something like fail2ban so they never reach 9000
<sw_> sliddjur: pastebin your $ iptables --list, might be best to $ iptables --flush and start over though, as pmatulis said you might have an earlier ALLOW rule that's overriding it
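[Editor's note: the usual culprit is exactly the ordering pmatulis describes: an earlier broad "allow 22" rule matches first. Since ufw rules are first-match-wins, the deny has to be inserted above it (IP taken from the discussion above).]

```shell
sudo ufw status numbered                  # see where the allow rule sits
sudo ufw insert 1 deny from 59.6.39.170   # put the deny at the top of the list
```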
<metap0d> Hi everyone, I've just set up Ubuntu Server 12.10 on my home computer but I'm having issues setting a static IP. I followed some documentation I found online but I end up having no internet access. My googling tells me this is an issue related to DNS so I tried to add "nameserver 192.168.0.1" to use my router but that didn't fix it. Any ideas?
<metap0d> For the time being I've installed SSH server on the machine and set the network back to dhcp
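[Editor's note: a sketch of a 12.10-era static stanza for /etc/network/interfaces; the addresses are guesses based on the router mentioned above. The common gotcha is that hand-editing /etc/resolv.conf gets overwritten by resolvconf, so DNS belongs in the stanza via `dns-nameservers`.]

```shell
auto eth0
iface eth0 inet static
    address 192.168.0.100
    netmask 255.255.255.0
    gateway 192.168.0.1
    dns-nameservers 192.168.0.1
```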
<Monotoko> hey guys, I have a bit of a weird problem... my server is set up with BIND9, and the domain itself works
<Monotoko> but I can't query it: http://pastebin.com/r9v5LDbR
<Monotoko> here is the contents of the /var/lib/bind entry: http://pastebin.com/c0CZqHX0
<Monotoko> (very basic I know - maybe that's the problem here?)
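[Editor's note: without the pastebins this is guesswork, but a zone that "works" yet can't be queried often comes down to a missing NS/glue record or an allow-query restriction in named.conf. A minimal zone sketch for comparison; all names are hypothetical and the addresses are from the TEST-NET range.]

```shell
$TTL 86400
@   IN SOA ns1.example.com. admin.example.com. (
        2013031701 ; serial
        3600       ; refresh
        600        ; retry
        604800     ; expire
        86400 )    ; negative-cache TTL
@   IN NS  ns1.example.com.
ns1 IN A   192.0.2.53
@   IN A   192.0.2.80
```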
#ubuntu-server 2014-03-10
<Hornet> hi; having 'issues' installing grub on a server, namely, it won't install automatically and I can't see the wood for trees about how to install it manually
<Hornet> default guess is /sda, which fails
<Hornet> I made a separate boot partition, but that seems invisible to ash, although it messes with mount points so who really knows
<Hornet> this is 12.4 if it matters
<Hornet> and yes I want to use 13.10, but it has a critical bug which prevents me
<RoyK> it's 12.04 btw
<Hornet> yes him
<sheptard> Hornet: so grub loads partially?
<Hornet> it tries to install and fails
<Hornet> it guesses /sda
<RoyK> add a /dev to that
<Hornet> I cannot see the boot partition in df -h so I have no idea what it did to it
<Hornet> well sda is in the raid anyway
<RoyK> check /proc/partitions
<sheptard> Hornet: check fstab, sometimes /boot isn't mounted
<Hornet> so it should be part of sde
<sheptard> or yeah, /proc/partitions is better
<Hornet> hm, hang on
<Hornet> this is a computer in the other room to the bloody server
<sheptard> and I think the command would be sudo grub-install /dev/sde
<Hornet> so tennis ahoy
<Hornet> sde has multiple partitions too, but hopefully yes
<Hornet> something simple like that would be refreshing
<sheptard> or alternatively, play musical sata ports until your disk ends up in the port for sda, instead of sde
<Hornet> can't, server doesn't allow that
<sheptard> que
<RoyK> doesn't really matter the device name
<apb1963> any bootp fanatics here?  And perhaps just maybe intimate with dhcpdump?  I'm seeing things I just don't understand.  Like... why is my system broadcasting bootp requests?   Why is it sending them to a private address?  How is it that I'm actually getting responses?  And why won't it stop?  Here's a capture file: http://fpaste.org/83863/41241313/
<apb1963> ubuntu 12.04
<Hornet> jesus this is mad
<Hornet> looks like sde1 is the right target
<Hornet> but can't mount it
<Hornet> ash won't see grub-install as a command
<Hornet> nor apt-get
<Hornet> so ....
<Hornet> sheptard, disks have to stay where they are, there's only one physical location for a non-raid disk and it's in it
<Hornet> sde* is all the non-raid stuff
<Hornet> anybody? can't turn the server off until this is sorted
<Hornet> and will be in a worse state than bare metal if I have to start again tomorrow
<Hornet> afaict everything's set up other than grub
<apb1963> Hornet: use fdisk -l to see your partitions
<Hornet> apb1963: this is ash, not bash
<Hornet> so no fdisk
<Hornet> stuck at the last stage of installation
<apb1963> fdisk isn't builtin to the shell
<Hornet> apb1963: so can I use it another way?
<Hornet> atm it doesn't work, I don't know the details of what ash can and can't do
<apb1963> Hornet: I don't quite understand the question
<apb1963> Hornet: You typed fdisk -l and it said... ?
<Hornet> fdisk: not found
<Hornet> this isn't a live system, it's still in the installer
<Hornet> it has ash and presumably some things
<Hornet> seems grub-install isn't one of them though
<Hornet> so quite how I can install this system I don't know
<apb1963> so finish the install and then come back and install grub later
<Hornet> it won't boot!
<apb1963> why not?
<Hornet> no grub?
<apb1963> what happens?
<apb1963> is there an error?
<Hornet> I strongly expect so
<Hornet> if I restart and it breaks I lose hours of work
<apb1963> You don't know?
<apb1963> hmm
<Hornet> I can drive a car off of a cliff to see if it falls, yes
<Hornet> I dont know until I test it
<Hornet> but I have a very good idea
<apb1963> I'm pretty sure grub isn't required to boot
<apb1963> let me take a quick look...
<Hornet> this is a raid system
<Hornet> single OS, tbf
<Hornet> fs
<Hornet> in other room now
<Hornet> laptop didn't like that at all
<Hornet> and nor did the people leeching from the torrents on the external hdd
<Hornet> oh well
<Hornet> they can get their lain later
<apb1963> what does ls -l /dev/mapper say?
<Hornet> control iirc
<Hornet> one sec
<Hornet> this is mdadm raid, not bios or fakeraid
<apb1963> what kind of raid are you doing?  bios? software? fake?
<apb1963> ok
<Hornet> crw----- 1 root root
<Hornet> mdstat's at 16%, to give you an idea of how long this has been set up
<Hornet> and this is a 6tb raid 6 array
<Hornet> none of the ubu server docs mention grub within the context of raid
<apb1963> Hornet: suggest reading all the way through this: http://askubuntu.com/questions/43036/how-do-i-install-grub-on-a-raid-system-installation
<Hornet> yet that's a surefire breakage
<Hornet> I've googled it too, yes :\
<Hornet> that's a different raid setup aiui
<Hornet> thing is I have a partition that was meant to be used for /boot, but can't mount or select it
<Hornet> at the point where I may as well restart it and see what smokes
<Hornet> and repair if I can from a live usb stick with something that resembles a real os
<Hornet> otherwise start again tomorrow
<Hornet> fs
<apb1963> The gist of it that I got from a quick skim was you need to create a small (1 MB) partition flagged as bios_boot to install grub on GPT
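[Editor's note: from a rescue environment, the gist above translates to roughly this. Device names mirror the discussion; the md0 root array and partition number are assumptions, so check `parted /dev/sde print` first.]

```shell
parted /dev/sde mkpart biosgrub 1MiB 2MiB   # tiny GPT partition for grub's core image
parted /dev/sde set 1 bios_grub on          # partition number from 'parted print'
mount /dev/md0 /mnt                         # hypothetical root array
for d in /dev /proc /sys; do mount --bind $d /mnt$d; done
chroot /mnt grub-install /dev/sde
```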
<Hornet> the installer did something like that automatically
<Hornet> but seems not to have flagged it?
<Hornet> basically it's halfarsed it and left me high & dry
<Hornet> wonder if I should start again without grub
<Hornet> raid even
<Hornet> then create manually later
<Hornet> more trouble than it's worth
<Hornet> ridiculous system
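For reference, a sketch of the fix apb1963's link describes: a tiny partition flagged bios_grub, so GRUB can embed its core image on a GPT disk. This assumes a BIOS (non-UEFI) machine; the device and partition number are examples only.

```shell
# Sketch only -- assumes a GPT disk at /dev/sda and that the new
# partition ends up as number 1; adjust to match your actual layout.
sudo parted -s /dev/sda mkpart biosgrub 1MiB 2MiB   # tiny partition for GRUB's core image
sudo parted -s /dev/sda set 1 bios_grub on          # flag it so grub-install will embed there
sudo grub-install /dev/sda                          # install GRUB to the disk
sudo update-grub                                    # regenerate /boot/grub/grub.cfg
```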
<apb1963> things are always a mystery until you know the answer :)
<Hornet> well, ash won't let me install grub
<Hornet> so that's basically a golden 'screw you' surely
<Hornet> it's now saying I'll have to boot manually passing parameters
<Hornet> so here comes that cliff
<Hornet> sigh
<Hornet> restarting
<GeekDude> darthanubis: Hey. I reformatted (don't ask), and installed xubuntu server. I can mount drives, and am generally pleased with it. I swapped out firefox for midori and removed gimp, but other than that it's pretty nice
<Hornet> nothing there
<Hornet> wonderful
<Hornet> black screen, no dhcp allocated
<Hornet> dead box
<Hornet> told you
<darthanubis> GeekDude, glad it worked out
<Hornet> LOL
<Hornet> fuck me
<Hornet> the stick I was planning to use to recover it, seems to be 13.10
<Hornet> AKA usb keyboards die
<Hornet> which is all I have to use
<apb1963> !nomodeset
<ubottu> A common kernel (boot)parameter is nomodeset, which is needed for some graphic cards that otherwise boot into a black screen or show corrupted splash screen. See http://ubuntuforums.org/showthread.php?t=1613132 on how to use this parameter
<Hornet> jesus
<Hornet> it would have been on the network
<apb1963> I was going to ask if you had gdisk or gparted... but I guess I took too long :/
<Hornet> no, it's the installer
<Hornet> gnome anything is unusable
<Hornet> so now I'll have to start all over again without raid I think
<Hornet> and toast all my partitions
<Hornet> last chance, will see if the 12.04 installer has a usable recovery anything
<apb1963> gdisk has nothing to do with gnome
<Hornet> hm, what am I thinking of then
<Hornet> anyway, sod all was available
<Hornet> so I highly doubt it
<apb1963> nor does gparted for that matter
<Hornet> in rescue mode now
<Hornet> need laptop charger, brb
<apb1963> you could have also tried cfdisk
<apb1963> which is curses based
<apb1963> did you try nomodeset?
<Hornet> well, in rescue mode atm
<Hornet> I had no opportunity to do anything
<Hornet> boots to nothing
<apb1963> !nomodeset | hornet
<ubottu> hornet: A common kernel (boot)parameter is nomodeset, which is needed for some graphic cards that otherwise boot into a black screen or show corrupted splash screen. See http://ubuntuforums.org/showthread.php?t=1613132 on how to use this parameter
<Hornet> not on network either
<Hornet> how could I add a kernel parameter without grub or seeing what I'm doing?
<apb1963> well, that is in fact a good question
<apb1963> the truth is out there
<Hornet> anyway, rescue mode can't assemble the raid
<Hornet> tl;dr: all fucked, abort abort abort
<Hornet> will restart from bare metal without raid
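A sketch of manually assembling an mdadm array from a rescue or live shell, for the case where the installer's rescue mode fails to do it; the member device names are examples only.

```shell
# From a live/rescue shell. /dev/sd[b-g]1 below are example members.
sudo mdadm --assemble --scan                  # try auto-assembly from on-disk superblocks
cat /proc/mdstat                              # check array state and resync progress
sudo mdadm --examine /dev/sdb1                # inspect one member if --scan finds nothing
sudo mdadm --assemble /dev/md0 /dev/sd[b-g]1  # explicit assembly as a fallback
```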
<apb1963> don't know if you know it, but there's also a #ubuntu which is more active at the moment... maybe someone there can help
<Hornet> it's too active
<Hornet> it's drowned in noise
<apb1963> just add to the noise... if someone sees your question... they'll try to help
<Hornet> thanks for the assistance though, is much appreciated
<Hornet> heh
<apb1963> sorry I didn't know the answer... lots of info out there... but it does take quite a bit of sifting.
<Hornet> apb1963: np, thanks all the same, and partly that's the problem
<Hornet> so many similar problems with inapplicable solutions
<Hornet> that give partial anecdote-based answers
<Hornet> ubuntu rescue mode is messing around atm
<Hornet> just as well I have nothing to lose, in theory
<Hornet> otherwise I'd be concerned
<Hornet> given my luck it'll blow the bloody fusebox next
<apb1963> yeah... unfortunately this whole OS is a bit of trial & error.... no two systems are exactly alike.  Different combinations of hardware & software... lots of choices leads to a tower of babble.    I've got my own frustrations :/
<Hornet> I'd be happier if I were able to use a newer version
<Hornet> 2 year old software isn't great
<Hornet> 13.04 has a critical bug though, so I can't do anything with it
<Hornet> 12.10 would mean two upgrade hops and is too old for support anyway
<Hornet> so weirdly, 12.04 is the only option
<apb1963> Yeah, 12.04 is LTS "can't go wrong" with that solution.
<apb1963> You have a choice of "stability" vs. latest & greatest features with all the bugs included.  It's a tradeoff.
<apb1963> I went with stability.  Not sure I'm happy about it.
<Hornet> this is why I can't use debian any more
<Hornet> solid yes, but by the time you use anything it's basically half a decade old
<Hornet> only other OS I could consider is fedora
<apb1963> yeah, I hear ya.
<Hornet> admittedly a lot better than it was
<Hornet> but still, hardly my first port of call
<apb1963> Sorry but... I have a phone call to deal with.. wish you luck bud
<Hornet> np, thanks again
 * apb1963 exits stage left
<Hornet> 'rescue mode' seems to be 'install everything again' mode
<Hornet> oh well
 * Hornet ankis
<Hornet> okay this is mad
<Hornet> when I reboot it with the usb stick in, it seems to work as it should, but I can't log in
<Hornet> without it, it doesn't seem to do anything
<Hornet> usb stick is just the unetbootin installer for 12.04
<Hornet> wonder if it's somehow installed grub on the usb stick
<Hornet> this is mad
<Hornet> right so, it seems that when I install ubuntu server, it latches onto the usb stick and sticks grub on there
<Hornet> manual reinstallation of grub fails
<Hornet> looks like it works, fstab looks fine too, but it doesn't
<Hornet> seems my device names shuffle about sometimes
<jamespage> rbasak, hey - what did you think about my virtual-mysql-* email?
<rbasak> jamespage: I've been mulling over it.
<rbasak> jamespage: it sounds good to me, but I'm not sure it's enough. What bothers me is the postrm.
<rbasak> jamespage: AIUI, as long as we have a postrm, there are two separate sources of truth about who owns what, and this seems suboptimal to me. This is limited to my understanding though. I had to refer to the policy manual.
<lordievader> Good morning.
<noobb2> http://ubuntuforums.org/showthread.php?t=2210294
<jamespage> rbasak, https://bugs.launchpad.net/ubuntu/+source/percona-xtradb-cluster-5.5/+bug/1290331
<jamespage> urgh
<rbasak> jamespage: any opinions on how to fix that?
<jamespage> rbasak, thinking about it now
<rbasak> jamespage: it seems like a really fundamental question to me. AFAICS, it doesn't make sense for the alternatives to share the same config file path.
<rbasak> Perhaps we need to hack something together for Trusty though.
<jamespage> rbasak, I didn't pick this up in testing because I'm using a charm which writes the entire my.cnf before installing anything
<jamespage> rbasak, I was just testing the virtual-* stuff and noticed this...
 * jamespage sighs
<rbasak> Well, that's it. Charms can handle this. I don't think it makes sense for packaging to magically handle switching out of alternatives but maintaining the same DB and config directories.
<rbasak> That makes the assumption that they won't diverge, but being forks, clearly they will.
<jamespage> rbasak, I was thinking we could switch out that bit into a conf.d snippet for mysql-5.5 + others
<jamespage> but on a switch in/out that file would not be removed as its config right?
<rbasak> jamespage: I think that would be a problem, yes. Since we permit packages to be removed but not purged, and configuration files remain in that state, yet we expect a swap-out to work in that case.
<jamespage> rbasak, I guess the alternative is to ship with pxc disabled by default
<jamespage> with a README
<jamespage> looking at the maintainer script this should be OK
<rbasak> jamespage: I can't think of a better option right now.
<jamespage> rbasak, not sure that will work - the maintainer script want to use mysql directly if offline to manipulate the database
<jamespage> that fails as well...
<jamespage> rbasak, maybe patching pxc to ignore those values might work
 * jamespage looks
<jamespage> rbasak, going to think about it more over lunch
<tomixxx3> hi, my ubuntu 12.04.4 lts run for several days, today i looked at it and it prints me the following command line error: "ata1: exception Emask 0x10 SAct 0x0 SErr 0x4090000 active 0xe frozen. ata1: irq_stat 0x00400040, connection status changed, ata1: SError: { PHYRdyChg10B8B DevExch }
<tomixxx3> however, after hitting "enter" the system still react and i can execute commands
<tomixxx3> what is a good energy-saving program on ubuntu server 12.04.04? because i don't want ubuntu to go into any energy-saving mode, it should always run
<jamespage> rbasak, I'm a dunderhead
<jamespage> /etc/mysql/conf.d/pxc.cnf (set query_cache_* to 0)
<jamespage> #OK
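A sketch of the conf.d snippet jamespage describes, disabling the query cache for Percona XtraDB Cluster compatibility; the exact contents are illustrative.

```
# /etc/mysql/conf.d/pxc.cnf -- illustrative snippet
[mysqld]
query_cache_size = 0
query_cache_type = 0
```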
<Walex2> tomixxx3: a "program" consumes energy :-)
<Walex2> tomixxx3: what you want is a power-saving configurator. The default power-saving modules in Ubuntu seem more or less reasonable.
<Walex2> tomixxx3: as to that ATA error, that is a hw error in the bus to the disk. That it worked still means that the bus to the disk was reset and that worked.
<tomixxx3> walex2: i have already installed "caffeine"
<tomixxx3> Walex2: kk
<rbasak> jamespage: will mysql-5.5 (for example) pick that up? What if you install percona-xtradb-cluster-5.5, then remove it and install mysql-server.5.5?
<jamespage> rbasak, it remains
<rbasak> jamespage: so still broken but less broken?
<jamespage> rbasak, the pxc config is valid for mysql-5.5 - its just not the normal defaults
<jamespage> rbasak, yes - less broken
<Walex2> tomixxx3: the error may be just a random thing, or it may be a symptom of a failing disk.
<jamespage> rbasak, I'm seeing this as the least worst option right now
<rbasak> jamespage: agreed
<Walex2> tomixxx3: for power management, look at packages 'powermanagement-interface' and 'pm-utils'.
<Walex2> tomixxx3: I have written my own script BTW here: http://sabi.co.uk/#almostSources
<tomixxx3> walex2: i need a mechanism, so that my ubuntu server 12.04.4 together with a deployed OS-GUI do NOT change to any kind of power saving mode.
<gaughen> jamespage, rbasak, smoser - just wanted to make sure you all had reviewed the vUDS schedule.  http://summit.ubuntu.com/uds-1403/track/servercloud/
<jamespage> gaughen, I just scanned through yes
<smoser> gaughen, looks reasonable.
<smoser> sorting is funny.
<smoser> oh. no its not.
<moparisthebest> anyone know what badblocks output means, there are 3 numbers before errors, like so:
<moparisthebest> 17.21% done, 33:10 elapsed. (0/83943044/0 errors)
<moparisthebest> what do those 3 numbers represent? (I'm assuming regardless this drive is dead :))
<genii> If someone could clarify... can procmail work on outgoing or only incoming?
<mardraum> procmail is for delivery
<genii> mardraum: OK thanks.
<mardraum> generally you handoff delivery to procmail
<mardraum> if you need it for something, that is
<genii>  We want to fetch GPG keys for recipients and add them to our local database the first time they are sent something ( or we receive something from them). For incoming procmail works fine for this, outgoing seems to be another matter.
<mardraum> sounds like fun
<mardraum> which MTA?
<genii> Postfix
<mardraum> surprised nobody has done it before with postfix :P
<genii> Yes, it seems odd that nothing simple like procmail but for outgoing seems to be around. I looked at altermime but it seems more for rewriting/appending than for calling some arbitrary script
<genii> I wonder how stuff like ClamAV do it
<andol> genii: Using procmail to filter outgoing mail sounds like something which is possible, but non-trivial.
<jrwren> they likely have.
<ws2k3__> Hello, i have a question: i have 2 servers doing the same thing with the same stats. The server on ubuntu 11.04 has a load of 0.05 and the ubuntu 12.04 one has a load of 1.50. Has something changed in how load is measured?
<andol> genii: Writing something custom, and use milter to hook it in?
<genii> Hm.
<andol> For postfix that would be smtpd_milters/non_smtpd_milters
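A sketch of how a custom milter would be hooked into Postfix as andol suggests; the socket path is hypothetical and the milter itself would still have to be written.

```
# /etc/postfix/main.cf -- illustrative; the socket path is made up
smtpd_milters = unix:/var/run/keyfetch/milter.sock
non_smtpd_milters = $smtpd_milters
milter_default_action = accept   # don't block mail if the milter is down
```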
<jdo_dk> I have an ubuntu server, where postfix maillog ends into syslog. System is running rsyslog and ubuntu 13.10
<genii> andol: Thanks for the tip, looking into it now
<andol> jdo_dk: See /etc/rsyslog.d/50-default.conf
<jdo_dk> andol: mail.*                          -/var/log/mail.log
<slowcon> hey guys, just installed ubuntu server 12.04 and installed desktop. does anyone ever get any kickbacks where ubuntu just throws you back to the login screen. ubuntu software center will close automatically as well after 30s of being opened
<jrwren> genii: what specifically would you do on outgoing?
<Walex2> slowcon: that looks like some hardware issue causing the X windows to crash and/or applications to crash.
<genii> jrwren: Grab the recipient address, then basically: look for keys.domain.tld or keyserver.domain.tld and if found, fetch the recipient's public key from there. If not found there, fall back to public keyservers in order of pgp.mit.edu then keyserver1.pgp.com and keyserver2.pgp.com ( with ldapsearch). If a result is found, add it to the local ldap database under recipients so that it could be looked up locally
<Walex2> slowcon: the hardware issues could be as simple as insufficient RAm triggering the OOM killer
<Walex2> slowcon: or it could be an issue with the graphics chip driver being buggy.
<jrwren> genii: so you want to do this action on every email which traverses your email server?
<Walex2> ws2k3__: probably the measure has not changed, but most likely there are subtle differences in the daemons running.
<genii> jrwren: At least on first time they are used, yes. And of course not for local-local because these are already in the database
<jrwren> genii: i agree with the milter suggestion.
<jrwren> genii: sorry for no more help than that
<ws2k3__> Walex2 well what i see is that the rest of the values in top are the same, the %us and %sy are exactly the same
<ws2k3__> and it looks like the one server is not any busier than the other, the load value is just a bit higher
 * genii goes back to investigating
<tomixxx3> hi, it seems i cannot use environment-variables like "export OS_USERNAME=blabla" on my ubuntu deployment (12.04.04)
<tomixxx3> because whenever i try to execute commands which should use these variables, i get a response indicating that OS_USERNAME for example is missing
<pmatulis3> tomixxx3: pastebin all the exact details.  commands and output
<tomixxx3> command is "keystone token-get" vs "keystone --os-username=admin --os-password=ADMIN_PASS --os-auth-url=http://controller:35357/v2.0 token-get"
<pmatulis3> tomixxx3: that's a big jump from your earlier comment.  simplify things to convince yourself you have the basics working
<tomixxx3> pmatulis: content of openrc.sh: http://pastebin.ubuntu.com/7068279/
<pmatulis3> tomixxx3: but it sounds like a openstack/keystone problem and not a ubuntu server problem
<tomixxx3> pmatulis3: kk, ty
<jrwren> tomixxx3: did you source openrc.sh or are you running it?
<tomixxx3> jrwren: yes, i have executed "source openrc.sh"
<jrwren> tomixxx3: ok.
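The distinction jrwren is probing matters: a script executed as a child process cannot export variables into its parent shell, which is why `source` (or `.`) is required for files like openrc.sh. A minimal self-contained demonstration using a throwaway file (the variable name mirrors the conversation; the value is an example):

```shell
# Demonstrates why "source" is required: a child shell cannot export
# variables back into its parent.
cat > /tmp/demo-rc.sh <<'EOF'
export OS_USERNAME=admin
EOF

sh /tmp/demo-rc.sh                       # runs in a child shell; no effect here
echo "after sh: '${OS_USERNAME:-}'"      # still empty (unless it was already set)

. /tmp/demo-rc.sh                        # "." is the portable spelling of "source"
echo "after source: '${OS_USERNAME:-}'"  # prints: after source: 'admin'
```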
<jamespage> coreycb, if you fancy something to hack on whilst at your sprint:
<jamespage> https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1290423
<Walex2> ws2k3__: very small differences can give slightly different load averages as they are rounded...
<coreycb> jamespage, thanks I'll take a look
<coreycb> jamespage, I need someone to review my "fix" for 1288344
<jamespage> coreycb, merge proposal?
<coreycb> jamespage, yes, , for keystone
<jamespage> coreycb, keystone charm - for bug 1288344
<jamespage> yes
<coreycb> jamespage, that's right
<jamespage> coreycb, ok - once you have something up ping me and I'll review
<coreycb> jamespage, thanks
<coreycb> jamespage, it's up at https://code.launchpad.net/~openstack-charmers/charms/precise/keystone/trunk/+activereviews
<jamespage> coreycb, looks ok
<jamespage> +1
<coreycb> jamespage, ok wasn't sure if that was the right approach. but it's done elsewhere.   thanks.
<jamespage> coreycb, ivoks and I redux'ed that charm completely week before last - but that work has not landed yet so your change is appropriate.
<jamespage> coreycb, the right way would be to have a post-start section in the upstart config that could tell when keystone was ready
<coreycb> jamespage, excellent that's good to know
<genii> Hm, looks like it may be possible to do outbound filtering with header_checks
 * genii goes back to reading
<smoser> jrwren, your cloud-init change is in -proposed now
<smoser> https://lists.ubuntu.com/archives/precise-changes/2014-March/021465.html
<jrwren> thanks.
<jamespage> hallyn, congrats btw
<hallyn> thanks :)
<rbasak> jamespage: do we need to remove src:samba4 from universe? It hasn't had an upload in Trusty, and src:samba seems to have overtaken it.
<jamespage> rbasak, good spot - yes
<rbasak> Looks like it has been removed from sid/jessie
<rbasak> I'll file a bug.
 * rbasak has filed bug 1290448
<rbasak> https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1290448
 * rbasak wonders where the bot went. MIA?
<rbasak> soren: do you still run the bot (I think it was you?)
<rbasak> Or was that just the new bug filed bot?
 * rbasak shrugs
<tarvid> Ubuntu 13.04 do-release-upgrade -d
<tarvid> Checking for a new Ubuntu release
<tarvid> No new release found
<RoyK> huh?
<tarvid> can't do-release-upgrade from 13.04
<RoyK> what does /etc/update-manager/release-upgrades have to say about Prompt?
<ogra_> most likely because you can't go from 13.04 to 14.04 straight ...
<tarvid> can't get to 13.10 either
<ogra_> and 13.10 (your actual upgrade option) is not a devel release
<tarvid> Prompt=normal
<tarvid> so how do I do a simple upgrade to 13.10
<ogra_> just do-release-upgrade doesnt work ?
<ogra_> (no -d or any other option)
<tarvid> do-release-upgrade
<tarvid> Checking for a new Ubuntu release
<tarvid> No new release found
<RoyK> tarvid: did you check that file?
<RoyK> /etc/update-manager/release-upgrades
<RoyK> ah
<RoyK> yes
<RoyK> sorry
<RoyK> weird
<RoyK> tarvid: what does lsb_release -a tell you?
<tarvid> root@horace:/etc/apt# lsb_release -a
<tarvid> No LSB modules are available.
<tarvid> Distributor ID:	Ubuntu
<tarvid> Description:	Ubuntu 13.04
<tarvid> Release:	13.04
<tarvid> Codename:	raring
<ogra_> are you sure your system is up to date with all updates and security updates ?
<tarvid> that doesn't work either
<ogra_> that doesnt sound like an issue with update-manager then though
<RoyK> !pastebin | tarvid
<ogra_> you always need to apply all updates first before running do-release-upgrade ...
<ubottu> tarvid: For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<tarvid> pretty lame for a 3-line post
<RoyK> 6
<tarvid> my dns server screwed up
<howudodat> am having a very unusual problem.  installed ubuntu server, two nics: em1 wired ethernet, default route.  wlan0 wireless.  em1 is responding with its MAC for arp requests for wlan0.  I have no bridge configed, nor proxy arp as far as I can tell.  em1=192.168.1.4/24 and wlan0=192.168.1.3/24 pastebin config http://pastebin.com/bEiwGGrE
<ball> Does Ubuntu work nicely on a Lenovo ThinkServer RD340?
<RoyK> ball: no idea - try
<ball> RoyK: I'll have to buy one ;-)
<ianward> Does anyone know when us-east-1.ec2.archive.ubuntu.com stopped going through S3?  I read it started using S3 here https://lists.ubuntu.com/archives/ubuntu-cloud-announce/2012-June/000026.html but it looks like it no longer uses S3 directly.
<sarnold> ianward: recently, a few weeks ago
<ianward> sarnold: is there somewhere i can read more about this?
<sarnold> ianward: I'm not sure what the root cause of the errors was, but the s3-based mirrors were unreliable and appeared to randomly deny package downloads.
<sarnold> ianward: sorry, no url handy, I read it on irc [tm]  :)
<ianward> sarnold: ok, thanks
<sarnold> ugh, the irc logs are organized by date and then channel...
<sarnold> ianward: sorry, I'm not finding the references again. :( I believe I got the impression that the mirrors would be back "soon".
<ianward> sarnold: ok, thanks for checking. good to know
<gQuigs> jamespage:  thoughts on the networking blueprint?  or discussion?  https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2014-March/014877.html
<gQuigs> it would still need approval
<gQuigs> ^open to any track leads, btw
#ubuntu-server 2014-03-11
<savid> I have a file in /etc/cron.d that does not seem to be running. I put a dummy job at the top, and the dummy job does not execute. The same line in a separate file works fine. How can I debug this? Can't seem to find a logfile for cron...
<raub> savid: I think you should see an entry in syslog when crontab finds that job
<raub> savid: can you pastebin your file?
<savid> raub: ok, I'll check syslog. The crontab file is rather long, lots of maintenance jobs in there.
<savid> Ah, wrong file owner :-)
<raub> savid: another mystery solved ;)
<raub> savid: I meant your file in /etc/cron.d, not in crontab
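For reference, the shape of a working /etc/cron.d file; the job name and paths here are made up. Debian's cron silently ignores files in this directory that are not owned by root, that are group/world-writable, or whose filenames contain dots, and unlike a user crontab each line carries an extra user field.

```
# /etc/cron.d/dummy-job -- illustrative; must be owned root:root, mode 644,
# filename without dots. Note the extra "user" field before the command.
# m  h dom mon dow  user  command
*/5  *  *   *   *   root  /usr/local/bin/dummy-job >> /var/log/dummy-job.log 2>&1
```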
<erixNICK> hey all
<sheptard> HI
<erixNICK>  having trouble getting samba to run on a 12.04 LTS server.  I see that there are bugs posted, but haven't seen a fix.  Can someone please shed some light on this for me
<sheptard> what error is samba giving
<erixNICK> one sec will pull up
<erixNICK> smbd/server.c:1107(main)
<erixNICK>   standard input is not a socket, assuming -D option
<sheptard> thats normal
<erixNICK> it seems like I cant set up even a basic guest share
<erixNICK> sheptard,  even if I ignore error I cannot get a guest share up
<sheptard> so samba is listening, and you can connect to it, you just can't see any shares?
<erixNICK> from windows I get an access denied
<erixNICK> http://pastebin.com/nbnndKuE is what I have for the share
<sheptard> can you ls -al /shared
<sarnold> hey sheptard :)
<erixNICK> yeh
<erixNICK> i have shared 777'd for testing purpose
<erixNICK> "/shared" ^
<sheptard> hi sarnold
<sheptard> erixNICK: Hum
<erixNICK> it looks like out of the box it won't allow guest shares... i'm also on a win8 box (unfortunately)
<sheptard> if you do something like
<sheptard> smbpasswd -a test
<sheptard> then try to connect as test, does that work?
<erixNICK> k
<sarnold> erixNICK: your ls -al /shared  just returns "/shared"? no . or .. directory entries? o_O
<erixNICK> it returns a directory list
<erixNICK> also i am able to change user "test" password
<sheptard> erixNICK: can you connect to the samba share as test?
<sheptard> if you have test as an allowed user
<erixNICK> not sure how I would do that
<erixNICK> im trying to just do a net view \\liuux_box from a dos window
<erixNICK> even set user.group to nobody for /shared
<erixNICK> looks more like it doesn't know what to do with windows auth
<erixNICK> unix password sync = Yes <- would that affect a guest share
<erixNICK> looks like a lot of people are having this problem when they changed from 10.04
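For reference, one commonly working guest-share pattern on the Samba 3.x shipped with 12.04. The share name and path mirror the conversation; the rest is illustrative. The `map to guest` line is what lets a failed Windows login fall through to the guest account, which is the usual sticking point with Windows 8 clients.

```
# smb.conf -- illustrative guest share for Samba 3.x (Ubuntu 12.04)
[global]
   security = user
   map to guest = Bad User

[shared]
   path = /shared
   guest ok = yes
   read only = no
```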
<SunkSullen> anyone alive? having trouble
<SunkSullen> with LAMP and UBuntu
<SunkSullen> server
<cfhowlett> !lamp
<ubottu> LAMP is an acronym for Linux-Apache-MySQL-PHP. However, the term is often used for setups using alternative but different software, such as Perl or Python instead of PHP, and Postgres instead of MySQL. For help with setting up LAMP on Ubuntu, see  https://help.ubuntu.com/community/ApacheMySQLPHP - See also the Server CD installation process.
<SunkSullen> okay but...
<SunkSullen> the server was setup with root
<SunkSullen> I added an account
<SunkSullen> added to sudoers file
<SunkSullen> and now I want to change document root
<SunkSullen> ugh
<sarnold> yes?
<SunkSullen> sorry, so how do I make ~/public_html of the account I made (which has sudo permission) my document root instead of /var/www
<SunkSullen> ?
<sarnold> edit your apache configuration files to set the DocumentRoot to your new location
<sarnold> it'll be somewhere in /etc/apache2/ if I'm remembering it correctly
<SunkSullen> k
<SunkSullen> and also
<SunkSullen> THANKS
<SunkSullen> first off
<SunkSullen> nevermind
<SunkSullen> I can find that out my own
<SunkSullen> the correct permissions of the folder for /home/cameron/public_html
<sarnold> the directory will need to be readable and executable by the apache process; so you'll need to pick permissions that grant the www-data user the permissions it needs while still meeting the security needs of the rest of the system
<sarnold> SunkSullen: that might mean setting the directory to e.g. 755, so that everyone can read and execute the directory, or it might mean you need to set group ownership of the directory to www-data and then use e.g. 750
<SunkSullen> sarnold: ah confusing lol sorry
<sarnold> SunkSullen: unix permissions are amazingly concise and sharp. it takes a few years familiarity with them to appreciate how simultaneously flexible and concise they are
<sarnold> before then of course it's just baffling and confusing
<SunkSullen> hehe yea
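Pulling sarnold's advice together: an illustrative site configuration for the Apache 2.2 shipped with Ubuntu 12.04, using the paths from the conversation. The home and public_html directories also need to be traversable by www-data, e.g. `chmod 755 /home/cameron /home/cameron/public_html`.

```
# /etc/apache2/sites-available/default -- illustrative changes (Apache 2.2 syntax)
<VirtualHost *:80>
    DocumentRoot /home/cameron/public_html
    <Directory /home/cameron/public_html>
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```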
<soren> rbasak: uvirtbot?
<jdo_dk> I have a postfix, which logs maillog to syslog. How to split the log into mail.log ?
<jdo_dk> Ubuntu 13.10 and running rsyslog
<ikonia> jdo_dk: look at the syslog.conf and you should see the priorities/levels which determain what goes to what file
<jdo_dk> ikonia: http://privatepaste.com/c8c65eba26
<jdo_dk> ikonia: As i can see in the /etc/rsyslog/50-default.conf, mail.* already goes to: /var/log/mail.log
<ikonia> jdo_dk: what's the actual line from your syslog that you want to go into mail
<ikonia> it may not actually be a mail alert
<ikonia> (but look like a mail alert)
<jdo_dk> ikonia: http://privatepaste.com/40a7c0622f
<ikonia> that's probably not going to be classed as "mail"
<ikonia> thats an application/daemon reporting
<jdo_dk> Ok
<jdo_dk> But i would "expect" to see those "normal" log entries in mail.log
<jdo_dk> Should i try to install syslog instead of rsyslog ?
<ikonia> which normal ones ?
<ikonia> can you give me an example of a normal one ?
<jdo_dk> Will find another ubuntu server.
<jdo_dk> just a sec
<jdo_dk> Ikonia: mail.log is empty on all "my" servers...
<ikonia> jdo_dk: yes, I'd expect that
<ikonia> unless you have mail events
<jdo_dk> ikonia: http://privatepaste.com/af6174fe9c
<jdo_dk> I would expect mail.log to look like that file...
<ikonia> nah,
<ikonia> that's postfix
<ikonia> not mail
<jdo_dk> I have postfix installed.
<ikonia> postfix is an "application" from the point of view of the syslog
<jdo_dk> So i need to configure "postfix" in the syslog conf ?
<ikonia> you need to configure postfix to log to the correct places
<ikonia> or find out the "event" that syslog will monitor from postfix. I did this years ago with exim and ksyslogd (so it will be a bit different now), but it works out easier to configure the application to log correctly, rather than configure the syslog to try to manage its events
<jdo_dk> Ok
<jdo_dk> I will try. Thanks mate.
<jdo_dk> ikonia: http://ubuntuforums.org/archive/index.php/t-1893739.html
<jdo_dk> stop rsyslog, chown / chmod start syslog solved the logging...
<ikonia> really ?
<ikonia> it picks up those postfix events as "mail" ?
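For reference, the relevant stock rsyslog rules on Ubuntu; Postfix logs through syslog with facility `mail`, so these normally suffice once mail events occur. The commented line shows one way to keep mail entries out of /var/log/syslog as well.

```
# /etc/rsyslog.d/50-default.conf (excerpt) -- routes the "mail" facility
mail.*     -/var/log/mail.log
mail.err   /var/log/mail.err
# To also exclude mail from the catch-all syslog file, add mail.none there:
# *.*;auth,authpriv.none,mail.none   -/var/log/syslog
```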
<marcoceppi> hey rbasak, thanks for the info for the charm-tools thing. I have no idea why the packaging branch is ahead of what's in precise. I just need to update the bits about recommends, etc. I'll get a public bug filed from the private one in a few though.
<marcoceppi> How would i go about getting the right packaging branch for precise?
<rbasak> marcoceppi: I'm confused about that too. Yeah just updating the recommends is fine. With just an SRU bug I'm happy to upload that.
<rbasak> marcoceppi: I'm not sure how you can submit a UDD merge proposal based off the version that's in precise, but don't worry about that too much. I'll happily upload from a debdiff, or if you just update that MP without worry what it's based on, I'll fix it before upload.
<marcoceppi> rbasak: the entire charm-tools packaging is a giant bag of two headed snakes
<marcoceppi> so this somehow doesn't surprise me
<rbasak> marcoceppi: BTW, is this fixed in Trusty? We need that before uploading an SRU, but I presume it is?
<marcoceppi> rbasak: yes, it was fixed with charm-tools 1.0.0
<marcoceppi> but 1.0.0 broke a package which is why there isn't a backport request
<rbasak> OK
<marcoceppi> rbasak: here's the public bug, lp:1182905
<rbasak> marcoceppi: great! Are you doing the SRU justification (are you familiar with that?)
<marcoceppi> rbasak: I'm not, this is a completely new process! I'm completely interested in learning though
 * marcoceppi hunts down a wiki page
<rbasak> marcoceppi: https://wiki.ubuntu.com/StableReleaseUpdates#Procedure
<rbasak> marcoceppi: it's for the SRU team, who will need to review before accepting any upload. And also to leave a trail for anybody who receives the update and wants to know why.
<rbasak> Or wants to know why everything broke, etc :)
<jdo_dk> ikonia: Yeah. Mail.log just works... So no need to configure syslog or postfix further...
<ikonia> jdo_dk: that is very surprising, but well done
<jdo_dk> ikonia: ;)
<marcoceppi> rbasak: Okay, I updated #1182905 with SRU template. It mentions uploading the package to release-proposed, but I'm not sure how or if I should do that
<rbasak> marcoceppi: OK, thanks. I'll take a look in a bit - a few too many balls in the air right now :-/
<marcoceppi> rbasak: np np! I get that completely, thanks for the help thus far! :D
<stetho> I have a hard disk on a server that's filling at quite a rate. I can see with iotop that it's a remote user doing something over nfs but I can't figure out which user. How can I figure out which files are growing?
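One way to approach stetho's question is to look for large, recently modified files under the exported tree. A self-contained sketch, demonstrated in a throwaway directory; on the real server the path would be the NFS export.

```shell
# Find files modified in the last 10 minutes above a size threshold.
# Demo uses a scratch directory; point find at your NFS export instead.
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/growing.dat" bs=1M count=2 2>/dev/null

find "$demo" -type f -mmin -10 -size +1M -exec ls -lh {} \;
# On the real server, something like:
#   find /srv/nfs -xdev -type f -mmin -10 -size +100M -exec ls -lh {} +
```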
<rbasak> marcoceppi: juju-core doesn't exist in precise. How would you feel about just dropping the Recommends: juju to Suggests: juju?
<marcoceppi> rbasak: uh, yeah that works
<marcoceppi> rbasak: I realized that while writing the SRU
<rbasak> marcoceppi: charm-helper-sh can still suggest juju, right? I don't see why that would break anything.
<rbasak> Sorry I'm confusing myself. Let me pastebin you a debdiff in a moment.
<marcoceppi> rbasak: charm-helper-sh is installed solely server side, and doesn't need juju at all, in fact it's not even really associated with juju
<moparisthebest> is anything in /var/log/ important? and if so, can I mount it on tmpfs and setup logrotate to just delete things when they get so big?
<rbasak> marcoceppi: my proposed diff: http://paste.ubuntu.com/7073884/; full new control file: http://paste.ubuntu.com/7073885/
<marcoceppi> rbasak: I suppose suggests is fine for charm-helpers
<marcoceppi> lgtm
<marcoceppi> I should just stick with mvp in the future
<jhobbs> hallyn: should containers be able to mount ext2/3/4 filesystems in loopback mode in trusty?
<jhobbs> hallyn: looks like i still need to add some aa_profile/cgroup config to my lxc conf to get it to work
<smoser> jhobbs, no. not by default.
<smoser> i think there is a config that allows it though eplicitly
<smoser> jhobbs, /etc/apparmor.d/lxc/lxc-default-with-mounting
<jhobbs> smoser: ok - cool; i'm not really familiar with app-armor; how do i make that apply to new containers created via juju-local?
<smoser> you can set some value in /etc/lxc/default.conf
<smoser> to set the default app armor profile
<smoser> but i dont have that handy at the moment
<jhobbs> ah ok cool, i'll chase it down; thanks
<smoser>  lxc.aa_profile
<smoser> (see man lxc.container.conf)
<jhobbs> so i have some settings in there already that apply to containers that i create manually; i set the aa_profile to unconfied and change some cgroup settings
<jhobbs> but those don't seem to apply to the containers that juju-local created
<smoser> they wont.
<smoser> only new ones.
<smoser> they get copied.
<smoser> i got to run.
<jhobbs> yes, i had the settings there prior to creating the juju-local containers
<jhobbs> ok
<jhobbs> thanks for the help
<hallyn> jhobbs: if you need it you need it, but of course it is disallowed bc there *is* danger in allowing it,
<hallyn> so i recommend against enabling by default
<jhobbs> hallyn: ok
<hallyn> (sorry, had some hubbub around these parts)
<jhobbs> hallyn: that's cool; i just wanted to know if the behavior i was getting matched what was expected
<jamespage> rbasak, are you covering the HWE eol meeting?
<rbasak> jamespage: yes
<jamespage> great
<jamespage> listening in whilst I wait for mysql-xx to build
<zzxc> ahhhhh so this is why I'm always getting mentions from this channel.
<irv> howdy, i installed 13.10 to an hp dl320e g8, when the setup ran, (it setup off a bootable USB), grub installed to /dev/sda (which was the USB drive). I want to install grub to the HD. I've since rebooted the machine (it will only boot to ubuntu with the USB in there) and did sudo grub-install /dev/sda (/dev/sda is the HD once the setup rebooted), says no errors. reboot and the same issue.
<irv> any ideas on how i can verify that grub is indeed installed to the MBR on the HD
<irv> it's a SATA drive single volume on a b120i raid controller
<irv> when i boot without the USB in it says non system disk or disk error for C
<markthomas> irv, it sounds as though it still is not installed.  Are you certain that once the system is booted that your SATA drive is /dev/sda?
<irv> when i run df -h when it's booted, i see 457 gb or something on /dev/sda1
<irv> there's only the one drive and the USB is 8gb
<irv> so that's how i figured it was sda at that point
<markthomas> irv, okay.  Try update-grub and then grub-install /dev/sda again.
<irv> k sec
<irv> rebooting now
<markthomas> If it doesn't work, then we may have to dissect it a bit.
<irv> sounds good. i did install a lowlatency kernel, but it must've run update-grub since that was the kernel that booted off the USB after i installed it
<irv> but who knows :p
<irv> just booting up now
<irv> hrm, still no joy
<irv> could it be something to do with the raid controller?
<irv> booting off the USB now
<irv> okay, sudo blkid (after removing the USB that booted it) shows /dev/sda1 and /dev/sda5 as swap
<irv> sda1 is ext4 and is the partition all the files are on heh
<markthomas> irv, Just to be thorough, can you confirm whether your /boot is a separate partition?
<irv> it is not, i did a guided entire disk partitioning
<irv> tried just running sudo grub-install /dev/sda and then sudo update-grub /dev/sda as per someone in #ubuntu
<irv> also didn't work
<irv> "non system disk error"
<irv> maybe an issue with my fakeraid? i can put it into legacy mode or AHCI mode as well
<markthomas> OH, fakeraid.
<markthomas> irv, Your best bet with fakeraid is to turn it off in the BIOS and use software RAID.
<irv> ya it's only a single drive, so i'm not actually doing any RAID
<irv> legacy mode?
<RoyK> irv: ahci it be
<genii> Is there some reason I would not see a process running if I ssh-ed in and ran it? ( not inside screen)
<irv> i set it to AHCI and then re-ran update-grub and grub-install but no joy, gonna do a full reinstall on the AHCI'd drive
<irv> genii: where are you looking to see if it's running?
<irv> different userspace?
<RoyK> irv: probably different mapping
<genii> irv: ps aux doesn't show it, on either the server or the client machine
<irv> RoyK: i'm surprised it even booted at all even off the USB
<irv> i figured the files would be gone after i changed the controller's 'mode'
<irv> just writing a Ubuntu server USB again
<irv> since the one i used to install now has GRUB on it
<irv> minor side-lesson, select 'no' for grub install when installing from a USB :P
<irv> ok so after changing the sata config to AHCI, reinstalling ubuntu server and in the setup i picked "no" for the grub install which exposed a new screen that I was able to pick the destination of GRUB. i put it to /dev/sdb (which was the HD as the USB install key shows up as /dev/sda during the install)
<irv> now it boots straight off the HD :)
<irv> thanks for the help all
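irv's earlier question ("how can i verify that grub is indeed installed to the MBR") can be answered by looking for GRUB's marker string in the first sector of the disk; a rough sketch (the check_mbr name is made up, and real devices need sudo):

```shell
# Report whether the first sector of a disk (or disk image) contains
# GRUB's "GRUB" marker string.
check_mbr() {
    if dd if="$1" bs=512 count=1 2>/dev/null | grep -aq GRUB; then
        echo "GRUB found in $1"
    else
        echo "no GRUB signature in $1"
    fi
}
```

For example, check_mbr /dev/sda (run as root) prints which of the two cases applies.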
<keithzg> Well that's mysterious. Trying to use cygwin+rsync to periodically pull some Doxygen-generated documentation off of a Windows autobuild machine and put it up on our internal website, and the rsync share is including cygdrive, dev and proc? WTF?
<jrwren> gah, just realized it's UDS this week.
<keithzg> UDS doesn't have the same pomp and circumstance it used to, eh?
<keithzg> Moving entirely online will do that, I suppose.
<ogra_> it is a lot more focused
<ogra_> ... but you have to drink alone afterwards
<jamespage> hallyn, if you are around can you take a look at https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1254872
<jamespage> I'm trying to get to the bottom of it but this appears to be impacting openstack-gate in some way
<hallyn> jamespage: I can't keep up with the cutesy cloud archive names :)  which ubuntu release is that libvirt from?
<jamespage> hallyn, so that bug appears to relate to 12.04 itself - 0.9.8 release
<jamespage> I have another
<jamespage> for 1.1.1 (saucy)
<hallyn> meaning you have the same error on both?
<jamespage> hallyn, oh - I see you already on https://bugs.launchpad.net/nova/+bug/1228977
<jamespage> hallyn, good-oh
<jamespage> that was only yesterday
<hallyn> jamespage: right that one is in saucy only and no way am i backporting those patches to precise :)
<hallyn> but certainly the commit mentioned in comment #13 is worth trying
<jamespage> hallyn, no indeed
<jamespage> hallyn, is the 1.2.2 we have in trusty going to suffer from bug 1228977 as well?
<hallyn> jamespage: 1.2.2 was *just* released so i don't think so
<hallyn> oh, but, yes
<jamespage> hallyn, can you join me in #openstack-infra - doing piggy in the middle right now :-)
<hallyn> zul: ^ we'll probably need the patch in comment #40 of bug 1228977 added to trusty's libvirt
<jamespage> hallyn, zul's in florida on a beach
<hallyn> curses
<adam_g> hallyn, jamespage FWIW https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1248025 will eventually be a big issue for the upstream gate, at least the portions of it that run on rackspace.
<adam_g> stgraber, im happy to give you an instance on rackspace if you want to poke at that yourself ^
<jamespage> adam_g, you mean smb
<adam_g> doh
<adam_g> smb, ^ :)
<jamespage> adam_g, :-)
<smb> adam_g, yeah, thanks. If you can have something running for me in the morning, email me the details and I'll look there tomorrow
<jamespage> thanks smb
<jamespage> and adam_g
<jamespage> :-)
<adam_g> smb, sure. rackspace gave out a bunch of free credit at an ODS a while ago--might as well put it to good use
<smb> adam_g, Cool, yeah it's really mysterious what is different there since all seems well on my test systems even with a cloud-image based guest
<smb> I suspect it's something in the setup but hopefully one can find traces of that inside the guest
<smb> adam_g, Just as a note that the day is mostly over, so to make best use of any credits you can bring up the guest late in your day (assuming you are on the other side of the Atlantic) :)
<adam_g> smb, just spun a precise and saucy up. should be fine to keep running as long as you need. ill email you the details
<smb> adam_g, Ok, thanks
<Lord_Set> What is the config file which has all installed packages in it? I need to force remove a package that won't remove via apt-get purge properly or reinstall properly... I just forgot the name of the file that I have to remove the entries of the package from so that Ubuntu doesn't even think it is installed.
<rbasak> Lord_Set: I usually edit the maintainer script in /var/lib/dpkg/info to fix it so it does remove (eg. with an exit 0 at the top), and then do whatever cleanup it was trying to do manually instead.
<rbasak> I've always felt that this is cleaner. Then at least the package manager can maintain its own state, so no chance of corrupting it.
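rbasak's approach can be sketched as follows; the package name is a placeholder, the files under /var/lib/dpkg/info need root to edit, and sed's '1a' assumes the script's first line is the shebang:

```shell
# Make a failing maintainer script exit successfully right after its
# shebang line, so dpkg can finish the removal.
neuter_script() {
    sed -i '1a exit 0' "$1"
}

# real-world use (hypothetical package name), then clean up manually:
#   sudo neuter_script /var/lib/dpkg/info/broken-package.prerm
#   sudo apt-get purge broken-package
```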
<Lord_Set> Thanks rbasak
<Lord_Set> But just in the other case what is the file that has a list of all installed packages?
<smb> adam_g, Ah, hm. So there is xe-guest-utilities which causes a xenfs mount to /proc/xen and that is causing the failure. That package probably comes from managing the guests with XenServer. There is also a nova-agent using some file exported in the. Not sure where that comes from, it does not seem to have a package installed.
<adam_g> smb, so essentially libvirt conflicts with  xe-guest-utilities?
<smb> adam_g, So I think that is enough for first round of data gathering. I can check that locally tomorrow. And will put that info into the bug
<smb> Not really conflicts but it leads to some unexpected situation. Maybe the error of getting capabilities needs to be made a soft failure
<smb> Or it needs another check to avoid thinking the proc mount alone is hinting at a dom0
<smb> But it should be enough info for me to reproduce and then discuss this upstream
<smb> adam_g, So I am off the guests and you can shut them down to save credits
<adam_g> smb, great. thanks a bunch
<raub> How do you configure openldap to use a comodo-bought cert? i.e. what should I feed TLSCACertificateFile with?
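raub's question doesn't get an answer in-channel; for what it's worth, the usual slapd.conf-style layout is that TLSCACertificateFile takes the CA chain bundle shipped with the cert, not the cert itself. All paths below are placeholders:

```
# /etc/ldap/slapd.conf (or olcTLSCACertificateFile etc. in cn=config)
TLSCACertificateFile  /etc/ssl/certs/ca-chain-bundle.crt    # CA + intermediates
TLSCertificateFile    /etc/ssl/certs/ldap.example.com.crt   # server certificate
TLSCertificateKeyFile /etc/ssl/private/ldap.example.com.key # private key
```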
<Lord_Set> rbasak: But just in the other case what is the file that has a list of all installed packages?
<thumper> hallyn: if I snapshot a dir based lxc image with aufs, and the base container gets updated (update/upgrade), is it expected that the cloned image will work?
<thumper> s/snapshot/clone/
<hallyn> thumper: no.  snapshots of dir backed containers should not be done
<hallyn> and lxc should be defaulting to a copy-clone
<hallyn> you should create an aufs clone first, then snapshot that
<hallyn> unfortunately
<thumper> I think I used snapshot wrong
<thumper> let's step back
<thumper> with juju, we want to improve the local testing process
<thumper> the idea is to create containers with clone
<thumper> so we can skip the apt update/upgrade step mostly
<thumper> btrfs works wonderfully
<thumper> but we want a stable solution that works for people without btrfs
<thumper> that still gets them good speed, good disk combo
<thumper> hallyn: do you think that cloned containers using aufs will give us this?
<thumper> one thought was btrfs on loopback
<thumper> which I had working locally, but there are issues around keeping track of the size, mounting in the right place, resizing etc
<thumper> I could go down this track if it will give us the best user experience
<hallyn> thumper: btrfs *is* the ideal solution,
<hallyn> thumper: overlayfs has its own bugs, and aufs has its own (stgraber can tell you about them)
<hallyn> oh that's right, aufs doesn't do xattrs?
<thumper> so in your opinion, if they don't have btrfs natively, offer btrfs with loopback devices?
<hallyn> anyway, if you want to use aufs, it should mostly work
<hallyn> you just have to create an unchanging container base, then create clones from it
<thumper> one problem with aufs that I have been told about, is that wordpress charm doesn't work
<hallyn> and not update the base
<thumper> and given that is our CI, and used in all the examples
<thumper> not ideal
<hallyn> heh
<hallyn> wonder why
<thumper> wordpress installs nfs kernel modules
<hallyn> that should fail with btrfs too
<hallyn> wordpress should not do that
<thumper> hmm...
<thumper> I should really go try
<thumper> but would that also fail in plain containers?
<hallyn> yes
<hallyn> containers cannot install modules
<thumper> ok, in which case it must be something different
<thumper> because wordpress works in dir backed containers
<thumper> backing up a little
<thumper> if I wanted to test aufs backed clones
<thumper> I should make sure that the base image is not updated for the lifetime of the clone?
<hallyn> yes
<thumper> how badly will it break if it does happen?
<hallyn> depends on what is changed
<thumper> still able to destroy the containers?
<hallyn> yeah
<hallyn> mainly, if file /a/b was overwritten in the clone, then updates in the main container to /a/b won't show up,
 * thumper nods
<thumper> also if a is updated and b is running
<thumper> missing inotify for things that aren't overwritten?
<thumper> stuff like that?
<hallyn> for things that are overwritten
<hallyn> remember, we're not sure (afaik) whether aufs will end up in final trusty kernel or not
<thumper> ok
<thumper> so, back to juju awesome user experience
<thumper> for trusty, we are going with clone for creating containers
<thumper> if you are on btrfs, it is awesome
<thumper> if not, lots of I/O and disk usage
<thumper> have a plugin to support btrfs on loopback
<thumper> sound like the best option?
<hallyn> thumper: I don't know how to prioritize the downsides.  I think using aufs and never upgrading the base is good.
<hallyn> this also might be a use-case for a qcow backend,
<hallyn> (but that doesn't exist yet :)
<thumper> qcow?
<thumper> ah
<hallyn> and really it'd have the same expectations as aufs
<thumper> is there a way to ask the container (aufs clone) what it is cloned from?
<hallyn> the base rootfs is in the config file
<hallyn> the rootfs for the clone will be "aufs:<rootfs_base>:delta_dir"
<thumper> so, could easily parse the config files for all the current containers
<hallyn> yeah,
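The config parsing thumper suggests is a one-liner, given the aufs:<rootfs_base>:delta_dir form hallyn describes; a sketch (the aufs_base name is made up):

```shell
# Print the base rootfs an aufs-cloned container was created from,
# by reading the lxc.rootfs line of its config file.
aufs_base() {
    awk -F'[ =]+' '$1 == "lxc.rootfs" && $2 ~ /^aufs:/ {
        split($2, parts, ":"); print parts[2]
    }' "$1"
}
```

For a clone of the template, aufs_base /var/lib/lxc/new-name/config would print something like /var/lib/lxc/juju-precise-template/rootfs.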
<thumper> hmm...
<hallyn> or juju could just call the canonical container "juju-precise-20140311"
<thumper> well, the canonical one now is 'juju-precise-template'
<hallyn> then the next time it creates a base, 'juju-precise-20140315'
<thumper> what do you mean 'next time it creates a base'?
<thumper> make a dir based clone of the base template?
<hallyn> if juju ever updates the template
<thumper> hallyn: ok, so if I want to use clone with aufs, all I need is a base container to start with (dir backed), and go "lxc-clone -B aufs -o juju-precise-template -n new-name" right?
<hallyn> yeah (plus -s)
<thumper> oh, need the --snapshot too?
<thumper> the help says "The new container's rootfs should be a LVM or btrfs snapshot of the original."
<thumper> but ok, will add it
<thumper> i just thought that if I specified the backingstore, it would just know
<hallyn> thumper: there may be cases where someone doesn't include "-s" and really means "full copy".  in that case aufs should return failure, as it can't do that
 * thumper nods
<thumper> ok
<hallyn> maybe the clearest thing to do would be to add a --nosnapshot option and have the default be "whatever is best"
<hallyn> but i don't want to change that right now
<thumper> sure
 * thumper goes to experiment with aufs
<apb1963> Could someone run dhcpdump on their machine and tell me if you're sending out continual bootp requests (about every 15 seconds or so)?  That's what my machine does and I'd like to know if anyone else is seeing this as well (and doesn't know it).
<sarnold> apb1963: nothing with sudo tcpdump -n -i wlan0 'udp and (port bootpc or bootps)' when waiting a minute-ish here
<apb1963> sarnold: thank you... what version of ubuntu are you running?
<sarnold> apb1963: 13.10
<apb1963> hmm... 12.04 here.
<apb1963> plus i'm wired with a cable provider
<apb1963> I need more of an apples to apples comparison I suppose
<sarnold> could be :) dhcp is finicky.
<apb1963> is it?
<sarnold> the never-ending stream of bug reports of folks running in vms or containers when the kernel doesn't fill in the udp checksums because it thinks that's being offloaded to a NIC.. there must have been months of those bugs. :) heh
<apb1963> hmm
<apb1963> i'm glad I missed out on that spot o' fun
<apb1963> it was purely accidental that I found this issue....  I've added a whitelist to my firewall... and was logging dropped packets.... that's when I saw this.
<apb1963> nobody seems to know why it's happening.  I've stumped the entire world :/
<sarnold> apb1963: strace the process responsible?
<apb1963> that's actually a good idea...  I'm not sure I know what's responsible, but I'm guessing it's dhclient
<apb1963> just killed it
<apb1963> lets see if that kills the packet stream
<GeekD00d> Any reason I can't get php to work with nginx? It gives me a 502 bad gateway error
<GeekD00d> whenever I try to load a php file, that is
<apb1963> sarnold: it seems to have stopped the outgoing, but I still have incoming Replies coming back... not sure if they're going to run out like they're buffered... or if it will continue forever.
<apb1963> GeekD00d: #nginx #php
<GeekD00d> thanks apb1963
<sarnold> apb1963: oh that's odd..
 * teward yawns
<apb1963> sarnold: it's broadcasting Reply packets
<apb1963> I presume it's my ISP since it's coming from a 10.65 address
<sarnold> apb1963: eww. gross.
<apb1963> IP: 10.65.192.1 (0:24:14:61:38:d9) > 255.255.255.255 (ff:ff:ff:ff:ff:ff)
<sarnold> wait, the ethernet mac is set to ff:ff:ff:ff:ff:ff? I thought dhcp replies were sent to the specific mac that asked for the address?
<apb1963> I THINK that's an IP address, not a MAC
<apb1963> it's just converting to hex
<apb1963> wait... maybe not
<sarnold> heh, then it'd only be four bytes long rather than six
<apb1963> yeah, I was just about to say... that may be an IPv6 address
<sarnold> in that case there'd be more than six bytes :) hehe
<apb1963> sarnold: you're right... so I guess it is a MAC...
<apb1963> sarnold: interesting conversation on it here: https://learningnetwork.cisco.com/thread/32255
<jrwren> that is the ethernet broadcast mac
<jrwren> is it CDP or something?
<apb1963> ?
<jrwren> cisco discovery protocol
<apb1963> no idea
<apb1963> I don't think so
<apb1963> but I couldn't tell you for sure
<apb1963> which brings me back to ... no idea :)
<jrwren> is that tcpdump?
<apb1963> I use ethernet to connect to my cable modem
<apb1963> beyond that... no idea
<apb1963> the above?
<jrwren> yes, the above.
<apb1963> the output I posted?
<apb1963> it's actually dhcpdump
<jrwren> oh, i'm not familiar with that
<apb1963> it's like tcpdump... but...  deciphers dhcp packets
<apb1963> this is tcpdump: 16:48:51.974312 IP 10.65.192.1.67 > 255.255.255.255.68: BOOTP/DHCP, Reply, length 300
<jrwren> that wasn't much deciphering :)
<apb1963> oh, that's because I only took an isolated line
<jrwren> oh, pastebin the whole thing.
<apb1963> ok
<apb1963> a moment
<jrwren> tcpdump with -vv does a pretty good job of dhcp protocol decode
<jrwren> and tcpdump with -vv -X shows hexdump payload, and dhcp responses are reasonably readable
<apb1963> http://fpaste.org/84549/39458201/
<apb1963> jrwren: good to know
<sarnold> o_O bootfile name..
<apb1963> sarnold: yes?
<apb1963> it's what I'd expect in a bootp reply :)
<apb1963> just not necessarily broadcast to the world.. but according to that cisco doc.. it's not wrong.
<sarnold> apb1963: well yes and no, I'm curious what your ISP is trying to get you to boot :)
<apb1963> haha
<apb1963> yeah
#ubuntu-server 2014-03-12
<apb1963> well, I'm not quite as worried about my ISP... i'm more interested in why MY machine is sending out perpetual bootp requests.
<apb1963> Or more exactly..... dhclient.
<apb1963> And more importantly... how to make it stop.... other than killing dhclient... which I presume I need running.
<apb1963> sarnold: strace output - http://fpaste.org/84551/94583190/   Nothing jumps out at me, but then I don't really know what I'm looking for....
<teward> sarnold: unrelated: with the NGINX MIR, who else needs to review it?  And should I poke them, or just wait?
<jrwren> it's normal for dhclient to continuously do that when you have short lease times
<apb1963> sarnold: I'm also not sure how valid that strace is... since it forks a child and exits
<apb1963> jrwren: 15 seconds?
<jrwren> that is excessive
<jrwren> is it always and continuous, every 15 seconds?
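jrwren's lease-time theory can be checked against dhclient's own records; a sketch (the lease_times name is made up, and the leases path varies by interface, e.g. dhclient.eth0.leases):

```shell
# Print the renew/rebind/expire lines of the most recent entries in a
# dhclient leases file; a renew time only seconds away would explain
# near-constant requests.
lease_times() {
    grep -E 'renew|rebind|expire' "$1" | tail -3
}

# typical use:
#   lease_times /var/lib/dhcp/dhclient.leases
```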
<sarnold> teward: good question; since infinity has already done an earlier review he may be ideal for finishing; he's on the MIR team, is familiar with the results and the packaging
<apb1963> jrwren: which is why I'm here :)
<sarnold> apb1963: yeah, I wondered what was happening in that child.. nothing else seems to stand out. maybe ltrace would be more instructive; we might see some library calls (strcmp, for example) that would tell us more
<teward> yeah he helped get the FFe for the 1.4.6 merge into Trusty so that we have the latest, I'll poke him and have him do a review of the debdiff
<teward> if all goes well, the debdiff that's uploaded to launchpad now might just be all that we have to do for this.
<teward> (the nginx twitter has already retweeted my post that only the nginx core modules are getting main inclusion, so at least we have that)
<apb1963> sarnold: it's a good idea... struggling with pastebin now, the file is very large so I've been stripping off lines to pare it down to something "reasonable", but I'm not sure how useful a pared down file is going to be.  the problem is I still don't know what I'm looking for... I mean I can grep for strcmp, but the question still remains
<apb1963> sarnold: In any event, that's what I did... so here's that: http://paste.ubuntu.com/7076719/
<sarnold> apb1963: hahaha, that's an awesome paste :)
<apb1963> is it?
<sarnold> apb1963: it's a question I've never thought to ask, "what do all the strcmp calls in this program look like?" :)
<apb1963> I don't like using that pastebin... it has no expiration... that's why I was struggling with fpaste
<apb1963> sarnold: I got the idea from you
<apb1963> sarnold: so you get 98% of the credit
<sarnold> apb1963: have you seen pastebinit? it's pretty helpful for pasting huge things and lets you pick different pastebins
<apb1963> sarnold: yeah... but it doesn't work with fpaste (despite it being on the list)... and you still can't set expiration time... or at least I certainly don't know how.
<sarnold> apb1963: aww. :/
<apb1963> sarnold: yeah, there are even bugs reported against it for the same reason, along with a claim that it's fixed... but it's not.
<sarnold> apb1963: it looks mostly like this captures the internals of the bash shell script interpreter, I'm guessing that different tokens get hashed into a certain hash bucket and then bash walks the chain of entries looking for the correct command; e.g. "echo" looks up "else" "elif" "esac" "echo"...
<apb1963> sarnold: yeah well... to be honest I'm burnt out on filing  bugs at this point.
<sarnold> apb1963: ha, I know the feeling.
<apb1963> sarnold: maybe i'll get annoyed enough at some point to overcome the annoyance of filing yet another bug report
<apb1963> sarnold: Thanks for your help though!  I learned a new tool... never saw ltrace before.  Catch you another time :)
<sarnold> apb1963: have fun! take care :)
 * apb1963 exits stage left
<GeekD00d> apb1963: earlier you referred me to a channel #php. That channel is invite only?
<Midion_> What does everyone here recommend for setting up a server? I just want something with a control panel. I was using virtualmin with webmin and I was told ubuntu no longer supports webmin.
<Midion_> I'll just reload the server and try vestacp again.
<Midion_> Is there a server setup with ubuntu that includes a control panel and ftp?
<kriskropd> Midion_: I just learned about this channel, but I prefer working with my server via ssh - I run bash shells inside of tmux sessions so I can reconnect and continue from where I leave off
<kriskropd> you probably are going to want to learn to do the same if you plan on using unix or unix-like servers - while some GUI control panels exist for this-and-that, they tend to be flaky and easily broken
<Midion_> I'm just on more of a time crunch. I'm on day 2 of googling every other command i try to put into the command line.
<Midion_> I still don't have my backed-up site up yet. If I had a GUI like cPanel, which I don't have the money for currently, I would have had my site up by the deadline I set.
<Midion_> I'm just stressed. Just so many commands, and of course a lot require something else to be installed first just to use that command.
<kriskropd> Midion_: you might find commands easier to use and learn if you learn to read the man pages - googling will take too long and often give you information that might be incorrect for your system - also if you are learning something robust, you might find a channel on irc for that specific topic, for instance #bash or #awk or #sed
<kriskropd> Midion_: my suggestion is to take the command thing as slow as you need to, but make the effort to learn it - you will kick yourself for relying on a gui once you familiarize yourself with even just the basics of cli :)
<kriskropd> Midion_: unfortunately, I don't have any good suggestions on speeding up such a tasking learning process :(
<Midion_> I have learned a lot more about the command line. Using nano, vi, importing my sql through the command line. I can only connect to my vps with ssh at this point.
<Midion_> I just lost my job after getting in a car accident, so I'm just trying to get this site up and running as a source of revenue. It's more for personal reasons that I'm stressing.
<Midion_> Thank you for all of the advice and help.
<kriskropd> Midion_: Sorry I can't make things any easier, but I do promise if you can get over the hurdle of cli, you will find things immensely easier as your mind starts to adapt to the more traditional unix-like system
<kriskropd> also you have my condolences, I hope things turn out for the best and fortune comes your way :)
<arrith> Midion_: i'm not sure if it applies but you see cpanel around
<arrith> Midion_: taking a while to read the ubuntu server guide is the best way to do it
<sarnold> ugh please not cpanel, might as well just publish your root password on your website..
<arrith> haha
<kriskropd> Midion_: webmin did at one point work very well and was, AFAIK, one of the better choices, but that was years ago and it was starting to fall apart by the time I actually got to try it for the first time
<sarnold> ugh not webmin either please :)
<kriskropd> sarnold: I'm not supporting webmin :) I used it once, and it never has worked for me
<sarnold> I dunno what it is about those control panels, but most 'generic' control panel things have absolutely horrid code quality
<sarnold> kriskropd: good good :)
<sarnold> perhaps some ISP has a nice home-rolled thing somewhere, but it surely hasn't been well publicised :)
<kriskropd> sarnold: I've the feeling that most of them are developed with good intentions, but somewhere along the way the developers realize they are working really hard to build an ineffective wrapper around a very flexible system
<sarnold> kriskropd: yeah, I suspect they didn't set out to be a fertile field for remote exploits.. :)
<Midion_> now i just have to make sure i setup sftp correctly.
<lordievader> Good morning.
<pmatulis3> morning
<Midion_> mornin
<Kully> Hey guys; I'm having issues with a newly set up ubuntu pptp server; the vpn server can communicate with all nodes, lan and vpn-lan, and vpn-lan nodes can ping the nodes behind the server, but not the other way around... also nodes on the vpn cannot ping each other. Any help is appreciated!
<stetho> The company I worked for was acquired and my users have all been recreated on the parent company's servers. This means all their login names and UIDs/GIDs have changed. Short of recreating everyone is there a way I can give them access to our old NFS shares? Some mount -o user=fred.smith option that I can't find in the man pages for example?
<rbasak> stetho: NFSv4 has http://manpages.ubuntu.com/manpages/precise/en/man8/idmapd.8.html. I know nothing more about the topic though.
<andol> stetho: Might you not want to update the uids on the NFS server, or do you have other systems still relying on the old uids?
<stetho> andol: There are other systems.
<stetho> rbasak: Thanks, I'll take a look.
<DammitJim> silly question. I just set up my server to have a static IP address, but the DNS servers aren't being updated
<DammitJim> is there something I need to do to kickstart the update process? My /etc/resolv.conf still says the old servers
<DammitJim> even though I added dns-search and dns-nameservers to /etc/network/interfaces
<pmatulis3> DammitJim: what did you do to restart networking?
<DammitJim> sudo /etc/init.d/networking restart
<DammitJim> then I did service resolvconf restart
<DammitJim> but none of those are doing anything
<pmatulis3> DammitJim: what release of 'buntu?
<DammitJim> 12.04 LTS server
<DammitJim> unless I was supposed to add my dns entries in /etc/resolvconf/resolv.conf/base ?
<DammitJim> I'm confused... there are so many places that the internet sites tell you to do it at
<rbasak> DammitJim: https://www.stgraber.org/2012/02/24/dns-in-ubuntu-12-04/ is a good source, written by an Ubuntu core developer
<rbasak> DammitJim: the correct and easiest way is to "ifdown eth0" (or whatever), change /etc/network/interfaces and "ifup eth0" again. With care, since if you're remote you won't be connected after the first step.
<rbasak> DammitJim: but I'm not sure what state your system is in now to fix it from where you are, sorry.
<DammitJim> I can go to the machine and make those changes
<DammitJim> brb
<DammitJim> rbasak, but you are saying if I did the changes in interfaces, it should all update, right?
<rbasak> DammitJim: when the interface is brought up, yes.
<DammitJim> I did that, but after bringing the interface back up, it didn't work
<DammitJim> maybe I'm missing something
<DammitJim> http://pastie.org/8911642
<DammitJim> that's my interfaces entry
<DammitJim> WEIRD, I ended up rebooting the server and that did it
<DammitJim> maybe there is some other service that needs to be restarted when making changes like this
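For reference, the shape of a static stanza whose dns-* lines resolvconf will pick up (per rbasak's link) looks like this; every address and name below is a placeholder, not DammitJim's actual values:

```
# /etc/network/interfaces -- ifdown eth0 *before* editing, then ifup eth0
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1 8.8.4.4
    dns-search example.com
```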
<coreycb> jamespage, hi, for bug 1289935 we're missing an upstream patch in our icehouse keystone branch.  to fix that would we need to add the patch to debian/patches?
<tomixxx5> hi, i have inserted my usb-stick with a ubuntu server 12.04.4 iso image because i want to split my current ubuntu partition. what should i select in the main window? I can only see "Install Ubuntu Server", "Multiple server install with MAAS", "Check disc for defects", "Test memory", "boot from first hard disk", "Rescue a broken system"
<tomixxx5> i follow this guide: http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/
<jamespage> coreycb, we should probably cherry pick that fix
<coreycb> jamespage, ok, so put it into debian/patches?
<jamespage> coreycb, yes
<coreycb> jamespage, alright I'll work on that
<pmatulis3> DammitJim: restarting networking bits is not obvious unfortunately.  if you can reproduce the behaviour please open a bug
<jamespage> hallyn, not getting much more running the test individually
<jamespage> hallyn, gonna gdb it
<hallyn> jamespage: since this is only happening in ppa, i would assume it's either an old-kernel or permissions problem
<jamespage> hallyn, I get it in a local schroot as well
<hallyn> jamespage: ok that's good then :)
<jamespage> so thats precise schroot on trusty
<hallyn> since a precise container on trusty kernel on precise userspace didn't work for me,
<hallyn> i assume it has to do with a missing Depends?  maybe...
<jamespage> hallyn, might be
<jamespage> coreycb, responded on the db sync ticket
<coreycb> jamespage, thanks
<jamespage> coreycb, yeah - we should fix it the same way everywhere imho
<coreycb> jamespage, ok
<jamespage> hallyn, its failing in virNetSocketNewConnectSockFD I think
<jamespage> src/rpc/virnetsocket.c:899
<hallyn> jamespage: and what kind of socket is it trying to connect to there?
<coreycb> jamespage, something is wrong with the keystone package repository -- first odd thing I notice is 'dch -i' generates a version with '1:2014.1~b3+master-0ubuntu2'
<hallyn> oh i see it in the test,
<jamespage> coreycb: we use that notation when we've had to do something based on trunk changing
<jamespage> coreycb, I had to refresh the patches to fix the trunk builds earlier today
<jamespage> coreycb, I would suggest that you just work against ubuntu:keystone for this change - we can merge it back in later
<jamespage> (as changelog only if you see what I mean)
<coreycb> jamespage, ok so that's not the issue but building the source package failed too.  I'll try ubuntu:keystone.
<jamespage> coreycb, yeah - you would need to pull the master tarball
<jamespage> coreycb, I have a script that does that
<hallyn> jamespage: can you get an error printed from virNetSocketNew ?
<jamespage> coreycb, http://paste.ubuntu.com/7080211/
<coreycb> jamespage, thanks!
<jamespage> hallyn, trying to
<jamespage> coreycb, generally I update the version to include +master and then run that in the branch
<coreycb> jamespage, ok..  so is it ok if I work off this branch?
<jamespage> coreycb, as that needs to go to distro now its easier if you work from ubuntu:keystone
<jamespage> we can sort out the changelog later
<coreycb> jamespage, ok
<jamespage> coreycb, or you could just work from the previous revision of the ubuntu-server-dev branch and merge it back in
<jamespage> either would work
<coreycb> jamespage, is ubuntu:keystone lp:ubuntu/keystone ?
<jamespage> coreycb, yes
<coreycb> jamespage, ok because it was last modified in oct 2013
<jamespage> coreycb, urgh
<jamespage> might be out of date
<jamespage> sometimes that happens
<coreycb> jamespage, yeah looks to be
<jamespage> coreycb, http://package-import.ubuntu.com/status
<coreycb> jamespage, I guess I'll send an email to  ubuntu-distributed-devel
<rostam> HI Is there any reference on how to install ubuntu server with secure boot enabled?
<sarnold> rostam: I think you just grab a 12.04.4 installer and go... no?
<rostam> sarnold, I am using official 12.04.3 (will try .4 if that is the issue). The secure boot is enabled in BIOS but I do not see /sys/firmware/efi   in the filesystem?
<sarnold> rostam: I think the .3 should have had support as well..
<rostam> sarnold,  may be I need to check the BIOS configuration again.
<sarnold> rostam: note that our "secure boot" efforts are just to get the machines running using ubuntu, there's no integrity checking in the kernel for e.g. loaded modules..
<rostam> sarnold,  is there a plan to support it in the next release of LTS (14.04) ?
<sarnold> rostam: here's a huge pile of miscellaneous notes.. https://wiki.ubuntu.com/SecurityTeam/SecureBoot
<sarnold> rostam: I don't think so
<sarnold> rostam: it'll be the same "enablement" but not "enforcement"
<rostam> sarnold, thanks, for the info.
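The missing /sys/firmware/efi that rostam mentions is itself the usual quick test: that directory only appears when the kernel was started through EFI firmware. A minimal sketch of the check (assumes nothing beyond a mounted sysfs):

```shell
# /sys/firmware/efi exists only when the kernel booted via (U)EFI;
# its absence means a legacy BIOS/CSM boot, which would explain why
# the secure-boot machinery never comes into play.
if [ -d /sys/firmware/efi ]; then
    echo "booted in UEFI mode"
else
    echo "booted in legacy BIOS/CSM mode"
fi
```

If this reports BIOS mode even with secure boot enabled in the firmware setup, the install media was probably started via the legacy/CSM boot path rather than the EFI one.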
<tych0> hi hallyn, (smoser?) if i want to write some data to /etc/cloud/cloud.cfg.d/ on an LXC i'm creating, is there a nice way to do that?
<tych0> i see there are clone hooks
<tych0> but i'm afraid it is a little opaque for me
<smoser> in the end the only way to do it to my knowledge is to write files there.
<smoser> the clone hook would be able to run some code on every clone of the given container
<tych0> ok, that's what i was afraid of
<smoser> which could do that for you
<hallyn> tych0: the dir /etc/cloud/cloud.cfg.d/ would be on the host, or in the container?
<tych0> sorry, in the container
<delete> question, I deployed an image from ec2 into another cloud, it boots fine but it gets stuck for 5 minutes after some ec2 script that I am not able to find to deactivate. syslog output http://pastebin.ca/2655826
<delete> any idea?
<delete> ssh doesn't start for quite some time
<Kully> anyone ever seen connecting to a vpn bog out a machine?
<Kully> especially running sudo commands and such
<Kully> non-sudo seems to run at normal speed, but sudo nano etc takes forever to load, and all sites on tomcat/apache load VERY slow
<pmatulis3> Kully: do you use any remote (ldap) user management/authorization?
<Kully> eh; tomcat hits a database, but that's just for the websites
<Kully> I can barely get the sites to load, and when they do there is a timeout on the database connection, which I can see in the logs
<Kully> but even when I was using tomcat-users for authentication I was still seeing the server bog
<Kully> if I work as root it speeds up the commands; but sites still take too long to load etc. when I close the vpn it's still really slow until I reboot, then everything is back to normal
<pmatulis3> maybe the server is using the vpn when it shouldn't, dunno, sniff traffic on the wire (tcpdump)
<Kully> I don't think I know enough to do that :/ I wouldn't really know what I'm looking at
<Kully> know any alternative VPN clients I could try for ssl vpn? right now I'm using netExtender for dell ssl vpn
<Kully> we have a sonic wall
<pmatulis3> openvpn
<Kully> cool
<Kully> thanks
<markthomas> Kully, for starters, check the routing table before opening the VPN connection and then again after it's closed.  They should be identical.
<Kully> i'll give that a shot
<Kully> ok
<markthomas> +1 for OpenVPN
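markthomas's before/after comparison can be scripted. A sketch, assuming the iproute2 `ip` tool and using /tmp paths of my own choosing:

```shell
# Snapshot the routing table before connecting the VPN:
ip route show > /tmp/routes.before
# ... connect with the VPN client, reproduce the slowness, disconnect ...
ip route show > /tmp/routes.after
# Any surviving difference means the client left routes behind:
if diff -u /tmp/routes.before /tmp/routes.after; then
    echo "routing table restored cleanly"
fi
```

A leftover default route pointing at the tunnel is the classic way a VPN client makes everything (DNS lookups for sudo, database connections) crawl after disconnect.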
<Kully> http://pastebin.com/iHMMjP0X
<Kully> they're not the same
<pmatulis3> before and after, not during and after
<jamespage> hallyn, it might be related to http://osdir.com/ml/libvir-list/2013-10/msg00871.html
<jamespage> but I'm not 100% convinced
<Kully> http://pastebin.com/eZXYjDGY
<Kully> still different, what does that mean
<mgw> I'm having an issue getting dnsmasq and resolvconf to play nicely
<mgw>  /etc/resolvconf/resolv.conf.d/base has my name servers in it
<mgw> but they're not propagating to /var/run/dnsmasq/resolv.conf
<el_seano> I'm playing around with lib-virt, and deployed a vanilla precise-server image.  However, upon trying to connect to the VNC console after finishing the install, the console stops just before the login screen
<el_seano> I installed openssh during the installation, and can ssh into the system, but it requires me having to sift through my logs to find which ip it nabbed via dhcp
<el_seano> tried switching from vnc to spice with the same result, and getting it to play nice via serial console is giving me a headache
<el_seano> any suggestions?
<el_seano> primarily using virt-manager on 14.04 beta
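Rather than sifting syslog for the address the guest nabbed via dhcp, libvirt's default NAT network keeps a dnsmasq lease file that can be matched against the domain's MAC address. A sketch; "precise-vm" is a hypothetical domain name, and the lease-file path is the customary one on Ubuntu hosts of this era:

```shell
# Pull the guest's MAC out of its libvirt XML, then look it up in
# the lease file of the "default" network ("precise-vm" is a
# placeholder domain name; adjust paths to your setup):
mac=$(virsh dumpxml precise-vm | sed -n "s/.*mac address='\([^']*\)'.*/\1/p" | head -n1)
grep -i "$mac" /var/lib/libvirt/dnsmasq/default.leases
```

The matching lease line carries the guest's current IP, which avoids the log-sifting entirely.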
<thumper> hallyn: when running lxc-create, man page doesn't mention -B best, but people are talking to me about it
<thumper> right now I'm special casing btrfs
<thumper> what is the logic behind -B best?
<hallyn> thumper:         char *best_options[] = {"btrfs", "zfs", "lvm", "dir", NULL};
<hallyn> it tries in that order to pick a backing store
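The array hallyn pastes is the whole story: `-B best` walks that list and takes the first backing store that works. A rough shell rendering of the same first-match idea (the `command -v` probes are my stand-in for lxc's real capability checks, not what lxc actually does):

```shell
# First-match selection over the list hallyn quoted; "dir" always
# succeeds, mirroring lxc's plain-directory fallback.
for bs in btrfs zfs lvm dir; do
    case $bs in
        btrfs) command -v btrfs    >/dev/null 2>&1 && break ;;
        zfs)   command -v zfs      >/dev/null 2>&1 && break ;;
        lvm)   command -v lvcreate >/dev/null 2>&1 && break ;;
        dir)   break ;;
    esac
done
echo "would use backing store: $bs"
# roughly equivalent to just running: lxc-create -n c1 -t ubuntu -B best
```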
<thumper> we have zfs?
<hallyn> sure, i've used it with the zfs module at least
<thumper> right now I can determine btrfs by asking the OS
<thumper> and juju logic currently says "if not btrfs, use aufs"
<thumper> is it worth special casing lvm?
<thumper> and if so, how would I know?
<thumper> hallyn: so as a side note, man page for lxc-create needs to mention "best"
<hallyn> yup (or we remove it :)
<hallyn> lvscan?
<hallyn> but, if you use -B best then you don't need to check it yourself
<thumper> if we use lvm backing we need extra params as well don't we?
<thumper> I'm just thinking on the clone side
<hallyn> only if you don't like the defaults
<thumper> kk
<thumper> however, I need to work out when on clone to specify "aufs"
<thumper> I can use "-B best" on create
<hallyn> stgraber: i suppose aufs should be added to the -B best set ^
<thumper> juju now always specifies "--snapshot" for clone
<hallyn> no wait,
<thumper> and if not btrfs, specifies "-B aufs"
<Midion_> Is there any way for me to test why my site is down? I ping the name and it points to the ip used for my server.
<thumper> but that won't help lvm backing
<hallyn> thumper: *or* you could just say that btrfs is highly recommended, and use only copy clone dir backed containers otherwise
<hallyn> let the user specify a custom lxc path (i.e. /opt/juju) where they can mount btrfs, but leave it at that
<hallyn> that would simplify debugging imo
<hallyn> just because lxc provides all those other options doesn't mean you should support all of them
<thumper> true that
<thumper> I'm going to keep things explicit I think
<thumper> other mount dirs can come later
<thumper> btrfs aware juju, and defaulting to aufs will be fine for this version IMO
<thumper> thanks
<hallyn> thumper: well i was suggesting not even doing aufs...
<thumper> nah, aufs is too cool
<thumper> keeping that :)
<hallyn> alrighty
<thumper> thanks
<thumper> no doubt we'll iterate on the lxc goodness inside juju
<hallyn> just had someone on precise on 3.13 kernel report some host hangups with aufs....
<thumper> but this is good enough for now
<thumper> we are doing this for trusty ++ only
<thumper> and only for the local provider
<thumper> not lxc inside other machines
<hallyn> well this was a kernel bug;
<hallyn> but presumably will be fixed
<hallyn> heck lemme see if i can reproduce that right now
<Midion_> I went to http://www.downforeveryoneorjustme.com/ and it says my site is down. I am ssh'ed into my ubuntu server. When I ping my site with command prompt it is also giving the ip for my server.
<flaccid_> i setup juju per the docs for amazon. when i do juju bootstrap it returns, ERROR environment has no access-key or secret-key, but those are set. is this a bug?
<Midion_> went to http://www.whyismywebsitedown.com/ and it says: Your site is responded with a message of 503: Service Unavailable
<coreycb> jamespage, 1289935 is ready but I haven't done a merge request because it looks like a merge against lp:~ubuntu-server-dev/keystone/icehouse will have conflicts
<Midion_> Site was only working a few hours. Maybe it was hacked because I have done nothing on it but post articles on my blog.
<sarnold> Midion_: excellent :) look through your logs in /var/log/ and try to find that 503 response..
<Midion_> ty
<coreycb> jamespage, due to stepping back 1 revision
<Midion_> maybe i got ddoss attacked.
<Midion_> most likely spelled that wrong....
<sarnold> Midion_: ddos shouldn't cause 503
<Midion_> named[26827]: client 65.55.37.36#27785: query (cache) 'ns1.saptraining4all.com/A/IN' denied
<Midion_> I see this in my log over and over and over
<Midion_> i have no idea what ns1.saptraining4all.com is.
<sarnold> that could be trying to use your dns as part of a ddos, yeah
<sarnold> I hope your dns recursor isn't wide-open?
<Midion_> i have no idea what a dns recursor is. i have been battling with ubuntu for 3 days just to get my site up and finally a few hours ago it was up and now its down.
<sarnold> hrm, why are you running bind?
<Midion_> im using vestacp
<Midion_> i think it includes bind
<flaccid_> maybe there is some crazy problem with juju not working on os x. i guess i'll try it on ubuntu. everything is certainly configured correctly!
<Midion_> should i install CSF + LFD???
<Midion_> Is there any way to get my site back?
<sarnold> Midion_: did you find the source of your 503 errors?
<Midion_> the only thing I can tell is named[26827]: client 65.55.37.36#27785: query (cache) 'ns1.saptraining4all.com/A/IN' denied was showing up over and over for many many hours.
<Midion_> im wondering if that is the problem.
<sarnold> Midion_: it can't be good. if you don't know why you have bind installed, I'd go to some effort to turn it back off again. it's nothing but trouble.
<Midion_> is bind for nameservers?
<sarnold> yes
<Midion_> that what it must be for then.
<Midion_> I followed this tutorial. http://www.servermom.org/add-new-website-vesta-cp/1066/
<sarnold> Midion_: so, at step 2, what DNS servers did you put for your domain?
<Midion_> i have ns1.mydomainname.com and ns3.mydomainname.com in my configuration.
<sarnold> Midion_: do you host those servers yourself?
<Midion_> i mean 2
<Midion_> i changed my nameservers at internetbs.net to those 2.
<sarnold> do you host those servers yourself?
<Midion_> i guess not
<Midion_> i was just following what the tutorial says and it just used my same ip.
<sarnold> then I'd turn off named :) you don't want to run bind.
<sarnold> and if your configpanel thingy doesn't make that easy, I'd kill the vm instance and start over, their install script scribbles over a -lot- of settings
<sarnold> (to be fair, their install script wasn't as horrible as I feared, but there's way too much downloading without integrity and authenticity checking for my tastes.)
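Turning named off on sysvinit-era Ubuntu, as sarnold suggests, is two commands plus a sanity check that nothing is still answering on port 53 (the package name is bind9):

```shell
# Stop BIND now and keep it from coming back at boot:
sudo service bind9 stop
sudo update-rc.d bind9 disable
# Confirm nothing is still listening on the DNS port:
sudo netstat -lnup | grep :53 || echo "port 53 closed"
```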
<Midion_> ok i think its off now.
<Midion_> but my site still down. i can only get to it from the ip.
<Midion_> i just dont understand, because if i ping my domain name it also gives me my ip.
<sarnold> Midion_: did you find the reason for the 503 in your web server logs yet?
<Midion_> this is what i see http://pastebin.com/pDVzHbip
<sarnold> Midion_: check your web server logs, either /var/log/nginx/* or /var/log/apache*/*
<Midion_> ok ty
<Midion_> http://pastebin.com/G4BtyqUM
<sarnold> Midion_: it doesn't look like a particularly clean config, but no real errors either. one bad file descriptor from a client socket is hardly the end of the world..
<sarnold> Midion_: check the other logs in that directory? something has to have logged that 503.
<Midion_> ok
<Midion_> http://pastebin.com/k3yJuJkF
#ubuntu-server 2014-03-13
<sarnold> Midion_: wow. that's an amazing pile of errors. which of those have you fixed and which of those haven't you fixed?
<Midion_> chown -R admin:www-data /home/admin/web/websitename.com/public_html maybe because i did this earlier.
<Midion_> because everything root:root and the website would not upload pictures or anything at first.
<sarnold> Midion_: it'd be a good idea to hit reload, see what gets added to the log, fix those problems, hit reload, fix, hit reload, etc. iterate until it's fixed. :)
<Midion_> is there a command line for that if im in ssh.
<sarnold> tail -f /var/log/whatever/file.log
<sarnold> very handy :)
<Midion_> ty
<Midion_> restarted the vps and ran the command you just told me to and the site is back up.
<Midion_> ty
<Midion_> maybe its temporary i think there is an issue with the mail.
<Midion_> i did notice i cannot send out from or receive mail to the site.
<Midion_> emails only come to the site internally
<sarnold> yes, your mail software looks pretty busted..
<Midion_> lol
<Midion_> well at least im learning
<Midion_> guess reload the server and just install lamp with phpmyadmin. since now i finally figured out how to use sftp.
<Midion_> i just thought everything would work installing vestacp.
<Midion_> question if im running lamp. should my site be able to send out emails like forgotten passwords and such?
<Midion_> or do i need to download something separate for email functioning?
<sarnold> Midion_: sending emails is an incredible undertaking these days. email isn't what it used to be.
<Midion_> I just want my website to be able to send out forgotten passwords if someone needs it.
<teward> it's not *that* simple, as sarnold pointed out.
<teward> you need something to send the mail with, and from an IP that isn't residential (so it's not blacklisted)
<Midion_> I have 2 google apps free edition accounts. So I just use those for email.
<sarnold> Midion_: getting your emails past gmail's fairly heavy-hitting antispam rules takes some effort. depending upon your ip address it just may not be possible.
<Midion_> ah
<teward> you also need to see that which sarnold just said
<teward> having said this, your backend application needs to support using external SMTP if you want to use google apps as your method for email
<teward> and you have to restrict registrations/emails to a certain amount in a certain period
<teward> otherwise your emails get rate-limited and then your emails cease
<teward> good for low-traffic, low-email-volume stuff, not so much for 1000 emails a day and such
<teward> and again, sarnold is right :)
<sarnold> well, I figured he was hosting his own mail.. btdt, not much fun, and I'm sure it's gotten worse in the last decade
<sarnold> teward: external smtp to google sounds better than a lot of options :) hehe
<teward> sarnold: not all PHP (assumed!) apps will behave with SMTP
<teward> and then you sometimes have to use... tricky workaround configurations... to make it behave with google smtp
<teward> and it really DOES depend on the volume of the site
<teward> if you're going to be having the system send a lot of emails, then google apps isn't a good choice for this
<sarnold> teward: yeah, seems like the folks in here with the worst problems were trying to do mail from php. sounds miserable to me. :)
<teward> (they WILL block you from sending emails for a week when you routinely explodify with email volume)
<sarnold> aka: 'we can't insert advertising in email. go talk to sendgrid'  :)
<teward> lol
<Midion_> im using drupal and an invision powerboard forum. So I just want people to be able to use the forgot password functions.
<mgw> I'm having an issue getting dnsmasq and resolvconf to play nicely
<mgw>  /etc/resolvconf/resolv.conf.d/base has my name servers in it
<mgw> but they're not propagating to /var/run/dnsmasq/resolv.conf
<mgw> I know I can add nameservers to /etc/network/interfaces, but isn't it possible to use the resolvconf base to bypass having to ifdown/up the interface to pick up the change?
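In principle, yes: resolvconf has a `-u` flag that just re-runs its update scripts, so after editing the base file the interface shouldn't need bouncing. A sketch (8.8.8.8 is a placeholder nameserver; whether the result lands in dnsmasq's resolv file depends on how dnsmasq was pointed at resolvconf):

```shell
# Append a nameserver to the resolvconf base file, then regenerate
# without touching any interface:
echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolvconf/resolv.conf.d/base
sudo resolvconf -u
# Check whether it propagated:
cat /var/run/dnsmasq/resolv.conf
```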
<ice9> when I run clamscan, after a couple of seconds it prints 'killed' and exits
<sarnold> ice9: check dmesg | tail, see if it hit a segv..
<ice9> sarnold: no
<sarnold> ice9: hrm, might be time to break out strace, try "strace -o /tmp/clam.out clamscan" -- check ou the /tmp/clam.out file and see if there's anything interesting there
<ice9> sarnold: +++ killed by SIGKILL +++
<ice9> sarnold: how can I know which process killed it?
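strace can only show that the signal arrived, not who sent it. The most common source of an out-of-the-blue SIGKILL during a memory-hungry scan like clamscan is the kernel's OOM killer, and it announces its victims in the kernel log (a likely cause worth checking, not a certainty):

```shell
# The OOM killer logs its kills to the kernel ring buffer; the
# message may be further back than "dmesg | tail" shows:
dmesg | grep -iE "out of memory|killed process"
```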
<rostam> HI I  have  two disk on my system. I like to install two instances of Ubuntu (one on each disk) on this system. Is there a utility who could help me on this task please? thx
<Midion_> What is the difference between using this command: sudo apt-get  install lamp-server^ phpmyadmin and Taskel to install lamp?
<arrith> Midion_: i'm not sure lamp is an option in tasksel
<arrith> tasksel --list-tasks
<arrith> tasksel --task-packages web-server (List the packages that would be installed by that task)
<arrith> oh so it is
<Midion_> Ok so I setup my server using lamp. I made a new user and gave it sudo permissions. I logged in via sftp and attempted to upload my backup files to /var/www
<Midion_> I got the following. /var/www/backup_3_8_2014.zip: open for write: permission denied
<Midion_> I ran sudo visudo again to check and i do have all the same permissions as root.
<arrith> Midion_: folder permissions issue perhaps
<lordievader> Good morning.
<Alina-malina> hmmmm what is the maximum number of entry processes i can usually set on a VPS with nginx? for example can i put like 40?
<hxm> hi, can I change the partition without restart?
<lordievader> hxm: What partition? If it is the root partition, no.
<hxm> right now /var is in / and I have /home in a different partition, I wanted to split /home and use it for /var
<hxm> is that possible without messing everything up?
<cfhowlett> hxm, y?!
<hxm> ?
<hxm> in /var I have /www and it is getting big
<hxm> and the home partition is a 1Tb at 1% used
<dnano91> hi, i'm running an ubuntu server and have trouble updating. i tried aptitude upgrade, but it got stuck on maria-db server, and now i can't use apt anymore, because it wants to finish the previous update, but it can never seem to stop maria-db.
<lordievader> hxm: LVM would in this case be very useful. Anyhow what you could do is resize /home, create a new partition with that free space. Mount the newly created partition somewhere and copy /var/ over to it. Rename /var/ to /var.old, create a new folder /var, mount the new partition to it. And adjust /etc/fstab.
<hxm> chances are to lose data?
<lordievader> hxm: That's why you move the folder, so you have a backup. But yes, with resizing partitions there are risks involved. Be sure to back up critical data.
<lordievader> Or non-recoverable data.
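lordievader's steps, sketched as commands. /dev/sdb2 is a HYPOTHETICAL device name for the new partition carved out of the shrunken /home; this is best done from single-user or a rescue system so nothing is writing to /var while it is copied:

```shell
sudo mkfs.ext4 /dev/sdb2                # hypothetical new partition
sudo mkdir /mnt/newvar
sudo mount /dev/sdb2 /mnt/newvar
sudo cp -a /var/. /mnt/newvar/          # -a preserves owners/permissions
sudo mv /var /var.old                   # keep the original as the backup
sudo mkdir /var
echo "/dev/sdb2  /var  ext4  defaults  0 2" | sudo tee -a /etc/fstab
sudo mount /var
```

Once the new /var has been running cleanly for a while, /var.old can be deleted to reclaim the space.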
<Midion_> you try rebooting dnano91?
<hxm> lordievader: thanks for the help
<Midion_> Any reason why i would keep getting this in my error log? [Thu Mar 13 06:30:48 2014] [error] [client 75.143.147.134] File does not exist: /var/www/findomli/webmail, referer: http://xxx.xxx.xxx.xxxx/webmail/?_task=mail&_mbox=INBOX&_refresh=1
<lordievader> hxm: No problem.
<Midion_> i have no mail and i only setup lamp with phpmyadmin.
<Midion_> is someone trying to attack my server or something?
<cfhowlett> Midion_, more likely you're misconfigured something
<arrith> Midion_: do you recognize that ip?
<dnano91> any idea how i can fix aptitude?
<rbasak> jamespage: looking at the python-jujuclient MIR now.
<Midion_> no i do not recognize that ip.
<Midion_> its like every second
<lordievader> Midion_: Fail2ban can also do http things, though it might be more related to http authentication. To be honest haven't really looked into fail2ban+http.
<dnano91> isn't there anyone who could help me with my aptitude problem?
<jamespage> rbasak: excellent
 * jamespage goes back to fixing openstack bugs
<Midion_> Do you think CSF + LFD would fix it?
<Midion_> I found a tutorial here https://www.digitalocean.com/community/articles/how-to-install-and-configure-config-server-firewall-csf-on-ubuntu?refcode=7ee052965ae9
<smb> hallyn, Could you review/sponsor my libvirt fixes for S and T (potentially accepting the nominations, too) for which
<smb> hallyn, I added debdiffs in the bug report (bug 1248025)
<jamespage> smb, its annoying our bot has disappeared
<raj__> Could I tell rsnapshot to keep the initial (ie., the very first backup) backup forever, incase I may need it anytime (like for /etc directory ) ?
<Kully> can someone give me a hand? I've got two nic's on the same network I need one's nic to keep the default gate of the lan (10.0.0.1) and the other to take the default gate of the VPN lan 192.168.0.1, when I run route add default gate 192.168.0.1 eth0 I lose connection to the host unless I'm on the VPN... how can I accomplish this?
<pmatulis3> Kully: so don't do that.  you can only have one default gateway
<pmatulis3> Kully: the 192 will be a specific route
<Kully> how do I configure a specific route?
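What pmatulis3 means by a specific route: keep the single default gateway on the LAN side, and route only the VPN subnet's traffic through the VPN gateway. A sketch using the addresses and interface names from Kully's description (adjust the prefix to the real VPN subnet):

```shell
# Keep the LAN's default gateway untouched; send only 192.168.0.0/24
# via the VPN gateway on eth0:
sudo ip route add 192.168.0.0/24 via 192.168.0.1 dev eth0
# Verify: one "default via 10.0.0.1 ..." entry, plus the new route.
ip route show
```

To make it survive reboots, the usual spot is a `post-up` line on the interface stanza in /etc/network/interfaces.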
<dnano91> i can't update anymore cause aptitude tries to finish it's last update, but i can't restart maria-db for some reason, any idea why? and how i can solve this?
<dnano91> *aptitude can't restart maria-db
<pmatulis3> dnano91: pastebin the output of 'sudo apt-get update; sudo apt-get dist-upgrade'
<alexmoldovan> hi pmatulis3   :)
<pmatulis3> alexmoldovan: morning
<alexmoldovan> how come you need up the 3rd?
<pmatulis3> alexmoldovan: dunno, something just "did it" to me
<dnano91> pmatulis3: is there a way to direct output to a file while still getting it to console? 1>log.txt and i can't see anything
<pmatulis3> dnano91: sure
<pmatulis3> dnano91: echo blah | tee file
<dnano91> thanks :)
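Applied to dnano91's case, pmatulis3's `tee` pattern looks like this (stderr merged so errors land in the log file too):

```shell
# Capture the full upgrade transcript while still watching it live:
sudo apt-get update 2>&1 | tee update.log
sudo apt-get dist-upgrade 2>&1 | tee dist-upgrade.log
```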
<dnano91> pmatulis3:  http://sprunge.us/RXiB this is how far it goes. then nothing happens
<pmatulis3> dnano91: my 1st guess is that the packages in the mirror are screwing things up
<pmatulis3> http://tweedo.com/mirror/mariadb/repo/10.0/ubuntu/
<dnano91> pmatulis3: i'll try changing mirror then
<Daviey> jamespage / roaksoax: python-seamicroclient FTBFS, given back as it seemed an odd error.
<jamespage> Daviey, ack
<jamespage> I'll take a look if it does it again
<pmatulis3> dnano91: well, you'll prolly need to first remove the existing package.  is that feasible?
<pmatulis3> dnano91: that mirror provides 'out-of-archive' packages
<hallyn> smb: will do
<dnano91> pmatulis3: you mean uninstall maria db?
<smb> hallyn, Great, thanks. Note that this also needs to go to some cloud archive, and I am not sure about the procedure for that. Or would it be automatic?
<pmatulis3> dnano91: yeah.  why are you using those packages anyway?
<dnano91> pmatulis3: i wanted to switch from mysql to maria db. later tried to switch back but it's not that easy
<dnano91> pmatulis3: i should probably have switched to maria 5 instead of 10
<hallyn> smb: yeah afaik that's automatic.  whether by a robot, or by jamespage (another robot)
<smb> hallyn, :D
<jamespage> smb, libvirt?
<smb> jamespage, yep
<jamespage> smb, hallyn: I've still not figured out the failing test yet so libvirt -> CA is currently blocked
<smb> jamespage, Is that both Saucy and Trusty or just T?
<jamespage> smb, its a manual merge into the ca for both unfortunately
<smb> jamespage, Oh ok cloud archive and not California... :)
<jamespage> smb, but I'm just blocked on T libvirt
<smb> jamespage, Ok, but  just to make sure, that should not keep us from updating the package(s) for the normal archive. Just migration to the cloud archive will be delayed.
<jamespage> yes
<smb> ok cool
<Daviey> jamespage / roaksoax: bug 1292029 (failed twice)
<jamespage> tsimpson, hey - please could we get our bot up and running again?
<Daviey> jamespage: Don't know if it's still the case, but the bot here was run by Soren?
<Daviey> soren ^^
<jamespage> was it? I was just reading the bots page
<hallyn> jamespage: i assume you've tried and succeeded building libvirt in a *trusty* schroot?
<jamespage> hallyn, yes
<hallyn> jamespage: what's the policy on the archive then - how bout just disable that test for that archive?
<jamespage> hallyn, the only difference I could see was that symbolic-functions was used on 12.04 but I don't think it is on 14.04
<jamespage> afaict
<hallyn> oh
<hallyn> i was understanding that backwards yesterday
<hallyn> but as i say, it built for me in a precise container
<jamespage> hallyn, http://www.redhat.com/archives/libvir-list/2013-October/msg00950.html
<jamespage> hallyn, how did you build? would that have enabled all of the hardening flags?
<hallyn> jamespage: i first did install that cloud archive; maybe that added something
<hallyn> smb: about your patch,
<hallyn> does it do the right thing if /proc/xen/capabiltiies does not exist?
<hallyn> oh yeah, i guess so.
<smb> hallyn, I think it should and did as virFileReadAll returns an error in that case
<hallyn> can that file be empty?
<hallyn> and if so what does that mean?
<smb> hallyn, Yes, that means you did mount xenfs to /proc/xen on a normal Xen guest
<smb> hallyn, The file is empty then and on a dom0 it returns control_d
<hallyn> and if control_d is there, you want to say no xen caps detected?
<hallyn> oh no.  phew.  the reuse of status is confusing there :)
<smb> hallyn, No, the other way round, but yeah, it might be a tad too "efficient"
<smb> hallyn, By now that is the version accepted upstream so we gonna have to live with that evil programming style. :)
<hallyn> smb: yup, pushed the pkg, thanks.
<hallyn> jamespage: oh, so i built using debian/rules build && fakeroot debian/rules binary.  not sure if that enables *all* the flags, i would have thought so
<smb> hallyn, Great, thanks. That is at least one down...
<hallyn> smb: the saucy one will have to wait as 8.6 is in proposed.
<hallyn> smb: can you put it in your own tickler file or whatever to check in 2 weeks if saucy-proposed has been cleared?
<smb> hallyn, Ah ok.
<hallyn> smb: thanks
<smb> Trying to make a knot into my towel...
<hallyn> ah what the hell i'm putting it on my own for the 24th
<jamescarr> who maintains this??? http://cloud-images.ubuntu.com/locator/ec2/releasesTable#
<jamespage> Daviey, seamicroclient build fixed
<jamespage> * d/control: Add missing BD on openstack-pkg-tools (LP: #1292029).
 * jamespage sighs
<jamespage> the folly of not using upstream release tarballs....
<Daviey> jamespage: Yeah.. wonder why it worked in Debian? One of its BDs already depending on it?
<jamespage> Daviey, I wonder :-)
<jamespage> its not like it might have built somewhere un-official
<Daviey> jamespage: Oh *sigh*, that hadn't even crossed my mind.
<jamescarr> here you go... how to pull the latest EC2 Ubuntu AMI https://gist.github.com/jamescarr/9531577
<jamescarr> wish there was a better way!
<rbasak> jamescarr: are you aware of sstream-query?
<jamescarr> sstream-query?
<jamescarr> string json?
<jamescarr> *streaming
<jamescarr> first result from google https://bugs.launchpad.net/simplestreams/+bug/1290744
<rbasak> One moment, I'll knock up a sample invocation for you. Can you give me an example query in English?
<rbasak> (eg. what region, arch, etc?)
<jamescarr> Give me the AMI for raring for amd64 arch  in us-east-1
<rbasak> $ ubuntu-cloudimg-query amd64 us-east-1
<rbasak> ami-0b9c9f62
<rbasak> I'm not sure if that uses simplestreams or not.
<jamescarr> srsly???
<rbasak> sstream-query allows you to query the metadata more directly, but it's longer to invoke.
<jamescarr> I asked that question a lot and no one answered
<rbasak> Yes srsly :)
<rbasak> I did :-P
<jamescarr> I asked on stack overflow, twitter, irc chans ;)
<rbasak> Oh, I didn't say raring
<rbasak> $ ubuntu-cloudimg-query amd64 us-east-1 raring
<rbasak> ami-951524fc
<rbasak> I presume it defaults to something sensible
<jamescarr> that's it
<jamescarr> dammit
<jamescarr> oh well, it was a fun way to spend 20 minutes
<rbasak> jamescarr: sorry this stuff is not better documented.
<rbasak> smoser has an outstanding task to do some of it. It should be in the server guide really.
<jamescarr> no problem ;)
<jamescarr> just replaced 15 lines of code with one ;)
<jamescarr> so there's that
<jamescarr> magnificent
<jamescarr> also saves me from adding a silly dependency to my build scripts
<rbasak> We support many more providers than just EC2 now, so I think this command might be deprecated eventually, and replaced with a more generic query tool.
<rbasak> The tools and metadata are already in place. sstream-query queries it, but perhaps ubuntu-cloudimg-query needs to be written to source simplestreams and default to EC2 for compatibility.
<jamescarr> what if I wanted instance store?
<jamescarr> the ami returned is for ebs
<rbasak> $ ubuntu-cloudimg-query amd64 us-east-1 raring instance
<rbasak> ami-1d132274
<rbasak> There doesn't appear to be a manpage, but "ubuntu-cloudimg-query -h" has some information.
<jamescarr> ty
<cocoa117> so syslog-ng is a program that runs after the kernel and libraries are ready, so it can handle the log files? and a program such as sendmail will still need to be configured to use syslog-ng to handle its log?
<pmatulis3> cocoa117: why did you remove rsyslog?
<cocoa117> pmatulis3, i been told "we" are using syslog-ng at here...
<pmatulis3> cocoa117: strange
<cocoa117> pmatulis3, i am learning, so just take what has been thrown at me
<cocoa117> so am i right to assume that syslog-ng or rsyslog or syslogd is the program designed to handle logs, so they can be redistributed or managed? individual programs such as bind, sendmail, httpd will need to be configured to use those syslog programs in order for the administrator to handle the logs properly?
<pmatulis3> cocoa117: apps will log.  usually to /var/log/syslog.  but you can customize where stuff goes and what gets logged
<pmatulis3> cocoa117: then again, i've never used syslog-ng, but it should work
<cocoa117> pmatulis3, you mean when a program is designed to use the syslog() library it will automatically send its messages to /var/log/syslog? i assume your /etc/syslog.conf will allow you to configure what goes where?
<pmatulis3> cocoa117: yes, for the first part of your question.  dunno about second part since rsyslog uses different configuration files from syslog-ng
<cocoa117> pmatulis3, that's fine. i was trying to understand the concept here
<pmatulis3> woot. rsyslog on Trusty will have native support for log file encryption and log anonymization
<pmatulis3> (ip addresses anyway)
<sarnold> stgraber: hey, are you the right person to look at https://bugs.launchpad.net/ubuntu/+bug/1292019  ? :)
<sarnold> stgraber: (another user was in here within the last few days complaining about too-frequent dhcp packets, but he got tired of looking into it before filing a bug. sigh.)
<stgraber> sarnold: seems to be a netcfg bug, triaged
<sarnold> stgraber: thanks! :)
<mgw1> given an ubuntu 12.04 system with resolvconf and dnsmasq, and static ip assignment, is there a way to update nameservers without editing /etc/network/interfaces and bouncing the interface?
<mgw1> in particular the bouncing part
<stgraber> sarnold: any progress on the cgmanager MIR?
<sarnold> stgraber: hehe, thanks for the poke :) not yet, sorry.
<ice9> clamav get killed after couple of seconds from running it
<ice9> clamsac!
<ice9> clamscan
<mgw1> stgraber: I see you wrote a blog post on resolvconf/dnsmasq some time ago. Do you have any suggestions on my issue?
<stgraber> mgw1: so if you have dnsmasq installed, resolvconf will only set 127.0.0.1 as your DNS server and ignore any other setting you may have, so just update dnsmasq's config and restart it
<mgw1> right, but how can I get dnsmasq to make a dns change?
<ice9> clamscan gets killed a couple of seconds after running it
<mgw1> stgraber: ^
<stgraber> mgw1: no idea, I don't know dnsmasq, though I suspect it'd involve changing its config and sending it SIGHUP or something
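stgraber's guess is close but worth refining: a SIGHUP makes dnsmasq clear its cache and re-read /etc/resolv.conf, but it does not re-read its own configuration files, so changing upstream servers needs a restart. A hedged sketch (the drop-in file name and server addresses are placeholders; this assumes the standalone dnsmasq package, not the NetworkManager-spawned instance):

```shell
# point dnsmasq at new upstream nameservers via a drop-in config file
printf 'server=192.0.2.1\nserver=192.0.2.2\n' | \
    sudo tee /etc/dnsmasq.d/upstream.conf
# SIGHUP only clears the cache and re-reads /etc/resolv.conf;
# config files require a full restart:
sudo service dnsmasq restart
```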
<mgw1> strgraber: ok, thanks
<mgw1> *stgraber
<vedic> Hey guys, I just now setup nginx on my remote server. From linux virtual machine if I access the site, I get the msg: "The connection to the server was reset while the page was loading." but when I access it from IE or FF from Windows, site works gr8. What could be the reason?
<ice9> vedic: then your virtual machine is not working properly with the outside world, check your iptables and how the virtual machine is connecting to the internet
<rbasak> vedic: that message means that the TCP connection is getting broken. It sounds likely to me that the issue is with the virtual machine's networking with the host machine.
<vedic> ice9, rbasak: Guest is on bridge mode
<rbasak> vedic: try tcpdump/wireshark on the host, guest and your server to see what is killing the TCP connection.
<rbasak> AIUI, wireshark is available on Windows too, if you have a Windows host.
<rbasak> You need at least a rudimentary understanding of TCP to interpret the results though.
<vedic> rbasak: Firewall rule is default on guest. Deny incoming and allow outgoing
<Midion_> What would be the best way to edit sshd_config from a server standpoint? To change root login to no? To deny root? To allow only certain users? Or to allow only certain groups?
<thurstylark> What do you guys think about Ajenti for a Webmin replacement for a couple of home servers? I'm tired of webmin at this point...
<rbasak> Midion_: I usually add AllowUsers to restrict to a set of named users. Easier to audit that way.
<rbasak> (eg. each one has an ssh key and a disabled password)
<rbasak> I also enable 2 factor auth
<rbasak> I blogged how to do 2 factor here: http://www.justgohome.co.uk/blog/2013/07/better-two-factor-ssh-authentication-on-ubuntu.html
<rbasak> Or at least force key login
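A sketch of the sshd_config directives rbasak describes (usernames are placeholders; note that sshd honours the first occurrence of a keyword, so edit the existing lines in /etc/ssh/sshd_config rather than appending duplicates):

```shell
# in /etc/ssh/sshd_config:
#   PermitRootLogin no          # deny root; log in as a named user and sudo
#   PasswordAuthentication no   # force key login
#   AllowUsers alice bob        # only these accounts may ssh in
# validate the syntax, then restart (existing sessions stay open):
sudo sshd -t && sudo service ssh restart
```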
<Midion_> ty rbasak i'll read your blog now.
<Midion_> Is the best way to setup lamp just to run: sudo apt-get  install lamp-server^ phpmyadmin
<Midion_> Sadly my phone kicked the bucket. So I don't know if I can use this method.
<thumper> smoser: ping
<rbasak> teward: nice blog, and thank you for your hard work!
<Midion_> Question..... So I setup my new user and gave him sudo permissions. I also added the name to the www-data group. I ran the chown command on /var/www as www-data:www-data but I cannot transfer files to it over sftp still.
<Midion_> What am I doing wrong?
<Midion_> I also added umask 007 to /etc/apache2/envvars.
<sarnold> Midion_: did you sftp www-data@servername ?
<pmatulis3> jamespage: hi, is this completely baked? -- https://code.launchpad.net/~openstack-charmers/charms/precise/keystone/ssl-everywhere
<pmatulis3> jamespage: and if so, how do users eventually get that charm?
<Midion_> no i did sftp username@servername
<Midion_> I previously added the user to the www-data group.
<Midion_> so its better to sftp using www-data instead of the user?
<sarnold> do /var/www/ and its child directories have group write permission turned on?
<sarnold> it's better to not have your data owned by www-data at all (despite the stupid name)
<Midion_> I think write permission is turned on. would that be why I had to add umask 007 to /etc/apache2/envvars?
<Midion_> The only thing I ran on /var/www was chown -R www-data:www-data /var/www
<sarnold> Midion_: ls -ld /var/www
<sarnold> Midion_: the apache umask setting will affect of course only files that apache creates. I hope it is not many :)
<Midion_> ok I ran ls -ld /var/www, then sudo service apache2 restart, disconnected from sftp and reconnected, and it still says write permission failed.
<Midion_> Thank you I didn't know umask was only for files apache creates.
<sarnold> did your ls -ld show group write privileges?
<Midion_> when i run ls -l /var/www I get back -rw-r--r-- 1 www-data www-data
<sarnold> ah there you go, you don't have group write privileges set on the directory
<mardraum> you also don't have execute, wtf are you attempting to do
<sarnold> mardraum: i assumed that meant he forgot the -d and was instead showing the output of a file :) heh
<Midion_> im setting up a server. i ran sudo apt-get install lamp-server^ phpmyadmin and now im trying to put my site backup zip in /var/www
<adam_g> jamespage, https://review.openstack.org/#/c/70750/ is this going to destroy you if it lands in havana?
<midion_> Ty everything is working now after running sudo chmod -vR g+w /var/www/
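The fix midion_ landed on can be seen in miniature in a scratch directory: without g+w the group cannot create files in a directory, and adding setgid (the extra s) keeps new files in the directory's group, which is handy for a shared /var/www:

```shell
# scratch demo of group-write + setgid directory permissions
mkdir -p scratch/www
chmod 755 scratch/www
stat -c %A scratch/www   # drwxr-xr-x : group has no write bit
chmod g+ws scratch/www
stat -c %A scratch/www   # drwxrwsr-x : group write + setgid
```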
<sarnold> woo :)
<midion_> now to read a tutorial on making my mysql databases using the cli, just in case i run into a situation where i cannot use phpmyadmin.
#ubuntu-server 2014-03-14
<smoser> thumper, pong.
<Midion__> If adding a user to a certain database in mysql, is this the proper command? GRANT ALL PRIVILEGES ON DATABASENAME.* TO 'DATABASEUSERNAME'@'localhost';
<Midion__> I want the user to have access to the database and all tables inside the database.
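Midion__'s statement is valid MySQL, and granting on DBNAME.* does cover every table in that database. A hedged sketch applying and checking it (db, user, and password are placeholders; on 5.5-era MySQL the GRANT ... IDENTIFIED BY form also creates the account if it does not exist):

```shell
mysql -u root -p <<'SQL'
GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'localhost' IDENTIFIED BY 'secret';
SHOW GRANTS FOR 'appuser'@'localhost';
SQL
```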
<MavKen> I am running 12.04 on my server... Should I run dist-upgrade?  I am getting... The following packages have been kept back:
<MavKen>   linux-headers-generic-lts-raring linux-image-generic-lts-raring
<Midion__> ok all errors on my server are gone but this: [Thu Mar 13 22:10:10 2014] [notice] Graceful restart requested, doing restart [Thu Mar 13 22:10:10 2014] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.10 with Suhosin-Patch configured -- resuming normal operati
<basketball> can i set up a old 2002 computer as a server
<MavKen> yep
<MavKen> common use for an old machine
<MavKen> clean install of ubuntu server and you could have a decent setup basketball
<basketball> can i set up a old 2002 computer as a server
<basketball> whoops
<basketball> MavKen,  is it easy to set up and will it be slow
<sarnold> MavKen: I suggest apt-get -u dist-upgrade -- see which packages apt wants to delete, -then- decide if you want to do the upgrade...
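sarnold's advice as a sequence; -s is a strictly read-only variant if you want to inspect before committing (assumes a Debian/Ubuntu apt):

```shell
sudo apt-get update
sudo apt-get -s dist-upgrade   # --simulate: prints what would change, touches nothing
sudo apt-get -u dist-upgrade   # -u: show the full package list, then prompt before acting
```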
<sarnold> Midion__: that message may just be for log rotation or something similar. if you see it roughly daily that's not a big deal.
<basketball> what version should i use 12.04 or 13.10
<sarnold> basketball: probably 12.04 LTS; it's still supported for another three years. 13.10 support will end in another three months or so, at which point you'll be forced to upgrade to 14.04 LTS.
<MavKen> basketball, how much RAM do you have?
<basketball> like 80 mb
<sarnold> basketball: while we all expect 14.04 LTS to be good, it'd be nice if you could upgrade to it on your schedule rather than rushed into it. :)
<basketball> if not less
<MavKen> if you are just learning... use 13.10 because the virtual hosts are set up like 14.04's will be... it is different than 12.04... subtle but different
<MavKen> you need at least 512mb RAM... ideally 1GB
<basketball> i have 240 mb
<MavKen> i have been running 13.10 on my production server, I have 12 clients hosted on it.
<basketball> MavKen,  can i set it up like dropbox and if i only have 240 mb will it still work
<sarnold> while we were all able to make good use of an 80 megabyte machine back in 2002, I wonder about today's software
<MavKen> no
<basketball> no?
<MavKen> how can you only have that much ram on a 2002 era computer?
<MavKen> you sure you are not talking about the cache?
<basketball> it says memory 240 mb
<MavKen> where?
<MavKen> sarnold, thanks for that info
<sarnold> MavKen: did you find out what was going to be uninstalled?
<MavKen> I have not run it yet... i ran upgrade to get a few things...will ssh back in shortly and try that
<MavKen> ahh... nothing removed...
<MavKen> The following NEW packages will be installed:
<MavKen>   linux-headers-3.8.0-37 linux-headers-3.8.0-37-generic
<MavKen>   linux-image-3.8.0-37-generic
<MavKen> The following packages will be upgraded:
<MavKen>   linux-headers-generic-lts-raring linux-image-generic-lts-raring
<MavKen> 2 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
<sarnold> 240 is more like it, you might not get a nice and purty owncloud experience but running svn over apache ought to get you a tolerable web-dev experience.. sftp will of course just be awesome
<sarnold> MavKen: yeah you probably do want new kernels :) new kernels come with security fixes
<MavKen> yeah... what is the best command to use when rebooting production server and is it best practice to reboot on occasion?
<sarnold> MavKen: you'll have to gauge the severity of things fixed in the kernel vs your local users and sometimes potential for remote exploits when deciding when to reboot
<sarnold> dinner time ;) have fun
<basketball> MavKen,  the bios says 256 memory
<MavKen> you can buy memory very cheap... 1GB is $20 at Fry's
<basketball> MavKen,  so it wont work with only 256
<MavKen> https://help.ubuntu.com/community/Installation/SystemRequirements
<MavKen> it says it will
<MavKen> ubuntu-server not ubuntu-desktop
<Midion__> Can I uninstall phpmyadmin using the following?  sudo apt-get purge phpmyadmin  Will this mess anything up if I do?
<Midion__> I will just do everything cli.
<Midion__> I just dont want to mess anything up trying to uninstall it.
<basketball> sarnold,  can you run ubuntu server without keyboard mouse or monitor just the computer
<basketball> I installed ubuntu server on a usb and when i boot i get Non-System disk or disk error
<arrith> basketball: make sure you can boot to usb
<basketball> ok brb trying
<arrith> Midion__: that might work, but it isn't guaranteed to get rid of everything, since phpmyadmin might put things in odd places
<arrith> basketball: it's in the bios
<basketball> yes it is bootable
<basketball> arlen,
<arrith> basketball: do you still get the error?
<arrith> basketball: you might not have usb high enough in your boot order in your bios
<basketball> I tested it on this laptop
<basketball> boot order usb is #1
<basketball> I am performing a hard drive check, it is at 60%
<arrith> ok
<arrith> basketball: are you installing ubuntu server on the laptop or another computer?
<Midion__> arrith: ok thanks
<basketball> another computer
<arrith> Midion__: doing a reinstall might be better. but only do that i guess if you run into issues
<arrith> basketball: you should check in that other computer's bios if usb is set higher than hard drive, in boot order
<basketball> arrith,  that is what i did check
<basketball> i am talking on laptop
<arrith> basketball: i'm not sure i understand, what do you want to do with the laptop?
<basketball> arrith,  nothing
<basketball> arrith,  i am installing it on desktop
<Midion__> I ran sudo apt-get purge phpmyadmin and then sudo apt-get autoremove. I think its all gone now.
<arrith> basketball: ok. so you checked the desktop bios?
<basketball> yes
<arrith> Midion__: yeah autoremove is good. alright
<arrith> basketball: and you get the error on the desktop?
<basketball> and it is performing a hard drive check
<basketball> and yes
<arrith> basketball: you might have to install with a CD
<basketball> i cant as it doesnt reconize dvds and i dont have cd with enough space
<arrith> ah
<arrith> basketball: have you booted to usb on anything before on that desktop?
<basketball> no my granddad gave it to me today
<basketball> the hard drive check is at 80%
<basketball> arrith,  what next?
<basketball> should i reinstall xp then try to install ubuntu server
<sarnold> sigh I know we've got some server images around here somewhere that will fit on a cd..
<sarnold> basketball: ah, finally found it. this might be the thing: https://help.ubuntu.com/community/Installation/MinimalCD
<basketball> sarnold,  i am trying a xp disk and that works
<Midion__> That was a version of windows that I did like.
<sarnold> heh, not me, I thought it looked like cartoonland
<sarnold> win2k was the last one I thought was decent :)
<basketball> windows is poop go ubuntu
<Midion__> Just thinkin about windows 8.1 stresses me out. lol
<basketball> i hate windows 8
<arrith> basketball: you want to dual boot?
<basketball> nope just server
<arrith> basketball: you can just turn it off when it does a hdd check like that, that's a windows thing. whenever linux checks it's called 'fsck' (stands for "filesystem check")
<basketball> I am installing xp so that it will get rid of that error then i will erase xp and install ubuntu server
<arrith> basketball: ok so when you try to boot with the usb inserted, do you see any messages about "boot list"?
<basketball> no
<arrith> basketball: that error is from the contents of the hdd, since you're going to be just having ubuntu on the hdd, it doesn't matter if there's an error
<arrith> basketball: do you press f12 to get into the bios on the desktop?
<basketball> i am installing xp i cant check
<arrith> basketball: since you don't want xp, you can hold the power button until it turns off
<arrith> basketball: any errors on it now are from the old windows
<arrith> there's this one cd that i remember
<arrith> i think i found it
<arrith> http://download.plop.at/files/ploplinux/4.2.2/ploplinux-4.2.2/ploplinux-4.2.2.iso
<arrith> from http://www.plop.at/en/ploplinux/download.html
<arrith> basketball: when you boot that iso it will give you a menu with "usb" as one of the options. use the arrow keys to select "usb" then hit enter
<arrith> you can use that to install ubuntu server from usb
<arrith> the iso is about 80 megabytes
<basketball> arrith,  and sarnold  the link worked and it is installing
<arrith> basketball: did you use the plop iso?
<basketball> now is there a way to have the server run without a monitor, keyboard, or mouse
<arrith> or minimal
<basketball> no i used the https://help.ubuntu.com/community/Installation/MinimalCD
<arrith> ah
<arrith> basketball: just don't install any desktop, and select "openssh server" during the install
<arrith> then you can ssh in
<basketball> you know how dropbox you can go to dropbox.com
<basketball> can i do something like that with this where i can access my files anywhere
<arrith> basketball: yes. there is dropbox for linux. but sparkleshare is better, in my opinion
<basketball> no like my own server version
<sarnold> basketball: the computer that runs my irc client is a little pandaboard with no monitor and no keyboard attached, it just has power and ethernet. :)
<basketball> how can i put my files in the cloud
<sarnold> basketball: some computers will fail to boot if they don't have a keyboard attached -- that stupid "Error! No Keyboard! Press f1 to continue!"
<arrith> basketball: google sparkleshare and owncloud
<arrith> basketball: also freedombox and mediagoblin
<basketball> arrith,  i mean like have it on my own server instead of theirs
<cfhowlett> !cloud|basketball,
<ubottu> basketball,: Ubuntu Cloud Infrastructure is a ready to deploy Infrastructure-as-a-Service (IaaS) based on OpenStack. See https://help.ubuntu.com/community/UbuntuCloudInfrastructure for further details.
<arrith> basketball: sparkleshare is all local
<arrith> basketball: read up on it already *hits with stick* :P
<sarnold> and don't read about openstack, that's a waste of time on your machine :)
<basketball> arrith,  how do i use sparkle share if i dont want a monitor
<sarnold> very few things require a monitor
<basketball> does sparkle share
<arrith> basketball: you use your server just for file storage
<arrith> basketball: server for sparkleshare does not
<basketball> arrith,  yes that is what i want
<basketball> it is still installing
<arrith> basketball: google ubuntu sparkleshare server
<sarnold> ouch, sparkleshare requires the mono runtime.. that might be a bit heavy on a twelve year old machine with little memory
<arrith> read up on guides to set it up
<arrith> sarnold: oh. might be
<arrith> sarnold: eh if it's not running anything else
<sarnold> arrith: yeah, maybe. patience is a virtue anyway :)
<basketball> sarnold,  what else can i run
<sarnold> basketball: dunno, I've always been content with scp :)
<arrith> rsync heh
<basketball> scp?
<sarnold> oh yeah, rsync is bloody awesome :)
<basketball> hey i am at software selection what do i do
<arrith> btw funfact, sparkleshare requires nothing on a server besides ssh and rsync
<arrith> sarnold: so the client would do the mono stuff
<sarnold> arrith: oh? is there a different package than sparkleshare that does the server-side work?
<arrith> basketball: openssh server, but make sure no desktop environment
<sarnold> arrith: wait. is there a server-side software? :)
<arrith> sarnold: no, it's just supposed to work with a ssh account
<arrith> sarnold: there isn't :)
<sarnold> arrith: perfect for basketball :)
<arrith> basketball: read up on the ubuntu server guide
<basketball> if i pressed enter does that install it
<basketball> do you have a specific link
<arrith> basketball: https://help.ubuntu.com/lts/serverguide/
<arrith> basketball: space toggles, enter goes to the next screen. at least in the debian installer
<basketball> shoot then i didnt install openssh
<sarnold> just be sure to run apt-get install openssh-server openssh-client  some time before rebooting then :)
<arrith> basketball: you can run tasksel at any time
<arrith> basketball: sudo tasksel
<arrith> you can also go back in the installation menu, but that's not necessary
<basketball> it is still installing i cant run anything
<sarnold> basketball: that's fine, no hurry.
<basketball> I think the hardest thing i have ever done is replace the hard drive on this thing
<cwiggs> Can anyone share some thoughts on running my server off of raid1 thumbdrives?
<cwiggs> I was thinking of running ubuntu off of two thumbdrives and then use my 2 1tb drives for data?
<sarnold> cwiggs: probably pretty poor speed reading / writing to those things but I suppose you're prepared for that if you're asking :) hehe
<cwiggs> Worse than a 3.0gb sata hdd?
<basketball> do i install grub boot loader to the master boot record
<cwiggs> grub writes its boot code into the MBR
<sarnold> cwiggs: the ones I've had tended to do 15 MB/s
<cwiggs> sarnold: interesting, i figured they'd be faster since they are SSDs
<sarnold> basketball: I think so, yes
<sarnold> cwiggs: oh, actual SSDs hooked up over usb?
<cwiggs> no, thumbdrives, usb drives, jumpdrives, etc.  but they are SSD right?  just very small ones?
<cwiggs> they aren't magnetic drives, thats for sure
<basketball> do i set time to UTC
<cwiggs> basketball: yes
<sarnold> cwiggs: similar but not nearly the same speed :) the ssd in my laptop gets something like 500 MB/s ..
<cwiggs> sarnold: i just figured the HDD would be the slowest, then thumbdrives, then SSDs
<sarnold> cwiggs: test yours, dd if=/dev/zero of=/dev/sdwhatever bs=16k count=1000 conv=fsync
<basketball> sarnold,  it is booting now what
<sarnold> basketball: log in, install openssh-server and openssh-client
<cwiggs> sarnold: i haven't bought the thumbdrives yet, i was just wondering what the benefits/downfalls would be
<basketball> how do i make it auto boot to the server
<arrith> cwiggs: usb 2.0 is slower than sata2
<arrith> cwiggs: if you buy anything, buy like a 64 GB SSD to run your OS on
<arrith> cwiggs: probably could get one for $50 or less
<cwiggs> arrith: a 16gb usb drive is much cheaper than a 64gb ssd drive though, and i'd like to raid them.
<sarnold> cwiggs: most thumbdrives are mighty slow. usb2 is painful. usb3 is pretty speedy but I think most 'thumb drive usb3' would top out at speeds roughly equal to 5400rpm drives, probably not yet to 7200 rpm drive speeds
<sarnold> basketball: how do you mean? it'll boot right into ubuntu, right?
<basketball> it waits on grub and i have to press enter
<arrith> cwiggs: i would take a sata2 drive over a usb thumbdrive, hm raid 0 might help. but an ssd is so much better
<arrith> sarnold: there are some pricey usb3.0 thumbdrives that are effectively tiny SSDs, they do a few hundred megs a second
<sarnold> arrith: OOOO
<cwiggs> arrith: I was actually thinking raid1, for redundency.  True, but i was trying to be cheap
<sarnold> arrith: do tell more :)
<arrith> sarnold: yeah it's sick
<basketball> sarnold,  how do i install openssh client and server
<sarnold> basketball: sudo apt-get install openssh-server
<arrith> sarnold: basically say on newegg, any usb3.0 usb drive should be around there
<sarnold> basketball: probably the client is already installed but if not openssh-client for it
<arrith> cwiggs: well for speed, which you're gonna need.
<basketball> unable to locate package
<arrith> cwiggs: one thing, thumbdrives are definitely not meant to be used that way, as in booted off of and having OSes installed on them. i tried to do that for like a month and it was hell
<arrith> basketball: you really need to read that server guide
<arrith> basketball: also google tasksel
<cwiggs> arrith: what happened? a lot of bitcoin miners were running off of usb drives
<sarnold> arrith: wow! :)
<sarnold> 200MB/s: http://thewirecutter.com/reviews/the-best-usb-3-0-thumb-drive/
<arrith> cwiggs: i forget the details but it was buggy and swapping was the worst thing ever. this was on a laptop though
<sarnold> arrith: awesome, thanks for correcting that old bias :)
<arrith> sarnold: yeahh check that out
<arrith> sarnold: np. usb 3.0 is awesome
<cwiggs> arrith: good to know, thanks
<sarnold> basketball: sudo apt-get update -- try again?
<arrith> i keep waiting for phones and routers to freakin come out with usb3.0
<arrith> cwiggs: server might be different actually
<basketball> i did run that
<sarnold> cwiggs: haha, I take it back :) knock yourself out, that's bloody awesome :)
<arrith> cwiggs: if they're usb3.0 i guess
<cwiggs> not usb3.0 : (
<arrith> ehh
<arrith> i say buy it and try
<arrith> then buy an SSD
<arrith> :P
<cwiggs> haha
<basketball> got it
<cwiggs> that would be nice
<cwiggs> another question for you, if i have two 1tb drives, what is the best way to add them together to get 2tb?  my friend said to use Glusterfs so that i could always add more drives later
<cwiggs> but i could use LVM or MooseFS?
<sarnold> never heard of moosefs..
<basketball> sarnold,  i installed it now what
<sarnold> basketball: test it :) ssh localhost
<cwiggs> http://www.moosefs.org/
<cwiggs> it is pretty awesome
<cwiggs> my understanding is GlusterFS is better
<arrith> http://www.newegg.com/Product/Product.aspx?Item=N82E16820211718
<basketball> sarnold,  how do i access files from web browser on a windows pc and linux pc
<arrith> cwiggs: that's crazy what
<sarnold> heh, a pal who tried out glusterfs -hated- it with a passion. never seen him hate something so much
<arrith> cwiggs: just lvm or mdadm, or if you want to get super fancy the best is zfsonlinux
<cwiggs> sarnold: lol
<sarnold> basketball: try owncloud. it might be too heavy for your machine, but worth a shot.
<arrith> depends on if you want performance or redundancy kind of
<cwiggs> i just want performance
<sarnold> hrm, moosefs looks neat right up until "fuse"
<cwiggs> just gonna hold media that i can always d/l again
<arrith> basketball: sparkleshare might be better for what you want, but it's not from a browser
<sarnold> I know fuse works well for a lot of folks but .. I'd hate to rely upon it for something I cared about.
<basketball> i need to access it from school and home
<arrith> cwiggs: do a big raid 0 then, you can always add drives to that then grow the fs. ext4 on software raid 0
<arrith> cwiggs: reading up on zfsonlinux is good to do though
<cwiggs> arrith: hmmm, i'll check out zfsonlinux
<cwiggs> thanks
<arrith> cwiggs: but yeah you can use standard mdadm for a raid0
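A hedged sketch of the mdadm RAID 0 route arrith mentions (device names are placeholders, and RAID 0 means one failed drive loses the whole 2 TB, so only do this for re-downloadable data, as cwiggs plans):

```shell
# stripe two 1 TB drives into one ~2 TB block device
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/media
sudo mount /dev/md0 /srv/media
```

Growing a RAID 0 later by adding a disk is possible with newer mdadm/kernels (mdadm --grow, then resize2fs), but it is a slow reshape; check your versions before counting on it.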
<cwiggs> arrith: there are so many options, it's a little overwhelming
<cwiggs> makes me want to just keep the server how it is.
<arrith> doesn't have many features
<arrith> cwiggs: well like glusterfs and stuff, unless you want to learn about them, are only for people with like lots of servers
<basketball> what is mysql and why do i need to set a password
<cwiggs> arrith: true, i started in a sysadmin type place about 6 months ago , so it wouldn't hurt to learn more
<sarnold> zfsonlinux looks like awesome stuff, I intend to set up my own nas-server-of-doom using it Sometime Soon; but I wouldn't want to use ZoL on /, it just looks like a real hassle
<cwiggs> mysql is used to store the username/password data for owncloud
<sarnold> cwiggs: cripes, over-engineer much? :)
<cwiggs> sarnold: haha, just a little
<sarnold> don't get me wrong, I tend to think of mysql as a toy -- unfairly, but there it is -- a bloody username:hashed_password file would do the job fine for 99.9% of users though.
<basketball> sarnold,  once this installs how do i add the files to it
<sarnold> basketball: no idea :) never seen it before
<basketball> the owncloud
<sarnold> basketball: I hope it'll have a /usr/share/doc/owncloud/README that will give some hints though :)
<cwiggs> Basketball: it's pretty easy
<basketball> cwiggs,  can i put my desktop / server under the desk without it making noise
<cwiggs> basketball: it's mostly web based, so you can use a web browser to go to [ip of owncloud server]/owncloud
<cwiggs> basketball: i'm not sure what you mean?
<cwiggs> basketball: it'll make just as much noise as a regular computer
<basketball> like without a monitor keyboard and mouse
<cwiggs> oh, yeah
<cwiggs> you can run it headless
<cwiggs> use ssh to work on it
<basketball> can i add files from web browser
<cwiggs> yep
<cwiggs> e.g. go to 192.168.1.130/owncloud
<cwiggs> and it'll ask you login and then you can upload stuff to owncloud
<cwiggs> owncloud is pretty sweet
<basketball> is this the amount of noise the computer will normally make
<cwiggs> what do you mean by noise?
<basketball> it is making a lot of noise
<cwiggs> is it screaming at you or something?
<cwiggs> what kind of noise?
<cwiggs> like the HDD spinning?
<basketball> yeah
<cwiggs> huh, it'll probably be like that a lot, maybe the HDD is going out?
<cwiggs> my server doesn't sound louder than my desktop
<basketball> You need to enter either an existing account or the administrator.
<basketball> i am at http://192.168.1.117/owncloud/index.php
<cwiggs> for owncloud?
<cwiggs> didn't it ask you to create an account?
<basketball> it just asked for password
<cwiggs> that might be for the admin account
<cwiggs> can you login ?
<basketball> what is the user
<cwiggs> try admin
<basketball> nope
<cwiggs> odd, when i setup mine it asked me to create a user
<basketball> i dont have a gui
<basketball> is that why
<cwiggs> no
<cwiggs> it asked me to create the user when i went to the webui
<basketball> wha do i run
<cwiggs> when you went to the webui for owncloud, that's when it asked me to create a user
<basketball> how do i redo that
<cwiggs> someone else more familar with SQL might be able to help you find the username
<cwiggs> uh, i think you'd have to reinstall owncloud
<basketball> ok uninstalling
<cwiggs> good luck
<cwiggs> it might kept the old sql db, i'm not sure
<basketball> purge :)
<cwiggs> haha, yep
<cwiggs> if i remember, i had a problem purging it
<cwiggs> i just ended up reinstalling linux since it was a virtual machine
<basketball> nope that didnt work
<basketball> it remembered the db
<cwiggs> hm
<sarnold> then apt-get purge mysql-server-5.5 or whatever it is
<sarnold> i can never remember with mysql..
<basketball> still didnt ask me for a password
<cwiggs> it won't ask you for a password when installing it in the command line
<cwiggs> but when you go to the webui it should
<basketball> wait i see
<cwiggs> it'll ask you to create a sql db too
<basketball> what is the data base user
<cwiggs> root?
<basketball> db name
<cwiggs> you are creating the db
<cwiggs> so you can name it w/e
<cwiggs> i just named mine owncloud
<basketball> You need to enter either an existing account or the administrator.
<basketball> people said to manually create the db, how do i do that?
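The "manually create the db" step basketball was pointed at, as a hedged sketch (database name, user, and password are placeholders; owncloud's setup page then asks for these same values):

```shell
mysql -u root -p <<'SQL'
CREATE DATABASE owncloud;
GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY 'choose-a-password';
SQL
```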
<raj__> How could I tell rsnapshot to keep initial(very first) backup forever?
<cwiggs> basketball: i don't know enough about SQL to help you there
<cwiggs> raj__: i don't think you can, that isn't really what rsnapshot is for.
<cwiggs> you might be able to set the backup interval really high
<cwiggs> ?
<basketball> arrith,
<basketball> sarnold,
<sarnold> raj__: maybe you could do something kinda gross, like running it once, then setting the "immutable" attribute on all those files and directories
<raj__> cwiggs: what should I use for keeping the initial backup so that I could always see the default configurations?
<basketball> sarnold,  do you know
<arrith> lol uhm
<arrith> i'd be following a guide personally
<arrith> or the readme
<cwiggs> raj__: default config for what?
<sarnold> basketball: never used it, I suggest checking out /usr/share/doc/owncloud/README to see if there's a nice guide
<raj__> i meant the default configuration of all software & my system
<cwiggs> raj__ do you edit that stuff regularly?
<raj__> nope not regularly ..just rarely or once
<cwiggs> hmmm
<cwiggs> I might be the wrong guy to answer but i'd probably just use rsnapshot or use LVM to take snapshots
<sarnold> laptop battery nearly dead, have a good night :)
<cwiggs> with rsnapshot if nothing has changed it won't do anything
<cwiggs> it just creates hardlinks
<cwiggs> sarnold: night, thanks for the help
<raj__> i know that... love it for that
<sarnold> cwiggs: thanks for the education :)
<cwiggs> np
<raj__> just wanted to keep the initial backup as well
<cwiggs> raj__: boot from live usb and rsync the whole drive?
<cwiggs> or dd?
<raj__> cwiggs: actually it is a remote server (vps)..
<cwiggs> raj__: hmm, idk, but that is a good question
<cwiggs> anyone else got any ideas for raj__?
<raj__> maybe I could just rsync to another remote server, but I'd rather keep a rsnapshot-like "diff" kind of backup on the same server so i could just use/see it when needed
<raj__> sarnold: and how do I set that immutable attribute ?
<basketball> anyone know how to set up owncloud
<arrith> basketball: you need to google up a guide on that
<arrith> eh
<arrith> raj__ needs to do some kind of snapshot
<arrith> i suggest rsnapshot
<arrith> sudo mkdir /backups and use rsnapshot, can even set it on cron, to put snapshots there
<arrith> also etckeeper could be an option, if the concern is only configuration files
<basketball> i cant find one
<raj__> arrith: I am using rsnapshot, my only wish is to keep the initial first backup created by rsnapshot forever.. is that possible ?
<arrith> raj__: yes, personally i just set my backup count really high. but you can manually copy it to the side
<arrith> raj__: rsnapshot says the commands it does
<arrith> raj__: i think it's like   "cp -al"
<basketball> got it
<basketball> how do i sync dropbox and this owncloud
<raj__> arrith: maybe I should add a yearly interval? ..to keep year-old backups? increasing the backup count may considerably increase space requirements...!?
<arrith> basketball: probably rsync and cron. i'd use sparkleshare myself
<arrith> raj__: hm that's an idea. i forget how it does rotation but keeping a backup from a year ago is a good idea
<arrith> er snapshot
<arrith> not a backup
<raj__> arrith: "snapshot not a backup" - rsnapshot can only create backups? did you mean to suggest a snapshot other than a backup made by rsnapshot?
<arrith> raj__: technically a backup has certain requirements. different media, offline, possibly at a remote location
<arrith> just being pedantic heh
<basketball> why does my computer need keyboard plugged in
<arrith> basketball: if you do tasksel and install ssh server you can remotely login
<raj__> okay.. got that.. you call it a snapshot as I am keeping it on the same server, right.. it does not fully meet the requirements to be called a "backup".. right
<arrith> raj__: yep
<arrith> raj__: or if you want just do one rsnapshot then do a "cp -al" of it to keep it around
<basketball> arrith,  the computer still needs to be on for the own cloud
<arrith> cp -al daily.0 keepforever
<arrith> basketball: yeah you have to leave the server on
<basketball> how do i upload more than 512 mb at a time
<raj__> arrith: is this for a one-time snapshot that will be kept forever ?
<arrith> raj__: yeah. if it's not named the way rsnapshot named it, it won't know about it and won't delete it
<raj__> arrith: thanks.. one last clarification.. if I get this correct, this needs to be run after rsnapshot has created my first backup named "daily.0", and it will actually create hardlinks so that this new snapshot folder is never deleted by rsnapshot while rotating/deleting old backups
<arrith> raj__: yeah. you need to make sure that the "cp" that you run creates hardlinks though, so it saves space
<arrith> i think "cp -al" but i'm not totally sure
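The `cp -al` approach arrith describes can be checked directly: `-l` makes hardlinks instead of copying file data, so the kept snapshot shares inodes (and disk blocks) with the original. A minimal sketch in a throwaway directory (all paths here are made up):

```shell
# Sketch: hardlink-copy a snapshot directory the way rsnapshot does.
tmp=$(mktemp -d)
mkdir "$tmp/daily.0"
echo "backup data" > "$tmp/daily.0/file.txt"
# -a preserves attributes, -l hardlinks files instead of copying data
cp -al "$tmp/daily.0" "$tmp/keepforever"
# Same inode number => same data blocks, so the copy costs almost no space
ino1=$(stat -c %i "$tmp/daily.0/file.txt")
ino2=$(stat -c %i "$tmp/keepforever/file.txt")
[ "$ino1" = "$ino2" ] && echo "hardlinked"
rm -rf "$tmp"
```

Because rsnapshot itself rotates snapshots with hardlink copies, the "keepforever" copy only costs extra space when a file later changes.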
<raj__> "ln" could also be used to create hardlinks...
<sarnold> raj__: chattr
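sarnold's `chattr` answer refers to the ext filesystem immutable flag, which stops anything (rsnapshot's rotation included) from modifying or deleting the files. A hedged sketch; the path is illustrative and setting `+i` requires root:

```shell
# Make an archived snapshot immutable; even root's rm fails until the
# flag is cleared again (requires root; path is an example):
#   sudo chattr -R +i /backups/keepforever
# lsattr shows an 'i' in the flag column:
#   lsattr -d /backups/keepforever
# Undo later with:
#   sudo chattr -R -i /backups/keepforever
```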
<basketball> arrith,  is there a way to make the url easier
<arrith> basketball: which url?
<arrith> raj__: well, recursively for a huge directory tree?
<arrith> can't hardlink dirs :P
<raj__> arrith: I attempted it & got the error.. :) thanks, your solution (cp -al) works though.. it created a directory of hardlinked files. I think this in fact could be used to 'keep forever' snapshots of some specific times..
<arrith> raj__: haha. yeah, ln only for some stuff
<raj__> the good thing is that rsnapshot keeps each backup as if it is a full backup so this solution is working for us.. else it wouldn't be possible
<arrith> raj__: snapshot*
<arrith> raj__: google   backup 3 2 1 rule
<raj__> yes *snapshot*
<arrith> heard about this 321 rule recently and it's pretty nice
<raj__> hmm
<jamespage> adam_g, I'll look today
<lordievader> Good morning.
<freakynl> Hi, our webserver was running an older 12.04 release with the 3.2 kernel. Just installed linux-generic-lts-saucy but now there's errors on vmblock.ko building (part of the dkms package of open-vm-tools) - anyone know how to build it?
<arrith> freakynl: http://paste.ubuntu.com the errors
<jamespage> sarnold, fyi hopefully we'll have a 1.18.0 release of juju-core today or next week
<jamespage> sarnold, just in case you are mid review of 1.17.4
<jamespage> sarnold, it should not change hugely but if there is anything coming out of the MIR review, then we can feed that back upstream for fixing
<freakynl> arrith: http://paste.ubuntu.com/7089575/
<freakynl> it also fails on a 'natively' installed 12.04.4 version though with the same error (ie ubuntu server that came with 3.11 out of box)
<freakynl> several other modules do exist, like vmw_balloon.ko
<arrith> freakynl: that's from the open-vm-tools in the repos?
<mardraum> freakynl: 12.04.4 with 3.11 out of the box?
<freakynl> arrith: yep
<freakynl> mardraum: doesn't matter, gives the same error - but the box it concerns comes from 12.04.0 or 12.04.1
<arrith> freakynl: that might be a real-live bug
<arrith> freakynl: checking for bug reports in the ubuntu bugtracker for that package would be good. also looking into upstream and how they handle bugs in distro packages
<freakynl> if the version number is any indication it's no wonder :) seems to come from 2011
<arrith> freakynl: that might be. you might have to grab an upstream release. hopefully they have a repo and/or debs
<raj__> do I need to install anything to get the logs written in my logfiles ? cannot see anything in 12.04 !
<arrith> raj__: logs of what?
<freakynl> raj__: /var/log/syslog ubuntu doesn't seem to use messages
<raj__> iptables logs
<arrith> iptables specifically has to be told to LOG for one thing, and uhm, yeah that should be in syslog. there's also like .allow and .deny files somewhere
<freakynl> raj__: iptables -L -v -n
<freakynl> see if anything has LOG as target
<raj__> arrith: I've set iptables rule to log, but they dont get written unless i install rsyslog, sysklog or syslog-ng
<freakynl> raj__: all ubuntu installs I have by default came with rsyslogd
<raj__> once installed they start to get logged
<freakynl> did you remove it?
<raj__> freakynl: not on my 12.04 that i got from my webhost.. i didn't remove it.. anyway.. is it necessary even if i dont do remote logging or anything complex with logs ?
<freakynl> yes kernel doesn't write to log files - what else would you expect to do that?
<raj__> i thought rsyslog was required for remote logging or other complex log management tasks
<freakynl> it can send logs to remote, so can most others like syslog-ng
<raj__> can  sysklogd be  also used for remote logging ?
<raj__> i cant use rsyslog due to some limitations
<raj__> freakynl: i have no idea what these logging daemons are capable of besides collecting & writing logs from running services.. could you give some idea about their other capabilities ?
<arrith> raj__: their manpages will go into that
<raj__> like one is remote logging..
<arrith> http://manpages.ubuntu.com
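Tying the answers above together: iptables only emits log lines once some rule has a LOG target, and a syslog daemon (rsyslog on stock Ubuntu) must be running to write the kernel's messages to disk. A sketch; the rule needs root, and the sample log line below is invented:

```shell
# Rule you would add (requires root), rate-limited so it can't flood:
#   iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables-drop: "
# With rsyslog running, kernel lines like this sample end up in /var/log/syslog,
# where the prefix makes them easy to grep:
sample='Mar 15 10:00:01 host kernel: iptables-drop: IN=eth0 OUT= SRC=203.0.113.7 DST=203.0.113.1'
printf '%s\n' "$sample" | grep -c 'iptables-drop:'
```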
<basketball> arrith,  the one to get to own  cloud
<basketball> arrith,  is there a way to make the url easier
<jamespage> Daviey, if you are around - https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1292579
<jamespage> could do with a release team ack - multiple teams sprinting on testing icehouse right now :-)
<jamespage> urgh - wrong channel technically
<Daviey> jamespage: done.
<jamespage> Daviey, ta
<Daviey> jamespage: channel doesn't matter too much imo. -release is subscribed.
<pmatulis3> jamespage: re ubuntu & ceph, it is strange that our supported combination is 'dumpling on precise' (inktank only supports this) yet we need to go to the upstream repo to get dumpling.  i imagine this will be the same for 'firefly on trusty'
<jamespage> pmatulis3, sorry - not sure I understand
<jamespage> oh - I see
<pmatulis3> jamespage: well, dumpling is not available on precise, unless i use the cloud archive i think
<jamespage> pmatulis3, ceph dumpling is available in the Havana cloud archive for Ubuntu 12.04
<jamespage> pmatulis3, however firefly will be the release version on trusty - no cloud archive required.
<RoyK> what is dumpling?
<jamespage> RoyK, its a type of squid and also the codename for the 0.67 release of ceph
<jamespage> dumpling, emperor, firefly, giant, hammer....
<pmatulis3> jamespage: really?  i heard that inktank will not consider firefly stable on trusty until the first or maybe second point release
<jamespage> RoyK, like Ubuntu but they started at 'A'
<pmatulis3> jamespage: but good :)
<jamespage> pmatulis3, well Sage and Neil failed to mention that in the vUDS session we had this week
<Daviey> pmatulis3: where did you hear that?
<basketball> how do i find the graphics and the ram amount
<pmatulis3> Daviey: i asked inktank, but maybe the person was not 100% knowledgeable
<pmatulis3> let's go with the vUDS session
<jamespage> Daviey, sorry - I cocked up - python-ipaddr is not in main
<jamespage> I should have spotted that :-)
<Daviey> jamespage, as should i :)
<jamespage> Daviey, I raised an MIR
<jamespage> https://bugs.launchpad.net/ubuntu/+source/python-ipaddr/+bug/1292629
<basketball> arrith, and sarnold  how do i ssh into the server
<sarnold> basketball: ssh hostname   or ssh ip address
<basketball> how do i know the host name
<bekks> basketball "or".
<sarnold> basketball: depends how you set up nameservice in your network
<sarnold> jamespage: ooh! excellent news :) thanks
<basketball> got it
<bekks> jamespage: What does "MIR" stand for in the context of launchpad?
<jamespage> bekks, Main Inclusion Request
<bekks> Ah, thanks :)
<jamespage> its the process packages follow for entering Ubuntu main
<basketball> how do i change the max upload at a time
<jamespage> sarnold, yeah - looking at 1.17.5 now; however the juju-mongodb bits got reverted as it was buggy - it should make 1.18.0
<jamespage> (juju not juju-mongodb)
<bekks> basketball: What does "the max upload at a time" mean? In which context?
<byprdct> hey guys how can I add a new user to another users public folder for sftp access?
<sarnold> jamespage: it'd be nice to get all the work done for the 1.17 timeframe into trusty, I've been impressed by how much the juju team gets accomplished :)
<sarnold> byprdct: the 'easy' way is to create a new group for both your users, set the group owner of the directory in question to the new group, and make sure it has group read, execute, and perhaps write if you want them both to use it
<basketball> owncloud
<sarnold> byprdct: you can also use the posix-ish acls (man setfacl) to do the job, but that may require new mount options for your filesystem
<byprdct> thanks sarnold think I will stick with the easy route :)
<jamespage> sarnold, well if they don't pop a 1.18 release before final freeze, I'll have to revert to 1.16.x
<byprdct> so create a group called sftp and add me (root) as group owner and them (somename) with read, execute and write for only the public folder?
<sarnold> byprdct: well, you'd be 'equal' co-members of the group...
<sarnold> jamespage: here's hoping they like the way 1.18 is shaping up..
<byprdct> sarnold I just want to allow this user to sftp to a public folder but not be allowed to access anything beneath it e.g. srv/www
<byprdct> actually srv/www/public
<sarnold> byprdct: I think so long as any child directories don't have that group ownership the other user couldn't modify them or their contents..
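sarnold's shared-group recipe can be sketched out; the group setup itself needs root, but the permission-bit behaviour on the parent directories can be demonstrated in a scratch tree (group name and paths are assumptions):

```shell
# Shared-group setup sarnold describes (requires root; names are examples):
#   sudo groupadd sftpshare
#   sudo usermod -aG sftpshare ubuntu && sudo usermod -aG sftpshare somename
#   sudo chgrp -R sftpshare /srv/www/public
#   sudo chmod -R g+rwX /srv/www/public
# Parent dirs only need the execute (traverse) bit, not read:
tmp=$(mktemp -d)
mkdir -p "$tmp/www/public"
chmod 711 "$tmp/www"               # others may traverse but not list
echo hello > "$tmp/www/public/index.html"
out=$(cat "$tmp/www/public/index.html")
echo "$out"                        # reachable: every path component is traversable
rm -rf "$tmp"
```

Mode 711 on the parents is what lets the sftp user reach `/srv/www/public` without being able to list or read anything else under `/srv/www`.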
<hallyn> jcastro: no.  home/end in the place of control?  (uh, capslock :)  did you photoshop that?
<byprdct> sarnold I'm worried about the parent directories e.g. /srv/www, so it should work that way too?
<basketball> sarnold,  how do i put my irc bot in the server
<jcastro> hallyn, I did not make it up, that's what it actually is, I can't believe it either
<bekks> basketball: Which ircbot in which server?
<basketball> supybot and my server
<bekks> basketball: And which irc network do you want to connect with?
<basketball> freenode
<bekks> basketball: Honestly, no offense meant, but you shouldnt mess with bots until you know how to connect them to an IRC network ;)
<basketball> bekks,  i have my bot up
<bekks> And thats very well documented for supybot, e.g.
<basketball> bekks,  that is my bot
<bekks> What is your bot?
<bekks> I dont see part/join messages.
<basketball> suppybot^,
<basketball> ~reply bekks  I am basketball's bot
<suppybot^> basketball: bekks I am basketball's bot
<bekks> Whatever you need that for.
<basketball> how do i put him on server
<bekks> Upload it.
<basketball> how
<bekks> Using whichever file transfer protocol you configured on your server.
<basketball> so i transfer everything
<basketball> and then it works
<bekks> No.
<bekks> You transfer it, and then you have to make sure everything works as expected.
<basketball> bekks,  i am installing the program on the server how do i copy all the files
<basketball> i dont have a gui
<bekks> Using whichever file transfer protocol you configured on your server.
<bekks> If you dont have a gui, you have to use the terminal.
<bekks> !ssh | basketball
<suppybot^> basketball: SSH is the Secure SHell protocol, see: https://help.ubuntu.com/community/SSH for client usage. PuTTY is an SSH client for Windows; see: http://www.chiark.greenend.org.uk/~sgtatham/putty/ for it's homepage. See also !scp (Secure CoPy) and !sshd (Secure SHell Daemon)
<ubottu> basketball: SSH is the Secure SHell protocol, see: https://help.ubuntu.com/community/SSH for client usage. PuTTY is an SSH client for Windows; see: http://www.chiark.greenend.org.uk/~sgtatham/putty/ for it's homepage. See also !scp (Secure CoPy) and !sshd (Secure SHell Daemon)
<bekks> Setting that suppybot^ on ignore now.
<basketball> ~ part #ubuntu-server
<bekks> Better let him part the entire network.
<basketball> !filetransfer
<basketball> what is the file transfer program
<bekks> Please read the text ubottu just told you.
<bekks> After that, please read the links given.
<raj__> is there anything worth backing up in /var ? besides /var/lib folder ?
<bekks> raj__: Almost everything, besides /var/tmp and /var/run
<sarnold> /var/spool/mail or wherver your databases live :)
<bekks> raj__: If you dont backup /var, you will entirely break your package management system in case of restoring it.
<sarnold> imho backups are for your data
<sarnold> you can install a new OS on a new drive pretty quickly
<raj__> i am not looking for a full restore .. just to save the custom configurations..
<bekks> raj__: Then backup everything.
<raj__> not everything under /var is for configurations .. i already backup the db.. do /cache, /run, /lock, /tmp need to be backed up?
<bekks> raj__: I told you which two directories under /var dont need to be backed up.
<basketball> bekks,  how do i make bot startup at startup
<bekks> basketball: By configuring your system to start your bot automatically.
<bekks> !autostart | basketball
<ubottu> basketball: If you want to edit your Autostart programs, open the Unity dash and search for Startup Programs. If you're on XFCE, use menu -> Settings -> Settings Manager -> Session and Startup, if you're on KDE, use Kickoff -> Computer -> System Settings -> Autostart. For more details see https://help.ubuntu.com/community/AddingProgramToSessionStartup. For LXDE see http://ubottu.com/y/4
<basketball> bekks,  i dont have a gui
<bekks> basketball: Then you have to write an init script which starts your bot as the user desired.
<mgw> basketball: http://upstart.ubuntu.com/cookbook/
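On 12.04/14.04 the init script bekks mentions would normally be an Upstart job under /etc/init/, in the style that cookbook describes. A minimal sketch; the paths, user name, and config file are all assumptions:

```
# /etc/init/supybot.conf -- minimal Upstart job (illustrative only)
description "supybot IRC bot"

start on runlevel [2345]
stop on runlevel [!2345]

respawn                       # restart the bot if it crashes
setuid basketball             # never run the bot as root

exec /usr/bin/supybot /home/basketball/bot.conf
```

Once in place, `sudo start supybot` launches it, and it comes up automatically at boot.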
<jrwren> what is the difference between %C and %T in find -printf output?
<sarnold> jrwren: %T is the modification time -- when the file data changed. %C is the status change time, which could include e.g. chown/chmod or touch to change the mtime or atime..
<jrwren> sarnold: thanks
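The %T/%C distinction is easy to demonstrate: a metadata-only change such as chmod advances the status-change time (%C) while leaving the modification time (%T) alone. A quick sketch with GNU find:

```shell
# Compare find's %T@ (mtime) and %C@ (ctime) around a chmod.
f=$(mktemp)
mtime1=$(find "$f" -printf '%T@')
ctime1=$(find "$f" -printf '%C@')
sleep 1
chmod 640 "$f"   # metadata-only change: bumps ctime, leaves mtime alone
mtime2=$(find "$f" -printf '%T@')
ctime2=$(find "$f" -printf '%C@')
[ "$mtime1" = "$mtime2" ] && [ "$ctime1" != "$ctime2" ] && echo "only ctime changed"
rm -f "$f"
```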
<hmhrex> I'm having an issue with netatalk and file permissions. Here are some details: http://askubuntu.com/questions/434324/permissions-issue-with-netatalk
<sarnold> hmhrex: any error messages from netatalk in the logs? any messages on the OS X logs?
<hmhrex> That's something I didn't check. Do you know off the top of your head where the netatalk log file is in Ubuntu>
<sarnold> hmhrex: guessing, /var/log/netatalk.log ..
<sarnold> if not, /var/log/syslog may have something
<hmhrex> Let me test the issue again and I'll post the logs.
<hmhrex> Finally found the netatalk log. All it says is this:
<hmhrex> Mar 14 14:11:08.693414 afpd[1986] {ea_sys.c:110} (E:AFPDaemon): sys_getextattr_size: error: Permission denied
<hmhrex> nothing in the Mac logs.
<sarnold> hmhrex: how about syslog?
<hmhrex> nothing in there either.
<hmhrex> It's weird cause it claims that it's a permissions issue, but everything checks out as fine.
<byprdct> I'm having trouble with this. Does this chown -R ubuntu:sftp /srv/www/public allow anyone from the sftp group access to the public folder and not parent directories?
<sarnold> hmhrex: what does 'getfattr' report on each of those files?
<sarnold> byprdct: someone from the sftp group will also need read and execute permissions on above directories (either via world or group permissions) otherwise they won't even be able to cd /srv/www/public
<hmhrex> sarnold: I haven't used this command before, any switch in particular you want me to use?
<byprdct> sarnold how would I employ something like that from the group level. I usually do something like this chown -R ubuntu:www-data /srv/www because I'm the user ubuntu but now I have another user accessing the server and I don't want him to have any access to anything above the public folder
<sarnold> hmhrex: I think -d looks best
<hmhrex> so basically getfattr -d /path/to/files ?
<hmhrex> that's not coming up with anything.
<sarnold> byprdct: perhaps I should take a step back.. what do you mean by "above the public folder"?
<sarnold> hmhrex: okay, try the same command but use sudo to run it as the user that runs the netatalk daemon
<hmhrex> user.com.apple.quarantine="0042;52fa5018;QuickTime\\x20Player;"
<hmhrex> That's what I get from the .mp4
<hmhrex> I don't get anything for the rtf.
<sarnold> hmhrex: interesting. i would have expected the resource fork to tell os x which application to use to edit the file..
<sarnold> hmhrex: well, I -expected- a 'permission denied' error, just like netatalk :) but still.
<hmhrex> haha.
<hmhrex> Yeah. It's odd.
<sarnold> hmhrex: sorry, out of ideas. you could try adding that info to your post, more details never hurts, and it'll push it back up the list of posts again :) heh
<hmhrex> I'll do that.
<hmhrex> I do have one question about running getfattr. What should I be getting when I run that on that .mp4?
<hmhrex> nothing?
<sarnold> hmhrex: the user.com.apple.quarantine bit was added by finder or safari or whatever; iirc the first four digits indicates which application to use from an application registry of some sort; the last bit is a human friendly name to use when the application isn't installed to tell the user what to go buy :) -- and i'm not sure on the middle digits. could be something related to the seatbelt-based security policies..
<hmhrex> Huh. Interesting. Is there any way to strip that information out?
<sarnold> hmhrex: setfattr -x
<sarnold> hmhrex: if you can umount the afp mount before doing it, remove the attr, re-mount, and then test, I think that'd be a more representative test if this can work as a solution
<sarnold> os x may cache the resource fork information while the mount is active
<hmhrex> Would there be any way to set defaults on these folders so that they don't have this information?
<sarnold> hmhrex: you could mount the filesystem without extended attribute support; that's a bit brutal as solution goes but might do the job
<hmhrex> What would that entail?
<hmhrex> You can link me to resources if you want. I'm fine with researching this solution further.
<sarnold> hmhrex: that'd be mount(8) option nouser_xattr ; add it to your filesystem in /etc/fstab and run mount -onouser_xattr,remount /filesystem  ..
<hmhrex> Are there any cons to this solution?
<byprdct> sorry sarnold daughter grabbed me for a sec. what I meant by above the public folder is any parent directory to the public folder or anywhere else in the file system. I want the other user to only have access to the public folder and anything in that folder e.g. public/*
<sarnold> depends upon what other programs on your system are expecting to use extended attributes.
<hmhrex> Alright. That might be helping. Let me do a couple other tests.
<hmhrex> Yeah. It didn't seem to help.
<hmhrex> So frustrating. i'll take a deeper look into the groups, etc. Maybe something I'm missing.
<hmhrex> yeah, I accidentally logged out. Was there anything else to try?
<hmhrex> sarnold: it didn't seem to help. But I added that info to my question on askubuntu. So hopefully can help me out. Thanks for all your help!
<Midion__> is csf better than fail 2 ban?
<basketball> what is gnu screen
<sarnold> screen is a terminal multiplexer, it lets you run more than one shell in a terminal, and the best part is you can detach and re-attach to it.
<basketball> will my computer have that
<sarnold> if you're going to use it I suggest finding a hardstatusline setting from somewhere and sticking it in your ~/.screenrc. it'll help immensely.
<sarnold> basketball: you just need to apt-get install screen
<sarnold> basketball: I personally prefer tmux, for no real reason, except I find ^B more convenient than ^A
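A minimal ~/.screenrc with a hardstatus line of the kind sarnold mentions, plus the day-to-day commands; the status format string is just one example:

```
# ~/.screenrc -- sketch; the hardstatus format is one of many possibilities
hardstatus alwayslastline "%H | %-w%{=b}%n %t%{-}%+w | %c"

# Day-to-day usage:
#   screen -S work     start a named session
#   Ctrl-a d           detach (session keeps running on the server)
#   screen -r work     re-attach, e.g. after an SSH drop
#   screen -ls         list running sessions
```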
#ubuntu-server 2014-03-15
<rostam> hi
<semiosis> rostam: friendly greetings are nice, but if you want to ask a question, go for it!
<rostam> I am following some of the official ubuntu website on how to install a grub2 on usb device.
<rostam> The command I use is:
<rostam> grub-install --target=x86_64-efi --efi-directory=/mnt/ --bootloader-id=grub2 --recheck --debug
<rostam> it fails due to bad option:  --target=x86_64-efi .  Not sure why?
<semiosis> sorry i can't help with that, but hopefully someone else will be able to
<rostam> semiosis, thx
<semiosis> yw
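For what it's worth, `--target` only appeared in GRUB 2.00, so a grub-install from an older release (e.g. 12.04's GRUB 1.99) rejects it as a bad option. One possible workaround, assuming an amd64 EFI system; this is an untested sketch:

```shell
# Install the EFI flavour of GRUB; its grub-install already defaults to
# the x86_64-efi platform, so the unsupported --target flag can be dropped:
#   sudo apt-get install grub-efi-amd64
#   sudo grub-install --efi-directory=/mnt --bootloader-id=grub2 --recheck
```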
<Midion__> hmmm as soon as i setup csf and whitelisted the cloudfare ip's for my domain. My site has started loading faster again.
<afib> hey all. ive got a a strange problem going on with my ubuntu 12.04 server and was hoping for some insight into how i could track down the problem
<afib> i bought a System76 Eland Pedestal, and it seems that data connection flakes out from time to time. im not sure what the source of the problem is or how to identify it
<semiosis> afib: wired or wireless?
<afib> but for example, if i ping the router, or an address, i don't lose packets. but it just seems to hang sometimes, and remoting in to the server sometimes fails
<afib> wired. ive tried plugging both ethernet ports in, and only one. and swapped between the two
<afib> ive got it on a static IP on a router running the Tomato Linux Firmware on a WRT54GL
<semiosis> the 'dmesg' command might show hardware issues
<semiosis> thats all i got
<afib> since i do have another server in the office, and they do run the same daemons (SSH, VSFTP, WebMin interface for local etc), in order to differentiate between them i use different ports
<afib> hmm okay ill give that a try. i wasnt finding anything in my logs
<sarnold> afib: perhaps the port on your switch is flaky; can you pop a laptop onto that port and let it ping all day?
<afib> but right now, i logged in via SSH tunnel => VNC... and then it hung, and dropped. but it's fine on the other server. and right now, im having to VNC into the one server, and then into the other in order to even get in
<sarnold> afib: try with and without that specific ethernet cable, too, you never know, it might be flaky too
<semiosis> i was thinking maybe ethernet cable too
<afib> but oddly, SMB doesnt seem to be affected, and im not losing packets as far as i can tell. maybe im not pinging long enough
<afib> so LAN seems fine, out to net is not
<semiosis> possibly QoS on the router, but i have no idea how to check that
<semiosis> not familiar with tomato
<sarnold> tomato is slick but pretty basic..
<afib> QoS is disabled. it's very similar to dd-wrt
<semiosis> my routers run openwrt or openbsd....
<afib> yeah, it's generally fine for the office fine. 2 servers, 8 computers mixed PC and Mac
<semiosis> thinking about switching from openwrt to dd-wrt
<semiosis> but i digress
<sarnold> I -do- slightly wonder about bufferbloat. I dunno if I'm just parroting the new hotness or not, but it -might- be bufferbloat..
<sarnold> repeat me repeat
<sarnold> this looks cool: http://www.bufferbloat.net/projects/cerowrt
<afib> hmm yes unfortunately im limited to what this router can handle
<afib> since it's basically just a hardware and firmware modified consumer grade deal
<sarnold> yeah, I ran tomato for a few years on an oldschool wrt54g
<afib> budget does not allow anything more advanced, and really all it handles is the internet traffic since it's only 100mbps and we're a design agency
<afib> so the lan traffic is done through apple base station
<afib> then the ddns, ip assignments and port addressing is done through the wrt
<sarnold> hrm, are both servers downstream of the apple base station?
<afib> yes. both are hosting info used for everyone. one is a simple lampp stack web server for proofing sites and keep the git repos and databases in sync, and the other is a file server. both run 12.04. amd both run nearly identical configurations, although hardware is quite different
<afib> the pedestal is brand new
<afib> but eg, while one might have SSH on 3669, the other would be 3996 or something like that. also the web server runs virtual machines with various OSes and browsers for testing websites
<afib> and all of those are remote vboxes
<afib> but that machine runs fine, and internet on others is fine. its just the one, and its frustrating because it's supposed to be pushing data to the cloud, but it's getting disconnected (even with backup software off and uninstalled)
<sarnold> DNS problems can lead to awkward behaviors. it doesn't feel like it'd be your problem, but if switch port and ethernet cable don't get you there..
<sarnold> afib: -maybe- there are firmware updates for your nics, check the linux-firmware package or your nic vendor's website..
<raj__> how do I override bash settings for all users, i want to override history size in .bashrc for all users...
<raj__> if I configure in /etc/bash.bashrc that is overridden by local files. Any way to force global bashrc settings ?
<sarnold> raj__: you can patch bash to ignore that setting
<sarnold> raj__: what exactly are you trying to achieve?
<raj__> sarnold, I want to increase history size for all users & do immediate history saving after each command is executed.. "for all users"
<sarnold> raj__: why?
<raj__> sarnold: increase history size so that I can keep a long history of everything executed on my server... immediate saving so that..if client gets disconnected..history is saved..
<sarnold> raj__: but... why? :) I'm trying to figure out what problem you're trying to solve, there may be something better ..
<raj__> long history to keep track of what all has been done on the server.. history can tell everything i have done to server..
<sarnold> raj__: i wonder if you could execute 'script' via ssh forced commands; that would be overkill, it'd record output as well as commands..
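A global policy along the lines raj__ wants usually goes in a file under /etc/profile.d/, which login shells source after /etc/bash.bashrc; users can still override it in their own rc files unless you get more invasive (e.g. sarnold's patching suggestion). A sketch; the sizes and filename are arbitrary:

```shell
# /etc/profile.d/history-policy.sh -- sourced by login shells (sketch)
export HISTSIZE=100000
export HISTFILESIZE=100000
export HISTTIMEFORMAT='%F %T '          # timestamp each history entry
# Append to the history file after every command, so a dropped SSH
# session loses nothing:
export PROMPT_COMMAND='history -a'
```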
<raj__> could I put a certain file in some VCS like git/subversion/etc only "when" I am modifying it? So first I'd put in the unmodified version & then the modified version. The desire is to put the file in VCS only when I want, rather than doing initial full directory commits.. possible ?
<sarnold> raj__: I've heard good things about etckeeper
<raj__> sarnold, but that is only for /etc & it does an initial full directory commit ? why would I do that.. i would only want to keep the files i modify & manage them manually rather than autocommits at each install...
<jfcastro> hi, I want to install OpenStack Icehouse on Ubuntu
<jfcastro> must I use ubuntu 14.04?
<jfcastro> in other words, is OpenStack Icehouse going to be maintained in ubuntu 14.04?
<jfcastro> thanks in advance :)
<lordievader> Good morning.
<jfcastro> morning
<lordievader> Hey jfcastro, how are you doing?
<jfcastro> fine :)
<jfcastro> I'm trying to install OpenStack Icehouse on Ubuntu but I have some doubts
<jfcastro> I normally use ubuntu-cloud.archives.canonical.com on ubuntu 12.04 but I would like use ubuntu 14.04
<jfcastro> and it's no packages in ubuntu-cloud.archives to ubuntu 14.04
<jamespage> jfcastro, the cloud archive does have icehouse  for 12.04 -  use the proposed pocket for now - but so does 14.04 which will be supported for 5 years
<jamespage> the packages in the main archive are icehouse in 14.04
<jamespage> no cloud-archive required
<jamespage> (and please raise bugs if you find any :-))
<jfcastro> jamespage: good!
<jamespage> jfcastro, I was testing all of last week on 14.04 - its general good - ceilometer is broken but that should be fixed next week
<jfcastro> jamespage: and these OpenStack packages on 14.04 will be maintained the same as the cloud-archive?
<jamespage> jfcastro, the openstack packages in 14.04 are the source for the packages for 12.04 in the cloud-archive - so yes!
<jfcastro> it sounds fine!
<jamespage> jfcastro, http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/icehouse_versions.html
<jamespage> to illustrate!
<jfcastro> thanks!
<jfcastro> I'm going to read :)
<jamespage> (that report lies about greenlet and neutron - they are up-to-date)
<Tex_Nick> ubuntu 13.10 ... how would i start a VPN service before lightdm ?
<jfcastro> jamespage: so, to deploy Icehouse I must to use 14.04?
<jfcastro> and the openstack packages will be maintained for 5 years
<jfcastro> really?
<DeltaHeavy> Hey, I have a VPS running Ubuntu 12.04 to mainly serve up my website amongst a few other things. I have netfilter configured as tightly as sanely possible but I was wondering if it would perhaps be best to install VPN software on there, and make services like SSH, and my databases only open on my VPNs interface. Is this a good idea or are there pitfalls?
<arrith> DeltaHeavy: well only ssh needs to be open really
<arrith> and even then you can lock that down
<DeltaHeavy> arrith: I'm a developer so I want to be able to access my DB's from DB clients.
<DeltaHeavy> Ended up deciding that it would cause too much inconvenience to be worthwhile. Just going to continue on the way I'm going I think.
<arrith> DeltaHeavy: you can tunnel db connections over ssh
<DeltaHeavy> arrith: True, I usually just use SSL though.
<arrith> DeltaHeavy: there is a difference between authentication and encryption. ssh does both
<arrith> DeltaHeavy: by itself, ssl just does encryption
<DeltaHeavy> arrith: Yeah, all the RDBMSs I use though have their own auth.
<arrith> DeltaHeavy: larger attack surface. ssh you can lockdown in one place. also knockd
<DeltaHeavy> True, I should look into that perhaps
<DeltaHeavy> Thanks!
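The tunnel arrith suggests is a single ssh flag; with the database bound to localhost on the VPS, nothing DB-shaped is exposed to the network at all. The host, user, and ports here are invented:

```shell
# Forward local port 3307 to MySQL on the VPS, which listens only on
# 127.0.0.1 there; -N means "no remote command, just the tunnel":
#   ssh -N -L 3307:127.0.0.1:3306 deploy@vps.example.com
# Then point the DB client at 127.0.0.1:3307 as if it were the server,
# and let ssh handle both authentication and encryption.
```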
#ubuntu-server 2014-03-16
<basil60> Hi I'm using Ubuntu 12.04 with xfce 4.8 as a GUI. What do I change my keyboard layout to, so if I press shift+3, I get a hash instead of a pound (english currency) symbol?
<sheptard> basil60: google changing your locale
<basil60> my locale is correct I believe
<basil60> mostly en_AU.utf-8
<basil60> <sheptard> I've got the change-keyboard command... just not which option gives me US international
<sheptard> http://dry.sailingissues.com/us-international-keyboard-layout.html
<sheptard> that's US international
<basil60> <sheptard>I'd been on that site. Which is more appropriate - generic 102 key intl or 105 key?
<basil60> or 104?
<basil60> Hi fixed my keyboard layout...but when I punch in my password now, it's not recognised. I wasn't running as root - just a normal user. How can I change my passwd to something recognisable under the new keyboard layout? Please?
<wkmanire> What specifically makes a DNS server authoritative for a domain?
<wkmanire> I'm trying to set up a slave to zone-transfer from my master, following the official Ubuntu Server guide.
<wkmanire> I can't get the zone to transfer and I get this message in syslog.
<wkmanire> Mar 15 23:40:28 ubuntu5 named[760]: client 192.168.1.102#6962: received notify for zone 'wkmanire.net': not authoritative
<wkmanire> Although the master DNS is resolving all of the sub domains for my network with no problems.
<wkmanire> nevermind
<wkmanire> I made a mistake
<lordievader> Good morning.
<PeterGriffin> Hi there. I have a couple of servers on my net with 1000 Mb network interfaces and a few with 100 Mb. The speed between them is awfully slow, like 5K. Is it possible this is caused by the different speed settings of the interfaces?
<SpamapS> PeterGriffin: use ethtool to check what they're actually linked at
<PeterGriffin> what in the output shows this
<PeterGriffin> Do you mean the actual speed being set
<SpamapS> PeterGriffin: sometimes auto-negotiation fails.. or silly things happen like it chooses half duplex
<bekks> You should check duplex settings and MTU sizes, too.
<SpamapS> PeterGriffin: just 'sudo ethtool eth0'
<SpamapS> on a single flat layer 2 I would be surprised if MTU was an issue
<SpamapS> you have to try really hard to screw that up
<bekks> Or just play with "oh, wow, those jumbo frames sound cool." :)
<arrith> can test network speed with iperf
<arrith> could be application-level issue
<PeterGriffin> this is the ethtool output on both sides of the problem http://pastebin.com/NBmACLhq
<bekks> And how do you "test" the connection speed?
<PeterGriffin> I should mention that there are some switches and an optical link between them, but the speed is bad only when connecting with the servers, not other Kubuntu desktops.
<PeterGriffin> Well, not very precisely actually. I send a big file with netcat and monitor the progress.
<bekks> So you have to check all physical links, maybe there is something odd.
<PeterGriffin> but when I send a file using exactly the same route, but from my laptop, the speed is ok
<PeterGriffin> doesn't this exclude the physical links?
<bekks> No.
<SpamapS> PeterGriffin: tracepath otherbox
<bekks> Unless you connect your laptop to the same switch port as the "other" computer you are trying it with.
<SpamapS> PeterGriffin: give iperf a try
<PeterGriffin> bekks: I didn't use exactly the same port. What I did is I changed the whole switch, so I suppose the port wouldn't matter. I did this on both sides just to be sure. Only I had one extra switch so I was able to have a new switch only on one side at a time.
<bekks> So use iperf for further investigations.
<PeterGriffin> SpamapS: tracepath reaches the other end
<SpamapS> PeterGriffin: with pmtu == 1500 ?
<PeterGriffin> I will
<PeterGriffin> SpamapS: yes
<PeterGriffin>  1:  192.168.68.1                                          0.097ms pmtu 1500
<PeterGriffin>  1:  192.168.68.10                                         1.008ms reached
<SpamapS> PeterGriffin: ok, -> iperf
<PeterGriffin> these are the endpoints. The route does not show
<PeterGriffin> iperf it is ... :)
<bekks> being in the same network, no route is needed.
<PeterGriffin> I have never used iperf so I started the server and the client without options. It connects, but it does not show anything useful.
<PeterGriffin> do I need to send a file through it or something?
<arrith> http://www.reddit.com/r/linux/comments/1maws6/checking_lan_transfer_speed_under_linux/
<PeterGriffin> Ok. I did some learning and testing with iperf. I made the desktop the iperf server and started a connection from the ubuntu-servers as clients (both of them). When I test a UDP connection it is just fine. But when I start a TCP connection it fails. tcpdump shows that the iperf server sends an ack for each packet, but the client receives only a small part of them. So maybe somewhere on the network they are lost, and that makes the connection fail.
<SpamapS> PeterGriffin: I'd postulate that there is a bad cable, bad switch, switching loop, etc.
<PeterGriffin> ok. I'll check again everything. Thanks for your help.
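For reference, the iperf test described above might look like this (hostnames are placeholders; iperf 2.x option syntax):

```shell
# On the desktop acting as server:
#   iperf -s                      # TCP listener on port 5001
# On each ubuntu-server, as client:
#   iperf -c desktop -t 10        # 10-second TCP throughput test
# UDP variant (the one that worked fine here):
#   iperf -s -u                   # server
#   iperf -c desktop -u -b 100M   # client, 100 Mbit/s offered load
```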
<raj__>  which folder should be used for storing data to be used freely by all users(or preferably sudoers) of system ?
<el_seano> raj__: I'm a fan of /srv, personally
<raj__> el_seano: /srv.. what kind of files does that contain ?
<el_seano> raj__: nothing, out of the box
<raj__> okay
<el_seano> later revisions of the FHS recommend using it as a place for data stored by services
<el_seano> e.g., ftp, http, whatever
<raj__> i mainly want to store some bash scripts that are used by all sudoers..
<el_seano> raj__: http://www.pathname.com/fhs/pub/fhs-2.3.html#SRVDATAFORSERVICESPROVIDEDBYSYSTEM
<el_seano> ah, probably simpler would be one of the */bin's
<el_seano> like /usr/local/bin
<el_seano> otherwise, just make a /srv/bin and ensure that it's added to the path of your skeleton file, set setgid, etc
<raj__> okay..
<el_seano> one org I worked with actually pointedly didn't add our sysadmin team's script directory to our PATHs, largely with the thinking of "If you're going to be using the gun, you need to unlock the safe"
<raj__> el_seano: hmm.. thanks for the pointers..
<el_seano> sure
<el_seano> if you're not familiar with skel or setgid, look at the man pages for chmod and useradd
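el_seano's /srv/bin suggestion, sketched (assumption: a group named "admins" holds the sudoers; mktemp stands in for /srv so the sketch is safe to run):

```shell
d="$(mktemp -d)/bin"        # stand-in for /srv/bin
mkdir -p "$d"
chmod 2775 "$d"             # setgid bit: new files inherit the directory's group
stat -c '%a' "$d"           # prints: 2775
# On the real system you would additionally:
#   sudo chgrp admins /srv/bin
#   echo 'PATH="$PATH:/srv/bin"' | sudo tee -a /etc/skel/.profile   # new users only
```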
<raj__> how does the '-f' in 'bash -f script.sh' make a difference ? what is it for ?
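raj__'s `-f` question never gets an answer above; for the record, `bash -f` disables pathname expansion (globbing), so wildcards in the script are passed through literally:

```shell
# With -f, globbing is off and * reaches echo unexpanded:
bash -f -c 'echo *'    # prints: *
# Without -f, * would expand against the current directory's contents.
```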
<cwiggs> can someone help me with an NFS mount problem i'm having with my server?  I've tried everything i can think of.
<bekks> cwiggs: Just state the problem then, please :)
<cwiggs> the nfs mount works on all computers except one, and i don't see anything in the logs on either the client or the server
<cwiggs> when i run mount.nfs in verbose mode it doesn't list anything, just hangs forever
<cwiggs> i restarted nfsd on the server already
<cwiggs> and checked the exports file
<cwiggs> everything looks good, in fact, this server did mount fine until recently
<cwiggs> i've done all updates as well
<bekks> "mount.nfs" is an incomplete command, and please dont use enter as a punctuation sign.
<cwiggs> okay
<cwiggs> when i run mount -t nfs -v it outputs
<cwiggs> "mount.nfs: timeout set for Sun Mar 16 12:27:43 2014"
<cwiggs> "mount.nfs: trying text-based options 'vers=4,addr=192.168.1.2,clientaddr=192.168.1.7'""
<cwiggs> and now it just hangs there
<cwiggs> nothing in syslog for the server or client
<bekks> Which is an incomplete command.
<bekks> You need to specify the share and the mountpoint.
<cwiggs> "sudo mount -t nfs -v 192.168.1.2:/mnt/sdb1 /mnt/sdb1/"
<cwiggs> i did
<cwiggs> i just didn't type it here to save time
<cwiggs> the command above is what i typed
<bekks> Which did not save time at all. And DONT use enter as a punctuation sign. Please.
<cwiggs> without the ""s of course
<bekks> So is the server actually sharing that share, using NFS v4?
<cwiggs> yes
<cwiggs> yes.*
<cwiggs> Like i said above, i have a few other computers that are accessing that share using NFS.
<cwiggs> "/mnt/sdb1	192.168.1.0/24(rw,async,no_root_squash,nohide)" is what /etc/exports contains.
<bekks> Thats a NFS v3 export line.
<bekks> Mount it using NFS v3, not v4.
<cwiggs> You're right, my mistake.
<cwiggs> I can't remember the option for using v3 off the top of my head.
<bekks> vers=3
<cwiggs> Thank you, that fixed it.
<cwiggs> Does anyone know how to make autofs use nfsv3?
<cwiggs> bekks: btw, thank you for your help.
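To tie the thread together: the export line above is v3-style, so the client has to ask for v3 explicitly. The autofs question goes unanswered; a hedged sketch of both, assuming the same share and an autofs mount root of /nfs:

```shell
# Manual mount, forcing NFSv3 (the fix bekks gave):
#   sudo mount -t nfs -o vers=3 192.168.1.2:/mnt/sdb1 /mnt/sdb1
# autofs equivalent -- /etc/auto.master:
#   /nfs  /etc/auto.nfs
# /etc/auto.nfs:
#   sdb1  -fstype=nfs,vers=3  192.168.1.2:/mnt/sdb1
# then:  sudo service autofs reload
```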
<raj__> do bash scripts need to have any extension like .sh.. I used .sh extension with my "bash scripts".. & now realised it .. should I remove the .sh extension from those files ?
<RoyK> raj__: no, extensions aren't necessary on unix systems
<RoyK> just chmod +x, and a file is runnable
<raj__> RoyK: thanks!!
<RoyK> with scripts, usually the first line contains something like '#!/bin/bash' which tells the shell which interpreter to use
<raj__> RoyK: Thanks a lot!! When I use an extension like .sh, the editors on my desktop identify the file as a script, but without an extension they have no clue. If I keep an extension for that purpose, what should it be? .sh worked, but mine is a bash file
<jrwren> there is nothing wrong with having a .sh or .bash extension.
<RoyK> jrwren: didn't say that - just said it wasn't mandatory ;)
<jrwren> RoyK: I was not attempting to correct you. ;]  Just trying to help our raj__
<RoyK> raj__: if run from the shell, it doesn't matter. it's not like on windows where files are run in different instances on behalf of their extensions
<raj__> RoyK: yes I understood that.. Thank you very much! :)
<raj__> jrwren: yes, I got that.. thanks jrwren !! :)
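RoyK's point can be demonstrated: once the file is executable, the shebang line, not the extension, selects the interpreter.

```shell
s=$(mktemp)                                  # a script with no extension at all
printf '#!/bin/bash\necho hello from bash\n' > "$s"
chmod +x "$s"
"$s"                                         # prints: hello from bash
rm -f "$s"
```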
<RoyK> if only ubuntu had proper zfs support...
<jrwren> why zfs? isn't btrfs supposed to be the future?
<RoyK> jrwren: yes, it's possibly the future, but it's been the future for a long time
<RoyK> jrwren: and I'd rather want something that works in the present
<raj__> $ rm "-home.hist"           says: rm: invalid option -- 'h'         Try `rm --help' for more information.
<raj__> deleting the above file fails.. how do I delete the above file successfully .. ?
<sheptard> raj__: \-
<sheptard> \-home.hist
<raj__> sheptard: trying out
<raj__> sheptard: didn't work, but actually the actual file name is a bit more complex.. tried:                   rm  \-home-raj-.history
<raj__> sheptard, the actual file name is "-home-raj-.history", how can I remove it?
<raj__> sheptard: sorry it was due to my mistake.. fixed!!
<raj__> when i give a space before writing a command in terminal.. it does not appear in history.. is that a way to skip history ???
<RoyK> raj__: rm -- -home.hist
<RoyK> -- terminates the parsing of options
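RoyK's `--` trick, runnable end to end, plus the answer to raj__'s history question: a leading space skips history when HISTCONTROL contains ignorespace, and Ubuntu's default .bashrc sets it to ignoreboth.

```shell
cd "$(mktemp -d)"               # scratch directory
touch -- '-home-raj-.history'   # -- ends option parsing for touch too
rm -- '-home-raj-.history'      # alternative spelling: rm ./-home-raj-.history
ls -A | wc -l                   # prints: 0
```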
<miceiken> hi guys, total noobie on the security front here. I'm running a simple ubuntu server, what precautions should I take? will a firewall suffice?
<Patrickdk> well, a firewall works
<Patrickdk> as long as you never make connections into or out of the server
<Patrickdk> might as well unplug it and get the same security
<Patrickdk> you have to worry about everything, any program that makes connections from pulling in something bad
<markthomas> miceiken: you haven't said anything about what you're doing with the server, or anything about the environment it's in.  You can get some general rules-of-thumb that way, but it will be of limited use.
<Patrickdk> and anything listening for connections, doing something bad (like webservers)
<Patrickdk> and this assumes no one has console/ssh access
<miceiken> sorry markthomas, I'm renting a VPS - for now I'm running a few services, nothing major but I have a mail server, web server, an irc bouncer and stuff like that
<miceiken> I'm most worried about someone gaining control over my server
<Patrickdk> so every insecure thing you could locate :)
<markthomas> miceiken: is this system behind a firewall?  Does it have local users besides yourself?
<miceiken> I am the only user, but the system has multiple users "running" different services. And no, no firewall.
<markthomas> miceiken: First, start with something like this: https://help.ubuntu.com/10.04/serverguide/firewall.html
<markthomas> miceiken: then, if this is a VPS, one of the most vulnerable attack points will be one you haven't mentioned yet: how are you logging into the system to administer it?  SSH?
<miceiken> markthomas, yep, ssh with plain username/password auth
<markthomas> miceiken: that is where you need to start your security efforts.  If someone compromises your account, that's a serious problem.
<markthomas> miceiken: Start with authentication.  If you can, use a keypair for authentication and disable password logins.  Of course, make sure root ssh is disabled for good measure, even if you have the root password "locked"
<markthomas> miceiken: then, look into an application called fail2ban.
<miceiken> keypair is what? private ssh keys?
<markthomas> miceiken: yes.  man ssh-keygen
<RoyK> miceiken: just run ssh-keygen and then ssh-copy-id yourserver:
<RoyK> user@yourserver, perhaps, if you haven't setup ssh aliases
<markthomas> miceiken: once you've done that, you need to review the docs for every service you've listed.  If this is running services you care about (i.e. you're running a business on this instance) then you'll want to get some help.
<miceiken> ah right, well no, it's only private use, I just want to take the basic measures to prevent anything
<markthomas> miceiken: It'll be a fun learning project.
<markthomas> RoyK: thanks for jumping in.  I have to drop off for awhile.  miceiken, have fun.
<miceiken> thanks for your help markthomas
<miceiken> RoyK, can I use sudo passwd -l <user> to disable log in to a user account?
<miceiken> by disabling I mean login
<RoyK> miceiken: better google it - http://www.cyberciti.biz/faq/linux-disable-user-account-command/ was ok
<RoyK> miceiken: locking a user by changing the password won't disable her/him to login with ssh keys
<miceiken> okay, thanks
<miceiken> if I have 2 computers I want to access ssh from - do I make 2 separate keys? or do I use the same? (I'm asking because of the username@HOST part)
<RoyK> you can use the same keys
<RoyK> ssh keys are just like any other cryptographic keys
<RoyK> I usually use a set of keys per machine
<RoyK> if one is stolen or otherwise lost, I don't need to change the keys for everything
<RoyK> a set of keys meaning a public and private key, the set that's generated with ssh-keygen
<miceiken> how can I tell if it works though RoyK?
<miceiken> I set PasswordAuthentication to no in ssh_config but it still prompts me when I connect
<miceiken> duh sshD
<miceiken> thanks RoyK - got it all set up now
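markthomas's hardening advice, condensed into one sketch (assumption: a remote host named "myvps"; note the file is /etc/ssh/sshd_config, the daemon's config, not ssh_config, the client's, which was the mixup above):

```shell
# 1. Generate a keypair on each client machine and copy the public half over:
#   ssh-keygen -t rsa -b 4096
#   ssh-copy-id user@myvps
# 2. On the server, in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   PermitRootLogin no
# 3. Apply, then add brute-force protection:
#   sudo service ssh restart
#   sudo apt-get install fail2ban
```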
#ubuntu-server 2015-03-09
<netameta> how do i add more extensions to php on ubuntu - specifically sqlsrv
<lordievader> Good morning.
<rbasak> smoser: are you going to take care of the bcache-tools MIR?
<rbasak> frediz: around?
<Demon_Jester> Hey guys, if someone can help me out, I am trying to install ubuntu server on hp media center pc (old one) and it keeps hanging at
<Demon_Jester> "retrieving grub-mount-udeb"
<smoser> rbasak, filed. http://pad.lv/1429857
<rbasak> smoser: thank you!
<devster31> hi, I got the message that some packages need upgrading, how can I know if any of them require server reboot before upgrading?
<rbasak> jamespage: doko is asking why we haven't updated to golang 1.4 in Vivid?
<rbasak> I think you had a reason!
<jamespage> rbasak, other than no time todo it
<jamespage> no reason
<jamespage> rbasak, oh wait - docker.io 1.3.3 has some 1.4 problems
<jamespage> we'd need to update to 1.4.x or 1.5.x - I'm trying to find time to help tianon do that in Debian (experimental)
<rbasak> OK, thanks
<jamespage> but that appears to be eluding me right now
<yogg> Hi
<yogg> I have an apt-get repo over apache. Now I want to use https with a self-signed certificate. I tried this: http://pastebin.com/E1vDn8kk      but it does not work :(  (ubuntu 12.04 Server)
<alchemistswl> Hallo there. I have a problem regarding permissions and apache2. I set the www-data folder to /mnt/hdd/httpd and added myself to the www-data group. Everytime I upload a picture from my user, it's permission denied. I have set permissions for read/write too, didn't help much. Tried this: http://askubuntu.com/questions/19898/whats-the-simplest-way-to-edit-and-add-files-to-var-www Every help appreciated.
<MoPac> Howdy. I'm having what I think is a me-being-stupid problem with installing headless java runtime environments. I'm installing a .deb package (ciphermail) that depends on java6-runtime, which is a virtual package provided by openjdk-[6][7]-jre or default-jre. I have the headless versions of those installed, but the unmet dependency is still showing up.
<MoPac> This program surely isn't meant to be requiring a whole X11 environment, so I know I must be going wrong in trying to install the non-headless versions...
<squisher> rbasak, ping -- I forgot to ask when you're usually on, but I guess I'll find out :)
<MoPac> Alternate question: if I install all of the X11 windowing and audio components that the non-headless java runtime depends on, is there necessarily any performance hit to doing that, or do the packages just take up space on the drive? This is a cloud-hosted VPS, so it wouldn't generally be connected to a display or sound card...
<squisher> MoPac, there's no performance hit unless you are actually running some of that stuff
<MoPac> squisher: cool; thanks
<rbasak> squisher: o/
<squisher> MoPac, btw, the correct solution to your problem is of course to ask the provider of the deb to actually list correct dependencies
<rbasak> squisher: 9-6 In EUrope/London normally
<squisher> rbasak, I was guessing as much :) I'm in UTC-6, so I just have to remember to check in early in the day
<rbasak> squisher: so I favour git-buildpackage, unless you want to use git-dpm
<squisher> +1 for gbp, that's what I was using before too
<rbasak> squisher: so I think the upstream branch should probably be identical to the agreed upstream VCS
<rbasak> The master branch would have just the debian packaging delta, and we should start off with that being the same as what is in unstable.
<rbasak> I don't mind if the master branch misses my history to get there (currently in my Github), but we could have that in.
<rbasak> squisher: how's your git-fu to arrange that? Or I can do it if you like?
<squisher> well, why not? I don't think there's much extra effort required to do that if we're already importing upstream vcs
<rbasak> I'm not sure what other maintainers do with the pristine upstream tarball with this arrangement though.
<squisher> my git-fu is a good medium ;-) this one seems rather straightforward
<rbasak> I favour using pristine-tar, but I don't know where to get that tar from.
<rbasak> OK - I'll leave it to you then? Feel free to force push when you're ready.
<squisher> yeah, pristine tar doesn't make much sense, but I think I had a branch for it anyway -- but I don't remember right now how I created the tarball
<squisher> seems kinda superfluous though, or should we keep doing pristine-tar when there is no official tarball?
<rbasak> The reason I'd like it is that I think we should keep a record of what orig.tar.gz we did upload in a pristine-tar branch, even if upstream doesn't publish it.
<squisher> ok, sure
<squisher> so I'd guess that the pristine-tar branch then doesn't need an update even, but I'll double check
<rbasak> Yes that sounds right
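The record-keeping rbasak describes is exactly what pristine-tar provides: it stores a small binary delta in a pristine-tar branch from which the exact orig tarball can be regenerated later. A sketch with a hypothetical package name:

```shell
# After producing foo_1.0.orig.tar.gz (e.g. from the upstream git tag):
#   pristine-tar commit ../foo_1.0.orig.tar.gz upstream/1.0
# Anyone can later reproduce the byte-identical tarball:
#   pristine-tar checkout foo_1.0.orig.tar.gz
# git-buildpackage drives this automatically:
#   gbp import-orig --pristine-tar ../foo_1.0.orig.tar.gz
```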
<squisher> great. Do you know how that "description" thing works?
<squisher> just for the anon-git web listing
<rbasak> squisher: https://www.kernel.org/pub/software/scm/git/docs/gitweb.html suggests "git config gitweb.description" or a flat file with the contents. Both of which I think maybe need jamespage to amend for us.
<squisher> rbasak, not a high prio of course but I think it'd be nice to have
<squisher> let's get the initial layout done first, then request that
<rbasak> OK
<squisher> rbasak, I'll try to get that stuff done tonight and will ping you again afterwards
<rbasak> Thanks!
<Mik3Silv3r> hi
<Mik3Silv3r> how are you?
<Mik3Silv3r> i need help with samba shares... any?
<Mik3Silv3r> i need to block access to a subfolder of a share. the share is accessible to all, but i need that one user not be able to access it
<haithar> re
<haithar> Does anyone have a hint whether Ubuntu 10.04 LTS support will actually end at the beginning of April or the end of April?
<haithar> unfortunately this matters to us (don't ask..;))
<mdeslaur> haithar: april 29th 2015
<haithar> mdeslaur: a load off my mind, thank you kind sir! :)
<mdeslaur> haithar: yw
<haithar> Also, I don't understand the full bugfix lifecycle, can someone tell me if this is now ready to be coming with a Tahr upgrade or needs more work/support? -> https://bugs.launchpad.net/ubuntu/+source/squid3/+bug/1336742
<pmatulis> haithar: nothing has been done for 14.04 and it's not clear if anything will
<haithar> pmatulis: thanks! About understanding "sponsorship" in this context, do you think comments 16 and 18 suggest that I can contact him and offer money to Oleg / his favourite charity /... to keep pushing it so it gets into trusty?
<pmatulis> haithar: ubuntu development is not a free-for-all.  changes are backported according to certain guidelines.  https://wiki.ubuntu.com/StableReleaseUpdates
<Robert_Labrie> I'm on LTS, and I don't need to be, how can I get on vivid? Is it just updating my sources.list or is there some ubuntu way of doing it right?
<shauno> you probably need to change 'lts' to 'normal' in /etc/update-manager/release-upgrades  and then run do-release-upgrade
<shauno> (but personally I'd stick on lts unless you have a good reason not to, particularly on -server)
<Robert_Labrie> shauno thanks! It's a sandbox, when I downloaded the ISO I just grabbed the first one it said
<Robert_Labrie> I use server so it doesn't harangue me to install the desktop on what's going to be a headless VM that I SSH/CIFS into
<Robert_Labrie> but dude, thanks so much for the tip
<aboSamoor> hi, I am getting a dependency error while installing libffi-dev (http://paste.ubuntu.com/10570803/) I am using ubuntu 14.04 server
<rharper> rbasak: smoser: either of you played with overlayfs in vivid yet?  when I mount something up it complains that the merged mount point isn't in fstab;  if I just put an entry with the mount point in there, it all works fine;  overlayfs (probably the older version in trusty) doesn't ask for an entry in fstab
<rbasak> rharper: I've not tried it directly no. Does mount not put it in the fstab when you mount with it?
<rbasak> Oh, hold on. I'm thinking mtab.
<syeekick> about to install ubuntu server tommorow on my xeon machine. I was wondering what apps i should use to stream to a standalone smart tv?
<squisher> syeekick, I haven't used it very extensive, but I think minidlna may be worth looking at
<rharper> rbasak: it seems not;  also I tried to do an ubuntu-bug on vivid, but it says my kernel isn't supported (which seems odd to me, but I'm probably missing something here)
<sarnold> aboSamoor: apt-get update first?
<aboSamoor> sarnold: it did not fix the problem, still unmet dependency!
<sarnold> aboSamoor: try dpkg --get-selections | grep -v install
<rharper> rbasak: I think I've been putting an error in the mount command, and it just spits that error out;  I've got something working now (sudo mount -t overlay overlay -olower=X,upper=Y,workdir=Z /path/to/merged   -- that's working.  so nm then
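For anyone replaying this thread: the mainline overlay filesystem (the "overlay" fstype in vivid's kernel) spells the options lowerdir/upperdir/workdir, and workdir must be an empty directory on the same filesystem as upperdir. Paths here are placeholders:

```shell
# sudo mount -t overlay overlay \
#   -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work  /merged
# fstab equivalent:
#   overlay  /merged  overlay  lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work  0 0
```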
<aboSamoor> sarnold: the list is empty
<sarnold> aboSamoor: dang, sorry, no more obvious ideas from me :(
<teward> sarnold: stupid question but what do you know about bind DNS servers?
<teward> sarnold: any idea if i can have the master being at my network, the 'slaves' all slaving from my 'master', and then have my domains' NS records point at the slaves?
<sarnold> teward: not much about bind itself, but that setup shuold work fine, so long as the slaves are in glue
<alchemistswl> Since nobody yet could help me I ask here, kinda OT but here we go: I have a problem regarding permissions and apache2. I set the www-data folder to /mnt/hdd/httpd and added myself to the www-data group. Everytime I upload a picture from my user, it's permission denied. I have set permissions for read/write too, didn't help much. Tried this: http://askubuntu.com/questions/19898/whats-the-simplest-way-to-edit-and-add-files-to-va
<smoser> rharper, i guess based on no other information ...
<smoser> https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1430025
<TJ-> Got an issue with apache 2.2 on Linux, using mod_dav_fs. When PUTing a file that ends ".pdf" the server issues a "403 Forbidden", but simply changing the file extension solves it. What should I be looking at in the server configuration that would affect this? I've already grep-ed for 'pdf' and can only find the mime-type
#ubuntu-server 2015-03-10
<teward> sarnold: wrt glue records, can you explain a little bit on what exactly that is...?
<teward> i understand the concept... but... :/
<teward> well, actually, i think i have a solution... *shrugs*
<teward> my more stupid question: how can i configure a mailserver to accept relay for aliases from google mailservers or other mailservers outside my network
<teward> or should i go and set up a full email server, postfix, etc. and mailman?
<jirido> Hi. I have installed vsftpd and want to sighn and validate my clients certificates against my own ca authority. Is it possible?
<jak2000> hi all
<jak2000> how to ask if a file exists in a bash scripting file?
<Sling> [ -f /pathtofile ]
<jak2000> i edn a line with a ; ?
<jak2000> *end
<jak2000> Sling: http://pastie.org/10015639
<jak2000> wich is wrong?
<Sling> jak2000: google for it :) you are missing a ; indeed
<Sling> if [ -f /file ]; then
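Sling's test, as a complete runnable fragment (note the `;` before `then`, which is what jak2000's pastie was missing):

```shell
f=$(mktemp)
if [ -f "$f" ]; then echo "found"; fi     # prints: found
rm -f "$f"
if [ ! -f "$f" ]; then echo "gone"; fi    # prints: gone
```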
<lordievader> Good morning.
<haithar> re
<haithar> Got this question yesterday, after some hours when going home I just left the window open, now it's past the window history, can't find where the channel is logged, sorry if someone has answered it :/  ssssooo,
<Sling> 21:45:15 < haithar> pmatulis: thanks! About understanding "sponsorship" in this context, do you think comments 16 and 18 suggest that I can contact him and offer money to Oleg / his favourite charity /... to keep pushing it so it gets into trusty?
<haithar> Context: I'd like to roll out trusty for our Squid proxy park. There's an open bug in trusty which drops cache hit rate from 80% to 10%. Since I can't reliably code in C and know zero about the Ubuntu bugfixing lifecycle, I'm willing to offer a small amount of $$ for anyone who's willing to push that bug through the pipes so it'd appear in trusty.
<haithar> >
<Sling> 22:09:03 < pmatulis> haithar: ubuntu development is not a free-for-all.  changes are backported according to certain guidelines.  https://wiki.ubuntu.com/StableReleaseUpdates
<Sling> that ?
<haithar> wow man thanks
<haithar> yep
<Sling> ok :)
<strikov> haithar: do you mean Vary-header bug?
<haithar> yep
<haithar> since we're using CDN for images etc. squid basically caches favicon.ico only :)
<OpenTokix> haithar: replace squid with nginx, - squid is a piece of shit
<strikov> haithar: i saw your merge request for trusty and the response that it needs to be fixed in vivid first; just fyi -- it has been fixed in vivid a few days ago
<haithar> can nginx reverse proxy?
<OpenTokix> haithar: yes
<OpenTokix> haithar: like seven billion times better then squid
<haithar> strikov: kudos man!
<OpenTokix> haithar: also support spdy-sockets and modern tech like that. - Also super easy to make a dual stack frontend, for your ipv4 backend etc.
<haithar> strikov: I'll be honest, I don't think I have the capacity to fully understand https://wiki.ubuntu.com/StableReleaseUpdates , even though I reckon it must contain all infos. Do you think there's a chance for this to appear in trusty? Is it appropriate to offer a small $$ for you to help that process?
<haithar> OpenTokix: thanks for the suggestion. Trying nginx is on my bucket list for years. I'll look into it.
<OpenTokix> haithar: do it, do it now. - You will hate yourself for wasting time on squid
<strikov> haithar: I don't think that $$ make any sense in that case. SRU process is mostly about (value for all users) / (potential issues for all users).
<haithar> OpenTokix: just to be on the safe side, can you send me a link with open nginx bugs in trusty? ;)
<OpenTokix> haithar: https://bugs.launchpad.net/ubuntu/trusty?field.searchtext=nginx&search=Search&field.status%3Alist=NEW&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.assignee=&field.bug_reporter=&field.omit_dupes=on&field.has_patch=&field.has_no_package= <-- all four of them, 50% relating to "default config"
<haithar> strikov: yeah that's something I don't get, ever since people are using CDNs, this issue renders Squid useless in trusty for reverse proxying. Sounds like a high-impact thing for me, but hey, I'm in sysops for like 2 months after a 10 year gap:)
<OpenTokix> so not really a bug per se
<haithar> OpenTokix: thanks, checking it now
<strikov> rbasak: Morning. What's your opinion on the squid3 vary-header bug? Do we have any chance to get positive response for the SRU? haithar really wants to have this bug fixed
<ReScO> how can i monitor with a cronjob if there are new updates?
<ReScO> without actually installing them?
<ReScO> i prefer to do this with a bash script
<OpenTokix> ReScO: yes, there is a program for it even
<OpenTokix> ReScO: apticron or something, you can tell it to either install, or mail about them etc.
<OpenTokix> yes, apticron
<lordievader> Or cron-apt.
<OpenTokix> lordievader: cron-apt will install, apticron report
<lordievader> Ah, check.
<lordievader> Thought you could set cron-apt to not install stuff though.
<ReScO> i don't want to receive emails
<OpenTokix> ReScO: Where do you want the report ?
<ReScO> i'm looking for a bash implementation so i can use curl to send a PushBullet message
<OpenTokix> ReScO: check the manpage for apt-get, it has tools for getting info about what updates are available
<rbasak> ReScO: /usr/lib/update-notifier/apt-check. This is what the motd uses.
<rbasak> ReScO: if you want more detail, you could use apt-get --simulate
<ReScO> i only need that, thanks :)
<ReScO> hmm, i'm having trouble splitting the output of that script, rbasak
<ReScO> $(echo $IN | tr ";" "\n")  IN=$(/usr/lib/update-notifier/apt-check)
<rbasak> ReScO: looks like it outputs to stderr. Which is bad. That should probably be a bug.
<rbasak> But you can redirect using 2>&1
<rbasak> And then eg. cut -d\; -f1 (and -f2)
<OpenTokix> /usr/lib/update-notifier/apt-check 2>&1|tr ";" "\n"
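Putting rbasak's and OpenTokix's pieces together: apt-check prints "updates;security" on stderr, so redirect first, then split. The sample value below is illustrative; on a real system the first line would replace it:

```shell
# Real invocation:  out=$(/usr/lib/update-notifier/apt-check 2>&1)
out='5;2'                     # sample: 5 updates pending, 2 of them security
updates=${out%%;*}            # strip everything from the first ';'
security=${out##*;}           # strip everything up to the last ';'
echo "pending: $updates, security: $security"   # prints: pending: 5, security: 2
```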
<arcsky> what logs do you guys sends to your external syslog servers?
<jpds> All of them from syslog?
<jpds> auditd.
<arcsky> which logs from /var/log/ are more important?
<jpds> arcsky: Only you know the answer to that question.
<jpds> arcsky: My RSS reader logs things to tt-rss.log.
<jpds> arcsky: I'm not too fussed about that so I just put it into my backups.
<caribou> utlemming: smoser: anyone of you aware of issues with the trusty ubuntu cloud image not applying the "ubuntu-pass=" kernel parameter ?
<caribou> just checking before I dig in further
<caribou> I can reproduce the problem easily with the current image. works fine with Precise
<smoser> caribou, i dont know that that was ever really officially supported.
<smoser> 'uncloud-init', right ?
<caribou> smoser: it's well documented here : https://help.ubuntu.com/community/UEC/Images/KVMKernelOptions
<caribou> smoser: yes
<caribou> smoser: doesn't mean it is supported though
<smoser> caribou, i'd much rather teach someone how to use the nocloud seed.
<smoser> http://ubuntu-smoser.blogspot.com/2013/02/using-ubuntu-cloud-images-without-cloud.html
<caribou> that could be an option
<caribou> smoser: ah, yes I used to do a lot of that before uvtools. Still do for some debian makedumpfile testing
<smoser> uvtool does it under the covers.
<arcsky> hey i have added a line in my /etc/rsyslog.d/50-default.conf, auth.* @192.168.168.199:516 and restarted. but now when i do an ssh login i cant see that it sends a syslog message to that host. what do i do wrong?
<Walex> arcsky: 'man rsyslog.conf | grep -w port'
<arcsky> Walex: No manual entry for rsyslog.conf
<arcsky> Protocol udp/tcp [default udp] and i use udp
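Walex's man-page hint, decoded: the conventional syslog/UDP port is 514, not 516. Assuming arcsky's collector listens on the default port, the fixed line and a quick verification would be:

```shell
# /etc/rsyslog.d/50-default.conf
#   auth,authpriv.*  @192.168.168.199:514    # one @ = UDP, @@ = TCP
# then:
#   sudo service rsyslog restart
#   logger -p auth.notice "rsyslog forward test"
# and watch the wire:  sudo tcpdump -n udp port 514
```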
<samba35> i was using ovs version 1.9 with bridge compatibility mode (brcompat)  but now after host upgrade i have to use ovs version 2.0.2 . now i have to create a VLAN with ovs 2.0.2 but i have no idea which mode i should use
<samba35> i want to use vlan in guest with 1 interface
<samba35> and 2 nic with pci- passthrought but that is not problem at present
<samba35> using ubuntu 14.04.2
<samba35> is there any way i can use older version of openvswitch (ovs)
<samba35> i am using ubuntu 14.04.2 , i have openvswitch version 2.0.2 but i want to use 1.9 version of ovs ,is there a possiblity to use old version of software
<designbybeck> I have a test Ubuntu server I'm playing with remotely. I'm ssh'd into it and just ran updates. what is the best way to setup and remote into it via GUI?
<teward> designbybeck: ubuntu servers don't get GUIs by default
<teward> designbybeck: you have to manually install a setup, install / configure desktop sharing, etc.
<designbybeck> correct teward I think I did install Xubuntu
<designbybeck> I'm using cloud at cost servers.. and they have a console that pulls it up
<designbybeck> I'm just trying to learn how to do it without the console teward
<RoyK> just install ssh server
<RoyK> apt-get install openssh-server
<designbybeck> RoyK, I can get in via ssh, I'm trying to get into GUI
<designbybeck> like Remote Desktop
<designbybeck> RDP?
<RoyK> designbybeck: not a server thing
<designbybeck> Just trying to learn
<designbybeck> I'll keep playing with it
<teward> designbybeck: that's not a server thing though - that's what me and RoyK are saying
<designbybeck> right gotcha
<designbybeck> thanks
<bekks> designbybeck: For getting into the GUI, you have to install one, before.
<designbybeck> that is what I'm asking some other coworkers, they are debating ;)
<teward> this also isn't Windows
<teward> Windows Server does a GUI - but that's Windows
<bekks> Yeah :)
<designbybeck> yeah I don't do windows!
<designbybeck> I'm a Linux power end user so to speak!
<bekks> designbybeck: So all you need is ssh :)
<designbybeck> Just don't know much about server side and such
<teward> designbybeck: then you can't claim to be a power end user
<teward> designbybeck: I'm a power end user - i use the terminal for a lot of things that are 'simple' over the GUI but that's by choice
<teward> I also backport software I need for both server *and* GUI
<teward> but you can't claim to be a power user if you don't understand the console
<teward> anyways, i am trying to achieve the most basic way of setting up a mailing list - via postfix aliases - problem is it's saying relay denied.  I need google mail servers and my university's mail server to communicate to my server and work - not sure how to achieve this though
<designbybeck> teward, ok maybe power user was a bad term to use. What I meant was I know a good deal of the common Linux DE's and managing DE's as well as design/media/photography using Linux and Open Source Software
<patdk-wk> designbybeck, you want #ubuntu
<patdk-wk> gui's are not part of server, so people in here will have limited knowhow to do that, if at all
<teward> ^ that
<patdk-wk> cause it's just not done in here
<patdk-wk> but most likely you're looking for vnc or nx
<designbybeck> yes working on trying to get logged in with vnc, thank you guys
<patdk-wk> teward need to know what you mean by a mailinglist and aliases file
<patdk-wk> you mean your running a *real* mailing list software? or just using aliases?
<teward> patdk-wk: just using aliases
<patdk-wk> and the relay denied is totally different issue
<patdk-wk> so likely, it's just badly misconfigured then
<patdk-wk> postconf -n dump somewhere
<patdk-wk> and probably a few log lines just so we all know
<teward> patdk-wk: well, bad news is, i am currently debugging a ftbfs in nginx on my sbuild instance, trying to figure out why
<teward> then i have other things for work on my todo list that need to be done within the next 30 minutes before i can go onto working on that part
<patdk-wk> well, I will probably only be around for the next few years
<teward> (i'm only alive now 'cause i was waiting for that build to finish)
<teward> PO
<teward> :P *
<edenist> hey
<delinquentme> command line util to list all headers at a given url?
<sarnold> delinquentme: HEAD ?
<delinquentme> sarnold, curl -I
<sarnold> that's three more characters :)
<arcsky> how can i send syslog messages to an external syslog server? i get no success with the crappy rsyslog
#ubuntu-server 2015-03-11
<Patrickdk> sarnold, this is a ubuntu channel, head is not to be used in here :)
<Patrickdk> arcsky heh?
<Patrickdk> it's like one line
<arcsky> Patrickdk: how?
<Patrickdk> @syslogserver
<Patrickdk> or for tcp @@syslogserver
<arcsky> does not work
<Patrickdk> where is your config?
<Patrickdk> you're doing something wrong
<Patrickdk> I have done this to thousands of servers using rsyslog without issue
<Patrickdk> using udp, tcp, and relp
<arcsky> Patrickdk: http://pastebin.com/FYgWhWC7
<Patrickdk> I'm surprised rsyslog runs at all with all that stuff
<Patrickdk> comment out lines 61, 32, 64, and either 66 or 67
<Patrickdk> what DOES your receiving side support?
<sarnold> Patrickdk: what's wrong with HEAD? :)
<Patrickdk> nothing, if done properly
<arcsky> Patrickdk: its udp on the server side (syslog-ng) which works super
<Patrickdk> and it is *listening* on 516?
<Patrickdk> cause the *normal* port is 514
<Patrickdk> so comment out all that stuff, 61, 63, 64, and 67
<arcsky> Patrickdk: listening on 516 yes. udp. normal 514 yes. ok let's try
<arcsky> Patrickdk: no success
<Patrickdk> run, tcpdump port 516
<Patrickdk> and see if you see any packets when syslog messages are generated
<arcsky> nothing
<Patrickdk> odd
<Patrickdk> try putting *.* @192.168.168.199:516 before the $IncludeConfig line
<Patrickdk> restart rsyslog, and see if tcpdump sees anything then
<Patrickdk> maybe one of the included configs has a stop command
<arcsky> Patrickdk: can it be something that i had syslog-ng running too?
<Patrickdk> heh?
<Patrickdk> ubuntu normally doesn't allow syslog-ng and rsyslog to be installed at the same time
<Patrickdk> unless you work hard to break it
<arcsky> case closed. thanks
<sarnold> what was it?
<Patrickdk> we will never know
<arcsky> :D
<arcsky> i did install a fresh Ubuntu server and added *.* @192.168.168.199:516 and restarted it .. worked..
<arcsky> thanks Patrickdk and good night
<arcsky> must have been a conflict with syslog-ng
<Patrickdk> well, normally, when you install syslog-ng, ubuntu uninstalls rsyslog
<Patrickdk> :)
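[editor's note: the working setup arcsky ended up with fits in a one-line drop-in; address and port are taken from the discussion above (`@` forwards over UDP, `@@` would be TCP):

```
# /etc/rsyslog.d/60-remote.conf
*.* @192.168.168.199:516
```

Restart with `sudo service rsyslog restart`, then verify with `tcpdump port 516` as Patrickdk suggested.]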
<keithzg> Hmm, interesting, the current Windows 10 Technical Preview is unable to access SMB shares from our fileserver or any other boxes around the office network (all running 14.04, samba 4.1.6), immediately complains that "you might not have permission to access this network resource". Windows 8.1 and older clients connect just fine still.
<WACOMalt> Hey folks. How can I allow ssh on more than the default port 22?
<WACOMalt> standard ssh ports are blocked at my work, cant connect to my server
<sarnold> WACOMalt: see ListenAddress in sshd_config(5)
<WACOMalt> ah there we go
<WACOMalt> should I just add a new line under "Port 22" with like.. "Port 25678" or whatever?
<WACOMalt> currently my ListenAddress line is commented out, is that normal?
<Patrickdk> yes
<WACOMalt> ok
<Patrickdk> not sure if multiple are supported or not
<WACOMalt> oh
<Patrickdk> an iptables redirect rule could be used too
<WACOMalt> would that more likely allow more than one port?
<sarnold> oh heh there is a "port" config :) I found the listenaddress and didn't look further
<sarnold> looks like multiple port commands can be given
<Patrickdk> then yep :)
<WACOMalt> awesome
<WACOMalt> thanks for the info :)
<Patrickdk> iptables redirect works for anything though :)
<Patrickdk> it's a backup option
<WACOMalt> ok, if this doesnt go I'll read up on that
<WACOMalt> never done iptables stuff
<sarnold> yeah, but the nice bit about config is that netstat -anp will give the right answer :)
<WACOMalt> ah, right
<sarnold> if you do it via iptables it won't be as obvious..
<WACOMalt> I dont mind it being obvious
<WACOMalt> or do you mean obvious to my employer?
<sarnold> obvious to whoever admins the machine :)
<WACOMalt> that'd be me :)
<sarnold> it'll be identically obvious to employers either way
<WACOMalt> trying to remember the command to restart sshd on ubuntu. tried /etc/init.d/ssh restart and service sshd restart
<WACOMalt> but the fact that it's not giving me status or kicking me out of my current ssh session tells me thats not working...
<sarnold> sudo service ssh restart should do it
<sarnold> it shouldn't disconnect existing connections either
<WACOMalt> oh cool
<WACOMalt> there we go, got it working
<WACOMalt> thanks sarnold  and Patrickdk
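[editor's note: the change discussed above amounts to stacking Port lines in sshd_config; sshd binds one listener per line. The second port number is WACOMalt's example:

```
# /etc/ssh/sshd_config
Port 22
Port 25678

# apply with:  sudo service ssh restart   (existing sessions stay up)
#
# Patrickdk's iptables alternative would look roughly like
# (a sketch, not from the log):
#   iptables -t nat -A PREROUTING -p tcp --dport 25678 -j REDIRECT --to-port 22
```

As sarnold notes, the sshd_config route has the advantage that `netstat -anp` shows the extra port directly.]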
<SuperLag> I did "apt-get install lamp-server^". It installed mysql 5.5.41. If I download the .deb files for 5.7.6, is there an easy way to upgrade, or will stuff break?
<WACOMalt> so this is offtopic I'm sure. Tell me to shut up if you want. :)  But is there any way when I am sitting at work, to test and see what ports arent blocked so I can set my ssh server to one of them?
<sarnold> WACOMalt: not easily; nmap -p1-65535 ip.add.res.s would scan all ports on your host; if you see some "filtered" then those are being dropped by a firewall, but firewalls can be configured to reject packets too, in which case it might not be obvious which ones get through and which ones are just not open on your computer
<sarnold> WACOMalt: note that a portscan like that is liable to set off some blinking lights at your employer... if they closely monitor things, this one is liable to show up.
<WACOMalt> Well, I'm a lower level tech specialist guy. They dont mind me getting around so much
<WACOMalt> as long as they dont have to make a door for me :)
<sarnold> probably port 80 just works :) do you need a web server:)
<WACOMalt> I do :(
<WACOMalt> thats the main feature of this server
<dtscode> can someone help me with these errors? https://bpaste.net/show/514c1b3e8eb5
<dtscode> anyone?
<lordievader> Good morning.
<marus> hello, is there a good way to restore file/folder from encrypt filesystem?
<jpds> Unlock the filesystem?
<lordievader> marus: If you have forgotten the password then you've lost the files. That is the whole point of encryption.
<arcsky> rsyslog.conf, *.* sends all from /var/log right? even dirs?
<jpds> arcsky: No.
<jpds> arcsky: It sends what's sent to syslog.
<jpds> arcsky: Do you use Puppet by any chance?
<jpds> arcsky: Apache for example doesn't log to syslog.
<arcsky> jpds: nope i dont use puppet. how do i send evryhing in /var/log to a external syslog?
<jpds> arcsky: You can't.
<jpds> arcsky: That's what I was telling you yesterday.
<jpds> arcsky: Best you could do for that is rsync everything somewhere else.
<arcsky> ok
<arcsky> to bad , rsyslog
<jpds> rsyslog works fine.
<jpds> It's just that not everything is made to log to syslog.
<jpds> I wouldn't want my Apache stuff in syslog.
<jpds> I'd rather every site logged to its own log.
<arcsky> jpds: if i do login with ssh with wrong passwd. it get logged to auth.log is that a part of syslog right?
<jpds> arcsky: Yes.
<arcsky> jpds: ok i understand
<arcsky> so if i want for example httpd logs i have to use rsync?
<jpds> arcsky: Yep, just for /var/log/apache2/ though.
<arcsky> oke rsync sends to syslog or only via ssh/samba/nfs ?
<jpds> arcsky: rsync can go via ssh.
<jpds> Much easier.
<arcsky> ok
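[editor's note: a sketch of the rsync-over-ssh approach jpds suggests, as a cron entry; the host and destination path are examples, only the Apache log path is from the discussion:

```
# crontab -e  (nightly at 00:30)
30 0 * * * rsync -az -e ssh /var/log/apache2/ backup@loghost:/srv/logs/apache2/
```
]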
<Lachezar> Hey all: Ubuntu Server Ubuntu 14.04.2 LTS: At apt-get upgrade: dmesg: Request for unknown module key 'Magrathea: Glacier signing key: ...' err -11
<Lachezar> The dmesg line is repeated 11 times
<Lachezar> What's wrong?
<Lachezar> Hm. Is it possible, that a previous apt-get update+upgrade has replaced the modules for the current kernel (version), so now the modules are signed with one key, but the kernel that was loaded has a different key? Because I can see "Loaded X.509 cert 'Magrathea: Glacier signing key: ...'", but the key does not match…
<Walex> Lachezar: web search for "Magrathea: Glacier signing key"
<jpds> Lachezar: I think that there's a different key for every kernel.
<Lachezar> Walex: already did that. Nothing helpful.
<Lachezar> jpds: I remember, that I had an update that upgraded the kernel packages WITHOUT installing new packages, but rather upgrading the current ones. This seemed odd, as almost every time I get a kernel upgrade the new kernel comes as new packages. I was not sure if it was something to be warned about.
<Lachezar> jpds: Might that be the reason for having the same kernel (version): history.log:Upgrade: linux-image-extra-3.13.0-46-generic:i386 (3.13.0-46.76, 3.13.0-46.77) ...
<pmatulis> Lachezar: boot into the old kernel and see
<Lachezar> pmatulis: I am booted into the old kernel.
<Lachezar> pmatulis: The kernel update did not request a reboot, so I'm still sporting the kernel before the upgrade, possibly with modules after the upgrade. Looks like a botched partial upgrade. I've scheduled a reboot at night. Will see tomorrow.
<smoser> randomly speaking out loud, wondering if someone has a solution for this.  i want to log consoles of libvirt domains with conserver-server
<smoser> but if the domain isnt running, then 'virsh console <name>' will fail, and eventually conserver-server will give up
<smoser> it seems like you could poke at conserver-server on libvirt events to tell it to re-read or re-try its consoles... i'm sure that's all possible. but just wasn't obvious
<smoser> and i was somewhat surprised to not know of a solution
<smoser> s/not know/not find/
<rbasak> smoser: maybe wrap it?
<rbasak> smoser: in something that retries indefinitely. An expect script maybe.
<rbasak> (as it wants to provide a TTY really)
<smoser> right. yeah, thats what i thought. and libvirt provides a way to get events for start stop of the thing.
<smoser> so its possible
<smoser> i just kind of fifugred someone would have odn ethis beofre.
<smoser> typos aside ;)
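[editor's note: rbasak's "wrap it in something that retries" idea can be sketched as a small POSIX-shell retry wrapper for conserver to run instead of `virsh console` directly. Everything here is illustrative - the function name, the retry count, and the backoff are not from the log:

```shell
# Hypothetical retry wrapper: re-run a console command until it succeeds,
# backing off while the libvirt domain is down. Substitute any command
# for "virsh console" when testing.
retry_console() {
    cmd="$1"; name="$2"; tries="$3"
    i=0
    while [ "$i" -lt "$tries" ]; do
        # attempt to attach; on success the console holds the TTY
        $cmd "$name" && return 0
        i=$((i + 1))
        sleep 1   # domain not running yet; back off and retry
    done
    return 1
}

# Intended use from conserver (runs effectively forever):
#   retry_console "virsh console" mydomain 999999
```

A fancier version would, as smoser suggests, subscribe to libvirt start/stop events instead of polling.]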
<helpseeker> Hello
<helpseeker> I need some help for using ubuntu server for an idea I had
<helpseeker> Is someone in here which can advise me
<smoser> well, you can ask. someone might respond. maybe not.
<helpseeker> okay thx
<helpseeker> I have a team in a game with five guys
<helpseeker> And I have a root server with ubuntu 14.04
<helpseeker> We want that server to recieve some streams
<helpseeker> one stream is a livestream from a game
<helpseeker> five webcams with streams from five different computers
<helpseeker> the server has to add these six streams into one and forward that to twitch
<helpseeker> is this possible?
<helpseeker> not "add". the better word is "merge"
<helpseeker> on the right: game-stream; on the left five webcams in a vertical row
<helpseeker> I do not find any solution for that by googling that or get any answers by some forums
<jrwren> helpseeker: yes, it is possible. it has nothing to do with ubuntu-server. You would need to write software to do this.
<helpseeker> Ok, I thought someone built that before and I did not find it
<helpseeker> What kind of tutorial would you prefer in this case to get a solution?
<helpseeker> Or what type of software do you mean? An addon maybe for a darwin-thing?
<Blinkiz>  Hello. Am running ubuntu 14.04 and have a openvpn server running in a unprivileged container. My problem is that the container can not access /dev/net/tun. How can I solve this?
<Blinkiz> oh, talking about LXC here
<ivoks> Blinkiz: two ways
<ivoks> Blinkiz: run container as root
<ivoks> Blinkiz: or allow container to access that device
<Blinkiz> ivoks, hi. thanks for the suggestions above
<Blinkiz> ivoks, I like the idea about giving the container access to that device. How can I do this?
<Blinkiz> I have in my config file "lxc.cgroup.devices.allow = c 10:200 rwm". Is this what you mean allowing access?
<ivoks> Blinkiz: yeah, something like that
<ivoks> Blinkiz: i don't know it by heart
<ivoks> i suggest googling lxc device passthrough
<RudeViper> I just did a complete clean install of 13.04 (14.04 keeps failing for some reason - none given) - when I try to do a "sudo apt-get update" I keep getting lines and lines of "Failed to fetch http://*****" can someone tell me how to fix this?
<Blinkiz> ivoks, the line is correct. Nice search string, will try that.
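[editor's note: the usual recipe for giving a container /dev/net/tun combines the cgroup allow line Blinkiz already has with a bind mount of the device node. Syntax is LXC 1.x as shipped with 14.04; treat this as a sketch to verify against lxc.container.conf(5):

```
# /var/lib/lxc/<name>/config
lxc.cgroup.devices.allow = c 10:200 rwm
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file
```
]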
<patdk-wk> RudeViper, 13.04 isn't supported anymore, it's too old
<patdk-wk> you could update everything to use the archive repo though
<RudeViper> patdk-wk - how do I do that?
<patdk-wk> dunno, google?
<RudeViper> oh this is great cause 14.041 won't work either
<RudeViper> 14.04
<patdk-wk> https://help.ubuntu.com/community/EOLUpgrades
<patdk-wk> http://askubuntu.com/questions/91815/how-to-install-software-or-upgrade-from-an-old-unsupported-release
<RudeViper> doing that now - maybe by upgrading that will solve the issues I was having with a clean install of 14.04
<RudeViper> patdk-wk - thanks - that seems to be working
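[editor's note: the repo switch from the linked EOLUpgrades guide boils down to pointing sources.list at old-releases.ubuntu.com. Sketched here against a throwaway copy so it can be inspected first; swap in /etc/apt/sources.list once it looks right (the sample release name is illustrative):

```shell
# Build a sample sources.list, then rewrite archive/security hosts
# to the old-releases mirror, as the EOLUpgrades wiki page describes.
printf 'deb http://archive.ubuntu.com/ubuntu raring main\ndeb http://security.ubuntu.com/ubuntu raring-security main\n' > /tmp/sources.list
sed -i -e 's/archive.ubuntu.com/old-releases.ubuntu.com/g' \
       -e 's/security.ubuntu.com/old-releases.ubuntu.com/g' /tmp/sources.list
cat /tmp/sources.list
```

After that, `apt-get update` stops failing to fetch and `do-release-upgrade` can walk the release chain (13.04 → 13.10 → 14.04, as RudeViper then does).]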
<linocisco> hi all
<linocisco> Good night or good morning
<linocisco> hi all, lets say I have a domain name and single ubuntu mail server configured using domain name. If internet down to my server, and I dont wanna lose all emails, how could I do?
<linocisco> if my internet is down for 1 day or 1hr due to some reason, how can I protect now to lose incoming emails especially during these period?
<linocisco> if my internet is down for 1 day or 1hr due to some reason, how can I protect not to lose incoming emails especially during these period?
<RudeViper> ok patdk-wk - have run those scripts - would you be able to tell me if there is a command to find out what version I am running now? Just to see if all went alright.
<patdk-wk> cat /etc/release? or /etc/issue
<patdk-wk> forget exactly
<RudeViper> patdk-wk ok thanks - will run those as soon as updates are finished
<patdk-wk> ya /etc/issue will show you
<RudeViper> ok it is showing 13.10 - so have to do it again to get 14.04 - thanks - you have been a really big help
<RudeViper> after I do this - I need to put a gui on it - for some stuff I need to do later- however - I don't want it starting with the server - tried before and it always starts on reboot - how can I prevent that from the get go?
<patdk-wk> heh?
<patdk-wk> was there a question in that?
<RudeViper> yes - how can I prevent say xfce4 from starting with the server after I install it - I only want the gui to run when I call it manually
<patdk-wk> disable the lightdm service?
<patdk-wk> that I don't really know
<RudeViper> ok - I'll keep looking while it upgrades again
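[editor's note: on 14.04, which uses Upstart, patdk-wk's "disable the lightdm service" suggestion is typically done with an override file; this is a sketch assuming lightdm is the display manager xfce4 pulled in:

```
# /etc/init/lightdm.override
manual
```

With that in place the GUI no longer starts at boot, but `sudo service lightdm start` brings it up on demand.]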
<Pwnna> does anyone here know why the memory cgroup are not started during server boot even tho cgroup_enable=memory is specified in the kernel boot options?
<Pwnna> i'm very confused.
<tgm4883> Alright everyone, been banging my head against this for too long. Probably missing something simple. I have a script that backs up mysql dbs and is supposed to scp files to another server. The SCP portion fails though, as it tells me the file isn't found
<tgm4883> Here is the relevant portion of the backup script
<tgm4883> http://pastebin.ubuntu.com/10581658/
<Pwnna> https://pastebin.mozilla.org/8825330
<tgm4883> The offending line being line 9
<tgm4883> The error message I get is        scp: /srv/backups/MySQL/testdatabase.2015-03-11-12:09:51.gz: No such file or directory
<tgm4883> This seems to indicate that $FILE  (or ${FILE}) isn't being seen, despite it echoing correctly in the previous line
<tgm4883> It echoes something similar to /backup/mysql/testdb.2015-03-11-12:09:52.gz
<tgm4883> actually just fixed it. Super weird, but scp didn't like colons being in the filename from the timestamp
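[editor's note: tgm4883's fix works because scp parses everything before the first ':' in an argument as a remote host name, so a timestamp like 12:09:51 inside the filename breaks the copy. A colon-free timestamp avoids it (the backup path is from the pastebin; the date format is one reasonable choice):

```shell
# Build a timestamp without colons so scp doesn't mistake the
# filename for host:path syntax.
STAMP=$(date +%Y-%m-%d-%H%M%S)
FILE="/srv/backups/MySQL/testdatabase.${STAMP}.gz"
case "$FILE" in
  *:*) echo "unsafe for scp: $FILE" ;;
  *)   echo "safe for scp: $FILE" ;;
esac
```

(Alternatively, `scp ./file:with:colons host:dest` also works, since the leading `./` disambiguates a local path.)]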
<keithzg> Anyone have any ideas on how to debug and/or work around Windows 10's inability to access samba shares? (at least those shared from 14.04)? I'm sadly seeing nothing in the main /var/log/samba/log.smbd when I make connection attempts.
<keithzg> Windows 10 connect to the old version of Samba (3.4.4) running on the Drobo we have kicking around just fine, but every 14.04 machine I have (all running Samba 4.1.6-Ubuntu) results in an immediate failure to connect, just "you might not have permission to access this network resource", which, considering I've made sure to enable guest logins on all machines is definitely a mistaken impression on Windows' side...
<keithzg> Of course it then admonishes me to "Contact the administrator of this server to find out if you have access permissions." Well thanks Windows, but I've talked to the admin, and he doesn't know why this isn't working because he's also me :P
<keithzg> This is if I try to access the computer via SMB, ex. "\\eliot" as the path, If I try to access a share directly, ex. "\\eliot\public", then I get "Error code: 0x80004005, Unspecified error". Helpful!
<beisner> keithzg, Win10 disables guest access by default.  http://answers.microsoft.com/en-us/insider/forum/insider_wintp-insider_web/error-the-account-is-not-authorized-to-login-from/ffa63cc0-92d0-4639-9e3f-4d50100c1841
<keithzg> beisner: Aha! Thanks, my google-fu was apparently failing me, I hadn't found that (weird that my text isn't actually the same, but it does seem to be the same error)
 * beisner will have to do his annual fire-up-the-latest-windows-version vm some time soon.  prob win10 tech preview.
<keithzg> Win10 tech was a bit problematic for a while, but the latest VirtualBox and its corresponding guest additions do finally support it well.
<beisner> keithzg, i haven't messed with it personally, but i recall that being one of the 'secure(er) by default' changes.
<keithzg> beisner: I'm a bit baffled by it, though, since it's not like Windows lets me attempt a user/password login, it just somewhat cryptically fails when it would have been able to log in, there's no option to even attempt a secure login. So they're forcing a relatively large change upon any servers/NAS's to support this behaviour. I'm not 100% sure they're wrong, by any stretch, but it does seem rather aggressive and a bit patronizing.
<beisner> keithzg, fwiw, back when i used win+samba, i always made sure that my windows username/password sets also existed as linux usernames/passwords, and that those users were permitted in samba shares.   which, appears to be exactly what Win10 is urging.   i would bet there's a reg hack though to re-enable anonymous/guest.
<keithzg> beisner: Oh, there's definitely a reg hack. I just got tons of flack from folks at my company when I tried to enforce precisely that, because folks around here really hate authenticating ever.
<keithzg> beisner: Err, to be clear, I mean that I ran into flack when I tried to set things up how you did.
<patdk-wk> give them the option to use 2factor auth
<beisner> keithzg, the trick is this:   as long as your windows username and password Exactly match your linux/samba username and password, they will never be prompted to authenticate.
<beisner> but they will authenticate nonetheless.
<patdk-wk> are you using a AD server?
<patdk-wk> if so, just bind to that, and let it import all those users
<patdk-wk> no need to keep passwords in sync
<beisner> ^ yes that's even better.   my example is a home network, a couple of kids devices connecting to shares.
<keithzg> patdk-wk: Naw. Each Windows machine has a local login. Almost everyone runs around as Administrator (which also precludes having usernames/passwords on the Linux server to match each user). I've gotten very little traction trying to convince folks how bad of an idea this is.
<patdk-wk> ya, that is extremely painful
<keithzg> Yup :(
<keithzg> Registry hack it is, I guess. Le sigh.
<patdk-wk> just push it out with a gpo :)
<patdk-wk> after you setup AD :)
<keithzg> haha
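[editor's note: the registry hack keithzg settles on is usually given as re-enabling insecure guest auth on the Windows client; this is the commonly cited key (an assumption to verify against the Microsoft thread beisner linked):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"AllowInsecureGuestAuth"=dword:00000001
```

As patdk-wk says, in a domain this would normally be pushed via GPO rather than per machine.]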
#ubuntu-server 2015-03-12
<zkvvoob> Hello everyone. I'm looking for some extremely kind soul who would be willing to help me figure out a mess that I got into after upgrading from 12.10 to 13.10 and then 14.04.2. No websites load, I can't get the ISPConfig that had been managing the vhosts to run and besides Apache is giving me some strange errors. Please?
<Sling> zkvvoob: what errors exactly?
<Sling> (i know a lot about apache, nearly nothing about ispconfig)
<zkvvoob> Sling:
<zkvvoob> * Restarting web server apache2
<zkvvoob> AH00548: NameVirtualHost has no effect and will be removed in the next release /etc/apache2/conf-enabled/000-ispconfig.local.conf:62
<zkvvoob> (98)Address already in use: AH00072: make_sock: could not bind to address [::]:8081
<zkvvoob> (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:8080
<zkvvoob> no listening sockets available, shutting down
<zkvvoob> AH00015: Unable to open logs
<zkvvoob> Action 'start' failed.
<zkvvoob> The Apache error log may have more information.
<Sling> please use a pastebin for pasting more than a few lines
<zkvvoob> sorry
<Sling> but it seems that its trying to listen on ports 8080 and 8081 but there is already something listening there
<Sling> see 'lsof -i:8080' and 'lsof -i:8081' to check what is listening
<zkvvoob> Nothing happens, just new command line
<Sling> are you running this as root?
<Sling> or with sudo
<zkvvoob> sudo su, root
<OpenTokix> Sling: netstat -anp |grep :8080
<Sling> OpenTokix: lsof is a lot easier ;)
<zkvvoob> OpenTokix: that did nothing either :(
<OpenTokix> zkvvoob: if you do a grep :8080 /etc/apache2/sites-enabled/*   Maybe you are trying to bind multiple times to some ip or such? And apache will kill itself.
<zkvvoob> OpenTokix: Maybe it's this, I got two lines that go "/etc/apache2/sites-enabled/000-ispconfig.vhost:NameVirtualHost *:8080
<zkvvoob> and then /etc/apache2/sites-enabled/000-ispconfig.vhost:<VirtualHost _default_:8080>
<OpenTokix> bleu.... ispconfig
<OpenTokix> That braindead howto infecting everything
<OpenTokix> ok
<OpenTokix> zkvvoob: That is probably it, yes - remove the _default_ one
<Sling> uuh
<Sling> no
<Sling> you're looking for Listen directives
<zkvvoob> default.vhost or one of the lines?
<Sling> that is what tells apache on which interface:port to listen, you can have multiple vhosts defined for the same port/ip
<OpenTokix> zkvvoob: Can you pastebin the 000-ispconfig.vhost and /etc/apache2/ports.conf
<Sling> the vhost won't cause that error
<zkvvoob> openjust a minute
<zkvvoob> OpenTokix: just a minute
<OpenTokix> Sling: You can not bind to the same ip:port muliple times, but you can bind to *:port multiple times.
<Sling> OpenTokix: the vhost doesnt bind
<Sling> that is what the Listen directive doe
<Sling> does*
<Sling> and the _default_:443 vhost is a special case for non-SNI stuff
<zkvvoob> OpenTokix: http://pastebin.com/EWYRjSgR
<Sling> it won't cause the errors he just mentioned, since it has nothing to do with 8080/8081 :)
<OpenTokix> zkvvoob: Do you have a listen 8080 in your /etc/apache2/ports.conf too?
<Sling> wow that is a weird config
<zkvvoob> OpenTokix: No
<Sling> so if you load mod_fcid you get one documentroot, and if you load the itk mpm you get another
<OpenTokix> zkvvoob: try grep -r -i  Listen /etc/apache2/*
<zkvvoob> OpenTokix: http://pastebin.com/0swhqd0P
<OpenTokix> zkvvoob: I would try to change the _default_:8080 to *:8080
<zkvvoob> OpenTokix: where should I do that?
<OpenTokix> zkvvoob: ispconfig.vhost
<zkvvoob> OpenTokix: got it, just a moment
<zkvvoob> OpenTokix: restarted apache2, but got the same message "AH00072: make_sock: could not bind to address [::]:8081;  AH00072: make_sock: could not bind to address 0.0.0.0:8080"
<OpenTokix> interesting - one complains about ipv6, one ipv4 also
<zkvvoob> OpenTokix: I honestly don't get it, why an upgrade would screw things so badly! Everything was running just perfectly up until 15 hrs ago
<OpenTokix> zkvvoob: Nature of sysadmin - stuff breaks and one has to fix it.
<zkvvoob> Trouble is I am only a sysadmin wannabe :(
<OpenTokix> zkvvoob: you always use service apache2 restart; not service apache2 stop && service apache2 start ?
<OpenTokix> zkvvoob: Since restart and stop/start it very different for apache2
<zkvvoob> OpenTokix: yes, service apache2 restart
<lordievader> Good morning.
<zkvvoob> OpenTokix: are you out of ideas?
<OpenTokix> zkvvoob: I was waiting for you to do a stop/start
<zkvvoob> apologies
<zkvvoob> did that, same thing
<OpenTokix> zkvvoob: and ps ax |grep apache show nothing?
<zkvvoob> OpenTokix:  8858 pts/0    S+     0:00 grep --color=auto apache
<OpenTokix> zkvvoob: ofc. Annoying
<zkvvoob> OpenTokix: any other ideas? Sorry, if you're fed up with my troubles, I just really don't know who to turn to
<OpenTokix> zkvvoob: I love troubleshooting, best part of the job.
<OpenTokix> zkvvoob: did you try replace all _default_ with * ?
<OpenTokix> zkvvoob: in apps.vhost and ispconfig.vhost
<zkvvoob> OpenTokix: only in ispconfig.vhost, will do the apps now
<OpenTokix> zkvvoob: if iirc the error changes slightly when you removed it in ispconfig.vhost
<zkvvoob> OpenTokix: Changed it in apps.vhost as well, did service apache2 stop && service apache2 start, but the error remains the same: (98) Address already in use: AH00072: make_sock: could not bind to address [::]:8081; (98) Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:8080
<OpenTokix> zkvvoob: There is only two files in /etc/apache2/sites-enabled ?
<zkvvoob> OpenTokix: no, there are: 000-apps.vhost, 000-aps.vhost, 000-default, 000-ispconfig.conf and 000-ispconfig.vhost
<OpenTokix> ok
<OpenTokix> what is the difference between 000-ispconfi .conf and .vhost
<OpenTokix> zkvvoob: Can you pastebin them all? - Make sure you do ----------- NAME ------- between them
<zkvvoob> OpenTokix: http://pastebin.com/GK4PQcmX
<zkvvoob> OpenTokix: http://pastebin.com/EWYRjSgR
<OpenTokix> zkvvoob: Can you do grep -i include /etc/apache2/apache2.conf
<zkvvoob> OpenTokix: http://pastebin.com/kPQdsyNM
<OpenTokix> zkvvoob: is there any files in /etc/apache2/conf-enabled ?
<zkvvoob> OpenTokix: 000-ispconfig.local.conf, apache2-doc.conf, charset.conf, javascript-common.conf, localized-error-pages.conf, other-vhosts-access-log.conf, security.conf, serve-cgi-bin.conf
<OpenTokix> zkvvoob: grep -i listen /etc/apache2/conf-enabled/*
<zkvvoob> OpenTokix: nothing, empty line
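[editor's note: to summarize the distinction Sling and OpenTokix are circling: only a Listen directive binds a socket; a <VirtualHost> block merely selects among already-bound sockets and never binds by itself. "(98) Address already in use" means two Listen directives across the included files, or a leftover process, claim 8080/8081. A minimal correct layout (ServerName is illustrative):

```
# /etc/apache2/ports.conf -- each port bound exactly once
Listen 80
Listen 8080
Listen 8081

# a vhost file then refers to an already-bound port:
<VirtualHost *:8080>
    ServerName ispconfig.example.org
</VirtualHost>
```

`grep -ri listen /etc/apache2/` across *all* included files (conf-enabled, sites-enabled, ports.conf) is the way to find the duplicate.]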
<d4rks1d3r> hi all, noob question, I'm trying to install ZNC as a daemon, where do I need to create the systemd script?
<ReScO> I'm in a pinch, somehow HSTS got enabled on my Apache2 installation and now i cannot access non-https resources.
<rbasak> ReScO: http://stackoverflow.com/q/10629397/478206
<dmsimard> Hi. Anyone know why 14.04.2 isn't shipped yet for cloud images ? https://cloud-images.ubuntu.com/trusty/current/
<rbasak> Odd_Bloke: ^^
<rbasak> dmsimard: I suspect that the images are effectively 14.04.2 anyway. Apart from maybe the kernel.
<sarnold> probably the <h1> is broken..
<dmsimard> rbasak: 3.16 doesn't seem to be in them indeed, e.g: https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64.manifest
<rbasak> That might be too big a change to be worth making.
<sarnold> .. and unlikely to make much difference, since it is mostly there for hardware enablement, not as big a deal with VMs
<dmsimard> The specific use case is for usage with Ceph (rbd kernel module) which for certain features requires kernel >3.13
<dmsimard> I have no particular problems with building my own images based off of that (aka, install the utopic lts package on top of the existing image and re-package) but I would've expected that to be in the cloud image built-in
<rbasak> I suppose the difference is that users might not expect to suddenly end up with a different kernel version if they use the current image.
<dmsimard> So it's going to stay at 3.13 until 16.04 then ?
<rbasak> So there also exist users who expect the original kernel.
<Odd_Bloke> dmsimard: rbasak is correct, the daily images always track the latest packages and so are 14.04.2 sans the HWE bits.
<Odd_Bloke> And, in fact, he is correct again. :p
<dmsimard> I understand that, no problem here
<dmsimard> Just want to adjust my expectations :)
<utlemming> Odd_Bloke: looks like freenode let me join :)
<Odd_Bloke> utlemming: o/
<rbasak> I suppose what you need is a separate stream of images that include the HWE kernel. But that would be a whole load more for Canonical to maintain and support I guess, and more than we do right now.
<rbasak> OTOH, you can probably arrange some cloud-init userdata magic to upgrade the kernel and reboot on first boot.
<rbasak> It would just mean slightly slower "boot".
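[editor's note: rbasak's upgrade-on-first-boot idea can be sketched as cloud-config userdata; the HWE metapackage name for trusty is an assumption to verify against the archive:

```
#cloud-config
package_update: true
packages:
  - linux-generic-lts-utopic
power_state:
  mode: reboot
  message: rebooting into HWE kernel
  condition: true
```
]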
<utlemming> Odd_Bloke, rbasak: I understand that there is some question about the HWE kernel?
<utlemming> Odd_Bloke, rbasak: are these in re:  to the downloadable images or EC2?
<Odd_Bloke> dmsimard: ^
<dmsimard> rbasak: I'd tend to push an image that is already up to date and use that image :)
<dmsimard> Well, I mean, on my end
<utlemming> On GCE, Azure, VMWare's vCHS and a few other clouds the default is to use the HWE kernel at the behest of the cloud or because of virtual hardware compatibility.
<dmsimard> Download cloud image -> update it -> push it -> use it
<utlemming> dmsimard: the reason why we don't default is that it was nacked at a v-UDS.
<rbasak> utlemming: so dmsimard's need comes from Ceph, which makes use of newer kernel features. Firmly in userspace, AIUI.
<utlemming> dmsimard: and we did discuss the idea of having an HWE image download, but that was killed because of the maintenance burden, and user confusion. Not to mention that the HWE kernel can royally screw with DKMS packages (i.e. Virtualbox and open-vm-tools, not to mention the filesystem modules)
<dmsimard> Fair enough
<rbasak> But save for bumping all users to the HWE kernel for that, or maintaining separate HWE images, I don't see any other solution really, except for an upgrade-on-first-boot thing.
<dmsimard> FWIW, here's the Ceph recommendations off of the docs: http://ceph.com/docs/master/start/os-recommendations/
<dmsimard> And features that might not work depending on Kernel versions: http://cephnotes.ksperis.com/blog/2014/01/21/feature-set-mismatch-error-on-ceph-kernel-client
<utlemming> dmsimard: believe me, I really would like to make the switch. But the decision is made on a cloud-by-cloud basis. We only use the HWE kernel by default on clouds where the pain of not having the HWE kernel is worse than the pain of breaking things like DKMS.
<dmsimard> utlemming: I hear ya, not trying to make this happen
<mgagne> utlemming: where are those cloud-by-cloud images located? aren't they the same as the ones publicly available already?
<dmsimard> Just saying this wasn't about HWE, more about fixes/improvements made in later releases
<utlemming> dmsimard: they are not published generally since they are published into the clouds. GCE is probably the best one if you want in cloud. Their NVME SSD's are quite performant
<rbasak> dmsimard: there is of course the claim that LTS release + new features = more recent release :-)
<mgagne> utlemming: who is maintaining those images?
<utlemming> dmsimard: fixes are why we just switched Azure 12.04 over to HWE yesterday, incidentally.
<utlemming> mgagne: The Canonical Certified Public Cloud Team (me, Odd_Bloke and rcj, with our fearless manager gaughen).
<mgagne> utlemming: pardon my hasty conclusion but it looks like someone is already maintaining different versions of those images and adding one more isn't that much of an issue. except for the potential public confusion it could cause
<rbasak> mgagne: I don't think you quite appreciate the sheer number of images we're talking about, and the QA and maintenance work involved. Adding HWE is adding a whole dimension. There are two or three (?) HWE kernels each LTS, so you'd be multiplying the whole set by that number.
<utlemming> mgagne: I wouldn't call your conclusion hasty. It goes to the downloadable images. The GCE, Azure, et al, images that are published are not downloadable. You run them in cloud. And yes, I agree that there is a degree of confusion that is introduced here.
<utlemming> mgagne: and to rbasak's argument, you should see the bills for testing.
<mgagne> utlemming: sure, I'm a pragmatic person, I'm not trying to corner people into doing things that don't make sense from a financial, time and effort perspective
<utlemming> mgagne: but the choice of HWE in some environments (i.e. Azure, GCE, VMware) is dictated by either the cloud or a technical requirement that makes choosing otherwise problematic.
<utlemming> mgagne: ack. fwiw, I really do appreciate your concern. And that's why I want to re-raise this for 16.04.
<utlemming> mgagne: I think that the HWE kernel has proven to offer far more benefits than problems, especially for bug fixes, and new features that make older versions of Ubuntu more usable.
<mgagne> utlemming: but since it was mentioned that some cloud providers have "special" images built from them, I guess it's fair to wonder why it's ok for them but not the "public" ones.
<mgagne> utlemming: if it comes down to finance and time resources reasons, I can't argue with that. those are fair reasons
<utlemming> mgagne: the downloadable images are the "ideal", where the in-cloud published images are pragmatic compromise.
<mgagne> utlemming: right
<Odd_Bloke> mgagne: Furthermore, in places that we support HWE, we only support HWE; there is still only one 14.04 image.
<mgagne> utlemming: lets say a provider wishes to have images built and maintained by Ubuntu, would this be something provided by the "Canonical Certified Public Cloud Program" ?
<mgagne> Odd_Bloke: right, so you get the one with HWE, nothing else: no image without HWE. right?
<Odd_Bloke> mgagne: Yep; anything else would be (a) confusing for users ("wait, which of these is actually 14.04?"), and (b) too resource-intensive (both human and technical).
<mgagne> Odd_Bloke: makes sense
<rostam> Hi I am using ubuntu 14.04. I would like to disable console messages during boot. This is for production. How could I do that? thanks
<sarnold> rostam: try console=/dev/null on kernel command line? just a guess...
<Sling> rostam: why would you want to disable that?
<Sling> keep in mind that anybody with physical access should be considered to have full access
<rostam> Sling at the customer site the console messages looks like something is wrong they do not understand it.
<rostam> sarnold, thanks will try that.
<sarnold> of course when something does break you might regret turning them off :)
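[Beyond sarnold's console= guess, a common way to quiet the boot console on 14.04 is via the kernel loglevel in GRUB; a sketch, assuming GRUB 2 (run update-grub after editing, then reboot):]

```
# /etc/default/grub -- loglevel=3 lets only errors and worse reach the console
GRUB_CMDLINE_LINUX_DEFAULT="quiet loglevel=3"
```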
<londoncalling> hi, want to get 3 vm's running on a DigitalOcean droplet (1GB RAM, 20GB disk), how would I do this purely from the command line? I don't want to have to use VNC.
<sarnold> londoncalling: you can use libvirt via command line using virsh; or you can start your vms directly with qemu and command line options...
<londoncalling> sarnold, thanks a lot. I'll whip out the google-fu
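[Both approaches sarnold mentions can be sketched like this; VM name, image path and memory size are illustrative, and a serial console replaces VNC:]

```
# Option 1: libvirt -- start a defined guest and attach to its text console
virsh start vm1
virsh console vm1            # detach with Ctrl+]

# Option 2: raw qemu, fully headless -- serial console on this terminal
qemu-system-x86_64 -m 256 -nographic \
    -drive file=/var/lib/vms/vm1.img,format=qcow2
```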
<blizzow> Ok, I downloaded the 14.10 iso and formatted a USB stick with unetbootin.  Stuck the USB stick into a Dell Poweredge R510 and started the installer.  The installer complains about being unable to detect or scan a CDROM.  So I copied the iso to the USB stick as well.  As soon as the installation starts, I tap alt+f1 and mount the iso at /cdrom as a loop device.  The installation makes it through the disk partitioning and some package installation, but then it doesn't complete.
<bekks> Just dd the iso onto the stick.
<genii> blizzow: The images made now are hybrid images, you don't need to do anything special so that they run off a USB stick instead of the CD/DVD. You just need to use dd if on a linux machine. If Windows then  you need to find a dd for it like RawWrite or WinDD
<blizzow> bekks: I'm going to try that next.
<blizzow> Just flashes isolinux.bin missing or corrupt and then boots the disk
<blizzow> :(
<bekks> then how did you create the bootable usb stick?
<blizzow> dd if=/home/myusername/ubuntu-14.10-server-amd64.iso of=/dev/sdc
<blizzow> I mounted the iso with mount -o loop -t iso9660 /home/myusername/ubuntu-14.10-server-amd64.iso /mnt/ and did an md5sum of /mnt/isolinux/isolinux.bin and it's the same as on my drive.
<genii> Did you check the md5 sum to make sure the iso is not corrupted?
<blizzow> yes, it's the correct hash
<genii> blizzow: I suspect an EFI issue
<blizzow> Maybe so, but really, I've installed ubuntu on these suckers before.  The installer should not be this sensitive
<genii> blizzow: Did you fsck the USB stick ?
<blizzow> genii: yeah.  and dd if=/dev/zero of=/path/to/usbdev to wipe it first.
<blizzow> The weird part is I can boot to the installer if I use unetbootin to make the stick, but the cdrom isn't seen.  If I use dd, I can't even get to the installer.
<blizzow> Forcing mounting /path/usb/partition /cdrom  during the install from the unetbootin usb stick leads to an install that doesn't complete.  I'm wondering if there is a different way to mount/detect the USB stick as my CDROM or if I can do a network based install..
<blizzow> Got it!  There is a setting in BIOS about usb drive emulation auto/floppy/hard drive.  It was set to auto and I set it to hard drive.
<RoyK> Pratchett just died :(
<sarnold> RoyK: i've been sad all day.. no new discworld books.
#ubuntu-server 2015-03-13
<RoyK> Sachiru: AT LAST, SIR TERRY, WE MUST WALK TOGETHER.
<RoyK> sarnold: did you read Nation?
<sarnold> RoyK: no, I didn't, I've only read discworld things so far
<RoyK> sarnold: AT LAST, SIR TERRY, WE MUST WALK TOGETHER.
<RoyK> sarnold: really, read it
<sarnold> such genius :) hehe
<RoyK> sarnold: it's one of his best
<RoyK> it's that and Amazing Maurice
<sarnold> RoyK: heh, I haven't read any of the YA books either
<RoyK> hehe
<RoyK> which says something
<RoyK> some of them are the best he's written
<RoyK> or wrote
<sarnold> cool, thanks for the encouragement
<RoyK> now in paste tense
<RoyK> past tense :P
<RoyK> and please - read Gaimans books also
<sarnold> hmm I don't think I've read Raising Steam yet either, looks like amazon's got that in paperback now too :)
<RoyK> the ocean at the end of the lane, the graveyard book, coraline
<RoyK> sarnold: make sure to read Nation by Pratchett
<sarnold> RoyK: thanks, ordered :)
<teward> so i have an interesting issue - i'm trying to set a service to listen locally on 127.0.2.1, but of course it's not 'binding' because it's not added as an ip address that's bindable, I think.  is there a way to get something locally bound to that address, either by forcing it or adding the IP to the lo interface?
<sarnold> teward: try ip add 127.0.2.1 dev lo
<sarnold> sigh
<sarnold> ip addr add 127...
<teward> sarnold: thanks
<teward> sarnold: that worked, guess i have to add things to rc.local now.
<teward> sarnold: know any way to add that address to the lo interface at boot time without adding commands to rc.local?
<teward> (is it essentially the same as you would add addresses for eth0 or eth1, in /etc/network/interfaces ? )
<sarnold> teward: good question... I think you're free to stick anything into the pre commands in interfaces(5). that would be my first choice.
<teward> sarnold: i wonder if post-up works with it
<teward> guess we'll find out after i reboot xD
<sarnold> :)
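[The interfaces(5) approach they're discussing might look like this; an untested sketch, using the address from teward's example:]

```
# /etc/network/interfaces -- add the extra loopback address at boot
auto lo
iface lo inet loopback
    post-up ip addr add 127.0.2.1/8 dev lo
    pre-down ip addr del 127.0.2.1/8 dev lo
```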
<teward> in other news, i hate resolvconf - why can't things just NOT need an extra program to manage resolv.conf >.<
<sarnold> you can uninstall it if you wish
<sarnold> i think things ought to work if you do
 * teward shrugs
<teward> sarnold: whoever set this system up put it on
 * teward kinda inherited it :/
<sarnold> just you get to manage it. which worked well for twenty years...
 * teward chooses not to put resolvconf on his servers
<sarnold> teward: probably whoever set it up didn't care :)
<teward> sarnold: well the person who set it up didn't care about a lot of things (that's all i'm going to say on the matter)
<teward> sarnold: me, on the other hand, set it up in devel or staging or testing or even in a VM
<teward> apply to production when complete at initial staging
<teward> done
<teward> works great until the next major vulnerability
<Patrickdk> nothing is worse than resolvconf + chrooted programs
<teward> ^ that
<Patrickdk> like debian loves to do with everything
<sarnold> eww sounds like an unfun combo
<teward> sarnold: unfun indeed
<teward> but don't get me started on using anything with SELinux - had to work with a centos box and i was ready to just say "BURN IT AND REPLACE IT WITH UBUNTU ALREADY"
<teward> SEL was on on that system too :/
<syeekick> I get input lag using putty to my ubuntu server, it's a local connection too. I've recieved the same problem remotely too, i just assumed it was a case of bad internet  speeds. Could it be the putty client? anyone got any ideas to decrease the input lag?
<teward> bind9 dumps a lot of information to syslog, is there a way to reduce the amount of data it logs and have it log to a flat file instead of syslog?
<OpenTokix> teward: yes, just configure loglevel
<teward> OpenTokix: you going to be more forthcoming with info or do I have to go bother #bind instead
<teward> (trying to follow their sparse documentation is resulting in a failure scenario)
<teward> oop found it
<OpenTokix> good =)
<jamespage> rbasak, does armel/armhf have a hardware clock?
<jamespage> rbasak, dealing with a ftbfs on ceph due to only dealing with arm64 and x86 right now
<teward> OpenTokix: well, kinda found it - looks like bind9 in Trusty doesn't want to accept the logging { } clause :/
<teward> guess i'll have to go poking at #bind for help
<jamespage> rbasak, https://github.com/ceph/ceph/commit/060cbaacef5091755d598da6fd3b70119fb43184
<rbasak> jamespage: you mean an RTC? Depends on the system.
<rbasak> jamespage: also there can be an RTC but one that is not battery backed
<jamespage> rbasak, hmm
<rbasak> Oh, equivalent to rdtsc? That's different.
<OpenTokix> teward: It does, it is just kind of wonky. - The thing with bind logging is that by default it is very verbose. And you have to understand your own bind configuration. - If you don't.... one should not run bind at all.
<jamespage> rbasak, probably
<jamespage> I have a ppc equiv - http://paste.ubuntu.com/10590583/
<teward> OpenTokix: or it needs better documentation - the setup is experimental so I can learn the system, but with no documentation, well...
<teward> s/the system/bind9/
<teward> OpenTokix: and https://help.ubuntu.com/community/BIND9ServerHowto#Logging is wrong as well
<teward> hence erroring
<teward> oh wait
<teward> wowwwwwwww
<teward> i need to learn to read syslog >.<
<teward> OpenTokix: i mistyped the log destination :/
<OpenTokix> teward: =)
<OpenTokix> teward: there is named-checkconf utility
<OpenTokix> also named-checkzone
<OpenTokix> to check any typos and such in your zone
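[The logging { } clause teward is after might look roughly like this; path and channel name are illustrative, and (as teward discovers below) the directory must exist and be writable by the bind user:]

```
// /etc/bind/named.conf.local -- log to a flat file instead of syslog,
// at reduced verbosity
logging {
    channel file_log {
        file "/var/log/named/named.log" versions 3 size 5m;
        severity warning;
        print-time yes;
        print-category yes;
    };
    category default { file_log; };
};
```

Validate with named-checkconf before reloading, per OpenTokix's advice above.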
<jamespage> rbasak, does armhf have something similar?  google is indicating not...
<RoyK> teward: you could use a RainerScript-based filter
<rbasak> jamespage: I found http://blog.regehr.org/archives/794 which would work on any armv7 I think, but there are caveats. Kernel needs to enable it for userspace (there might be a good reason)
<rbasak> jamespage: and I'm not sure it's synced between CPUs, so it wouldn't appear monotonically increasing without pinning a process to a single CPU
<rbasak> jamespage: so might be buggy depending on the use
<teward> RoyK: I could, but I've figured out the logging now - first i forgot to create the logging directory, then I forgot to chown the directory so bind can write to it.  Then I accidentally had a terminal crash when SSHing to the VM this is on so... *shrugs*
<rbasak> jamespage: it might be worth understanding the need a bit better to find a good solution.
<rbasak> jamespage: eg. would clock_gettime(2) do?
<jamespage> rbasak, right now the only place I can see it in the codebase is in the src/test/objectstore/ObjectStoreTransactionBenchmark.cc
<rbasak> jamespage: maybe just disable that test then?
<jamespage> rbasak, that's not the issue - the Cycles code is a core part of ceph, so right now I get a FTBFS on armhf
<rbasak> clock_gettime() would work for benchmarking but have far lower resolution and affect the test, as it involves an extra system call and slows things down
<rbasak> So we could use clock_gettime for the armhf implementation maybe. Or replace it with a no-op if we know it's only ever used in a test.
<rbasak> jamespage: maybe safer to remove the test and the function entirely with an #ifdef?
<rbasak> Then if it ends up being called later, it'll get flagged up.
<rbasak> Though look out for ABI breakage.
<jamespage> rbasak, what about http://neocontra.blogspot.co.uk/2013/05/user-mode-performance-counters-for.html
<rbasak> Having said that even if the ABI didn't export a particular function before, adding it back later wouldn't constitute ABI breakage.
<rbasak> jamespage: that's essentially the same. It either needs a system call, or a kernel module to enable userspace access.
<rbasak> jamespage: I wonder why the kernel doesn't provide userspace access by default. Maybe that's something we should get fixed in the kernel.
<rbasak> (unless there's a good reason)
<jamespage> rbasak, that would be nice to know - smb - any ideas?
<smb> jamespage, I always have many ideas ... though I have not yet read about what you are talking right now
<rbasak> :)
<smb> Ok, so cannot say anything right now. Neither performance counters nor specifics of the Arm world would be something I can answer off the top of my head
<rbasak> ogra_ might know: ^^
<rbasak> ogra_: we're after an rdtsc equivalent for armhf. There does seem to be something, but it's not enabled for access from userspace by default, so needs a kernel module to turn it on. Do you have any idea why?
<ogra_> rtc's are often an exception on arm
<ogra_> and if you have an rtc it most likely doesnt come with a battery
<ogra_> (which is why we ship fixrtc in all ubuntu initrds and make sure it is executed on all arm boots usually)
<ogra_> (does that help you in any way? )
<rbasak> ogra_: it's not an RTC - it's a volatile monotonic high resolution counter that you can read from userspace without a system call. For high resolution timing of things.
<ogra_> rbasak, ah, well, thats probably a question for ppisati then
<rbasak> On Intel it's built in to the CPU and increments at CPU clock frequency
<ogra_> being our arm kernel guy
 * ogra_ rarely touches kernel code beyond configs 
<rbasak> Thanks, I'll ask him in #ubuntu-kernel
<smb> rbasak, jamespage It might help to ask on #ubuntu-kernel
<jamespage> ack
<ogra_> yeah
<soren> smoser: Do you have any good ideas for how to avoid having my ssh keys regenerated each time I boot an instance from the same volume?
<smoser> soren, so you want to boot multiple different instances with the same ssh keys
<soren> Well...
<smoser> it should only do that once "per-instance". based on the instance-id.
<soren> You *could* phrase it that way.
<smoser> you can feed it user-data that says to only do that once per-ever
<soren> Is there an easy way to do it on first boot?
<soren> The problem is this:
<soren> I boot from volume (which gets created based on a regular image).
<soren> Say I want to extend that volume.
<soren> That means I have to terminate the instance (not just stop it).
<soren> ...so that I can detach the volume.
<soren> Extend it.
<soren> Boot a new instance from it.
<soren> And then cloud-init sees it as a new instance. Which it technically is, but not in the usual sense.
<smoser> soren, http://paste.ubuntu.com/10590885/
<smoser> ithink thats what you want.
<soren> smoser: Sweet.
<smoser> that should run the ssh key generation only once. (leaves a marker in /var/lib/cloud/sem/ssh.once or something)
<soren> smoser: When is that logic applied?
<soren> Yeah, that's what I was afraid of.
<soren> That won't help me, then.
<soren> Then I'd need the image to have the config in place on first boot.
<smoser> oh.. i see. the next time you turn it on it has (possibly) different user-data
<soren> ...so that it'll create that file.
<smoser> well, basically you need to get your image configured such that it only ever runs that once.
<soren> Right.
<smoser> you can put that config in the image when you capture the volume or whatever
<soren> That's the point:
<soren> I don't "capture" the volume.
<soren> I do something like:
<soren> nova boot --block-device source=image,id=<id of one of your trusty images>,dest=volume,size=20,bootindex=0 blah
<soren> ...which creates a new 20 GB volume by copying your trusty image.
<soren> ...and then I boot from that.
<soren> So it's a completely vanilla image.
<soren> So maybe I just -- on first boot -- need to update that config (as in your paste) *and* manually create the relevant semaphore file.
<soren> Or am I misunderstanding the implementation? Is the semaphore file always created, but only checked for in case of (e.g.) "[ssh, once]" or does it only get created in case of (e.g.) "[ssh, once]"?
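[smoser's paste has expired, but from his description above ("run the ssh key generation only once... leaves a marker in /var/lib/cloud/sem/") the user-data presumably looks something like this; a sketch, since the module-list syntax varies across cloud-init versions:]

```yaml
#cloud-config
cloud_init_modules:
# ... keep the stock module list from /etc/cloud/cloud.cfg, but change
# the ssh entry's frequency from the default (once-per-instance) to:
  - [ ssh, once ]
```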
<jamespage> rbasak, so do we have a good fallback for this?
<rbasak> jamespage: I think we should either verify that clock_gettime(2) is acceptable and use that, or ifdef out both the function definition and the test.
<rbasak> If the function is never called it shouldn't matter if it doesn't exist, right?
<rbasak> I have no objection to using perf as per your find in http://neocontra.blogspot.co.uk/2013/05/user-mode-performance-counters-for.html either
<smoser> soren, i'm confused.
<smoser> you boot from a volume. it should create ssh keys only when the instance-id has changed.
<smoser> on first boot, it's non-existent.
<smoser> so '' -> 'some-silly-long-uuid' is  a change
<smoser> and it creates ssh keys
<smoser> on reboot, it wont.
<smoser> if you capture the volume, and then start a new one from that, it will.
<patdk-wk> yes, for some reason he is *terminating* the instance
<patdk-wk> to resize the disk
<patdk-wk> then creating a new instance using the resized disk
<patdk-wk> so some-silly-long-uuid changes
<patdk-wk> not sure why he is doing it that way, but I haven't exactly done it too often myself
<zkvvoob> Hello people! Could anyone, please, help me with this: after I upgraded my server to 14.04, which came with Apache 2.4.7 (as compared to 2.2, which I had been running before), mod_fcgid cannot be installed. I need it in order to run ISPConfig, a free alternative to cPanel.
<zkvvoob> Besides, right now I can't access any of the websites that were previously working on the server - Chrome says "No access to this page"
<RoyK> zkvvoob: check the logs
<zkvvoob> RoyK^Work: http://pastebin.com/gL1bv6Qj here's apache's error.log
<jamespage> rbasak, http://paste.ubuntu.com/10591552/
<jamespage> how does that look to you?
<zetheroo> I am having a really hard time believing that resolv.conf is really this messed up in Ubuntu Server 14.04
<zetheroo> Once an initial networking setup is created and resolv.conf is populated with nameservers, how does one simply change the nameservers?
<zetheroo> I have seen all kinds of How-To's online claiming to show you how to do this in a mere few steps, but it doesn't work ... resolv.conf is pretty damn frustrating
<patdk-wk> heh?
<patdk-wk> if your using resolvconf
<patdk-wk> you should NEVER touch it
<patdk-wk> you just set /etc/network/interfaces correctly
<zetheroo> and if you want to change your DNS?
<zetheroo> what then?
<patdk-wk> you change it in interfaces
<patdk-wk> and bump the interface
<patdk-wk> or you *manually* call resolvconf with the changes
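[The interfaces stanza patdk-wk means looks roughly like this (interface name and addresses are illustrative); resolvconf regenerates /etc/resolv.conf from the dns-* options when the interface is bumped:]

```
# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    dns-nameservers 192.0.2.9 192.0.2.8
```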
<zetheroo> ok, so lets try that (even though I think I have done that several times...)
<zetheroo> right now I have this in my resolv.conf: http://paste.ubuntu.com/10591599/
<zetheroo> I want to replace the .8 nameserver with .9
<pmatulis> isn't there a limit of 3, or has that changed?
<OpenTokix> no
<OpenTokix> no limit afaik
<OpenTokix> zetheroo: Looks like your resolv.conf is manually handled, just edit the file and you're set.
<zetheroo> so I edit the /etc/network/interfaces file
<OpenTokix> zetheroo: if you had the resolvconf-package, it would say in the top of the file
<zetheroo> OpenTokix: so how is it manually configured on some but not on others?
<OpenTokix> zetheroo: Depends on the resolvconf package
<OpenTokix> if you have resolvconf installed, it pulls info from /etc/network/interfaces
<OpenTokix> zetheroo: if resolvconf is removed, you edit the file directly
<zetheroo> ok, tried editing the resolv.conf and upon saving it I get :   Permission denied
<OpenTokix> zetheroo: its owned by root: sudo vim /etc/resolv.conf
<OpenTokix> or whatever editor you prefer
<zetheroo> I am logged in as root
<zetheroo> -rw-r--r-- 1 root root 125 Dec 24 11:14 /etc/resolv.conf
<OpenTokix> and whoami returns: root?
<zetheroo> yes
<OpenTokix> zetheroo: lsattr /etc/resolv.conf
<zetheroo> ----i--------e-- /etc/resolv.conf
<OpenTokix> it should say; -...--e-
<OpenTokix> ok
<OpenTokix> chattr -i /etc/resolv.conf
<zetheroo> ok
<zetheroo> no output
<OpenTokix> zetheroo: Who changed the attribute of the file?
<OpenTokix> no, if you do lsattr the i is removed
<OpenTokix> i is "immutable"
<zetheroo> ok
<OpenTokix> now you can edit the file
<zetheroo> ok thanks
<OpenTokix> and if you want to make it immutable again, chattr +i /etc/resolv.conf
<OpenTokix> And then resolvconf can't change the file either
<OpenTokix> zetheroo: someone didn't know what they were doing on that system
<zetheroo> :)
<zetheroo> ok
<OpenTokix> zetheroo: Check if resolvconf is installed: dpkg --get-selections | grep resolvconf
<zetheroo> yes it is
<zetheroo> resolvconf					install
<OpenTokix> yeah
<zetheroo> :P
<OpenTokix> So someone manually edited the file, and then it got changed back
<OpenTokix> they got frustrated
<OpenTokix> and made the file immutable
<zetheroo> ok
<OpenTokix> just remove resolvconf
<OpenTokix> apt-get remove resolvconf
<OpenTokix> if it your machine that is
<OpenTokix> if its someone else machines, talk to them first before you mess around with system software
<zetheroo> but then it wants to remove ubuntu-minimal ...
<OpenTokix> weird
<OpenTokix> It does not depend on ubuntu-minimal
<OpenTokix> zetheroo: Easiest is to make it immutable again after you change it. - I am leaving the office now
<zetheroo> ok
<zetheroo> thanks!!
<zetheroo> ok
<zetheroo> I see
<zetheroo> how do you update resolvconf?
<zetheroo> or "bump the interface"?
<rberg_> 'resolvconf -u' will update it
<rbasak> jamespage: you should probably check for errors in the call to perf_event_open - is fddev valid? Then again read is defined to return an error if the fd is invalid, so maybe that doesn't matter.
<rbasak> jamespage: it does mean two syscalls per request, which doubles the already bad syscall latency for timing.
<rbasak> jamespage: if this function starts getting used in critical path production code, it'll seriously regress performance.
<rbasak> jamespage: that's why I was saying to remove the function entirely - then if the function starts getting used in production code, an FTBFS would flag it up.
<zetheroo> ok, I found a server with a resolv.conf file which is being generated by resolvconf ... so going to try to change the DNS IP from .8 to .9 in the interfaces file and then do 'resolvconf -u' to update the resolv.conf file ... lets see if this works .. :)
<zetheroo> nope - no go
<rberg_> zetheroo: I believe if you change the nameserver in the interfaces file you will need to ifdown; ifup the interface to update
<zetheroo> resolv.conf still the same with the wrong nameserver IP
<rberg_> (I tend to add them to the lo interface)
<zetheroo> rberg_: but won't doing ifdown sever my ssh connection!?
<rberg_> yes
<zetheroo> :(
<rberg_> I dont know if I trust a 'service networking restart' over ssh
<zetheroo> ok, so I can do it with the IPMI connection ...
<zetheroo> the interface is a bridge (br0), so it should just be a matter of 'ifdown br0' and then 'ifup br0' - right?
<rberg_> (I kind of hate resolvconf) yeah it would be the interface where you added the nameservers
<zetheroo> ok, done and dusted - that worked.
<zetheroo> resolv.conf has the correct DNS IP
<zetheroo> :)
<rberg_> sweet! like I said on my servers I added nameservers to lo so I can easly restart them
<zetheroo> to lo?
<zetheroo> I didn't know you could do that
<zetheroo> if you add the nameservers to lo do they them apply to all NIC's?
<rberg_> yep
<zetheroo> ok, interesting
<zetheroo> on another system when I do 'resolvconf -u' I get this: /etc/resolvconf/update.d/libc: Warning: /etc/resolv.conf is not a symbolic link to /run/resolvconf/resolv.conf
<rberg_> ohh I have something for that in a script somewhere
<rberg_> I wonder if 'dpkg-reconfigure resolvconf' will do it
<zkvvob> Hello! Could you, please, help me figure out why a Wordpress installation on an Ubuntu 14.04/Apache 2.4.7 server suddenly started returning ERROR 500/Internal Server Error when accessing the site?
<zkvvoob> Hi all. I'm getting "Abort class-pclzip.php : Missing zlib extensions" when activating iThemes Security plugin for Wordpress. Could you tell me what I need to install on an Ubuntu 14.04.2/Apache 2.4.7 server in order to solve this issue?
<lordievader> Good evening.
<Guest89380> Hi.  I'm moving distros, and just starting to evaluate Ubuntu-Server.  A key requirement for my Server is Xen v4.5.x support.  Iiuc, in released 'Trusty', Xen 4.4.x is supported; v4.5.x is only available for (still experimental?) 'Vivid'.  Since Xen 4.5 is *released* from upstream, how's it work in Ubuntu-land.  Will Xen 4.5.x be released for Ubuntu 14 Trusty? or only for the eventual  Vivid release?
<rbasak> Guest89380: it is unlikely to be made available in Trusty, unless somebody volunteers to backport it.
<rbasak> I have no idea about Vivid and Xen.
<Walex2> Guest89380: 14.04 is a long term support release. backports are usually available, but not supported.
<Guest89380> ( No idea why my nick changed ... will figure that out in a bit )
<Guest89380> Ok, so if I need Xen 4.5x, either build it myself, or adopt a release branch with it "in there" ...
<Guest89380> which seems like Vivid
<rbasak> Right, so wait until Vivid's release in April.
<rbasak> Or arrange a backport.
<Guest89380> rbasak: I wouldn't know one if I saw one! ;-)  I'm on day1 of Ubuntu ...
<Guest89380> I'm evaluating Arch & Ubuntu-Server -- both provide access to Xen 4.5, currently through different unsupported paths.  Need to muse on how I feel about both.
<rbasak> For enterprise use I'd really buy whatever solution Xen recommend.
<rbasak> (I mean Citrix of course)
<rbasak> Especially if you care about having the latest rather than something that's settled.
<rbasak> Or, join the community in maintaining Xen in Ubuntu
<Guest89380> rbasak: Sure, I know the argument.  I Haven't been comfortable with Xen's 'dustiness' on the likes of RH/Centos for a long time.  I've been on Opensuse for awhile; that needs to change.  I need more flexibility in Dom0 than Citrix provides (i.e., prefer a plain ol' distro).  Both Arch & Ubuntu get me pretty darn close -- but not 100%.  Yet.  Different feel to each, though.  Lots depends on community, docs & Xen upstream's 'attitude' about it.
<rbasak> Guest89380: time to switch away from Xen then maybe?
<rbasak> I hear Openstack on Ubuntu with KVM is awesome.
<rbasak> :-P
<Guest89380> rbasak: nah.  need Xen's additional capabilities.  On my personal box, tho, thinking about playing with it -- and throwing Docker in the mix.  "In my spare time", of course.
<rbasak> smb spends some time on Xen in Ubuntu.
<rbasak> (working on it, that is)
<rbasak> I know little about it though.
<Guest89380> rbasak: I'm a little "spooked" by Ubuntu's "abandonment" of Xen (that was awhile ago, then it changed, and I'm working with out-of-date data & perceptions, I'm aware), and the whole "we may NOT stick with systemd" in Vivid+.  I'm a fan, have invested heavily and successfully in it already, and am not interested in yet another about-face down the line.
<Guest89380> Yes, I've some homework-reading to do.
<Guest89380> oh, smb's a person in here.  I read "Small & Medium Business" ... was confused a moment :-)
<rbasak> Guest89380: huh? My understanding is that we're committed to systemd because Debian is, and there's been no indecision there. The only question is when we're technically ready to make the switch.
<rbasak> (Vivid is looking good so far)
<Guest89380> rbasak: fwiw: http://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-Systemd-Monday , "Should there be much fallout from the systemd-booted Ubuntu, Martin wrote, "if after some weeks we find that there are too many or too big regressions, we can revert to upstart by default with two simple uploads (ubuntu-standard and init).""
<Guest89380> I guess it depends what you read between the lines ...
<rbasak> Guest89380: that's called a rollback plan. It would be crazy not to have one. That doesn't mean that we want to do it. That just means that we will switch to systemd when we don't think it'll regress things.
<rbasak> "when it's ready"
<rbasak> This is what happens when technical mailing list posts get media-ified.
<rbasak> The plan is to switch to systemd. That has not changed. However Vivid will release on time. That has not changed. So if we can't ship with systemd, the switch will be postponed.
<rbasak> So Martin had a plan for that.
<rbasak> However, things are still looking good for the systemd switch in Vivid.
<Guest89380> rbasak: fair.  i'm being a hypocrite worrying about it, tbh.  Opensuse screwed the pooch with their stack-switch to 'wicked' networking, with NO rollback option/plan.  1st time I've ever come across a release wilfully broken for an entire release cycle, AND a don't-give-a-admn attitude about it.
<Guest89380> damn, even
<Guest89380> how do LTS's work ... does every release get an LTS variant?  i.e., will there be a 15-LTS?
<bekks> !lts
<ubottu> LTS means Long Term Support. Until 12.04 LTS versions of Ubuntu were supported for 3 years on the desktop, and 5 years on the server; since 12.04 (Precise Pangolin) LTS versions will be supported for 5 years on the desktop and server. The latest LTS version of Ubuntu is !Trusty (Trusty Tahr 14.04)
<hanlon2> bekks: right.  not my question, though
<patdk-wk> no
<patdk-wk> normally LTS is every other year
<patdk-wk> but that is not fixed in stone
<hanlon2> patdk-wk: ok.  so Vivid's likely a "ride" 'til next LTS ...
<patdk-wk> the next projected lts, is 16.04, but dunno if it will be till then :)
<hanlon2> decisions, decisions ...
<hanlon2> I'll admit that I very much like the idea of a server-centric community (in here ...) that's actually "heard" of Xen, and doesn't spend 99% of its time whining about Desktop button colors.
<hanlon2> (now watch, I've probly jinxed it)
<hanlon2> from what I can tell, those in here != those in #ubuntu/#kubuntu
<hanlon2> mostly
<rww> I HATE THE DESKTOP BUTTON COLORS
<hanlon2> rww: What's a desktop?
<rww> as far as LTS, it's been every two years since we started doing LTS, and I'd be comfortable planning for it to continue to be every two years
<rww> (we are at my workplace, anyway)
<rww> i forget if you'll be able to do 15.04 -> 16.04 directly or if it needs to go through 15.10 though
<patdk-wk> must go through 15.10
<patdk-wk> if you don't, unexpected results are expected
<rww> i think they tweaked something with the supported upgrade paths, but yeah, i'd probably go through 15.10 regardless
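The one-release-at-a-time path described above can be sketched as follows (a hedged sketch, assuming the standard update-manager tooling; verify each hop before the next):

```shell
# Sketch: upgrade a 15.04 server to 16.04 via 15.10, one release at a time.
# Requires Prompt=normal in /etc/update-manager/release-upgrades, since
# non-LTS releases are only offered on the "normal" upgrade channel.
sudo apt-get update && sudo apt-get dist-upgrade   # be fully current first
sudo do-release-upgrade                            # 15.04 -> 15.10
# reboot, verify services are healthy, then repeat:
sudo do-release-upgrade                            # 15.10 -> 16.04
```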
<hanlon2> are the x.04 & x.10 actual 6-month release schedule spreads? (possibly having an aha! moment ...)
<rww> yes
<rww> it's year.month :)
<hanlon2> oh FFS! duh ...
 * hanlon2 cringes
<unused_PhD> is it possible to create a kvm with uvt-kvm with a default user other then ubuntu?
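(The question above went unanswered; one possible approach, assuming your uvtool version supports the `--user-data` option, is to pass a cloud-init file that defines a different default user. The username and key below are placeholders.)

```shell
# Sketch (assumes uvt-kvm's --user-data option is available in your uvtool).
cat > user-data.yaml <<'EOF'
#cloud-config
users:
  - name: admin                 # hypothetical username instead of "ubuntu"
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAA...         # your public key here
EOF
uvt-kvm create myvm release=trusty --user-data user-data.yaml
```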
#ubuntu-server 2015-03-14
<umarzuki> hi, I cannot mount ocfs2 partition (fc device/lun) during boot on ubuntu server 14.04
<umarzuki> errors: Press S to skip mounting or M for manual recovery The disk drive for /var/www is not ready yet or not present
<umarzuki> i tried removing o2cb and ocfs2 from startup and put these lines before 'exit 0' inside rc.local: 'service ocfs2 start', 'service o2cb start', and 'mount /dev/vg/lv /var/www/'
<Walex2> umarzuki: how much do you know about OCFS2?
<umarzuki> Walex2: only up to what I read sample configuration on blogs
<umarzuki> I use it on a LUN to have acitve-active access for web server root directory
<Walex2> umarzuki: it is a clustered filesystem. You need a clustering service for shared mounts or a special mode for local only mount.
<umarzuki> if you have a better alternative, do share
<Walex2> umarzuki: if you are using it in clustered mode you need to check that the LUN is indeed available and the cluster manager is in a good state.
<umarzuki> Walex2: yes, I have configured ocfs cluster config
<umarzuki> there are 2 members of ocfs cluster
<umarzuki> when I pressed S, it got mounted and services starts okay
<umarzuki> but I don't want to have to go to kvm each time server reboots
<bekks> Then you have to ensure that the OCFS master node is up and running, when the other node connects.
<umarzuki> bekks: "service o2cb status" shows all online during that time
<umarzuki> although I found that accessing web gave 403 if second node did not go online
<bekks> I guess you have an according line in /etc/fstab, dont you?
<umarzuki> bekks: I have tried "/dev/vg/lv /var/www/ ocfs2 defaults 0 0" and "/dev/vg/lv /var/www/ ocfs2 _netdev 0 0" but both were not successful
<bekks> Which is highly expected, since the OCFS services start after the processing of local disks (which is what fstab is for).
<umarzuki> bekks: that's why I tried rc.local but same result
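A common workaround on 14.04 (upstart/mountall) for exactly this ordering problem is to keep the clustered filesystem out of the boot-time mount pass and mount it only after the cluster stack is up. A hedged sketch; the device path and mount point are copied from above, and note o2cb must start before ocfs2:

```shell
# /etc/fstab: don't block boot on the OCFS2 volume
#   noauto     - skip it during the normal boot-time mount pass
#   nobootwait - mountall-specific: never hold the boot for this entry
# /dev/vg/lv  /var/www  ocfs2  noauto,nobootwait  0  0

# /etc/rc.local, before "exit 0": start the cluster stack, then mount
service o2cb start      # cluster membership must come up first
service ocfs2 start
mount /dev/vg/lv /var/www
```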
<lordievader> Good morning.
<zkvvoob> Hiya folks! Could you tell me how to install a missing zlib extension? I get this error message sometimes on a Wordpress site: "Abort class-pclzip.php : Missing zlib extensions"
<zkvvoob> Pretty please?
<Meerkat> https://wordpress.org/support/topic/missing-zlib-extensions-php-error -- looks like you need the package zlib1g or zlib1g-dev.
<zkvvoob> Meerkat: I ran apt-get install --reinstall zlibc zlib1g zlib1g-dev, it went smoothly I think, but the websites still return the same error randomly
<Meerkat> try restarting apache.
<zkvvoob> did that
<jelly> zkvvoob: does phpinfo() show the zlib extension is enabled?
<zkvvoob> jelly: just a minute, I'll check
<zkvvoob> jelly: Registered PHP Streams	https, ftps, compress.zlib, compress.bzip2, php, file, glob, data, http, ftp, phar, zip
<zkvvoob> jelly: Registered Stream Filters	zlib.*, bzip2.*, convert.iconv.*, string.rot13, string.toupper, string.tolower, string.strip_tags, convert.*, consumed, dechunk, mcrypt.*, mdecrypt.*
<jelly> and a whole section named "zlib" I guess?  (scroll down!)
<zkvvoob> jelly: ZLib Version	1.2.8
<zkvvoob> and yes, another whole section below
<zkvvoob> jelly: If it's any help, I'm pretty confident that only one plugin for Wordpress is behind the error - iThemes Security; what I mean is that when I disable it, I don't see anymore of this message
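For reference, the extension state can also be checked from the command line rather than phpinfo(). A minimal sketch; note the CLI and Apache SAPIs read separate php.ini files, so a module visible to one may be missing from the other:

```shell
php -m | grep -i zlib       # prints "zlib" if the CLI SAPI loads the extension
php -i | grep -i zlib       # version and configuration details for the CLI
# For the Apache SAPI, a phpinfo() page (as used above) is authoritative.
```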
<netameta> how can i connect to a MSSQL server from a php script on a ubuntu box ?
<cryptodan> by using the right driver to connect to it
<cryptodan> netameta: http://php.net/manual/en/book.mssql.php
<netameta> thanks will read
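On Ubuntu the usual route to MSSQL from PHP is the php5-sybase package, which connects through FreeTDS. A hedged sketch; the server name, host, and port below are placeholders for your own values:

```shell
sudo apt-get install php5-sybase freetds-common

# Then add a server entry to /etc/freetds/freetds.conf
# (names and addresses are illustrative):
#
#   [sqlserver]
#       host = mssql.example.com
#       port = 1433
#       tds version = 7.0

sudo service apache2 restart    # so mod_php picks up the new extension
```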
<netameta> i got this http://paste.ubuntu.com/10600157/ after doing an apt-get ...
<Pici> netameta: press q
<sarnold> netameta: q
<netameta> Thanks very much
#ubuntu-server 2015-03-15
<thor77> hey, im trying to run postfix, it works very good, but i get this message sometimes: "warning: valid_hostname: misplaced delimiter: ."
<thor77> whats wrong?
<FreezingCold> Can't seem to create a debootstrap install as a user
<FreezingCold> W: Failure trying to run: chroot /home/user/fakeroot-x64 mount -t proc proc /proc
<FreezingCold> fakechroot fakeroot debootstrap --variant=fakechroot lucid ~/ubuntu-lucid-fakeroot-x64/ http://mirror.its.dal.ca/ubuntu/
<sarnold> FreezingCold: the chroot(2) syscall requires root privileges
<FreezingCold> sarnold: Thanks for not reading at all.
<sarnold> FreezingCold: that may explain your failure...
<FreezingCold> Of course that's why it's failing, that IS the problem.....
<FreezingCold> fakechroot fakeroot is supposed to fix that
<stgraber> FreezingCold: while fakechroot will fake the chroot call for you as an unprivileged user, it will still not let you do mounts
 * FreezingCold sighs
<sarnold> evening stgraber :)
<stgraber> hey sarnold
<FreezingCold> stgraber: You're mixing up fakechroot and fakeroot.
<stgraber> FreezingCold: I'm not. fakechroot will let you do chroot() as an unprivileged user, fakeroot will let you write files as other uids/gids than yourselves, neither will let you do mount table changes as an unprivileged user
<FreezingCold> So why is this magically a new thing?
<FreezingCold> debootstrap definitely has *TONS* of reports of working perfectly with fakechroot fakeroot as a user
<stgraber> hmm, so looks like the mount issue is supposed to be worked around by fakechroot by ignoring the mount and letting you see the real /proc. The problem I see here when trying with lucid on trusty is that lucid's simply too old.
<stgraber> the LD_PRELOAD set by fakechroot requires a recent version of the C library which lucid simply doesn't have
<stgraber> precise on trusty works fine
<stgraber> fakechroot requires a libc >= 2.14. so precise is as old as you can run (since 2.14 was never shipped by Ubuntu and the closest thing is 2.15 in precise)
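Putting stgraber's point together: the same invocation works unprivileged if the target release ships libc >= 2.14, so substituting precise for lucid should succeed. A sketch based on FreezingCold's command above (mirror URL swapped for the main archive):

```shell
# Unprivileged debootstrap: works for precise (libc 2.15) and newer,
# but not lucid, whose libc is too old for fakechroot's LD_PRELOAD.
fakechroot fakeroot debootstrap --variant=fakechroot precise \
    ~/ubuntu-precise-fakeroot-x64/ http://archive.ubuntu.com/ubuntu/
```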
<lordievader> Good morning.
<thor77> what does this "warning: valid_hostname: misplaced delimiter: ." message in mail.log from postfix mean?
<bekks> That your hostname contains a . where it shouldnt be.
<thor77> http://pastie.org/private/damb65x7gg9v7lidqhrqrq
<thor77> i dont see the mistake
<thor77> or should i use $mydomain for "myhostname="?
<bekks> thor77: whats the hostname of your machine?
<thor77> bekks: fire.crapwa.re
<bekks> Then why does your config not contain it?
<thor77> bekks: i replaced crapwa.re with domain.tld in the paste
<bekks> Which makes the pastebin useless.
<thor77> why?
<thor77> where's the difference between abc.domain.tld and fire.crapwa.re?
<thor77> so... where should i place my hostname in the config?
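The usual fix is to make sure myhostname is set to a plain fully-qualified name with no stray leading, trailing, or doubled dot, since that is what valid_hostname's "misplaced delimiter" warning complains about. A sketch using postconf; the hostname is illustrative:

```shell
# A value like ".domain.tld" or "host..domain.tld" triggers the warning.
sudo postconf -e 'myhostname = fire.example.com'
sudo postconf myhostname           # verify the effective value
sudo service postfix reload
```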
<smallfoot-> If I run Ubuntu on Microsoft's Azure cloud, will I have to do any maintenance and package updates myself or will that all be handled automatically for free?
<smallfoot-> When you run Ubuntu on Azure, is it as IaaS or PaaS ?
<bekks> MS Azure is a SAAS provider.
<smallfoot-> bekks, no I don't believe Azure does SaaS. My understanding is that Azure primarily does PaaS (their own offerings) and secondarily IaaS (third-party offerings)
<bekks> smallfoot-: At least Wikipedia states that Azure is a SAAS provider :)
<smallfoot-> " It provides both PaaS and IaaS services and supports many different programming languages, tools and frameworks, including both Microsoft-specific and third-party "
<smallfoot-> *nowhere* on the Wikipedia article does it mention SaaS
<Patrickdk> heh, even if azure did saas, running ubuntu in azure is so not saas
<smallfoot-> right
<smallfoot-> If you want to run a unattended server, is it better go with Ubuntu Server or CentOS?
<smallfoot-> Maybe I should go with CentOS, they are more professional and don't live in the dream world where they will make it on tablet and smartphones lol
<andol> smallfoot-: To that question, what answer do you really expect in a channel named #ubuntu-server? :)
<smallfoot-> true
<Patrickdk> heh? centos doesn't live in anyworld
<Patrickdk> they have no dreams or ambitions
<Patrickdk> they *just* feed off redhat
<Patrickdk> I wonder if their *security patching* is fixed yet, or not
<smallfoot-> oh, CentOS is bad at security patching?
<Patrickdk> it used to lag a month+ behind
<Patrickdk> since rhel has taken them in, I haven't been keeping track
<Patrickdk> centos is only *maintained* by 2 people
<Patrickdk> only so much they can do
<smallfoot-> oh, i see
<smallfoot-> I didn't know it was that bad
<squisher> rbasak, hey, finally got around to fixing the bcache repo, but I can't push because there's a remote hook that rejects non-fast-forward updates :-\
<squisher> jamespage, ^ what's the best way to resolve that?
#ubuntu-server 2016-03-14
<ThreeDeeAccel> Running Ubuntu-Server how do I see what drive my video card is using? And how do I update it?
<ThreeDeeAccel> and yes I know it is a CLI but I am trying to run a vbox...
<ChibaPet> ThreeDeeAccel: /var/log/Xorg.0.log
<ThreeDeeAccel> ChibaPet: I am not running X I am in ubuntu-server.
<ThreeDeeAccel> I was trying this "sudo ubuntu-drivers devices" which I found here https://help.ubuntu.com/community/BinaryDriverHowto/Nvidia but ubuntu-drivers doesnt appear to be a real thing
<ChibaPet> Oh, that. Hm. I'd pick an nVidia version from the nVidia web site, and install the correct package from the Ubuntu repos.
<tarpman> ThreeDeeAccel: ubuntu-drivers is very much a real thing. it's in the package 'ubuntu-drivers-common'
<ThreeDeeAccel> tarpman: lol, I dloaded the package to see what I can find.
<ThreeDeeAccel> ChibaPet: I pulled a file from nvidea and will look more into using it if the howTo doesnt work.
<ChibaPet> ThreeDeeAccel: I wouldn't use the nVidia driver downloaded from there - you can just use the web site to see what drivers support your card.
<ThreeDeeAccel> ChibaPet: I pulled the driver nvidia recommended. but, now I am just installing the same version# with apt-get.
<ChibaPet> That's what I'd do.
<ChibaPet> apt-get install nvidia-352-updates or whatever's appropriate
<ThreeDeeAccel> ChibaPet: why *-*-updates? and not just *.*?
<ThreeDeeAccel> I was on the nuovo driver before.
<ThreeDeeAccel> Nouveau
<ChibaPet> ThreeDeeAccel: I don't know, really. Someone suggested that -updates give you a different driver. From the name I'd *guess* they're more aggressive about commits or something. I don't honestly know!
<ThreeDeeAccel> ChibaPet: Interesting, Ill look into that for next time. I already hit 'go' on just the baseline file.
<ChibaPet> That should be fine.
<ThreeDeeAccel> The issue I was trying to fix by updating Graphic Drivers did not work. I think youll just point me towards #vbox but Ill ask here just in case.
<ChibaPet> Ah, let me beg for alacrity then. Was just about to go to bed, but I can give it a shot.
<ThreeDeeAccel> I am running Ubuntu-Server with VirtualBox. I am trying to install LinuxMint but I cannot check "Enable 3D acceleration".
<ChibaPet> On your host system, what's glxinfo | grep ^direct say?
<ThreeDeeAccel> glxinfo not installed...just a sec
<ChibaPet> If it's anything other than "Yes" then that's likely your issue. If it says "Yes" then yeah, #vbox (or whatever it is) might give you better luck.
<ThreeDeeAccel> it says "Error: unable to open display" btw I am ssh'ed into the host.
<ChibaPet> Oh, hrm. I think ssh -X will just give you info from your local server. Hrm.
<ThreeDeeAccel> I can walk to the server room and type it in...
<ChibaPet> Other than grovelling logs on the server to see that the proprietary driver is running, I'm not sure off the top of my head how to test that. Asking during daylight hours here and .... ah
<ChibaPet> Then that's the answer.
<ChibaPet> With that, though, I need to hit the sack. Good night and good luck.
<ThreeDeeAccel> ChibaPet: Thanks, cya.
<jak2013> hi friends
<jak2013> */4 6-20 * * 1-7 /usr/scripts/uplsql.sh    <-- every 4 minutes, from 6am to 8pm execute this script, my dude if i want execute the script every 3 minutes from 8pm to 7a, how do this?
<cpaelzer> jak2013: you described it all already, should be */3 20-7 ...
<cpaelzer> jak2013: https://help.ubuntu.com/community/CronHowto
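One caveat with the suggestion above: standard (Vixie) cron does not accept hour ranges that wrap past midnight, so `20-7` is rejected when the crontab is installed; the range has to be split in two. A sketch of the resulting crontab entries:

```text
# every 4 minutes from 06:00 through 20:59
*/4 6-20 * * * /usr/scripts/uplsql.sh
# every 3 minutes from 20:00 through 06:59 (wrapped range split in two,
# since "20-7" is not valid in Vixie cron)
*/3 0-6,20-23 * * * /usr/scripts/uplsql.sh
```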
<anas_> Hi, I have installed Kafka server to 3 nodes all of them have Ubuntu Server. The server starts correctly on two nodes but when I start the server on the third node it gives me this error (-su: /home/kafka/kafka/kafka.log: Read-only file system)
<anas_> any help ?
<lordievader> Good morning.
<pezet91> hello, i have problem with ubuntu server 16.04. After installation system can't see my raid devices :/ Anyone can help me?
<pezet91> ...
<lordievader> pezet91: You end up in the initrd?
<pezet91> lordievader: i don't understand
<lordievader> Is your rootfs on the raid?
<pezet91> no, it's additional disk space :]
<pezet91> my rootfs is on ssd
<lordievader> Ah, alright. Is it softraid (mdadm)?
<pezet91> no, it's hwraid, it's LSI MegaRAID SAS-3 3008 [fury] controller
<lordievader> Hmm, I'm afraid I don't really have much experience with hardware raids. Anyhow anything in dmesg about it?
<pezet91> dmesg log (grep mega): http://pastebin.com/qgiUzZq8
<pezet91> driver for this card is megaraid_sas
<lordievader> Doesn't look like there is anything wrong with it...
<lordievader> I suppose the raid drive is offered as a normal block device? i.e. sda or something?
<pezet91> i know, but i don't see a virtual drives
<pezet91> i don't see this in /dev
<lordievader> No other obvious errors in dmesg?
<pezet91> no, i don't see errors
<lordievader> Huh...
<lordievader> I'm afraid I cannot really be of help, sorry.
<pezet91> ok, thanks for trying ;)
<pezet91> I never buy a server from Fujitsu :/
<kgirthofer> hey all - what are you guys using to close brackets and auto indent in vim
<kgirthofer> I had this super great .vimrc but I accidentally deleted it from source control
<kgirthofer> now I gotta find a new one
<rbasak> :set ai
<rbasak> and )
<rbasak> I like the idea of surround.vim but have never used it.
<ogra_> dolby.vim :)
<Sophie__> helloooooo!
<Sophie__> i need some help plz
<Sophie__> i try to install openstack with vms
<Sophie__> i have one node ready and i try to deploy juju but i got an error "bootstrap instance started but did not change to deployed state"
<Sophie__> after i wait long time
<Sophie__> does anybody know anything?
<hateball> Sophie__: it can be quite idle here, have patience :)
<pmatulis> Sophie__: what instructions are you following?
<Sophie__> https://help.ubuntu.com/lts/clouddocs/en/Intro.html?_ga=1.27068984.444620310.1457272523
<Sophie__> this one now I am one the second step to install juju
<Sophie__> with computers you will need a lot of patience my dear :)
<pmatulis> Sophie__: i suggest you try the automated installer for now. go up a few levels from your doc
<Sophie_> yeah but i want to make it with vm as nodes so it can be more like the normal install
<mne1991> Hello, I want to ask please if start-TLS works with Samba4? (as I don't want to use LDAPS because as you know it has been deprecated). Thanks in advance!
<pmatulis> mne1991: i havene't tried it but why would it not?
<mne1991> I don't know, I need some help to set it up, even if it was with money :)
<mne1991> Because I tried everything I can, with no success
<jjrabbit443> hello
<jjrabbit443> if i create a new account under ubuntu server then will all the applications i installed under the original root user no longer start up automatically?
<jjrabbit443> or do applications such as apache and mysql start up irrespective of what account logs in?
<Abd_Allatif_> jjrabbit443: login account does not effect them in any way
<jjrabbit443> so all applications run under root account regardless of what account i am logged in as when i install?
<Abd_Allatif_> first, you have a "normal" user account that can get root privileges using sudo. So, when you install some services using apt, the will run in the system using root account (approximately).
<jjrabbit443> nice
<Abd_Allatif_> it depends on the service. some services use so called "System user accounts"
<jjrabbit443> yeah i installed using apt
<Abd_Allatif_> some other services you may need to run it manually, so the user account that you are using for starting it will matter
<davidic65> Hi guys, how come kernel updates are coming through to the repos before any mention on http://www.ubuntu.com/usn/  (Precise )
<davidic65> be nice to know why?
<Datz> Hi, I just noticed that my ubuntu server install no longer uses a kernel tagged kernelversion"-kernel", but now "-generic" Is there any reason for this, or did something get messed up during my upgrade process awhile back?
<davidic65> Datz: never a dull moment eh
 * Datz is tres excited
<Datz> très*
<davidic65> not French dude I just live here
<davidic65> when is Ubuntu going kernel 4+?
 * Datz has no a clue
<davidic65> I installed 14.04 on Scaleway the other day and it had the 4.4+ kernel
<Datz> basically my relationship with ubuntu server latley has been set it and forget it
<davidic65> not sure how they did that
<davidic65> Datz:  yeah we all love the EOL
<Datz> indeedy
<andol> Ubuntu got a 4+ kernel with Ubuntu 15.10. That kernel also got included in the latest point release of Ubuntu 14.04
<Datz> I don't know, I'm apparently running 3.13.0
<davidic65> andol: ty
<Datz> well, that's interesting..
<davidic65> andol:  does that kernel mean less reboots needed?
<andol> davidic65: Not neccesarily.
<Datz> probably more!
<davidic65> I thought that was a main thing with the k4
<andol> If one has an existing Ubuntu 14.04 install one can get the 4.2 kernel by installing the linux-image-virtual-lts-wily package.
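In concrete terms, that amounts to one package install plus a reboot (a sketch; the package name is as given above):

```shell
# On an existing 14.04 install: pull in the wily (4.2) HWE kernel for VMs
sudo apt-get update
sudo apt-get install linux-image-virtual-lts-wily
sudo reboot
# after the reboot:
uname -r    # should now report a 4.2.x kernel
```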
<davidic65> andol: could that mess with any software we have on a current server?
<Datz> Ok I see my answer.. Before 12.04, Ubuntu server installs a server-optimized kernel by default. Since 12.04, there is no difference in kernel between Ubuntu Desktop and Ubuntu Server
<Datz> I think I was running 10.04 before this..
<davidic65> backup and do a fresh install maybe
<davidic65> busy site?
<davidic65> sometimes there is nothing quite like a fresh install
<jjrabbit443> anybody here use fail2ban
<davidic65> I do, all servers
<ChibaPet> jjrabbit443: I fail2fail2ban. Greylisting and keys.
<davidic65> but basic setup
<davidic65> mainly SSH
<ChibaPet> Or portknocking, rather.
<jjrabbit443> if i have SSH open i should be running fail2ban correct?
<davidic65> if I use dovcot or postfix my clients are banned in no time :)
<davidic65> jjrabbit443:  sure it will not hurt, you use pwds yeah
<andol> davidic65: Anything is possible, but as a general rule newer kernels tends to do a good job in staying compitable with older userspaces.
<davidic65> andol:  thanks
<jjrabbit443> cool thanks
<davidic65> jjrabbit443:  just use very nice pwds everywhere and use f2b for SSH
<davidic65> Ubuntu dont like root for very good reasons but...
<davidic65> 2s|BH2M_7;29nv7594m52KB854U|76p^r6*Hq would be hard to hack
<davidic65> example
<davidic65> I prefer root myself
<jjrabbit443> what do you mean Ubuntu don't like root
<davidic65> prefer you use sudo
<davidic65> I prefer direct root access
<davidic65> and root sftp but you need to edit sshd_config for that
<davidic65> easily done
<Pici> iirc, sshd_config allows root login by default.
<davidic65> Pici: not on 14.04
<ChibaPet> Using password for ssh, fail2ban or not, is not recommended.
<Pici> davidic65: ah, did it change then?
<davidic65> yep
<ChibaPet> Pici: It's allowed by default in 16.04, with "prohibit-password".
<davidic65> ChibaPet: how come all the other distros dont have a problem with root
<ChibaPet> I assume the same is true for 14.04.
<davidic65> I understand the security but
<ChibaPet> davidic65: It's largely illusory security. Recommending sudo-based administrative access simply helps auditing and accountability.
<ChibaPet> That is worth it all by itself
<davidic65> when you upgrade from 12.04 to 14.04 this matter comes up and the default is allow root lofin btw
<davidic65> login sry
<davidic65> but a fresh install and its different
<ChibaPet> davidic65: I have a fresh install of 14.04, and it also allows root by default. Don't take what you read on IRC with more than a grain of salt.
<davidic65> not complaining about sudo, just dont use it after I have root setup with a nice pwd or key
<ChibaPet> davidic65: sudo is great when you have an admin team and want to track access and actions.
<davidic65> sure, lone wolf here
<ogra_> root has been locked down since the very first release in ubuntu (the installer actively runs passwd -l root) ... if you have an unlocked root account you must have manually set a password or removed the locking
<davidic65> ChibaPet: thats makes total sense
<ogra_> there is technically no way an upgrade touches that
<davidic65> Ubuntu are trend setters
<ChibaPet> ogra_: That is accurate. You don't get a root password unless you explicitly set one.
<ogra_> right
<ogra_> and if you set one or unlock the account it will persist across upgrades
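The two separate mechanisms discussed here (the sshd policy and the account lock) can each be checked directly. A short sketch:

```shell
# Effective sshd policy for root logins ("without-password" /
# "prohibit-password" means key-based only, no password auth for root):
sudo sshd -T | grep -i permitrootlogin

# Whether the root account itself is locked (the installer runs passwd -l root):
sudo passwd -S root    # second field: "L" = locked, "P" = usable password
```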
<davidic65> actually I setup Ubuntu 14.04 on a new hetzner server the other day all was done root?
<ogra_> whats "hetzner" a hardware manufacturer ?
<davidic65> installed and logged in as root?
<jjrabbit443> i like running as root
<ChibaPet> davidic65: How cloud vendors do it is up to them more than Canonical.
<davidic65> ogra_: hetzner.de
<jjrabbit443> so should i create a secondary login that i use to SSH into, then when I am logged in switch to root account?
<davidic65> ChibaPet:  yeah but they did not think sudo was important
<ogra_> davidic65, oh, so you get a hetz-buntu then, thats not comparable to an accurate ubuntu i guess :)
<davidic65> all root
<ogra_> i.e. some vm image they roll
<ogra_> (and obviously not using the ubuntu defaults there)
<davidic65> ogra_: also they give you the initial pwd which is nothing like pwd123456 :)
<ogra_> fun
<ogra_> do you at least get forced to change it on first login
<davidic65> nope just do it if you like
<ogra_> or can i expect to be able to log in to half their virtual servers with the defaults
<davidic65> but its a pretty good one to start of with
<ogra_> thats gross
<davidic65> nah your just starting off, most people who run dedies know to change pwd after install
<davidic65> the one they give is unhackable I guess
<urthmover> .
<jjrabbit443> would be nice if putty had a dropdown with history of ip addresses
<davidic65> ip address for what
<davidic65> apache access log will give you enough
<jjrabbit443> recent servers
<davidic65> ?
<jjrabbit443> *recent connections
<ogra_> send patches to the putty devs ;)
<davidic65> tcpdump for current
<davidic65> jjrabbit443:  apache access log will do for that
<ogra_> how would apache access log help you to remember which servers you ssh'ed to recently ?
<davidic65> very handy for WP xmlrpc attacks so you can drop an offender
<davidic65> ogra_:  maybe I misunderstood
<ogra_> yeah, putty is a windows ssh client (that was at some point ported to linux i heard)
<davidic65> :)
<davidic65> I use it from time to time :)
<davidic65> kitty more pretty tho
 * ogra_ has last used it in 1998 or so ... when i last used a windows desktop
<davidic65> ah http://www.ubuntu.com/usn/ linux kernel update just in
<davidic65> will they ever nail a kernel down
<davidic65> been in the repos for hours
<davidic65> coffee break maybe?
<davidic65> must admit the flaws are so sophisticated
<davidic65> the people who find these flaws, do they have a kife?
<davidic65> life
<davidic65> Ralf Spenneberg discovered that the usbvision driver in the Linux kernel did not properly sanity check the interfaces and endpoints reported by the device. An attacker with physical access could cause a denial of service
<davidic65> Hats off
<davidic65> how did he find that out :)
<davidic65> I come across things like that every day :)
<sarnold> davidic65: https://www.blackhat.com/docs/eu-14/materials/eu-14-Schumilo-Dont-Trust-Your-USB-How-To-Find-Bugs-In-USB-Device-Drivers-wp.pdf
<sbeattie> davidic65: he has a special arduino device/firmware that he uses to pretend that it's a bunch of different devices
<ogra_> sforshee, nice work with syseleven ! (/me just read their blog)
<ogra_> http://www.syseleven.de/blog/heisenbug/ ... :)
<sarnold> nice find
<davidic65> yeah, hats off to guys who find things like that
<HoloIRCUser2> Hi
<HoloIRCUser2> My server 14.04 kernel 3.19.0.51 has act up
<HoloIRCUser2> Read only filesystem then on reboot I have issues when I fsck /dev/sda
<HoloIRCUser2> Error loading shared libraries
<HoloIRCUser2> Libext2fs.so.2 invalid header
<HoloIRCUser2> Any way I can fix this?
<HoloIRCUser2> Monkeydusk?
<HoloIRCUser2> MonkeyDust?
<HoloIRCUser2> Help?
<davidic65> HoloIRCUser2:  what have you done before the problem?
<HoloIRCUser2> Nothing. Server was on idle
<HoloIRCUser2> When I SSH to it I noticed the RO problem
<HoloIRCUser2> Try to reboot and fsck /boot
<HoloIRCUser2> Can't
<HoloIRCUser2> Hmm
<davidic65> what do your server provider people say
<HoloIRCUser2> Maybe I should fsck it from a USB bootable
<davidic65> no idea of your setup or provider
<HoloIRCUser2> My server provider is me
<sarnold> HoloIRCUser2: try booting to a different root filesystem; cd, usb, pxe, etc., something where the 'fsck' isn't provided by the bad filesystem itself
<HoloIRCUser2> Yeap my thoughts exactly. Will try a bootable usb
<HoloIRCUser2> Thanks
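sarnold's suggestion in concrete terms: run fsck from a live/rescue USB, so the binary and its shared libraries come from the rescue medium rather than the damaged root filesystem. A sketch; the device name is illustrative:

```shell
# Booted from live/rescue media; the target filesystem must NOT be mounted.
sudo umount /dev/sda1 2>/dev/null
sudo fsck -f /dev/sda1       # -f: force a full check even if marked clean
```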
<davidic65> sarnold: how do problems like arise if you have done anything to the server?
<davidic65> not done
<sarnold> davidic65: bitrot is one potential source of problems; it could be logic bugs in the kernel, of course, and cronjobs often do enough work to trigger those..
<davidic65> sarnold: thanks
<jjrabbit443> running "service ssh restart" from an SSH session doesn't kick you off?
<jjrabbit443> really?
<ogra_> no, it keeps the sshd you are connected to running until you log off
<jjrabbit443> cool
<davidic65> getting root eh :)
<ogra_> (a package upgrade for openssh-server calls that too... you dont really want your connection to die if there are 200 packages coming in and ssh is somewhere in he middle)
<nacc> jjrabbit443: those old sessions are running w/ the old context/configuration (iiuc)
<jjrabbit443> better restart the server i guess
<davidic65> heads up, not promoting but virtualmin now does auto install on the free letsencrypt ssl certs as of today, well cool, one click install :)
<Mate-user1512> ok here i am
<Mate-user1512> unfortunately e2fsck doesnt report any issues with /dev/sda1
<Mate-user1512> now what?
<davidic65> ?
<Mate-user1512> am on server boot from usb
<Mate-user1512> a, holoircuser
<Mate-user1512> am
<davidic65> this a home server?
<Mate-user1512> yes
<Mate-user1512> this is what i get: /dev/sda1: 303/124496 files (16.8% non-contiguous), 58680/248832 blocks
<davidic65> why not backup and do a new install
<Mate-user1512> no way
<Mate-user1512> to much trouble
<davidic65> why? Exactly
<Mate-user1512> i have a lot of configs on this one
<davidic65> back them up
<davidic65> what software you running
<sarnold> Mate-user1512: you could check debsums on the filesystems..
<Mate-user1512> so nothing can be done with those shaqred libraries gone bad?
<Mate-user1512> am on ubuntu server 14.04.4
<davidic65> I read something on google earlier
<davidic65> probably related
<Mate-user1512> now i have booted manjaro from usb / on server :)
<davidic65> not
<Mate-user1512> umount sda1
<davidic65> google your problem a bit
<Mate-user1512> and checked for issues
<Mate-user1512> nothing came up
<sarnold> Mate-user1512: you can apt-get install --reinstall the affected packages
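Combining the two suggestions (debsums, then --reinstall) into one sequence. A sketch; on 14.04 the package providing libext2fs.so.2 is e2fslibs, but verify with dpkg -S on your own system:

```shell
sudo apt-get install debsums
sudo debsums -s                   # report only files whose checksums changed
dpkg -S libext2fs.so.2            # find the owning package (e2fslibs on 14.04)
sudo apt-get install --reinstall e2fslibs e2fsprogs
```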
<Mate-user1512> hmm
<Mate-user1512> didnt try that
<davidic65> simple things can fix things thats for sure
<davidic65> I was reading something about ncurses
<davidic65> probably not related
<Mate-user1512> nah
<davidic65> always an answer dude but always have complete backups
<Mate-user1512> i have
<Mate-user1512> i dd the os image
<davidic65> then just reinstall and restore
<Mate-user1512> but i was on 12.04
<davidic65> ah
<Mate-user1512> yeah
<Mate-user1512> too much trouble
<Mate-user1512> too long ago
<davidic65> well your site is down how much trouble is it worth?
<davidic65> when did you upgrade?
<Mate-user1512> when canonical aloud ne too
<Mate-user1512> *me
<davidic65> while ago then
<Mate-user1512> i think that was on 14.04.1
<davidic65> and all was fine until today?
<Mate-user1512> yeap
<davidic65> that sucks
<davidic65> hardware?
<Mate-user1512> d525
<Mate-user1512> with 4gb ram
<Mate-user1512> atom cpu 4 threads
<davidic65> I mean hardware failure
<Mate-user1512> no failure
<davidic65> how you know?
<Mate-user1512> filesystem corruption maybe
<davidic65> hdd packing in maybe
<Mate-user1512> no
<Mate-user1512> my raid 5 array is ok
<sarnold> smartctl reports happy?
<Mate-user1512> its just the os hdd that has issues
<Mate-user1512> let me see
<davidic65> atom is it a laptop with raid 5?
<Mate-user1512> nope desktop
<Mate-user1512> 1.8ghz x4
<sarnold> some atoms were also popular with low-power server folks
<Mate-user1512> this is it
<sarnold> if a machine was intended for mostly storage tasks it was a pretty decent fit
<Mate-user1512> yeap
<davidic65> sarnold: Scaleway using them with great sucess
<Mate-user1512> it fits me very well indeed
<jjrabbit443> is the "var" directory the "program files" of ubuntu?
<Mate-user1512> nothing wrong with sda
<Mate-user1512> weird
<jjrabbit443> sorry, is "bin" not "var"
<sarnold> jjrabbit443: check out "man 7 hier"
<jjrabbit443> sarnold: ty
<Mate-user1512> hmm /dev/sda1: status 0, rss 2896, real 0.010133, user 0.003333, sys 0.003333
<Mate-user1512> hdd is well
<davidic65> if you have money hire a sysadmin expert
<sarnold> was that the -fast- check? try a slow long check
<Mate-user1512> how?
<Mate-user1512> -p?
<davidic65> I reckon its hardware failure
<Mate-user1512> that was fsck -r /dev/sda1
<Mate-user1512> shall itry -r
<Mate-user1512> eh
<Mate-user1512> -p
<davidic65> things dont go tits up overnight like that if you have not been faffing with stuff
<sarnold> Mate-user1512: the -t long option to smartctl looks like the thing
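In concrete terms (device name illustrative):

```shell
sudo smartctl -t long /dev/sda   # start the long self-test (runs in background;
                                 # smartctl prints the estimated duration)
# ...after it completes:
sudo smartctl -a /dev/sda        # "Self-test execution status" and the test log
sudo smartctl -H /dev/sda        # overall SMART health verdict
```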
<sarnold> davidic65: meh, sometimes they do
<sarnold> davidic65: I came back froma weekend once to find that reiserfs had eaten over a third of its tree
<davidic65> ok, but never had probs like that
<ogra_> as long as mechanics are involved they will always wear out at some point
<davidic65> other than bang something gives up
<Mate-user1512> hmm maybe is that
<Mate-user1512> it wear out
<sarnold> if you have enough systems these sorts of things happen near continuously :)
<Mate-user1512> nothing gives whats the issue
<Mate-user1512> i have to remove usb and reboot os to check again
<sarnold> which is why e.g. zfs is as popular as it is, it provides self-repair when disks bitrot
<Mate-user1512> it's not worth it for the hw involved
<davidic65> Mate-user1512:  if you have full backup I can loan you a dedicated to restore to, you will just need to do dns
<Mate-user1512> you mean backup my raid?
<davidic65> just your sites
<davidic65> what sites you running
<Mate-user1512> nah i dont have any
<Mate-user1512> :)
<davidic65> so is the server that important?
<Mate-user1512> this is a home server for storage/streaming/downloading
<davidic65> why
<Mate-user1512> my right hand
<davidic65> but you had this scenario in mind yeah, server goes bang, do a quick restore
<davidic65> on something important to you
<Mate-user1512> i'll remove ubuntu and install openmediavault i think
<davidic65> WHAT
<Mate-user1512> too much trouble start over
<davidic65> let me get this right :)
<davidic65> its just a personal storage server
<Mate-user1512> yes
<davidic65> lol
<davidic65> hehe
<davidic65> I mean a personal PC
<Mate-user1512> yes
<davidic65> :)
<Mate-user1512> ok i'll reboot
<davidic65> I think I am about done here
<Mate-user1512> bye and thanks
<davidic65> geez
<davidic65> guy needs an external hdd :)
<davidic65> well I fell for that one
<davidic65> probably just realised his hardware is going south
<jjrabbit443> anybody know if there is a website that tests your server for vulnerabilities?
<bekks> Hopefully there isnt, that would be a severe vulnerability :)
<patdk-wk> unless you're just looking to test, wide open stupid mistakes
<patdk-wk> then you're probably looking for shodan.io
<coreycb> beisner, can you promote python-glanceclient 1:2.0.0-0ubuntu1~cloud0, python-openstackclient 2.2.0-0ubuntu1~cloud0, and pep8 1.7.0-1~cloud0 to trusty-mitaka-proposed?  regression tests have passed successfully.
<beisner> coreycb, promoted those 3 to mitaka-proposed.  thanks for the updates :)
<coreycb> beisner, thanks!
#ubuntu-server 2016-03-15
<devsys> Hi all, I have a 14.04 server install, the init process is constantly ussing 33% of the processor. Where is a good place to start troubleshooting this?
<devsys> It's been running fine, I think this started with a update
<joni_> hey there...
<joni_> i got some problem in installing libapache2-mod-fastcgi for Ubuntu Server 14
<TJ-> devsys: is some service being constantly restarted?
<devsys> TJ-: Yep, just found it's scip:  "init: scip main process ended, respawning"
<devsys> Not sure why it's crashing like that, I wonder if theres some log I can check for that...
<TJ-> devsys: possibly /var/log/syslog, or else a 'scip' specific log
<devsys> TJ-: the text " init: scip main process ended, respawning" repeats over and over in the syslog
<TJ-> devsys: not helpful. Best to disable the service then try starting it manually with its foreground/debug options on and see what it reports to the terminal
<devsys> hmm ok, trying to figure out what this service even is
<devsys> man scip has nothing
<TJ-> devsys: ask the service controller
<TJ-> devsys: something like "initctl list | grep scip" might help (or without the grep)
<devsys> so scip is what starts sysctl at boot according to the scip upstart job
<devsys> TJ-: I stopped the service a while ago and it indeed stops the CPU usage, but I'd imagine there is a underlying problem here I need to fix so it can run normally
<TJ-> devsys: As far as I can tell 'scip' is not an Ubuntu packaged tool
<TJ-> It looks like it's been added from a 'help' article at https://help.ubuntu.com/community/WebBrowsingSlowIPv6IPv4
<devsys> TJ-: You nailed it, I used this article months ago to resolve an issue where this server (running apache) was extremely slow to respond to requests initially.
<devsys> Blowing this crap out of there now lol thx
<devsys> That fix did not even work IIRC, I just needed to add a directive to the resolv.conf to disable the ipv6 lookups that were slowing things down
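The debugging flow TJ- walked through generalizes to any respawning upstart job on 14.04 (the job name `scip` is the one from this log; the paths are the upstart defaults):

```shell
# List upstart jobs and find the one that keeps respawning:
initctl list | grep scip

# Stop it, and keep it from starting at boot via an override file:
sudo stop scip
echo manual | sudo tee /etc/init/scip.override

# If the job came from a hand-applied tweak (as here) rather than a
# package, its definition can simply be removed:
sudo rm /etc/init/scip.conf /etc/init/scip.override
```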
<Datz> Hi, when I upgraded to the next 14.04 phpmyadmin and munin give me 403s. I'm not sure what changed in apache to break them, but if anyone has any insights on where to look first it would be a big help
<sarnold> Datz: apache 2.4's authentication and authorization is very different from apache 2.2; here's a reasonable starting point http://httpd.apache.org/docs/current/howto/auth.html
<Datz> thank you sarnold
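The usual 2.2-to-2.4 breakage behind 403s like Datz's is access control: the 2.2-style `Order`/`Allow` directives are replaced by the new `Require` authorization model. A minimal sketch of the translation (the conf filename and directory path below are placeholders, not from the conversation):

```shell
# Apache 2.2:                     Apache 2.4 equivalent:
#   Order allow,deny         =>     Require all granted
#   Allow from all
#   Deny from all            =>     Require all denied
sudo tee /etc/apache2/conf-available/phpmyadmin-access.conf <<'CONF'
<Directory /usr/share/phpmyadmin>
    Require all granted
</Directory>
CONF
sudo a2enconf phpmyadmin-access
sudo service apache2 reload
```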
<bray90820> How would I login to SSH with a bash script?
<bray90820> Without entering a password
<hateball> !sshkey
<hateball> hmmm
<hateball> bray90820: set up passwordless ssh using keys
<hateball> !ssh
<ubottu> SSH is the Secure SHell protocol, see: https://help.ubuntu.com/community/SSH for client usage. PuTTY is an SSH client for Windows; see: http://www.chiark.greenend.org.uk/~sgtatham/putty/ for its homepage. See also !scp (Secure CoPy) and !sshd (Secure SHell Daemon)
<bray90820> hateball: Yea someone else explained it to me I had my logic reversed and was thinking that was a bad idea
<bray90820> I get it now tho
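Key-based login as hateball suggests boils down to two commands (the user and hostname are placeholders):

```shell
# Generate a keypair (accept the defaults; a passphrase is optional):
ssh-keygen -t ed25519

# Install the public key on the remote host -- the password is asked
# one last time here:
ssh-copy-id user@server.example.com

# From now on scripts can connect without a password prompt:
ssh user@server.example.com 'uptime'
```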
<PSUser2>  i've followed this tutorial, up to step 4: http://funwithlinux.net/2014/04/ubuntu-14-04-web-server-tutorial/ but php doesn't want to work. when i load the page, it remains blank
<PSUser2> any ideas?
<PSUser2> apache sees index.php but does not execute php code, so i see a blank page
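A blank page with unexecuted PHP on 14.04 usually means mod_php isn't installed or enabled. A sketch of the common fix (assuming the stock Apache docroot):

```shell
# Install and enable the PHP 5 Apache module (14.04 ships PHP 5):
sudo apt-get install libapache2-mod-php5
sudo a2enmod php5
sudo service apache2 restart

# Quick sanity check: drop a phpinfo page in the docroot and load it
# in a browser -- a rendered info table means PHP is executing.
echo '<?php phpinfo();' | sudo tee /var/www/html/info.php
```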
<coreycb> beisner, when you get a chance can you promote sahara 1:4.0.0~b3-1ubuntu1~cloud0 and murano 1:2.0.0~b3-2~cloud0 to trusty-mitaka-proposed?
<beisner> coreycb, promoted murano + saraha to mitaka-proposed
<coreycb> beisner, thanks
<coreycb> EmilienM, the new murano package will be available shortly in trusty-mitaka-proposed once it builds
<EmilienM> coreycb: ack, thx a ton
<RawPeacock> good morning :)
<Sling> evening even
<RawPeacock> So I'm pretty new to alot of this, but is there a method of authenticating an ubuntu phpipam server through ldap (to an existing, separate AD server) without using samba?
<jjrabbit443> exit
<alreece45> Is there something in 14.04 that restarts the boot process if it taking too long?
<bekks> alreece45: no.
<bekks> alreece45: there is no facility that is already running in case the boot process takes too long, which could initiate a restart of the boot process.
<alreece45> I seem to be encountering an issue where (after the server becomes unresponsive and is rebooted), it will not start up normally. Booting to recovery works, fsck works, going to the console and manually starting everything works, but trying to resume normal boot results in a short loop of failing to mount network filesystems (there are none), and eventually-- after several minutes, a reset...
<DammitJim> can I delete the user account that I created when I installed ubuntu on a server?
<runelind_q> can I use standalone landscape server to manage itself along with all my client boxes?
<teward> runelind_q: I did that in the past; though I never actually had Landscape manage its own restarts - i always did that by hand
<teward> (that landscape instance has since been removed, due to the system it was on blowing up in my face)
<runelind_q> teward: so enroll my landscape server just like I would other clients?
<runelind_q> to itself
<teward> runelind_q: that's what I did, but note that if you need the slot that you lose with that registration, you're better off running Landscape's updates and such manually yourself by hand
<runelind_q> I'm good on slots.
<runelind_q> teward: that worked, thanks!
<teward> runelind_q: you're welcome.  I seriously would NOT use Landscape to reboot itself, though, rather, turn off landscape service via SSH, and then restart as you would any other server by hand
<teward> and then when it comes back up, done.
<runelind_q> teward: agreed.
<teward> (you may have a slight downtime, yes, for the other machines not being able to ping Landscape, but...)
<axisys> anyidea why realm join is failing with samba-common-bin on trusty64 when clearly it is already installed?
<axisys> http://dpaste.com/340X614
<axisys> http://dpaste.com/2DPT64Y <- better output
<axisys> I changed the realmd.conf automatic-install = no as a workaround to the bug
<axisys> http://serverfault.com/questions/598476/how-to-use-realmd-in-ubuntu-14-04-lts-to-join-an-active-directory-domain
<bray90820> Not really the right place for this but... can I mount an lvm snapshot to an SSH location?
#ubuntu-server 2016-03-16
<zepka1> Hello people
<lordievader> Good morning.
<halvors1> Hi. AppArmor is apparently destroying my VPN connection with StrongSwan, any way to fix this? http://pastebin.com/21V17bVP
<sdeziel> halvors1: try adding: "owner @{PROC}/@{pid}/fd/ r," to /etc/apparmor.d/local/usr.lib.ipsec.charon
<sdeziel> halvors1: then reload the profile with: apparmor_parser -r -T -W /etc/apparmor.d/usr.lib.ipsec.charon
<halvors1> sdeziel: Isn't this something that would be corrected in the strongswan package?
<sdeziel> halvors1: yes but testing the manual fix first would be useful
<halvors1> sdeziel: The VPN tunnels works for several hours at first.
<halvors1> Then it goes down and doesn't come up again unless i restart it manually.
<halvors1> So there is no way to be sure that this fix works before it has gone several hours from now.
<sdeziel> halvors1: which version of strongswan do you have installed?
<halvors1> Linux strongSwan U5.3.5/K4.2.0-34-generic
<halvors1> So 5.3.5 :)
<sdeziel> that's one odd mix of kernel/userland versions
<sdeziel> 5.3.5 only shipped with Xenial yet you have Wily's kernel
<halvors1> sdeziel: I know. Trying to use 5.3.5 from xenial repo on wily since i had so much trouble with wily's build.
<halvors1> Same issue though.
<halvors1> sdeziel: Do you want me to use the wily version instead?
<sdeziel> halvors1: I'd suggest to open a bug in LP describing your environment (including a list of all the packages related to Strongswan).
<sdeziel> halvors1: I never ran into this denial but we have different testing environments. I only ever used Strongswan on Trusty and Xenial
<sdeziel> halvors1: if that's feasible, you could try to reproduce the issue with a clean/pure Xenial machine?
<halvors1> sdeziel: Ok, haha another solution is just to disable apparmor.
<halvors1> Really it's almost only trouble.
<sdeziel> halvors1: you can try that too but I'd recommend to only disable the charon profile
<sdeziel> halvors1: FYI, Apparmor might only be a red herring like it was in https://bugs.launchpad.net/ubuntu/+source/strongswan/+bug/1549436
<halvors1> sdeziel: How do i do that?
<ubottu> Launchpad bug 1549436 in strongswan (Ubuntu) "AppArmor kills StronSwan daemon 'charon'" [Undecided,Confirmed]
<sdeziel> halvors1:
<sdeziel> https://help.ubuntu.com/community/AppArmor#Disable_one_profile
<halvors1> sdeziel: Thanks, i'll monitor the connection with your proposed fix first.
<halvors1> sdeziel: Should "owner" be replaced with the user or just stay as it is?
<sdeziel> halvors1: "owner" requires the executable to be run by the same user as the file/dir owner for the rule to match.
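Putting sdeziel's suggestions together, the local-override route and the per-profile disable route look roughly like this (the profile paths are the ones from the conversation):

```shell
# Route 1: add the missing rule to the local override, then reload:
echo 'owner @{PROC}/@{pid}/fd/ r,' | \
    sudo tee -a /etc/apparmor.d/local/usr.lib.ipsec.charon
sudo apparmor_parser -r -T -W /etc/apparmor.d/usr.lib.ipsec.charon

# Route 2: disable just the charon profile instead of all of AppArmor:
sudo ln -s /etc/apparmor.d/usr.lib.ipsec.charon /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.lib.ipsec.charon
```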
<CrummyGummy> Hi, do any of you know of a security ppa for Natty? I realise it's unsupported but I need to upgrade openssl quite urgently.
<henkjan> CrummyGummy: there is no security ppa for Natty
<CrummyGummy> then any src for an up to date openssl deb?
<henkjan> CrummyGummy: you can try to point your /etc/apt/sources.list.dist to old-releases.ubuntu.com
<Pici> Since it EOL'd in October 2012, you're not going to get any updates dated past then.
<CrummyGummy> I know :(
<henkjan> installing an openssl .deb from a new release will give you headaches i'm afraid
<CrummyGummy> Any idea how risky it would be to install by hand?
<CrummyGummy> I have it compiled and ready to install. Just a bit nervous...
<henkjan> why would you even try to update openssl when the rest of your system has gaping holes all around?
<CrummyGummy> Just to make connectivity work to a client.
<CrummyGummy> we're moving away from this server slowly. It's one of those legacy boxes that we're too scared to update.
<CrummyGummy> there's always one...
<patdk-wk> the thing about openssl is
<patdk-wk> it will just work, or you will have to recompile everything that uses it :(
<patdk-wk> and that is when you stay within the same version :(
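Repointing an EOL release at old-releases.ubuntu.com, as henkjan suggested, can be done with a couple of sed edits (a sketch; review the resulting sources.list before running apt-get update, and note Pici's caveat that nothing newer than the EOL date exists there):

```shell
# Rewrite both the archive and security mirrors to old-releases:
sudo sed -i -e 's|//.*archive.ubuntu.com|//old-releases.ubuntu.com|g' \
            -e 's|//security.ubuntu.com|//old-releases.ubuntu.com|g' \
    /etc/apt/sources.list
sudo apt-get update
```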
<runelind_q> if I'm doing a repository, do I want to delete the Packages file every time before re-running dpkg-scanpackages?
<runelind_q> I guess it is re-creating it since I'm doing > Packages.gz
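runelind_q's guess is right: the `>` redirection truncates the output file on every run, so there is nothing to delete first. A sketch of the usual flat-repository refresh (the repo path is a placeholder):

```shell
# Regenerate the package index for a flat local repository.
# /dev/null serves as an empty override file.
cd /srv/myrepo
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
```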
<cyphermox> hey, if someone cares a lot about ntp; this seems like it might be a relatively easy bug to fix: https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/1558125
<ubottu> Launchpad bug 1558125 in ntp (Ubuntu) "ntpd doesn't synchronize to local clock (ntpd 4.2.8p4/xenial)" [High,Triaged]
<cyphermox> ^ I'm pointing it out here because ntp is subscribed by the server team, I may otherwise get up to it $later
<nacc> kickinz1: --^ ?
<cyphermox> if you need help reproducing I can perhaps get the reporter or his manager or someone to get more info quickly.
<tuor> Hi, I'm working on a preseed file. I trying to add an apt repo while installing. This is my preseed file: https://paste.ubuntu.com/15402527/
<tuor> My goal: add the puppetlabs repos directly during the installation.
<tuor> What I tried now: Add Ubuntu's "ch" archive as a repo.
<tuor> Problem: When installing, I get an error in the "select and install software" stage.
<tuor> Line 162 of my preseed file is the interesting part. What am I doing wrong? Or how can I find out what I'm doing wrong?
<tuor> If I remove the 2 lines 162 and 163 it works.
<kickinz1> nacc, cyphermox looking
<nacc> kickinz1: thanks!
<tuor> It's working now. I just don't know what is different^^
<jjrabbit443> hello
<jjrabbit443> is creating a swap file on ubuntu the equivalent of having a pagefile on windows?
<lordievader> Pretty much.
<jjrabbit443> if you are low on RAM isn't a swap file on an SSD drive pretty much as good as using RAM anyway?
<jjrabbit443> as far as I/O speed
<sarnold> not even close; ram on high-end cpus can run at ~60 gigabytes per second; an ssd connected via sata can only run at 500 megabytes per second
<sarnold> don't get me wrong, ssds are way better than hard drives
<sarnold> s/hard drives/spinning metal/
<sarnold> but there's no substitute for real ram when you need it
<jjrabbit443> wow ok nevermind
<jjrabbit443> sounds like even old ass RAM sticks would blow SSD I/O out of the water
<sarnold> yeah :) old-ass ram may only go 20 gigabytes per second or something -- check intel's ARK for details for your processor -- but it's still better than swap by a lot :)
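For completeness, creating the swap file jjrabbit443 asked about is a short sequence (2G is an arbitrary size):

```shell
# Allocate the file, lock down permissions, format and enable it:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it survive reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```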
<l1meon> Hello, I have a question. How can I run a script for web using ipv6 instead of ipv4? Say I have ipv4: 55.55.55.55, and 4 ipv6: 2e2e:xxx:xxx
<sarnold> l1meon: do you mean, something like a php script that you run in apache's mod_php or fastcgi or similar?
<sarnold> l1meon: does it already work with ipv4? is there something preventing your apache from binding to the ipv6 addresses you want?
<antonispgs> hey guys nice to be here
<antonispgs> so the script for rtorrent i used on a remote seedbox created a nice website where i can see my downloads
<antonispgs> https://IP/download/user/
<antonispgs> i am now transferring stuff that are going in folders inside https://IP/download/user/rtorrent/downloads at which point i get a 403 forbidden error
<antonispgs> is there a way to get access further into the subfolders?
<sarnold> antonispgs: it depends upon the program; it might just require removing some code that prevents that.. or it might require larger changes
<sarnold> maybe there's a configuration option for the script?
<antonispgs> it was made via nginx
<lunaphyte> i suppose if i don't want to be a hypocrite, i'd better move it over here :)
<tarpman> ;)
<lunaphyte> i see references to both frontend and priority, but only the defaults seems to be indicated in files in /var/cache/debconf/
<lunaphyte> but when i set them with dpkg-reconfigure, they do seem to be remembered, so they've got to be specified somewhere
<lunaphyte> plus, it seems like i don't ever remember being prompted for this sort of thing when doing an install, which is i guess sort of a side note
<tarpman> hm. I'm not aware of any other locations where things are saved. all the non-defaults I can remember configuring are in /var/cache/debconf/config.dat
<lunaphyte> oops, you are right
<lunaphyte> there it is
<lunaphyte> that's got a little bit of an openldap slapd-config feel to it :)
#ubuntu-server 2016-03-17
<lunaphyte> the config for debconf is within the config mechanism debconf uses for everything, including itself
<tarpman> one eats one's own dogfood, etc ;)
<lunaphyte> like slapd's db section for cn=config within cn=config :P
<lunaphyte> yeah
<lunaphyte> what a great find, thanks
<tarpman> :)
<lunaphyte> now i want to know why i don't recall ubuntu asking me those two questions during install
<lunaphyte> i guess i'll have to od a new install to know for sure.  it's been too long to recall with confidence if that's actually what happened, or if i am just dumb
<tarpman> I don't think you'd be prompted for frontend or priority by default
<lunaphyte> i wish it would.
<lunaphyte> how would anyone ever know?
<lunaphyte> outside of pure coincidence, like this
<tarpman> no idea
<tarpman> I learned about debconf when I needed preseeding
<tarpman> I think that's probably how most people get exposed
<frickler> jamespage: any chance to update ceph in xenial based on this: https://www.spinics.net/lists/ceph-devel/msg29225.html ? not completely official release, but IMHO much better than 10.0.2 and you won't want to release xenial with a development release of ceph anyway, or do you?
<rbasak> stgraber: FYI, https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2016-March/016329.html
<sekapiet> hello
<sekapiet> i'm looking for help with openstack autopilot
<coreycb> beisner, when you get a moment can you promote the following packages to trusty-mitaka-proposed? http://paste.ubuntu.com/15407540/
<Hanumaan> how to format multipath iSCSI drive? getting this error: http://paste.ubuntu.com/15407534/  if I try to format it and lsof shows this: http://paste.ubuntu.com/15407542/ .. how to format the drive .. ?
<patdk-wk> you said you are getting an error
<patdk-wk> there is no error
<sekapiet> is there a channel for ubuntu openstack autopilot ?
<Hanumaan> patdk-wk,  sorry did not give the error .. this is the error  "/dev/sdb1 is apparently in use by the system; will not make a filesystem here!"
<patdk-wk> why are you using sdb?
<patdk-wk> you said it was multipathed
<patdk-wk> if you want to use sdb, kill the multipath
<Hanumaan> patdk-wk, you were spot on .. now i used the multipath drive .. which is /dev/mapper/mpath0-part1 ..
<patdk-wk> :)
<Hanumaan> and it worked.. can you also let me know how to get my own alias working for multipath drive? it usually generates some random names .. and says alias already exists ..
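Fixed aliases for multipath maps go in /etc/multipath.conf, keyed by WWID. A sketch (the WWID and alias below are made up for illustration; take the real WWID from `multipath -ll`):

```shell
# Note the WWID of the map you want to rename:
sudo multipath -ll

# Map that WWID to a stable alias:
sudo tee -a /etc/multipath.conf <<'CONF'
multipaths {
    multipath {
        wwid  360000000000000000e00000000010001
        alias mydata
    }
}
CONF

# Re-read the configuration:
sudo service multipath-tools restart
```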
<devster31> hi, I'm wondering if anyone can help me with something, I can access a windows 10 share from osx using smb://ip-of-pc/name-of-folder, and the email-password as accounts, but smbclient refuses to let me access the same share on ubuntu
<devster31> what could I be doing wrong?
<kgirthofer> hey guys I installed ubuntu with lvm for the first time and I don't understand how to resize
<kgirthofer> any help?
<kgirthofer> vgdisplay lists the drive, it's sda2 and 5 respectively
<coreycb> ddellav, re: barbican and use of python-ldap3
 * ddellav nods
<coreycb> It seems like ldap3 is the way to go in the future for python ldap client use since it supports python 3
<coreycb> ddellav, the problem is there's a mix of openstack packages (and non-openstack packages) using python-ldap which is already in main
<ddellav> so we DO in fact need an MIR for ldap3
<ddellav> ?
<coreycb> ddellav, well they'll want one ldap client in main so it'd be python-ldap or python-ldap3
<coreycb> ddellav, I think at this late point in the cycle we should revert barbican to using python-ldap, and see about MIRing python-ldap3 earlier in the next cycle
<ddellav> coreycb ok
<coreycb> ddellav, the patch looks pretty trivial to revert it
<stgraber> hallyn: https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2016-March/016329.html
<achiang> hi, using the ec2 ami locator service: https://cloud-images.ubuntu.com/locator/ec2/
<achiang> after reading docs, it's not clear to me if i can use an hvm:ebs-ssd on a t2.micro
<achiang> does anyone know?
<achiang> or am i forced to use the hvm:ebs instance type?
<achiang> ah, ok. re-reading this page http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
<achiang> seems like ebs-ssd is fine; the only thing that is incompatible would be the instance-store type
<hallyn> stgraber: that sounds bogus:
<hallyn>  It appears that the pam_systemd plugin is not present due to pam_cgfs.so being in the pam session plugin configuration in /etc/pam.d/common-session.
<hallyn> ""
<stgraber> hallyn: yeah, I wonder if libpam-systemd got removed from his system somehow
<hallyn> i'm d/ling new desktop image to try in any case
<hallyn> (unity won't show up in lightdm as an option for me - desktop team blacklisted me)
<coreycb> hallyn, for that qemu patch you backported yesterday, can that be considered fixed for qemu-kvm as well?
<hallyn> coreycb: qemu-kvm is in precise only....  was thinking i'd leave that open until the precise sru is done
<hallyn> but either way
<hallyn> i have it in my sru file so i won't forget it.  Really the "proper" but less helpful thing is to mark it invalid but confirmed for precise target
<coreycb> hallyn, ok I saw it's been deprecated
<coreycb> hallyn, thanks so n/a unless precise which it's currently not targeted to fix for as far as I can see
<hallyn> pretty sure i'll sru it to prcise, just haven't set the targets
<coreycb> hallyn, ok, thanks
<hallyn> stgraber: just updated a unity vm, can't reproduce the inability to get admin rights
<hallyn> lemme try removing the pam-systemd line
<stgraber> hallyn: I know we briefly had a bad cgroup-lite which was causing libpam-systemd to be removed if the user didn't look closely at upgrade time, that may be what caused this for him
<hallyn> oh wait, i was testing xenial, not trusty with ppas. sigh
<hallyn> (trusty wouldn't have pam_systemd i guess)
<stgraber> I thought it did
<randymarsh9> hello
<randymarsh9> what is the simplest way to transfer files to clean ubuntu installation
<randymarsh9> install FTP?
<Deeps> scp/sftp
<randymarsh9> does scp come with ubuntu?
<Deeps> https://help.ubuntu.com/community/SSH/TransferFiles
<sarnold> yes, though you may need to install openssh-server on one of the machines
<randymarsh9> ssh is already enabled
<randymarsh9> i'll do the scp
<randymarsh9> thank you both
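scp usage in a nutshell (host and paths are placeholders):

```shell
# Copy a single file to the remote host's home directory:
scp backup.tar.gz user@server:

# Copy a whole directory tree:
scp -r ./website/ user@server:/var/www/

# Pull a file from the remote host instead:
scp user@server:/etc/hostname .
```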
<randymarsh9> does ./tmp/ get cleared out every reboot?
<sarnold> it depends on what your current working directory is :) /tmp gets cleared, yes, but others may not
<randymarsh9> hm lets see
<max3> how can i get an install of ubuntu-server that has nothing except what i need to run postgres?
<max3> i tried the minimal install but it's still about a gig. how does ubuntu-mini-remix get it down to 200mb?
<sarnold> max3: maybe start with the cloud images?
<teward> also consider: the mini remix isn't an officially supported project ;)
<teward> so it's not an Ubuntu official project
<max3> sarnold, teward i understand that but i'm just wondering if anyone knew how it's accomplished
<max3> sarnold, sorry these cloud images are what? already virtual appliances? ova is some sort of standardized vm right?
<sarnold> max3: deborphan/orphaner scripts can help remove uneeded 'leaf' packages..
<max3> sarnold, which of the cloud images should i download?
<sdeziel> max3: when I need something minimal, I start with a debootstrap then add a kernel+grub. This takes around ~500MB
<sarnold> max3: I think I'd start with disk1.img version
<max3> sarnold, i'm planning on running this in virtualbox so it should be the ova file right?
<sarnold> max3: ah, if vbox can do ova, yeah
<max3> i wish i could get tiny core linux to work
<max3> it even has a postgres package but i can't get the database cluster up because of missing libraries
<max3> sarnold, i don't think these cloud images can just run on virtualbox?
<sarnold> max3: oh :(
<Deeps> max3: an alternative approach might be to create a postgres docker container from the ubuntu package, which you can then run on a number of different minimal platforms
<max3> Deeps, when you say minimal platforms you mean systems that have docker installed?
<Deeps> max3: yes, e.g. photon by vmware
<max3> lol so many different layers of abstraction
<beisner> coreycb, ok, proposed voluptuous @ mitaka uca  ... and the rest of http://paste.ubuntu.com/15407540/ too.
<coreycb> beisner, thanks :)
<coreycb> EmilienM, that will include the new sahara version ^
<EmilienM> coreycb: cool
<devster31> any idea why rsync isn't working between a local hdd mounted on /mnt/usbhdd and an autofs mounted smb share?
<camjac251> Hello
<mybalzitch> hi
<RawPeacock> sup camjac251
<camjac251> Would anyone be able to help assist me with X11? I'm running 14.04 with an X11 server to xshell and xmanager. I ran mythtv-setup over ssh and opened up the x11 window. but the problem is the keys are messed up and mouse clicks dont register. the 1 key would show something like w when pressed, F1 would show as 1. So it would be off by 1 above to the right on the keyboard.
<camjac251> I'm using a full qwerty keyboard, US, Installed Ubuntu 14.04.4 Server today with that chosen as my locale
<camjac251> I tried looking up like wrong keybinds, wrong keys for x11, on google hoping to find some thread with a similar issue and a guide on how to change the x11 keyboard locale, but wasnt able to find anything helpful, the guides there were how to add custom layouts :/
<camjac251> meaning I'd map every key by hand myself
<LuisM> hi folks
<LuisM> look at this flaw in Ubuntu Server 14.04 LTS x64: i've just run "shutdown now" over ssh as root ("sudo su"), and on the first tty the root account is exposed with a shell, and the machine isn't shutting down..
<LuisM> wtf?
<tarpman> LuisM: it switched to single-user mode, exactly as you told it to. what were you expecting?
<tarpman> LuisM: http://manpages.ubuntu.com/manpages/trusty/en/man8/shutdown.8.html -> If no option is specified, the default action is to  bring  the  system down into single-user mode.
<LuisM> ok, but logged in with the root account?
<tarpman> LuisM: http://manpages.ubuntu.com/manpages/trusty/man8/sulogin.8.html -> If the root account is locked, as is the default on Ubuntu, no password prompt is displayed and sulogin behaves as if the correct password were entered.
<LuisM> hmm, now i see
<LuisM> ;)
<LuisM> that behavior is controlled
<tarpman> LuisM: even if you lock the root account... unless your disks are encrypted, someone with physical access to your server can just reboot it and boot with "init=/bin/sh"
<tarpman> LuisM: so protecting the server itself is still crucial.
<LuisM> yeah, absolutely
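For reference, the flag-less invocation is what caused the single-user drop tarpman described; the common forms are:

```shell
sudo shutdown -h now    # halt / power off immediately
sudo shutdown -r now    # reboot immediately
sudo shutdown now       # NO flag: bring the system down to single-user mode
sudo shutdown -c        # cancel a pending timed shutdown
```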
<Gaming4JC> I have 4 drives, what is the best method for backing up stuff? RAID6, RAID0 (two drives for CORE os, the other two rsynced user files separately), or a Union FS such as Gluster, MergerFS, etc.
<Gaming4JC> researching this has left me more confused than before I started
<Gaming4JC> xD
<RawPeacock> alright so I'm trying to follow this guide http://www.kiloroot.com/add-ubuntu-14-04-lts-server-to-a-windows-active-directory-domain-fullest-integration/ and I'm unable to verify on the last step
<sarnold> Gaming4JC: have you considered zfs yet? :)
<Gaming4JC> sarnold, I did, but the licensing issue bothers me. I like GPL :P
<RawPeacock> 14.04 successfully joined the domain using my domain admin credentials, but when I look at other posts using similar methods (ie: https://community.spiceworks.com/how_to/80336-join-ubuntu-14-04lts-to-a-windows-domain-using-pbis-open) I notice a few incongruencies
<teward> hate to be the obvious harbinger of IT doom, but relying on RAID or a filesystem to do a backup on its own, without taking a full backup as a separate copy of data, sounds iffy
<teward> just sayin ;)
<RawPeacock> #1 being that I have no lightdm configuration file /usr/share/lightdm/lightdm.conf.d/50-unity-greeter.conf , but reading has indicated that that's only necessary on desktop environments not serverside
<RawPeacock> #2 being that my server never had the session sufficient pam_lsass.so line in  /etc/pam.d/common-session , so I added it at the bottom of the file with the recommended parameter of session [success=ok default=ignore] pam_lsass.so
<RawPeacock> I've attempted to verify just *how* connected I am to my AD by going into my phpIPAM page and seeing if I could add a domain user, which it does successfully, but I'm unable to search on the utility using my domain admin account, and still unable to login using a domain account via ssh
<sarnold> RawPeacock: do you have any error messages in /var/log/auth* that might indicate why the sshd logins didn't work?
<RawPeacock> sarnold, I'm trying to connect on port 22 on my PuTTY utility but according to this log it's indicating that ssh2 port 61576; unable to authenticate [preauth]
<RawPeacock> let me try a different account that's specifically added to the linuxadmins group
<RawPeacock> here's my slightly edited log http://paste.ubuntu.com/15411484/
<sarnold> RawPeacock: does the /etc/pam.d/sshd file either include the proper account and session lines for the new pam module, or include files that do?
<RawPeacock> sarnold, I don't believe so no, going over my notes on the installation of the phpIPAM component  http://paste.ubuntu.com/15411556/
<sarnold> RawPeacock: and how about the common-account, common-auth, common-session files?
#ubuntu-server 2016-03-18
<RawPeacock> sarnold, blah I wish I knew thank you for giving me things to investigate tomorrow
<RawPeacock> In case you're feeling curious http://paste.ubuntu.com/15411603/
<RawPeacock> thanks again for answering sarnold
<Gaming4JC> So is it even worth using RAID1 on two drives these days?
<Gaming4JC> My idea was: Two drives, two different vendors (mirrored), then two other drives (maybe RAID1 again?) for offline backups
<Gaming4JC> it's got to be better than what I have now which is nothing
<Gaming4JC> :D
<sarnold> RawPeacock: yeah, it looks like there's no pam modules to do the AD authentication steps...
<sarnold> Gaming4JC: I think it is; while it's not going to prevent data loss in the case of two drives dying, at least it'll handle one drive death alright
<Gaming4JC> sarnold, ok, thanks :)
<sarnold> Gaming4JC: and you said this was for backups ayway, so even e.g. raidz1 / raid5 may make sense, it's not like it'll cause immediate downtime
<sarnold> it will mean running without backups for a bit, not great, but still :)
<jak2000> i want every 8 minutes execute a script, from 20pm to 8am in the crontab: 0,8,16,24,32,40,48,56 20-8 * * 1-7 /usr/scripts/respsql2.sh   <---- not work, why?
<jak2000> j ubuntu
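jak2000's question went unanswered, but the schedule itself is the likely culprit: the Vixie cron Ubuntu ships does not reliably accept an hour range that wraps past midnight, so 20-8 needs splitting into two ranges; `*/8` also abbreviates the minute list. A hedged sketch of the corrected entry:

```shell
# Edit the crontab with `crontab -e` and use a non-wrapping hour spec:
#   */8 20-23,0-8 * * * /usr/scripts/respsql2.sh
# (*/8 == 0,8,16,24,32,40,48,56; note the hop from :56 to :00 of the
#  next hour is only 4 minutes -- a per-hour minute list cannot avoid that)

# Verify what is installed afterwards:
crontab -l
```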
<showaz> ubuntu-server 16.04 (motd frozen)
<showaz> 27 packages can be updated. 0 updates are security updates.
<showaz> sudo apt dist-upgrade ( 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. )
<OlofL> apt-get install ruby gave me ruby 1.9. how do I get to install ruby 2.0+?
<OlofL>  uname -a Linux 80003v-app001 3.16.0-37-generic #51~14.04.1-Ubuntu SMP Wed May 6 15:23:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
<cpaelzer> OlofL: 1.9 was the base in trusty, so that is what you've got when just installing "ruby"
<cpaelzer> OlofL: but there is a ruby2.0 package for you that will give you 2.0.0.484-1ubuntu2.2
<cpaelzer> http://packages.ubuntu.com/trusty/ruby2.0
<selinuxium> hi all
<devster31> what package do I need to remove files from an ntfs hard drive?
<devster31> mounting with ntfs isn't enough
<lordievader> devster31: It should be. If the ntfs drive is mounted rw you can remove files from it.
<devster31> https://bpaste.net/show/f3ac712dd67a these are from mtab and fstab respectively and I can't even create a directory
<lordievader> As what user?
<devster31> pi user, as the gid uid
<lordievader> Also, check dmesg. It might still be mounted ro because it was uncleanly unmounted or something.
<jamespage> frickler, hey - sorry for lag - been away
<jamespage> frickler, I have a 10.0.3 prepared in ppa:openstack-ubuntu-testing/ceph-sru if you want to try it
<devster31> this is the last dmseg https://bpaste.net/show/862eabad96ac are you sure it isn't related to ntfs vs ntfs-3g driver?
<jamespage> frickler, I try get that uploaded today (on my backlog)
<jamespage> frickler, and we will ship with Ceph Jewel - last time I checked in with sage that was still good timescale wise
<lordievader> devster31: Ntfs != ntfs-3g?
<devster31> lordievader: doesn't seem so, ntfs is the linux kernel driver read-only, ntfs-3g is read-write
<lordievader> Ah, then use ntfs-3g ;)
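For reference, a hedged fstab sketch that forces the read-write FUSE driver (`ntfs-3g`, from the ntfs-3g package) instead of the read-only in-kernel `ntfs` driver; the UUID, mount point, and uid/gid values are placeholders:

```
# /etc/fstab — mount an NTFS partition read-write via ntfs-3g
UUID=0123-4567  /mnt/ntfs  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0
```

The uid/gid/umask options matter because ntfs-3g has no POSIX ownership on disk; they decide which local user "owns" the files.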
<k4v_>  hi i have a problem with a ubuntu server: i got lots of kernel warnings in dmesg, and now my root filesystem is full. i already deleted everything i could find but now two days later its full again
<lordievader> k4v_: What is taking up the space?
<lordievader> Or better, what is generating it ;)
<k4v_> I wish I knew
<k4v_> i cannot find any more files to delete
<k4v_> dmesg still shows new warnings every few seconds
<k4v_> i used du --max-depth=1 to find stuff to delete, but now I don't see anything more
<lordievader> Precisely, start with which folders take up the most space on the root of the filesystem.
<k4v_> lordievader: i did this, the space is gone but there are no more logfiles to delete
<lordievader> k4v_: What is the output of 'du -hs /*'?
<frickler> jamespage: thanks for the update, I'll be away next week, will recheck after that
<k4v_> lordievader: i found the problem: I deleted the logfiles without stopping rsyslog, the process still held open handles to the files.
<k4v_> after "service rsyslog restart" i have my space back.
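k4v_'s situation is easy to reproduce: space owned by a deleted file isn't released until the last open descriptor on it is closed, which is why `du` couldn't find it. A minimal sketch (POSIX sh on Linux, using /proc; `lsof | grep deleted` reports the same thing system-wide):

```shell
#!/bin/sh
# A file that has been rm'ed but is still open: du can't see it,
# yet its blocks stay allocated until the descriptor is closed.
f=$(mktemp)
exec 3<"$f"               # keep a descriptor open (rsyslog did this)
rm -f "$f"                # unlinked: invisible to du/ls, space NOT freed
readlink "/proc/$$/fd/3"  # link target ends in " (deleted)"
exec 3<&-                 # close (or restart the daemon) -> space freed
```

Restarting the service, as k4v_ did, closes all its descriptors at once, which is why the space came back.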
<lordievader> Still, log files shouldn't grow that large, that fast.
<lordievader> You should investigate which log is spamming your rootfs full.
<k4v_> there are lots of kernel warnings... thats the problem.
<k4v_> it seems to have something to do with a network device....
<lordievader> Could you pastebin those?
<coreycb> ddellav, I pushed the barbican-api changes.  it drops uwsgi and runs the api behind apache/mod-wsgi now.  I need to figure out how to do the init scripts for that so those aren't there yet.
<ddellav> coreycb ack
<halvors> Why is lxcbr0 present by default in ubuntu 16.04...
<halvors> ?
<ogra_> because lxc is preinstalled in the server task by default in 16.04
<sdeziel> halvors: for now, you can set USE_LXC_BRIDGE=false in /etc/default/lxc-net
<coreycb> beisner, can you please promote neutron 1:2014.1.5-0ubuntu4~cloud0  from precise-icehouse-staging to precise-icehouse-proposed? regression testing is complete.
<beisner> coreycb, yep on it
<patdk-wk> heh, well this is a lot of fun
<patdk-wk> sarnold this is even more silly :)
<patdk-wk> so that closed-source program, that wouldn't run on precise with a newer kernel
<patdk-wk> will run on precise with a xenial kernel
<beisner> coreycb, oh hey, pushed neutron 1:2014.1.5-0ubuntu4~cloud0 to icehouse-proposed
<coreycb> beisner, thank you
<beisner> coreycb, yw
<karstensrage> anyone know why libpam0g et. al. was recently updated?
<tarpman> karstensrage: presumably /usr/share/doc/libpam0g/changelog.Debian.gz would say why
<Pici> karstensrage: also see the last few entries here: http://www.ubuntu.com/usn/
<karstensrage> wow thats freaking awesome
<karstensrage> thanks tarpman
<karstensrage> nice
<karstensrage> thanks Pici
<Slashman> hello, I'm trying to activate hugepages on ubuntu server 15.10, I use "vm.nr_hugepages = 17000" in sysctl.conf and even after reboot and sysctl -p, "cat /proc/meminfo | grep Huge" give me 0 hugepages, the process works on debian and centos, what am I missing ?
<sdeziel> Slashman: do you have ~34GB of RAM or more?
<Slashman> 256GB ram
<sdeziel> lucky guy ;)
<sdeziel> can you show the output of that grep?
<Slashman> sdeziel: http://apaste.info/02P
<sdeziel> Slashman: I'm not that familiar with those sysctl knobs but here I have both vm.nr_hugepages and vm.nr_hugepages_mempolicy set to the number of hugepages I want
<sdeziel> Slashman: in fact, I'm only setting vm.nr_hugepages and it seems that the _mempolicy follows
<Slashman> sdeziel: thanks! it works without the "vm.nr_hugepages_mempolicy = 0"
<Slashman> I suppose that's a difference between the kernel of ubuntu and debian/centos, since those last are older
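One gotcha worth noting in this exchange: `sysctl -p` with no argument only reloads /etc/sysctl.conf itself, while files dropped into /etc/sysctl.d/ need `sudo sysctl --system`. A sketch of the working setup (the page count is just Slashman's example figure):

```
# /etc/sysctl.d/60-hugepages.conf  (or a line in /etc/sysctl.conf)
vm.nr_hugepages = 17000
```

Apply with `sudo sysctl --system` and verify with `grep Huge /proc/meminfo`; the kernel may allocate fewer pages than requested if memory is fragmented, so check `HugePages_Total` matches.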
<tmartins> I have enabled hugepages on my Ubuntu box
<tmartins> For DPDK
<tmartins> something like: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash default_hugepagesz=1GB hugepagesz=1G hugepages=4"
<tmartins> in my /etc/default/grub
<tmartins> then: update-grub
<DammitJim> do you guys have a suggestion on running a monitor like inotifywait ?
<DammitJim> but how do I ensure that it's running all the time?
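One way to keep such a watcher running all the time, assuming a systemd release (15.04+), is a small unit file; the unit name, watched path, and events below are hypothetical:

```
# /etc/systemd/system/watch-uploads.service
[Unit]
Description=Watch /srv/uploads with inotifywait

[Service]
ExecStart=/usr/bin/inotifywait -m -r -e close_write /srv/uploads
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable with `sudo systemctl enable --now watch-uploads`; on 14.04 and earlier an upstart job with `respawn` plays the same role.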
<mojtaba> Hi, I know that I can use -D option in ssh and using foxyproxy tunnel my connection through the server. Is there any way to do the same thing, but via command line? (I mean instead of foxyproxy in the web browser, direct my traffic using terminal through the server)
<tarpman> mojtaba: it kind of depends by program. some programs respect an http_proxy environment variable, others look for HTTP_PROXY, others have something different (or nothing at all)
<mojtaba> tarpman: I want to update my system, but I can not do that with my current Ip address.
<mojtaba> tarpman: basically I want to run apt-get update && apt-get upgrade using ssh tunnelling on my system.
<tarpman> mojtaba: you're wondering how to set the proxy for apt-get? set Acquire::http::Proxy in /etc/apt/apt.conf -> man 5 apt.conf
<RoyK> https://www.youtube.com/watch?time_continue=3&v=szdbKz5CyhA <-- if anyone wondered about email
<mojtaba> tarpman: thanks
<mojtaba> tarpman: Can I use ssh -D option in that apt.conf file?
<tarpman> mojtaba: you can point apt at localhost and the port your ssh is bound to
<tarpman> mojtaba: ... if I understood your question correctly
<mojtaba> tarpman: thank you very much
<mojtaba> tarpman: I get something like this: W: Failed to fetch http://ca.archive.ubuntu.com/ubuntu/dists/trusty-backports/universe/binary-i386/Packages  Cannot initiate the connection to 8080:80 (0.0.31.144). - connect (22: Invalid argument)
<mojtaba> tarpman: And when I am trying to use port 80, it says Privileged ports can only be forwarded by root.
<tarpman> mojtaba: correct, you have to be root to use ports lower than 1024. 8080 is fine
<tarpman> mojtaba: with ssh -D bound to port 8080, what did you put in /etc/apt/apt.conf ?
<mojtaba> tarpman:
<mojtaba> Acquire::http::Proxy "127.0.0.1:8080";
<mojtaba> Acquire::https::Proxy "127.0.0.1:8080";
<tarpman> mojtaba: hmm, try "http://127.0.0.1:8080/" ?  I'm not sure what it's expected to do without http://
<mojtaba> tarpman: I get bunch of 404 and 405 erorrs: W: Failed to fetch https://deb.nodesource.com/node_0.12/dists/trusty/main/source/Sources  Received HTTP code 405 from proxy after CONNECT
<tarpman> mojtaba: ah, darn... I forgot ssh is just a SOCKS proxy and not a full HTTP proxy. you probably have to go in a different direction, e.g. http://askubuntu.com/questions/35223/syntax-for-socks-proxy-in-apt-conf
<tarpman> mojtaba: sorry about that
<tarpman> mojtaba: http://unix.stackexchange.com/questions/78335/apt-get-via-ssh-tunnel-if-only-port-22-is-allowed suggests an alternate apt.conf setting, but I have not tried this myself...
<mojtaba> tarpman: thanks for your info.
<mojtaba> tarpman: I tried this: Acquire::socks::proxy "socks://localhost:3128/"; and then ssh -CND localhost:3128 user@remote.host
<mojtaba> But I get this as a result: http://paste.ubuntu.com/15419708/
<tarpman> mojtaba: Acquire::socks::proxy "socks://localhost:8080"; and ssh -D 8080 user@host works for me -- you may have some other problem
<mojtaba> tarpman: Does it matter which port I am using?
<tarpman> mojtaba: the google talkplugin archive in particular might have some maintenance underway right at the moment -- if your other repositories are working, you could comment that out and try it again later
<tarpman> mojtaba: as long as it's above 1024, any port should work
<mojtaba1> tarpman: This the output of the update result: http://paste.ubuntu.com/15419708/, I am not sure if other repositories are working or not.
<mojtaba1> tarpman: Do you know what could be the problem?
<tarpman> mojtaba1: hm actually. is the output with acquire::socks::proxy any different from the output with no proxy at all?
<mojtaba1> tarpman: yes
<tarpman> mojtaba1: in that case, I wonder if the machine you're sshing to is behind a proxy that is misbehaving. for example http://askubuntu.com/questions/474549/got-nodata-issue-nodata-does-the-network-require-authentication
<mojtaba1> tarpman: I renamed the apt.conf file; the output seems to be the same.
<tarpman> ah...
<genii> Is this on 16.04?
<mojtaba1> tarpman: sorry. Do you know what is wrong then? (Why it is not working as expected?)
<mojtaba1> genii: no 14.04 LTS
<genii> User in #ubuntu+1 is experiencing same issue, so I was wondering
<mojtaba1> genii: Is that 14.04?
<tarpman> mojtaba1: seems like acquire::socks::proxy doesn't do what I hoped it would do. you may have to look at tsocks or such, as in one of the URLs I pasted earlier
<mojtaba1> tarpman: thanks. I will check that too
<tarpman> mojtaba1: sorry for misleading you
<mojtaba1> tarpman: I appreciate your time and effort.
<genii> mojtaba1: No, they are on 16.04 and not 14.04 ...but I didn't know which distribution you were using until I checked your pastebin. The askubuntu URL is what caught my eye
<genii> mojtaba1: The #ubuntu+1 channel is for the version of Ubuntu which is currently in development
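For the apt-over-ssh tunnelling discussed above: trusty's apt has no SOCKS support (apt only gained `socks5h://` proxy support later, around apt 1.5 / Ubuntu 17.10), which is why `Acquire::socks::proxy` changed nothing. A workaround that needs no proxy software at all is to forward a local port straight to one mirror; everything below is a sketch with example host and port:

```
# terminal 1: forward local port 8080 to the archive through the ssh host
ssh -N -L 8080:archive.ubuntu.com:80 user@remote.host

# terminal 2: point sources at the tunnel for this run (a backup is kept)
sudo sed -i.bak 's|http://[a-z0-9.]*archive.ubuntu.com|http://localhost:8080|' /etc/apt/sources.list
sudo apt-get update && sudo apt-get upgrade
# restore afterwards:
#   sudo mv /etc/apt/sources.list.bak /etc/apt/sources.list
```

This only covers repositories on the rewritten mirror; third-party repos (like the nodesource one in the error output) would need their own forwards.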
#ubuntu-server 2016-03-19
<snowgoggles> oops
<mixpix3ls> Hi everyone, question about uvt-kvm and ssh key for the ubuntu user.
<mixpix3ls> Wait, nevermind, I figured it out :)
<linuxmint> Hello, is there a way to connect my linux laptop to the ubuntu-server machine, which needs a reinstall of ubuntu-server. I need a keyboard and monitor on the machine for the install.
<alreece45> ntpdate is in ubuntu-minimal?
<linuxmint> Hello, is it a good idea to install ubuntu-server on a virtual server?
<linuxmint> If I understand correctly, the virtual server is only as big as the hard drive...say 120GB?
<linuxmint> This seems to limit the number of servers, like ubuntu-server, that can be installed on the drive.
<linuxmint> Also, if the drive is faulty, the virtual server and the guest ubuntu-server would need to be reinstalled?
<lordievader> Good morning.
<phillw> hi folks, for anyone following 16.04 can you have a look at bug 1559507 and confirm, thanks.
<ubottu> bug 1559507 in debian-installer (Ubuntu) "Keyboard selection is missed" [Undecided,New] https://launchpad.net/bugs/1559507
<ChibaPet> Hey all. apt-add-respository for Oracle Java stuff worked on my brother-in-law's computer, but then Aptitude was unwilling to update, citing key problems. The repo add worked for apt-get. I've not encountered this particular skew before. Ideas?
<ChibaPet> I know apt-get and aptitude keep different conceptions of caches, but I was (incorrectly) guessing they would share auth data.
<dax> ChibaPet: is your brother-in-law running 16.04?
<dax> ChibaPet: if so, apt on 16.04 got more strict about signature checking and you should prod the repository maintainer to use sha256 (also, note that Oracle Java is not an officially supported package, and #ubuntu+1 knows about this issue more)
<ChibaPet> dax: 16.04, yes - thank you (and I'd have been happy with openjdk, but he encountered an inexplicable error running Minecraft)
<ChibaPet> (which I don't see, but, computers...)
#ubuntu-server 2016-03-20
<lordievader> Good morning.
<devster31> how can I login as root from a ssh session? I have ssh root disabled, but I can sudo su -, however the old login session stays open if I do this, and I can't change usernames with the old user session still open
<bekks> devster31: Login as user, and dont use sudo su - (which is nonsense), but just sudo -i
<bekks> And for changing usernames, you need to create another user capable of using sudo -i, login as the new user, and change the username of the old user.
<bekks> No need for unlocking the root account, no need for enabling ssh as root.
<devster31> so the only way to rename a user is to create a new one with root privileges and log in as this one?
<bekks> Correct.
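bekks' procedure, spelled out as a sketch (usernames are placeholders; every process of the old user must be stopped before `usermod -l` will run, which is the whole reason for the second account):

```
sudo adduser tempadmin
sudo usermod -aG sudo tempadmin
# log the old user out completely, log in as tempadmin, then:
sudo usermod -l newname oldname
sudo usermod -d /home/newname -m newname   # move the home directory too
sudo groupmod -n newname oldname           # rename the primary group
sudo deluser --remove-home tempadmin       # optional cleanup
```

Note that crontabs, mail spools, and anything else keyed on the old name may need manual fixing afterwards.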
<patdk-lap> is there some way I can audit all access in apparmor?
<patdk-lap> I cannot seem to locate the correct path to block access for a file
<Sling> patdk-lap: you can set it in learning mode
<Sling> see https://help.ubuntu.com/lts/serverguide/apparmor.html
<Sling> (see aa-genprof)
<patdk-lap> ya, but that only works if a something matches
<patdk-lap> the problem is, I can't get anything to match
<patdk-lap> at least for the folder I am concerned about
<patdk-lap> for other folders, works fine
<patdk-lap> nothing was captured by aa-genprof :(
<patdk-lap> is there a way I can show what profile I'm running in, to confirm it's actually matching?
<patdk-lap> ah, na, it's not matching the program
<patdk-lap> ok, so the path to the program itself is also messed up
<devster31> bekks: why is sudo su - nonsense? doesn't it login as root?
<snowgoggles> devster31: ?
<devster31> he said: Login as user, and dont use sudo su - (which is nonsense), but just sudo -i
<ogra_> sudo su - messes up the environment, sudo -i gives you a proper one
<ogra_> (and "sudo su" generally starts extra subshells which is pointless)
<devster31> but sudo resets the entire env, only some variables are passed right?
<snowgoggles> devster31: what's the concern? they will elevate privileges differently
<devster31> no concern, I'm curious, if I have for example local::lib perl variables with sudo -i they won't be passed
<lunaphyte> i have a server with an nfs filesystem in fstab.  the filesystem fails to mount at boot, but mounts just fine with mount -a after boot.  so far, i've not found much in the way of logging.  how can i troubleshoot this further?  the os is 14.04.4
<RoyK> lunaphyte: try adding _netdev to the options
<RoyK> with the underscore
<lunaphyte> i did try that, but it doesn't seem to have an effect
<lunaphyte> here's the current fstab entry:
<lunaphyte> 10.128.35.251:/home	/home/example.com	nfs	auto,_netdev,rw,hard,intr	0	0
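Two hedged fallbacks for this boot-time race, depending on init system (the IP and path are taken from the fstab line above):

```
# systemd releases (15.04+): mount on first access instead of at boot
10.128.35.251:/home  /home/example.com  nfs  _netdev,x-systemd.automount,rw,hard,intr  0  0

# 14.04 (upstart): crude but effective late retry from /etc/rc.local
(sleep 15; mount -a -t nfs) &
```

On 14.04 it's also worth checking /var/log/upstart/mountall.log, since mountall does its own NFS handling and its errors rarely reach syslog.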
<dasjoe> So, I'm playing with lxd on 16.04 for a bit. "lxc-ls" shows no defined containers, "lxc list" shows the ones I'm running but its output is not easily parseable. Am I missing something? I'd like to feed the output into lxc-destroy (or should I use "lxc delete"?)
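On the lxd question: the `lxc-*` tools and the lxd `lxc` client manage separate container stores, so `lxc-ls` showing nothing is expected, and lxd containers are removed with `lxc delete`, not lxc-destroy. For scripting, `lxc list` can emit JSON (newer releases also have `--format=csv`); a sketch assuming `jq` is installed:

```
# stop and delete every lxd container
lxc list --format=json | jq -r '.[].name' | xargs -r -n1 lxc delete --force
```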
<DvLnme> hello everyone
<punkoivan> hi guys.
<punkoivan> have some problem with ssh. when I don't use the ssh session for a few minutes I get "broken pipe"
<punkoivan> in sshd_config I changed the timeout from 120 to 0, but that didn't fix it.
<punkoivan> It's no problem to reconnect, but really irritating
<patdk-lap> enable keepalives :)
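The keepalives patdk-lap means are ssh's application-level ones (not TCP keepalive); on the client side they look like this:

```
# ~/.ssh/config (client side)
Host *
    ServerAliveInterval 60     # probe the server after 60 s of silence
    ServerAliveCountMax 3      # give up after 3 unanswered probes

# or on the server, in /etc/ssh/sshd_config:
#   ClientAliveInterval 60
#   ClientAliveCountMax 3
```

These defeat NAT/firewall idle timeouts, which are the usual cause of "broken pipe" on quiet sessions.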
<tdelam> hey, how do I upgrade apache from Apache/2.4.7 (Ubuntu) to 2.4.16 via apt? We're on 12.04 LTS.
<bekks> sudo apt update; sudo apt full-upgrade;
<tdelam> trying to find some info online but it's been tricky finding documentation on upgrading to a specific version
<tdelam> a full upgrade?
<bekks> tdelam: Sure.
<rbasak> tdelam: we only maintain one version per Ubuntu release.
<tdelam> Oh
<tdelam> 2.4.16 is in the most recent?
<rbasak> 2.4.7 in Trusty, 2.4.12 in Wily.
<rbasak> Wily is the most recent Ubuntu release.
<tdelam> ah
<rbasak> Xenial is on 2.4.18.
<rbasak> (but not released yet)
<tdelam> damn
<tdelam> I might have to do this from source :/
<rbasak> If there's a specific bugfix you need, we can backport a fix depending on what it is.
<tdelam> PCI scan is calling specifically for 2.4.16
<rbasak> If it's a security thing, we quite likely already have backported the fix to 2.4.7.
<rbasak> Get a better PCI scan.
<tdelam> heh
<tdelam> if it were only that easy
<tdelam> https://www.dropbox.com/s/v7sj60f87yadcqj/Screenshot%202016-03-20%2018.54.07.png?dl=0
<rbasak> Doing it yourself from source is clearly worse for security.
<rbasak> Unless you want to also pay a security team to keep it up to date.
<tdelam> yep
<tdelam> that's not efficient
<rbasak> You can look up CVEs at http://people.canonical.com/~ubuntu-security/cve/
<rbasak> For example your first CVE is fixed in the version in Trusty.
<rbasak> No need to update to anything else.
<tdelam> oh wow
<tdelam> this will be good, I can show them it's resolved in 2.4.7
<tdelam> looks like their scan is checking version, not pen testing any of this.
<rbasak> If your scan says you're vulnerable and you have the latest package installed, then your scan is wrong.
<rbasak> Right
<tdelam> pretty dumb pci scanner
<tdelam> thx a ton rbasak, ill check that url
<rbasak> No problem.
<tdelam> rbasak: sorry, https://people.canonical.com/~ubuntu-security/cve/2014/CVE-2014-0231.html the green "released" text depicts that it was patched in those releases?
<rbasak> Right. And the version in brackets tells you the package version it was fixed in. Make sure you have that version (or higher) installed.
<tdelam> gotcha, thanks!
<tdelam> gorgeous! all the issues listed in their scan report are resolved in my version.
<tdelam> thankscience!
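The check rbasak describes — installed version at or above the version in brackets — is exactly what `dpkg --compare-versions installed ge fixed` answers on a Debian-family system; `sort -V` gives a close approximation (it ignores Debian epoch and tilde rules) and shows the idea. The version strings below are made-up examples, not real CVE data:

```shell
#!/bin/sh
# Does the installed package already contain the fix a CVE page lists?
installed="2.4.7-1ubuntu4.20"   # example: from 'apt-cache policy apache2'
fixed="2.4.7-1ubuntu4.4"        # example: version shown on the CVE page
# version-sort the two; if "fixed" sorts first (or they are equal),
# the installed version is at or above the fix
lowest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$lowest" = "$fixed" ]; then echo patched; else echo vulnerable; fi
```

This is also why version-only PCI scanners mislead: distro packages backport fixes without bumping the upstream version, so only the full package version (with the ubuntuN.N suffix) tells the truth.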
<rbasak> nacc_: FYI, https://www.stewright.me/2016/03/upgrade-php-7-0-ubuntu/
<rbasak> Oh, he's using Ondrej's PPA
<rbasak> Never mind!
<nacc> rbasak: yep, that's on 14.04
<rbasak> Sorry. I assumed it was talking about 16.04 without reading further.
<rbasak> How's it going BTW? Near the finish line yet?
<rbasak> I haven't been following because you seemed to have a very good handle on it.
<nacc> rbasak: php7 progress is good, it's the removal of php5 that's going slowly
<nacc> we finally got symfony updated last week
#ubuntu-server 2017-03-13
<drab> any clue what could be the cause of a UPS disconnecting every 5 mins?
<drab> I'm running NUT on xenial with latest kernel and the UPS disconnects non stop
<drab> all the problems with disconnects I googled reported stuff about power/control, but that's already disabled for the UPS
<patdk-lap> I have no issues
<patdk-lap> you aren't using a usb in a vm are you?
<drab> patdk-lap: nope, plugged straight into the box
<drab> and I have 3 diff machines, from plain dsktop machines to a supermicro server and 2 of these UPSs (Eaton 1500)
<drab> they all show the same behavior
<drab> server and desktops are ubuntu xenial with latest kernel
<drab> the UPS works, I can talk to it and all, but I've had troubles when by chance I hit it during the disconnect window
<drab> and that's no good for a UPS...
<drab> or anything really, but especially some something that's supposed to add reliability to the system
<drab> I got these guys because they are on the compatible list of NUT, and work well with it except for these disconnect shenanigans that I can't track down
<patdk-lap> dunno, I only have tripplite and apc
<patdk-lap> never had an issue like that, except when attempting to brige a usb port into a vm
<ishaved4this> hey guys, I'm pretty new to running my own home server and I was wondering if anyone could help me out in terms of getting transmission-daemons permissions right so i can allow it to download into a mounted ext4 hdd
<ishaved4this> is anyone up?
<OerHeks> ishaved4this, edit ~/.config/transmission/settings.json, and change  "download-dir" and "incomplete-dir"
<OerHeks> to your hdd, something like /media/<your user>/abcabcabcabc etc/transmission/
<ishaved4this> well transmission has been working on my internal hdd since i set it up
<ishaved4this> just a few hours ago I set up this external, made it auto mountable, and used chown to give myself ownership of my folder /mnt/Seagate 500gb
<ishaved4this> (or so I thought) and it refuses to download the torrent into it
<OerHeks> spaces in foldernames.. they should shoot the guy who invented that, and approved it.
<ishaved4this> hahaha is this my problem?
<OerHeks> i think you want >  /mnt/Seagate\ 500gb
<ishaved4this> whoops! its actually /mnt/Seagate500gb
<OerHeks> oh oke, then no worry
<ishaved4this> but I am trying to download to /mnt/Seagate500gb/TV Shows
<ishaved4this> the funny thing is, that this works fine when I send a torrent to /home/user/tv Shows
<alkisg> Which user is transmission running at, you, or a system user called transmission-daemon?
<alkisg> *as
<ishaved4this> its running as its own user
<alkisg> And you think it has permission issues?
<alkisg> Then just su to that user and see if you have access
<alkisg> (using ls, cd ...)
<ishaved4this> well when I type in groups, debian-transmission is under me
<alkisg> It's not you that should have access to that group, but that user to your folders
<alkisg> Try sudo -u debian-transmission bash -l
<OerHeks> maybe change the group:  chgrp -R debian-transmission /mnt/Seagate500gb/TV Shows
<alkisg> And from that bash, then cd / ls / check anything you want to see if you have access there
<ishaved4this> I did cd /mnt, and then /Seagate500GB and its saying no such file
<ishaved4this> Oer, what  is that command doing? and shouldn't I do it to the entire seagate folder? because that directory has both /movies and /Tv Shows
<ishaved4this> TV*
<alkisg> Try this command, as the transmission user: ls /mnt/Seagate500GB
<ishaved4this> ok
<ishaved4this> in purple text it is showing 2 folders
<ishaved4this> lost+found and Seagate 500GB
<alkisg> Nice, so you have read access. Now try write access:
<alkisg> Eh wait, you have 2 Seagate500GB folders?
<alkisg> What's the output of this? ls /mnt/Seagate500GB | nc termbin.com 9999
<ishaved4this> no, only one. I have folders within that folder
<ishaved4this> "/mnt/Seagate500GB: is a directory use netcat
<ishaved4this> "
<OerHeks>  ls /mnt/Seagate500GB/ | nc termbin.com 9999
<alkisg> Please use copy/paste, it helps to avoid mistyping things
<ishaved4this> lost+found
<ishaved4this> Seagate500GB
<ishaved4this> that's what it got
<alkisg> Did you use termbin?
<alkisg> Because termbin shows a URL
<ishaved4this> http://termbin.com/569m
<alkisg> OK, sorry, I needed ls -l: ls -l /mnt/Seagate500GB/ | nc termbin.com 9999
<alkisg> So that the owner and permissions are shown too
<ishaved4this> okay, and do this from my user, or transmissions?
<alkisg> tranmission
<ishaved4this> debian-transmission@server:~$ ls -l: ls -l /mnt/Seagate500GB/ | nc termbin.com 9999
<ishaved4this> ls: invalid option -- ':'
<ishaved4this> Try 'ls --help' for more information.
<ishaved4this> Use netcat.
<ishaved4this> debian-transmission@server:~$
<alkisg> Only run the command after the :
<alkisg> ls -l /mnt/Seagate500GB/ | nc termbin.com 9999
<ishaved4this> http://termbin.com/2bku
<alkisg> OK, so this: drwxrwxr-x 4 joey root 4096 Mar 12 23:19 Seagate500GB
<alkisg> This means that the joey user and the root group have write access
<alkisg> So, transmission won't be able to write in _that_ folder. Now let's see below that.
<alkisg> ls -l /mnt/Seagate500GB/Seagate500GB | nc termbin.com 9999
<ishaved4this> that doesn't exsit
<ishaved4this> exist**
<alkisg> If it doesn't exist, you were not copy/pasting, but running your own commands
<ishaved4this> it goes /mnt/Seagate500Gb and in that directory is /Movies and /TV Shows
<alkisg> The ls command that you run says otherwise
<alkisg> Unless you're running whatever you want and not the commands that we ask
<ishaved4this> http://termbin.com/wm4f
<alkisg> Can you run this command and paste ALL the output?
<alkisg> ls -l /mnt/Seagate500GB/ | nc termbin.com 9999
<alkisg> Don't run ls /mnt
<alkisg> Run the command as I put it above
<alkisg> And paste all the output, including the entries for . and ..
<ishaved4this> okay
<alkisg> Because if we ask for X and you tell us the results for Y, we can't help you, your replies don't make sense then
<ishaved4this> yeah, I have been copy and pasting. Heres the results for what you just told me to put in
<ishaved4this> http://termbin.com/855q
<alkisg> Let's try something else. What is the output of this? find /mnt -maxdepth 2 -type d | nc termbin.com 9999
<ishaved4this> http://termbin.com/9qha
<alkisg> OK, see that line there?/mnt/Seagate500GB/Seagate500GB
<alkisg> Why do you say that this doesn't exist?
<alkisg> (08:27:25 πμ) alkisg: ls -l /mnt/Seagate500GB/Seagate500GB | nc termbin.com 9999
<alkisg> (08:27:50 πμ) ishaved4this: that doesn't exsit
<alkisg> Now let's get the complete picture, with this command: find /mnt -maxdepth 3 -type d -ls | nc termbin.com 9999
<alkisg> What's the output of that ^ command?
<ishaved4this> weird! because when I first had it set up, it didnt have two directories with the same name
<ishaved4this> okay ill paste that in
<ishaved4this> permission denied
<ishaved4this> hold on, nvm
<ishaved4this> http://termbin.com/ciux
<alkisg> Let's talk about this one: 74416131      4 drwxr-xr-x   2 root     root         4096 Mar 12 23:15 /mnt/Seagate500GB/Seagate500GB/TV\ Shows
<alkisg> As you can see, only root can write there
<ishaved4this> ah. that and movies
<alkisg> Yes, it's the same  for movies too
<ishaved4this> so I have to take ownership of those folders correct?
<alkisg> Not you, but transmission
<ishaved4this> okay, now I also use plex. if only root can read/write here shouldnt i also give plex permission to read/write as well?
<alkisg> Does plex run as root?
<alkisg> ps aux|grep plex
<ishaved4this> joey      1238  0.0  0.0   4508   804 ?        Ss   01:08   0:00 /bin/sh -c /usr/lib/plexmediaserver/Plex\ Media\ Server
<ishaved4this> joey      1241  0.1  2.1 439636 125524 ?       Sl   01:08   0:07 /usr/lib/plexmediaserver/Plex Media Server
<ishaved4this> joey      1307  0.2  0.9 1726432 56540 ?       SNl  01:08   0:11 Plex Plug-in [com.plexapp.system] /usr/lib/plexmediaserver/Resources/Plug-ins-03e4cfa35/Framework.bundle/Contents/Resources/Versions/2/Python/bootstrap.py --server-version 1.4.3.3433-03e4cfa35 /usr/lib/plexmediaserver/Resources/Plug-ins-03e4cfa35/System.bundle
<ishaved4this> joey      1367  0.0  1.0 304964 60468 ?        Sl   01:08   0:03 /usr/lib/plexmediaserver/Plex DLNA Server
<ishaved4this> debian-+  5564  0.0  0.0  14224  1028 pts/0    S+   02:40   0:00 grep --color=auto plex
<ishaved4this> it runs as my normal user it seems
<alkisg> Then it won't have access to that folder
<alkisg> It may have access to a subfolder, if the subfolder has different owner or permissions
<alkisg> Try it yourself: touch /mnt/Seagate500GB/Seagate500GB/TV\ Shows/test
<alkisg> This should give you 'permission denied'
<ishaved4this> yup
<ishaved4this> so should I give joey ownership to the subfolders, along with having dabian-transmission in my group, or should I independently give them access to the folders?
<alkisg> It's up to you
<alkisg> You can even make a "movies" group, and put joey and transmission to that group
<ishaved4this> okay so how do I add daemon-transmission to my group again? I forgot the command
<ishaved4this> I used "Sudo chown joey [directory] and now the termbin is looking like this
<ishaved4this> http://termbin.com/vy6s
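Pulling this thread together as one hedged sketch (paths, user, and group taken from the transcript; note `chown` on the top directory alone doesn't recurse, and joey is already in the debian-transmission group, so group permissions cover plex too):

```
# give the service group write access to the whole media tree
sudo chgrp -R debian-transmission /mnt/Seagate500GB/Seagate500GB
sudo chmod -R g+rwX /mnt/Seagate500GB/Seagate500GB         # X: only dirs get +x
# setgid on directories so newly downloaded files inherit the group
sudo find /mnt/Seagate500GB/Seagate500GB -type d -exec chmod g+s {} +
# (re-)add a user to the group if needed, then log out/in:
sudo usermod -aG debian-transmission joey
# verify as the daemon's own user:
sudo -u debian-transmission touch "/mnt/Seagate500GB/Seagate500GB/TV Shows/.w" && echo writable
```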
<Hariharan> ping-all : what is the option to do text based installation of ubuntu with preseed file?
<jamespage> cpaelzer: does bug 1672367 ring any bells? yakkety specific and not a regression afaict
<ubottu> bug 1672367 in qemu (Ubuntu) "libvirtError: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk1'" [Undecided,New] https://launchpad.net/bugs/1672367
<cpaelzer> jamespage: hi
<cpaelzer> jamespage: yeah it actually does - let me check if it applies here
<cpaelzer> jamespage: ah no - yakkety
<cpaelzer> jamespage: ok all my bell ringing does not apply then
<cpaelzer> jamespage: we dropped something from depends to recommends in zesty related to this but nothing in yakkety
<cpaelzer> jamespage: although some of your debugging rings a bell and I can comment at least on that
<cpaelzer> jamespage: that apparmor is a red herring
<cpaelzer> I'll update the bug
<jamespage> cpaelzer: ta
<cpaelzer> jamespage: still updating
<cpaelzer> with more extras for you
<cpaelzer> jamespage: check the bug now
<cpaelzer> jamespage: I'm even closer now
<cpaelzer> I'm down to the same error msg
<cpaelzer> but since I lack your env you'll have to do the trial and error
 * cpaelzer is updating again
<cpaelzer> jamespage: I'm atm creating the command chain you need
<cpaelzer> jamespage: I hope from there you can find your issue
<jamespage> cpaelzer: thankyou!
<cpaelzer> jamespage: updated the bug with more for you
<cpaelzer> jamespage: I hope it helps
<cpaelzer> I stop replying to myself there now, jamespage let me know if that gets you any further analyzing the case
<jamespage> cpaelzer: ack
<jamespage> cpaelzer: is there a way I can get libvirt to tell me what commands it tried to issue?
<cpaelzer> jamespage: If there is I don't know one
<cpaelzer> jamespage: I'd hoped you'd know from the stack above what it submitted
<cpaelzer> :-/
<jamespage> cpaelzer: ok got to the same error message - command in bug report
<coreycb> jamespage, beisner, python-os-brick 1.2.0-2ubuntu0.2~cloud0 is ready to promote to mitaka-updates
<Genk1> Hello all
<Genk1> where can I find trusted ssl certificates for testing ?
<Genk1> It seems that my Dovecot IMAP server doesn't like self-signed certificates
<rbasak> Genk1: you could get real certificates from letsencrypt at no charge. That's probably the easiest nowadays, even for testing.
<rbasak> (use test.genk1.com or whatever)
<Genk1> rbasak OK
<Genk1> I was trying that last time
<Genk1> with .localdomain  but it didn't accept that
<Genk1> I will try with something else
<rbasak> You will need a real domain.
<rbasak> No "real" CA will give you a certificate without a real domain.
<rbasak> If self-signed isn't acceptable, you have no other option.
<Genk1> you're right
<rbasak> I suppose you could create your own CA, self-sign *that* and add it to your system CA list actually.
<Genk1> thanks for your advice
<rbasak> But dovecot should work with self-signed anyway.
<rbasak> HTH :)
<cpaelzer> Thanks for the command jamespage
<cpaelzer> jamespage: and here things make sense, the error connecting made the "device" not appear
<cpaelzer> jamespage: so on the latter device_add it fails
<cpaelzer> jamespage: I guess it is not important if that is hotplug or on boot
<cpaelzer> jamespage: did you see you issues on boot, or later on?
<cpaelzer> posting to the bug to not lose it
<jamespage> cpaelzer: both
<tash> If EOL is listed for a distro as April 2017, does that mean April 1?
<nacc> tash: not necessarily
<tash> is there a way to find an exact date?
<tash> all I see on the doc site is April 2017, but I'd like to know if I have time ... or if end of March is what I need to focus on
<nacc> tash: for 12.04?
<tash> yeah =\
<nacc> i don't believe an exact date is available, but i'd ask in #ubuntu-release probably. However, you want to migrate *before* it goes EOL anyways
<tash> yeah, makes sense
<sarnold> tash: I'm not sure there is an exact date planned until the date it happens
<drab> containers + bindfs + snaps outta be the coolest thing for fast dev I've seen in a long long time
<drab> \o/ hurray for tech that works
#ubuntu-server 2017-03-14
<hhee> morning or night. need to install php 7.1, what's the proper way to do it?
<hhee> ubuntu 16.10
<drab> hhee: https://launchpad.net/~ondrej/+archive/ubuntu/php
<drab> seems like your best bet
<hhee> drab, got it. thx for support
<hhee> drab, dont see php7.1-pdo or smth like this...
<hhee> in that repo
<hhee> what did i do wrong?
<drab> hhee: yeah that repo doesn't have it for some reason
<drab> dunno why he chose not to package it
<drab> hhee: oh, apparently it's packaged in the backend-specific packages
<drab> hhee: so apt-get install php7.0-mysql or whatever will get you the PDO for mysql
<drab> hhee: see http://stackoverflow.com/questions/32728860/php-7-rc3-how-to-install-missing-mysql-pdo
<hhee> drab, got it, thx a lot!
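The route drab points at (the ondrej PPA plus a backend-specific PDO package) would, on 16.10, look something like the sketch below; the exact package names are assumptions to verify against the PPA's package list:

```shell
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.1 php7.1-mysql   # php7.1-mysql carries the pdo_mysql driver
php -m | grep -i pdo                       # confirm the PDO modules are loaded
```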
<drab> hhee: btw, you'd get a lot further faster if you just googled those questions
<drab> because that link I passed you was the third result for "php7.0 install pdo ubuntu 16"
<drab> hhee: and ppl on irc get easily annoyed if you ask dumb questions that can be answered by a simple google search
<hhee> drab, got it too :) sorry
<hhee> promise to google before ask questions
<hhee> i promise*
<drab> no need to be sorry, just need to put in the work
<drab> sorrow helps nobody, elbow grease does
<mybalzitch> anyone here played with MPTCP? or multiple internet connections under linux?
<blackflow> 'ello. Any users of munin monitoring here? one of the plugins (if_*) stopped working on all our 16.04 nodes. it's registering "nan" values on the master, but running munin-run on the nodes shows everything okay. No error or warning in any of the logs either.
<blackflow> it stopped working with latest update to munin packages few days ago.
<PhoenixMage> Hi All, has any one had any experiencing configuring nfs4 using Active directory as a kdc?
<PhoenixMage> I think I have everything configured correctly but when I try to mount the filesystem mount essentially hangs until I ctrl-c
<huggybear404>  installing ubuntu-16.04.1-server-amd64 on ASUS Strix H270F Gaming mainboard , getting gnu grub menu then click install and screen turns off, is there a way for me to install this ?
<huggybear404> has anyone had luck installing any ubuntu or linux on a modern asus board ?
<lordievader> !modeset | huggybear404
<lordievader> !nomodeset | huggybear404
<ubottu> huggybear404: A common kernel (boot)parameter is nomodeset, which is needed for some graphic cards that otherwise boot into a black screen or show corrupted splash screen. See http://ubuntuforums.org/showthread.php?t=1613132 on how to use this parameter
<lordievader> That's the one.
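One way to apply the factoid above, as a sketch: press 'e' at the GRUB menu and add nomodeset to the line starting with "linux" for a one-off boot. On an already-installed system it can be made permanent roughly like this (the sed pattern assumes the stock quiet-only default):

```shell
# append nomodeset to the default kernel command line
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"/' /etc/default/grub
sudo update-grub
```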
<huggybear404> I tried replacing quiet with nomodeset, no change
<huggybear404> does that mean there is no way to install ?
<lordievader> How new is that motherboard?
<huggybear404> got it 2 days ago so pretty new
<huggybear404> should the - after quiet remain ?
<huggybear404> what does set gfxpayload= keep mean ? is that bad ?
<lordievader> Production date I mean ;) It could be that it is too new, that the current 16.04 kernel does not yet support it.
<patdk-lap> man, the marketing on the board
<lordievader> You could try 16.10 (or even the development version) to see if that does support your system.
<huggybear404> is there something very different with its gpu ?
<patdk-lap> gamer's guardian (the cmos 2032 battery)
<huggybear404> is there a list of supported boards ?
<patdk-lap> what gpu did you put in it?
<patdk-lap> it doesn't come with one
<huggybear404> i didnt its onboard
<patdk-lap> no, it's not
<huggybear404> are the hdmi port just for decoration ?
<patdk-lap> ok, if you didn't put in a dedicated gpu in a pcie slot
<patdk-lap> what CPU did you install, so we can manually look it up on the website as to what gpu is contained in the cpu
<patdk-lap> yes
<huggybear404> g4400
<huggybear404> so I need a cpu with better graphic card ?
<huggybear404> this new tech will be the end of me
<patdk-lap> generally using server motherboards for servers work better
<huggybear404> I miss the old days where we plugged in a vga card and played
<patdk-lap> why can't you still do that?
<huggybear404> where can I get server boards ?
<huggybear404> I found none in my local shop
<huggybear404> will it work better if I plug in a vga card ?
<patdk-lap> dunno, that gpu should be working fine
<patdk-lap> in 15.10+
<huggybear404> what is happening ? is it ubuntu server being kind and setting up a 4096 display that it knows the screen can't handle to make me use windows instead ? There is no way to set up a resolution or driver that will give me a picture ? seems to me this reduces the usefulness a little bit. I wish the user installing could have some choice in video settings
<huggybear404> in windows there is a video test and if I dont answer that it works it reverts to a simpler mode, seems that would add a ton of value in linux as well and allow install on any hardware
<lordievader> huggybear404: You mentioned hdmi.. Is that the only connection? If not did you check the others?
<huggybear404> there is a dvi and none of my adapters fit it since they have pins that this dvi doesn't have
<huggybear404> screen has vga and hdmi input
<huggybear404> probably max 1980 res
<huggybear404> I tried with a hdmi to vga adapter but it's no use if ubuntu only uses 4096 resolution
<huggybear404> seems I cant install if it always automatic select highest setting and blow up screen
<huggybear404> I see no way to set lower res
<patdk-lap> I doubt a hdmi to vga adapter would work at all
<patdk-lap> if it did the dvi port would have the vga pins
<cpaelzer> jamespage: you'll need a new libvirt
<cpaelzer> jamespage: I think I have a fix, how'd you like it to be best testable for you?
<cpaelzer> jamespage: is a ppa ok?
<cpaelzer> jamespage: or does your setup imply you need something else?
<huggybear404> whats a ppa ?
<cpaelzer> !ppa
<ubottu> A Personal Package Archive (PPA) can provide alternate software not normally available in the offical Ubuntu repositories - Looking for a PPA? See https://launchpad.net/ubuntu/+ppas - WARNING: PPAs are unsupported third-party packages, and you use them at your own risk. See also !addppa and !ppa-purge
<cpaelzer> huggybear404: ^^
<huggybear404> how can I use that ?
<cpaelzer> huggybear404: as a user using one or as a developer creating a ppa?
<huggybear404> I think what I need is a way to setup a resolution that my screen will read, I suspect ubuntu set up maximum the gfx card can do never trying to detect the screen. can a ppa help this ? how can I load it ?
<jamespage> cpaelzer: I can splice in a PPA
<huggybear404> in the iso ?
<cpaelzer> jamespage: ok creating one for you to check then
<lordievader> huggybear404: That ppa talk wasn't directed at you ;)
<cpaelzer> huggybear404: I was talking to jamespage on a different bug, I don't think it applies to your case
<lordievader> huggybear404: There are some kernel parameters for setting the resolution, but I forgot what they where.
<lordievader> huggybear404: I suppose google can help you with that.
<cpaelzer> lordievader: did you mean https://www.kernel.org/doc/Documentation/svga.txt?
<cpaelzer> vga=ask first time, then scan and picking one to be set next time
<cpaelzer> long long time ago :-)
<lordievader> Yes, that seems to be the one.
<lordievader> huggybear404: ^
<huggybear404> ah I missed it was to james, I tried vga=ask and also ASK_VGA from the web, im not getting any menu tho, just black screen again
<huggybear404> maybe im doing it wrong ?
<huggybear404> if it "asks" me in a 4096 mode its not much help
<huggybear404> maybe a way to lock it to 1024 ?
<patdk-lap> but nomodeset would have fixed that
<patdk-lap> http://askubuntu.com/questions/38780/how-do-i-set-nomodeset-after-ive-already-installed-ubuntu
<huggybear404> tried vga=773 nomodeset still black
<huggybear404> tried nomodeset vga=773 before quiet still black
<huggybear404> might be on the right track tho
<huggybear404> i hope
<huggybear404> no way to disable onboard gfx so im not sure if it will be different with an old vga card in pci
<coreycb> beisner, jamespage: oslo.messaging 1.8.3-0ubuntu0.15.04.2~cloud3 is ready to promote to kilo-updates
<huggybear404> im not tempted to invest a bundle in a 4k screen just for testing
<coreycb> beisner, jamespage: also python-rfc3986 0.2.2-0ubuntu0.16.04.1~cloud0 is ready to promote to mitaka-updates
<jamespage> coreycb: I did a sweep on mitaka stuff am today
<jamespage> that included
<coreycb> jamespage, cool thanks
<jamespage> coreycb: pike opening is mostly done; postponing enablement of auto-backports until zesty +1 opens
<coreycb> jamespage, ok great thanks for doing that.  so we will just upload new pike packages straight to the cloud archive until zesty+1 opens?
<jamespage> coreycb: oslo.messaging in kilo done
<coreycb> jamespage, thanks
<jamespage> coreycb: ah
<jamespage> Launchpad encountered an error during the following operation: copying package oslo.messaging.  oslo.messaging 1.8.3-0ubuntu0.15.04.2~cloud3 in trusty (source has no binaries to be copied)
<jamespage> coreycb: ok so that failed to build from source
 * jamespage looks
<hoobaman> hi
<hoobaman> i am looking for a pulp alternative (pulpproject.org) for ubuntu
<hoobaman> pulp allows you to "tag" and "pin" updates into a global update, validate that global update and roll them out on every server
<hoobaman> so all servers have exactly the same packages
<hoobaman> is there an apt variant available?
<cpaelzer> jamespage: the ppa is built and I updated the bug with the link
<jamespage> cpaelzer: ta
<jamespage> cpaelzer:
<hoobaman> ok found it aptly -> aptly.info
<jamespage> https://www.irccloud.com/pastebin/ka16zb97/
<cpaelzer> jamespage: is that working, seeing it "attached"?
<jamespage> cpaelzer: yes - going to run a tempest smoke test against it as well
<jamespage> that also covers the boot from volume test case
<cpaelzer> jamespage: yay
<cpaelzer> jamespage: I already queued it for my non openstack tests
<cpaelzer> jamespage: give me a ping when the smoke test worked, if possible into the bug to help the later SRU
<jamespage> cpaelzer: will do
<beisner> hi coreycb, all clear on those uca moves, or still need attention there?
<coreycb> beisner, all good for now, thanks
<jamespage> cpaelzer: +1 commented on bug with test results
<jamespage> coreycb: washed proposed->updates for ocata - smoke tested OK with the new charm changes for cinder-ceph/nova-compute
<jamespage> \o/
<coreycb> jamespage, awesome
<coreycb> jamespage, that oslo.messaging test runs successfully for me locally
<coreycb> jamespage, it is weird though, looking at the test it seems that  the final assert could be -3 if both consume calls timeout
<huggybear404> I tried 2 graphic cards now with same results, can never get past the first menu
<huggybear404> Appreciate all tips tho
<huggybear404> looks like this board just cant do what I need
<lordievader> huggybear404: Have you tried 16.10 yet?
<Elzington> Let's say I have 3 servers that I use ssh to access, should I have a different key created for each server, or should I just use the same key for all three servers?
<bekks> Elzington: Create different keys on each server, and one on your client. USe your client key to login into each server.
<Elzington> ah, ok, thanks!
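bekks' advice (one client-side keypair, reused against every server) can be sketched as follows; the hostnames and user are placeholders:

```shell
# generate one client keypair (ed25519), then install it on each server
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''
for host in server1 server2 server3; do
    ssh-copy-id "admin@$host"
done
```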
<huggybear404> I'm downloading 16.10 now, takes ages on my "broadband"
<teward> nacc: powersj: wait, so did I basically create a precedent for specialized triage rules now?  o.O
<teward> (in terms of wiki pages and such)
<nacc> teward: you did indeed!
<nacc> teward: put that on your resume!
<teward> we might want to create a template page somewhere then that can just be copied over, if we start doing this regularly xD
<teward> but I didn't expect my one little effort to become a "Hey let's do it for XYZ package!" precedent xD
<teward> this makes me smile :)
<powersj> teward: yes you did, it was so nice you set the bar :)
<teward> heh
<teward> powersj: nacc: once that page is done, if there's specialized triage rules, don't forget to add it to https://wiki.ubuntu.com/Bugs/Triage/#Actions underneath the 'some packages have additional policies on triaging' section, so new bug triagers who read the guide are aware of such rules :)
<powersj> teward: thanks for the reminder I had forgotten about this page
<teward> you're welcome.  :)
<nacc> teward: good catch
<Sebastien> hello, i just installed webmin on my tiny 5$/month linode account. was wondering if it's a good idea or not.
<tarpman> !webmin | Sebastien
<ubottu> Sebastien: webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<Sebastien> oh darn.
<Sebastien> is there another alternative to it?
<sarnold> webmin and other tools are a leading source of remote compromise. Be sure to install firewall rules that allows you and only you to access the thing from your house.
<Sebastien> and will purge remove it completely?
<tarpman> Sebastien: vi(1)
<Sebastien> yeah ufw is enabled
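sarnold's "only you can reach it" firewall advice could be expressed in ufw roughly like this; 203.0.113.10 stands in for a real home IP, and 10000 is webmin's usual default port:

```shell
# allow the panel port only from one trusted address, block everyone else
sudo ufw allow from 203.0.113.10 to any port 10000 proto tcp
sudo ufw deny 10000/tcp
```

ufw evaluates rules in order, so the specific allow added first wins over the broad deny.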
<Sebastien> vi ?
<drab> has anybody seen "Hardware error from APEI Generic Hardware Error" before?
<drab> this one seems to be specifically referred to the PCIe subsystem
<drab> but not much of a clue how to debug it any further
<drab> found very little on google except for memory subsystem for which ppl replied "bad mem"
<sarnold>  * For more information about Generic Hardware Error Source, please
<sarnold>  * refer to ACPI Specification version 4.0, section 17.3.2.6
<DammitJim> man, please send me to the right room
<nacc> DammitJim: ?
<DammitJim> but if a developer notices that a service they are using from one of our clients is timing out
<DammitJim> why does the "devops" team have to get that resolved?
<DammitJim> the service isn't even for production but something for test
<DammitJim> I guess I'll have to deal with it
<sarnold> "Hello; thanks for the report. If you check with the signed SLA between our groups you'll see that this is a testing service and therefore not subject to the uptime guarantees that we have provided. Thus I am going to lunch."
<DammitJim> LOL
<DammitJim> thanks sarnold
<DammitJim> someone understands me
<nacc> "Also never e-mail me again."
<sarnold> nacc: ooh nice, planning for the future too :)
<sarnold> that's the kind of proactive approach that gets you noticed!
<DammitJim> blah... so, seriously, who normally would take care of this in a company?
<drab> it's devops, it's nice ppl, not some old grumpy sysadmin :P
<drab> feel the vibe man
<nacc> sarnold: :)
<DammitJim> I guess we do since we take care of calling the power company when there is a power outage
 * DammitJim is a grumpy sysadmin
<nacc> lol
<drab> I guessed, all the devops ppl are on slack, not irc :P
<drab> irc isn't fancy anymore
<sarnold> DammitJim: "yeah, we tried turning it on and off again. no luck. better call devops." :)
<sarnold> drab: lol
<DammitJim> no, they haven't even done anything... just reported that there is a problem
<drab> sarnold: ok, so it was power related after all, I had looked at the kernel txt but didn't find that ref
<drab> which actually makes sense since I'm using a pci card with a nvme device and 2 M2 SSDs..
<drab> maybe that's just too much with aux power, I just wonder why it worked for a month no prob and then just started rebooting non stop with that error
<drab> workload has not changed
<sarnold> drab: ugh! that's annoying. did you lose a powersupply and not notice?
<drab> also I will note the somewhat ironic comment in syslog "Error recovered, nothing to do"
<drab> and then the system reboots
<drab> heck of a "have you tried to turn it off and on again" joke
<DammitJim> anybody?
<drab> sarnold: nope, lights on power PSUs are green and IPMI says it's all dandy
<drab> DammitJim: ?
<drab> if the service isn't prod SLA is diff, but if the team is in charge of maintaining the test env, then it's still your responsibility?
<DammitJim> I don't know
<drab> it's really down to who manages that environment and also how nice you are I guess, I don't mind lending a hand as long as ppl aren't abusive
<DammitJim> there is no SLA
<DammitJim> I guess I manage the environment
<drab> ok, well, if you spend time fixing that you aren't spending time working on prod... somebody needs to have handle of priorities, if it's cool for you to jump on that and you manage it, then do it
<drab> but you're the only person that knows your priority, and/or maybe your manager if that's where you're at
<DammitJim> tru
<drab> the problem with these things is that I've seen far too many folks get drawn into responsive work and never get to proactive work
<drab> and fixing broken stuff in test is a classic source of that
<drab> so sometimes the right thing is to put that at the end of the queue and carve out time for proactive work to improve things
<drab> balancing that and helping others is one of the tricky thing that imho makes a good team member and a "senior" person, as opposed to simple tech skills
<DammitJim> the problem in this case is that it's a test service that one of our clients is supposed to have available for our company
<DammitJim> specifically for a project that this developer is working
<DammitJim> on
<drab> ok, then isn't that their responsibility if they are making it available?
<DammitJim> and the developer just happened to notice that his requests started to time out
<DammitJim> yeah, but the dev lead is asking my team to take care of the problem
<DammitJim> meaning... go call whomever, open a ticket, explain the problem, follow up on the issue, blah blah blah
<DammitJim> sorry, thanks for letting me pour out the weird feelings
<drab> np, we've all been there
<DammitJim> so, I don't know if I should help (I'm busy and they know it... as well as my team)
<drab> if you don't have a direct line that would make this quicker I'd say bounce it back, I'm sure he has hands and a web browser and can open a ticket
<DammitJim> or just tell them to take care of it themselves
<drab> generally dumping problems on others is not good practice
<DammitJim> right
<DammitJim> and I don't know why they get the idea that they don't deal with anything but coding
<DammitJim> well, I guess that's their job description
<DammitJim> see how I'm having a hard time?
<DammitJim> lol
<drab> it's ok, admins have had for years the idea that they didn't deal with anything but systems :)
<drab> finding a niche of expertise and shunning everything else is a normal human mechanism
<drab> it creates comfort and security
<DammitJim> that's right
<DammitJim> so, I should ask them nicely to take care of it themselves, right?
<drab> I'd ask them to please give it a go and if they have a specific reason why they can't do it or get stuck you're happy to help, but then to please provide some info of where they are stuck
<drab> also intentions to me are important, especially if you are emailing ppl, because it's obvious when folks are pissed
<drab> so if you email back pissed because you feel they just tried to drop it on you, that won't go down well even if it's reasonable
<drab> of course easier said than done :P
<DammitJim> yeah, nothing easy about this (and it's probably my fault too because I'm making it a big deal)
<drab> well, it depends on the pov, to me it's important because it helps ppl grow and a team become stronger, sometimes it's good to educate folks to jfgi
<drab> again what matters is the mindset, are you annoyed at someone or are you trying to help the team[s] grow?
<drab> better teamwork isn't shaped by a manager shouting out some cheesy slogans and off site retreats, it's built by conscious choices like this to encourage responsibility etc
<DammitJim> I'm trying not to let them keep throwing stuff at our team
<DammitJim> maybe that's why
<drab> right
<drab> and I think that's a very good thing
<DammitJim> the dev lead was just at my desk telling me how we need to manage all the SSL Certificates and make them available in a keystore for their applications
<DammitJim> I guess we are in charge of that as well?
<drab> I'd rather do that, yes, but that's just me :)
<DammitJim> yeah... that I'm fine with
<DammitJim> anyways
<DammitJim> we'll see how this goes
<DammitJim> blah
<sarnold> could you just automate the whole thing away to lets encrypt and be done with it? :)
<DammitJim> sarnold, I could
<drab> sarnold: have you tried that? was planning on testing it for some internal domains here
<DammitJim> but requests like these keep me from getting important stuff like that from getting done
<DammitJim> you know how it it
<DammitJim> it is
<sarnold> drab: no, happy to avoid all such responsibility :)
<drab> lol, k
<drab> anyway, enough irc for me, ttyl :)
<DammitJim> I thought it was slack
<DammitJim> not irc
<DammitJim> sarnold, what do we troubleshoot first... the application or the infrastructure?
<sarnold> DammitJim: if one depends upon the other, whichever is most foundational. If they don't depend upon each other, whichever one is currently costing the company the most money per minute.
<DammitJim> hhmmmm... that's a tough one
<DammitJim> an application travels through the systems that we maintain and end on someone else's system, then finally tries to reach another application at the other end
<DammitJim> I'm going to shut up now
<drab> ok, I got stuck and need irc again :P
<drab> stgraber: hitting you up directly, hope that's ok - is there any way to init lxd in an unattended way?
<drab> it seems like it used to be configured with flat files, so you could just drop those into place
<drab> but now that it's sqlite based I don't see an easy way
<drab> https://github.com/juju4/ansible-lxd/ <-- this seems to be the "official" module to do this with ansible, but it's old afaics and for lxc1
<drab> using flat config files
<nacc> drab: what does `lxd init` do? i don't know if it now prompts or not
<drab> nacc: it asks a bunch of questions about datastore, setting up a bridge, etc
<nacc> drab: lxc1 != lxd -- if they say lxd, it means lxc2
<drab> they say lxd but they are still using the bridge file in /etc/lxc, which I thought was lxc1 business
<nacc> drab: i think /etc/lxc is used by lxd too ... although the default-bridge stuff is now gone, that's true
<nacc> but that was a recent change (with the addition of `lxc network`)
<drab> in any case, I tried to do for example a search for zfs in that repo
<drab> and it has nothing but the installation of packages
<drab> so it's not telling lxd which backend/datastore to use, which is one of the questions lxd init asks
<nacc> drab: that would be `lxc storage`
<nacc> drab: so if you don't want to use `lxd init`, you will need to manually configure lxd, afaict
<nacc> drab: setting up the networking and storage pools, at a minimum
<drab> I guess I'll test if I can call lxc commands before doing a lxd init, I thought I tried and it wasn't working, ie hanging, as if the service was unconfigured
<drab> but maybe I should have just started it anyway and then used lxc commands as you are suggesting
<drab> lemme test that, thanks
<nacc> drab: yeah, i'm not 100% -- there's also #lxcontainers
<nacc> drab: you might be able to run the various config commands (config, storage, network) since those are not starting containers
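For reference, LXD 2.x also ships a non-interactive mode that covers the questions `lxd init` asks; a hedged sketch, with flag names to be checked against `lxd init --help` for the installed version:

```shell
# unattended lxd setup: zfs-backed storage pool plus remote API on 8443
sudo lxd init --auto \
    --storage-backend zfs \
    --storage-pool lxd \
    --network-address '[::]' \
    --network-port 8443
```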
<drab> nacc: #lxcontainers ime is just stgraber anyway :)
<drab> otherwise just questions with no answers or a part/join fest
<drab> I guess I could hit the ML too, that seems to have some more action
<drab> anyway, testing
<nacc> drab: yeah, i answer in there every so often ... but it keeps the topic separate in my head :)
<drab> heh, fair enough
<drab> on a different note, anybody doing nic teaming here? maybe I'm just getting confused, but rhel has a nice page describing bonding vs teaming as if they were diff things and also with diff features
<drab> and they say there's a "teamd" daemon that is being used
<drab> however I can't find a similar doc for ubuntu-server and all the docs I'm seeing are for "bonding", with no ref to teaming
<drab> also on most sites the words seem to be used interchangeably, but that seems incorrect if you buy the diff from RH
<drab> http://rhelblog.redhat.com/2014/06/23/team-driver/
<drab> ok, I guess this is the source and it just hasn't been ported to ubuntu? https://github.com/jpirko/libteam
<drab> I see libteam tho in apt
<drab> ah-ha, found it, it's in libteam-utils
<drab> nobody out there seems to be doing it tho, all google results for "ubuntu nic teaming" bring up articles about setting up bonding with ifenslave and just have "teaming" in the title as if it was the same thing
<sarnold> drab: that's because I didn't like the code quality when it was proposed for including in ubuntu https://bugs.launchpad.net/ubuntu/+source/libteam/+bug/1392012
<ubottu> Launchpad bug 1392012 in libteam (Ubuntu) "[MIR] libteam" [Undecided,Expired]
<sarnold> drab: s/in ubuntu/in ubuntu main/
<drab> ok, looks like bonding it is
<drab> thanks sarnold
<drab> at least I figured out what's what, the internets was confusing the heck out of me for a change
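For completeness, the ifenslave-style bonding those search results describe might be configured roughly like this in /etc/network/interfaces; the interface names and address are examples:

```shell
# requires the ifenslave package; eno1/eno2 and the address are placeholders
sudo apt-get install ifenslave
cat <<'EOF' | sudo tee -a /etc/network/interfaces
auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bond-mode 802.3ad
    bond-miimon 100
    bond-slaves eno1 eno2
EOF
```

802.3ad (LACP) needs matching switch configuration; mode active-backup works without it.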
<sarnold> :)
<drab> obligatory xkcd: https://xkcd.com/386/
<sarnold> <3
<drab> unfortunately I gtg, but someone should set all those ppl straight on their terminology :P
<DK2> http://abload.de/image.php?img=imag1185-107uwv.jpg
<DK2> i have like 5 installations with this weird error
<DK2> when booting into the OS
<DK2> after grub, this screen appears
<DK2> any1?
<nacc> DK2: what OS is it?
<DK2> ubuntu 16
<nacc> DK2: 16.04?
<DK2> yea
<nacc> DK2: did you verify the ISO you are using? Is this from the installer or from the installed OS?
<nacc> DK2: looks very strange and suspicious, I've never seen anything like it
<DK2> yes that iso worked few days ago
<nmjnb> Hi, I'm looking for some kind of control panel for Ubuntu Server. I looked at Zpanel, which seemed fine, but it seems they don't have a version for 16.04. Do you have tips of other free control panels for Ubuntu 16.04?
<DK2> i was thinking of hardware error, but this is appearing on more than one machine, so i dont know
#ubuntu-server 2017-03-15
<cpaelzer> jamespage: no backporting by cloud-archive of new DPDK further back than Xenial right?
<jamespage> cpaelzer: correct
<jamespage> any earlier pockets are closed now
<cpaelzer> good, thanks jamespage
<cpaelzer> there was a fix to recent dpdk to build on older kernels <=3.10 I think
<cpaelzer> no need to rush it in then
<frickler> jamespage: coreycb: ICYMI nova 15.0.1 is out with high impact fixes for ocata, pls update
<jamespage> frickler: on my list for as soon as it appeared
<jamespage> frickler: working that next
<fullstop> any good tools to understand why I'm having poor network performance on a server?
<fullstop> There are four NICs, and throughput is choppy, even on a NIC which is unused for anything else, so I'm guessing that it's something with the scheduler.
<fullstop> The server is a little busy, load avg of ~16
<fullstop> but there are 24 cores
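A hedged first pass at fullstop's question: measure raw per-NIC throughput and check whether interrupt handling is starved on a loaded core; iperf3 and sysstat's mpstat are assumed to be installed:

```shell
# on the remote end:
iperf3 -s
# on the busy server; watch for throughput dips over the run:
iperf3 -c otherhost -t 30
# look for a core saturated by %soft (softirq / NIC interrupt handling):
mpstat -P ALL 1
# see which cores take the NICs' interrupts:
grep -i -e eth -e enp /proc/interrupts
```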
<exegesis> Hi there. I'm having trouble executing a script at startup.
<exegesis> I have added it to cron as a @reboot job, not working.
<exegesis> Can anyone give me a light on how to add it to rc.local?
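A minimal rc.local sketch for the question above; /usr/local/bin/myscript.sh is a placeholder for the actual script, and the file must be executable and end with exit 0:

```shell
# write /etc/rc.local and make it executable
sudo tee /etc/rc.local >/dev/null <<'EOF'
#!/bin/sh -e
/usr/local/bin/myscript.sh &
exit 0
EOF
sudo chmod +x /etc/rc.local
```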
<erlon> stgraber: Hey Stephane, Im hitting this bug (https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/1576341) do you know what is the status? I basically cant run iscsid inside the container (failed to mlockall, exiting...). I added the lxc.aa_profile=unconfined, as suggested in the bug, restarted the container, but no success, I still get the same error
<ubottu> Launchpad bug 1576341 in systemd (Ubuntu) "fails in lxd container" [High,Confirmed]
<erlon> stgraber: any idea?
<nacc> erlon: did you verify the running container is unconfined?
<nacc> erlon: what version of open-iscsi?
<erlon> nacc: the latest, 2.0.873
<erlon> nacc: hmm, one point worth to mention, this container is running inside a KVM machine
<erlon> nacc: not sure if that can be a problem
<nacc> erlon: specific version -- is this i ubuntu 16.04?
<erlon> open-iscsi                           2.0.873+git0.3b4b4500-14ubuntu3.3 amd64
<erlon> nacc: yes
<nacc> erlon: right it doesn't have the fix, afaict
<nacc> erlon: only zesty does
<nacc> erlon: (well, the open-iscsi fix)
<nacc> erlon: you can try making the same change to the service file locally
<erlon> nacc: reALLY?
<erlon> opps
<erlon> caps
<nacc> erlon: i see no tasks for xenial and the last comment is for the zesty version (per changelog and rmadison)
<erlon> nacc: you mean configuring the iscsid.service  in the container?
<nacc> erlon: let me see if i can find the change verbatim to show you
<erlon> nacc: ok
<erlon> https://www.irccloud.com/pastebin/kAgDBIav/
<nacc> erlon: http://paste.ubuntu.com/24183873/
<nacc> erlon: interesting -- i don't know enough about why mlockall is failing then, it would seem if it was unconfined, it shouldn't have issues with it
<erlon> nacc: as it seems that fix just blocks iscsi from running inside confined containers, right?
<erlon> nacc: ConditionVirtualization=!container, 'if this is a container, don't run'
<nacc> erlon: i think it prevents it from running in containers period?
<erlon> nacc: yes,
<nacc> hallyn: --^
<nacc> hallyn: should that change be SRU'd?
<erlon> nacc: but checking here, I have run iscsid inside a container on a baremetal machine and it's not getting the same problem
<nacc> erlon: same version of the container (16.04)?
<erlon> nacc: I believe yes, checking
<erlon> nacc: very same version
<erlon>  iscsid -f --version
<erlon> iscsid version 2.0-873
<nacc> erlon: the iscsid version isn't very helpful
<nacc> need the package versions
<nacc> `apt policy open-iscsi`
<erlon> dpkg -l | grep iscsi
<erlon> ii  open-iscsi                       2.0.873+git0.3b4b4500-14ubuntu3.3  amd64        iSCSI initiator tools
<erlon> https://www.irccloud.com/pastebin/4VZlxypy/
<erlon> https://www.irccloud.com/pastebin/BYzncru8/
<erlon> nacc: ^this last one inside the virtualized container
<nacc> erlon: ack
<nacc> erlon: well, i'm a bit stumped myself. It does seem like hallyn's fix in 17.04 is to prevent open-iscsi.service from running in containers at all. But I'm not sure why that'
<nacc> that's acceptable -- i guess you would need to do it manually
<patdk-wk> I'm having a horrible time on 16.04 with a cups server
<erlon> nacc: hmm, but I can't run it even manually
<patdk-wk> when run with cupsd -f, it works fine
<erlon> nacc:
<nacc> erlon: right, that's probably why it was disabled
<erlon> root@juju-5efd81-1-lxd-1:/home/ubuntu# iscsid -f
<erlon> iscsid: failed to mlockall, exiting...
<patdk-wk> but when run as a service, or manually, using cupsd -l
<patdk-wk> it times out after 60 seconds and exits
<nacc> erlon: as you don't have CAP_IPC_LOCK actually
<erlon> nacc: Im mean calling the binary directly
<nacc> erlon: right, the systemd service just calls the binary
<nacc> erlon: so rather than have it fail all the time in containers, hallyn disabled it
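To mirror that zesty change locally (or experiment with it), a systemd drop-in is one option; a sketch, with the caveat that an unprivileged container still lacks CAP_IPC_LOCK, so iscsid's mlockall will fail there regardless:

```shell
# add ConditionVirtualization=!container to iscsid.service via a drop-in
sudo mkdir -p /etc/systemd/system/iscsid.service.d
printf '[Unit]\nConditionVirtualization=!container\n' | \
    sudo tee /etc/systemd/system/iscsid.service.d/override.conf
sudo systemctl daemon-reload
```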
<erlon> nacc: but inside the baremetal container it runs fine
<jamespage> frickler: just published the 15.0.1 update for nova through to zesty; that will auto-backport in the next hour and then I'll promote and test to the UCA (that might be AM tomorrow)
<nacc> erlon: are the two hosts (baremetal and KVM) the same Ubuntu?
<jamespage> if you want it early then it will be in ppa:ubuntu-cloud-archive/ocata-staging
<erlon> nacc: I believe yes, just double checking
<drab> patdk-wk: had the same issue, went back to 14.04, couldn't figure it out
<nacc> erlon: and same version of lxd in both?
<patdk-wk> it's odd
<nacc> erlon: and kernel (since there are hwe now)
<patdk-wk> only happening on this one server, the other one it works fine
<patdk-wk> but think that has something to do with avahi stuff keeping it alive from hitting the 60sec timeout
<drab> patdk-wk: yeah, I think so, because one workaround was to add a "ping" cronjob
<patdk-wk> ya, as long as I keep hitting the web interface to configure and do setup, it keeps running
<patdk-wk> as soon as I'm done, it's dead
<erlon> nacc: HOST->"Ubuntu 16.04.2 LTS" ->  KVM -> ="Ubuntu 16.04.1 LTS", -> LXC -> Ubuntu 16.04.2 LTS
<drab> I did some tracking in systemd thinking it was a socket problem, and found some stuff was indeed missing, but couldn't figure out what needed to be fixed
<drab> patdk-wk: but imho the prob is there, with systemd and the "on-demand" stuff
<nacc> erlon: so the KVM and baremetal systems are different?
<nacc> erlon: rather than using codenames, it's probably better to pastebin things like `uname -a`, `apt policy lxd`
<drab> hey DammitJim
<drab> DammitJim: how did that work out?
<patdk-wk> epoll_wait(4, [], 65536, 1000)          = 0
<patdk-wk> epoll_wait(4, [], 65536, 60000)         = 0
<patdk-wk> epoll_ctl(4, EPOLL_CTL_DEL, 9, 0x7ffe5adefb70) = 0
<patdk-wk> epoll_ctl(4, EPOLL_CTL_DEL, 10, 0x7ffe5adefb70) = 0
<patdk-wk> close(9)                                = 0
<patdk-wk> close(10)                               = 0
<erlon> nacc: hold on, just correcting: by KVM I mean the LXC running under it
<patdk-wk> right after the return from that 60000 epoll_wait, cupsd start to shutdown
<patdk-wk> socket ping should keep it alive :( but annoying
<drab> yeah
<erlon> nacc: so yes, the LXC container that iscsid can work (16.04.1 ) is different from the LXC running under KVM (16.04.2)
<patdk-wk> doesn't happen on my laptop, but that has the full gui installed on it, not just a cups server
<erlon> nacc: it just doesn't make sense; the later version should be the one that works, not 16.04.1
<patdk-wk> â cups.service - CUPS Scheduler
<patdk-wk>    Loaded: loaded (/lib/systemd/system/cups.service; enabled; vendor preset: enabled)
<patdk-wk>    Active: inactive (dead) since Wed 2017-03-15 13:32:05 EDT; 31min ago
<patdk-wk>      Docs: man:cupsd(8)
<patdk-wk>   Process: 13103 ExecStart=/usr/sbin/cupsd -l (code=exited, status=0/SUCCESS)
<patdk-wk>  Main PID: 13103 (code=exited, status=0/SUCCESS)
<ducasse> is there a difference between the zfs.ko that comes with the kernel and the zfs-dkms package? both are the same version, afaict.
<axisys> I increased the virtual hard disk.. how do I see the new size in the VM? pvs still shows the old size
<patdk-wk> ducasse, yes
<axisys> I increased the size of the virtual disk..
<DammitJim> drab, I told one of the other managers and he said we'll talk about it
<DammitJim> thanks for following up
<axisys> ah.. it took a while for pvresize to take effect.. now I see the larger size
<DammitJim> axisys, good luck with resizing
<axisys> DammitJim: done
<DammitJim> I still have to document how I do that to standardize it in our company
<DammitJim> awesome
<ducasse> patdk-wk: i assume i should use the zfs.ko in the kernel, as the zfs metapackage doesn't drag in zfs-dkms?
<patdk-wk> either is fine
<nacc> for ubuntu kernels you shouldn't need zfs-dkms, iiuc
<patdk-wk> dkms is for when you use a kernel that does not have it
<axisys> my /dev/sdb was a PV .. so once I changed the size of the disk in vmware, I had to run pvresize.. and it took a little time to reflect
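The full resize path axisys walked through can be sketched like this (device, VG and LV names are examples; run on the VM after growing the disk on the hypervisor side, and only after checking your own layout):

```shell
# 1. tell the kernel the virtual disk grew (here: /dev/sdb)
echo 1 | sudo tee /sys/class/block/sdb/device/rescan

# 2. grow the PV to fill the enlarged disk
sudo pvresize /dev/sdb

# 3. grow an LV into the new free extents and resize the
#    filesystem in one step ("myvg/root" is a placeholder)
sudo lvextend -r -l +100%FREE /dev/myvg/root
```

`pvs` and `vgs` should show the new size after step 2; the `-r` flag on lvextend invokes fsadm so a separate resize2fs is not needed.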
<ducasse> patdk-wk: ok, thanks
<patdk-wk> or if you want to use a different version than the one in the kernel
<ducasse> i see. perfect :)
<frickler> jamespage: great, thx
<nacc> erlon: so i'm not sure where we stand -- it's sort of hard to tell which version is which (as 16.04.1 and 16.04.2 are the same version of lxc and open-iscsi, the difference is in the kernel/x stacks potentially)
<patdk-wk> drab, this is not really the *right* fix
<patdk-wk> but editing the systemd cups.service unit file to change the -l (run on demand) to -f, has fixed the issue
<drab> patdk-wk: oh, good point, better than the ping or stick with 14.04 I guess
<drab> thanks
<patdk-wk> it seems like -F works too
<patdk-wk> testing that, as it seems like a better solution
<patdk-wk> at least till the real reason is solved, this on-demand stuff for a server heh
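The unit-file change patdk-wk describes, sketched as a systemd drop-in override rather than editing the shipped file (so it survives package upgrades); `-l` (run on demand, exit when idle) is swapped for `-f` (stay in the foreground):

```ini
# /etc/systemd/system/cups.service.d/override.conf
# (equivalently created via: sudo systemctl edit cups.service)
[Service]
ExecStart=
ExecStart=/usr/sbin/cupsd -f
```

Followed by `systemctl daemon-reload && systemctl restart cups`. The empty `ExecStart=` line clears the packaged command before setting the new one.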
<drab> while on the topic, I don't know your setup, but you wouldn't happen to have quite a few diff systems there and a reasonable way to centralized printing?
<drab> I wrote to the cups ML, but no response for some reason
<drab> the client.conf thing they say is deprecated and "you must run a local cups connecting to the remote one"
<drab> I get that in principle, but in practice it's a mess ime
<patdk-wk> that is how I do it
<drab> ok, maybe I'm doing something wrong, here's the big problem for me
<patdk-wk> I setup a cups printer, that advertises airprint and google print
<patdk-wk> setup cups-browsed
<drab> printer goes down, queue is still up because cups doesn't actually check the printer
<drab> user goes to machineA, tries to print, doesn't work, tries again, doesn't work, a third time, doesn't... now machineA has 3 jobs in the local queue
<drab> then the user thinks "I know, there's a problem with machine A" and goes to machine B and does the same thing
<drab> now I have two machines with 3 jobs each of the same thing in the queue
<drab> I go to fix the printer and bam, 6 copies of the same thing gets printed
<patdk-wk> setup your cups server, with cups-browsed, and BrowseLocalProtocols cups
<patdk-wk> then on the client machines install both again, but with cups-browsed setting of BrowseRemoteProtocols cups
<patdk-wk> it will automatically add the cups servers printers locally
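The cups-browsed setup patdk-wk describes, sketched (two small edits to `/etc/cups/cups-browsed.conf`, one on each side; assuming the cups-browsed package is installed everywhere):

```ini
# /etc/cups/cups-browsed.conf on the print SERVER:
# advertise the local queues over the CUPS protocol
BrowseLocalProtocols cups

# /etc/cups/cups-browsed.conf on each CLIENT:
# auto-create local queues for printers advertised by remote CUPS servers
BrowseRemoteProtocols cups
```

Restart cups-browsed after the change; the remote queues should then appear on the clients with no per-printer configuration.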
<drab> with client.conf that doesn't happen because jobs only exist on the remote cups, which I can easily monitor and see the dup'ed jobs and remove them before I restart the printer
<drab> the problem isn't finding/configuring the printer
<drab> it's job management
<drab> specifically stuck jobs
<drab> when something is wrong with the printer, local queues ime are a nightmare
<drab> because I have no clue which machine they're on and purging them is a pain
<patdk-wk> yes, but the job won't be on the local machines then
<drab> mmmh, I had that situation quite a bit, which is why we switched to client.conf
<drab> but maybe there was something else wrong
<patdk-wk> dunno, I never have to worry about queues ever myself
<patdk-wk> I mainly use that feature for my laptop and macs
<patdk-wk> they show up on the lan, the printer just appears and is usable
<patdk-wk> no local config setup or anything needed
<patdk-wk> I can't imagine it would get stuck in a local queue when doing that
<patdk-wk> as it should always be able to go to the remote queue
<patdk-wk> or the printer would *vanish* from the local machine
<drab> I see
<drab> eer, above I said "printer has a problem", the problem case is "remote cups has a problem"
<drab> which is when the job is cached locally
<drab> in the other case you're right, the job would be gone from the local queue even if not printed
<drab> I'll take a look at the browserd thing you mentioned, thanks
<drab> if the printer did vanish that'd stop the problem altogether
<patdk-wk> ok, not sure -F works :(
<patdk-wk> ok, found the real reason
<patdk-wk> it's cause I have no shared printers configured
<patdk-wk> adding this as in the bug report I found solves the issue
<patdk-wk> ListenStream=631
<patdk-wk> bug #1598300
<ubottu> bug 1598300 in cups (Ubuntu Xenial) "CUPS web interface stops responding after a while" [Undecided,Fix committed] https://launchpad.net/bugs/1598300
<patdk-wk> ya, that looks like it fixed up
<patdk-wk> upgrading to the cups -proposed package
<patdk-wk> that is damned annoying
<drab> patdk-wk: good catch. the upstream patch seemed unresolved tho, ie they don't accept the ListenStream=631
<patdk-wk> different issue, kindof
<patdk-wk> default in ubuntu is to have the web interface enabled
<patdk-wk> then cups should never exit
<patdk-wk> the patch fixes that
<patdk-wk> so then it becomes a non-issue
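The workaround patdk-wk pulled from bug #1598300, sketched as a socket drop-in (path is the usual drop-in location; the fixed cups package in -proposed makes this unnecessary):

```ini
# /etc/systemd/system/cups.socket.d/override.conf
# keep a plain TCP listener on 631 so socket activation can
# always re-start cupsd, even with no shared printers configured
[Socket]
ListenStream=631
```

Then `systemctl daemon-reload && systemctl restart cups.socket`. See the bug report for the exact rationale.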
#ubuntu-server 2017-03-16
<goose_> Hi so I've got a pretty difficult task to accomplish. I am a beginner at best with servers but I manage the website for a non profit, and our friend hasn't paid his hosting and we have an important article coming out tomorrow. I have my own hosting on greengeeks or I could host it on my computer. I have all the auth info from go daddy to switch it over I just don't know how to, totally new to ubuntu using 16.04 only a couple weeks 
<goose_> I need to point the dns to my hosting and download then upload the site right? its wordpress
<goose_> sorry if this is the wrong forum, its just that a server is at the centre of all of this
<hallyn> nacc: i'm on the road till the weekend, sorry.
<hallyn> please keep me in the loop on the open-iscsi one.  my understanding was it dabbles in hardware and is therefore inappropriate in containers
<whirlmind> Any inputs on this thread : https://ubuntuforums.org/private.php?do=showpm&pmid=2552188
<whirlmind> I mean this thread : https://ubuntuforums.org/showthread.php?t=2355754
<kyle__> Any preseed gurus here?  I've been fighting with it for a few days, trying to get my preseed recipe to do an EFI install properly
<cpaelzer> whirlmind: your approach is fine; as long as you take a backup first, you should be good to go
<cpaelzer> whirlmind: shrinking is always coming with a risk, but mostly works
<cpaelzer> and the "mostly" is where the backup kicks in
<cpaelzer> whirlmind: btw in general, a backup to the same disk that you run on is rarely useful (reading in your intention why you shrink)
<kyle__> I can get the installer to run through, but it apparently never catches the EFI install part, because on the next boot, it just pxeboots into the installer and runs through the install again.
<whirlmind> cpaelzer: Thank you for your inputs. As to the backup being on the same disk, you are right. I will be taking a copy on to an external device. I usually leave a copy of the backup on another partition on the same disk as well, just for ease of access.
<nacc> hallyn: ack, will do!
<adrian_1908> I didn't get an answer in ##php, so maybe someone here can help me. If I have a VPS with just one core, what's a good number of child processes for PHP-fpm to use? Is the default (5) better than say 1 or 2?
<nacc> adrian_1908: i assume there's a certain amount of tweaking, but i wouldn't change the default unless you know you should
<nacc> adrian_1908: which process manager are you using?
<adrian_1908> nacc: just the vanilla php7.0-fpm that Ubuntu offers by default. If you mean something else, let me know.
<nacc> adrian_1908: fpm itself has a process manager
<nacc> adrian_1908: i don't have one in front of me, let me look at the default
<adrian_1908> nacc: do you mean the static/dynamic/ondemand models?
<nacc> adrian_1908: yeah
<adrian_1908> nacc: not sure yet, but i'm thinking static with 1 or 2 children. I imagine the default expects more hardware and load. I don't have any load issues to worry about, but I'm interested in an economical setup nonetheless.
<adrian_1908> It's not always as simple as 1 core = 1 thread or 1 process though, so i thought maybe someone has experience with this :)
<nacc> adrian_1908: seems like dynamic is the default (if i'm reading it right)
<nacc> adrian_1908: and core is probably the wrong term to use here
<nacc> adrian_1908: do you mean you have 1 logical cpu in /proc/cpuinfo?
<adrian_1908> nacc: i meant in the sense that my VPS is assigned only one "vcore".
<nacc> adrian_1908: ok, in my perspective what matters is what linux sees as # of cpus
<nacc> adrian_1908: you can hyper-optimize as to what you are running on physically/virtually
<nacc> but linux doesn't know any of that
<adrian_1908> fair enough, i'm not well versed in that. my VPS reports `cpu cores : 1`.
<nacc> adrian_1908: what does /proc/cpuinfo say?
<adrian_1908> nacc: just that, an Intel Haswell with 1 cpu core.
<nacc> adrian_1908: oh i see
<nacc> adrian_1908: sorry, i thought you meant the VPS web view or something
<nacc> adrian_1908: even a single-cpu system can handle multiple processes running (well it definitionally is doing that)
<nacc> adrian_1908: there's not a reason to have more if you know you aren't processing tons of requests
<nacc> adrian_1908: 1 seems like a bad idea, just because if it is busy then you'll notice, i think?
<adrian_1908> yeah, if there's any kind of delay or bottleneck, 1 might just be the wrong choice. Maybe 2 then. I just wasn't sure if there's some complex load balancing going on in the back that makes e.g. 5 faster on a single cpu (even on low load). Hence my asking.
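For reference, the knobs being discussed live in the FPM pool config (`/etc/php/7.0/fpm/pool.d/www.conf` on xenial). A conservative static setup for a 1-cpu VPS might look like this (the values are illustrative, not a recommendation):

```ini
; /etc/php/7.0/fpm/pool.d/www.conf
pm = static          ; process manager model: static / dynamic / ondemand
pm.max_children = 2  ; hard cap on worker processes

; only consulted when pm = dynamic:
;pm.start_servers = 2
;pm.min_spare_servers = 1
;pm.max_spare_servers = 3
```

With `pm = ondemand` workers are forked per request burst and reaped when idle, which is another economical option for low-traffic boxes.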
<nacc> adrian_1908: that i don't know about  :)
<adrian_1908> nacc: ok, but thanks! :)
<jonah> hi can anyone please help with an error I have on iptables: http://pastebin.com/QmNXv8p9
<jonah> much appreciated for any help
<lordievader> jonah: Does the DOCKER chain exist?
<jamespage> mwhahaha: hey - the pike UCA pockets are now populated - its just in sync with Ocata ATM; will get bumped to Pike for milestone 1
<mwhahaha> jamespage: sounds good, thanks
<jamespage> yw
<mwhahaha> jamespage, coreycb: are you guys aware that your fwaas tempest tests are still out of date for ocata? http://logs.openstack.org/57/446657/1/check/gate-puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/49fe74b/console.html#_2017-03-16_18_22_06_516241
 * mwhahaha is checking our tempest exclusions from m2/m3 packaging
<coreycb> mwhahaha, i see what's going on there
<coreycb> mwhahaha, it's not so much out of date, well it is, but upstream hasn't released a point release yet with the fix
<coreycb> mwhahaha, i'll cherry pick the patch and upload it
<mwhahaha> coreycb: ok, we've just been excluding since it broke
<coreycb> mwhahaha, i've uploaded that so it'll work its way back to the cloud archive for ocata.  you can track it in bug 1667736.
<ubottu> bug 1667736 in neutron-fwaas (Ubuntu) "gate-neutron-fwaas-dsvm-functional failure after recent localrc change" [High,Triaged] https://launchpad.net/bugs/1667736
<mwhahaha> coreycb: thanks
<drab> ok, million dollar question of the day...
<drab> udev net rule seems to conflict with bridging
<drab> probably because the rule matches on the mac address and the bridge is supposed to get the same mac
<drab> so it gets confused
<drab> any idea how to untangle this mess?
<drab> I'm on xenial, using unpredictable names, aka eth0... (yes I've disabled predictable names)
<drab> I further set up a rule in /etc/udev/rules.d/70... network to assign the name "lan" to eth0 (based on mac address)
<drab> in /etc/network/interfaces I then have the usual 5 lines to set up "lan" to inet manual and the bridge to add that as a port/device and up it
<drab> the setup works fine if I take out all the udev rules and use "eth0" as the name
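The setup drab describes, sketched (the MAC address is a placeholder, and the exact rules filename is whatever his `70-...` file is called):

```
# /etc/udev/rules.d/70-persistent-net.rules -- rename the NIC by MAC
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="lan"

# /etc/network/interfaces -- the "usual 5 lines"
auto lan
iface lan inet manual

auto br0
iface br0 inet dhcp
    bridge_ports lan
```

The conflict he hits: br0 clones the NIC's MAC, so a rule matching only on `ATTR{address}` can also fire on the bridge device.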
<drab> https://github.com/systemd/systemd/issues/3374
<drab> seems related
<drab> altho nobody wants a million dollars uh? :)
<tsimonq2> drab: I want a million bucks, can I get it without answering the question? XD
<OerHeks> wild guess .. predictable interface naming, not eth0 but enp3s0 or similar like that .. we share the million, tsimonq2
<OerHeks> oh, i see the disable predictable names now... no million.
#ubuntu-server 2017-03-17
<drab> I get the million <3
<drab> it seems like if I disable predictable then the matching in udev is screwed up, don't have the time to debug why, but I can't get eth0 to become "lan"
<drab> so if I make the names "predictable" again, and I will quote "predictable" to my death bed because it's the last thing those names are, then you can match with KERNEL=="enp*"
<drab> that seems to avoid the bridge device which wouldn't match and therefore it all works in the end
<drab> mega workaround but worth a million. at some point that systemd bug will make it through and it'll "just work" [tm]
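drab's workaround, sketched: leave predictable names enabled and add a `KERNEL==` match so the rule can only fire on kernel-named `enp*` devices, never on the bridge (the MAC is again a placeholder):

```
# /etc/udev/rules.d/70-persistent-net.rules
# br0 clones the NIC's MAC, but its kernel name is "br0",
# so the KERNEL=="enp*" condition excludes it
SUBSYSTEM=="net", ACTION=="add", KERNEL=="enp*", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="lan"
```

Note udev uses `==` for matching and `=` for assignment, which is easy to get backwards when writing these rules by hand.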
<OerHeks> drab, maybe this works, mask udev's rule file for the default policy: ln -s /dev/null /etc/udev/rules.d/80-net-setup-link.rules >> https://www.shysecurity.com/post/20160510-Ubuntu%20Network%20Interfaces
<drab> OerHeks: that post itself says that method doesn't :)
<drab> anyway it's fixed per above
<drab> I'm gonna go enjoy the million dollar :P
<drab> certainly hard earned
<OerHeks> oh
<DirtyCajun> Do we have a stable kernel release that works with AM4 ?
<DirtyCajun> Reading the release notes it looked like 4.10 was promising but i wanted some opinions
<sarnold> DirtyCajun: I suspect most folks just use xeons. You may have better luck in #ubuntu
<DirtyCajun> That is the pleb channel. i stay far away
<sarnold> hehe
<DirtyCajun> Xd
<patdk-lap> I dunno why you don't ask in the kernel channel
<sarnold> I just figured it'd be more likely a gamer would know..
<DirtyCajun> Wat. its not for gaming
<DirtyCajun> its a 16 thread proc. its for my plex 2u
<patdk-lap> heh? not for gaming?
<sarnold> oh? I thought this was the new amd cpu; the only other person who's mentioned it on irc is a gamer..
<patdk-lap> dunno what an AM4 even is
<patdk-lap> people use amd in servers?
<DirtyCajun> absolutely
<patdk-lap> never seen one, ever
<DirtyCajun> well.... i work at a datacenter
<DirtyCajun> and it is in about 40% of our blades
<patdk-lap> I have racks of servers, not a single amd
<sarnold> well what do you know, an amd chip in a server ;) http://www.thinkmate.com/system/stx-nl-qe12-2210
<patdk-lap> :)
<patdk-lap> they are always horrible for my performance workload, due to the lack of ondie l2 cache that intel has
<DirtyCajun> patdk-lap, check out the specs of the Ryzen
<DirtyCajun> and the upcoming server grade chips
<patdk-lap> ya, I'll wait for real benchmarks
<drab> why are mdadm and preseeding such a pain to work with :(
<patdk-lap> as the benchmarks so far, are not accurate
<DirtyCajun> passmark has 50+
<sarnold> in fairness it's insanely hard to do benchmarking well
<patdk-lap> the last benchmark I saw of amd new cpu line was, in this specific memory workload test, we beat the crap out of intel
<patdk-lap> due to having double the amount of memory channels
<patdk-lap> ya, anyone can make something beat the crap out of something else in one area, I want to know about normal workloads :)
<sarnold> gah
<sarnold> so, cpubenchmark.net numbers on it look pretty good but I was curious about other specs.. and was looking for an ARK link. sigh. :)
<patdk-lap> I guess for a in memory database, that might be ok, if the ondie cache is acceptable to actually run the sql server
<patdk-lap> hehe
<patdk-lap> I'll wait a year or two at least
<drab> how is there not an option to nuke everything with preseed? just nuke, *everything*, don't try to mount raids and be smart
<patdk-lap> intel has pretty much deadended their cpu lines though
<drab> have I missed it?
<drab> I keep getting crap because some of the disks had raid stuff on it that apparently mdadm --zero-superblock hadn't cleaned and install fails
<drab> and it seems non-sense since I don't care what's on the disks, I just want the whole thing to be wiped and reinstalled
<sarnold> patdk-lap: what I could find on the amd website is sorely thin on specs, but it does say for the eight core model, 2 channels of ddr4, 768kb of l1, 4mb l2, 16mb l3; 95W
<patdk-lap> seems light
<patdk-lap> man, have to go all the way down to a e5-2620v4 to find an 8core intel
<patdk-lap> 20mb cache
<patdk-lap> not as bad as it used to be
<sarnold> I've got a e5-2630v3, very similar specs; but four memory channels, 40 PCIe lanes (instead of the 24 that other news sites were reporting for the ryzen)
<patdk-lap> 4 memory channels
<sarnold> so says ark, hehe :)
<patdk-lap> so says my motherboard :)
<patdk-lap> so it *might* be comparable
<patdk-lap> have to see what pricing turns out to be
<sarnold> cpubenchmark said it'd be a bit cheaper than my cpu
<sarnold> if you had compute-bound tasks rather than memory bound tasks, the ryzen might be a good fit
<patdk-lap> ya, most workloads are memory issues
<patdk-lap> except stuff like encoding video
<patdk-lap> I don't know why that isn't memory bound, as I would think it should be :)
<sarnold> it's nice to know amd's in the game again
<patdk-lap> intel did take themselves out of it
<patdk-lap> been years since they made progress
<patdk-lap> and now they basically admitted to giving up
<sarnold> they haven't had any real need
<patdk-lap> sure they have
<patdk-lap> they have almost let arm overrun them
<sarnold> back in early opteron days
<sarnold> but it's been a decade
<patdk-lap> intel is getting backed into a server cpu corner
<patdk-lap> with this, more and more amd systems for desktops, maybe laptops
<patdk-lap> but more and more laptops are going arm by being replaced with tablets and phones and stuff
<fishcooker> how to duplicate a server using the same configuration ?
<ybaumy> hi. what can cause something like the following. on vlan 1 iscsi connections work like a charm. on vlan 500 iscsi connections work like a charm. but on vlan 501 iscsi connection are really slow and its a lottery if they work at all
<ybaumy> there are no dropped packets. no denies on the firewall
<ybaumy> i dont find anything at all
<ybaumy> just that its incredible slow
<ybaumy> all 3 vlans are using the same physical connection
<ybaumy> the VM that runs vlan 501 has no memory or cpu problems
<ybaumy> they all run 16.04 latest patches
<ybaumy> i found the problem
<ybaumy> fyi there was proxy arp enabled on the firewall for vlan 501
<ybaumy> i found that in tcpdump there were no arp requests
<a8o> Know any tricks for saving shell history as you enter commands?  I keep getting connections lost/dropped and I lose my shell history unless I exit normally.
<hateball> a8o: first hit on google is http://askubuntu.com/questions/67283/is-it-possible-to-make-writing-to-bash-history-immediate
<hateball> if you keep dropping your connection you may want to consider using mosh
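The askubuntu fix hateball links boils down to two lines of `~/.bashrc` config, roughly (a minimal sketch; the linked answer has variations):

```shell
# ~/.bashrc -- flush each command to the history file as soon as
# it runs, so a dropped ssh session doesn't lose the history
shopt -s histappend   # append to HISTFILE instead of overwriting on exit
PROMPT_COMMAND="history -a${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
```

`history -a` appends only the lines entered since the last write, so running it from every prompt is cheap.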
<a8o> cool thanks.
<a8o> I use mosh on some systems, but not all can have it.
<hateball> a8o: true enough
<TafThorne> Try using screen or tmux on the remote end.  Then your shell session will persist even if you lose the ssh connection to the remote.
<a8o> never heard of tmux.
<a8o> i do use screen a lot, but some terminal apps get all screwy when I use them in screen.  But I do use screen for stuff I need to run and watch now and then.
<TafThorne> tmux is very similar to screen.  There used to be use-case differences between them but nowadays I think both can do pretty much everything the other can do.  screen is often already installed on systems (just because it was around first I think).
<TafThorne> As long as you do not resize the terminal I usually find things behave OK inside screen.  I do not use tmux so I cannot vouch for it.
<a8o> if anything it's fun to try something new.  Lately I've been using the fish shell.  Love that one
<jamespage> coreycb: hello
<jamespage> coreycb: https://launchpadlibrarian.net/311293238/buildlog_ubuntu-zesty-amd64.vmware-nsx_10.0.1~git20170317.bfa48c0-0ubuntu1~ubuntu17.04.1~ppa201703171404_BUILDING.txt.gz
<jamespage> does that seem familiar to you - I can't get vmware-nsx to build in a PPA
<jamespage> coreycb: builds fine locally/on a cloud instance using sbuild
<coreycb> jamespage, haven't seen that before.  i wonder if the tests do any db setup that is getting missed for some reason.
<coreycb> jamespage, i see run_tests.sh has an option to recreate the database
<jamespage> coreycb: yeah not sure about that
<jamespage> coreycb: afaict nothing ever gets re-created on disk
<jamespage> created rather
<jamespage> coreycb: I lied as well - it failed on my laptop
<coreycb> jamespage, oh..
<drab> anybody here using ansible by any chance and has it working with a preseeded install against the in-target chroot?
<jamespage> coreycb: hmm
<jamespage> coreycb: I might have it identified
<jamespage> coreycb: why do we have a patch to make unit tests use an on-disk sqlite db?
<coreycb> jamespage, is that in vmware-nsx?
<jamespage> coreycb: no in neutron
<jamespage> coreycb: disabling it to see if that helps
<coreycb> jamespage, ok, not sure i'll take a look at it though
<station> can ubuntu core be used on an intel pc with 4 Etherports as a gateway/router … firewall
<drab> station: I'm doing that
<drab> actually just getting through the bonding part on a second machine to get a "4gbit" link
<drab> oh nm, you said core, assumed -server since this is -server, nm
<station> drab: what did you use to set it up? is it a bad idea to also host something like a security cam directly on the router?
<station> im new to server ....
<drab> station: if by router you mean your internet modem, it's ok ime as long as you don't open it up and set up, say, port forwarding
<station> drab: so i could set OPNsense / pfsense on one "router PC" and a separate server
<drab> <troll level raised to yellow>
<drab> station: what are you trying to achieve? I'm having troubles to understand what you're asking for
<drab> are you trying to build a firewall for your home?
<station> drab: im still learning about servers. the 4 port pc will replace a slow router and I have to get internet access to nextcloud, rocketchat …
<station> was hoping to get them both on one device with ubuntu core but …
<drab> nacc: you might know, is there any ML I can post to about say preseed questions? ubuntu-server seems to be development list only and not for support requests
<drab> but I don't see any other place/ML where -server ppl are hanging out to ask a question
<nacc> drab: you can ask here
<nacc> drab: you can ask on -server too, you might just get redirected around a bit; i wonder if ubuntu-users is the right place
<drab> nacc: I doubt -users deal a lot with preseed, feels like asking in ubuntu-desktop :P
<drab> but maybe I'm wrong, not on -users
<nacc> drab: what's your question?
<drab> nacc: if you don't think -server ppl are going to hate me for the rest of my life for posting support requests to a -devel list then I'll try that
<drab> I have two preseed problems
<drab> both on -server and -desktop actually, although the latter may be a bug in the ubiquity installer
<drab> so on -server, I can't figure out how to get preseed to ignore everything on the drive
<drab> if I try to re-pxe a system that had raid on it, initrd I think will autostart the md device
<drab> by the time parted kicks in the drives are in use and everything fails because there's no root device
<nacc> pxe doesn't depend on a physical device being present for root
<drab> I have to manually get a shell, mdadm --zero-superblock, stop the mdadm device, and then dd zeros for good measure
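The manual cleanup drab describes, sketched as a rescue-shell sequence (device names are examples, and this destroys data; double-check with `lsblk` / `cat /proc/mdstat` first):

```shell
# stop the auto-assembled array, then wipe the member superblocks
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1

# belt and braces: clear the partition table and any leftover
# metadata at the start of each disk
dd if=/dev/zero of=/dev/sda bs=1M count=10
dd if=/dev/zero of=/dev/sdb bs=1M count=10
```

`wipefs -a /dev/sdX` is another option for clearing filesystem and raid signatures the dd might miss (e.g. superblocks stored at the end of the device).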
<drab> sure, talking about preseed
<drab> I can't basically reinstall a system automatically as it stands
<drab> if the disks had stuff on it
<nacc> while not having done this with raid, i've reinstalled many systems in the past that were already installed
<nacc> drab: does it only happen with raid?
<drab> even if they had a previous install, parted stops and asks me to confirm which device I wanna use and stuff
<drab> nope, with a single disk too, same issue
<nacc> drab: then you're not telling preseed enough
<nacc> partman-lvm/device_remove_lvm boolean true
<drab> I'm sure, I just can't figure out what I'm missing
<nacc> partman-lvm/confirm boolean true
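For the raid case specifically, the md-related keys matter alongside the lvm ones nacc quotes. The usual full "never ask, just wipe" set looks something like this (exact keys can vary by installer version, and drab's paste may already contain most of them):

```
d-i partman-md/device_remove_md boolean true
d-i partman-md/confirm boolean true
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
```

Even with all of these, an array auto-assembled by the initramfs before partman runs can still block the install, which is the failure mode being discussed here.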
<drab> lemme dpaste it
<drab> http://dpaste.com/30B43FQ
<drab> nacc: I have all of those already
<drab> that's a grep partman srv1604.seed
<nacc> drab: can you pastebin your whole preseed?
<nacc> drab: iirc, what you have is sufficient for everything to get wiped by the installer and if it's still prompting you for a selection, then something else is wrong
<nacc> drab: and please be precise as to where it stops, because "asks me to confirm which device ... and stuff" doesn't help me understand where the preseed is stopping
<nacc> drab: ideally screenshots, i think
<drab> nacc: ok, I'll get to that, gimme a few, thank you
<drab> I should have a windows VM on vbox to repro this so that I can get actual sshots instead of blurry pics with my phone
<nacc> drab: in the meanwhile, can you pastebin your entire preseed?
<drab> nacc: k, that I can certainly do right away, sec
<drab> nacc: ok this has everything , line from pxelinux that I start the install with + srv1604.seed + the partman diff with the raid1 version
<drab> http://dpaste.com/27KH6SP
<drab> thank you for taking the time to look
<nacc> drab: do you have multiple physical disks?
<drab> nacc: in the server using the raid-1, yes, in the testbox using the simple one, no, just one ssd
<nacc> drab: in the latter case, does it help to specify
<nacc> d-i partman-auto/disk string /dev/sda
<drab> ok, I'll add and try with that and see if it helps
<drab> there is the equivalent of that in the raid1 already tho, and I still get errors due to raid being mounted, but that might be a diff issue
<nacc> drab: right i think it is
<nacc> drab: hence why i want to see where preseed stops :)
<drab> ok
<drab> I'm still working on getting a VM going, it's a bit of a multitasking morning with a few more interruptions than usual, appreciate your patience
<nacc> drab: np, i might need to step away for a bit to run some errands/lunch
<drab> no worries at all, thank you for your help
<drab> btw I just figured out another neat trick this morning, just in case anybody cares...
<drab> to do end-to-end ansible installs
<drab> or anything else really, would be the same with chef, if you wanted to trigger a push and need ssh
<drab> the late-command can install ssh in /target and then you can just start it and it'll work just fine so that you can ssh in/push stuff to it and make all the changes before rebooting
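drab's trick, sketched as a single preseed line (the key URL and start method are placeholders/assumptions; whether `service` works inside the chroot can depend on the release):

```
d-i preseed/late_command string \
    in-target apt-get -y install openssh-server ; \
    mkdir -p /target/root/.ssh ; \
    wget -O /target/root/.ssh/authorized_keys http://pxe-server/keys/ansible.pub ; \
    in-target service ssh start
```

`late_command` runs at the very end of the install, before the reboot, which is what gives the config-management push its window.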
<drab> plus setting the bios to boot from hd first, network second, so now I can WOL machines at night, dd a few bits to the disk to remove the mbr, reboot, and it'll go into pxe, install and then ansible it
<drab> poor man ipmi :)
<rgb-one> Hey
<rgb-one> any osTicket users here?
<tarpman> oh wow. been years since I heard that name..
<tarpman> rgb-one: not any more, and boy am I grateful for that!
<rgb-one> tarpman, hehe why do ya say that?
<drab> tarpman: is there a better alternative? I've been looking for a simple ticketing system for ppl to report problems, but didn't find much
<rgb-one> drab, I found a list on reddit
<tarpman> drab: I moved to request tracker
<drab> rgb-one: link? I saw a whole bunch, but they are all way too complicated or kinda broken
<tarpman> rgb-one: embarrassing... it's long enough ago that I'm actually having trouble remembering specifics
<rgb-one> some free ones, Request Tracker and OTRS
<rgb-one> https://www.reddit.com/r/sysadmin/comments/na2r0/to_my_fellow_helpdesk_operators_what_trouble/
<rgb-one> Free ticketing systems: https://www.reddit.com/r/sysadmin/comments/2pp8op/free_ticketing_systems/
<drab> rgb-one: if you go through that list it basically seems to come down to the usual 2 tho, OTRS and RT
<drab> everything else isn't free (despite the topic, see Jira) or is the usual hosted web thingie
<rgb-one> do you use either of those?
<drab> I have in the past
<drab> and they are too complicated for where I'm at now
<rgb-one> Oh
<drab> I almost just need a form with a listing at the back
<drab> a mini django site would do I reckon
<drab> even just sending me an email would work... I could do that to start actually
<drab> I almost implemented it as a google form... I just need a few questions/guided steps for ppl to report problems
<drab> so that they don't write "my internet is broken"
<drab> and it turns out they were at home... not joking
<drab> I don't need multi user, report generating ITIL TLA MSG BLAH BLAH super duper flashy mobile first
<drab> :P
<drab> but it seems I'm a minority so there's no such thing
<rgb-one> I see
<drab> nacc: this was my other problem btw, a bug still open since 2011... https://bugs.launchpad.net/ubuntu/+source/netcfg/+bug/713385
<ubottu> Launchpad bug 713385 in netcfg (Ubuntu) "netcfg/choose_interface=auto fails to find the right interface" [Medium,Confirmed]
<drab> still working on those sshots
<drab> I have the same prob as the initial guy, server with a quadport nic
<nacc> drab: and they don't report link up even though they have link?
<drab> the problem is with the "predictable" names, in some cases the "first" port is not the one I have pxe set up on
<drab> I understand the theory about predictable naming, but honestly I never had an issue with eth* stuff coming out of order and I could reliably say to pick, say eth4 for pxe
<nacc> drab: and do you pass BOOTIF= down?
<drab> so I used ifnames=0 to go back to that, which worked and could reliably pxeboot stuff, but that broke everything when I started working with the new VM server and need to preconfigure bridges
<drab> have not used BOOTIF=, but I passed netcfg/choose_interface=eth0 if it means the same thing
<nacc> well, if you use BOOTIF= netcfg/choose_interface=auto should work
<nacc> fi you don't use BOOTIF=, then yes, you might need to specify the network interface
<nacc> drab: if you don't use your recipes (but use 'atomic' or something) does it work?
<drab> nacc: I did some more tests... it works with auto, but it doesn't do what I'd like it to do. basically, for whatever reason, the ports that sit at positions 3 and 4 on the rear panel come up "earlier" with systemd and are named enp129XXXX vs the other two enp4XXX
<drab> so I guess they are selected first and stuff gets installed/set up with the wrong interface
<nacc> strange
<drab> the "funny" thing is, if I disable predict interafces with ifnames=0
<drab> then the enp4XXX gets eth0 etc
<drab> so eth0-3 match the panel layout
<drab> and I get preseeding etc on eth0 which is what I want
<drab> but like I said if I do that then I get other problems when I get to setting up the bridge
<drab> at this point I'm inclined to give up and work around it with ansible
<drab> as long as the install works with predictable names, when the box comes back I can reconfigure it
<drab> the thing that makes even less sense is that the dhcp server is set to serve tftp *only* on that first interface, the others aren't
<drab> so it pxeboots from iface1, then when the installer stuff is loaded the order is rearranged by systemd or whatever
<drab> it dhcps out on iface3, gets an ip and continues the installation from there
<drab> I guess a good fix would be to have the installer use the same iface it pxebooted on, but it might not know what that is
<drab> nacc: I'll test with atomic
<drab> nacc: nope, same error, no root file system defined
<leetskeet> hello all, recently set up a 16.04 server and I have an issue where after a reboot, I have to complete a local login before SSH with key login will work
<leetskeet> ran systemctl enable ssh and still same issue
<leetskeet> anybody around
<drab> ok so I found the error...
<drab> nacc: ubiquity: error ddf1 wrong # of devices in RAID set "ddf1_kvm5" on /dev/sda
<drab> I've no idea why I get that
<drab> there's no raid on that disk, it's an old windows disk, or maybe windows was installed with fakeraid and that did something to it...
<JMichaelX> did a release upgrade a few weeks ago on home server, from 14.04 to 16.04, and just noticed that installation is using kernel 3.13.0-110-generic
<JMichaelX> desktop on 16.04 is using kernel 4.4.0-66-generic. is there a reason for this difference?
<genii> You haven't done a dist-upgrade
<drab> JMichaelX: 16.04 server uses 4.4.65 too, prolly just missing a dist-upgrade
<sarnold> JMichaelX: how did you do the upgrade?
#ubuntu-server 2017-03-18
<lordievader> Good morning.
<blackflow> Hello. I'm running Bind9 as resolver in our ipv4 only network. I'd like to force it to use ipv4 only, there's a flag "-4" for "named" executable. I've added it to OPTIONS in /etc/default/bind9, restarted named, but "-4" is not showing in ps output, and it keeps trying ipv6
<blackflow> I tried copying /lib/systemd/system/bind9.service  to /etc/systemd/system/  and changed the ExecStart to include -4   , but still the same
<blackflow> How do I force the -4 option to the named executable of Bind9?
<blackflow> (Ubuntu 16.10)
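For the record, the usual gotcha with a copied/overridden unit is forgetting `systemctl daemon-reload`. A drop-in is the cleaner route; a sketch (the ExecStart line mirrors the stock bind9 unit but should be checked against `systemctl cat bind9` on the actual release):

```
# /etc/systemd/system/bind9.service.d/ipv4.conf  (drop-in, a sketch)
[Service]
# the empty ExecStart= clears the inherited one before redefining it
ExecStart=
ExecStart=/usr/sbin/named -f -u bind -4
```

Then `sudo systemctl daemon-reload && sudo systemctl restart bind9`; without the daemon-reload, systemd keeps running the old ExecStart and the `-4` never shows in ps.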
<r33d_> hi
<fishcooker> lordievader: hello
<JMichaelX> sarnold: i upgraded from 14.04 to 16.04 from command line, using do-release-upgrade, and have since done multiple apt upgrades and apt dist-upgrades.
<JMichaelX> however, it is still using 3.13.0-110-generic
<teward> JMichaelX: is this a VPS?
<teward> oh wait home server nevermind
<genii> Maybe you have something pinned
<JMichaelX> have not pinned anything
<genii> It could also be that you have a package installed with specific kernel name, instead of just linux-image-generic
<JMichaelX> i can just try to install the most recent kernel, but just found it odd that the kernel did not upgrade
<JMichaelX> yes, possible. will check that
<genii> !info linux-image-3.13.0-110-generic
<ubottu> Package linux-image-3.13.0-110-generic does not exist in yakkety
<genii> !info linux-image-3.13.0-110-generic trusty
<ubottu> linux-image-3.13.0-110-generic (source: linux): Linux kernel image for version 3.13.0 on 32 bit x86 SMP. In component main, is optional. Version 3.13.0-110.157 (trusty), package size 14509 kB, installed size 32631 kB
<genii> !info linux-image-3.13.0-110-generic xenial
<ubottu> Package linux-image-3.13.0-110-generic does not exist in xenial
<genii> Hm
<OerHeks> is this vagrant?
<JMichaelX> ok, it does appear that linux-image-generic was not installed. i seriously doubt i ever removed it, however
<JMichaelX> after installing linux-image-generic, i did get errors like: initctl: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
<JMichaelX> are these types of errors something a person should not be seeing in 16.04?
<genii> Are these messages persisting after a boot into the newer kernel?
<JMichaelX> am preparing to check that. will brb
<JMichaelX> so far so good. 4.4.0-66-generic now in use, and i'm seeing no errors. many thanks for the help
<genii> Excellent.
<genii> JMichaelX: The upstart error can also happen when there are old startup scripts in /etc/init/ ( upstart scripts) alongside startup scripts in /etc/init.d/ ( sysvinit)
<genii> ( for the same service)
#ubuntu-server 2017-03-19
<lordievader> Good morning
<adrian_1908> Since my nginx is configured to compress textual data it serves, i should disable any compression on the PHP-fpm side, right?
<blackflow> right
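In other words, compress in exactly one layer. A sketch of the two knobs involved (directive names are real, values illustrative):

```
# nginx side: do the gzipping once, here
gzip on;
gzip_types text/plain text/css application/json application/javascript;

; php.ini side (PHP-FPM pool): make sure PHP is not also compressing
zlib.output_compression = Off
```

Double compression wastes CPU and can confuse clients that see a gzip body inside a gzip body.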
#ubuntu-server 2018-03-12
<ShellcatZero> what is the proper place to set the TERM variable? It is currently set to xterm-256color in my .zshrc file, but this overrides GNU screen and tmux causing problems, since they both require TERM=screen.
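One common answer is to let the terminal (or screen/tmux) own TERM and only force it when not already inside a multiplexer. A sketch of the decision; the `pick_term` helper is hypothetical, purely to make the logic explicit, and in `~/.zshrc` you would inline the test instead:

```shell
# pick_term CURRENT_TERM TMUX_VAR -> prints what TERM should become
# (hypothetical helper illustrating the guard)
pick_term() {
  term=$1 tmux=$2
  case $term in
    screen*|tmux*) printf '%s\n' "$term"; return;;   # already in a multiplexer
  esac
  if [ -n "$tmux" ]; then
    printf '%s\n' "$term"          # $TMUX is set: leave tmux's TERM alone
  else
    printf '%s\n' xterm-256color   # plain terminal: safe to force
  fi
}
pick_term xterm ""      # -> xterm-256color
pick_term screen ""     # -> screen
```

The inlined zsh equivalent would be along the lines of `[[ -z $TMUX && $TERM != screen* && $TERM != tmux* ]] && export TERM=xterm-256color`.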
<Neo4> :)
<cpaelzer> good morning
<lordievader> Good morning
<jamespage> o/
<jamespage> cpaelzer: good morning!
<jamespage> cpaelzer: would you have time for a peek at https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1754015 ?
<ubottu> Launchpad bug 1754015 in libvirt (Ubuntu) "nova-compute-kvm does not pull ipxe-qemu on non-amd64 archs" [Undecided,New]
<cpaelzer> hi jamespage
<cpaelzer> did that get the libvirt tasks just minutes ago?
<cpaelzer> I don't have any updates in my mail (yet) - just want to ensure I'd have got that without your ping
<cpaelzer> reading now
<cpaelzer> jamespage: please think with me while I pass through the bug updates :-)
<cpaelzer> jamespage: step 1 I agree to coreycb on comment #1/#2
<cpaelzer> reading further ...
<jamespage> cpaelzer: it did - I just added them
<cpaelzer> ok, makes sense then jamespage
<cpaelzer> TL;DR: a recommends is the correct dep from the qemu packages
<cpaelzer> I'll update the bug
<jamespage> cpaelzer: I was a bit confused as to why on x86 we Depend, but on arm we only recommend?
<cpaelzer> jamespage: TL;DR x86 needs it to work, ppc/arm only needs it to enhance
<cpaelzer> even on x86 there would be cases it could be a recommend
<cpaelzer> but the general history on this was a depend on x86, so that is kept for compat
<cpaelzer> if we followed the depend/recommend definition strictly, x86 would also become a recommend
<cpaelzer> OTOH - nobody complains that it's pulled in on x86, while on arm/ppc there are way more cases where it isn't needed
<cpaelzer> jamespage: ^^
<cpaelzer> I'd update that on the bug - ok?
<jamespage> yes thanks!
<jamespage> I guess if its needed for openstack operation we can pull it in via the charm anyway
<cpaelzer> jamespage: bug updated, fix dep in nova-compute-kvm or charm would be my suggestion
<cpaelzer> I'll track the discussion if any arises
<cpaelzer> thanks for the ping jamespage
<jamespage> cpaelzer: actually I like nova-compute-kvm
<cpaelzer> jamespage: I think you are supposed to "like nova-compute-kvm" by your Job :-P
<jamespage> lo
<jamespage> cpaelzer: do I need ipxe-qemu-256k-compat-efi-roms as well? or is ipxe-qemu enough?
<jamespage> my brain can't remember
<adac> hi there
<adac> I'm following this: https://unix.stackexchange.com/questions/198003/set-default-kernel-in-grub/224708#224708
<adac> but no matter which number I set in GRUB_DEFAULT it always boots me the newest kernel
<adac> any ideas?
<nacc> adac: did you run `update-grub` after?
<adac> nacc, yepp
<nacc> adac: did you check the generated file? (/boot/grub/grub.cfg) ?
<TJ-> adac: what does "grep menuentry /boot/grub/grub.cfg; grep GRUB_DEFAULT= /etc/default/grub" show ?
<adac> TJ-, nacc  https://gist.github.com/anonymous/34c54264871a990690ca3b8b75285d49
<nacc> adac: that reads as GRUB_DEFAULT is unset
<nacc> adac: grep GRUB_DEFAULT /etc/default/grub
<nacc> adac: oh sorry, misread the gist
<adac> is set to 5
<TJ-> adac: which one do you want to be the default
<adac> 4.4.0-112-generic
<adac> TJ-, ^^
<adac> shouldn't the number be 5 for that?
<TJ-> adac: GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 4.4.0-112-generic"
<adac> TJ-, so It doesn't work with the number?
<adac> mean when I set the number to 5
<adac> I need to add this string?
<TJ-> adac: yes, but it's easier to use the name since numbers can change... you'd want =6 for the same entry
<nacc> TJ-: gnulinux-4.4.0-116-generic-advanced-e1b951ba-b110-4a81-bf51-23d82e6c92c5
<adac> TJ-, ok that is true
<nacc> is technically a better choice (less prone to string changes)
<adac> TJ-, still wondering why the number won't work
<adac> I try now to reboot with this
<TJ-> adac: especially with sub-menus involved using numbers is a pain, whereas using names is understandable long after it is set, e.g. I have GRUB_DEFAULT="Advanced options for Ubuntu 16.04.4 LTS>Linux Mainline-RC for Ubuntu 16.04.4 LTS>Ubuntu 16.04.4 LTS, with Linux 4.16-rc3-pci_bridge" (2 levels of sub-menu)
<adac> TJ-, ok I see. Yes I try it out now like this. lets see
<adac> TJ-, ok now it doesn't boot up anymore at all :D
<adac> https://gist.github.com/anonymous/09bd217831753d5a017c481af6b8d34b
<adac> this was correct or?
<TJ-> adac: you sure seem to be good at breaking GRUB :D
<adac> TJ-, totally :D
<adac> how can I set this back in a rescue shell?
<adac> :D
<adac> modifiy the generated one or?
<adac> *generated file
<TJ-> adac: see what this reports: "grep 'set default=' /boot/grub/grub.cfg" that should match GRUB_DEFAULT=
<adac> TJ-, is this correct what I have generated? see above
<TJ-> adac: always set it in /etc/default/grub, then do "sudo update-grub" which writes /boot/grub/grub.cfg based on /etc/default/grub
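To summarise the thread, a sketch of the submenu case (the entry string follows TJ-'s suggestion and must match the `grep menuentry /boot/grub/grub.cfg` output exactly, including punctuation):

```
# /etc/default/grub -- entries inside the "Advanced options" submenu need
# the "parent>child" form (or the stable gnulinux-* id nacc mentioned):
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 4.4.0-112-generic"
# then regenerate the real config:
#   sudo update-grub
```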
<jamespage> coreycb, cpaelzer: https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/nova/commit/?id=072a1d449e1e37e2a5233f25b7d6568b84b842bc look ok?
<cpaelzer> jamespage: yes I think so
<jamespage> done
<jamespage> coreycb: I committed the same fix to the stable/pike branch but I think we can wait until the next point release before uploading
<frickler> jamespage: not directly related but nova folks seemed to strongly suggest to drop 17.0.0 and use 17.0.1 instead
<jamespage> yeah I saw that - I'll ask coreycb nicely :-)
<coreycb> jamespage: frickler: yep saw that as well, planning to do that today
<jamespage> coreycb: ta
<frickler> cool, thx
<coreycb> jamespage: thanks for the nova fix
<gunix> does ubuntu have an official docker hub account, so that i can download the maas container instead of building it myself?
<arooni> hey team;  how to set up x11 for remote clipboard sharing?  i have xclip installed on ubuntu server but when i do echo 'foo' | xclip i see can't open display :0
<arooni> even though i've set the var export DISPLAY=:0
<hallyn> ugh - ubuntu bionic desktop (gnome over vnc) on kvm is doing that key autorepeat thing
<hallyn> bouncekeys
<hallyn> guess it could be the gvncclient
<hallyn> though i've not seen it happen with other kvms
#ubuntu-server 2018-03-13
<altmount> I oppose!
<Randolf> altmount:  What do you oppose?
<cpaelzer> good morning
<Randolf> Good morning cpaelzer.
<cpaelzer> hi Randolf, good morning to you as well
<lordievader> Good morning
<Randolf> Good morning lordievader.
<lordievader> Hey Randolf
<lordievader> How are you doing?
<Randolf> Good.  How are you?
<lordievader> Doing good here :)
<rbasak> cpaelzer: may I have your opinion please, as my potential future memcached MP reviewer?
<rbasak> Upstream landed https://github.com/memcached/memcached/pull/285 in a "bugfix" update.
<rbasak> Which is really for RHEL. It disables some systemd sandboxing features as RHEL's systemd doesn't support it.
<rbasak> It does so by prefixing '##safer##' to certain lines.
<rbasak> The RPM .spec is updated to sed them out again if the systemd is new enough.
<rbasak> In our case, systemd is always new enough.
<rbasak> But if I pull the new upstream and do nothing, the systemd service unit regresses by disabling this sandboxing.
<rbasak> IMHO, it is inappropriate for upstream to ship the lowest common denominator by default.
<rbasak> Older RHEL should patch out the lines it can't support or something. So I intend to file an issue about it.
<rbasak> In the meantime, how should I package this in Debian?
<rbasak> sedding them out again in debian/rules works just fine, and is reasonably future proof if upstream keeps to the same pattern.
<rbasak> Or I could add a more rigid quilt patch.
<rbasak> Opinion?
<cpaelzer> rbasak: start reading ...
<cpaelzer> rbasak: as you already said sedding as well as a patch works
<cpaelzer> rbasak: I'm somewhat afraid that sed'ding might enable/disable random bits in the future
<cpaelzer> I also think upstream does that the wrong way
<rbasak> cpaelzer: so you'd prefer I add a quilt patch for now, rather than sed?
<cpaelzer> rbasak: sort of yes, quilt + a check if there are more ##safer##
<cpaelzer> rbasak: the check could break build
<cpaelzer> rbasak: maybe even check count of ##safer## and if != the expected amount the packager has to check them one by one if it is ok to disable
<cpaelzer> this could still fail if they add one and remove one upstream but you see where I'm heading to
<cpaelzer> some sort of safety what we enable/disable with the sedding
<cpaelzer> if at least they would have taken safer-<systemdversion> or something
<cpaelzer> then the decision would again be where it belongs (upstream)
<cpaelzer> but they didn't
<rbasak> cpaelzer: perhaps a quilt patch for now, and a dep8 test to verify that ##safer## doesn't appear in the shipped service unit file?
<rbasak> That should cover your mutate-but-count-is-same case.
<cpaelzer> rbasak: yes
<cpaelzer> rbasak: but dep8 is needlessly late for this
<cpaelzer> rbasak: why not a d/rules entry that checks
<cpaelzer> so people would realize it much earlier, probably also before it is in some -proposed
<rbasak> cpaelzer: good point. Easy enough to do in override_dh_auto_install.
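Something along these lines in debian/rules would do it (a sketch: the installed unit path and the expected count of 4 are assumptions to be checked against the real package):

```make
# debian/rules (sketch): strip the ##safer## prefixes, but fail the build
# loudly if upstream changes how many lines carry them.
override_dh_auto_install:
	dh_auto_install
	# the expected count is a placeholder -- recheck on every new upstream
	test "$$(grep -c '^##safer##' debian/tmp/lib/systemd/system/memcached.service)" -eq 4
	sed -i 's/^##safer##//' debian/tmp/lib/systemd/system/memcached.service
```

A mutate-but-count-is-same change would still slip through, which is why a dep8 check that the shipped unit contains no `##safer##` is a useful second net.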
 * rbasak has filed https://github.com/memcached/memcached/issues/359
 * rbasak has filed https://bugs.launchpad.net/memcached/+bug/1755460
<ubottu> Launchpad bug 1755460 in memcached (Ubuntu) "memcached.service is less secure by default" [Medium,Triaged]
<frickler> looks like memcached maintainers are proud of the current publicity and want to make sure they can keep riding that wave? :-/
<rbasak> frickler: huh?
<frickler> https://blogs.akamai.com/2018/03/memcached-fueled-13-tbps-attacks.html
<rbasak> Oh. That's something quite separate. I said "secure" in my bug summary, but it's really a different (and lower) order of magnitude of "secure" than the amplification vulnerability.
<ahasenack> hi,
<ahasenack> how would I go about updating a package version in ubuntu's git with a new upstream (not from debian) tarball?
<ahasenack> non-git workflow is essentially uupdate on the new tarball
<ahasenack> (the one I know of)
<rbasak> I happen to have just done this for memcached (MP not up yet).
<rbasak> I used uscan/uupdate, and then hacked things around to gitify it again.
<rbasak> Then added one commit that changed upstream sources only (not committing the debian/changelog change made by uupdate).
<ahasenack> git status followed by git add/remove as needed?
<rbasak> uupdate puts everything in a different directory so I had to bring it all back, including any dotfiles (there were none in my case).
<ahasenack> right
<ahasenack> feels a bit error prone
<rbasak> So a bunch of hackery done by hand, but the end result being that there's a single git commit that pulls in the new upstream but doesn't touch debian/
<ahasenack> how does debian do it in their git, do you know?
<ahasenack> same thing?
<rbasak> This could be automated in the future by git ubuntu uupdate or something, but that's not implemented.
<rbasak> gbp has some functionality to do something similar
<rbasak> gbp import-orig
<rbasak> Maintains a separate upstream branch
<rbasak> gbp updates that and then merges upstream into master (and there's a --no-merge flag)
<rbasak> That's the common gbp workflow
<rbasak> ahasenack: in terms of error-proneness, you can gain some confidence when dpkg-buildpackage doesn't complain. As if it's a 3.0 (quilt) package, then everything apart from debian/ must exactly match the orig tarball, and the orig tarball isn't hacked with.
<ahasenack> true, I got such errors in other occasions
<cpaelzer> ahasenack: FYI this was a uupdate for example https://git.launchpad.net/~paelzer/ubuntu/+source/iproute2/log/?h=bionic-merge-4.14.1
<cpaelzer> going ahead of Debian
<ahasenack> I'll take a look, thanks
<smoser> rbasak: do you recall the bug where you were asking to change cloud-init to manage_etc_hosts : true ?
<rbasak> Yes. Do you want the number? :)
<rbasak> smoser: https://bugs.launchpad.net/cloud-init/+bug/1741277
<ubottu> Launchpad bug 1741277 in cloud-init (Ubuntu) "Not all platforms running cloud-init end up with the system hostname resolveable by default" [Undecided,Confirmed]
<smoser> thanks
<nacc> rbasak: when using a fixture with pytest.mark.parametrize, do i need to pass the fixture as a parametrized thing each time? or will pytest dtrt?
<nacc> fixture *and* mark.parameterize
<rbasak> I think you just add the fixtures to the def line
<rbasak> I can't remember whether at the end or the start.
<rbasak> But one of those should work I think.
<nacc> rbasak: ok, i'll test to figure it out (that's what i wasn't sure of either)
<rbasak> nacc, ahasenack: I just created https://code.launchpad.net/~racb/ubuntu/+source/memcached/+git/memcached/+merge/341340. Process question: is it a problem that usd-import-team isn't in the list of reviewers?
<rbasak> This happened because I added the first reviewer from the MP creation page.
<Odd_Bloke> rbasak: nacc: Adding them anywhere in the parameters should DTRT.
<nacc> Odd_Bloke: thanks!
<nacc> rbasak: process-wise, no, except that only usd-import-team can create the upload tag
<nacc> rbasak: had another test question for you, if you have a moment (HO would be preferred, but I can do IRC)
<rbasak> HO is fine. Two minutes. Standup HO?
<nacc> rbasak: thanks
<ahasenack> rbasak: mp is fine
<ahasenack> wrt your question
<ahasenack> rbasak: hm, question
<ahasenack> rbasak: debian repacks samba's tarball
<ahasenack> samba_4.7.4+dfsg.orig.tar.gz
<ahasenack> upstream is like https://download.samba.org/pub/samba/stable/samba-4.7.6.tar.gz
<ahasenack> how do I know what was changed in that +dfsg repacking? untar it and compare?
<ahasenack> rbasak: my connection dropped, not sure you saw my dfsg question above?
<nacc> ahasenack: gbp.conf (see debian/README.source)
<nacc> ahasenack: (debian/gbp.conf)
<ahasenack> I see
<ahasenack> nacc: the filter setting in [import-orig] should be enough?
<ahasenack> I have
<ahasenack>   'source4/heimdal/lib/wind/rfc*txt',
<ahasenack>   'source4/ldap_server/devdocs',
<ahasenack>   '*chm',
<nacc> ahasenack: based upon debian/README.source, that's what ends up munging the orig
<ahasenack> ok
<nacc> rbasak: i need to drop
<rbasak> nacc: sorry, back now. That took longer than expected :(
<nacc> rbasak: np
<nacc> rbasak: i think your pad does write it up correctly now
<nacc> rbasak: or, at least, mostly -- do you want to take that up next, then?
<rbasak> nacc: yeah I guess I need to :)
<nacc> rbasak: thanks; i'll hold off on the new tests for a bit
<ahasenack> nacc: rbasak: could one of you please feed samba to the importer? Debian just released a new package: https://launchpad.net/debian/+source/samba/2:4.7.4+dfsg-2
<ahasenack> it's not yet in https://code.launchpad.net/~usd-import-team/ubuntu/+source/samba/+git/samba
<nacc> ahasenack: keep-up has not yet started
<nacc> ahasenack: once the reimport is done (i just kicked off the remainder)
<ahasenack> what is keep-up?
<nacc> the normal import script that keeps up with the publisher
<ahasenack> ok
<ahasenack> it's a daily cron?
<ahasenack> or a long-running daemon?
<nacc> the latter
<ahasenack> which you stopped because of the reimport, right?
<nacc> right
<ahasenack> ok, thanks
<nacc> no publish in the last ~8 days is guaranteed to have been imported
<ahasenack> ETA sometime today?
<nacc> yeah
<nacc> at least, well, to start it
<ahasenack> ok
<nacc> it will probably take a while to catch all the way up
<ahasenack> ah, it's been stopped for all this time
<ahasenack> not just since saturday?
<nacc> since a week ago saturday, or so
<nacc> march 5
<ahasenack> ok
<nacc> when i started the reimport
<ahasenack> nacc: if we go ahead of debian in some package, does git-ubuntu merge work with that package afterwards? Let's say debian updates the version too
<ahasenack> I saw there is a -f option to force a merge, would that do it correctly?
<ahasenack> s/would/should/
<nacc> ahasenack: i expect git-ubuntu merge to be pretty broken right now
<nacc> ahasenack: due to the bug you reported
<nacc> it needs updating for the new import algorithm
<ahasenack> ok
<nacc> ahasenack: you can manually tag the old, new bits and just do the rebase yourself
<nacc> that's really all git-ubuntu merge is doing
<ahasenack> I was just wondering about that case where we would be ahead
<ahasenack> since the normal merge is normally with debian being ahead of us instead
<nacc> ahasenack: wait, we're ahead of debian and still are?
<ahasenack> in this hypothetical case, yes
<ahasenack> the "merge" would be to just grab debian/* changes from them in that case
<nacc> that's not an ubuntu merge then :)
<ahasenack> correct
<nacc> ahasenack: so, no, it wouldn't work, as git-ubuntu-merge is for ubuntu merges
<ahasenack> "Are you sure you want to merge? (Pass -f to force the merge)." <-- was wondering what would happen with -f here
<nacc> you'd get a weird delta that you don't really want to merge?
<ahasenack> in other words, going ahead of debian also makes getting future changes from them harder
<nacc> ahasenack: well, you'd cherry-pick them
<nacc> ahasenack: you wouldn't merge to them
<ahasenack> right, it's another process
<nacc> right
<nacc> i don't think it's harder, though
<nacc> it's easier, if anything
<ahasenack> harder because debian uploads are just one big commit
<ahasenack> you would have to take it apart before cherry picking
<ahasenack> or just grab what you want and apply, without cherry picking a specific commit
<ahasenack> taking it apart is part of the merge workflow already
<nacc> ahasenack: did you see if it's their VCS?
<nacc> *in their
<nacc> we don't ever take apart debian uploads, fwiw
<ahasenack> it's there, yes
<ahasenack> I'm just considering pros and cons of going ahead of them
<ahasenack> and yes, we take apart our changes only, not theirs, my mistake
<nacc> mostly because if you really care about that level of granularity, check the debian vcs and use that
<hallyn> ruh roh - bionic: apt build-dep libvirt:
<hallyn> The following packages have unmet dependencies: builddeps:libvirt : Depends: libcgmanager-dev but it is not installable
<MitchT> Hello all.
<hallyn> cpaelzer: ^
<hallyn> we're in feature freeze only so i guess i could push a fix for this,
<hallyn> i assume noone else is doing a libvirt update since, well, we're in feature freeze
<MitchT> I hope this is a decent place to ask, but has anyone attempted to deploy an azure vm using 18.04 beta?  As of March 8th, all daily images are unable to deploy.  I figured it was azure. I opened a ticket and after escalation, their team confirmed that all images from March 8th+ are 'broken'
<hallyn> kirkland: ^
<hallyn> kirkland: who handles azure vms these days?
<nacc> Odd_Bloke: --^ ?
<hallyn> gonna guess not utlelmming utlemming :)
<hallyn> meh
<hallyn> my control keys why dey no work
 * MitchT posts the list of what works and what doesn't https://hastebin.com/icuzavoner.css
<hallyn> hm, but in my other vm i don't have that failed dependency,
<Odd_Bloke> nacc: Thanks!
<Odd_Bloke> MitchT: o/ I'm a person you can talk to about that; how are you deploying them?
<hallyn> backports?
<MitchT> I'm deploying using a template
<hallyn> oh!  haha - do-release-upgrade failed me.  it did not update the release name in deb-src lines
<nacc> hallyn: wouldn't expect backports to matter for bionic :)
<nacc> hallyn: ah...
<hallyn> nacc: yeah this is worse :)
<hallyn> thanks
<hallyn> cpaelzer: ignore me
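The fix-up for the missed deb-src lines is mechanical; a sketch on a throwaway copy (on a real system this would be /etc/apt/sources.list, edited with sudo and followed by `apt update`; xenial->bionic is assumed as the upgrade path):

```shell
# do-release-upgrade rewrote the deb lines but skipped deb-src;
# one sed brings the stragglers along.
f=$(mktemp)
printf '%s\n' \
  'deb http://archive.ubuntu.com/ubuntu bionic main' \
  'deb-src http://archive.ubuntu.com/ubuntu xenial main' > "$f"
sed -i 's/^\(deb-src .*\)xenial/\1bionic/' "$f"
grep -c bionic "$f"   # -> 2
```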
<MitchT> the template hasn't changed since early feb. it took me a few days to track down that if i specify that last version i have marked "OK" it will deploy.
<MitchT> @Odd_Bloke I've found that the problem is exactly what i've described. I'm honestly out of ideas. I'm glad freenode is still around. I should know linux guys would be on IRC
<Odd_Bloke> MitchT: OK, I can reproduce the issue, I'll dig in to it a bit more.  Thanks for reporting it here!
<Odd_Bloke> MitchT: Could I also ask you to file a cloud-images bug at https://bugs.launchpad.net/cloud-images/+filebug, please?
<MitchT> My IT department sits in awe at the level of support from the ubuntu team
<MitchT> i will file a bug.
<MitchT> Bug filed.  I hope this is helpful.   I can provide more details if needed, but its pretty straightforward. https://bugs.launchpad.net/cloud-images/+bug/1755565
<ubottu> Launchpad bug 1755565 in cloud-images "Unable to deploy azure template with Ubuntu18.04 Daily image past March 8th" [Undecided,New]
<ahasenack> hm, I have a question regarding dfsg orig tarballs
<ahasenack> let's say I produce one
<ahasenack> using the same exclusion rules as debian
<ahasenack> build the ubuntu package, upload to the ubuntu archive, all is well
<ahasenack> then debian catches up, produces the dfsg tarball too
<ahasenack> but for some silly reason, it has a different md5
<ahasenack> it's the same content, but different md5 because of, say, different compression level
<ahasenack> won't that break things on our side? Because it will be the same filename, same tarball version and so on, but with a different md5 (but same content)
<rbasak> ahasenack: yeah, that can happen. It's annoying, but sometimes unavoidable.
<nacc> ahasenack: yeah, that's something the importer catches (and why we import origs separately for debian & ubuntu)
<nacc> ahasenack: but it *really* should be avoided if at all possible
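The failure mode is easy to demonstrate: two tarballs of the identical tree, made with different gzip settings, would share the orig filename but not the checksum. A standalone sketch in a temp dir:

```shell
# Identical content, different gzip level => different bytes (gzip even
# records a compression-level hint in its header), so the md5s cannot match.
d=$(mktemp -d)
mkdir "$d/pkg-1.0"
echo hello > "$d/pkg-1.0/file"
tar -C "$d" -cf "$d/pkg.tar" pkg-1.0
gzip -1 -c "$d/pkg.tar" > "$d/a.orig.tar.gz"
gzip -9 -c "$d/pkg.tar" > "$d/b.orig.tar.gz"
cmp -s "$d/a.orig.tar.gz" "$d/b.orig.tar.gz" && echo same || echo different
```

The unpacked trees are byte-identical; only the compressed container differs, which is exactly why Launchpad rejecting a second orig with the same name but a different checksum is so easy to trip over.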
#ubuntu-server 2018-03-14
<nacc> ahasenack: fyi, the catch-up script may not start until tomorrow (i'm going afk for the evening)
<lordievader> Good morning
<cpaelzer> hi lordievader, how are you?
<lordievader> Hey cpaelzer Doing good here, how are you?
<cpaelzer> same
<trippeh_> weirdness
<trippeh_> mars 14 09:32:43 hrmng systemd-networkd[963]: ipmi: Could not get LINKINFO: No data available
<trippeh_> mars 14 09:32:43 hrmng systemd-networkd[963]: Could not set ifindex on netdev, ignoring: No data available
<trippeh_> fails every boot. but systemctl restart systemd-networkd fixes it.
<trippeh_> have other interfaces using the same driver that works fine.
<trippeh_> No results found for "Could not get LINKINFO: No data available".
<trippeh_> thanks google :p
<trippeh_> uh, I moved it to another port, still failed.
<trippeh_> gave it another name.. then it worked
<trippeh_> odd.
<cpaelzer> rbasak: getting to the memcached review only now - with a potential lunch interrupt it might still take a while
<cpaelzer> are you super blocked onthat?
<rbasak> cpaelzer: no not blocked at all.
<rbasak> It can land "whenever" IMHO. No FFe required (again IMHO).
<rbasak> Just needs to make Bionic.
<cpaelzer> ok, so not postponing lunch then
<rbasak> Though of course earlier is better for wider testing as always. Yeah, of course don't postpone lunch :)
<cpaelzer> rbasak: do you have a ppa with that built already?
<rbasak> cpaelzer: yes. ppa:racb/experimental
<cpaelzer> rbasak: tests after lunch, the review on code is done and mostly ok
<rbasak> Thanks!
<ahasenack> rbasak: hi, morning
<ahasenack> rbasak: regarding my dfsg tarball question from yesterday
<ahasenack> I saw your reply and nacc's
<ahasenack> I don't know how to proceed now, if I should generate the dfsg tarball and go ahead of debian, risking its md5 being different from what debian will generate when the time comes, or doing nothing
<rbasak> mdeslaur: ^ this is a known thing that can happen, but when we have no choice we just do it, right?
 * rbasak isn't sure if mdeslaur will be in today
<mdeslaur> that's a tough one
<rbasak> Oh. Hello :)
<mdeslaur> you can name it something else than dfsg
<mdeslaur> hi :)
<ahasenack> is there precedent for that?
<ahasenack> all of this because of some rfc files :/
<mdeslaur> I believe I've done it before
<rbasak> Even if we name it something else, Launchpad will only accept that one orig for a given upstream version in Ubuntu, right?
<mdeslaur> I don't think so, all launchpad cares about is that the orig tarball doesn't have the same name as another
<rbasak> Oh, OK.
<mdeslaur> which package is this?
<rbasak> samba :-)
<ahasenack> debian is at 4.7.4, no idea when they will update
<ahasenack> we would like to go to 4.7.6 because of a corruption bug (fixed in 4.7.5) and the security fixes in 4.7.6
<mdeslaur> yeah
<ahasenack> changelog shows only bugfixes are in these new releases
<mdeslaur> so call it 4.7.6+adfsg
<ahasenack> the secfixes have patches, but the corruption bug's patch is huge, about 100kb
<ahasenack> and Andrew Bartlett himself is asking us to please use 4.7.6
<mdeslaur> just make sure there's no "dfsg" versioning logic in debian/rules, I've seen that before
<ahasenack> how will dpkg-buildpackage know which tarball to use? Where is that name referenced?
<mdeslaur> it finds the orig tarball name based on the version string in debian/changelog
<ahasenack> ah, sure
<rbasak> Oh, of course.
<ahasenack> will 4.7.6+dfsg be higher than 4.7.6+adfsg?
<rbasak> That's why what I said above is moot. The upstream version would be "different".
<rbasak> You could use 4.7.6+dfsg~ubuntu or something.
<rbasak> Then when Debian has one, we can "upgrade" the upstream version to it.
<mdeslaur> but you'll get a tarball collision if you do that
<mdeslaur> oh
<mdeslaur> ah yes, I think +dfsg~ubuntu-1ubuntu1 or similar would work
<ahasenack> so
<rbasak> -0ubuntu1 please, to be consistent.
<mdeslaur> sorry, yes, 0ubuntu1
<ahasenack> samba-4.7.6+dfsg~ubuntu-0ubuntu1
<ahasenack> twice ubuntu?
<rbasak> Yes
<rbasak> But without the samba- prefix in the version string itself of course, but I think that's what you mean.
<rbasak> It's interesting that this isn't an already established pattern for us.
<rbasak> I wonder if there's a reason for that. It seems so obvious now.
<ahasenack> do you guys know Mathieu Parent <sathieu@debian.org>? I would like to ping him to see if he is planning on an upload soon
<mdeslaur> so your tarball would be called samba_4.7.6+dfsg~ubuntu.orig.tar.gz
<ahasenack> mdeslaur: right
<rbasak> And solves the problem pretty much entirely, in the case that we know that we're going ahead of Debian in the orig tarball construction.
<mdeslaur> I guess it doesn't happen often enough that anyone has documented the right approach
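The ordering works because dpkg sorts `~` before anything else, including end-of-string. Assuming a system with dpkg available, both claims in the thread can be checked directly:

```shell
# '~' sorts lowest, so the interim Ubuntu repack is superseded by Debian's
# later +dfsg tarball; and +adfsg < +dfsg simply because 'a' < 'd'.
dpkg --compare-versions 4.7.6+dfsg~ubuntu lt 4.7.6+dfsg && echo "Debian's +dfsg supersedes"
dpkg --compare-versions 4.7.6+adfsg lt 4.7.6+dfsg && echo "+adfsg sorts below +dfsg"
```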
<rbasak> cpaelzer: memcached> thank you for the review!
<cpaelzer> yw rbasak
<jlnl>  Ubuntu 16.04 Server, samba and kerberos working, smbnetfs is setup, ~/.smb/smb.conf is a copy of /etc/samba/etc/smb.conf customised for my domain/ realm but when smbnetfs is used, it only shows resources shared under WORKGROUP, not my current domain.
<jlnl>  I also get this: [2018/03/14 12:58:22.380986,  0] ../source3/winbindd/winbindd_group.c:45(fill_grent)
<jlnl> My winbind version is 4.3.11, so it should not suffer from the bug mentioned in https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=858601
<ubottu> Debian bug 858601 in samba-vfs-modules "winbind: user authentication using windows domain fails after upgrade to 4.2.14+dfsg-0+deb8u4" [Important,Fixed]
<ahasenack> I never used smbnetfs
<jlnl> ahasenack, smbnetfs is a kind of glue between fuse and mount.cifs
<jlnl> ahasenack, this should simplify mounting windows shares as a normal user.
<ahasenack> and /etc/samba/smb.conf uses WORKGROUP, whereas your config in ~ uses another?
<jlnl> ahasenack, neither configfile mentions WORKGROUP, but smbnetfs seems to default to WORKGROUP. Ah, I see that my samba error message is not complete, my apologies:
<jlnl> [2018/03/14 14:01:26.482395,  0] ../source3/winbindd/winbindd_group.c:45(fill_grent)
<jlnl>   Failed to find domain ''. Check connection to trusted domains!
<jlnl> ahasenack ^
<ahasenack> doesn't ring a bell, but winbind is notorious for errors here and there
<jlnl> ahasenack, the groups command returns the correct groups that this ADS user is a member of, which should also work through winbind, or is this assumption wrong here?
<ahasenack> it's not wrong
<ahasenack> I would try to verify this via other tools
<jlnl> ahasenack, do you have a suggestion which tools to use for testing this?
<ahasenack> also modify the main smb.conf and use that one as the only source of configuration, to remove parsing of multiple configuration files as a possible source of the problem
<ahasenack> jlnl: smbclient, nautilus' network environment (forget what it is calleD)
<jlnl> smbclient eh, ok, I'll read the manpages for that command.
<jlnl> ahasenack, smbclient works normally
<TJ-> jlnl: what is the version of the share server itself?
<jlnl> TJ-, Windows Storage Server 2012 Standard 6.2
<TJ-> I wonder if the debug option used in this similar report might help, you may have the same issue https://sourceforge.net/p/smbnetfs/bugs/25/
<jlnl> TJ-, that might very well be the case. I couldn't find any similar solution that automatically iterates through available shares though.
<jlnl> TJ-, Look at the year: 2011, that's ancient...
<jlnl> Still open.
<TJ-> jlnl: or it was not confirmed as an issue, user solved it and never reported back on how :)
<jlnl> TJ-, yes, that unfortunately happens ... a lot :-/
<jlnl> TJ-, judging by the last entry, this shpac did not have any password authentication setup on the server (s)he was connecting to. Our network is a bit more secure than that :-)
<ahasenack> rbasak: deb maintainer told me they have a 4.7.6 tarball in their "pristine-tar" branch, any idea where that would be? I checked salsa and it's still at 4.7.4 in the pristine-tar branch
<rbasak> ahasenack: perhaps the maintainer forgot to push it?
<rbasak> Or it got pushed to an old alioth repo?
<rbasak> If you don't see a branch called "pristine-tar" that has been updated recently, it's not been pushed there.
<nacc> mdeslaur: are you planning on doing the php7.0 update still? i've gotten another offline ping re: security update (this time for 7.0.28)
<mdeslaur> nacc: hi! 7.0.25 only fixed "low" cves, so I didn't release it as a security update
<mdeslaur> nacc: are you planning 7.0.28 updates? let me look at the cve list, one sec
<ahasenack> rbasak: yeah, maybe alioth, will check there too before responding
<mdeslaur> nacc: looks like 7.0.28 has "medium" CVEs, so that one we should release as a security update
<nacc> mdeslaur: err, it was 7.0.27 that was waiting for you, iirc
<nacc> mdeslaur: i can prep 7.0.28 for you today
<mdeslaur> oh, you had 7.0.27?
<nacc> (or 7.0.26, i can't remember)
<mdeslaur> darn, sorry, did I drop the ball on that one?
<nacc> yeah, it was some not yet published version
<nacc> mdeslaur: it's alright :)
<mdeslaur> I thought the one you wanted me to look at was the one that went to -updates
<nacc> mdeslaur: sorry, miscommunication on my part
<mdeslaur> nacc: so we need 7.0.28 and 7.1.15
<mdeslaur> and 7.2.3
<nacc> mdeslaur: ack
<mdeslaur> nacc: if you don't mind preparing 7.0.28 and 7.1.15, ping me and I'll push them straight to -security
<sdeziel> nacc: I updated LP: #1744148 but it only covers the 7.0 branch. Would it help if I add the 7.1 and 7.2 branches?
<ubottu> Launchpad bug 1744148 in php7.0 (Ubuntu Xenial) "[MRE] Please update to latest upstream release 7.0.28" [Wishlist,In progress] https://launchpad.net/bugs/1744148
<nacc> mdeslaur: yep, will do
<nacc> sdeziel: if the CVEs are the same, probably
<sdeziel> nacc: OK I'll check if there is an overlap
<nacc> sdeziel: thanks -- otherwise, it probably makes sense to have distinct bugs
<nacc> mdeslaur: and yeah, i plan on doing the 7.2 update today anyways (i believe it might fix a bug someone is hitting)
<sdeziel> nacc: they all overlapped so I extended the existing bug
<nacc> sdeziel: thank you!
<sdeziel> that's the least I can do, I'm pretty glad that you take such good care of PHP :)
<jlnl> TJ-, thank you for the useful pointers. I've got to go now.
<sudormrf> with SELinux, if you are running as a low priviledged user and get an "invalid argument" error when you try to chcon a file, is this due to permissions?
<sudormrf> I am following the selinux wiki example
<sudormrf> https://selinuxproject.org/page/Guide/Contexts > to what I refer
<nacc> sudormrf: are you using selinux on ubuntu?
<sudormrf> I am testing it
<nacc> sudormrf: given that the selinux package in ubuntu hasn't been updated since 12.04, i'm not sure how well it is supposed to work
<sudormrf> ok, that's fine. curious about the "error" I am seeing
<sudormrf> I am _betting_ it's permissions
<nacc> sudormrf: might be easiest to ask a selinux channel?
<sudormrf> oh, didn't know there was one :D
<nacc> sudormrf: or mailing list, or whatever
<sudormrf> thanks :)
<nacc> ahasenack: running the samba import now
<MitchT> Google failed me.  My search was "why was php7.1 not included in bionic"
<MitchT> i'm sure theres a better reason than "we have 7.1 now"
<MitchT> UGH 7.2 i mean
<nacc> MitchT: we don't want to have two versions of php in the archive
<MitchT> makes sense
<MitchT> kinda rough for all those magento users out there scrambling for an mcrypt fix, but honestly we should have been using openssl by now.  I know there was a php7.1x bug, but still
<MitchT> thank the maker for pecl.
<nacc> MitchT: afaik, magento is not in ubuntu
<MitchT> oh absolutely, no more likely than wordpress to be in it
<MitchT> its an ecommerce platform
<MitchT> but its still using mcrypt, and that lib's deprecation and omission has left a lot of us coming up with "creative fixes"
<MitchT> ;)
<MitchT> also, being an encryption library, you might understand how its scary to just swap it out for something else
<nacc> MitchT: it's equally scary to be using something deprecated :)
 * MitchT agrees
<nacc> MitchT: seems like it would be equally easy to run magento on php7.0 on 16.04 in a VM or container
<MitchT> it would be..  and that was an option, but i really want to go with 18.  our current site was completed in 2011
<MitchT> granted there have been a lot of adjustments, but its still running ubuntu 14
<MitchT> our IT dept is small and we have a lot of projects
<nacc> MitchT: you can still run 18.04 as the host
<MitchT> i am
<MitchT> i use php7.2 and pecl install mcrypt
<MitchT> makes magento very happy on a tiny azure vm
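MitchT's recipe, spelled out; the package names and the pinned PECL release are what commonly worked for PHP 7.2 at the time, so treat them as assumptions to verify:

```shell
# build prerequisites for compiling the extension
sudo apt install php-dev php-pear libmcrypt-dev

# mcrypt was dropped from PHP 7.2 core but survives as a PECL extension;
# pinning a release (1.0.1 here) avoids pulling in a pre-release build
sudo pecl install mcrypt-1.0.1

# register and enable the extension for PHP 7.2
echo "extension=mcrypt.so" | sudo tee /etc/php/7.2/mods-available/mcrypt.ini
sudo phpenmod mcrypt
```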
<MitchT> hehe... hope Odd_bloke / team gets that azure image bug fixed soon. My automated deployments are locked to a build from Feb.
<MitchT> ubuntu 18 has exceeded my expectations by a longshot, i've not used something that works as well as this in a LONG time.
<Onepamopa> ok, can someone explain how two dhclient processes would be running for a single interface, resulting in flood to the dhcp server... ?
<dpb1> Onepamopa: cloud workload?
<Onepamopa> what the .... is that?
<dpb1> I'll take that as a no
<nacc> MitchT: yeah that works too -- tbh, based upon xevious' recommendation and thinking about it, i think long-term, we want to drop everything but the interpreter, pecl and composer from ubuntu at some point -- it's just not possible to keep everything current
<nacc> rbasak: ahasenack: fyi, the samba patches-applied import fails due to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=850843
<ubottu> Debian bug 850843 in dpkg-dev "dpkg-source in stretch cannot extract samba_3.6.5-2.dsc" [Important,Fixed]
<nacc> which is equally true for any modern dpkg-source in ubuntu relative to the one that was in precise (apparently)
<nacc> ahasenack: i believe samba patches-unapplied has been reimported fully now
<nacc> rbasak: for the git-ubuntu merge fix (LP: #1734364), I think I'll need the tags support, as well, in order to create a Repo (was CommitGraph) object to pass in, write it to the repository, and then do a lookup of some objects in that repository (I don't believe the placeholder exists once we've written it, right?)
<ubottu> Launchpad bug 1734364 in usd-importer "merge start fails with bind9" [Critical,In progress] https://launchpad.net/bugs/1734364
<nacc> rbasak: http://paste.ubuntu.com/p/VSbYQqTPhp/ for reference to what i mean, hopefully
<jayjo> sorry, just asked this on ubuntu but I believe it fits here more. I'm trying to run a python server on ubuntu using systemd to be reverse-proxied by nginx. I'd like to automatically restart the server every day or week or some arbitrary amount of time... if I use WatchdogSec= in [Service] of the unit file, will this just kill the process or send a SIGINT?
<sarnold> i have to think you'd be better served by a cronjob entry
<dpb1> jayjo, ya, you are likely to get better replies here, though I'm not sure of the right unit incantation
<dpb1> same, it would be trivial there
<jayjo> so a service to start it and a cronjob to restart it?
<sarnold> I mean, *best* would be to sort out whatever resource consumption problem you've got that makes a restart seem like a good idea :) but a cronjob to run something like systemctl restart foo.service  can't be so bad
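For the periodic restart itself, the cron route sarnold mentions is a one-liner; a sketch (the unit name foo.service is a placeholder):

```shell
# /etc/cron.d/restart-foo -- restart the service every night at 04:00;
# unlike user crontabs, cron.d entries need the user field ("root")
0 4 * * * root systemctl restart foo.service
```

As to the original question: WatchdogSec= is not a timed restart at all; it expects the process to ping systemd via sd_notify("WATCHDOG=1") and, when the pings stop, kills it (SIGABRT by default), not SIGINT. A pure-systemd approximation of a periodic restart would be RuntimeMaxSec= in [Service] combined with Restart=always.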
<nacc> rbasak: ah i think we want to update our pytest config to be xfail_strict=true (which will make xpass also a fail)
<axisys> need some help with this.. logger -p mail.info writes to /var/log/mail.log fine, but not sending to remote IP .. mail.info @192.168.1.100 is there
<axisys> 14.04
<sarnold> axisys: does *anything* make it to that remote syslog server?
#ubuntu-server 2018-03-15
<axisys> sarnold: well i was capturing for udp pkts from this server .. nothing going out
<axisys> no pkts even trying to send to remote.. pure quiet
<sarnold> axisys: how about trying nc to send udp to the remote host?
<axisys> ok.. let me check
<sarnold> axisys: that would let you sort out if it is a matter of configuring rsyslog or fixing firewall rulesets or routing or something similar
<axisys> right.. how do I generate a udp pkt with nc
<axisys> got it..
<axisys> ok nc 192.168.1.100 works .. remote server gets the pkts.. so it is issue with rsyslog then
<axisys> nc 192.168.1.100 514
<sarnold> are you using -u ? or not
<axisys> nc -u 192.168.1.100 514 works too.. pkt recvd on remote side
<sarnold> oh okay
<sarnold> so that leaves troubleshooting rsyslog :/
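A sketch of the forwarding rule axisys describes (the IP and facility are from the discussion; the drop-in file name is arbitrary):

```shell
# /etc/rsyslog.d/50-mail-remote.conf
# a single @ forwards over UDP, @@ over TCP; :514 is the default port
mail.info @192.168.1.100:514
```

After editing, restart rsyslog (`sudo service rsyslog restart` on 14.04, which runs it under upstart) and watch with `sudo tcpdump -ni any udp port 514` to confirm packets actually leave the box.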
<axisys> right..
<sarnold> hrm, do you need to restart or reload rsyslog to pick up changes? have you started or restarted rsyslogd since adding this line?
<axisys> I did .. let me do it again
<axisys> I killed it and it came back up right away
<axisys> ah.. upstart
<axisys> this is pre-start /lib/init/apparmor-profile-load usr.sbin.rsyslogd and /etc/default/rsyslog has RSYSLOGD_OPTIONS="" and then just an exec rsyslogd $RSYSLOGD_OPTIONS ..
<sarnold> do you have anything in dmesg?
<axisys> [20646747.288516] init: rsyslog main process ended, respawning
<sarnold> heh. handy :) but sadly not much help
<axisys> lol
<sarnold> nacc: pity this was marked private security .. it might have been easier to address ~ten days ago. Is this something for your team? 1753018
<nacc> sarnold: looking
<nacc> sarnold: yes
<nacc> sarnold: i'll make sure it gets noticed tmrw
<cpaelzer> good morning #server
<lordievader> Good morning
<Mava> i'm facing quite an interesting case with sas storage in hp server. the storage is configured to be 12Tb, but the ubuntu server sees with blockdev only 1.1Tb. Any tips ?
<OpenTokix> Mava: wrong partition table?
<Mava> OpenTokix: should it affect the information blockdev reports ?
<OpenTokix> Mava: It's where the sectors and sector-size is stored.
<Mava> OpenTokix: good point, unfortunately converting it to gpt did not fix anything.
<OpenTokix> Does the num sectors and sector size add up to wrong or right?
<Mava> once you said: both are wrong
<Mava> like. it calculates it right
<Mava> but the amount of sectors and the sector size is not the ones specified in the array configuration utility
<Mava> now i've got a clue. thanks OpenTokix !
<Cheez> how do people monitor mdadm arrays? like, I have 2 different arrays, a 2 drive raid 1 for / and /boot and an 8 drive raid 6 for /mnt/storage - I know I can look at mdadm --detail and check the state is clean and disks are active, but is there a best practice way of keeping on top of it in case a disk fails?
<patdk-lap> I just let munin do it for me
<dlloyd> you can also configure MAILADDR in mdadm.conf for alerting
<Cheez> ooh, that'd probably be good enough
<Cheez> although i presume it expects there to be a local MTA rather than being able to specify an SMTP server?
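dlloyd's MAILADDR suggestion, sketched out. On Cheez's follow-up: mdadm hands alerts to the local sendmail binary, so it does expect a local MTA rather than taking an SMTP server directly; a lightweight MTA configured as a smarthost/relay is enough:

```shell
# /etc/mdadm/mdadm.conf -- the mdadm monitor daemon mails this address
# when an array degrades (address is a placeholder)
MAILADDR admin@example.com

# fire a test alert for every array to confirm mail actually arrives
sudo mdadm --monitor --scan --oneshot --test
```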
<cpaelzer> ahasenack: rbasak: did you ever realize that systemd loses "fast" error messages?
<ahasenack> not really
<ahasenack> what did you (not) see?
<OpenTokix> Mava: cool, good luck
<rbasak> Am I missing some context here?
<cpaelzer> slow: http://paste.ubuntu.com/p/2nybTkkm3h/
<cpaelzer> fast: http://paste.ubuntu.com/p/mSfYPHssfT/
<cpaelzer> essentially echo + exit vs echo + sleep + exit
<cpaelzer> result: http://paste.ubuntu.com/p/TT38mQ83Mf/
<ahasenack> you must be debugging something interesting :)
<cpaelzer> many services that have wrappers do like "initial check, error out if failed"
<cpaelzer> I realized while working on one that the message isn't there
<cpaelzer> ahasenack: this is what got me to this
<ahasenack> for what it's worth, I do find that the logs from systemd are lacking detail
<cpaelzer> ahasenack: rbasak: do you see that the "fast" output is not in the status info
<ahasenack> maybe because of that
<rbasak> Interesting
<cpaelzer> ahasenack: I find the logs more accessible than before systemd, but this is odd
<cpaelzer> I just wonder - might this be a general flaw
<cpaelzer> or do I miss something
<rbasak> I can't think of any architectural reason why there has to be a race like that
<cpaelzer> but I feel my example is not simplified a lot, and still reproduces
<ahasenack> are they also not in the output of journalctl?
<cpaelzer> ahasenack: they are in the journal
<ahasenack> just not in the status output?
<cpaelzer> just not on systemctl status, which is where people look first
<ahasenack> ohh
<ahasenack> that just got more interesting
<cpaelzer> and I mean, sleep fixes it... c'mon, we are not in 1998
<ahasenack> sleep fixes so many things
<cpaelzer> I tried to echo to stderr (unbuffered) but that didn't change anything
<ahasenack> how do you echo unbuffered?
<ahasenack> or is that stderr's default behavior
<cpaelzer> the default of stderr
<cpaelzer> ahasenack: rbasak: but none of you points out obvious flaws right - so I might ask #systemd then
<ahasenack> and even after a while systemctl status still won't show it?
<ahasenack> right
<cpaelzer> ahasenack: not after 7 minutes
<ahasenack> good enough
<ahasenack> what about the <3> prefix, does that mean anything special?
<ahasenack> or just log level
<ahasenack> (which would be special)
<cpaelzer> ahasenack: that makes log levels for journal/systemd
<cpaelzer> but yeah, lets try without
<cpaelzer> removing the log level has no effect
<cpaelzer> but was worth a try
<ahasenack> ok
<cpaelzer> sleep 0.01 is enoug
<cpaelzer> h
<cpaelzer> so just any sort of interruption
<cpaelzer> rbasak: ahasenack: xenial not affected, but showing in bionic
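cpaelzer's pastes are no longer available; a minimal reconstruction of the repro, with the unit and script names invented for illustration:

```shell
#!/bin/sh
# /usr/local/bin/fastfail.sh -- the "fast" variant from the discussion.
# Run via a simple unit (ExecStart=/usr/local/bin/fastfail.sh): the
# message lands in the journal but is missing from `systemctl status`.
echo "<3>initial check failed"   # <3> = LOG_ERR priority for journald
exit 1

# "slow" variant: inserting even `sleep 0.1` before the exit is enough
# for journald to attribute the line to the unit before its cgroup is gone
```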
<TJ-> Is there a recommended way to set the FQDN on a pure IPv6 setup? Equivalent to the IPv4 /etc/hosts line "127.0.1.1 hostname.domain.tld hostname"? The "::1" is generally set to a similar list as '127.0.0.1', but is there an IPv6 version of '127.0.1.1'?
<patdk-lap> no
<patdk-lap> but I dunno why your using loopback for fqdn, that just doesn't work
<TJ-> That explains why my search-fu has been failing :)
<patdk-lap> you could always use ::127.0.1.1
<TJ-> I'm not, I've been investigating a pure IPv6 deployment/config with IPv4 totally disabled in kernel to provoke bugs and other problems
<TJ-> Recommendation is not but FQDN in /etc/hostname; if sticking to that then there ought to be a recommended location for the domain
<TJ-> s/but/put/
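In other words: there is no IPv6 counterpart to the Debian/Ubuntu 127.0.1.1 convention, because IPv6 loopback is the single address ::1/128 rather than a whole /8 like 127.0.0.0/8. The usual pattern is simply to put the FQDN on the ::1 line (hostname.domain.tld is a placeholder):

```shell
# /etc/hosts on an IPv6-only machine
::1   hostname.domain.tld hostname
::1   localhost ip6-localhost ip6-loopback
```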
<OpenTokix> Anyone know what the point of the 127.0.1.1 address is? - From what I understand it's just a semantic difference. No technical difference.
<TJ-> OpenTokix: i /think/ originally it was to prevent errors where 'localhost' was removed from the '127.0.0.1' entry
<OpenTokix> since the entire network 127.0.0.0/8 is the same interface.
<patdk-lap> no
<patdk-lap> it's so you can have multible things going on
<patdk-lap> I bind a lot of things to different loopback ip's
<cpaelzer> rbasak: ahasenack: FYI that is https://github.com/systemd/systemd/issues/2913
<ahasenack> I was following your discussion in #systemd
<ahasenack> nice that you found the bug
<cpaelzer> it essentially fails to associate the message with the unit before the process is gone
<cpaelzer> and it is therefore missing from systemctl status and journalctl -u output
<ahasenack> that's an old bug :/
<cpaelzer> it does not affect my xenial system it seems
<cpaelzer> as I understand it we might lose any late message
<cpaelzer> as long as it comes very shortly before the PID goes away
<cpaelzer> :-/
<cpaelzer> But quite often the last message before something dies is the most important one
<OpenTokix> patdk-lap: yes, but they all bind to the same interface.
<patdk-lap> isn't that the whole point?
<patdk-lap> if it didn't, useless
<OpenTokix> it is very useful, but also a magic interface
<patdk-lap> nothing magic about it
<patdk-lap> now, dummy, that is a magical interface :)
<OpenTokix> It binds to a whole /8, but only one of those ips is showing.
<patdk-lap> no it doesn't
<patdk-lap> it binds to exactly one ip
<patdk-lap> but it *routes* the rest
<patdk-lap> just like anything else
<cpaelzer> rbasak: ahasenack: for the sake of awareness on the Ubuntu side I filed bug 1756081
<ubottu> bug 1756081 in systemd (Ubuntu) "journald is unable to attribute messages incoming from processes that exited their cgroup" [Undecided,New] https://launchpad.net/bugs/1756081
<ahasenack> ok
<Slashman> hello, I have an issue with the HWE kernel for xenial "linux-image-4.13.0-37-generic", it's freezing my server some minutes after the boot, is this a known issue?
<rbasak> Slashman: try #ubuntu-kernel
<Slashman> rbasak: thx
<ahasenack> rbasak: hi, are you reviewing https://code.launchpad.net/~paelzer/ubuntu/+source/chrony/+git/chrony/+merge/341461 or was that comment just a drive-by?
<rbasak> ahasenack: I claimed the review as I was curious to look anyway
<ahasenack> cool
<rbasak> cpaelzer: what does EFF_ stand for OOI?
<cpaelzer> EFFECTIVE
<cpaelzer> a rename can be done
<rbasak> Ah
<cpaelzer> just let men know
<cpaelzer> me
<cpaelzer> I just replied for the default conffile change
<madLyfe> https://gist.github.com/61a118ed1c8437e2b480c6049cee07d7
<madLyfe> not sure why i got those errors after running update && upgrade
<sarnold> try running it as root
<madLyfe> run commands it suggests or run update && upgrade again?
<sarnold> update and upgrade again as a first step
<madLyfe> its odd because i have 4 identical servers and only two of them had these errors.
<madLyfe> and i always update them at the same time.
<JimBuntu> madLyfe, what happens if you try `touch /var/lib/apt/lists/test.txt` ?
<madLyfe> well it looks like the updates are going through at this point with root.
#ubuntu-server 2018-03-16
<cpaelzer> good morning
<lordievader> Good morning
<cpaelzer> hiho lordievader
<cpaelzer> hope you have a good friday
<lordievader> It is raining so far
<lordievader> How are you doing?
<cpaelzer> enjoying the same rain a bit south of you I think :-)
<lordievader> You may have all of the rain
<cpaelzer> rbasak: are you around to talk about the preferred way to configure the fallback on chrony?
<rbasak> cpaelzer: HO in a bit? I couldn't sleep last night so did some work then so that I could start late this morning.
<cpaelzer> rbasak: sure, ping me when you feel ready
<whatevermee> Greetings!
<whatevermee> Any help with exporting locales on my mutilated 16.04 VPS?
<whatevermee> This question is pretty it:
<whatevermee> https://askubuntu.com/questions/1015398/cant-set-locales-ubuntu-16-04
<albech> hi all. can anyone recommend a caldav server? I dont want something like owncloud/nextcloud that provide tons of other things.
<rbasak> cpaelzer: ping
<rbasak> albech: I use radicale
<albech> rbasak: i looked at that.. any good?
<rbasak> albech: 16.04's radicale works well for me against DAVdroid
<rbasak> It's pretty simple to set up and use.
<cpaelzer> rbasak: ~5 min to churn through a rebase then on the server standup HO link?
<rbasak> cpaelzer: ack
<albech> rbasak: guess il spin it up and have a look at it. cheers
<cpaelzer> nacc: FYI - I saw gpg errors on e.g. git ubuntu merge start
<cpaelzer> nacc: were there changes in that area on a recent snap push?
<cpaelzer> I'm on xenial, and I think you use the systems gpg out of the snap
<cpaelzer> maybe due to the changes since xenial this affects me but didn't show up for you?
<cpaelzer> nacc: repro would be start the merge on strongswan
<cpaelzer> I can get around it for now by manually doing the steps it would do
<cpaelzer> just wondering if you see that as well and/or known related changes before considering a bug
<jamespage> coreycb: just sniffing bionic with the charms - horizon dashes seem to be blocked up on the changes to include openstack-auth in horizon itself?
<coreycb> jamespage: is that what they're blocked on? i never noticed what the issue was.
<coreycb> jamespage: where did you see that?
<cpaelzer> rbasak: I pushed the chrony changes as discussed to the MP
<cpaelzer> rbasak: if you could give your +1 on that I could then open it up to the other folks I had in mind
<coreycb> tinwood: i subscribed you to the pylxd sru bug. 2.0.7 is ready for testing in xenial-proposed.
<tinwood> thanks coreycb; I'm on swap today, but still idly trying to fix another pylxd issue amongst other activities (it's raining REALLY hard here, so an indoor day).
<coreycb> tinwood: ah you might want to swap the swap then!
<coreycb> tinwood: typically it'll have to stay in proposed 7-10 days so plenty of time to test
<tinwood> coreycb, kk; so I'll just need to apt install it from proposed, run the tests, and then post the results back to the bug?
<coreycb> tinwood: actually just 7, i don't know where i got the -10 from
<coreycb> tinwood: yep
<tinwood> coreycb, I'll add a reminder to my trello card to do that on Monday. :)
<rbasak> cpaelzer: ack. otp.
<jamespage> coreycb: its not exactly opaque
<MitchT> Odd_Bloke, did you have any luck capturing a vhd from azure? I've been trying to do that, as well as looking at boot diagnostics.
<MitchT> looks like cloud-init might be causing some issues
<MitchT> boot diagnostics log from azure - https://hastebin.com/folimuyuci.vbs
<MitchT> INFO Waiting for ssh host key be generated at /etc/ssh/ssh_host_rsa_key.pub [1788 attempts remaining, sleeping 1s]
<MitchT> WARNING cloud-init does not appear to be running
<rbasak> smoser: ^
<rbasak> or rharper ^
<MitchT> https://bugs.launchpad.net/cloud-images/+bug/1755565
<ubottu> Launchpad bug 1755565 in cloud-images "Unable to deploy azure template with Ubuntu18.04 Daily image past March 8th" [High,Confirmed]
<MitchT> i have just been trying to help with this
 * MitchT does what he can
<smoser> MitchT: that is bug 1755965
<ubottu> bug 1755965 in cloud-init "util.subp regression: no longer accept commands as string" [High,Fix committed] https://launchpad.net/bugs/1755965
<smoser> MitchT: thanks for ringing a bell though.
<MitchT> i'm working on another approach... https://github.com/Azure/WALinuxAgent/issues/1058 - i'm wondering if this is related.
<MitchT> i'm sure theres umpteen million reasons cloud-init might not be firing up
<MitchT> smoser whats the usual turnaround time before these fixes get worked into the daily for azure cloud
<smoser> Odd_Bloke: ^
<MitchT> only reason i have ants in my pants about it is because my project is in testing but it's not running the actual daily. The CI build is theoretically broken but technically i've been able to make it work
<MitchT> i'd like to use latest by mid april
<MitchT> if thats doable
<smoser> MitchT: well, as you showed iun that bug, it seems dailies are popping out every 2 days or so right now.
<smoser> it is in ubuntu. so any image built after last night sometime will have it.
<MitchT> thanks smoser
<MitchT> today's image didn't seem to do it. I need to run the command to list the images
<smoser> MitchT: the times (dates) are UTC
<smoser> so a 20180315 would not really be expected to contain something i put in at the end of my day (eastern) yesterday
<smoser> so next one should have it though
<JimBuntu> smoser, There is only a 4 hour diff between Eastern and UTC right now... were you *that* close to the end of the day?
<MitchT> 18.04.201803150 .. well it didn't get that one
<MitchT> ah i need to read back before i post!
<MitchT> sorry
<MitchT> its still epic the fix went out so fast. Thanks @ smoser && @ Odd_Bloke
<smoser> JimBuntu: well, the date 20180315 means it started on that day... no idea when.
<JimBuntu> smoser, good point
<nacc> cpaelzer: pastebin?
<MitchT> you guys ever use hastebin? its like pastebin without all the extra junk
<nacc> cpaelzer: and no, no changes in that area
<JimBuntu> MitchT, I use termbin, don't think I have seen hastebin. Have you seen any issues with hastebin?
<MitchT> well, i just like it because its really fast, and it also does decent highlighting
<MitchT> its bare bones
<nacc> MitchT: people can use whatever they want
<MitchT> yep
<jamespage> smoser: hullo
<jamespage> smoser: is there a reliable way to get up-to-date simplestreams packages these days?
<smoser> packages ?
<smoser> bionic is up to date with trunk i think.
 * smoser checks
<smoser> and also then . i think https://code.launchpad.net/~simplestreams-dev/+archive/ubuntu/trunk
<smoser> should be good
<smoser> you're probably wanting a fix for https://launchpad.net/bugs/1686086 in ubuntu
<ubottu> Launchpad bug 1686086 in simplestreams "glance mirror and nova-lxd need support for squashfs images" [High,Fix committed]
<smoser> i will upload now.
<jamespage> smoser: ta - it was general query as planning some work to improve the glance sync process in charms; it would be good to get a consistent version for things like keystone v3 support etc...
<jamespage> smoser: I'm not sure how far behind ss is in xenial for example
<smoser> jamespage: well the fix for v3 is in sru... i have mps for them. and freyes is supposed to be walking them through
<smoser> https://trello.com/c/9QvyIeMT
<jamespage> smoser: awesome thanks
<jamespage> bbcmicrocomputer, thedac: ^^ that probably answers most of the questions we had; bionic will be up-to-date, xenial getting SRU's
<thedac> cool
<bbcmicrocomputer> that's great.. thanks smoser, jamespage
<smoser> i just uploaded
<smoser> to bionic
<smoser> nacc: would you be opposed to changing the namespace of your [gitubuntu] 'lpuser'
<smoser> to be [launchpad]
<smoser> i have wanted to get at that sort of information too and launchpad just seems like a better name.
<nacc> smoser: would something other than git-ubuntu read it?
<nacc> smoser: i'm not opposed, but we'd need to perform an in-place upgrade (presumably)
<nacc> not a big deal
<nacc> and i'd change it from lpuser to user then :)
<nacc> e.g., convert gitubuntu.lpuser to launchpad.user if gitubuntu.lpuser is present
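The in-place upgrade nacc describes could look like this (the key names are from the discussion; the migration logic itself is a sketch):

```shell
# migrate gitubuntu.lpuser -> launchpad.user if the old key is set
old=$(git config --global --get gitubuntu.lpuser || true)
if [ -n "$old" ]; then
    git config --global launchpad.user "$old"
    git config --global --unset gitubuntu.lpuser
fi
```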
<smoser> yeah. well, i have stuff that wants to know this same bit of information
<nacc> stuff that ... is public?
<smoser> "what is your launchpad user name so I can push to git?"
<nacc> or stuff like your tooling
<nacc> :)
<nacc> yeah
<nacc> smoser: this might be better asked in #launchpad?
<nacc> that is, i'm not sure git-ubuntu should touch a reserved (in my mind) namespace like 'launchpad'
<smoser> https://github.com/cloud-init/qa-scripts/blob/master/scripts/review-mps is one example.
<nacc> we could read it if it's set, i suppose, but i'd want cjwatson's permission to use it
<smoser> it asks for a --git-user
<nacc> smoser: yep
<smoser> yeah, i think having cjwatson approval is fine.
<smoser> but with that approval i think it makes sense for git ubuntu to write it.
<nacc> sure
<smoser> i saw sparkiegeek has gitlptools
<smoser> which i think must have to ask the same question also
<smoser> https://launchpad.net/gitlptools
<nacc> smoser: yeah, i'm wondering if it should go there, and we can use that, or something
<nacc> smoser: the problem with two tools managing it is it becomes kind of an ABI
<smoser> nacc: not really.
<smoser> lots of tools "manage" [user]/name and [user]/email
<ram__> i am using an AWS ubuntu image and have been struggling to get grub to boot my kernel regardless of what i do to grub. Is there something special needed on AWS for ubuntu images?
<nacc> Odd_Bloke: smoser: --^
<nacc> smoser: they do?
<nacc> smoser: given those are specifically defined by Git, they have to do so in a way that Git will understand
<nacc> smoser: which is, in effect, an ABI between those applications and Git
<Odd_Bloke> ram__: Can you give us some more details on what issues you're seeing?  What kernel are you trying to boot?
<nacc> Odd_Bloke: thanks
<ram__> Odd_Bloke, yes. i have just built and installed a 4.16-rc4 kernel.  and trying to boot on that kernel.  I ran grub-mkconfig -o /boot/grub/grub.cfg and then changed the default entry to my kernel.
<ram__> regardless of what I do,  the kernel 4.16.0-rc1 kernel gets booted.
<Odd_Bloke> ram__: What virtualisation are you using?
<ram__> Odd_Bloke, i dont know. must be xen or something. it is transparent to me.
<Odd_Bloke> ram__: No, sorry, which EC2 virtualisation type: PV or HVM?
<ram__> Odd_Bloke, how can i tell that? is this something that I can find on the AWS console?
<Odd_Bloke> ram__: It is, yeah.
<ram__> i see hvm
<ram__> Odd_Bloke, ^^
<arooni> is it better to send transactional mail oer SSL ?
<arooni> and if so ; how would i check to see if that port was opened
<arooni> i'm not trying to run a mail server BTW, just send via api to email provider
<Odd_Bloke> ram__: OK, you might want to try running update-grub; that's the Ubuntu way of updating grub configuration.
<ram__> Odd_Bloke, did it. but has no effect.
<Odd_Bloke> If that doesn't work, then really it's just a grub configuration issue.
<ram__> it continues to pick 4.16.0-rc1 kernel and boots
<Odd_Bloke> HVM just boots in to GRUB normally, so I don't think you're seeing an EC2-specific issue.
<ram__> i want to boot from a 4.16-rc4 kernel.
<ram__> the submenuentry for that is 0
<ram__> the submenuentry for -rc1 kernel is 3
<ram__> i have default set to 0
<ram__> but still it boots from -rc1
<ram__> unfortunately ec2 does not provide a way to connect to the console of the VM and choose the boot kernel manually.
<ram__> if it did, life would be easier.
<sarnold> I have a vague memory that the GRUB_DEFAULT setting doesn't work as you might expect
<ram__> sarnold, what should be done in that case?
<ram__> any workaround?
<rbasak> For submenus use '>'
<rbasak> eg. GRUB_DEFAULT="1>3" for submenu 1 option 3.
<ram__> ok. let me try that.
<rbasak> After changing GRUB_DEFAULT there's a tool you have to run to update things IIRC. I forget what it's called.
<ram__> it's update-grub
<sarnold> wow. I was going through stack overflow trying to find it, and wouldn't you know, someone commented TODAY on a post I made a year ago about it! https://askubuntu.com/a/858520/33812
<sarnold> thanks stackexchange :D
<rbasak> That's the page I was looking for, thanks.
<rbasak> I wanted to reference it but failed to find it.
<sarnold> "It is also necessary to include the above numbers in quotes. It will not work otherwise." is probably what I was remembering.
<rbasak> Too many generic questions and answers about GRUB_DEFAULT but not specifically the submenu syntax.
<rbasak> Yeah it's got to be in quotes because it's sourced as shell.
<rbasak> And in shell > is special
<sarnold> aka "good luck finding *this* bug" :)
<rbasak> "Why do I have a file called '3'?"
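Put together, the submenu-default fix rbasak and sarnold describe looks roughly like this sketch (the "1>3" value is illustrative; pick the submenu/entry numbers that match your own grub.cfg):

```shell
# Set the default boot entry to submenu 1, entry 3 (both zero-indexed).
# The quotes are required: /etc/default/grub is sourced as shell, so an
# unquoted > would be treated as a redirect and create a file named "3".
sudo sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT="1>3"/' /etc/default/grub

# Regenerate /boot/grub/grub.cfg from the new setting.
sudo update-grub
```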
<ram__> sarnold, did exactly that. but still it managed to boot from my -rc1 kernel :(
<ram__> something is sticking it to boot from there.
<ram__> i get a feeling that it does not even consult the /boot/grub/grub.cfg file while booting.
<ram__> it seems to have a mind of its own.
<ram__> if [ -s $prefix/grubenv ]; then
<ram__>   set have_grubenv=true
<ram__>   load_env
<ram__> fi
<ram__> if [ "${next_entry}" ] ; then
<ram__>    set default="${next_entry}"
<rbasak> Oh
<ram__>    set next_entry=
<rbasak> Yeah
<ram__>    save_env next_entry
<ram__>    set boot_once=true
<ram__> else
<ram__>    set default="1>0"
<rbasak> Clouds commonly still use /boot/grub/menu.lst or similar.
<ram__> fi
<ram__> ^^ that is what i have in my grub.cfg
<rbasak> I don't know about AWS for your instance type specifically.
<rbasak> grub-legacy-ec2 etc.
<ram__> rbasak, in my menu.lst i dont even have an entry for my -rc1 kernel.
<ram__> /boot/grub/menu.lst  file that is
<rbasak> I don't know then, sorry.
<Odd_Bloke> menu.lst isn't used for HVM.
<jayjo> I have a unit file that looks like this exactly: https://bpaste.net/show/034b77f9f0e6 and in my syslogs I see that it tries to start too quickly. It's a simple bash script, why would it fail? Is there a way to get more verbose logs?
<sarnold> maybe journalctl ?
<jayjo> here is the log values I'm seeing: https://bpaste.net/show/bb8ddaed2285
<sarnold> I wouldn't be surprised if it's insanely hard to use a shell script along with type=forking units ..
<jayjo> journalctl is identical: https://bpaste.net/show/b38e4b47c2a6 (other was pulled from syslog with some intermediate results)
<jayjo> what's the alternative to using type=fork?
<sarnold> " simple, forking, oneshot, dbus, notify or idle"
<sarnold> https://www.freedesktop.org/software/systemd/man/systemd.service.html
<jayjo> does systemctl run as sudo? Can that be specified?
<jayjo> it runs as root as the default, right?
<nacc> jayjo: systemctl runs as who you invoke it as ... you should be using `sudo systemctl ...` if you want it to run as root
<jayjo> simply executing that shell script works perfectly from the command line with the DISPLAY=:0 set, but it doesn't work from systemctl ... I tried both as forking and simple.
<jayjo> is there a way to get more verbose output of why it fails?
<jayjo> in order to debug?
<sdeziel> jayjo: the unit probably runs as root so I'm not sure it has access to DISPLAY=:0
<jayjo> I think you're right ... is there a way to pass it in through ExecStart?
<sdeziel> jayjo: I'd first try by adding User=$YOUR_USERNAME to the unit
<sdeziel> assuming it can run with your user's privs
<sdeziel> jayjo: for better support, you might want to try #systemd
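A minimal sketch of the kind of unit being discussed, assuming the script can run in the foreground (the path, user name and description are placeholders, not taken from jayjo's paste):

```ini
# /etc/systemd/system/myscript.service (hypothetical)
[Unit]
Description=Example long-running shell script
After=graphical.target

[Service]
# Type=simple expects the process to stay in the foreground, which is
# usually easier to get right than Type=forking for a shell script.
Type=simple
# Run as an unprivileged user and pass the display through, per the
# User=/DISPLAY suggestions above.
User=youruser
Environment=DISPLAY=:0
ExecStart=/usr/local/bin/myscript.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing, run `sudo systemctl daemon-reload`, restart the unit, and check `journalctl -u myscript` for errors.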
<cfnc> Hello everybody guys. A simple question. I have some films on my server and I would watch them without copying them locally on my Mac. So my question is about the protocol I should have to use to watch them on my Mac: if I have to use samba or ftp or what? Thank you in advance
<dpb1> samba would likely be the easiest
<dpb1> for simple file sharing to mac os x
<sarnold> forget you ever heard about ftp
<cfnc> sarnold: what you mean, sorry I didn't understand
<sarnold> cfnc: ftp is just an all around terrible protocol :)
<sarnold> sftp is better in every respect
<sarnold> samba is probably your most convenient answer here
<sarnold> but sshfs via fuse might not be terrible
<cfnc> I've already used samba in the past so I already know how to start. I'll give it a try. Thank you very much guys ;)
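For the record, sarnold's sshfs alternative looks roughly like this (server name and paths are placeholders):

```shell
# One-time setup: sshfs lives in the Ubuntu archive.
sudo apt install sshfs

# Mount the remote film directory over SSH; FUSE makes it look local.
mkdir -p ~/films
sshfs user@server:/srv/films ~/films

# ...play files directly from ~/films, then unmount when done:
fusermount -u ~/films
```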
<mojtaba> Hello, I have two remote machines, I can login to remote 2 from remote 1, but I can not login to remote 2 from my local machine. Do you know how can I rsync a directory from remote 2 to my local machine?
<sarnold> can remote 1 log in to your local machine?
<mojtaba> sarnold: no
<mojtaba> I can login to both
<mojtaba> just remote 1 I mean, from local
<mojtaba> and to remote 2 from remote 1
<dasjoe> "ssh -L 2222:remote2:22 remote1", then rsync from localhost:2222?
<sarnold> oh nice, that's simpler than anything I'd thought of :)
#ubuntu-server 2018-03-17
<_KaszpiR_> sarnold kodi + dlna
<_KaszpiR_> or samba/nfs as people said
<_KaszpiR_> kodi is a media center app, it can stream audio/video and so on; DLNA is a protocol used for streaming and controlling devices - worth reading up on
#ubuntu-server 2019-03-11
<lordievader> Good morning
<devster31> hi guys, I was looking at RPI server images, what does "preinstalled" mean? is it like the cloud images with some default users set but customizable with user-data?
<lotuspsychje> maybe the arm guys also know that devster31
<lotuspsychje> !arm
<ubottu> ARM is a specific (RISC) processor architecture used in a variety of applications such as handhelds and network devices. For more information see https://wiki.ubuntu.com/ARM . For ARM specific support, stop by the #ubuntu-arm channel.
<devster31> ok, thanks
<tomreyn> devster31: i think "preinstalled" means they are disk images which can just be written to a storage using 'dd' or similar raw copy utilities.
<jamespage> cpaelzer: coreycb may have already asked this but I have a problem with libvirt 5.0.0 on bionic via the UCA
<jamespage> libvirt/qemu is not reporting a capability of domain type=kvm
<jamespage> only type=qemu
<jamespage> any ideas?
<khedrub> hi there. I am a bit confused, I hope someone here may help me with that. :-) I need to set up a few services next to each other, for example nextcloud with web interface and discourse as a forum software. In my understanding the modern way to do this is to use containers. Snap or docker come to mind. So I used snap for nextcloud and it runs fine. But how does one go about setting up discourse as a separate snap and tell it to also listen
<khedrub> to port 80/443 but a different subdomain?
<khedrub> Without containers it would be easy, but I want to do it right from the beginning
<blackflow> khedrub: you don't need containers. they solve specific problems and you'd know if you had them. like inability to install some software regularly -- due to conflicts with system files, libs, other packages, or simply because there aren't any packaged for the distro.
<blackflow> khedrub: "containerization" -- like isolation, namespacing, permissions, can all be solved with system unit configuration of the services, without the need to exponentially raise the complexity of your system with "containers"
<khedrub> arent containers also more secure because its isolating software similar to virtual machines?
<blackflow> containers per se are NOT security boundaries.
<rbasak> Containers are useful for trying things out.
<blackflow> they're totally not similar to virtual machines. they're just process (and uid and filesystem) namespacess
<rbasak> Since what goes on in them generally doesn't affect the host system.
<rbasak> For example if a third party software wants to stomp all over your system, which is very common.
<blackflow> right, the highly specific problems I mentioned above :)
<rbasak> discourse and nextcloud are perfect examples
<blackflow> khedrub: btw, "listen on <same> port but different subdomain" does not work. you really need different IP, not subdomain per se. then, yes the different IP could be pointed at by the subdomain.
<khedrub> Okay, so the old way of doing this (installing the software via git or apt and then configuring apache to use the sub domains for example) is still the right way to do this nowadays? Because that's how we did it back in the day, but I thought that was outdated and there are better ways in terms of security
<blackflow> khedrub: yes. use snaps if the snap'd versions offer some functionality that you specifically need.
<blackflow> also considering the drawbacks of snaps. for example they're rather bad for server applications because they autoupdate and you don't have any control over that.
<khedrub> I see.
<blackflow> (and you don't have the ability to supply your custom apparmor profile to them -- those two are for example the two biggest gripes I have against snaps at the moment, even if you ignore the "containerize everything" hype)
<jamespage> cpaelzer: nm figured it out - perms on /dev/kvm where incorrect
<khedrub> But from what you say it sounds like containers are quite a rare use case. I had the impression that they are the new stuff that everyone is using nowadays
<blackflow> khedrub: they have their uses yes. it is _not_ to containerize everything by default, no questions asked. that's very bad.
<rbasak> khedrub: I don't think blackflow's view is particularly representative of Ubuntu server users in general.
<rbasak> Trying things out in containers is very common and is recommended.
<blackflow> "trying things in containers" does not in any way conflict with anything I've said so far.
<khedrub> okay, but what if you want to have the software as productive systems?
<blackflow> I don't see "trying things" as specific requirement in the original question.
<rbasak> In general, "disposable" deployment platforms are extremely common.
<rbasak> Whether that's "start a cloud instance" or "start a container".
<rbasak> Or a VM.
<blackflow> sure they are. which, again, does not contradict anything I've said.
<jamespage> cpaelzer: hmm but...
<blackflow> if your use case calls for a container, then by all means use it. with all the virtues AND drawbacks of them.
<rbasak> Something that encapsulates that part of the deployment, which you can throw away on a whim to try again, rather than doing anything on a host system installed by hand.
<cpaelzer> jamespage: but why are they incorrect?
<cpaelzer> they should be set by udev
<jamespage> cpaelzer: change in qemu packaging I think
<blackflow> rbasak: that's a very specific use case, not a default state of production environments.
<cpaelzer> jamespage: yeah I stopped doing that in qemu, as udev already did
<cpaelzer> and was the right place to do so
<blackflow> production environments, especially money-making ones, want as little change as possible.
<jamespage> cpaelzer: that probably works ok on disco, but not so well when we backport to bionic
<cpaelzer> It should even be on Bionic, but let me check to be sure
<cpaelzer> if confirmed you can add it back on UCA
<rbasak> blackflow: either you misunderstand my point, or you're grossly incorrect. I'm not sure which.
<rbasak> See "devops", "pets vs. cattle", etc.
<blackflow> khedrub: one drawback of containers is that they contain and package ALL the libraries and requirements for a specific software.  that means, for example, if you had 100 containers on the system and each needed openssl, you'd have 100 individual openssl installations.
<jamespage> bionic has 237
<jamespage> might be 239 where that comes in
<blackflow> khedrub: which also means that in case of a security vulnerability, you'd HAVE to upgrade EACH of those 100 containers separately, which is a lot of work. that's why you use them only if they solve some specific use case you can't solve otherwise.
<cpaelzer> jamespage: 239-6
<rbasak> blackflow: rubbish. You're making that out to be a big problem. It is not. You are exaggerating.
<rbasak> blackflow: if I used 100 VMs or cloud instances, I'd have the same problem.
<khedrub> blackflow, dont they autoupdate like you said earlier?
<blackflow> rbasak: neither. the original question was not about a "temporary testing environment", so I don't know why you're trying to present that as somehow contradicting what I've said, which it isn't
<rbasak> blackflow: if local, then why would I have 100?
<rbasak> blackflow: and if, as is current best practice, you have code that can redeploy, with CI, etc, then upgrading each container is absolutely not a lot of work. It's automatic.
<blackflow> apt install <package>; done.    how's that worse than deploying containers around?
<khedrub> Indeed, my use case is a server which has as little as possible running on it, only the 2 - 4 webservices like nextcloud and discourse.
<rbasak> Because for server tasks you're not done after an apt install.
<blackflow> or are you deliberately ignoring what I'm saying from the beginning, in that containers DO have use cases, but they should NOT be a default solution, if another exists.
<blackflow> rbasak: strawman
<blackflow> neither you are with containers
<rbasak> You seem to have some obsession with container hate.
<rbasak> Note that I'm not talking about containers specifically.
<rbasak> <rbasak> Something that encapsulates that part of the deployment...
<blackflow> khedrub: snaps autoupdate, yes. I was talking in general that containers are isolated envs and if you build one, you have to maintain it as such.
<rbasak> Encapsulation is something that is best practice.
<rbasak> Do it with containers, or something else, doesn't matter.
<blackflow> I do not have obsession with container hate. I've been doing this for over 10 years, even before the "container hype" with freebsd jails.   I _do_ have obsession with as simple as possible systems.
<rbasak> Ah yes, the cost is that you have to maintain multiple encapsulations, but we have automation to help with that.
<blackflow> I do have a problem with "containerize everything" hype which is misplaced. Again, they do have use cases, but they should not be defaulted to with no clear idea what problem you're trying to solve.
<rbasak> Muddling everything together into a host system that you have to sysadmin by hand because so many things have happened to multiple tasks on the system that redeploying is now a huge task is not the way.
<blackflow> rbasak: so you're saying apt should be replaced by snaps? you really ARE saying that, which that blueprint was apparently misquoted for?
<khedrub> but if I used containers, how would I tell the nextcloud snap to use cloud.xyz.org and the discourse snap to use discourse.xyz.org? Since they are using separate apache instances...
<rbasak> Nope. I'm not saying that.
<rbasak> When did I even mention snaps?
<blackflow> "muddling everything" ... why would you do that? if a package exists, it's obviously integrated with the system, tested and available as part of the distro
<rbasak> I'm talking about _encapsulation_
<rbasak> Whether that's inside snaps, containers, VMs, cloud instances...doesn't matter.
<blackflow> of what exactly?
<rbasak> I think you should take a break, and read this conversation again tomorrow.
<blackflow> if an apt package exists, why is a container of that software a better solution?
<ahasenack> good morning
<rbasak> You can use apt inside a container.
<cpaelzer> hi ahasenack
<blackflow> and I'll repeat what I said earlier, service containerization IS achievable with systemd unit configuration WITHOUT the expense and complexity of handling a whole isolated OS tree environment.
<rbasak> Now you can undo the entire container in one go.
<ahasenack> hi cpaelzer
<rbasak> You can do your service containerization with systemd units inside a container.
<blackflow> rbasak: right, and why would you do that unless you had a specific need for a container? you mentioned test environments -- sure I agree, containers are very suitable for those. that, however was not the original question.
<rbasak> A container is just a nested Ubuntu.
<blackflow> sans the kernel, yes.
<rbasak> (well, it can be many things, but it can be that)
<rbasak> Sure. A nested Ubuntu userspace then.
<blackflow> so why manage 100 copies of ubuntu if that can be solved with SHARED libraries, especially since you can STILL isolate the services with systemd unit config?
<rbasak> Again, you're exaggerating
<blackflow> I totally am not. the shared library model exists for a reason
<blackflow> containers exist for a reason too. my whole point so far is unless you really need to isolate software in it, do yourself a favor and don't complicate the system.
<rbasak> My point is that you _always_ want at least one level of isolation.
<rbasak> Since the host system is always one part that's the most expensive to redo/redeploy.
<jamespage> cpaelzer: right so a chgrp and chmod on /dev/kvm fixes things up
<cpaelzer> jamespage: ok, glad to know
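jamespage's fix, sketched; group `kvm` and mode 0660 are what the stock udev rule normally applies, so treat them as assumptions to verify against your own system:

```shell
# Restore the usual ownership and permissions on the kvm device node.
sudo chgrp kvm /dev/kvm
sudo chmod 0660 /dev/kvm

# Verify: should show something like  crw-rw---- 1 root kvm ... /dev/kvm
ls -l /dev/kvm
```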
<blackflow> rbasak: right. ProtectSystem= systemd directive seems to be one level of isolation (which I'm using, among other things, quite extensively).
<jamespage> cpaelzer: I'm guessing you'd want to hold this as a patch for the UCA backport right?
<rbasak> No, because to do that you already taint your host system configuration.
<khedrub> blackflow, I am trying to find some info on isolating services with systemd unit configs. The search results are a bit unrelated though. Do you have any link to further reading or a good search term for this specific use case of unit files?
<blackflow> there's no need to install a whole new Ubuntu, inside your existing Ubuntu, sans the kernel, just to run nginx off of it, for example.
<cpaelzer> rbasak: blackflow: you two do realize that the problem of your discussion is, that you are both right - you can do things with/without containers (of various types) - and it depends on your problem if you want/need to use them (and there everyone can decide where to make the cut on their own - and it is ok that you two do so at different places)
<blackflow> khedrub: https://gist.github.com/ageis/f5595e59b1cddb1513d1b425a323db04
<blackflow> khedrub: and of course respective manpages for each of the directives
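As a taste of what the linked gist covers, a hypothetical hardening drop-in using the kind of directives blackflow alludes to (see systemd.exec(5) for each one; whether a given service tolerates them must be tested case by case):

```ini
# /etc/systemd/system/example.service.d/hardening.conf (hypothetical)
[Service]
ProtectSystem=strict     # mount the file system read-only (except /dev, /proc, /sys)
ProtectHome=true         # make /home, /root and /run/user inaccessible
PrivateTmp=true          # give the service its own /tmp and /var/tmp
PrivateDevices=true      # a minimal /dev without physical devices
NoNewPrivileges=true     # forbid gaining privileges via setuid etc.
```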
<blackflow> cpaelzer: which is what I've been saying from the beginning, with the addition of: don't _default_ to using a container unless you fully understand what problem it will solve. why is that in any shape or form bad advice? it's a GOOD system administration pattern.
<khedrub> blackflow, thank you!
<khedrub> It was very worthwhile for me that you two had that discussion. It brought some clarity about the pros and cons of either solution, if nothing else. :-) So thank you both
<blackflow> rbasak: "taint your host system configuration" -- yes, so? if there's a conflict, please report a bug against the specific package. there can also be bugs with snaps you know. let's begin with all the littering of /snap, ~/snap and loop mounts.......
<blackflow> if that is not tainting the host system, I don't know what is.
<cryptodan> thats one of the first things i did when i installed ubuntu server was to remove snap from the system
<blackflow> khedrub: and all I'm saying is don't default to them unless you understand the problems containers solve, if that fits your use case, and you're aware of all the drawbacks of containers.
<blackflow> cryptodan: everyone does it on the Ubuntu systems I've been called in to maintain. but somehow my views are not representative..... I think Ubuntu devs have a huge disconnect between what they think users want and what users actually DO in practice.
<mason> blackflow: So, you've missed one critical point with containers vs a base system service - service stability. Even for a singleton, there's not live migration of containers.
<blackflow> mason: you mean of base system service
<cryptodan> i also removed the cloud stuff from the base install
<mason> Or perhaps that should be *especially* for a singleton.
<cryptodan> mason: then it should be optional
<cryptodan> not forced
<mason> blackflow: No, live migration of containers and a network config that goes with them.
<mason> Which is to say, live migration of services.
<blackflow> mason: sorry I don't follow what you wanted to say then
<cryptodan> they should have the mentality of a single use server for base installs
<mason> blackflow: You were arguing that containers aren't at all like VMs, but that's no longer quite true.
<cryptodan> no added crap like snapd, cloud stuff, or anything like containers
<blackflow> mason: I was arguing from the standpoint of ISOLATION, which was the discussion. one is a process namespace, the other is a whole kernel running on the CPU with host-side irq handlers triggering on hardware virt
<blackflow> VMs are a whole different level. not just from the CPU standpoint, but also from the hardware standpoint. memory compartmentalization. context switches.
<mason> If isolation is the critical thing, it's also worth bringing up type I vs type II hypervisors, then.
<blackflow> mason: sure but this convo and my original objection was only this:  don't use containers as _Default_ solution unless you understand what they do, what problems they'd solve for you, and what are the drawbacks.
<mason> You'd hate my answer, then. For me, I like containers as lighter-weight VMs.
<blackflow> the original question was about "In my understanding the modern way to do this is to use containers" -- for running services.  and I said, no, not by default.
<blackflow> mason: if that solves your use case and you know what you're doing, that's not in any way contrary to what I'm saying :)
<mason> Yeah. I was just thinking about it. If the *only* reason for using containers is isolation, that's still not *bad* in any way. It's like using a shorthand to talk about separation, rather than depending on not missing any of a range of available tools.
<mason> As for the scale of managing one system's updates vs dozens, that's something to be automated anyway. The exact number shouldn't matter at all.
<blackflow> mason: and even then, what kind of isolation. it's all about namespaces. process, pid, filesystem, network.
<mason> sure
<blackflow> for example, in my use case, the packaged nginx, postgres, dovecot, and postfix -- they all fit my needs. I isolate them with systemd unit configuration options. additionally with apparmor profiles. and I benefit from those packages being maintained in the way they are, stable and with backported fixes.  I trust that way more than a random docker someone plopped on a hub somewhere, or a random
<blackflow> snap someone uploaded.
<blackflow> if I needed to use or test the super bleeding edge version of nginx for example, then totally yes, I'd use a container (LXD probably) for all the reasons mentioned here as benefits: full isolation without affecting the base OS.
<cryptodan> id do it in a vm
<mason> blackflow: Ah, that's different, pulling in the notion of random dockers.
<mason> Hand-maintained homebrew containers can still use the nice, curated system packages.
<blackflow> mason: yes, and then you have that problem of having to maintain multiple systems (sans the kernel) yourself manually. which is fine if that solves your case. unnecessary complexity if `apt install X` would've solved your case in the first place.
<mason> I assume I already have that problem in all cases, though.
<blackflow> mason: sure but the orig question was about using third-party prepackaged containers with snap or docker. those contain additional "don't do it unless you really need it and really know what you're doing" stickers. it's one thing to "I need to isolate this thing here in a way LXD does it, so I'll build an LXD container and apt install what I need, in there."    and quite another to  " I'll
<blackflow> install a docker or snap of package X because it's the 'modern thing to do' without understanding what that really means".
<blackflow> I'll always bark against the latter.
<mason> Ah. Ah. I'd missed that. I skimmed backlog to get an idea of things, but yeah, using someone else's packaged bundle leaves me cold too.
<jamespage> coreycb: I'll make a start on the oslo.* ones
<coreycb> jamespage: sounds good. i'm going to fix up the vitrageclient backport and then i'll get started
<coreycb> sahid: nova 16.1.7 pushed and uploaded to pike-staging. thank you.
<mwhahaha> coreycb, jamespage: we're getting tempest failures in puppet openstack because we appear to be missing https://review.openstack.org/#/c/605851/ in keystone. when's the next time you're going to update the stein packages?
<sahid> coreycb: ack thanks
<jamespage> mwhahaha: next 24-48hrs
<mwhahaha> k thanks
<jamespage> we're working deps first, and then will do the core projects
<ykarel> mwhahaha, so till then we pin tempest, or wait?
<mwhahaha> i'll propose a tempest pin if ic an figure out the patch that broke it
<ykarel> mwhahaha, should i propose
<ykarel> was preparing a patch
<ykarel> commit message should explain it
<mwhahaha> k
<mwhahaha> if you have it sure
<ykarel> ok
<jamespage> coreycb: awesome new unpackaged dep for oslo.service
<jamespage> \o/
<coreycb> jamespage: oh great
<coreycb> jamespage: let me know if i can help
<CPressland> Afternoon all, was wondering if somebody could help me with Netplan? I'm spinning up a VM on Azure with multiple DHCP IPs per NIC for use in a Kubernetes Cluster. Netplan is detecting the Primary IP but cannot see any additional IPs. How do I configure Netplan to get all secondary IPs (30 of them)?
<ruben23> guys my ubuntu server has some big text on my monitor screen, how do i adjust it to be smaller
<ruben23> and a bit high res
<ruben23> anyone here guys
<lotuspsychje> !patience | ruben23
<ubottu> ruben23: Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com or https://ubuntuforums.org or https://askubuntu.com/
<sdeziel> ruben23: that's usually something you send in your client's terminal
<sdeziel> s/send/set/
<tomreyn> CPressland: looking this up a little, i think this is bug 1759014
<ubottu> bug 1759014 in netplan.io (Ubuntu Cosmic) "Netplan has no way to control DHCP client" [Undecided,Confirmed] https://launchpad.net/bugs/1759014
<CPressland> tomreyn: That looks like it almost exactly! Thanks!
<tomreyn> CPressland: Consider clicking on "This bug affects 14 people. Does this bug affect you?"
<tomreyn> + subscribing
<CPressland> Done! Fingers crossed for a backport. For now I'll do my prototyping on 16.04
<CPressland> Thanks again.
<sarnold> ruben23: is it coming up with 80x25 vga? or a framebuffer console? or X11? what do you want it to do?
<ruben23>  sarnold:  coming from vga, the res is too big i cant see the whole picture of the server most specify when i try to check logs , how do i adjust it to smaller and high res a bit
<sarnold> ruben23: most people just ssh into their machines and don't care about the display attached to it
<sarnold> ruben23: there are a few kernel command line parameters you can try -- look at video= and vga= in https://github.com/torvalds/linux/blob/master/Documentation/admin-guide/kernel-parameters.txt#L4940
<sarnold> ruben23: you can also install X11 if you want to
<ruben23> sarnold: if there is X11 it will automatically adjust right.?
<sarnold> ruben23: yeah I think it'll try to run at the best the video card and monitor can support
<ChmEarl> CPressland, /usr/share/doc/netplan.io/examples/
<CPressland> ChmEarl: Thanks, unfortunately those examples don't cover off what I'm trying to achieve here. Basically Azure has provisioned 31 IP Addresses for a single NIC, but I can only get Ubuntu itself to see the "Primary" IP. I can manually assign secondary IPs and it works just fine, but the point is that I won't always know what the IP address is (nor will Chef) as we're using DHCP.
<CPressland> Looks like the Azure CNI may actually handle some of this for me. I'll do some further testing on that assumption.
<Jofi00> hi, can anyone help me with first steps in ubuntu-server installation?
<lordcirth> Jofi00, what is going wrong?
<Jofi00> I have installed ubuntu-server and have nextcloud running, but I cannot find any apache folder to mod the configuration
<lordcirth> Jofi00, /etc/apache2 should have your config
<tomreyn> Jofi00: how did you install nextcloud?
<Jofi00> in etc I cannot find the folder
<Jofi00> I did install nextcloud via the menu in the installation process
<Jofi00> could it be that by default any other server is running?
<sdeziel> Jofi00: that sounds like the snap version of nextcloud
<tomreyn> Jofi00: aaw snap, i think this installed a snap then.
<lordcirth> I've never used that method. Perhaps it's a snap?
<Jofi00> could be
<Jofi00> so this would use its own server then?
<lordcirth> Jofi00, run 'locate nextcloud'
<Jofi00> unfortunately, it doesn't return anything
<lordcirth> Actually, snap info nextcloud
<Jofi00> ah
<teward> if you used the server live installer (subiquity based) and selected nextcloud, it used the snap.
<teward> can guarantee that
<Jofi00> ok, this is a good start for me to search for the config file, thanks
<blackflow> that's terribad. this default to snaps nonsense must stop.
<Jofi00> snaps are no good? havent used them before
<sarnold> snaps aren't a bad way to try to get some of the benefits of windows-style software distribution
<blackflow> I didn't say that. I said _default to snaps_ is bad. snaps per se are solving particular problems, yes.
<Jofi00> gotcha
<blackflow> in so far that if you want to use the snap, it should be a deliberate, conscious action of `snap install nextcloud`. not automagic where you're left wondering what this is and where the files are. ooh, in some squashfs loopback-mounted read-only dir.
<sarnold> if your machine exists to do nextcloud, then using a nextcloud snap is a pretty decent idea. if it's just something that'll be there, and you don't care about specific features, specific bug fixes, upgrading every release, etc, then a deb might fit nicely
<sdeziel> Jofi00: snaps are nice self contained software a bit less flexible than what you'd get from a .deb package. Could be either good or bad depending on what you are looking for
<lordcirth> snap is a great way to install ipfs. But it puts its config files in ~/.ipfs anyway.
<sdeziel> blackflow: IIRC, the live installer clearly mentions snap when providing a list of snaps to pick from
<blackflow> apparently that didn't help. :)
<Jofi00> it didn't stop me from getting confused
<lordcirth> It mentions snap, yes, but I don't think it defines the term or mentions that nothing will be where you'd expect.
<sdeziel> yes, clearly the notice wasn't noticeable/clear enough ;)
<sdeziel> the good thing is there is one so it's just a matter of improving it
<sarnold> it's hard to convey the full range of pros and cons in one installer screen though :)
<sdeziel> could be improved by dropping a motd snippet with a brief intro on snaps if one was picked during the installation?
<blackflow> how about don't treat server users as idiots. unlike desktops, servers _should_ by all means be installed by skilled and experienced people who then will learn what snaps are, and whether they want it, with all the pros AND cons of it.
<Jofi00> Maybe it was just me being completely unaware of snaps. However, some kind of notice or explanation would have helped.
<blackflow> and as such, power users will have a choice to `snap install anything` should they decide to do so, being made aware of pros AND cons.
<sdeziel> blackflow: such admins are likely not going to click anywhere in the installer's list and will deal with any snap installation later on
<sdeziel> snaps are new so they need some kind of introduction that's not required for debs
<sdeziel> Jofi00: you can learn more about that nextcloud snap in the README at https://github.com/nextcloud/nextcloud-snap
<Jofi00> thanks
<blackflow> Jofi00: and be aware that snaps update automatically and you have no control over it. depending on your use case, this might not be desirable. restarting server services should be a controlled, scheduled activity. -- depending on your use case of course, perhaps you don't care about that at all
<Jofi00> yes, I'll probably go with the non-snap installation
<blackflow> sdeziel: definitely, because as it is now, it's just a "Featured Server Snaps" selection menu with no explanation of what snaps are or what the pros and cons are.
<sdeziel> Jofi00: in the nextcloud case, my personal recommendation would be to stick with the snap, or at least carefully consider what it means to not use it: no auto update, no automatic HTTPS cert, etc
<sdeziel> nextcloud will be hosting potentially important data so updates and HTTPS are desirable
<sdeziel> my 2c ;)
<blackflow> and quite doable by the sysadmin even without the snap.
<lordcirth> the snap comes with certbot?
<blackflow> so, sure, if "Just gimme nextcloud, I don't care about the details" is what you want, snaps are fine. my whole objection is "the admin should be aware of all those details and make a conscious decision".
<sdeziel> lordcirth: dunno what client they use but they integrate seamlessly with Let's Encrypt
<blackflow> lordcirth: it's a kitchen sink of Apache, MySQL, Redis, PHP and then some.
<blackflow> so that's basically a whole appliance consisting of several software suites. people should _really_ be made aware of things like that.
<blackflow> OHLOL AGPL licensed.... yeah be VERY very careful with that.
<lordcirth> If you aren't changing the source, I don't see why AGPL would be a problem?
<lordcirth> Though you should be aware, yes
<JanC> or when you are just running it for yourself
<blackflow> for personal use it's okay. if you use it in conjunction with other software (eg. in a SaaS scenario) you have to release that other software's source as well
<JanC> AGPL only applies to the software itself & its dependencies
<blackflow> another general license to be VERY careful with, is the new Commons Clause, you can't use it in conjunction with commercial products.
<blackflow> JanC: and software that uses the AGPL'd component as its own dependency
<sdeziel> AFAICT, that AGPL license is not specific to the snap though
<JanC> only if it's really a dependency (e.g. a control panel doesn't become AGPL because it can start/configure an AGPL service)
<blackflow> well, the thing is: AGPL and GPL'd software can't be made together into a single work (the snap). Also Redis now has its own, totally separate and totally FOSS-unfriendly license, so whoever is packaging that snap should be careful about which version it's using.
<blackflow> in other words, kitchen-sink bloatware like this, made of so many differently licensed components, is a potential minefield. all details people should be very much aware of before they one-click install a conveniently featured snap.
<JanC> I assume they use an open source version of Redis
<JanC> ?
<lordcirth> JanC, I wasn't aware there was a proprietary version?
<blackflow> you're confusing "open source" with "libre". Redis still is open source. it ain't libre no more tho'
<JanC> Commons Clause isn't considered an Open Source license AFAIK
<blackflow> "open source" or "libre"?
<blackflow> open source means literally "here's the source code of this program". just that, nothing more.
<lordcirth> https://redis.io/topics/license
<lordcirth> blackflow, no, that would be "shared source" https://opensource.org/osd
<JanC> the source being available doesn't make it Open Source
<JanC> the source of MS Windows is also available
<JanC> if you sign a whole bunch of NDAs etc.
<blackflow> lordcirth:  https://techcrunch.com/2019/02/21/redis-labs-changes-its-open-source-license-again/
<blackflow> https://www.gnu.org/philosophy/free-software-for-freedom.html
<blackflow> Stallman on libre vs open source
<Odd_Bloke> blackflow: Your definition of open source is not a generally accepted one.
<blackflow> open source is literally "here's the source code with your fries". libre/free licenses give you rights wrt redistribution of that software, which is commonly (mistakenly) known as "open source"
<blackflow> Odd_Bloke: ain't _mine_ tho :)
<lordcirth> "Redis Source Available License" oof. But it only applies to certain modules so far, not the actual redis server
<Odd_Bloke> Yeah, it's Redis Labs and not Redis itself that is affected.
<lordcirth> Something to keep an eye on, though.
<Odd_Bloke> Indeed.
<blackflow> still something to be careful about, esp. in whole wheat packaged solutions. which redis modules are inside?
<Odd_Bloke> Yep, it definitely makes Redis usage more fraught than it used to be.
<sdeziel> https://github.com/nextcloud/nextcloud-snap/blob/master/snap/snapcraft.yaml#L228
<JanC> the new Redis Labs license doesn't satisfy https://opensource.org/osd-annotated AFAIK, meaning it's not Open Source
<Odd_Bloke> blackflow: I believe you're also mistaken about AGPL/GPL compatibility: https://opensource.stackexchange.com/a/1726
<blackflow> or that SO poster is. my idea of AGPL+GPL (v2 btw) compat is not mine alone. our lawyers have pretty much explained to us to stay away from agpl and commons clause like the plague. my company does a SaaS.
<Odd_Bloke> And, furthermore, I don't believe that a snap would be considered a 'work'; snaps are filesystem images, so them being a work would mean that any ISO would be, for example.
<Odd_Bloke> Yeah, you should definitely stay away from both of those licenses, but presumably not because they'll compromise your GPL-licensed code but because they'll compromise your proprietary code.
<blackflow> yes, that's the part in danger here. and btw, I think you're right, the AGPL part affects modified code only. if we modified and used an AGPL component, we'd have to release it even though we're not redistributing anything (it being SaaS)
<lordcirth> Yes, that is the whole purpose of the AGPL
<JanC> I wouldn't mind using AGPL for most purposes
<rbasak> blackflow: AGPL-3 is DFSG compatible. If you don't like it or can't use it, then fair enough, but Debian (and therefore Ubuntu) already include AGPL-3 software, so you can't expect the distro to eliminate that for you.
<foo> I'm looking for a monitoring system that has an API that I can feed an IP address (publicly accessible) and it can "onboard" that IP and then monitor. Specifically, I'd like it to determine how it can be monitored - eg. what ports are open, what ports share a banner, is it pingable - then share back when the system goes offline via an API and via whatever it originally found to be "onboarded." Maybe
<foo> with a confidence score. eg. if 6 ports are open, and it's pingable, and everything goes unresponsive... it's likely it's all offline. Or, if 6 ports open and it's not pingable and 1 port closes, then there "may be" an issue. Does nagios or zenoss or something else happen to provide something like this? Or do we need to roll our own system
#ubuntu-server 2019-03-12
<Deihmos> i notice ubuntu server updates packages unlike debian that provides just security updates
<Deihmos> isn't package updates risky?
<andol> There is always a tradeoff.
<andol> ...and Debian too provides some updates which aren't neccesarily security fixes.
<lotuspsychje> Deihmos: not updating system is a risk
<lordievader> Good morning
<kstenerud> cpaelzer: for https://code.launchpad.net/~kstenerud/ubuntu/+source/php7.0/+git/php7.0/+merge/364241 what did you mean by generated code? What should I be comparing?
<cpaelzer> kstenerud: when you have e.g. dh_xyz --arg in d/rules, that (especially in your case) gets turned into generated code in the preinst/postinst/prerm/postrm files
<cpaelzer> kstenerud: https://wiki.debian.org/MaintainerScripts
<cpaelzer> depending on your preference you have two ways to "inspect" them and confirm they do absolutely what you expect them to do
<cpaelzer> #1 is to check the built packages locally - e.g. sbuild and then extract the debs with dpkg -x ... to take a look
<cpaelzer> #2 is to check the real thing you use for testing (from a PPA for example) - you can check for (recently) installed packages in /var/lib/dpkg/info/<pkgname>.*
<cpaelzer> kstenerud: I prefer #2 for most cases but both work
<cpaelzer> kstenerud: background - just because you said "restart-after-upgrade" that might not always do the right thing (it should, but then murphy)
<cpaelzer> kstenerud: so I got used to check indirect changes to the maintainer scripts in what was generated
<cpaelzer> kstenerud: does the above make sense for your current case?
<kstenerud> yes I think so. I'll need to look at it to be sure
<cpaelzer> ok, let me know if you need more breadcrumbs or a HO to clarify
<kstenerud> cpaelzer: actually, I can't find any reference to restart-after-upgrade, even in the PPA that has it. /var/lib/dpkg/info has not a single file with that text in it
<kstenerud> I tried running fakeroot debian/rules build manually, but it fails on a patchfile
<kstenerud> so it looks like the builder does something different, because the PPA did build successfully
<kstenerud> similarly, dpkg-buildpackage doesn't generate anything with restart-after-upgrade
<cpaelzer> kstenerud: that is correct
<cpaelzer> kstenerud: it is not going to be passed literally
<cpaelzer> kstenerud: when you tell dh_installinit --restart-after-upgrade then instead of prerm:stop + postinst:start it will only do postinst:restart
<cpaelzer> kstenerud: fortunately the dh_* tools leave headers
<cpaelzer> so e.g. dh_installinit says something like "added by dh_installinit ..." before the snippet
<cpaelzer> in these snippets, after some checks, the final calls to "actually do" things will most likely start like "service ...", "invoke-rc.d ...", or "systemctl ..."
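[The snippet style cpaelzer describes can be illustrated with a self-contained sketch. The heredoc below is a hand-written approximation of what dh_installinit emits with --restart-after-upgrade, not verbatim debhelper output; the exact wording varies by debhelper version.]

```shell
mkdir -p /tmp/dh-demo
# Approximation of a generated postinst fragment: a single restart in
# postinst, with no matching stop in prerm.
cat > /tmp/dh-demo/php7.0-fpm.postinst <<'EOF'
#!/bin/sh
# Automatically added by dh_installinit
if [ "$1" = "configure" ] || [ "$1" = "abort-upgrade" ]; then
    invoke-rc.d php7.0-fpm restart || exit $?
fi
# End automatically added section
EOF
# Locating such generated snippets by their header comment:
grep -l 'Automatically added by dh_' /tmp/dh-demo/*.postinst
```

[On a real system the same grep would run over /var/lib/dpkg/info/<pkgname>.postinst, as suggested above.]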
<kstenerud> cpaelzer: None of the postrm and such files contain service, invoke-rc.d, or systemctl
<cpaelzer> kstenerud: can you point me at your PPA again please?
<kstenerud> https://launchpad.net/~kstenerud/+archive/ubuntu/xenial-php7.0-restart-after-upgrade-1819033
<kstenerud> Actually, there's no such fpm module for php in xenial...
<cpaelzer> kstenerud: php7.0-fpm is already the newest version (7.0.33-0ubuntu0.16.04.2)
<kstenerud> I get:
<kstenerud> E: Package 'php7.0-fpm' has no installation candidate
<cpaelzer> kstenerud: apt update after new container maybe?
<cpaelzer> kstenerud: http://paste.ubuntu.com/p/6CFVNmCSZs/
<cpaelzer> looks exactly as I'd hope it would be
<cpaelzer> kstenerud: but please take the chance to find it on your own so that you know where/how next time
<cpaelzer> this is a copy of /var/lib/dpkg/info/php7.0-fpm.* as in-archive vs your PPA
<cpaelzer> kstenerud: once you had the time to digest this let me know if you are fine now or if you need more
<kstenerud> is there a tool you're using to download these files?
<kstenerud> or do I have to spin up containers to get them?
<cpaelzer> kstenerud: I used containers, but as I mentioned before you can also just get the .debs and use dpkg -x
<cpaelzer> kstenerud: when looking at your PPA you can always go to "View Package details" and then flip open the package of your interest. When you click on one of the builds you will get to e.g. https://launchpad.net/~kstenerud/+archive/ubuntu/xenial-php7.0-restart-after-upgrade-1819033/+build/16468276
<cpaelzer> and there the debs are with proper links (e.g. you can wget them)
<kstenerud> cpaelzer: OK, I'm still not clear on how the service will be down during the upgrade, or how this patch fixes it. It looks like it's just checking to see if a param is "configure" or "abort-upgrade", and then checks for files, then sets an action name based on that. But what does it all mean?
<cpaelzer> kstenerud: this calls for a HO to help you remove some of the maintainer-script-mist
<cpaelzer> kstenerud: I'll wait in the standup for when you are ready
<kstenerud> ok
<coreycb> jamespage: sahid: looks like we're almost done with dependency updates for stein. I'll plan to pick up on what's left during my afternoon today.
<Ussat> no nagios4 ?
<leftyfb> doesn't look like it
<leftyfb> oh wait, it's there. In 18.10 and newer
<leftyfb> https://packages.ubuntu.com/cosmic/nagios4
<Ussat> ok
<foo> Repeat in case someone happens to know: I'm looking for a monitoring system that has an API that I can feed an IP address (publicly accessible) and it can "onboard" that IP and then monitor. Specifically, I'd like it to determine how it can be monitored - eg. what ports are open, what ports share a banner, is it pingable - then share back when the system goes offline via an API and via whatever it
<foo> originally found to be "onboarded." Maybe with a confidence score. eg. if 6 ports are open, and it's pingable, and everything goes unresponsive... it's likely it's all offline. Or, if 6 ports open and it's not pingable and 1 port closes, then there "may be" an issue. Does nagios or zenoss or something else happen to provide something like this? Or do we need to roll our own system
<lordcirth> foo, so, you want it to auto-scan the IP, and monitor everything it sees?
<foo> lordcirth: yes, and then report back if something goes down, with a % of what went down based off ports, etc. Not sure how granular I can get with different systems but I suspect something out there allows for such - hence my ask. Thank you for clarifying
<lordcirth> I'm sure there is such a tool, but I don't know off the top of my head
<nacc> cockpit ?
<tomreyn> pandora fms has auto discovery
<tomreyn> also opennms and netxms
<tomreyn> zabbix, too, and observium. and probably a lot more.
 * RoyK prefers zabbix
<RoyK> foo: keep in mind that most of these tools require an agent installed on the monitored OS. Some will work with snmp too, but then you'll need to allow snmp access to the system and set up what to monitor
<foo> tomreyn: opennms, haven't heard of that in a while.
<tomreyn> foo: it's still being developed, and in use by some larger corps
<foo> tomreyn: yup, I've seen it, may have used it at one point, it skipped my mind. Appreciate you sharing
<foo> What I'm not sure on is exact process for discovery
<foo> But I can investigate further
<tomreyn> as Roy said, most use snmp.
<tomreyn> but there are also some which use common ports, such as ssh, telnet (yes!), rdp, etc.
<foo> The challenge is I don't know anything about the end system... but it sounds like this isn't exactly a unique problem
<foo> I'll have to look more into snmp and how that works
<tomreyn> a former employer of mine decided to move from a stone-age nagios distribution to a SaaS monitoring solution which does SNMP discovery from the internet. they do offer their own snmp (and more protocols) discovery gateway you can install on your network, as a blackbox, so as to have a somewhat controlled way into your network. but it's effectively still a backdoor, seeing how many devices don't support setting fine-grained permissions via snmp.
<tomreyn> the most crazy management decision i've ever seen so far.
<tomreyn> so you do want to host monitoring yourself, at least the main system.
<foo> tomreyn: I'm ok with a third party doing stuff (even if it's locally), if anything I'd want an API to connect into to feed an API, webhook to report to, etc.
<tomreyn> foo: are you referring to a single system to be monitored?
<foo> tomreyn: thousands via Internet
<foo> all we have is IP, nothing else.
<foo> and nothing else to be set up (at least, at this point in time)
<tomreyn> are your systems all on the internet then?
<tomreyn> are all your systems on the internet then?
<tomreyn> ^ grammar fixed ;)
<foo> (thank you, grandma's life could have been saved) not necessarily, we mainly want to track whether or not a location's internet went down (well, I'm saying "we" but I'm asking on behalf of someone else)
<tomreyn> are those many different / separate locations / networks then?
<tomreyn> not like many devices in one place / network?
<tomreyn> so more like IoT rather than 3 data centers?
<RoyK> better use an agent - you'll get access to more data
<RoyK> not all services etc export their stuff to snmp
<foo> tomreyn: yes
<foo> RoyK: yes, that's on the radar at some point... that requires a ton of other stuff, moving parts, etc, so we're curious on what we could get with just an IP
<RoyK> and snmp is quite old and personally, I don't use it for other stuff than network devices, UPSes etc
<RoyK> foo: moving parts?
<tomreyn> foo: yes, you'll want some kind of an agent, and a canary like system.
<tomreyn> ...running on a well connected (multi-homed) and highly reliable (HA) central system
<foo> RoyK: yes, politics, talking with different companies, security procedures, etc.
<foo> Agent is ideal, but wanted to start the level before that first... which is not an agent
<tomreyn> about service discovery, IMO the best approach is really regular network scans. i don't know if any of these monitoring systems do this, and if they do, how they do it, and how they handle findings.
<foo> tomreyn: ... that's my exact challenge. I can build this whole thing to do exactly what I want. But I equally don't want to re-invent the wheel.
<foo> To recap: scan an IP and see "what is open" / trackable... eg. TCP response, banners, pingable IP, etc... then monitor that hourly or such and report back % likely down (based off how many "trackable elements")
<tomreyn> i bet none of them have a really good implementation for this, so you'll want to choose a fast ipv6 (ideally also ipv4) network scanner which scans lists of addresses for at least tcp (ideally more), and does service detection as well.
<tomreyn> you'd need to feed those findings into a monitoring system and do anomaly detection along the way, so as to detect relevant changes to existing systems, systems added, systems removed.
<tomreyn> and probably someone to review most but not all of these findings.
<foo> tomreyn: agreed. So "trackable elements" would be ping, TCP open ports (with associated services if possible), with a backend system to do anomaly detection on a given time schedule... mhmmm.
<foo> I'm leaning towards building this if possible
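[The heuristic foo outlines is simple enough to sketch. This is entirely a toy of our own, not a feature of nagios, zenoss, or zabbix; the probe names are illustrative.]

```shell
# baseline = probes discovered at onboarding; failed = probes unresponsive now.
baseline="ping tcp/22 tcp/80 tcp/443 tcp/3306 tcp/8080 tcp/8443"
failed="ping tcp/22 tcp/80 tcp/443 tcp/3306 tcp/8080 tcp/8443"
total=0; down=0
for probe in $baseline; do
    total=$((total + 1))
    # count this probe as down if it appears in the failed list
    case " $failed " in
        *" $probe "*) down=$((down + 1)) ;;
    esac
done
echo "likely-down confidence: $(( down * 100 / total ))%"   # prints likely-down confidence: 100%
```

[With all 7 baseline probes unresponsive the score is 100%; one closed port out of 7 would score 14%, matching foo's "may be an issue" case.]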
<RoyK> foo: what sort of systems will you be monitoring?
<RoyK> windows? mac? linux? something else?
<foo> RoyK: the most important thing is we're actually monitoring the whole network. Whatever we get from an IP address is more of an indication of "if this network online" or offline
<RoyK> perhaps https://nav.uninett.no/ ?
<RoyK> it's used in large scale across scandinavian universities and colleges
<RoyK> it's not superfancy, but it works well
<RoyK> we have a few hundred devices in our installation at work
<RoyK> we're using zabbix in addition to that, for more detailed service monitoring etc
<foo> RoyK: interesting, I hadn't considered two services - one for discovery, and one for the actual monitoring of what was found... hmmm, that's interesting if I had two APIs. Thank you for sharing that
<foo> RoyK: PS. Recently heard Sinne Eeg at a house concert, a Scandinavian jazz vocalist... she's awesome.
<RoyK> foo: most systems don't cover the whole picture, but using more of them may be a good idea, even if they overlap a bit
<foo> RoyK: yeah, that's not a bad idea at all, really appreciate it
 * RoyK has been working with system monitoring for 20 years or so and rather likes it ;)
<foo> RoyK: Thank you for sharing! I used nagios many years back and am just about wrapping my head around the latest... as long as things have an API I can pull/put from, I can definitely connect the dots and logic
<RoyK> I've been using nagios as well and learned to dislike it quite a lot ;)
<foo> haha
<foo> RoyK: sounds like you like Zabbix, I think I looked at that once long ago
<RoyK> v4 came out some months back - seems good
<foo> Rad. I wonder if I can do https://nav.uninett.no/ for discovery + zabbix, hmm
<RoyK> but they still store passwords in unsalted md5 hashes in the db if you use the local authentication, which isn't very good
<RoyK> foo: I believe uio.no has a thing for that, but I haven't seen the code
<RoyK> foo: where're you from again?
<foo> RoyK: States, currently California
<RoyK> ok - I found an article about this on uio.no, but it's in Norwegian; perhaps google translate or something can help you through it https://www.usit.uio.no/om/organisasjon/iti/gid/publisering/overvaaking/zabbix_report.html
<foo> RoyK: aha, thank you! ... since I probably couldn't search in Norwegian. Actually, I wonder if searches return other languages if I search in English. hmph. BTW, what country are you in?
<RoyK> .no
<RoyK> I work for oslomet.no
<foo> RoyK: Permission to PM?
<RoyK> sure
<RoyK> (I work for Keyser Söze :D)
<foo> RoyK: BTW, this is helpful: https://screencast.com/t/LnKLwoVL - great to see what's possible with Zabbix here
<RoyK> foo: iirc ping is already in standard zabbix - check_icmp iirc
<foo> RoyK: nice
<RoyK> foo:     Template ICMP Ping
<foo> RoyK: aha, perfect
<foo> RoyK: eh, my hand was forced - looks like someone else went in with librenms ... oh well. Looks like they have an API I can integrate with but I was hoping to get Zabbix in play.
<foo> RoyK: in any capacity, I might be able to look into zabbix for another system (where I do have more control over the environment and systems)
<RoyK> foo: just setup a VM
<foo> RoyK: nice, will do - I think I have an amazon instance I can throw this on - thank you!
<Marz> ubuntu server gets a lot of updates. i was previously running debian and there aren't that many updates. does ubuntu server update the package versions?
<sarnold> we almost always backport security fixes as necessary
<sarnold> mysql we publish new point versions as oracle release them
<sarnold> otto from mariadb normally supplies us with new point releases for mariadb updates as they release them
<blackflow> ubuntu does more than just security updates. esp. with the kernel. debian however does regular point releases which include bugfixes, and other bumps.
#ubuntu-server 2019-03-13
<lordievader> Good morning
<kstenerud> cpaelzer: It turns out that the php7.2 issue was exactly as I'd suspected: A missing #include, which causes the call to a macro to be interpreted as a call to an undeclared function, thus the undefined symbol when linking.
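[The failure mode kstenerud describes can be reproduced in miniature. All names below are made up for illustration; the real case involved an ICU renaming header. `api.h` maps `do_thing` to a versioned symbol the way ICU's urename.h maps e.g. u_strlen to a suffixed name; omitting the include leaves a declaration of the unversioned name, which compiles fine and only fails at link time.]

```shell
mkdir -p /tmp/icu-demo
# header maps the public name onto a versioned symbol via a macro
cat > /tmp/icu-demo/api.h <<'EOF'
#define do_thing do_thing_v63
int do_thing_v63(void);
EOF
cat > /tmp/icu-demo/impl.c <<'EOF'
#include "api.h"
int do_thing_v63(void) { return 0; }
EOF
# broken.c forgets the #include, so the macro never fires and the
# linker goes looking for an unversioned symbol that does not exist
cat > /tmp/icu-demo/broken.c <<'EOF'
int do_thing(void);
int main(void) { return do_thing(); }
EOF
cc /tmp/icu-demo/broken.c /tmp/icu-demo/impl.c -o /tmp/icu-demo/broken 2>/tmp/icu-demo/err.txt || true
grep -c 'do_thing' /tmp/icu-demo/err.txt   # the link error names the unversioned symbol
```

[Adding the missing `#include "api.h"` to broken.c makes the macro rewrite the call and the link succeed, which is the shape of the php-intl fix.]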
<kstenerud> So now, if I want to fix this and test it, is there a way for me to make my own fixed version of php-intl, and then build horde-lz4 against it and run autopkgtests?
<kstenerud> I'm hoping there's a better way than just pushing an MP through and then praying nothing goes wrong on the excuses page once it migrates
<cpaelzer> kstenerud: yeah there is
<cpaelzer> kstenerud: for starters you can put a fixed php7.2 (which builds php-intl binary IIRC) in your PPA
<cpaelzer> kstenerud: and then you can upload the horde package to the same PPA
<cpaelzer> kstenerud: it will build against that php it finds in that PPA then
<cpaelzer> and you can then check e.g. the build log or even install from the PPA on a container to check for more
<cpaelzer> kstenerud: will that achieve what you need?
<kstenerud> how do I get the horde package uploaded to the ppa? It fails the signature because the last changelog is not mine
<cpaelzer> kstenerud: just "dch -i" it and add a ~ppa1 to the version
<cpaelzer> the last changelog will be yours then and you can upload
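[The ~ppa1 suffix matters because of how dpkg orders versions: a tilde sorts before anything, even the end of the string, so the archive's eventual plain version still supersedes the PPA build. GNU `sort -V` follows the same tilde convention and makes for a quick illustration (assumption: `dpkg --compare-versions` is the authoritative check, this is just a sketch using the php7.0-fpm version mentioned earlier).]

```shell
# The ~ppa1 build sorts as *older* than the same version without the suffix,
# so a later archive upload will cleanly replace the PPA package.
printf '7.0.33-0ubuntu0.16.04.2~ppa1\n7.0.33-0ubuntu0.16.04.2\n' | sort -V
```

[The ~ppa1 line is printed first, i.e. it compares lower.]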
<kstenerud> ok thanks!
<jamespage> cpaelzer: hey not sure we got to a conclusion on where to fix/patch or whatever for the bionic backport of libvirt 5.0.0 with regards to udev rules
<jamespage> we can hold a patch in the backported to re-apply the rules and maintainer script parts so that its bionic only - does that work for you?
<cpaelzer> jamespage: hiho, yeah you can add back on the Bionic-backport what I dropped
<cpaelzer> jamespage: do you need a pointer to the former change?
<jamespage> cpaelzer: yes
<jamespage> please
<cpaelzer> jamespage: old change #1 https://salsa.debian.org/qemu-team/qemu/commit/5a90d7b9
<cpaelzer> jamespage: old change #2 https://salsa.debian.org/qemu-team/qemu/commit/447402b0
<cpaelzer> jamespage: dropped because https://salsa.debian.org/qemu-team/qemu/blob/ubuntu-disco-3.1/debian/changelog#L109
<jamespage> cpaelzer: how does http://paste.ubuntu.com/p/Wy26sYSjVQ/ look ?
<jamespage> basically i resurrected the maintainer scripts and udev rule from cosmic's qemu
<kstenerud> cpaelzer: I'm taking another look at debian's php, and it looks like they're still updating their 7.2 branch: https://salsa.debian.org/php-team/php/tree/master-7.2
<cpaelzer> jamespage: seems ok to me
<kstenerud> but it doesn't show up in rmadison
<jamespage> cpaelzer: throwing it into the UCA for stein now
<kstenerud> So would it make more sense for me to update disco's php7.2 off that?
<cpaelzer> kstenerud: they might update the branch because someone still uses it for themselves
<cpaelzer> kstenerud: I'd not take anything from there blindly (like rebasing onto it)
<cpaelzer> kstenerud: but evaluating and picking changes one by one might be ok if you think they are helpful
<kstenerud> oh wait, looks like their 7.2 doesn't use the newer libicu, so it wouldn't help
<kstenerud> However, I'm noticing that in their 7.3 icu (upstream) changes, there are a lot of differences besides just the namespace change. There are a bunch of ifdefs around icu version 57 (above or below)...
<kstenerud> so now I'm wondering if I should revisit what changes I made, and maybe include all that extra code?
<kstenerud> I'm worried that just "making the code compile" might hide some submarine bugs
<cpaelzer> kstenerud: ack, sounds worth a check
<rbasak> kstenerud: I like that attitude :)
<jamespage> coreycb: qemu fix on its way into the stein uca
<jamespage> holding a uca patch
<ahasenack> good morning
<jamespage> coreycb: how are we looking on the stein uploads? I don't see any updates in the to tracker?
<Delvien_> ubuntu-server 18.04 - i had set my hosts file manually at one point, even after i changed to a DNS server, it seems to be keeping that information and directing "ping hostname" to its old IP. Where can I clear this ?
<ahasenack> try "systemd-resolve --flush-caches"
<Delvien_> tried that already, still trying to ping old IP
<blackflow> Delvien_: "changed to a DNS server" -- how?
<ahasenack> cat /etc/nsswitch.conf |grep hosts:
<ahasenack> does that return "files" before dns?
<Delvien_> yes
<Delvien_> blackflow: local dns server to resolve local queries before passing to net for FQDN
<Delvien_> because editing /etc/hosts on every machine is a pain
<blackflow> Delvien_: which one, bind?
<blackflow> (which one is the local dns server)
<Delvien_> blackflow: tried bind but with my setup it wasnt working, im having my pihole serve as my DHCP and local DNS
<Delvien_> ahasenack: yes, it returns files
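[What that check implies can be demonstrated without touching the real system. The sample line below is the stock Ubuntu ordering written to /tmp for illustration; on a real host you would grep /etc/nsswitch.conf itself.]

```shell
# With "files" listed before "dns", a stale /etc/hosts entry shadows the DNS
# answer no matter which nameserver is configured.
printf 'hosts:          files mdns4_minimal [NOTFOUND=return] dns\n' > /tmp/nsswitch.sample
grep '^hosts:' /tmp/nsswitch.sample
grep '^hosts:' /tmp/nsswitch.sample | grep -q 'files.*dns' && echo 'files consulted before dns'
```

[This is why clearing resolver caches alone cannot fix a lookup that /etc/hosts is answering first.]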
<blackflow> Delvien_: well you'll have to trace the resolution and see what exactly is authoritative for the zone and what is that particular authority returning. using dig
<blackflow> dig hostname +trace    dig hostname @ns-that's-supposed-to-be-the-authority
<blackflow> dig hostname @configured_resolve.conf_nameserver
<Delvien_> blackflow: i did that, its pointing inward (127.0.0.53) so its trying to self-resolve the query, when it shouldnt because i set a nameserver
<Delvien_> plus hosts file is now blank so im not sure where its pulling this info
<blackflow> +trace will tell you
<Delvien_> trace?
<blackflow> also.... do you perchance have a "search" domain in resolv.conf and using just "hostname" is actually checking "hostname.search.domain.tld" and that still points to the old IP?
<blackflow> Delvien_: `dig <hostname> +trace`
<blackflow> you said "I did that" but you obviously did not understand a word I said :)
<Delvien_> yeah its still pointing 127.0.0.53
<blackflow> Delvien_: and what is listening on that IP? pihole?
<Delvien_> pihole is a different machine, different ip
<blackflow> so what is that then, systemd-resolved?
<Delvien_> Im not sure, thats what im trying to figure out
<Delvien_> or are you asking about pihole?
<Delvien_> ubuntu-server seems to be using systemd-resolved
<Delvien_> and ignoring settings in netplan for nameserver
<blackflow> ss -4lnp will tell you what's listening on that ip
<blackflow> so anyway, here's how this works. defined by nsswitch, but assuming defaults now, resolving first consults /etc/hosts, then asks whoever is configured in /etc/resolv.conf as the "nameserver" entry.
<blackflow> so THAT is your first stop, to check its config and clear its caches if any. BUT, if /etc/hosts is the only place you forced that hostname-IP combo, then that's a different problem
<blackflow> if that's systemd-resolved, the command ahasenack gave you should've fixed it but.... systemd-resolved is a steaming pile of feces which everyone should be disabling and masking out on servers. I've seen that not work in the past and I had to fully restart the systemd-resolved service.
<Delvien_> ss states nothing is listening on 127.0.0.53, resolv.conf is auto-regenerating to point to 127.0.0.53, so yeah i think its systemd-resolved
<Delvien_> noted.. getting rid of this resolved. lol
<blackflow> if nothing is listening on that IP but resolv.conf is pointing at it with "nameserver 127.0.0.53", then you've got broken resolving there.
<Delvien_> yeah, disabled resolved, fixed resolv.conf, works now
<blackflow> you'll have to mask, not just disable, resolved. other services (like NM) have this ugly tendency to resurrect it, if only disabled.
<Delvien_> I had fixed resolv.conf before but it seems it was pointing it right back.. its quite broken.
<blackflow> it's a symlink into /run  by default. with resolved masked out, and no NM (or something else), you'll need to unlink /etc/resolv.conf and make a proper file
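[blackflow's last point, simulated under /tmp so it is safe to run: the nameserver address is a placeholder, and on a real box you would mask systemd-resolved first and then operate on /etc/resolv.conf itself.]

```shell
# Recreate the default layout: resolv.conf is a symlink into a runtime dir,
# pointing resolution at systemd-resolved's stub listener.
mkdir -p /tmp/demo/etc /tmp/demo/run
printf 'nameserver 127.0.0.53\n' > /tmp/demo/run/stub-resolv.conf
ln -sf /tmp/demo/run/stub-resolv.conf /tmp/demo/etc/resolv.conf
# The fix: drop the symlink and write a plain file naming the real resolver
# (192.168.1.10 stands in for the local pihole here).
unlink /tmp/demo/etc/resolv.conf
printf 'nameserver 192.168.1.10\n' > /tmp/demo/etc/resolv.conf
[ ! -L /tmp/demo/etc/resolv.conf ] && cat /tmp/demo/etc/resolv.conf
```

[Without the unlink step, tools that regenerate the runtime file would silently undo the edit, which matches what Delvien_ observed.]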
<coreycb> jamespage: ok thanks for the qemu patch
<jamespage> coreycb: bumping it to proposed now
<coreycb> jamespage: no progress, i've been sidetracked. i'll try to focus on stein today.
<jamespage> coreycb: have cycles to spend on pkg-ing right now - can I work on unassigned items on the tracker?
<coreycb> jamespage: sure have at it please. i'm going to get the keystone ldap SRU going and then i'll work on stein too.
<jamespage> coreycb: +1 ok
<Delvien> blackflow: So I found out that 127.0.0.53 is the "stub listener" for systemd-resolved
<sarnold> that was a very nice thing of them to do :)
<sarnold> it makes it stand out clear as day what tool you're using
 * mason has been naughty and ripping out a bunch of that stuff lately.
<mason> Zapping netplan.io, networkd-dispatcher, adding ifupdown and having =gasp= a manually managed resolv.conf for servers.
<sarnold> mason: *the horrors*!
<blackflow> mason: good!
<blackflow> out with NIHtrocities!
<Odd_Bloke> Surely manually managing your resolv.conf is NIH too? :p
<sarnold> what's to manage though?
<mason> Odd_Bloke: Eh? Managing resolv.conf has been a non-issue for decades...? I'm not sure I understand. Even managing it with cfengine and its descendents is layered with a fine coating of dust at this point. That it's seen any attention at all is a bit weird, and that that attention has made it fragile and breaky is bizarre.
<lordcirth> mason, what breakage have you run into?
<mason> lordcirth: None here, because I yank the stuff out, but wasn't there something big and visible on SlashDot a year or so ago?
<lordcirth> iirc there was a default fallback to 8.8.8.8 if it couldn't contact anything?
<lordcirth> Which doesn't seem like a huge deal to me
<lordcirth> And it was fixed
<mason> That sounds right.
<mason> Yeah. "We're going to randomly depend on a particular company's services for EVERYONE." Questionable thinking behind that.
<lordcirth> Not depend, just fallback to as the last resort
<cyphermox> it's also configurable, like everything else
<lordcirth> Had it not been set, then it would have just broken instead. The concern was privacy.
<Odd_Bloke> And I don't think 8.8.8.8 is random either; it's a common fallback.
<Odd_Bloke> (It's what I switch to when I find a malfunctioning DNS server in the wild.)
<lordcirth> Yeah, my home router DNS is 1.1.1.1, then 8.8.8.8
<lordcirth> If cloudflare and google stop working - well they don't.
<blackflow> 17.04 was uninstallable for me; systemd-resolved had to be removed from the live env. then systemd-resolved had issues with my upstream resolver - some responses were considered erroneous. I've been enjoying the peace and quiet of a local bind instance (because bind is what I use otherwise for authoritative purposes, so 'tis a familiar tool) ever since.
<lordcirth> blackflow, are you sure the upstream responses *weren't* erroneous? It wouldn't be the first time that a tool written to spec broke because the one it replaced didn't follow spec.
<cyphermox> ^ that, quite often
<cyphermox> not so much the responses themselves as how they are conveyed
<blackflow> lordcirth: in fact, no, it was systemd's inability to handle DNSSEC properly.
<blackflow> uhm.... systemd-resolved's inability
<lordcirth> blackflow, ah, that's interesting. Do you know if there's a bug page for it?
<blackflow> I think this was it https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1682499
<ubottu> Launchpad bug 1682499 in systemd (Ubuntu Zesty) "Disable DNSSEC by default" [High,Fix released]
<blackflow> I don't remember now, there was quite a number of 'em, including upstream
<cyphermox> that looks quite relevant, if you did have dnssec; it was very much busted
<mason> Anyway, this all strikes me the same way "predictable naming" does - everyone gets it, even though really a very small subset of users need it. I'd want to default to less-complex and make the more complex solution available for folks who benefit from it.
<blackflow> quite a number of domains did not work for me until I ripped that out in place of a local bind
<blackflow> btw, the argument made about broken spec is a moot one. if you're writing a critical tool intended to replace another, then you MUST make sure your tool supports spec "quirks", if you want it to replace anything
<blackflow> saying "FU, your upstream is broken" is just gonna make _me_ go "FU and your <...>", this worked for the past 10 years until you came along.
<mason> I suspect everyone has good intentions and just wants to make cool new software.
<lordcirth> blackflow, I think that depends on the situation. Sometimes I've updated, and it stopped working, and clearly said that the other end was broken. And I looked into it and fixed the other end, and lived happily ever after.
<blackflow> mason: the problem with "predictable naming" is that you can't reliably predict it in advance. case in point: the recent shtstorm that hit upstream about their change in udev
<lordcirth> The way I like now is to use netplan's mac matching, then rename it.
<mason> blackflow: Well. The problem is that most people have one or two NICs at most in most systems. Not being able to rely on "eth0" being what you want is unfortunate.
<lordcirth> My management iface is 'mgmt'.
<blackflow> it very much affects me because we rent our servers and I don't control PCI slots or exact hardware used. installing via debootstrap because we use encrypted ZFS root and I'm left guessing via udevadm which name it is GONNA be after reboot.....
<mason> MAC matching makes sense.
<lordcirth> Of course, if you only have one NIC, it would be nice if you could rely on it to be eth0...
<cyphermox> you can't rely on having just one or two NICs on servers nowadays though
<blackflow> lordcirth: the ironic part is that MAC binding has been done for years to fix _Exactly_ the issue that "predictable naming" is supposed to fix, and it only made it worse so I am _STILL_ forced to bind via mac and use ethX naming again.
<cyphermox> pizzaboxes do have two usually, and anything 2U often has 4
<cyphermox> (and if you netboot, order may vary slightly, etc. etc.)
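[Ed.: the MAC-matching approach lordcirth mentions looks roughly like this in netplan; the file name, MAC address, and interface name below are made up.]

```yaml
# /etc/netplan/01-mgmt.yaml -- hypothetical sketch
network:
  version: 2
  ethernets:
    mgmt:                                # the name you want, not the kernel's
      match:
        macaddress: "00:11:22:33:44:55"  # made-up MAC of the management NIC
      set-name: mgmt                     # rename the matched interface
      dhcp4: true
```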
<blackflow> i've learned to appreciate the process management part of systemd and the ability to utilize kernel features through simple unit files. everything else? purge it out with nuclear fire!
<cyphermox> the best way to summarize this is "it's not that simple"
<blackflow> big red flag when RH is not dogfooding their own product properly. they use NM and not networkd for example. I'm not sure but I don't think they use resolved even?
<Ussat> blackflow, um...I know LOTS of RH employees that do........
<blackflow> but their flagship moneymaker isn't. like, orders of magnitude more important than an anecdotal mention of "LOTS of employees".
<Ussat> ...
<Ussat> Yes, yes it is, but whatever
<Ussat> I use both Ubuntu and RH extensively......both have their strengths and weaknesses
<blackflow> I like SELinux. Wish I had more time to learn it and use it properly. Wish it was supported in 'buntu instead of AppArmor. I'll ask Santa in ten months :)
<Ussat> SElinux is great
<blackflow> 'tis.
<blackflow> incidentally, I found gentoo's selinux policies to be far more usable and polished than CentOS/Fedora's. you could actually run a SELINUXTYPE=strict server (openrc instead of systemd though, and no xorg) just fine with no issues.
 * jdstrand notes that selinux is available in Ubuntu via kernel command line and the policy/tools that come from Debian. that said, I would expect that policy to need a lot of work
<blackflow> quite a lot. that's what I meant by "supported in Ubuntu instead of AppArmor" -- that level of commitment to AppArmor.
<jdstrand> it's community supported so if people wanted to invest their time, it would be great. but sure
<blackflow> I totally would if, say, Ubuntu shipped with it default and had a bootable server instance supported, even if in targeted SELINUXTYPE. Like a platform to get started.  So I'm doing the same but for AppArmor. all the services we use are apparmor'ed, even though ubuntu does not ship policies for them.
<jdstrand> yeah, I mean, AppArmor is backed by Canonical and used in many places, so that is the focus. the lack of decent selinux policy is unfortunate but also reflects that the community is largely ok with the choice. my only point is that there is a path forward to make it better; someone needs to do it and bootstrapping the policy and then maintaining it release to release is hard
<jdstrand> years ago there was decent policy from the community, but the maintenance burden got to be too much for those devs and now we just get whatever policy Debian has
#ubuntu-server 2019-03-14
<smoser> are the values for dhcp really 'no' and 'true' ?
<smoser> https://netplan.io/examples
<smoser> i guess i typed too quickly. the values there in the examples are 'no', 'yes', and 'true'.
<smoser> probably should make that consistent.
<mason> smoser: There's a #netplan channel, FWIW. Might be worth talking about it there too.
<sarnold> is that a general fault of yaml? yes/no on/off true/false?
<coreycb> jamespage: we're mostly caught up with stein packages
<sarnold> smoser: yes/no on/off true/false may just be a general fault of yaml
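[Ed.: it is indeed a YAML quirk rather than a netplan one. YAML 1.1, which most parsers implement, treats several scalar spellings as the same boolean, which is why the netplan examples can mix them; a small illustration:]

```yaml
# All of these parse to booleans under YAML 1.1:
a: yes     # true
b: no      # false
c: true    # true
d: off     # false
e: "no"    # careful: quoting makes this a *string*, not a boolean
```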
<jamespage> coreycb: awesome
<jamespage> coreycb: I'm intending on working through snapshots for any that lack b's
<jamespage> nova revealed a new dep/refactoring thing - os-resource-classes
<jamespage> uploaded for aa review
<andre144k> hi all, how do I kill a Linux process which will not stop via "killall -9 <name>" and will not stop via "kill -9 <pid>"?
<jamespage> mwhahaha: I've uploaded a keystone snapshot for stein - should be into proposed in the next few hours
<lordievader> Good morning
<frickler> jamespage: coreycb: we found this issue when upgrading the OpenStack CI to bionic, looks like a packaging issue to me https://bugs.launchpad.net/cinder/+bug/1819819
<ubottu> Launchpad bug 1819819 in Cinder "rtslib_fb.utils.RTSLibError: Cannot set dbroot to /etc/target for ubuntu bionic 18.04" [Undecided,Incomplete]
<jamespage> sahid: looking at your openstack pkg updates now
<sahid> jamespage: thanks, let me know or comment on the doc if you need me i change something
<jamespage> sahid: all looks ok so far - glance uploaded to ubuntu
<sahid> ack thanks
<jamespage> frickler: commented - there is a mix of package based and pip installed rtslib-fb on the log output
 * blackflow looks at bug #1797386 and sends some positive vibes toward everyone involved ...    <3
<ubottu> bug 1797386 in openssl (Ubuntu) "[SRU] OpenSSL 1.1.1 to 18.04 LTS" [Undecided,In progress] https://launchpad.net/bugs/1797386
<frickler> jamespage: oh, good point, I didn't notice that, thx
<frickler> jamespage: I tested locally but only after devstack had run, so probably had the same pip installation state
<jamespage> frickler: most likely
<jamespage> frickler: my experienced ubuntu bug triage eyes automatically zone in on anything starting with /usr/local ;)
<jamespage> sahid: build failure on https://launchpadlibrarian.net/415128633/buildlog_ubuntu-disco-amd64.mistral_8.0.0~b2-0ubuntu1~ubuntu19.04.1~ppa201903141431_BUILDING.txt.gz
<sahid> jamespage: hum... weird they pass locally with pbuilder
<jamespage> sahid: looks like a racey test - don't worry about it
<jamespage> sahid: I think placement needs a dependency on python-os-resource-classes
<jamespage> sahid: ok all done apart from placement - python3-os-resource-classes was accepted into disco this morning
<jamespage> so you should be good to go on that one.
<geigerCounter> I have a question: Is there any reason to change my hostname through the cloud.cfg?
<geigerCounter> My VPS provider had it configured this way, but when I updated I switched to the new config and manually edited in their change. I'm just curious about it.
<blackflow> geigerCounter: change from what to what? one reason would be if you're running a mail server and want to ensure proper rDNS
<blackflow> (that rDNS matches the forward resolution)
<geigerCounter> blackflow: To the address of the VPS in their system, I guess. It's vpsXXXXXX.vps.ovh.ca, since I'm using OVH. ( With the X's being my VPS number. )
<blackflow> geigerCounter: isn't that the default? Aren't their VPS images configured that way?
<geigerCounter> Yes, it is the default for their VPS images. I was doing a manual update and it asked me if I wanted to install the package maintainer's version. After looking through the differences I did so and edited the config OVH did back in. I'm just here asking about why it is this way for my own education.
<blackflow> well I understood you were asking if there's a reason to change from default to something else, and I gave you one. It really boils down to how do you want your server to be named.
<geigerCounter> I see. I suppose I'd have to ask them if it's even allowed to change it. I would like to change it to my public DNS address.
<geigerCounter> I am running a mail system, for one thing.
<geigerCounter> Could that be a reason why Gmail's not sure about my mail?
<geigerCounter> I'd used a mail configuration testing service that reported no problems, but Gmail still issues a warning with mail sent from my server.
<blackflow> geigerCounter: you don't have to ask OVH, you can change it to whatever you want. Note that there are two facets to that. One is the hostname configured on the machine itself. The other is rDNS for its IPs, configurable in OVH's panels.
<blackflow> you can only affect your local hostname on the machine itself. it's the name various systems get when they ask the OS "hey, what's my hostname here".
<blackflow> GMail will complain if you don't have rDNS, and it's wise to have rDNS and hostname the same.
<geigerCounter> I see. I'll set it to what I need and then I'll check out the rDNS in my OVH panel. While I'm here, does anyone have a link to a good guide for setting up a startup script to run a program as a service on startup? ( How to write an init.d script, I guess? )
<blackflow> geigerCounter: it's a systemd unit file these days, far more simple than an init shell script. unless you're still running 14.04 or 12.04?
<geigerCounter> I think I'm running 18.04, but I'm at least running 16.04. So yeah.
<geigerCounter> I just haven't really done it before. I need to do so though, because I have too many things running on my VPS that I have to start manually on a reboot.
<blackflow> geigerCounter: ton of articles on on google. for example start here   https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files
<geigerCounter> Thanks.
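[Ed.: a minimal unit of the sort that article walks through might look like the sketch below; the service name, user, and path are all invented placeholders.]

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=My app
After=network-online.target

[Service]
Type=simple
User=myappuser
ExecStart=/opt/myapp/run.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving the file: `sudo systemctl daemon-reload && sudo systemctl enable --now myapp`.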
<geigerCounter> Is it possible to start one of these in interactive mode with screen or something so I can just su to the acc that's responsible for running it and interact with the console? ( I ask, because I have a Minecraft server. Worst case scenario, I can remote admin it through IRC or Discord with a plugin I have if it doesn't fail to launch when not in interactive mode. )
<blackflow> geigerCounter: sure   https://superuser.com/questions/1276775/systemd-service-python-script-inside-screen-on-boot
<blackflow> you can start anything from systemd, it's just the question of what, and how (service "Type").
<geigerCounter> I see! Very nice. Thank you so much.
<geigerCounter> Alright, so back to asking about reverse DNS since I've never really done anything with DNS other than registering my domain name and setting up the appropriate pointers to my IP and an SPF record - what do I want to set my reverse DNS to in my OVH panel? My domain name or something else?
<geigerCounter> I haven't been able to find the info I'm looking for - does Systemd always kill child processes when you stop a service?
<tomreyn> i don't actually know, but i would hope so. what makes you ask?
<tomreyn> geigerCounter: ^
<geigerCounter> tomreyn: Well I mean, yeah I guess it's the usual and sane behaviour, but I have a weird use case. I'm trying to make a Systemd Unit that launches a Minecraft Spigot server in a screen with the appropriate user I set up for that. That way all I have to do is su to that user and reattach to manage the console and the server will start on boot and can be controlled as any other system process. The thing
<geigerCounter> is getting it to gracefully shut down. Right now it's pretty brutally killed and that's most likely bad for the health and longevity of the server.
<geigerCounter> So the chain of events from systemd goes: Systemd -> screen -> BASH script -> Java running the spigot jar.
<geigerCounter> What I want to have happen on calling "service spigot stop" is that it uses screen to stuff a 'stop' command and wait for the java process to end normally as the server shuts down and then exit up from there with screen terminating as expected.
<geigerCounter> It kills the java process instead.
<geigerCounter> The only thing I can think to do to get around this is to make a bash script which stuffs the 'stop' to the java console and waits five minutes before exiting and set TimeoutStop to five minutes also.
<geigerCounter> But is there a better way?
<geigerCounter> ( Systemd novice here. )
#ubuntu-server 2019-03-15
<geigerCounter> tomreyn: ^
<mwhudson> geigerCounter: you can configure what systemd does to shut down a service
<geigerCounter> How do I do that beyond specifying it in ExecStop?
<mwhudson> geigerCounter: there is also KillMode
<geigerCounter> mwhudson: Tell me more/link me to docs?
<mwhudson> geigerCounter: man systemd.kill
<mwhudson> geigerCounter: but i think execstop is what you want here
<mwhudson> geigerCounter: you need your execstop script to wait for the process exit though
<mwhudson> hm sounds like you are trying that
<mwhudson> so what you are doing _should_ work, i don't know off hand why it would not be
<geigerCounter> Yeah.
<geigerCounter> I don't either really. Today is my first day ever making a systemd unit.
<geigerCounter> What happens when I use service stop is the java process just dies and then since it has no more child processes, screen exits normally. This isn't the expected behaviour.
<tomreyn> geigerCounter: sorry, i lost track there.
<geigerCounter> It should push 'stop\n' to the screen and the server's shutdown sequence should begin.
<geigerCounter> Instead of just dying.
<blackflow> geigerCounter: did you look at the KillMode option?
<geigerCounter> Looking at it now on the suggestion of mwhudson
<blackflow> also, that's not really the usual or normal way services behave. essentially you want systemd to start your screen as a oneshot "service" and forget about it. then you deal with your screen and minecraft manually.
<blackflow> this situation has been asked about before and the only answer is: open a bug report to minecraft devs and have them build proper service management controls into the daemon, namely responding properly to TERM or QUIT signals.
<geigerCounter> There is no daemon. It was never intended to be run this way. It's intended to be run as an interactive console application using Java's console tools with stdin and stdout.
<geigerCounter> I want to run it *as* a daemon, using screen.
<blackflow> even systemd, the greatest controversy since Linus began his hobby, expects services to understand these signals.
<geigerCounter> Although yes, it should learn to respond to signals correctly.
<geigerCounter> >_<
<blackflow> geigerCounter: perhaps you can whip up some stdin command injection from a shell script exec'd via ExecStop
<geigerCounter> That's basically what I'm doing using screen's "stuff" command
<blackflow> use KillMode=none so it doesn't care about killing unresponsive processes (which this essentially is)
<tomreyn> maybe think of some other (more common, maybe in ubuntu) java processes which might have the same issue and see whether a better solution was found there.
<geigerCounter> Which allows you to "stuff" characters into a detached screen as though you were typing it in yourself.
<geigerCounter> But the problem is systemd isn't paying attention to what I asked ExecStop to do.
<geigerCounter> Will try that and report back.
<blackflow> systemd will normally send a KillSignal (TERM by default) and if the process doesn't quit in TimeoutStopSec, it'll KILL it and all the kids. modulo KillMode and ExecStop existing
<tomreyn> sounds sane to me.
<geigerCounter> Oh I see. So that's probably part of the problem. I have my timeout set too low.
<blackflow> geigerCounter: read the systemd.kill manpage
<geigerCounter> I... rookie mistake.
<geigerCounter> I'm reading it.
<geigerCounter> Thank you.
<geigerCounter> :)
<blackflow> geigerCounter: also the first paragraph for ExecStop= entry in systemd.service(5) manpage
<blackflow> it explains what's going on exactly, and which value of KillMode you need (none)
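[Ed.: pulling the advice in this thread together, the unit might end up something like the sketch below. The session name, user, and paths are invented, and the `stop.sh` helper is hypothetical; note also that newer systemd deprecates KillMode=none in favour of relying on ExecStop alone.]

```ini
# /etc/systemd/system/spigot.service -- hypothetical sketch
[Unit]
Description=Spigot Minecraft server in a screen session
After=network.target

[Service]
Type=forking
User=minecraft
ExecStart=/usr/bin/screen -dmS spigot /opt/spigot/start.sh
# stop.sh stuffs the console command and returns, e.g.:
#   screen -S spigot -p 0 -X stuff "stop$(printf '\r')"
ExecStop=/opt/spigot/stop.sh
# Don't signal the process tree; let ExecStop plus the timeout do the work:
KillMode=none
TimeoutStopSec=300

[Install]
WantedBy=multi-user.target
```

With this, `systemctl stop spigot` types `stop` into the Spigot console and waits up to five minutes for the JVM to exit on its own.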
<lordievader> Good morning
<kstenerud> cpaelzer I got a build failure for amd64 (but not the other archs), but it wasn't  a build error. The build process and tests completed, then
<kstenerud> Build killed with signal TERM after 150 minutes of inactivity
<kstenerud> E: Build failure (dpkg-buildpackage died)
<cpaelzer> kstenerud: well this could be a real issue (background processes hanging or such)
<cpaelzer> kstenerud: but TBH most likely it is something awkward, you can ask the ops if something known happened or just hit rebuild on your build
<kstenerud> ok
<ahasenack> good morning
<kstenerud> ahasenack: I'm just not sure what would cause this kind of issue... It's a bug where php was closing file descriptors before a call to curl finished
<kstenerud> and if it's testing whether a crash occurs or not, why would the entire testing rig die?
<ahasenack> kstenerud: I'd try to run that test in isolation
<ahasenack> kstenerud: well, it dies due to a timeout
<ahasenack> side "A" is talking to side "B", side "B" crashes, side "A" doesn't notice and keeps waiting
<kstenerud> It dies like this:
<kstenerud> TEST 3443/14261 [ext/curl/tests/bug48203_multi.phpt]
<kstenerud> E: Caught signal 'Terminated': terminating immediately
<kstenerud> Then the calling process times out
<ahasenack> check that test in isolation, is my suggestion
<kstenerud> yeah I'm running an sbuild locally. If it crashes I'll shell in and see what's up
<frickler> jamespage: coreycb: would it be possible to get an updated erlang version into UCA for xenial? like possibly the one from bionic? see the latest comments in https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1783203
<ubottu> Launchpad bug 1783203 in rabbitmq-server (Ubuntu) "Upgrade to RabbitMQ 3.6.10 causes beam lockup in clustered deployment" [Undecided,Confirmed]
<kstenerud> So when I run the build locally, it works :/
<JuJUBee> Hey all.  I want to setup internal dns in my classroom.  I have an LTSP server running and dnsmasq is installed.  I have a separate server acting as my gateway running isc-dhcp server.
<JuJUBee> I ultimately want to be able to access servers using fqdn internally like LTSP-Server.foo.local
<JuJUBee> I know very little about setting up dns.  Any help?
<lordcirth> JuJUBee, you can either set up DNS on your gateway, or on your LTSP server. Then set isc-dhcp to advertise the DNS server's IP
<JuJUBee> lordcirth, dhcp currently advertising 2 dns servers (open dns), can I just add the third IP of my dnsmasq machine?
<lordcirth> JuJUBee, you could, but if you want them to always use your DNS server, you should probably set only that.
<JuJUBee> lordcirth, but if I only want to get to public sites, don't I need external dns servers?
<lordcirth> JuJUBee, your DNS server should be configured to use those external DNS servers itself, for anything it doesn't manage.
<JuJUBee> So this seems far beyond my expertise.  I was hoping it would be a fairly simple configuration
<leftyfb> JuJUBee: the link I gave you shows you how to setup bind9
<lordcirth> JuJUBee, it is fairly simple. DHCP points clients to your dnsmasq. dnsmasq answers for .local, and for everything else, does a lookup to the outside and caches it.
<JuJUBee> leftyfb, bind kind of intimidates me.
<JuJUBee> lordcirth, I need to see how to do that in dnsmasq.  I will read on it.
<leftyfb> JuJUBee: and it will continue to intimidate until you try it, then it won't :)
<JuJUBee> leftyfb, I do have that link open and have been reading it along with dnsmasq setup.  dnsmasq seems more appropriate for my simple needs.  I just don't want to misconfigure bind and let the nasties in or prevent appropriate requests from getting out...
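[Ed.: a dnsmasq setup along the lines lordcirth describes can be only a few lines; the domain, IPs, and file path below are made up, and as sdeziel points out later in the discussion, a suffix other than .local is the safer choice.]

```conf
# /etc/dnsmasq.conf -- hypothetical classroom setup
domain=classroom.lan          # local domain suffix
local=/classroom.lan/         # answer these names ourselves, never forward
expand-hosts                  # append the domain to names from /etc/hosts
server=208.67.222.222         # forward everything else upstream (OpenDNS)
server=208.67.220.220
listen-address=192.168.1.10   # made-up LAN IP of this DNS server
```

The DHCP server then advertises 192.168.1.10 as the only resolver, and dnsmasq handles caching and upstream forwarding itself.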
<sdeziel> abusing .local isn't a good idea but maybe that was just a made up example
<leftyfb> JuJUBee: this DNS server should NOT be accessible to the internet. Regardless of which solution you go with.
<leftyfb> sdeziel: abusing?
<sdeziel> leftyfb: .local shouldn't be used for internal purposes other than mDNS
<leftyfb> sdeziel: why is that?
<sdeziel> leftyfb: https://serverfault.com/questions/17255/top-level-domain-domain-suffix-for-private-network/937808#937808
<sdeziel> leftyfb: systemd-resolved chokes on .local names unless you configure it in a special way
<leftyfb> sdeziel: it's been working on my local network for about a year now
<sdeziel> leftyfb: do you also provide something.local as your search domain?
<leftyfb> yes
<sdeziel> leftyfb: that's why
<sdeziel> leftyfb: this enables the special handling of .local by systemd-resolved
<sdeziel> leftyfb: but normally .local is reserved for mDNS
<leftyfb> oh wait, sorry. No. I only append .local
<leftyfb> so <hostname>.local
<leftyfb> works just fine on my network. I can resolve everything locally
<sdeziel> leftyfb: yeah, as long as local is in the search domain, systemd-resolved will try to accommodate for this abuse
<sdeziel> leftyfb: as an experiment, drop local from your search domain, restart systemd-resolved then try to resolve <hostname>.local, should fail
<leftyfb> of course it will. Unless I have avahi/bonjour setup on every client
<sdeziel> in theory, resolvers shouldn't attempt mDNS resolution when there are 2 labels with the last one .local but that is not implemented everywhere
<sdeziel> that's why it's best to leave .local and everything under it reserved to mDNS
<sdeziel> leftyfb: you said of course it would break without the search domain but that's only true for the .local domain... any other domain would have kept working
<leftyfb> sdeziel: how would that work? If you have a local .home as your local domain and you try to ping hostname.home without having home as a search, how would it know to append that tld?
<sdeziel> leftyfb: when you "ping hostname.home" no search label is appended because you provided one already
<sdeziel> https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html then search for ".local" it explains it with all the details you'd want :)
<leftyfb> sdeziel: and you said in the test, remove the search label. Obviously it would fail when you do that
<sdeziel> leftyfb: the search domain is used to tell systemd-resolved that you are abusing .local and to back down on mDNS
<leftyfb> oh sorry, my post above was wrong. %s/ping hostname.home/hostname/g :)
<sdeziel> > Note that by default lookups for domains with the ".local" suffix are not  routed to DNS servers, unless the domain is specified explicitly as routing or search domain for the DNS server and interface
<sdeziel> from the above link
<sdeziel> leftyfb: well, if you refer to the shortname, then of course it requires a search domain ;)
<sdeziel> > Note that today it's generally recommended to avoid defining ".local" in a DNS server, as RFC6762 reserves this domain for exclusive MulticastDNS use.
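[Ed.: the "search domain" escape hatch sdeziel describes is configured roughly as below; the domain and DNS server IP are made up. Per the quoted systemd-resolved docs, listing the domain as a search/routing domain is what routes .local lookups to DNS instead of reserving them for mDNS.]

```yaml
# /etc/netplan/01-netcfg.yaml -- hypothetical fragment
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      nameservers:
        addresses: [192.168.1.1]     # made-up local DNS server
        search: [something.local]    # opts this domain out of mDNS-only handling
```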
<leftyfb> I'll think about changing it to something else at home :)
<sdeziel> it's worth it if you can :)
<leftyfb> after I figure out why my Unifi USG likes to disallow outbound DNS traffic around 6/7am requiring a reboot a few times a week :)
<sdeziel> If only I had that insight years ago, I wouldn't have to leave with that damn foo.local at a client site ;)
<leftyfb> A fresh install of Ubuntu server 18.04.2. sudo apt install mysql-server. I run sudo mysql_secure_installation. I select No for everything except reloading the privilege tables and set my password. I cannot run: "mysql -uroot -p<password>" to successfully login. What am I missing?
<sdeziel> leftyfb: are you able to connect with mysql --defaults-extra-file=/etc/mysql/debian.cnf ?
<leftyfb> negative
<leftyfb> https://pastebin.ubuntu.com/p/Tr9Q8XdTGX/
<leftyfb> also fails if I specify credentials
<leftyfb> also of note, I can login just fine with sudo and no credentials and every time I run mysql_secure_installation it says it's running with a blank password. It seems that utility isn't actually setting our password?
<sdeziel> leftyfb: I just tested in a fresh container and cannot reproduce the root auth failure
<leftyfb> I've been "just testing" it all morning with fresh installs from the same flash drive I used for several installs of other types of servers since yesterday.
<sdeziel> https://paste.ubuntu.com/p/NDMV5D3YmJ/ root doesn't have any password despite having been asked for one by mysql_secure_installation, weird
<leftyfb> yeah, that's a problem
<rbasak> I don't know about mysql_secure_installation
<rbasak> But you get socket based auth by default on the default install.
<leftyfb> what does that mean for credential based auth?
<rbasak> I'm not sure.
<rbasak> I'd have to read the manual, etc.
<rbasak> It seems odd to be using mysql_secure_installation to me.
<sdeziel> leftyfb: by default root has access through the Unix socket, without providing any password
<rbasak> I was under the impression the maintainer scripts did the right thing, and there was no need to run it. I could be wrong though.
<leftyfb> sdeziel: as in, the root user. Not supplying root as the user on the command line as a non-root user
<sdeziel> leftyfb: correct, the root user which is why it worked for you with sudo
<leftyfb> ok, and if I want to authenticate using a php script?
<rbasak> For the root user?
<leftyfb> like I'm doing with the web app/db I'm trying to migrate from 16.04 to 18.04
<leftyfb> rbasak: yes, for the mysql root user
<rbasak> That seems dangerous. But if you insist, you'll have to set a root password. I'm not sure about how that interacts with socket auth (check the docs)
<sdeziel> rbasak: it uses the auth_socket plugin
<sdeziel> I'm assuming it's checking the UID of the user opening the Unix socket
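[Ed.: the auth_socket situation can be inspected, and root switched to password auth if you insist, with a couple of statements run via `sudo mysql`; the password below is a placeholder.]

```sql
-- Show which auth plugin each account uses (root defaults to auth_socket):
SELECT user, host, plugin FROM mysql.user;

-- Switch root to password auth (placeholder password):
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'mypass';
FLUSH PRIVILEGES;
```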
<leftyfb> https://bugs.launchpad.net/ubuntu/+source/mysql-5.7/+bug/1610574
<ubottu> Launchpad bug 1610574 in mysql-5.7 (Ubuntu) "mysql_secure_installation/mysqladmin cannot set password when auth_socket is enabled" [Low,Triaged]
<sdeziel> a stock install: https://paste.ubuntu.com/p/t4JRNffHjR/
<leftyfb> looks like I have a workaround I can try
<rbasak> "Also note that mysql_secure_installation is largely redundant for a fresh 5.7 installation"
<leftyfb> redundant how? What's the replacement?
<rbasak> What are you expecting it to do for you?
<leftyfb> set the root password
<rbasak> So just set the root password then: sudo mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH 'mysql_native_password' BY 'mypass';"
<leftyfb> there's other things you can config as well, but that's my main use case
<rbasak> (frmo the bug)
<leftyfb> yeah, I'm going to try that now on another fresh install
<rbasak> Perhaps we should stop shipping mysql_secure_installation to avoid misleading users into thinking it's useful.
<rbasak> Though that might be overkill because there are users who use mysql without the maintainer script management.
<rbasak> BTW, you can do that if you want. Just use mysql-server-core-5.7 and operate mysql yourself directly.
<leftyfb> That works
<DammitJim> does ubuntu support php 7.0? I think 7.0 is eol
<sdeziel> DammitJim: on 16.04, yes
<DammitJim> TY
<sdeziel> DammitJim: upstream declared it EOL but Canonical will keep backporting security fixes to it, like it does for 5.5 on previous Ubuntu versions
<DammitJim> ty
<DammitJim> how do I determine if canonical supports a piece of software
<DammitJim> say, I'm looking at the fact that I installed nginx
<DammitJim> how do I know if it came from main or universe or a different repo?
<lordcirth> DammitJim, apt show nginx | grep Source
<DammitJim> so, multiverse is NOT supported by Canonical, right?
<lordcirth> correct. Also things in multiverse usually have licensing issues
<DammitJim> actually, I just looked at say mongodb and we are using the mongodb repo
<DammitJim> what a mess
<lordcirth> Nothing wrong with using upstream repos, as long as they are well supported.
<lordcirth> I have HAProxy 1.7 from PPA in production.
<DammitJim> but 1.7 is not end of life
<DammitJim> I'm trying to keep track of what software needs to be upgraded because it's going to be end of life
<lordcirth> Oh I see
<lotuspsychje> ubuntu-support-status --help
<rbasak> lordcirth: I disagree. Using third party apt repositories is fundamentally broken and can break your system, even if well maintained.
<rbasak> But I appreciate that many people use them anyway.
<lordcirth> rbasak, so is not having the software you need.
<rbasak> Sure
<rbasak> But they should understand the risks.
<lordcirth> It's more stable to run an Ubuntu LTS + upstream-maintained stable release than any other option.
<lordcirth> I am happy that HAProxy 1.8.8 is in 18.04, so I don't need a PPA anymore
<rbasak> In particular I'd strongly recommend against attempting a release upgrade if a third party package has been on the system.
<lordcirth> I just don't do release upgrades. nuke and pave
<rbasak> The other thing that goes wrong is experimentation, so in production it's essential to have prepared an automated deployment
<DammitJim> ubuntu-support-status --help?
<DammitJim> OMG
<DammitJim> that's huge!
<lordcirth> Yeah, pretty cool
<sdeziel> ubuntu-support-status reports odd things here. libwoff1 is said to be supported by Canonical for 3 years yet it's in main
<lotuspsychje> which ubuntu version are you on, sdeziel?
<lotuspsychje> same on bionic here: libbrotli1 libwoff1 3y
<sdeziel> lotuspsychje: 18.04
<sdeziel> lotuspsychje: yeah, same 2 packages in main that are reported as supported for 3y only (instead of 5y)
<sdeziel> oh I see what happens
<sdeziel> those packages started in universe then were MRE after 18.04 release
<lotuspsychje> ah
<sdeziel> still, I don't see why they wouldn't get the full 5y support, looks like a bug in ubuntu-support-status
<lotuspsychje> if you report, ill affect :p
<sdeziel> how nice of you :P
<lotuspsychje> i just want a cookie
<sdeziel> lotuspsychje: https://bugs.launchpad.net/ubuntu/+source/update-manager/+bug/1820329
<ubottu> Launchpad bug 1820329 in update-manager (Ubuntu) "ubuntu-support-status wrong support period for packages that were MRE during a release's life" [Undecided,New]
<lotuspsychje> sdeziel: affected mate
<lotuspsychje> sdeziel: you think it would differ with someone with the hwe kernel?
<sdeziel> lotuspsychje: I wouldn't expect it to be different for those 2 libs but maybe for the hwe packages themselves since they've never been into bionic/main, only bionic-updates/main
<lotuspsychje> kk
<DammitJim> OMG, the things one doesn't know
<DammitJim> ignorance is bliss!
<DammitJim> I had no idea vim-nox was no longer supported!
<sdeziel> DammitJim: I too was using vim-nox but have since moved to plain vim that's in main
<lordcirth> I use neovim
<DammitJim> new vim?
<DammitJim> neovim?
<DammitJim> hhhmmm
<lordcirth> It's good. Some features have actually been backported to main vim.
<DammitJim> is eol of ubuntu 14 at the beginning or end of APril?
<sdeziel> DammitJim: April 25th, 2019
<DammitJim> is it always on the 25th?
<sdeziel> Canonical seems to have a thing for Thursdays but other than that, I think the EOL date can vary
<sdeziel> DammitJim: 14.04 will not be officially EOL but will transition to ESM (paid support)
<blackflow> Watched Popey's talk on snaps @ SCALE 17x. I couldn't disagree more on reasoning not to allow custom snap repos. What if a company wants to package their in-house, not publishable, apps as snaps in order to have a homogenous toolset and not a quagmire of snaps and dockers. Isn't that enough of a reason to allow custom repos?
<lordcirth> blackflow, well, what was his reasoning?
<popey> I only gave one reason
<JanC> who'd want to use snap on a (production) server anyway?
<blackflow> lordcirth: that allowing so would then cause the same issues PPAs have|had -- undiscoverability of programs published in custom repos
<popey> It's entirely possible to do the use case you describe
<popey> https://docs.ubuntu.com/snap-store-proxy/en/
<blackflow> popey: yes but not with a repo, it'd have to be uploaded manually and handled --dangerous'ly
<tomreyn> sdeziel: in bug 1820329 , do you actually mean MIR, not MRE (which was replaced by SRU, as far as I can tell)?
<ubottu> bug 1820329 in update-manager (Ubuntu) "ubuntu-support-status wrong support period for packages that were MRE during a release's life" [Undecided,Confirmed] https://launchpad.net/bugs/1820329
<sdeziel> tomreyn: right, I wanted to say MIR
<blackflow> popey: btw the vid linked from LINUX Unplugged, has bad audio and is missing segments. Is there a better one of your talk?
<popey> not that I'm aware of
<popey> I didn't even know it was streamed / recorded until afterwards.
<lotuspsychje> sdeziel: a member tested it on 19.04 also with this result: libbrotli1 libwoff1 is shown as supported for 9 months (Jan 2020)
<tomreyn> sdeziel: thanks for clarifying, i was just on a journey trying to understand what those different abbreviations mean (and got help there, so now i know)
<sdeziel> lotuspsychje: 19.04 as a whole has 9 mo of support
<lotuspsychje> sdeziel: allright, so not relevant for this bug right?
<blackflow> popey: ah, k.  btw that link is about proxying the official snap store, or am I missing something in that?  I was thinking about a completely custom store a company could use in-house on their servers
<sdeziel> lotuspsychje: I don't think so
<blackflow> popey: good talk btw (even though I disagree on that little point about snap store)
<popey> blackflow: that's right. the proxy along with a brand store effectively gives you that
<popey> thanks
<popey> We have customers who have an enterprise proxy as a frontend to their own brand store, which they control the content of
<DammitJim> this might not be an ubuntu question, but more of a server question
<DammitJim> is there anything I should consider with regards to a max amount of services that a server runs?
<DammitJim> I have RAM and CPU if needed
<DammitJim> these are java processes
<DammitJim> any input from any of you?
<lordcirth> DammitJim, resource contention, effect of a reboot, security
<blackflow> popey: is that available to non-advantage users? I can't seem to find anything about it in those docs
<sdeziel> DammitJim: depends on the -Xms/-Xmx params you want to give to those JVMs I guess
<JanC> that depends on how many resources those services need
<lordcirth> DammitJim, are these diverse services, or just a bunch of instances of the same?
<DammitJim> diverse, but similar in business logic
<lordcirth> DammitJim, Consider running them all in unprivileged containers with resource quotas
<lordcirth> Then you can reboot them independently, they can't spin out of control and eat all resources, etc
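lordcirth's suggestion could look like the following with LXD. This is a dry-run sketch: the `run` wrapper only echoes the commands, and the container name `app1` and the limit values are made up for illustration.

```shell
# Dry-run wrapper: prints each command instead of executing it.
# On a real host with LXD, redefine it as run() { "$@"; } to execute.
run() { echo "+ $*"; }

run lxc launch ubuntu:18.04 app1            # unprivileged by default in LXD
run lxc config set app1 limits.cpu 2        # cap the container at 2 CPUs
run lxc config set app1 limits.memory 2GiB  # and at 2 GiB of RAM
run lxc restart app1                        # reboot one service independently
```

With per-container quotas, one runaway JVM can exhaust its own container but not the host.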
<DammitJim> thanks... those are all good points
<popey> blackflow: the proxy is installable - it's a snap ;). There are indeed projects which have brand stores, which don't have UA
<lordcirth> We use LXC containers on big IAAS hosts, all controlled by Salt. works pretty well.
<DammitJim> when you say unprivileged containers, do you mean their own JVM?
<DammitJim> I'm using salt, too, but not to that level
<lordcirth> DammitJim, no, linux containers, ie LXC / docker
<JanC> OS containers
<lordcirth> unprivileged LXC containers means that uid 0 (root) in the container actually maps to uid 100231 on the host, who doesn't exist and has no permissions.
<JanC> https://en.wikipedia.org/wiki/Container_(virtualization)
<lordcirth> privileged containers have no such mapping, so any hostile or compromised process that is root in the container can trivially break out
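The uid mapping lordcirth describes comes from /etc/subuid; a minimal sketch of the arithmetic. The base offset 100000 is assumed from a typical `root:100000:65536` entry and varies per system (lordcirth's example landed on 100231).

```shell
# Typical /etc/subuid entry: root:100000:65536
# meaning container uids 0..65535 map to host uids 100000..165535.
base=100000
range=65536

container_uid=0                     # root inside the container
host_uid=$((base + container_uid))  # an unprivileged, nonexistent uid on the host
echo "container uid $container_uid -> host uid $host_uid"
```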
<blackflow> lordcirth: Salt control on a completely isolated vlan, or is it publicly accessible? We rent servers with several providers and thus the control channel would have to traverse public networks. Something I am totally not comfortable with.
<DammitJim> oh, these don't run as root either
<lordcirth> blackflow, in this case, an internal, though not immensely locked-down, network
<DammitJim> blackflow, ours is isolated vlan
<blackflow> yeah
<lordcirth> DammitJim, yes, but if someone exploits your java app, then escalates to root, they will only be root in the container.
<lordcirth> blackflow, the only weakness Salt potentially has is when a minion first connects to the master - it trusts the master's key on first connect. If you don't trust the network, preseed all your minions with the master key, and no more holes.
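Preseeding the master key as lordcirth suggests means placing the master's public key in the minion's PKI directory before first connect. A sketch using a scratch directory in place of /etc/salt/pki/minion; the key contents here are obviously fake, and in reality you would copy the master's public key over a trusted channel.

```shell
# Scratch dir standing in for /etc/salt/pki/minion on the minion.
pki_dir=$(mktemp -d)

# Placeholder for the real key fetched from the master out of band.
echo "fake-master-public-key" > "$pki_dir/master.pub"

# Salt looks for minion_master.pub and rejects a master whose key differs,
# closing the trust-on-first-connect hole.
install -m 644 "$pki_dir/master.pub" "$pki_dir/minion_master.pub"
ls -l "$pki_dir/minion_master.pub"
```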
<blackflow> lordcirth: it's not that I'm worried about so much, it's the fact that the master would be a publicly accessible server -- because we rent them. I mean that one machine would be the gateway to all the servers. One point to compromise and then everything is.
<lordcirth> blackflow, ah, you are worried about the master being compromised. Well, one thing you could do is require a VPN to reach the master.
<lordcirth> It would take a bit more preseeded setup, but you could require new minions to establish an 2-way auth'd VPN, then connect to Salt over it.
<blackflow> lordcirth: it's still one publicly accessible port. one vuln in the network stack and poof...
<blackflow> but then.... the same vuln could be used on all the servers individually... moot point. I worry too much.
<lordcirth> blackflow, that vuln in the network stack would affect all of your minions...
<blackflow> yeah
<DammitJim> we worry too much
<lordcirth> There's only so much you can do while being connected to the internet :)
<DammitJim> and you know what the worse part about that is? the day you get tired of worrying ('cause that happens to me), that day is when I make very poor decisions
<lordcirth> But props for being paranoid :)
<blackflow> yah :)
<lotuspsychje> healthy paranoia is good
<DammitJim> healthy paranoia is good, but it's not easy to achieve... gotta make sure to have a good balance
<blackflow> that balance is hardest to achieve
<blackflow> at the very least I managed to convince everyone in the company that public clouds and VPS'es are a big no-no, in the post Meltdown+Spectre world.
<supaman> talking about being paranoid ... I am taking over a system (a few virtual ubuntu servers) where the prev sysadmin did a apt upgrade without waiting a while and checking the packages for bugs or doing any kind of test ... would you continue that practice? :-)
<DammitJim> that's a good question... how does one deal with that supaman ?
<supaman> depends on how paranoid you are ;-)
<blackflow> with ZFS snapshots of course.
<DammitJim> I'm not for that kinda stuff
<DammitJim> we do run the production environment in a virtual lab and test our applications
<DammitJim> but I don't go looking for bugs in the updates
<lordcirth> I can't remember the last time that an apt upgrade broke something. Once or twice a full-upgrade kernel update did, but you need to reboot to notice that, and then you just reboot back to the old one
<Ussat> you dont need ZFS snaps if you have good backups
<lordcirth> Ussat, best thing is to take ZFS snapshots, and then backup the snapshots. Prevents smearing (I forget if there's a more common term)
<Ussat> naaa...we dont and wont use zfs
<lordcirth> The snapshots are instant and make sure your backup doesn't get different files at different times as it chugs along
<Ussat> no need
<lordcirth> Ussat, so, how do you detect bitflips?
<Ussat> in all the years I have been doing this, I have NEVER had an issue with that
<lordcirth> Ussat, that you know of
<Ussat> That I am sure of
<lordcirth> So, if a bit flipped somewhere on one of your drives, you are quite sure you would get an email about it?
<lordcirth> Just a few months ago I suddenly got 30 checksum errors on each of 3 drives in a raidz. No idea why, maybe a power surge. Didn't matter, ZFS fixed it all
<lordcirth> Granted that wasn't on enterprise hardware, but that's just a difference of degree.
<Ussat> I dont use non-enterprise HW at work
<Ussat> "just" a difference of degree.....
<lordcirth> so you are less likely to get bitflips. Not 100% assured.
<Ussat> nothing is ever 100%...ever
<lordcirth> ZFS decreases your odds of a bitflip by several orders of magnitude, which is enough for me. trusting hard drives isn't good enough for me.
<Ussat> all my data I am concerned about is stored o a SAN, if a VM image is corrupted, no big deal.....
<Ussat> We use a real enterprise SAN, not a bunch of commercial disks off the shelf with a fancy FS on it
<lordcirth> I trust ZFS a lot more than a "real enterprise SAN" blackbox
<Ussat> to each their own
<Ussat> I dont
<lordcirth> Not to mention the cost and vendor lock-in
<blackflow> lordcirth: hear hear
<blackflow> guess some people are too lucky and never had a failure
<Ussat> Oh I have had failures...sure......none of them crippling
<lordcirth> Obviously a good SAN is better than what a lot of people do...
<Ussat> My systems are very critical.....
<lordcirth> Some people run production data on commercial drives with just RAID.. ow.
<Ussat> a lot of people use a good SAN
<blackflow> ZFS snapshots (sent offsite) are godsent for backups. quick, atomic, with data integrity checks.
<Ussat> Not in my industry they dont
<blackflow> "a good SAN" is not replacement for offsite backups. ZFS (or any other CoW) is orthogonal to SAN. Your SAN can be powered by ZFS. So I don't get your point.
<lordcirth> A SAN could be powered by ZFS, but are generally powered by blackbox magic
<Ussat> Who said I dont have offsite backups ?
<Ussat> I never said that, I have a very robust backup solution w/offsite backups
<lordcirth> Yes, I assumed you did.
<blackflow> well you seem to have put ZFS in a opposition to SANs.
<lordcirth> blackflow, he just said he doesn't need ZFS because he has a fancy SAN. Doesn't mean you couldn't somehow have both.
<blackflow> which is wrong really, they're orthogonal. So is this: "20:58 < Ussat> you dont need ZFS snaps if you have good backups"  --- but ZFS snapshots _are_ good backups, when you `zfs send` them offsite
<blackflow> doesn't make sense to put ZFS snapshots and good backups in opposing relationship.   that's like saying "You don't need a filesystem if you have good backups"
<Ussat> incorrect, but thats ok you do what you want, I do what we do
<JanC> and you would likely want to use some sort of snapshot on that SAN too...
<Ussat> JanC, the SAN is mirrored between sites
<JanC> to make backups
<Ussat> NO...not to make backups
<Ussat> I am not going to explain how our san enterprise works...it works quite well and is very efective
<JanC> so how do you guarantee data consistency on you backups?
<Ussat> Its not just a bunch of off the shelf disks and consumer grade PCs ....
<blackflow> Ussat: not sure which part is incorrect. perhaps you don't know what ZFS is? a snapshot is a backup per se. it's a copy of data you can revert to. you can restore your data from it. it becomes _good_ backup when you ship it offsite. you might not like it, that's okay. use something else, fine, but how is all this "incorrect"?
<Ussat> per se
<JanC> a snapshot is not a backup
<Ussat> ^^^
<JanC> but you need it to make consistent backups
<blackflow> yes it is, it's just not good enough if it's local
<blackflow> you can ship it elsewhere and it's still a zfs snapshot. in a physically different location.
<blackflow> maybe y'all don't know about the `zfs send` feature. it's made to send snapshots to external pools. and `zfs receive` to bring them in from external pools. how's that not a backup.
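The snapshot-then-ship flow blackflow describes, as a dry-run sketch (the wrapper echoes instead of executing; the pool, dataset, snapshot, and host names are made up):

```shell
# Dry-run wrapper so the sketch is safe to paste anywhere.
run() { echo "+ $*"; }

run zfs snapshot tank/data@nightly   # atomic, near-instant point in time
# -i sends only the delta since the previous snapshot; piping the stream
# over ssh to `zfs receive` on another pool gives you the offsite copy.
run "zfs send -i tank/data@prev tank/data@nightly | ssh backuphost zfs receive backup/data"
```

Because the snapshot is atomic, the backup can't "smear" across files changing mid-run, which is the property lordcirth mentioned earlier.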
<JanC> I know what it is, and when you do that you have a proper backup indeed
<Ussat> I am quite aware of them, and we looked at them and rejected them
<lordcirth> arguing over the semantics of what is and is not a 'backup' is irrelevant when you agree on what they are
<blackflow> it's okay if you don't like them. but fact remains they _are_ useful backups.
<Ussat> Not in our case
<lordcirth> What is special about your case that ZFS is not sufficient?
<Ussat> Correct
<Ussat> We evaluated it, and it doesnt meet our needs
<lordcirth> I am interested to know in what way ZFS fell short of your needs.
<blackflow> I'm dying to know too
<Ussat> I dont need to justify my decisions to you.......as soon as you start cutting my paycheck, then that is a different matter
<blackflow> lol
<Ussat> It did not meet our needs
<lordcirth> I don't see any reason to suddenly turn so hostile...
<Ussat> Thats hardly hostile
<blackflow> why do you think that's "justifying"? this is a discussion forum. if you don't like it you can always /part
<blackflow> but I know what it is. you just have no idea what you're talking about. so lashing out is best defense to hide ignorance. fine by me.
<lordcirth> blackflow, while that is possible, I don't think you have sufficient evidence to assume that.
<Ussat> Yes, yes I do, we evaluated it and it did not meet our requirements
<blackflow> oh I do. seen that type too often.
<blackflow> armchair "admins" who feel threatened when asked to explain their use case.
<Ussat> armchair admins...um sure
<blackflow> yup.
<JanC> Ussat: it would be useful to know in (roughly) what way it didn't meet your requirements
<Ussat> OK, and you are obviously the expert......you have no clue what industry or what the requirements are, but you can make that assumption. You seem to think that ZFS will meet all requirements
<lordcirth> Ussat, no, we asked what requirements it didn't meet, and you refused to answer
<lordcirth> Which usually means you don't know.
<shibboleth> any word yet as to this supposed horribad vulnerability that made scroogle and fakebook down their networks for "disruptive, lengthy, unannounced but totally planned and routine maintenance"?
<lordcirth> There are some requirements ZFS can't fulfill.
<Ussat> lordcirth, or which means it may be none of your business
<lordcirth> Ussat, and yet you are still here arguing about it?
<lordcirth> If you have an NDA, say so.
<blackflow> Ussat: you don't know how to read either. I said several times it's fine if you don't like it or if it doesn't work for you. I never said ZFS _must_ meet your requirements.
<Ussat> I am in a highly regulated industry, and ZFS did not meet the requirements
<blackflow> that's fine. so we asked what does.
<Ussat> How effective is ZFS on a streached cluster ?
<lordcirth> You mean as a distributed filesystem? Normally I would run Ceph on top of ZFS for that.
<Ussat> No I dont mean distributed FS
<Ussat> I mean streached cluster
<Ussat> behind SVC's
<blackflow> a stretched cluster (not "streached") is when two or more virt hosts are part of the same logical domain but located in physically different locations. that's not in ZFS domain at all
<lordcirth> Yeah, just looked that up. Seems like something you'd implement above ZFS.
<blackflow> under
<blackflow> or, well, depends on your strategies I guess.
<lordcirth> Either would probably work, yeah
<Ussat> Or maby, just maby we tested it and ZFS crapped out
<lordcirth> Ussat, combining ZFS with what?
<Ussat> and IDGAF about the spelling, I am dyslexic, correct away
<blackflow> I personally don't have need for it nor personal experience so I can't vouch. I do know of people who happily use ZFS with lustre and for that purpose exactly. zvol based virtualization like in a stretched cluster.
<Ussat> Oh please.....I dont need to continue to justify what we do
<blackflow> not justifying anything, just discussing your use case. that's what these public places are for.
<blackflow> nobody has brought you to court, judged you and pressed you to defend yourself.
<Ussat> the other issue was encryption at rest, ZFS, while it has it implemented, does not meet the standards we must meet
<blackflow> ZFS encryption is still highly experimental. We use LUKS under it
<Ussat> The time to encrypt/decrypt was horrendous
<blackflow> in our tests, with AESNI, the difference was ~2%, very much acceptable in our case
<Ussat> not ours
<lordcirth> Ussat, you were using ZFS 0.8 rc?
<Ussat> We are not using ZFS at all now
<lordcirth> Ussat, in your tests, I mean
<Ussat> I believe so, it was a while ago
<lordcirth> Because ZFS 0.8 rc hasn't been out for that long, and still isn't a stable release, so it seems odd to rely on it for your apparently critical use case.
<lordcirth> Or consider doing so, I mean
<Ussat> I said we are not using it, why do you think we are relying on it?
<Ussat> or why do you think we would ? I said we tested and decided it would not meet our needs
<lordcirth> Ussat, if you did the tests with ZFS encryption, then you must have been considering using it, no?
<Ussat> We considered it, yes, and put it aside
<lordcirth> It seems odd to run the tests only on 0.8 when it wouldn't be ready for your use case anyway, instead of testing, say, 0.7 on LUKS which would be.
<lordcirth> We have a Ceph cluster in production which is backed by ZFS on dm-crypted drives.
<lordcirth> Looking forward to 0.8 simplifying that, though.
<Ussat> We dont use software based storage
<Ussat> no plan to ever
<lordcirth> Well, I'm sure somebody will keep selling blackbox storage as long as big companies buy them.
<JanC> your SAN is software-based too
<Ussat> JanC, yes yes.....and all raid is software based also, even when it is on a raid card
<lordcirth> Well yes, but raid cards suck
<Ussat> Yes its software...
<Ussat> lordcirth, big companies buy them because they are very reliable, depending on what you buy
<Ussat> everything is software based, when it comes down to it.....
<lordcirth> Ussat, but what most people mean by 'software defined storage' is that it is commodity hardware assembled by good, flexible, tunable software into a good system.
<lordcirth> A big box that says "no user serviceable parts inside" uses lots of software, but it's not SDS
<blackflow> needn't even be commodity hardware
<lordcirth> It needn't be, but it's generally one of the selling points
<blackflow> lordcirth: yeah, the strength of ZFS (and BTRFS) is that you can use crappy commodity hardware with no risk to your data
<JanC> I wouldn't say that about btrfs...
<blackflow> JanC: well okay, but in theory at least :)
#ubuntu-server 2019-03-16
<super_koza> Hi!
<super_koza> I am having issues with SSH on my home server.
<super_koza> I can't SSH into it using the domain name, but I can reach it with the IP address.
<super_koza> On the other hand, I can access services using the domain name.
<super_koza> What could be going on?
<super_koza> I am using freeDNS for dynamic DNS.
<super_koza> Also Ubuntu Server 18.04.
<tomreyn> super_koza: when you run "ping -c1 domainname" (replace "domainname" by your domain name), does it resolve to the same ip address you would ssh to?
<super_koza> tomreyn: No, it doesn't.
<tomreyn> super_koza: so that's why, you're connecting differently.
<super_koza> The address that is listed in the freeDNS (afraid) is working for SSH. Domain name is working for services hosted on the server.
<tomreyn> super_koza: chances are that your domain name resolves to your routers' public ip address, which is generally fine. and that you ssh to your lan ip address. is this correct?
<super_koza> No.
<super_koza> I will send you the addresses...
<tomreyn> !pm
<ubottu> Please ask your questions in the channel so that other people can help you, benefit from your questions and answers, and ensure that you're not getting bad advice.
<super_koza> So it is no lan IP, but rather a public IP.
<tomreyn> i will need to whitelist you to receive private messages from you, i discard them by default. i don't really need to know the addresses, you could just explain which address ranges they are from, or how you connect differently.
<super_koza> Ok, I just didn't want to share them publicly.
<super_koza> So, the address listed in freeDNS is something like: 33.233.232.2
<tomreyn> what probably matters here is where you are connecting from, and which ip address ranges those different domain names (the 'main' domain name  and the subdomains) resolve to.
<tomreyn> ok, go on.
<super_koza> And the one that I get when pinging or SSHing is 69.197.18.182
<super_koza> I am connecting from a remote location.
<super_koza> So I am not inside the network.
<super_koza> Everything used to work, until a power outage occurred. I guess that is when the IP got changed, and everything got broken...
<tomreyn> hmm so two different public ip addresses both connect to your ssh server?
<super_koza> No...
<super_koza> The first one is the actual address I think.
<super_koza> The second one might be the old one, which doesn't work now.
<tomreyn> 33.233.232.2 = "the first one"?
<super_koza> Yes
<super_koza> That is the address that I see with my DynDNS provider.
<super_koza> And I can ping and ssh with it.
<tomreyn> so where do you get the 69.197.18.182 address from?
<super_koza> when I use the domain name, I can reach services hosted on it, like nextcloud and so on...
<super_koza> but when I ping with the domain name, I get that 69.197... address.
<super_koza> No idea how.
<super_koza> Maybe I could try clearing the DNS cache on my client PC.
<tomreyn> so the dns for this domain name, is it managed by your freedns service, or is it manually updated by you?
<super_koza> It is updated with a cron job
<tomreyn> okay, i give up, i'll whitelist you.
<super_koza> the current IP is there
<super_koza> I have to run now
<super_koza> is it OK if I write you later
<super_koza> ?
<tomreyn> i'll probably not be around then, maybe someone else will be
<tomreyn> but you can try.
<super_koza> Have you got my messages?
<tomreyn> no, they went nowhere. but you can PM me now.
<tomreyn> got them now.
<leftyfb> um
<leftyfb> super_koza: we already went through all this
<leftyfb> super_koza: that domain does not point to the correct ip. Plain and simple. contact the dyndns/domain provider for support.
<leftyfb> super_koza: cloud.<your.domain> does. But the top domain does not.
<leftyfb> That is why your nextcloud works
<leftyfb> super_koza: something at the provider forwards it
<tomreyn> right
<leftyfb> super_koza: If you had contacted your provider, they would have figured this out for you and probably even resolved it. It is in no way an issue with or related to Ubuntu.
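The mismatch leftyfb diagnosed can be seen by comparing the address the dynamic-DNS provider holds with what the apex name actually resolves to. A sketch using the two (sanitised) addresses from this exchange; on a real system you would fill both variables in from `dig +short`.

```shell
dyndns_ip=33.233.232.2    # what freeDNS currently holds (works for ssh)
apex_ip=69.197.18.182     # what the bare domain name resolves to (stale)

if [ "$dyndns_ip" != "$apex_ip" ]; then
    echo "stale apex record: DNS serves $apex_ip but provider holds $dyndns_ip"
fi
```

A subdomain like cloud.<domain> can still work in this state if the provider forwards it separately, which matches what super_koza saw.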
<leftyfb> an hour ago
<tomreyn> "You're going to help us, Mr. Anderson. Whether you want to or not."
<leftyfb> "Because I choose to"
<tomreyn> Yes, freedns.afraid.org is a service run by Joshua Anderson, who chose to provide an awesome free service and has been fighting the (DDoS) windmills of the Internet for 18 years now.
<tomreyn> oops, OT
#ubuntu-server 2019-03-17
<Fizzik> Hello, server 18.04 having nameserver issues. I have a static IP set and a static DNS IP, which is a local server (192.168.2.6); it shows up in systemd-resolve --status under global dns and the actual interface
<Fizzik> it seems as tho its pulling from /etc/resolv.conf
<Fizzik> which keeps getting overwritten on boot
<ChmEarl> Fizzik, /etc/resolv.conf is a symlink into /var/run/* unless you change it
<OerHeks> netplan
<Fizzik> netplan aint doing nothing
<Fizzik> lol
<Fizzik> no matter what i change
#ubuntu-server 2020-03-09
<lordievader> Good morning
<MJCD_> hey all
<MJCD_> i'm still on a CLI based quickmenu style thing
<MJCD_> a basic gui that probably implements menu.lst or w/e
<MJCD_> but for the command line, there were many pre-windows things like this
<MJCD_> searching it I only really find "install xorg"
<MJCD_> and that's not at all what i'm after
<thelounge7666> I'm still having an issue where systemd-networkd will not re-aquire an IP address if the link goes down, even after restarting the service. only a reboot works. any thoughts?
<rbasak> ahasenack: do we understand enough about bug 1865900 to mark it Triaged? I think we do?
<ubottu> bug 1865900 in apache2 (Ubuntu) "apache 2.4.29-1ubuntu4.12 authentication with client certificate broken" [Undecided,Incomplete] https://launchpad.net/bugs/1865900
<teward> rbasak: mind if I take a look at that?
<teward> because i've been doing client auth stuff regularly currently and might be able to provide insight/debugs
<teward> on ${MULTIPLE_SOFTWARE_ENDPOINTS}
<teward> rbasak: is it possible that the issue isn't Apache side?  (https://bugs.launchpad.net/ubuntu/+source/firefox/+bug/1834671 is linked by mdeslaur)
<ubottu> Launchpad bug 1834671 in firefox (Ubuntu Eoan) "TLSv1.3 client certificate authentication with renegotiation unsupported in browsers" [Undecided,Fix released]
<rbasak> teward: indeed
<rbasak> I just wanted it out of Incomplete status :)
<teward> rbasak: well if the issue is $CLIENT and not $SERVER then it's Invalid for Apache
<teward> not Triaged.
<teward> unless i'm missing something?
<teward> (has Apache2 been rebuilt against OpenSSL 1.1.1?)
<sdeziel> teward: yes, rebuilt after having TLS 1.3 support backported to Bionic
<ahasenack> rbasak: just saw your ping now
<ahasenack> teward: I think it's not an apache issue, unless we are willing to disable TLS 1.3
<ahasenack> which defeats the point of the SRU that brought it in
<teward> ahasenack: well for nginx
<teward> in the SRU we *disabled* NGINX TLS 1.3 by default by simply rebuilding against SSL
<ahasenack> well, apache had an sru specifically to enable tls1.3, with backported patches and all
<teward> yeah i'd nACK that
<teward> a similar SRU came in for NGINX
<teward> to enable 1.3 by default
<teward> and we NACK'd that
<ahasenack> that's the only remaining reason to have an apache task: if we want to revert that
<ahasenack> rbasak: ^
<ahasenack> if not, then we will need tasks for all clients that broke
<ahasenack> the fix for that could be to implement pha, or to not use tls1.3 when doing certificate auth
<teward> ahasenack: with nginx, we simply disabled TLS1.3 in the ssl_protocols - is there a way to do that in Apache, that is keep TLS1.3 support but disable it on the default available protocols
<teward> that way users can still *enable* TLS1.3 but on a selective basis instead of by default
<teward> I had extensive discussion with mdeslaur and rbasak and sarnold on this from a security perspective
<teward> and we decided to not backport the TLSv1.3 enablement
<teward> (but compile nginx against TLS1.3 so we can allow availability)
<mdeslaur> I don't think we should revert tlsv1.3 just for the few uses of client side certs
<ahasenack> teward: maybe if it's removed from the "ALL" keyword
<rbasak> I just wanted to get past triage!
<rbasak> Can I mark it New again at least, pending further discussion?
<ahasenack> rbasak: if you mark it new, it will get back into triage
<teward> ahasenack: well replace "ALL" with -SSLv2 -SSLv3 +TLS1.0 +TLS1.1 +TLS1.2 for the defaults
<teward> which is, you know, what we do in nginx-conf
<teward> because otherwise it defaults to "ALL" behind the scenes
<teward> which for obvious reasons is "Bad"
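Hedging heavily: assuming an Apache built with TLS 1.3 support (as the bionic backport was), the nginx-style approach teward describes would look roughly like this in the default SSL configuration. This is a sketch, not the shipped Ubuntu default.

```apache
# Default: everything except SSLv3 and TLSv1.3, so post-handshake client-cert
# auth keeps working with clients that can't do PHA.
SSLProtocol all -SSLv3 -TLSv1.3

# A vhost that wants TLS 1.3 back can override with:
#   SSLProtocol all -SSLv3
```

This keeps TLS 1.3 available on an opt-in, per-vhost basis instead of on by default.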
<ahasenack> if pha is part of tls1.3, and some client can't do pha, then that client should perhaps have tls 1.3 removed from its list of protocols it can speak
<ahasenack> since it can't, fully
<mdeslaur> well then firefox and chromium would get tlsv1.3 removed :P
<ahasenack> these clients would break if they talked to an apache server out there that didn't come from ubuntu. They might break if they talk to apache from focal, right? In the same way
<ahasenack> mdeslaur: at least firefox supports pha, I haven't checked if anything changed in the chromium side
<ahasenack> last I looked they were waiting for some spec
<mdeslaur> ahasenack: not by default, you have to enable it in settings because it breaks http2
<ahasenack> mdeslaur: ah, that was chromium's wait iirc
<mdeslaur> client side certs aren't common enough for them to care about it
<mdeslaur> and the workaround to disable tlsv1.3 on the server works
<ahasenack> ok, my point is that the same bionic client who fails to do post-handshake-auth in tls1.3 with client certs while talking to the new apache shipped with bionic, will also break when talking to a random web server out there who also offers tls v1.3
<mdeslaur> yes
<ahasenack> but there was a change in behavior
<ahasenack> "stuff was working inside my local net; update apache, stuff broke"
<rbasak> ahasenack: back into triage> that's OK - I tagged it server-next, so we can just treat it in our backlog - further triage doesn't really matter then, except to respond to further comments
<ahasenack> rbasak: what do you think about the bug being on the server and the client? The quick discussion above
<rbasak> I agree it's a regression
<rbasak> But equally sometimes we decide to not fix regressions. When it's complicated. This is complicated :-(
#ubuntu-server 2020-03-10
<lordievader> Good morning
<zenlinux> Hi folks - I've had an odd application process for a distro engineer role on the Ubuntu Server team. After I applied for the position, I received an email from a no-reply address at greenhouse.io to complete a take-home assignment, which I did and then submitted a week ago. I have no contact information of anyone in HR or the recruitment team. I'd
<zenlinux> like to follow-up with a person just to inquire about my application, but I have no idea who to reach out to. Any advice would be greatly appreciated. And apologies if this is too off-topic for this channel - I'm at a loss at what I can do next.
<powersj> zenlinux, I'll follow up with you
<zenlinux> much appreciated!
<halvors> how can i disable netplan? and use systemd-networkd instead?
<sdeziel> halvors: I believe that simply removing/rename the .yaml files from /etc/netplan will prevent netplan from generating networkd configs
<halvors> sdeziel: Yes it does. But is that the correct way to do it or will netplan service just simply fail to start?
<sdeziel> halvors: I believe that's the official way to prevent netplan from running. Alternatively you can remove the netplan.io package altogether
<halvors> but that would remove ubuntu-minimal as well?
<sdeziel> halvors: that's what's recommended in https://netplan.io/faq#how-to-go-back-to-ifupdown
<sdeziel> halvors: personally, I'd simply move the yaml away
<halvors> ok :)
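sdeziel's "move the yaml away" approach, sketched against a scratch directory standing in for /etc/netplan. On a real system the mv needs root, and you would run `netplan generate` afterwards to confirm nothing is rendered.

```shell
netplan_dir=$(mktemp -d)                  # stand-in for /etc/netplan
touch "$netplan_dir/50-cloud-init.yaml"   # a typically-named generated config

# Park the configs instead of deleting them, so the change is reversible.
mkdir -p "$netplan_dir/disabled"
mv "$netplan_dir"/*.yaml "$netplan_dir/disabled/"

# With no *.yaml left, netplan has nothing to generate for networkd.
ls "$netplan_dir"/*.yaml 2>/dev/null || echo "no netplan configs to render"
```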
<runelind_q> does apt run updates in the background periodically?
<mwhudson> runelind_q: it's a different package, unattended-upgrades, but it is enabled by default, yes
<runelind_q> cool, so I can stop doing my own apt updates and just do list-upgradable
<sarnold> runelind_q: yes; don't plan on never running apt upgrade by hand, of course, but list-upgradable should be Good Enough
<runelind_q> sarnold: great, thanks.
<mwhudson> runelind_q: unattended-upgrades only installs security updates by default, i think
<runelind_q> I'm only looking for the update part.  I do list-upgradable manually for now (this is just for home servers)
<runelind_q> I had landscape on premise set up a while ago but it kept throwing errors with updates not running.
<runelind_q> maybe I can investigate it again.
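The default behaviour mwhudson describes is driven by /etc/apt/apt.conf.d/20auto-upgrades (shipped when unattended-upgrades is enabled); a typical stock file looks like:

```
// "1" = run daily. First line refreshes the package lists
// (the background "apt update"), second runs unattended-upgrade.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which pockets it pulls from (security only, by default) is controlled by the Allowed-Origins list in /etc/apt/apt.conf.d/50unattended-upgrades.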
#ubuntu-server 2020-03-11
<lordievader> Good morning
<arielfe> how can i shut down pycharms database tool connection to a local postgresql from a terminal?
<coreycb> jamespage: for bug 1866361, I think I'm just going to bump python-openstacksdk from 0.26.0 to 0.27.0 for stein inline with upstream. the new minor release only includes the patch for this fix and it includes an API change.
<ubottu> bug 1866361 in Ubuntu Cloud Archive stein "[SRU] Connections to Neutron are not closed properly in v0.26.0" [Undecided,Triaged] https://launchpad.net/bugs/1866361
<jamespage> coreycb: sounds sensible
<runelind_q> how can I pass environment variables to a user in a systemd startup script? I created /etc/systemd/system/<service>.d/override.conf
<runelind_q> and put in Environment="VARIABLE=value" in the file
<runelind_q> but from what I can tell, it doesn't seem to be passed properly.
<sdeziel> runelind_q: have you run 'sudo systemctl daemon-reload' ? after creating the override.conf file?
<runelind_q> I did
<sdeziel> runelind_q: and restarted the unit in question?
<runelind_q> correct
<runelind_q> I'm supposed to be able to do systemctl --user show-environment, but I get "Failed to connect to bus: No such file or directory"
<sdeziel> runelind_q: the override you created is for the system instance, not the user one
<runelind_q> sorry it is /etc/systemd/system/traefik.service.d/override.conf
<runelind_q> (I'm working with traefik, obvs)
<sdeziel> runelind_q: this is still in the system instance, nothing you can see with systemctl --user
<sdeziel> runelind_q: that said, show-environment won't show you the env vars exported to the unit you overrode
<runelind_q> ah ok, so it looks like I potentially need to do /etc/systemd/system/user@.traefik.d/ ?
<sdeziel> runelind_q: to see those, you'd probably need to do something like this: sudo grep -a VARIABLE /proc/$traefik_pid/environ
<runelind_q> (am reading https://wiki.archlinux.org/index.php/Systemd/User#Environment_variables)
<runelind_q> I'm launching traefik with the traefik user and group
<sdeziel> runelind_q: right but that traefik unit is run/managed by the system instance of systemd
<sdeziel> runelind_q: could you pastebin the output of: systemctl cat traefik.service
<runelind_q> sdeziel: https://pastebin.com/EmYdEZ17
<runelind_q> it looks like some stuff got cut off, but it appears to be only comments.
<runelind_q> (copied the startup script from someone else)
<sdeziel> runelind_q: so I think the issue is that you pass Environment= twice
<sdeziel> runelind_q: only the last one is effective
<runelind_q> the traefik user needs CF_API_EMAIL and CF_API_KEY to be able to run traefik properly
<sdeziel> err, sorry, I can't read man pages, scratch that
<sdeziel> runelind_q: given the content of /etc/systemd/system/traefik.service.d/override.conf, I'd recommend tightening its ownership/permissions BTW
<sdeziel> runelind_q: your setup looks sane to me, what exactly makes you conclude it's not 'passing the env vars properly'?
<runelind_q> because traefik complains about missing values for CF_API_EMAIL or CF_API_KEY
<runelind_q> if I run traefik as root with those values manually exported, everything works fine.
<runelind_q> sdeziel: interesting if I grep -a CF_API_KEY /proc/22095/environ I do see it, same with EMAIL
<runelind_q> sdeziel: so I must be doing something else wrong :(
<sdeziel> sudo grep -a CF_API_ /proc/$(systemctl show --value -p MainPID traefik)/environ
<sdeziel> ah, I was too slow ;)
<runelind_q> but your command was fancier :)
<runelind_q> I guess I can go ask the traefik peeps
<runelind_q> oh wait hold on
<runelind_q> this is with the manually exported values.  Let me kill my root process and systemctl start it with the traefik user
 * runelind_q facepalms
<sdeziel> runelind_q: word of advice when testing as root: any file created during your test will be unwritable by the low-priv user afterwards... which may cause you headaches
<runelind_q> noted
<runelind_q> ok, now that I'm running it as the traefik user, I don't see the environment variables at all.
<sdeziel> runelind_q: what do you get from: sudo grep -a CF_API_ /proc/$(systemctl show --value -p MainPID traefik)/environ
<runelind_q> nothing in return at all
<runelind_q> if I just cat it I get some values back
<runelind_q> LANG=C.UTF-8PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/binHOME=/var/wwwLOGNAME=traefikUSER=traefikINVOCATION_ID=b52587803e364c05beb88ca0dd98914dJOURNAL_STREAM=9:6600383
<runelind_q> but I definitely got more values back when running as root
<sdeziel> when running as root, it probably inherited your whole environment
<runelind_q> yes
<sdeziel> does "systemctl show --value -p Environment traefik" looks sane?
<runelind_q> also returns blank
<runelind_q> but ps -ax | grep traefik shows a pid of 28299
<runelind_q> so I did cat /proc/28299/environ
<runelind_q> and the results were pasted above
<sdeziel> does that match the value returned by: systemctl show --value -p MainPID traefik
<runelind_q> it does
<sdeziel> runelind_q: I'm out of good ideas, at this point I'd try adding 'ExecStartPre=/usr/bin/env' and see if the env ends up in the unit's logs
<sdeziel> runelind_q: I just noticed that your override file doesn't have the [Service] header
<runelind_q> oh, that's a good catch.
<runelind_q> ok, now they show up in systemctl show --value -p Environment traefik :D
<sdeziel> runelind_q: meanwhile, I got this log message when trying to restrict access to a config file: systemd[1]: Configuration file /etc/systemd/system/foo.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
<sdeziel> runelind_q: because of this, you might want to switch to using EnvironmentFile= pointing to a file that's readable by root only if you care about keeping the API key secret
<runelind_q> sdeziel: thanks, I'll look into it
<sdeziel> you are welcome
<runelind_q> sdeziel: going to wander off and grab some lunch and do further testing in a bit.  Thank you very much for your help!
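For the record, the fix that made the variables show up was the missing [Service] header. A working /etc/systemd/system/traefik.service.d/override.conf along the lines discussed above would look like this (the variable values are placeholders, and /etc/traefik/cloudflare.env is a hypothetical path):

```
[Service]
# Inline values are visible through the systemd APIs regardless of file
# permissions, as the log message sdeziel quotes warns:
Environment="CF_API_EMAIL=you@example.com" "CF_API_KEY=changeme"
# ...so to keep the key secret, prefer a root-only file of KEY=value lines:
#EnvironmentFile=/etc/traefik/cloudflare.env
```

After editing, apply it with `sudo systemctl daemon-reload && sudo systemctl restart traefik`, then verify with `systemctl show --value -p Environment traefik` as above.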
#ubuntu-server 2020-03-12
<LarsErikP> coreycb: ref https://bugs.launchpad.net/cloud-archive/+bug/1866361 am I doing something wrong? I fail to find 0.27.0 in stein-proposed..
<ubottu> Launchpad bug 1866361 in Ubuntu Cloud Archive stein "[SRU] Connections to Neutron are not closed properly in v0.26.0" [Undecided,Fix committed]
<coreycb> LarsErikP: thanks for letting me know. there was a build issue. I'm retrying now.
<LarsErikP> coreycb: \o/
<LarsErikP> coreycb: tested now. it works :)
<coreycb> LarsErikP: great, thanks. we'll get regression testing done soon on our end. can you update the bug to mention it's fixed for you? fyi it needs to remain there for min 7 days before promotion
<LarsErikP> coreycb: already done :)
<coreycb> LarsErikP: thanks :)
<LarsErikP> thanks for handling this so quickly, and being so helpful :) I really appreciate it!
<halvors> I need a hids for my ubuntu servers, seems aide is pretty minimalistic as a hids, but maybe in combination with chkrootkit and another utility that looks at logs?
<halvors> Any ideas on a utility for that?
<sdeziel> halvors: there is auditd that you might want to look into
<sdeziel> for log monitoring I use logcheck but be ready to write a lot of regexes to silence all the unimportant noise you'll get
<halvors> sdeziel: thanks
<halvors> auditd vs logcheck what is the difference between them?
<sdeziel> they are very different, logcheck is going over your logs and removing the noise (what matches regexes of unimportant log entries)
<sdeziel> logcheck then extracts the "signal", which is everything that was not silenced by the regexes of unimportant stuff
<sdeziel> that signal can be noisy because the base ruleset shipped by logcheck is not very up to date, which is why you will need to write a lot of regexes to build a ruleset that matches your environment
<sdeziel> once done, you'll get a good signal from logcheck
<halvors> what does auditd do compared to logcheck then? is it not the same?
<sdeziel> logcheck runs at reboot and every hour, and sends you an email with the important logs that happened since the last notification
<sdeziel> auditd is like a blackbox recorder of a plane
<sdeziel> it can be configured to record/log every syscall run, its args, who ran it, etc
<sdeziel> this allows you to track most actions taking place on your machine
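As a sketch of that "blackbox recorder" use, auditd rules live in files under /etc/audit/rules.d/ (or can be loaded live with auditctl); the key names and watched path below are illustrative only:

```
# Log every execve() on the 64-bit syscall ABI, tagged so the records
# can be pulled back later with: ausearch -k exec-log
-a always,exit -F arch=b64 -S execve -k exec-log

# Watch /etc/passwd for writes (w) and attribute changes (a)
-w /etc/passwd -p wa -k passwd-changes
```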
<halvors> i see, so auditd is more to investigate a breach?
<sdeziel> you could use it for that or also to catch anomalies
<sdeziel> (logcheck is also good to catch anomalies but only when they show up in logs it checks..)
<halvors> sdeziel: thanks, i have another question as well, is there a good solution for active response rules, or is this a job for fail2ban?
<halvors> Are there other solutions like fail2ban that are better?
<sdeziel> halvors: suricata and snort can run in IPS mode where they typically inject firewall rules whenever a condition matches
<halvors> sdeziel: On the host itself not the firewall?
<sdeziel> halvors: the host itself has firewalling capabilities
<halvors> yes
<sdeziel> a bit like what fail2ban does
<halvors> thanks :) fail2ban is more to make it harder to gain access in the first place?
<sdeziel> halvors: fail2ban can protect many services, which one do you want to protect in your case?
<halvors> sdeziel: it's basically a task assignment at school; they recommend ossec, but I'd like to go into more detail and provide the same functionality using multiple simpler tools that are easier to understand and maintain.
<sdeziel> halvors: I've heard about ossec but know very little about it so I can't make recommendations on simpler alternatives :/
<halvors> sdeziel: i see, ossec seems bloated to me (there are Windows rule definitions in the Linux version)
<lordcirth> I have an LXC container which seems hung - lxc-ls and lxc-attach hang. No errors in dmesg. Any debugging tips?
<lordcirth> 16.04 guest and host
<halvors> How to exclude directories (and their subdirectories and files) from being included in the aide scan?
<sdeziel> halvors: IIRC, it was with a rule like that: !/path/to/my/dir
<halvors> thx
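Assuming Ubuntu's stock AIDE layout, such exclusions go in /etc/aide/aide.conf or a drop-in under /etc/aide/aide.conf.d/. Selection lines are regexes matched against full paths, and a leading `!` excludes the path and everything beneath it; the paths below are examples only:

```
# Monitor /etc with the predefined R rule set
/etc R

# Skip /var/log and everything beneath it
!/var/log

# Regexes work too: skip every user's cache directory
!/home/[^/]+/\.cache
```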
#ubuntu-server 2020-03-13
<lordievader> Good morning
<bacarrdy> Hello, has anyone tried to create a RAID of NVMe SSDs with a U.2 connector? From what I read online, Dell RAID cards don't seem to support it, and I only found a HighPoint RAID card that does. There is also software RAID; has anyone tried that?
<lordievader> I assume that would work like any software raid config.
<coreycb> jamespage: fyi there's a new hvac available for vaultlocker
<jamespage> coreycb: oo
<IsntFunny> Hi, from time to time I have network issues with my ubuntu server. I want to fully reset all settings and network management tools, and do it properly for once. Any ideas where to start?
<shubjero> coreycb: Any idea if https://bugs.launchpad.net/nova/+bug/1864279 is going to be backported to Ubuntu 18.04? Specifically libvirt 5.10
<ubottu> Launchpad bug 1864279 in OpenStack Compute (nova) "Unable to attach more than 6 scsi volumes" [Low,Won't fix]
<shubjero> coreycb: Well, OpenStack Stein for 18.04 repo rather
<lotuspsychje> lordcirth: did you fix your lxc issue?
<lordcirth> lotuspsychje, nope. lxc-stop -k doesn't even kill it
<lordcirth> When strace'ing, all lxc commands hang at: connect(5, {sa_family=AF_LOCAL, sun_path=@"/var/lib/lxc/dc-3558.nextcloud.cs.uwaterloo.ca/command"}, 57
<lotuspsychje> maybe #lxc to the rescue?
<lordcirth> maybe
<TwistedBlizzard> Hi all, I know I asked for help on this before but I got stuck and discouraged. I would like to bond two network connections as described here: https://help.ubuntu.com/community/UbuntuBonding
<TwistedBlizzard> Unfortunately, it seems the tutorial is out of date and I was wondering if there was a more up-to-date equivalent - I'm running 19.10
<powersj> TwistedBlizzard, if you are using 19.10, you are probably using netplan. You can check the netplan docs here https://netplan.io/examples#configuring-interface-bonding
<TwistedBlizzard> Thank you! That was the guide I was looking at before but I couldn't find it again.
<TwistedBlizzard> I've configured my interfaces as described above. Does this mean I can now set my gateway IP to the address I gave to the bond?
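For reference, a minimal netplan bond in the style of the page powersj linked might look like the following (interface names, addresses, and bonding mode are placeholders to adapt; gateway4 is the 19.10-era syntax). Note that the bond carries the host's address, while the gateway stays the router's address, and the member interfaces get no addresses of their own:

```yaml
network:
  version: 2
  ethernets:
    enp1s0: {}
    enp2s0: {}
  bonds:
    bond0:
      interfaces: [enp1s0, enp2s0]
      addresses: [192.0.2.10/24]
      gateway4: 192.0.2.1
      parameters:
        mode: active-backup
        primary: enp1s0
```

Apply with `sudo netplan try`, which rolls back automatically if the new config cuts your connection.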
<ChmEarl> ahasenack, smb  https://paste.debian.net/plain/1134847
<ChmEarl> ^ patch to fix pygrub, xen.lowlevel.xc object in xen for focal
<ahasenack> I've seen that type of fix in samba's python module
<ahasenack> is that why you pinged me, or that was just tab completion for someone else?
<ChmEarl> ahasenack, I've seen you involved in xen patches before
<ChmEarl> ahasenack, I've just patched xen, built xen-4.12.2 in focal, installed and booted it
<ChmEarl> pygrub works !
<ChmEarl> ahasenack, do you run Xen in Ubuntu?
<ahasenack> nope
<ahasenack> must have been someone with a similar name
<ChmEarl> ahasenack, sorry, got wrong name, yep
<ahasenack> np
<ChmEarl> as soon as focal went to python3.8 pygrub broke
<smoser> so will /usr/bin/python point to python3 in focal?
<smoser> that wasn't clear to me from the ubuntu-devel conversation.
<smoser> rbasak answer maybe at your leisure.
<rbasak> smoser: AIUI, /usr/bin/python won't exist by default in Focal, but there are packages available to give you a symlink either way (which will presumably conflict)
<rbasak> smoser: mentioned in this post: https://lists.ubuntu.com/archives/ubuntu-devel/2020-February/040918.html - look for python-is-python2-but-deprecated and python-is-python3
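What those transitional packages do amounts to a symlink, sketched below. The symlink is simulated in a temp dir so the snippet runs without root; the real python-is-python3 package places the link at /usr/bin/python, installed via `sudo apt install python-is-python3` (or python-is-python2 during migration, as the linked post describes):

```shell
# python-is-python3 is essentially: /usr/bin/python -> python3
# Simulated here without touching the system:
bindir=$(mktemp -d)
ln -s "$(command -v python3)" "$bindir/python"
major=$("$bindir/python" -c 'import sys; print(sys.version_info.major)')
echo "$major"
```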
#ubuntu-server 2020-03-14
<smoser> thanks rbasak
<smoser> have a nice weekend.
