#ubuntu-server 2006-04-03
<pab1> can someone here help explain a couple things about ssh
<pab1> ?
<infinity> ?
<pab1> Well I'm having trouble grasping where exactly I should be storing my pub and priv keys
<infinity> private keys go in .ssh on the machine you generated them on (which is going to be the "client" machine)
<pab1> ok that helps a lot
<infinity> public keys get copied into .ssh/authorized_keys on the target ("server") machine.
<pab1> in the home directory?   ~/.ssh/authorized_keys ?
<infinity> Yes, ~/.ssh/authorized_keys is just a text file, with one public key per line.
<infinity> (Well, it's more complex than that, you can limit what commands can be run by that key, etc, but by default, it's just "one key per line to allow access")
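[The file being described looks roughly like this — keys shortened and invented for illustration; the `command=` line is the restriction option infinity mentions:]

```
# ~/.ssh/authorized_keys -- one public key per line
ssh-rsa AAAAB3NzaC1yc2EAAA...not-a-real-key... pab1@laptop
ssh-dss AAAAB3NzaC1kc3MAAA...not-a-real-key... pab1@desktop
command="/usr/bin/uptime",no-port-forwarding ssh-rsa AAAAB3...not-a-real-key... restricted@host
```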
<pab1> ok cool
<pab1> so i just create the auth_key path in any accounts home directory on the server?
<pab1> then copy my pub key in auth_keys
<pab1> and im good?
<infinity> It's a text file, not a directory. :)
<infinity> But otherwise, yeah.
<pab1> o ok
<infinity> (And make sure the permissions are right on the directory...)
<infinity> ~/.ssh must be owned by the user, and must be 0700
<pab1> MUST be 700?  or is that best practice?
<infinity> MUST.
<pab1> :o
<infinity> sshd will tell you where to go and how to get there if it isn't.
<pab1> haha
<infinity> (And won't let you log in)
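[Pulled together, the server-side steps infinity describes amount to the following sketch; the key string and "pab1@client" comment are placeholders, not a real key:]

```shell
# Run as the account you want to log in to on the server.
mkdir -p ~/.ssh
chmod 700 ~/.ssh                               # sshd requires 0700, owned by you
# Append the client's public key (normally copied over, not typed):
echo 'ssh-rsa AAAAB3placeholderkey pab1@client' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys               # one public key per line
```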
<pab1> ok one more thing... if I put multiple pub keys in the auth_key FILE ( :) ) then each of those clients can login as that user
<infinity> Yup.
<pab1> if that is the case, how would I login from the client?  with the server account (ssh serveracct@host)  or with client account (ssh clientacct@host)
<infinity> serveracct.
<pab1> ok
<pab1> thanks a lot man, you've been a huge help!
<infinity> When you do "ssh foo@host", it authenticates on the remote end as "foo".
<infinity> What user you are locally is irrelevant.
<pab1> got it
<infinity> No different from "ftp user@host" or "http://user:pass@host/" :)
<pab1> ya i see now, just the fact that the client will have a different priv key confused me a bit
<infinity> The name on the key (for instance, mine says "adconrad@cthulhu") is just a comment so you can have a vague idea of where it came from.  It's meaningless to sshd.
<infinity> It's only the long hex string that matters.
<infinity> Or, ASCII, in the case of DSA and RSA keys for SSH2.
<infinity> But, yeah.  That long string (and the tag before it that says what sort of key it is, ssh-dss, etc) is what's important.
<pab1> gotcha
<infinity> sshd completely ignores the "user@host" comment on the end of the pub key, it's just a helpful comment for you to remember that you generated that key on host "foo" with user "bar". :)
<pab1> so I could technically edit that last comment with no real change to the key's function?
<infinity> Also handy if you're setting up a semi-insecure shared account, and dave@domain.com sends you an urgent (signed with GPG, I hope) email saying that he needs his public key changed... You'll know which one to delete. :)
<pab1> not that I really have a reason
<infinity> Editing the comment to be more informative can be done, yes.
<pab1> haha
<pab1> welp i'm gonna try all this out
<pab1> thanks again!
<infinity> Have fun.
<infinity> Don't forget to save the backscroll until you're sure how it all works.
<infinity> No one likes people asking the same questions twice. ;)
<pab1> im logging this :D
<pab1> very helpful
<pab1> are you a unix admin?  if you don't mind me asking
<infinity> By trade?.. Probably... I currently work on Ubuntu fulltime, though.
<pab1> awesome
<pab1> i like your OS :-P
<infinity> Thanks.  I'm rather fond of it too. :)
<pab1> or distro i should say
<pab1> let me pick your brain on this...  is there a way to use samba as a central login for win and linux?   Linux mainly being the one that I'm not sure on
<yves> pab1, a very nice, but not really simple, way is to use samba+ldap
<pab1> hmm.  is there a straight forward way to do it or is that really the only route?
<yves> you can use smbpasswd as your auth backend
<pab1> i prolly should learn ldap anyways
<pab1> maybe ill just do that
<pab1> i use smbpasswd currently
<pab1> just for file and print sharing tho
<yves> you'd better first setup a domain using smbpasswd, then move on to ldap
<yves> then it's just a matter of setting your samba server as a login server
<yves> one line change :-D
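[The "one line change" yves mentions is presumably samba's `domain logons` switch; a minimal smb.conf [global] sketch under that assumption, with an invented workgroup name:]

```
[global]
   workgroup = EXAMPLE
   security = user
   ; the one-line change: act as a domain logon server for windows clients
   domain logons = yes
```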
<pab1> hmmm
<yves> that wouldn't work for linux local accounts, though
<yves> only windows logons and file sharing auth
<pab1> ya thats the thing
<pab1> I actually did that before then run into the no linux login thing and the no admin group thing
<pab1> ldap will allow for at least an admin group right?
<yves> yes, it will solve both problems
<pab1> bah
<yves> its learning curve is high, i think
<pab1> im gonna have to wait till after finals i think
<evilmonkey> hi
<Pygi> hi evilmonkey
<evilmonkey> is this the place to ask question about installing java sun sdk
<Pygi> dapper, breezy? you want java sdk on server??
<evilmonkey> brezzy, no, local for development with eclipse
<Pygi> #ubuntu
<evilmonkey> ok
<evilmonkey> thankyou
<Pygi> yw
<Jeeves_> Hi there
#ubuntu-server 2006-04-04
<bip> anybody have experience with ltsp running on dapper?
<fabbione> bip: -> #edubuntu
<bip> ok
<bip> thanks fabbione
<bip> but I solved it ;-)
<bip> maybe ;-)
<Bakgat> hi all
<Bakgat> anyone here well versed in networking on an ubuntu system (interfaces, shorewall, etc)
<Pygi> Bakgat: what exactly do you need?
<Bakgat> Hi Pygi
<Pygi> Hi Bakgat
<Bakgat> I've set up a mail server based on instructions from http://flurdy.com/docs/postfix/#install_pack
<Bakgat> very detailed & helpful
<Bakgat> Everything seems to have gone ok (took the better part of a whole day to set up), but i can't seem to access the networks.
<Bakgat> I originally set it up with only a single NIC, detailed in /etc/network/interfaces
<Bakgat> i've attempted to fill in as much detail in /etc/shorewall/* , but i seem to have missed something, cause if i ifconfig, i only see the loopback
<Bakgat> this box is going to act as mail & web server from a DMZ zone. i have a separate box that acts as a router/firewall/proxy/etc
<Bakgat> my firewall will transparently route traffic & handle security from internet, wired network, wireless network & dmz, but that's a topic for another day
<Bakgat> where do we start?
<Bakgat> the mail service & some other services seem ok
<Bakgat> i just can't get the nic to gel
<Bakgat> Pygi, you still there?
<Pygi> Bakgat: not really, and yes :-/
<Bakgat> so, what do you suggest?
<Pygi> tried restarting "networking"?
<Bakgat> yip
<infinity> You're configuring your network in /etc/network/interfaces?
<infinity> And does "ifup eth0" (or whatever interface) work?
<Bakgat> yip. will give detail soon
<Bakgat> nope
<infinity> Do you get any errors?
<Bakgat> network/interfaces set to dhcp for now
<Bakgat> checking log now
<ivoks> ifconfig | grep eth0
<ivoks> check out if kernel recognized your nic as eth0
<ivoks> or maybe some other nic is eth0 (this is not so rare as it seems)
<Bakgat> ok. where/how do i do that?
<Bakgat> ifconfig only lists loopback
<ivoks> in konsole/terminal
<infinity> "ifconfig -a" to show interface that aren't up.
<Bakgat> ok. hang on
<infinity> s/interface/interfaces/
<ivoks> my bad...
<ivoks> dmseg | grep eth0
<ivoks> not ifconfig :)
<Bakgat> cool! sees all nics, just not assigning addr.
<Bakgat> not seeing dmseg cmd
<infinity> Kay, then your /etc/network/interfaces is wrong. :)
<ivoks> mii-tool?
<Bakgat> mii-tool, check
<Bakgat> eth0: Bad addr, eth1 op not supp
<Bakgat> no MII interfaces found
<Bakgat> bad network/interfaces?
<ivoks> eth0: bad addr?
<Bakgat> will set interfaces to static in interfaces
<ivoks> huh?
<ivoks> your eth0 gets ip from dhcp?
<Bakgat> mii-tool reports SIOCGMIIPHY on 'eth0' failed: Bad Address
<Bakgat> for now
<Bakgat> will set to static
<ivoks> don't
<ivoks> do ifdown eth0
<ivoks> and then manually get address from dhcp
<ivoks> dhclient eth0
<Bakgat> sit0: unknown hardware addr type 776
<Bakgat> do you have a skype addr? typing's a schlep
<ivoks> nope
<Bakgat> ok
<ivoks> forget about sit0
<ivoks> what did it tell you about eth0?
<Bakgat> no-go. hardware. let me just reboot with single nic
<Bakgat> sweet! got a DHCP lease (192.168.0.232), but now I need to set it up as static (192.168.0.100/255.255.255.0 for test & config, but 192.0.0.10/255.255.255.0 in production)
<ivoks> i told you to run dmsg | grep eth0
<ivoks> that other nic was eth0, and the second one was eth1
<Bakgat> disabled eth1 in bios, dmesg picks up on eth0. gives hardware id, mac, irq, link up, no ip6 routers
<Bakgat> i can see the apache server via http, but other services not working (phpmyadmin, squirrelmail)
<infinity> Bakgat: dID YOU JUST SAY 192.0.0.10?
<infinity> Bakgat: Erm, caps lock, sorry.
<infinity> Bakgat: But, I hope that was a typo.  192.0.0.0 is a routable subnet, not a private one.  People own that IP address. :)
<ivoks> infinity: people think 192.x.x.x is public :)
<ivoks> infinity: i saw one big company with 192.192.x.x for private network :)
<Bakgat> once in production i'll change the networking & shorewall. i'm setting up the server @ home office
<infinity> ivoks: You mean private, I assume. :)
<ivoks> infinity: right :)
<infinity> Bakgat: In the 192 space, ONLY 192.168.0.0/16 is private, no other 192.x subnets.
<infinity> (If you really need more IPs than a class B, though that seems unlikely, use 10.0.0.0, which is entirely private)
<ivoks> Bakgat: if you need class A, take 10.x.x.x
<ivoks> lol
<Bakgat> i'm planning the networks from my firewall's PoV into 3 separate network classes. DMZ in class C, wired network class B, wireless class A. have i got it wrong?
<infinity> And I always forget the other one... 172.something?
<infinity> Bakgat: Almost certainly.
<ivoks> infinity: that one i forget too...
<ivoks> Bakgat: you need three class C networks
<infinity> Well, or three much smaller networks.
<Bakgat> been a while since i've done subnetting, so i'm a bit rusty. used a subnet calc to determine networks
<infinity> classless subnetting is the future.
<ivoks> Bakgat: for example, 192.168.1.x, 192.168.2.x and 192.168.3.x
<ivoks> why bother with subnetting?
<ivoks> in this case...
<Bakgat> i'd like to separate networks by subnets so that one network can't accidentally stumble upon another
<ivoks> subnetting is ok only for public subnets
<maswan> infinity: 172.16 - 172.31
<ivoks> maswan: right :)
<infinity> maswan: Thanks, you ever-loving fount of useless knowledge. :)
<maswan> We use that here, actually
<infinity> Sure, use the one that's hardest to remember.  Makes sense.
<ivoks> :)
<infinity> I always use 10.0.0.0, because I'm a lazy typist. :)
<maswan> Since 10.x is used by the networking people and 192.168 is used a lot
<maswan> Ok, as long as the campus networking people do things right, we never notice. But... ;)
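[For reference, the three private ranges that came up (10/8, 172.16/12, 192.168/16) can be checked with a quick shell sketch; `is_private` is an invented helper for illustration, not a standard tool:]

```shell
# Rough RFC 1918 membership test by prefix matching; not a full address parser.
is_private() {
    case "$1" in
        10.*|192.168.*) echo private ;;
        172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) echo private ;;
        *) echo public ;;
    esac
}
is_private 192.168.0.10   # private
is_private 192.0.0.10     # public -- someone owns that address
```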
<Bakgat> ok. lets assume this server will sit @ 192.168.0.10/255.255.255.0
<ivoks> Bakgat: that's ok
<Bakgat> i'd still like to put my other networks on separate subnets, but that's an exercise for another day
<ivoks> Bakgat: ?
<ivoks> Bakgat: put the other one in 192.168.1.x
<ivoks> and there you have it
<Bakgat> I'll go mod my network/interfaces file quickly. cool?
<ivoks> on different subnets, one can't go to the other
<Bakgat> that's the idea
<infinity> Well, not really true.
<infinity> If they're the same physical network, you can easily hop from one to the other.
<ivoks> infinity: what's true these days? :)
<Bakgat> the router/firewall must handle routing (i use IPCop)
<infinity> But, shhh.
<ivoks> infinity: nothing can prevent that :)
<Bakgat> i'm looking @ LDAP on a separate server, but also an exercise for another day
<infinity> I was rather surprised to discover that my AUD 150 hunk of junk DSL modem/router actually does VLAN switching.
<infinity> Not that I need VLANs at home, but neat anyway.
<ivoks> ah, i'm using wifi broadband :)
<Bakgat> ok, so I set my network/interfaces to static as detailed in http://flurdy.com/docs/postfix/#install_pack
<Bakgat> now what
<Bakgat> net to fix static 2 nic
<Bakgat> do I need to set up shorewall next?
<Bakgat> would eth0 be 'net' or 'loc' on shorewall in the current setup?
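[The static stanza being discussed would look something like this in /etc/network/interfaces, using the 192.168.0.10/255.255.255.0 test address from earlier; the gateway is a guess at where the IPCop box sits:]

```
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
```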
* ivoks doesn't use wizards :)
<Bakgat> cool. set up my dhcp server to issue a lease to the server based on mac (will lock down later)
<Bakgat> now, how do I configure my shorewall files (bit confusing)
#ubuntu-server 2006-04-06
<Bakgat> top of the mornin 2 ya
<z> hey I am having some trouble with postgresql
<z> I cant get version 8.0 running
<z> how do I update the locate database
#ubuntu-server 2006-04-07
<nictuku> how reasonable is it to expect, in the long run, that ubuntu acts like a "Debian Enterprise" - but in the sense that it is supported by products of major players like Oracle?
<nictuku> that's a possibility I'm really excited about with ubuntu server.
<nictuku> But as I didn't see it mentioned in the UbuntuServer, I don't know if it is in the agenda.
<nictuku> I only know that IBM's DB2 is homologated (is this wording fine?) for Ubuntu.
<neuralis> nictuku: we're working onit.
<neuralis> on it, even.
<nictuku> nice :-D. is there any public information available somewhere?
<neuralis> not at the moment, no.
<nictuku> how can volunteers help on that?
<nictuku> I mean, technically
<neuralis> ISV certifications are a pretty finicky and particular process. we appreciate your willingness to help, but i'm afraid the process mostly has to be left to canonical.
<nictuku> I thought so, but it wouldn't hurt to ask.
<neuralis> certainly. and if you have contacts at ISVs, convincing them to get in touch with canonical about certifying ubuntu is certainly one way to help.
<nictuku> That would be major news for, well, at least the whole Debian community.
<neuralis> nictuku: rest assured, it's being taken seriously at canonical.
<nictuku> neuralis, would you mind if I blog about what you're saying? I think many others would be interested
<neuralis> nictuku: actually, i would. i'm not a canonical employee.
<nictuku> Oh I see
<neuralis> nictuku: i'm happy to put you in touch with the right person if you wanted to get the official position, though.
<nictuku> I'd be interested, yes.
<neuralis> nictuku: Malcolm Yates <mdy@canonical.com>
<nictuku> thank you
<neuralis> nictuku: sure. you're also welcome to quote me as a member of the server team, as long as you make it perfectly clear it's unofficial and not coming from an employee.
<nictuku> sure
<nictuku> there is this mail from Mark Shuttleworth mentioning ISV certifications, which helps http://lwn.net/Articles/175272/
<nictuku> by the way
<nictuku> There's a lot of road to run before official communication stops fostering the idea that ubuntu is only about the desktop.
<neuralis> yes, that's an unfortunate problem with our marketing. we're debunking some of it in the server chapter of the official ubuntu book, soon to be released, and i'm working on some extra advocacy efforts for ubuntu server.
<nictuku> that e-mail just reinforces that
<neuralis> i disagree. a lot of that e-mail is desktop/server agnostic.
<neuralis> testing, certification and 24x7 global tech support apply equally to servers and desktops, for instance.
<nictuku> most of it
<nictuku> "We would like to show that a Debian-based distribution can deliver the same world class desktop punch that you might traditionally expect from Novell or Red Hat." (quote)
<neuralis> right. it would have been nice to have ubuntu server mentioned there, and i'll talk to mark about mentioning it next time, but the word about ubuntu server is definitely getting out.
<neuralis> and we're still a young effort, with a lot of the really nifty features planned after dapper.
<nictuku> Another issue that worries me, and I'm sorry if it has been discussed too much (not on the ubuntu-server list, which I do read), is the blessed server packages.
<neuralis> what worries you?
<nictuku> sure, and we're excited with all that. I'm even trying to help with what I can, implementing one of the specs.
<neuralis> nictuku: hm?
<nictuku> well, from a sysadmin point of view, this idea that only a part of main will have long-term support is confusing, and maybe even useless sometimes.
<neuralis> nictuku: i didn't understand the "we're excited with all that" message, was that aimed for a different channel?
<nictuku> sorry, I meant excited with the server effort.
<nictuku> and "we" I meant, the community
<neuralis> ah, cool. which spec are you working on?
<nictuku> network wide updates
<nictuku> even having main and universe is confusing.
<neuralis> as for only part of main being supported, it makes sense. the alternative is a different archive for server and desktop ubuntu, and that's really best avoided.
<nictuku> I know the reasons, I'm just thinking from a marketing and user point of view.
<nictuku> https://dev.ubuntubrasil.org/trac/nwu
<neuralis> looks neat! i'll have to take a closer look later.
<nictuku> thank you. I hope the PyGTK interface we're working on will be coolness compliant.
<neuralis> i don't think i've seen this mentioned on ubuntu-server, can you post about it there?
<neuralis> hmm. why did you choose to go with SOAP?
<nictuku> no, xml-rpc
<nictuku> Used SOAP, then moved to xml-rpc
<neuralis> good. the wiki mentions SOAP, i'll correct it.
<nictuku> thank you
<nictuku> I'm currently working on the packages so testing can be easier.
<neuralis> great. please do post about it on the list, this is something that'll interest a lot of people, and you can benefit from the added testing.
<nictuku> A Debian developer friend of mine has even offered to send it to unstable already
<nictuku> and it's on revu too
<nictuku> right, I will, despite being very afraid of disappointing testers.
<neuralis> disappointed testers are much better than apathetic testers.
<neuralis> disappointed testers complain and tell you what's not good, and then you can fix it.
<neuralis> also, in a tool like nwu, security is paramount. you'll want to find a security person or two to give you a review before making the case for distro inclusion.
<nictuku> yes, that's a major concern
<neuralis> i'm happy to do a security review of this in a month or so when my schedule clears up, so feel free to mail me around then and remind me to do it.
<nictuku> and that's one of the reasons I didn't mention it on the lists: there are known unresolved security issues already. (clients always accept the server SSL certificate)
<neuralis> well, you shouldn't let these things prevent you from asking people to test it. no one expects a perfect product on the first go. really. just mention the unresolved issues, and it'll be fine.
<nictuku> you're right, but i still have to convince the perfectionist part of myself of that
<neuralis> i empathize; i'm the same way ;)
<nictuku> my idea was to ship a very basic set of features, but a nice, stable and safe set of features.
<nictuku> the features are there already, I'm working on making it stable and safe.
<nictuku> and, argh, writing documentation
<nictuku> can i have your e-mail address? I'll contact you next month then, if you don't mind.
<neuralis> nictuku: krstic@fas.harvard.edu
<nictuku> thank you
<neuralis> sure. thanks for your work on this!
<nictuku> I hope the CC recons that on the monday meeting :-).
<neuralis> hm?
<nictuku> i'm in the process of being accepted as an ubuntu member. That's on the community council meeting agenda..
<neuralis> cool.
<tarvid> with a default postfix and fetchmail  install, which is the easiest imap server to use with mbox?
<yves> hi
<nictuku> Are there plans for LSB certification in Ubuntu? Mark Shuttleworth once mentioned that "If we were to do LSB for Ubuntu, it would be done directly rather than as a compatibility layer." So I guess that Ubuntu would rather take the long path of making Debian LSB-compatible - which seems to be the RightWay.
<ivoks> we should really include new samba :/
#ubuntu-server 2006-04-08
<tarvid> on one of my workstations I installed moodle and php5 which all went well enough.
<tarvid> then I installed mirrormed - it wanted php4
<tarvid> i found php4 and php5 both installed on this machine, but libapache2-mod-php5 was the apache module installed
<tarvid> i installed libapache2-mod-php4 and broke everything
<tarvid> any secrets to the php4/5 dilemma?
<infinity> Yes, don't install both.
* infinity can't find this mirrormed package you speak of.
<tarvid> mirrormed is a GPL medical practice management system
<infinity> Yes, Google tells me that.
<infinity> But I don't see any package of it that requires php4.
<infinity> If you're installing it from source, what makes you think it won't work with php5?
<tarvid> it is easy to say don't install both but sometimes one does not know in advance
<tarvid> the install scripts check php version and stop if it is less than 4.3 or greater than or equal to php5.0
<infinity> Well, you can't install libapache2-mod-php5 and libapache2-mod-php4 together.  They conflict with each other.
<infinity> So fix the install script to be more lenient, and see if it works with php5?
<infinity> If not, then I suppose you need php4 instead of php5, or you need to run one as a CGI.
<tarvid> I did patch out the test on php version and it died in a related place so i tried to revert to libapache2-mod-php4
<tarvid> but I am intrigued by your suggestion of running one as CGI
<tarvid> how would one go about that?
<infinity> With a fair amount of unnecessary pain, generally.
<infinity> You're better off not bothering unless you KNOW it's required.
<infinity> (It will require mass renaming of some PHP scripts, etc)
<infinity> You're better off figuring out where/how/why your install script dies and fixing it, I suspect.
<infinity> You can force php5 to be "almost like php4, but not quite" by enabling the zend.ze1_compatibility_mode flag (perhaps only in the vhost serving mirrormed) if it's really necessary.
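[The flag infinity names lives in php.ini, or can be set per-vhost in the Apache config with `php_admin_flag zend.ze1_compatibility_mode on`; a sketch:]

```
; php.ini -- make php5 mimic php4's object semantics where possible
zend.ze1_compatibility_mode = On
```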
<tarvid> This machine does not have to run other applications, machines are cheap
<infinity> Well, if the only application it needs to run is mirrormed, and you're pretty sure it needs PHP4, then just use PHP4.
<infinity> But then you're stuck not being able to run stuff that definitely DOES require PHP5, that's all.
<infinity> While php5 has some backward compatibility options, php4, oddly enough, doesn't have any forward compatibility flags. :)
<tarvid> my tech insists he didn't install php5 intentionally, is there a way to get the installation history to know when it came in?
<infinity> If you installed the moodle package, that's what pulled it in, yes.
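[One place to check installation history, assuming your dpkg is configured to write /var/log/dpkg.log: grep it for install events. Sketched here against a mocked-up sample log with invented timestamps and versions, so the grep itself is demonstrable:]

```shell
# On a real box you'd grep /var/log/dpkg.log directly; the sample stands in for it.
cat > dpkg.log.sample <<'EOF'
2006-04-08 10:12:01 install moodle <none> 1.5.3-1
2006-04-08 10:12:02 install libapache2-mod-php5 <none> 5.1.2-1ubuntu3
EOF
# Show when php5 packages were installed (and what arrived just before them):
grep ' install .*php5' dpkg.log.sample
```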
<tarvid> I did that on my machine but he has performed at least two fresh installs on a different machine
<infinity> Well, unless you install some PHP application that depends on it, nothing will pull it in "by default"... We don't install any PHP interpreter by default.
<infinity> If you install php4-pear, that will bring in some php5ish packages, but that's not the same as "having PHP5 installed in apache"... Only libapache2-mod-php5 will do that for you.
<infinity> So, make sure apache is using libapache2-mod-php4, and you're set.
<tarvid> thanks, it may be working
<nictuku> infinity, are you there?
<infinity> nictuku: I am, but just heading out.
<infinity> What's up?
<nictuku> ah ok
<nictuku> I was wondering if you could show up in the #ubuntu-meeting and say "yes I remember this guy. he's working on nwu", since no one from my small fan club appeared (07:00am here). it could take a few minutes though, and since you're leaving, thanks anyway :-)
<infinity> Well, I haven't had a chance to look at or test any of your code, so I wouldn't be a very good advocate anyway.
<Nrbelex> Hi - I'm learning through experimentation and so far I've setup Apache and registered a domain name from godaddy and created a tiny test website in my www folder. I registered at Zoneedit and am waiting for a response. I installed the ez-ipupdate package (but can't find it, if there is something to find in order to configure it). I opened port 80 on my router and can access the site from my IP address in the address bar. I don't understand how I do binding or if I need to and how to use a name server or anything. At this point, I'm thoroughly lost. Any help would be greatly appreciated.
<neuralis> Nrbelex: you'll want to edit your domain in godaddy, which provides free DNS services, and create an A record that points your domain to your IP address.
<Nrbelex> neuralis: looking for that on the site now... thanks
<Nrbelex> neuralis: I'm not seeing it on Godaddy
<neuralis> actually, if you're doing this with a dynamic IP (which is what i'm guessing you want to use zoneedit and ez-ipupdate for), you'll want to edit your domain at godaddy, and put zoneedit's DNS servers instead of godaddy's.
<neuralis> put =~ set it to use.
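[The A record being described, in zone-file terms; the domain, TTL, and address here are placeholders:]

```
; point the bare domain at your current public IP
example.com.   3600   IN   A   203.0.113.7
```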
<Nrbelex> neuralis - I registered with them but they haven't gotten back to me yet - how do I do that?
<neuralis> Nrbelex: http://www.zoneedit.com/doc/dynamic.html
<Nrbelex> neuralis: right, but how do I actually transfer the domain, or will that come with my registration E-mail?
#ubuntu-server 2006-04-09
<maswan> Setting up snmpd (5.1.2-6.1ubuntu2) ...
<maswan> Couldn't create /home/snmp: Read-only file system.
<maswan> gah!
<Jeeves_> Nice :)
<maswan> Of course it can't create it!
<maswan> I have centralised home directories
<lionelp> maswan: just install snmpd before launching automount :)
<maswan> lionelp: No automount, AFS.
<maswan> And I'm not about to unmount it
<maswan> which means hacking around in inst scripts etc.
<maswan> bleh.
<lionelp> but i agree, snmpd could have a home somewhere else (in /var for example)
<maswan> yeah
* alleeHol wonders why /srv/snmp isn't used.  /home/ does not seem the right place 
<maswan> Should be a clear debian policy violation too, but I don't know how well in sync this is with the debian package
<lionelp> it is the same in the debian package IIRC
<maswan> lionelp: do you happen to have a sid system around to verify? I don't, otherwise I'd be submitting this to the debian BTS too.
<lionelp> I was wrong, in sid, the home is /var/lib/snmp
<lionelp> I have a sid in chroot
<lionelp> maswan: but is is not the same version
<lionelp> sid: Version: 5.2.2-3
<lionelp> dapper: Version: 5.2.1.2-4ubuntu1
<lionelp> maswan: you filed a bug?
<infinity> maswan: Eek.  Is that on breezy?
<lionelp> oh infinity you are right, it is not any more the case in Dapper !
<infinity> It isn't?
<infinity> Looking at the package, it still should have the same bug.
<lionelp> no, not any more
<lionelp> it was the case in breezy
<lionelp> sorry
<infinity> No, looks like dapper has the same bug to me.
<infinity> (sid doesn't)
* infinity fixes.
<lionelp> infinity: you are right, i forgot to exit chroot after checking Debian....
<maswan> infinity: Oh, nope. Breezy.
<infinity> maswan: Yeah, it's buggy in dapper too.  Ficing there.
<infinity> Fixing, even.
<infinity> Not going to fix it in breezy-updates, not really critical enough.
<maswan> infinity: ah, great.
<maswan> infinity: yeah, the workaround should probably be somewhere findable though, but I don't really know how.
<infinity> Edit the postinst and add --no-create-home to the adduser call, then try again. :)
<infinity>         adduser --quiet --system --no-create-home --home /var/lib/snmp snmp
<infinity> That's what it'll look like in dapper in a few minutes.
<maswan> it runs fine without a home?
<maswan> or does /var/lib/snmp already exist for other reasons?
<infinity> maswan: /var/lib/snmp is shipped in the package.
<infinity> maswan: You might want to "chown -R snmp /var/lib/snmp" after the package installation, though (the new package in dapper will do so)
<tsurc> Hi, kind of a newbie here. I have a HP DL140 G2 server (read: 2*Xeon, 4gb Ram, 2*80Gb sata HDD). Do I install 686-smp or -server kernel for best performance using dapper?
<exobuzz> im considering running ubuntu server edition. I use kubuntu on a desktop machine. I'm thinking of trying breezy on a server (well.. on an intel mac mini).. wondered what I can expect in comparison to say debian stable in terms of packages/support/stability ?
<tsurc> I've looked on the wiki and other places, but draw a blank. what's the difference between the server kernel and the others? What's so special about it?
<exobuzz> maybe things like v4l etc left out
<tsurc> does it have smp support? I have a dual xeon server I want to get Windoze off and liberate it
<tsurc> but I want to get the right kernel installed for the job. like having a 386 kernel on a dual xeon isn't good right?
<exobuzz> well i guess it wont run quite as well..
<infinity> tsurc: You want the -server kernel most likely.
<infinity> exobuzz: You won't get it to install or boot on an Intel Mac Mini.
<infinity> exobuzz: We're hoping to have all the pieces in place for that to work in some sane fashion for dapper.
<infinity> exobuzz: As for comparing to stable, I try pretty hard to make sure that our releases compare in quality.  Package selection is nearly identical (for obvious reasons), and support tends to be a bit quicker (security support, especially)
<exobuzz> infinity: why won't i get it to install  ?
<exobuzz> infinity: I'm smart you know :-)
<infinity> Because the Intel Macs aren't like other Intel systems.
<infinity> They're EFI based, not classic PC BIOS based.
<exobuzz> ive already got elilo up and running
<exobuzz> :-)
<exobuzz> just compiling the kernel now ready to boot a live cd and bootstrap ubuntu
<infinity> Oh, if you're already on your way, then fine.  If that's the case, I can't imagine why you'd need to ask the other questions (like how package selection compares between sarge and breezy)
<exobuzz> infinity: it was more a general question. i didn't mean package selection, i meant quality of packages etc.. configuration level of packages
<infinity> Pretty darned similar.
<exobuzz> ok
<exobuzz> :-)
<infinity> We don't fork the server stuff very far from Debian.
<exobuzz> im a little worried about myself btw, as i booted up osx for the first time today. and kinda liked it. should i see a doctor ?
<exobuzz> :-)
<exobuzz> i mean.. the eye candy is mental though
<infinity> And half the stuff you'd want to install on many home servers (postfix, apache2, php5, mysql, postgresql, etc) is maintained by Ubuntu maintainers in both Debian and Ubuntu.
<infinity> (ie: Me)
<infinity> And yes, you should see someone. :)
<exobuzz> and 3rd party support? for example debian stable users can go and install the dotdeb packages for latest php/mysql.. obviously ubuntu is more up to date than stable anyway.
<infinity> I like to look at OSX, but I can't stand using it.  The fact that it's painfully slow on REALLY FAST hardware doesn't sell it to me as a desktop, and as a server, it's just a bit too.. Quirky.
<exobuzz> i wouldn't know where to start to use it as a server. everything's in the wrong place :-)
<infinity> exobuzz: I wouldn't recommend anyone, Debian or Ubuntu users, install anything from dotdeb.  Ever.  So, you're asking the wrong man.
<infinity> I've dealt with so many bug reports coming from those packages being broken, because he just blindly mangles and backports my packages with very little thought on his part.
<exobuzz> ok.. actually i dont use dotdeb.. because my other server is a powerpc mac :-) (and they dont supply powerpc packages)
<infinity> But I'm not bitter...
<exobuzz> aah i see ok
<exobuzz> the main reason im considering ubuntu, is that I really dont want to wait 2 years for the next debian stable :-)
<infinity> But, in general, some people do provide 3rd party repositories, we also provide our own "backports" repository that will often contain backports of sources from release+1
<exobuzz> great
<infinity> (So, breezy-backports has many backports from dapper, hoary-backports has many backports from breezy, etc)
<exobuzz> yeh i use some on my desktop kubuntu
<infinity> Not recommended either, you're generally better off using the stable and supported packages, but hey.  If you're a bit nuts, go for it.
<exobuzz> and ubuntu/kubuntu is great for the desktop.. really lovely
<infinity> You won't have to wait 2 years for Etch. :)  If all goes well, it should be a Christmas release, give or take.
<exobuzz> i take it breezy has php4 as well as php5 ? and for compatibility i wonder if it has the 3.4 or..
<infinity> Debian's aiming for an 18 month cycle these days, while Ubuntu is a 6 month cycle, I think they should complement each other well.
<exobuzz> hmm.. christmas.. thats a long time to be living with an old php :-)
<infinity> breezy has 4.4.0 and 5.0.5
<exobuzz> 4.4 has compatibility issues with functions returning constants or something doesnt it. hmm
<exobuzz> it gives notices on them at least
<infinity> And well it should.
<exobuzz> they changed something from 4.3->4.4
<infinity> You shouldn't display notices in production anyway.
<infinity> (And you should fix the buggy code)
<exobuzz> :-)
<infinity> It was always incorrect to do the things that started throwing notices in 4.4, we just forgot to throw notices..
<exobuzz> what if i really like ubuntu server.. then i will want my other server to run it. but its 100 miles away
<exobuzz> damn :-)
<infinity> Anyhow, if you really want 4.3, there's a slim chance that the 4.3.10 packages from sarge will install on breezy.
<infinity> I do remote distro switches all the time (usually because leased machines come with Fedora or CentOS installed, and I want to switch them to Debian or Ubuntu)
<infinity> It's not terribly much effort.
<exobuzz> any tips ? im not sure id know where to start..
<exobuzz> :-)
<exobuzz> websvn on sf can be painfully slow.... grrr
<infinity> Get debootstrap.  debootstrap $distro_of_choice into /newroot.  Install a static shell (with enough builtins to pull off this trick).  Run static shell.  rm -rf /(anything not /newroot).  mv /newroot/* /.  Make sure bootloader is configured and installed on bootblock.  Reboot.
<exobuzz> oh yeh
<exobuzz> that makes sense
<exobuzz> :-)
<infinity> Things to watch out for:  Make sure you set a password for at least one user (a regular user with sudo or a root user) in the /newroot chroot when you were creating it.  Make sure to install sshd so you can get back in.  Don't screw up the kernel and bootloader setup.
<exobuzz> which static shell do you use ?
<infinity> sash works well.
<exobuzz> thanks
<infinity> I do this on a regular basis, and mostly from muscle memory, so I may have left out a few steps, but you get the general idea.
<infinity> Just make mental checklist of "ways you can screw yourself" before you reboot and find yourself locked out.
<infinity> The first time I did this, I created an initial user... Without sudo access.. And didn't set a root password.
<infinity> The funny half of this story is that the kernel I installed was the woody d-i kernel that had a known root hole, so I rooted myself to get access to the box.
<infinity> Beat calling up the hosting provider and paying them 50 bucks to reimage with RedHat so I could start all over.
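infinity's remote distro-switch recipe, sketched as shell (a rough, destructive outline, not a tested procedure; the suite, mirror, and paths are illustrative, and the kernel/bootloader steps are hardware-specific):

```shell
# 1. Bootstrap the new system alongside the old one (suite/mirror are examples):
debootstrap dapper /newroot http://archive.ubuntu.com/ubuntu

# 2. Inside the chroot, set a usable password and install sshd,
#    so you can get back in after the reboot:
chroot /newroot sh -c 'passwd root && apt-get install -y openssh-server'

# 3. Install a static shell (e.g. sash) on the OLD root, then from it:
#      exec /bin/sash
#      rm -rf /bin /etc /lib /sbin /usr /var   # everything except /newroot, /dev, /proc
#      mv /newroot/* /
# 4. Reinstall the bootloader onto the boot block, check /etc/fstab, reboot,
#    and hope you remembered every item on the "ways to screw yourself" list.
```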
<exobuzz> hahaha!
<exobuzz> *phew*
<exobuzz> ok.. out of all the technicalities i could stumble across whilst getting linux installed on this intel mac mini
<exobuzz> how do i eject the cd :-)
<exobuzz> humph..
<exobuzz> one of the buttons on the kb for sure.
<exobuzz> hold f12 ! simple..
<exobuzz> im a genius!
<allee> mhmm, ganglia v3 released over a year ago. Are there problems with v3 (beside that debian maintainer didn't upgrade its pkgs)
<exobuzz> got ubuntu-server up and running on my intel mac mini now!
<fabbione> allee: nobody contributed packages and we had no time to do it
<allee> fabbione: ok. looks like I'll have to do it sooner or later ;)
<allee> exobuzz: congrats
<exobuzz> :)
<allee> bbl
<tarvid> anybody working with ipp2p or l7-filter?
<danf_1979> Hi
<danf_1979> any tutorial for installing apache mod_security? the debian package seesms unavailable from the repos
<bpuccio> danf_1979:  http://packages.ubuntu.com/breezy/web/libapache2-mod-security it seems mod_security is there
<bpuccio> for warty, hoary and dapper as well
#ubuntu-server 2007-04-02
<Crell> Hi all.  Can anyone recommend a good PHP 5.2 backport for Edgy?  Unofficial is OK.  I am running into various segfault issues with PHP 5.1.6 that are apparently "known issues fixed in later versions". :-/
<foo> Yup :)
<foo> One minute, let me grab it
<foo> deb http://packages.dotdeb.org stable all
<foo> deb-src http://packages.dotdeb.org stable all
<Crell> Isn't dotdeb a Debian repository?
<Crell> I thought packages weren't portable.
<foo> uh, hmm. One of the techs has used it on 4 of our production servers... 0 problems
<Crell> I guess that's a good enough endorsement for me, for a development desktop. :-)
<KurtKraut> Crell, installing debian packages in ubuntu should not be your first choice... but in your case, the only choice besides that is compiling PHP by yourself
<Crell> Yeah, I'd rather mix repos than not use one at all.
<Crell> I figure I'll be upgrading to Feisty within 6 weeks, but I have some version-sensitive work to do before then. :-)
<Crell> Unfortunately, it looks like various extension packages are not included.  Grrr.
<KurtKraut> Crell, ... it seems that today is a happy day for compiling... ahahahah
* Crell grumbles and goes to trim packages he's not using.
<KurtKraut> Crell, do you use apache or lighttpd ?
<Crell> apache
<KurtKraut> Crell, do you have any spare server where you can install Feisty ?
<Crell> Not really.
<Crell> Well, I've a closet full of stuff, some of which works, but I don't think that really counts. :-)
<KurtKraut> Crell, you can install and run lighttpd in your production server, with no dist-upgrade
<Crell> My experience with lighttpd is 0.  One unknown at a time, please. :-)
<KurtKraut> Crell, but in your servers local network, you can install a Feisty box, that comes with PHP 5.2.1
<KurtKraut> Crell, in lighttpd you can point the PHP interpreter that resides in other machine, inside your LAN
<KurtKraut> Crell, lighttpd is easy as hell
<KurtKraut> Crell, so you can have a newer PHP version running outside your production server
<Crell> That's way too complicated for what I need...
<KurtKraut> it is quite simple. Despite installing a Feisty box, it would be done with less than 5 commands
<KurtKraut> pretty easy if you have compilingphobia :P
<Crell> Yeah... I just aptituded what I need.  I'll just restart apache now and see if it worked. :-)
<duri> hello I am thinking about moving some old redhat linux 9 boxes to the debian platform. could you guys explain to me what is the value add of ubuntu-server vs debian in the server environment ? thanks
<Burgundavia> right
<Burgundavia> ubuntu-server offers a fixed timeframe of support
<Burgundavia> debians is long but uncertain
<Burgundavia> ubuntu 6.06 will supported until June 2011 on the server
<duri> good. that's a good point, one I was aware of ... what about packaging policies (i.e. how new versions are included in the updates and such). I would be moving to apt-get for the first-time and would like to understand what are the best strategies to avoid surprises on the server side
<genii> Anyone alive in here?
* genii sips a coffee and contemplates
* foo tells the genii, "It's going to be ok, you can install ubuntu server."
<genii> Heh :) Thats not the immediate issue. I'm using apt-mirror but on the last round of updates it chunked out on me; filled up a 40G hd whereas before it would go to 31G or so
<foo> apt-get update and upgrade filled up 40G of space?
<genii> No. apt-MIRROR
<genii> I'm mirroring the dapper install
<foo> ahhh, my bad. I'm not familiar with apt-mirror.
<genii> So Im currently wondering; If I have a medium sized hd around how much more than 40Gb is the repo now?
<Kamping_Kaiser> just dapper? shouldnt be more than 40 gig
<Kamping_Kaiser> hang on, i'll look it up
<genii> Kamping_Kaiser: It already filled up a 40G :)
<genii> I guess backports or so is the main hog
<foo> Anyone in here have ubuntu on any dell 2950s 
<foo> ?
<genii> foo I have it on some poweredge 2450
<Kamping_Kaiser> for dapper,dapper-updates,dapper-security,dapper-backports its 29.4 GB (on my setup)
<Kamping_Kaiser> that was made with debmirror btw
<foo> genii: Nice. Any big issues I should be concerned about? We might be getting some 2950s
<Kamping_Kaiser> *updates his mirror*
<genii> Kamping_Kaiser Hmm
<genii> foo The SCSI floppy doesn't work but I can live without it
<foo> hehe /me nods
<genii> Kamping_Kaiser: Maybe I need to eliminate some of the directories. I'm mirroring also the security updates
<Kamping_Kaiser> genii, look what i listed
<genii> (from security.ubuntu.com)
<Kamping_Kaiser> -updates,-security-,and -backports
<genii> Kamping_Kaiser: Hangon a bit, I'll pastebin my mirror.list . I have to ssh into my box, it will be a minute
<genii> It could be I have redundant entries
<genii> http://paste.ubuntu-nl.org/13489/
<genii> Actually I see it now... bleh
<Kamping_Kaiser> i dont carry -proposed
<Kamping_Kaiser> so i'm not sure how big it is
<genii> maybe I'll eliminate the last 2 entries
<Kamping_Kaiser> :)
<genii> The first 2 commented-out entries are to get the netboot installer
<Kamping_Kaiser> debmirror needs a way to handle d-i gracefully, its a shocker at it :/
<genii> Kamping_Kaiser: I have the mirror because I'm mass-installing here. I thought I'd update just before the next round of installs but then it chunked out
<Kamping_Kaiser> genii, net installing?
<Kamping_Kaiser> i had to do a pretty big dose recently, but used mondo. rather netinst. myself
<genii> Kamping_Kaiser: Yes. My preseed file does auto partition and packages then I leave the username part for manual right now
<Kamping_Kaiser> what are you using for it? i looked heavily at a few options, but they were just too clunky
<genii> I just installed one time to a system, put the packages on it I wanted. Then used debconf-get-selections to make a prototype preseed. Then I setup an old dual cpu Dell server to be the tftpboot box
<genii> I have 24 headless machines which do an early-command of load and run sshd so then I ssh into them, set a user/passwd and it chugs along. It ejects the cdrom tray at the end so I know when it's done
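genii's preseed workflow, sketched (the debconf-get-selections commands are the ones named above; the preseed.cfg keys shown are illustrative and vary by installer release):

```shell
# On the hand-installed prototype machine, dump the answers it was given:
#   debconf-get-selections --installer > preseed.cfg
#   debconf-get-selections >> preseed.cfg
#
# Illustrative preseed.cfg lines (exact keys depend on the d-i version):
#   d-i partman-auto/method string regular
#   d-i pkgsel/include string openssh-server
```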
<Kamping_Kaiser> oh for crying out loud. LP is offline?
<ajmitch> Kamping_Kaiser: yep!
<ajmitch> for 15min or so
<Kamping_Kaiser> ajmitch, ah, well i suppose i can handle that :) (but what timing ;||)
<genii> Bah still chunks out. Guess I'll remove all the deb-src entries too
* genii grumbles
<Kamping_Kaiser> really? my mirror is 30 gig with source.
<genii> Maybe I need to comment out all the lines except "clean" do that then uncomment just the absolute necessary ones
<genii> It's driving me nuts cuz now I have a bunch of mixed packages
<Kamping_Kaiser> i havent used apt-mirror, does it store x number of revisisons deep?
<Kamping_Kaiser> i think debmirror only does 1, so that might be a problem
<genii> I'm not exactly certain
<genii> Interesting. From Bambi-something to something-Kaiser
* genii thinks about vicious fawns
<Kamping_Kaiser> lol
<ivoks> what do you think about changing debconf priority for ubuntu-server?
<ivoks> and... changing default configs for some services (for example, vsftpd by default enables anonymous login, which i don't consider a good idea)
<lionel> ivoks: looks like a good idea
<[miles] > guys
<[miles] > anyone alive?
<[miles] > ping ...
<mralphabet> hola
<[miles] > mralphabet: 
<[miles] > muy buenas
<[miles] > mralphabet: any idea wtf my crondaily insists on emailing me to inform me that the ntp has adjusted the time by 3 seconds?
<ivoks> [miles] : because it has output
<[miles] > mmm but
<[miles] > I'm dumping it to /dev/burninhell
<[miles] > see /usr/sbin/ntpdate ntp.ubuntulinux.org 2>/dev/null || exit 0
<ivoks> yeah, but that's for stderr, not stdout
<ivoks> this is an 'unwise' thing to do
<[miles] > what?
<ivoks> you will not know when there is a problem, and you will know when it works :)
<ivoks>  2> is redirection for stderr
<ivoks> not stdout
<[miles] > it's both no?
<ivoks> 1> is for stdout
<ivoks> no
<[miles] > &>
<[miles] > shite
<[miles] > it's that iirc now
<ivoks> &> is all
<[miles] > nod
<[miles] > balls
* [miles]  feels a proper twat and gets his coat
<ivoks> and it's ntp.ubuntu.com
<mralphabet>  >> /dev/null 2>&1
<ivoks> >> is something else
<[miles] > nod
<ivoks> but 2>&1 is redirect 2 in 1
<[miles] > &> is both tho no?
<[miles] > in one syntax
<mralphabet> I believe so
<[miles] > thanks guys
<[miles] > ivoks: I've changed to ntp.ubuntu.com also
<[miles] > ciao todos
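The redirection semantics the three of them converge on, as a runnable sketch (ntpdate_stub is a made-up stand-in for ntpdate, whose "adjusted time" message goes to stdout, which is exactly why cron kept mailing it when only stderr was redirected):

```shell
# Stand-in for ntpdate: one line to stdout, one to stderr.
ntpdate_stub() {
  echo "adjusted time by 3 seconds"   # stdout -> cron mails this
  echo "some warning" >&2             # stderr
}

ntpdate_stub 2>/dev/null      # drops stderr only; stdout still gets through
ntpdate_stub >/dev/null 2>&1  # stdout to /dev/null, then stderr to where fd 1 now points: silence
ntpdate_stub &>/dev/null      # bash shorthand for the line above
```

As ivoks notes, silencing everything also hides real failures; in production you'd usually keep stderr and drop only the routine stdout chatter.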
#ubuntu-server 2007-04-03
<jmg> !cdimage
<jmg> !iso
<jmg> groan
<jmg> where are the cdimages? specifically minimal
<jmg> i can only find ia64
<jmg> http://cdimage.ubuntu.com/non-ports/ubuntu-server/non-ports/non-ports/non-ports/non-ports/non-ports/non-ports/non-ports/non-ports/non-ports/non-ports/
<jmg> PuMpErNiCkLe: thx
<PuMpErNiCkLe> np
<erchache> hi
<erchache> im using for backup my servers a host with rsnapshot+ssh key
<erchache> im looking for change it for iscsi system
<erchache> anybody are using iscsi?
<Kamping_Kaiser> :/
<Kamping_Kaiser> i could have asked them about it
<davekempe> anyone here used priomark for filesystem testing/benchmarking?
<davekempe> http://www.ipacs-benchmark.org/index.php?s=download&unterseite=priomark
<Kamping_Kaiser> hi davekempe 
<davekempe> gday
#ubuntu-server 2007-04-04
<rockzman> can someone help me to install ubuntu 6.10 server?
<mgalvin> rockzman: what do you need help with?
<Innatech> asking here because #bind has been silent: can anyone point me towards good docs on split DNS (internal/external) in terms of how to pattern the hostnames/domains and any issues that might arise therefrom in Kerberos, LDAP & Samba? 
<mralphabet> I run them as two seperate entities
<mralphabet> granted, I don't run kerb / ldap / samba access on both sides of the firewall
<Innatech> Yes, that's more or less what I intend. Nothing funky cropped up doing that? 
<mralphabet> nope
<Innatech> cool, thanks. 
<mralphabet> resources that _do_ have holes poked in the firewall for them have two names
<Innatech> hrmm. OK, that should be manageable without too big a headache. 
<mralphabet> IE internally it resolves both blah.somedomain.com *and* blah.somedomain.local
<mralphabet> when dealing with end users who use the resource internally and externally, I refer to everything as blah.somedomain.com
<mralphabet> external obviously only has the listing blah.somedomain.com
<mralphabet> any services that need inter-server communication uses blah.somedomain.local
<mralphabet> if that makes any sense
<mralphabet> not saying it's right, but it works for me /shrug
<Innatech> I think I get it. That's about what I envisioned. 
<Innatech> I'm just not sure why one would want to do it that way, rather than using a subdomain. But I'm probably missing some no-duh big picture issue there. 
<mralphabet> I don't think it matters one way or the other, I think it is just preference
<Innatech> ah, OK. I felt like something obvious was escaping me. 
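For reference, the "two names vs. subdomain" choice Innatech asks about is often done in BIND with split-horizon views, roughly like this (illustrative names and networks; note that once views are used, every zone must live inside one):

```
// named.conf sketch of split-horizon DNS
view "internal" {
    match-clients { 192.168.0.0/16; localhost; };
    zone "somedomain.com" {
        type master;
        file "db.somedomain.com.internal";   // internal-only answers
    };
};
view "external" {
    match-clients { any; };
    zone "somedomain.com" {
        type master;
        file "db.somedomain.com.external";   // only publicly reachable names
    };
};
```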
<davekempe> free tech support from me for the next half hour. ask me anything to do with ubuntu on the server! :)
<Kamping_Kaiser> why doesnt my work use ubuntu on its server? :(
<davekempe> hehehe. just get them to call me
<davekempe> after I get back from holidays
<davekempe>  :)
<davekempe> no seriously
<Kamping_Kaiser> hehe
<fabbione> davekempe: i have an HSG80 in multibus failover exporting 2 DX towards 2 switches and there to one machine with 4 FC-HBA lanes.. can i use dm-multipath+dm-roundrobin to load balance the traffic and handle failover?
* fabbione tickles daq4th 
<fabbione> ops
* fabbione tickles davekempe 
<davekempe> fabbione, hrmm
<davekempe> on dapper?
<fabbione> dapper or feisty
<davekempe> multipath is a bit undermaintained for my liking
<davekempe> what hbas?
<fabbione> the controllers are 2 Emulex LP9000 and 2 qlogic 2400
<davekempe> nice
<davekempe> dunno about their support in dapper. I would get a more recent kernel myself, built by hand. 
<fabbione> of course all cross connected to avoid failures by vendor
<davekempe> thats what i did for a similar situation recently
<davekempe> ok
<fabbione> you didn't answer my question tho
<davekempe> yeah im still thinking
<davekempe> honestly, I would just try it and see
<davekempe> 'can i use X?' questions require a certain degree of experimentation, in my experience
<fabbione> davekempe: i will tell you the answer.. i was just teasing you...
<davekempe> yeah i thought so
<fabbione> i know the answers to all these questions
<davekempe> hey have you seen priomark?
<fabbione> but free support was an offer i couldn't resist :)
<fabbione> priomark?
<davekempe> sorry I am not as familiar with your hardware as I would like to be :)
<davekempe> http://www.ipacs-benchmark.org/index.php?s=download&unterseite=priomark
<davekempe> I can send you the paper if you want to read more - I bought the paper for it yesterday
<fabbione> no i didn't see it
<davekempe> I am looking for an effective way to test my AoE SAN across different archs/distros
<fabbione> AoE? brrrrrrrrr....
<fabbione> it depends what kind of tests you want to perform
<fabbione> stability? performance? redundancy?
<fabbione> failover?
<davekempe> bonnie++ seems to give me strange results
<fabbione> i use dbench
<davekempe> more like what's the hit in performance
<davekempe> over local disk etc
<davekempe> ok I will check it out
<fabbione> it's somewhere in universe
<davekempe> going back onsite tomorrow to play with it
<fabbione> well clearly the first hit you get is the network traffic
<fabbione> you should really separate the bits in the setup
<fabbione> first benchmark the network of the server
<fabbione> because you might have a 10Gbit Ethernet
<davekempe> yeah i have been testing them all separately as i build it
<fabbione> but a broken driver that push 10kbit
<davekempe> yeah i noticed a real difference on the areca raid card between dapper and feisty
<davekempe> 20% performance increase
<fabbione> i would also perform tests from one machine only towards the SAN
<fabbione> and see if you can saturate
<fabbione> if you can't it's pointless to go with another node on it
<fabbione> and if you plug more than one node.. then you need to decide how you want to test the access to AoE
<fabbione> 2 different partitions from SAN to 2 machines?
<fabbione> or one partition using a clustered FS?
<fabbione> it's a pain to track all the options
<davekempe> no clustered fs
<davekempe> yeah its a minefield
<davekempe> aiming for xen dom0's booting the domU off their own raided aoe slices
<davekempe> btw - you have any idea how i can tell if I have jumbo frames enabled?
<davekempe> not that i have googled that yet
<fabbione> no i don't know.. i would have to google it too
<fabbione> but it also depends on the hw
<fabbione> some cards have limited MTU in hw
<davekempe> ahh
<davekempe> ok cool
<fabbione> to offload ipv4 checksum calculation down to the chipset
<fabbione> that's something i am sure about :)
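To answer davekempe's jumbo-frames question concretely (a sketch; lo is used only so the commands run anywhere, substitute the real NIC):

```shell
IFACE=lo   # substitute your actual NIC, e.g. eth0

# The interface's current MTU lives in sysfs:
cat /sys/class/net/$IFACE/mtu

# Standard Ethernet is 1500; "jumbo frames enabled" means something around 9000.
# Raising it needs root plus NIC/driver support, e.g.:
#   ip link set dev eth0 mtu 9000
```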
<davekempe> thanks for the free tech support :)
<fabbione> why your name keeps ringing a bell in my head but i can't associate to something specific?
<davekempe> appamour?
<fabbione> no ablo francese :)
<fabbione> davekempe: i might be just on crack
<ivoks> je ne parle pa fracise :)
<ivoks> pas
<fabbione> <davekempe> appamour?
<ivoks> or whatever :)
<ivoks> has anyone had anything to do with chillispot?
<davekempe> i was pushing various canonical peeps about it. now its getting into feisty i am happy. big gratz all round
<fabbione> food time
<fabbione> bbl
<davekempe> ivoks, thats a wireless captive portal right?
<ivoks> yes...
<ivoks> i have problems with it; it wouldn't connect to my radius server
<davekempe> i haven't used it before, but I still may be able to help you
<davekempe> what have to you checked - does it have logs? tethereal/tshark? strace?
<[miles] > morning guys
<[miles] > I really do think we should try to get the ubuntu team to fix up the saslauthd and postfix problem
<[miles] > as its bloody annoying
<ivoks> davekempe: yes it has, but nothing meaningful in them
<ivoks> davekempe: radius doesn't record connection being made at all
<davekempe> tried tshark? packet sniffers never lie
<[miles] > having the socket outside of the postfix jail is a nag
<davekempe> miles - yeah its a bit of trap for new players
<[miles] > davekempe: I submitted a patch
<[miles] > davekempe: and opened a bug
<davekempe> overall the setup is still way easier than other platforms.... :)
<[miles] > davekempe: yeah, but messing around with the init script etc
<[miles] > davekempe: is still a fuckin pain
<davekempe> i agree
<davekempe> got any response on your bug?
<[miles] > yeah, I submitted it a few months ago, but it got checked the other day I think
<[miles] > https://launchpad.net/bugs/79371
<davekempe> damn searching for bugs on launchpad is annoying
<davekempe> what about a debian bug? upstream might have more weight on this package
<[miles] > no idea
<[miles] > I don't touch debian
<[miles] > not involved with it in any way shape or form
<[miles] > davekempe: your british I guess by your name?
<davekempe> lodging the same but against the debian package might be tackling the problem from two angles :) or maybe just bad form :(
<davekempe> nope im Aussie
<[miles] > ah jeje
<[miles] > nice!
<[miles] > better
<[miles] > the brits are a bunch of twats
* [miles]  knows
<davekempe> ill take your word for it :)
* [miles]  is a scouser
<[miles] > but live and work in Barcelona 5 years
<davekempe> wtfs a scouser?
<[miles] > someone from Liverpool
<davekempe> i see :)
<rambo3> what are the basic meta-packages for a basic system on a server ?
<Nafallo> ubuntu-{minimal,standard}
<rambo3> ok thanks 
<JakeX> hey can someone help me with an issue regarding samba? I just need to upgrade to latest version on a breezy dist.. http://www.ubuntu.com/support/communitysupport
<JakeX> i seem to be limited to samba 3.0.14 :(
<JakeX> oops.. wrong link given a sec ago, heres the right one: http://ubuntuforums.org/showthread.php?t=401200
<shawarma> 3.0.14 is the newest available on Breezy.
<shawarma> And if I remember correctly, Breezy will be unsupported in 10 days.
<JakeX> hmm well i have a production server..
<shawarma> JakeX: https://lists.ubuntu.com/archives/ubuntu-announce/2007-March/000099.html
<JakeX> and due to the MacOSX tiger upgrade I'm forced into upgrading
<shawarma> Which version do you need?
<JakeX> hmm so do a dapper upgrade from breezy -> dapper -> edgy
<shawarma> Do you really need Edgy?
<JakeX> i found that 3.0.14 causes problems with macs.. and other users mentioned 3.0.20 as having a fix
<shawarma> Dapper is supported for 5 years.
<JakeX> no.. i don't really need an upgrade.. but I need samba fixed :|
<JakeX> well i can do dapper i guess..
<JakeX> but my chief concern is .. a broken os.. due to upgrade.. i've read some people having issues..
<JakeX> and since its a server install i didn't wanna play with it :P
<JakeX> but it looks like thats my best option.. plus support for 5 years
<shawarma> JakeX: Precisely.
<JakeX> process for the upgrade would be simply -> swap in dapper source.list and apt-get update > dist-upgrade ?
<shawarma> You might be able to backport Samba to your Breezy install, but as I mentioned before, it's unsupported as of the 13th .
<JakeX> ya
<JakeX> wasn't aware of that..
<Nafallo> JakeX: you're not concerned that breezy reached EOL? :-)
<JakeX> not really.. it is an internal samba server..
<JakeX> iptables + samba + mysql & tomcat for internal app processing..
<JakeX> in reality.. i couldn't care less except for samba problem with mac osx :(
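The breezy-to-dapper procedure being discussed, sketched as shell (back up first; this is the standard sources.list edit plus dist-upgrade, not a guaranteed-safe recipe):

```shell
# Point apt at the next release (sed keeps a backup copy):
sudo sed -i.bak 's/breezy/dapper/g' /etc/apt/sources.list

# Then pull the new package lists and upgrade everything:
sudo apt-get update
sudo apt-get dist-upgrade
```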
<chandu_> hi
<chandu_> Is there any utility to mount the gzip compressed image without gunziping it 
<mralphabet> hi chandu, glad you hung around for an answer!
<fdoving> total of 4 min. patience!. :)
<mralphabet> so . .. how would one revert from feisty to edgy
<mralphabet> ivoks: you wouldn't happen to know how to downgrade to edgy from feisty would you?
<ivoks> sure i would :)
<mralphabet> Oh? Please, share ;)
<ivoks> but, it's not that flawless like upgrade
<mralphabet> The feisty + vmware server issues are driving me crazy
<ivoks> in /etc/apt/preferences
<ivoks> you have to add:
<ivoks> Package: *
<ivoks> Pin: release a=edgy
<ivoks> Pin-Priority: 1001
<ivoks> and you should have edgy repos in sources.list
<ivoks> simple apt-get dist-upgrade should start the longest night of your life :)
<mralphabet> hmm, it seems I do not have /etc/apt/preferences
<ivoks> of course you don't
<mralphabet> make a new file?
<ivoks> yes
<mralphabet> deb http://us.archive.ubuntu.com/ubuntu/ edgy main restricted universe multiverse
<mralphabet> deb-src http://us.archive.ubuntu.com/ubuntu/ edgy main restricted universe multiverse
<mralphabet> deb http://us.archive.ubuntu.com/ubuntu/ edgy-proposed main restricted universe multiverse
<mralphabet> look right?
<ivoks> this is support for server
<ivoks> not 'how do ubuntu repos look like
<ivoks> :)
<mralphabet> heh
<mralphabet> good point
<ivoks> we have to have a clear separation; we don't want this to become support for everything
<mralphabet> I understand, no worries
<mralphabet> 0 upgraded, 13 newly installed, 928 downgraded, 86 to remove and 0 not upgraded.
<mralphabet> we'll just let that run for a while
<mralphabet> ivoks: thanks for the help
<ivoks> good luck :)
<mralphabet> If at all in doubt, answer Yes. If you know exactly what you are doing, and are prepared to hose your system, then answer No
<mralphabet> certainly an eyeopener
<jronnblom> mralphabet: what problems do you have with feisty and vmware server? Im running feisty server with vmware server 1.02 and I haven't run into any problems yet.
<mralphabet> jronnblom: for whatever reason, I can not get the host and guests machines to do anything more then ping
<mralphabet> ubuntu host with xp and vista guests, guests can not browse network (host is network browsing master) and can not access host samba shares and can not ssh to host
<mralphabet> guests can see other machines on the network, other machines can see guests
<jronnblom> ah, I remember your problem with ping
<jronnblom> what h/w are you on?
<mralphabet> it's a white box, asus board w/2.8ghz intel p4
<mralphabet> edgy + vmware server 1.0.1 worked
<jronnblom> and I assume that you tried 1.02 with feisty? 
<theacolyte> yeah ive ran feisty in vmware server 1.02 myself just fine
<mralphabet> jronnblom: correct
<mralphabet> theacolyte: "feisty in vmware"? feisty as guest?
<jronnblom> have you tried with ethereal/tcpdump and see whats on the network when you try to ping the host or guest?
<jronnblom> what NIC is on the asus board?
<theacolyte> feisty IN vmware server, actually, sorry just reread it all
<theacolyte> not as host
<mralphabet> jronnblom: RealTek RTL8139
<daq4th> fabbione: ?
<fabbione> daq4th: sorry.. it was the wrong nick/tab completion
<daq4th> ;-)
<mralphabet> jronnblom: when I ssh from guest to host I get a timeout error in auth.log
<jronnblom> mralphabet: I have only Intel or Broadcom in my servers and they don't seem to have a problem with feisty and vmware-server
<jronnblom> I would try and replace the realtek card if possible
<theacolyte> broadcom makes some really good chips these days
<theacolyte> not too hot on realtek either
<mralphabet> I believe I have a 3c905b or c laying around
<theacolyte> ! now that's a good card. I still use my DEC Tulip card..
<jronnblom> hmm, it used to be a good card but its very old nowadays.
<J_P> hi all
<pursuantirc> first time irc
<pursuantirc> question on linux
<pursuantirc> ubuntu rocks, by the way, and have used the desktop software.  I am interested in a server.
<pursuantirc> are there gui tools for the server?
<theacolyte> no
<theacolyte> not unless you installed them
<pursuantirc> thanks
<Aw0L> I have a basic dns server setup with dnsmasq - is there a way to make what I have in /etc/hosts take precedence over the real IP of a site?
<Aw0L> like, if I want to make google.com point to a local IP for instance
<shawarma> dnsmasq already serves your /etc/hosts via dns.
<Aw0L> shawarma, true, but if I add an entry for a domain that already exists, it points to that domain instead of what's in my /etc/hosts
<shawarma> Aw0L: That sounds broken. Why should it try to resolve something it finds in its hosts? Odd.
<Aw0L> that's what I'm curious about
<shawarma> Aw0L: Have you tried putting something like "www.google.com." in the hosts file? Note the final dot.
<Aw0L> what does the final dot do?
<shawarma> It's kind of like the leading / of a path.
<Aw0L> yeah, but why is it necessary?
<Aw0L> wait
<Aw0L> maybe I should clarify
<shawarma> Well, it shouldn't be, but it might help.
<Aw0L> on the dns server itself, if I add an entry for google in my /etc/hosts file and type "ping google.com" it pings google
<shawarma> Since you tell it that "www.google.com" has this address and not "www.google.com.your.own.domain".
<Aw0L> on another computer that uses my dns server as its dns server - it just goes to google instead
<shawarma> Aw0L: eh?
<Innatech> anyone have any ideas about why SSH port forwarding suddenly stopped working for connections to local ports on the target host? As in, I SSH to my office LAN, port 22 is forwarded to a machine running the SSH server, and login is normal. Ports tunneled from my home machine to other hosts on the office LAN work normally, but connections tunneled to the machine running the SSH server die. 
<shawarma> Aw0L: Your DNS server pings google even thought it's overridden in the hosts file?
<Aw0L> not quite
<shawarma> Innatech: Define "die".
<Aw0L> from the dns server, if I ping google, it returns what's in my /etc/hosts
<shawarma> Aw0L: Ah. That's not what you said. :-)
<shawarma> Aw0L: "and type "ping google.com" it pings google"..
<Aw0L> if I ping google from another box that has my dns server in /etc/resolv.conf, it pings google's real IP
<shawarma> Aw0L: Right, ok.
<shawarma> Aw0L: Could you try adding the final dot and SIGHUP'ing dnsmasq?
<Innatech> shawarma: I've only monkeyed around with HTTP traffic thus far, but either a generic "server not found" or a  "connection was reset" error page. 
<shawarma> Innatech: "Server not found". Which browseR?
<Innatech> Shawarma: Firefox. 
<Innatech> IE gave a similar error, I forget the specific language. 
<shawarma> Innatech: Ok. I just seem to remember something about IE saying that whenever anything at all went wrong. Very confusing.
<Innatech> Yes, IE does do that. Which sucks. 
<shawarma> Very much so.
<Innatech> Anyway, its a client side not found. Not a 404 from the server. 
<shawarma> Especially when you're dealing with people who actually know a little bit about what they're talking about, but still not quite, and they tell you that "requests to blablabla gives me a 404" and after hours of debugging it turns out that they're actually seeing that no good generic error which doesn't say a thing about 404.
<shawarma> Not that I'm bitter or anything..
<Innatech> Now I'm at the office and can verify that the httpd I couldn't reach is indeed running. (whew). So now I want to get the SSH tunnel straightened out so I don't have similar heart attacks (thinking my webapp is down when I try to use it remotely.)
<shawarma> Innatech: Well, I see no reason why this should start failing spontaneously. Try with telnet or netcat or something.
<Innatech> yeah....telnet failed silently, and I was on XP hosts @ school when it happened, so no netcat there. 
<shawarma> Innatech: And you're sure about your command line being right?
<shawarma> I have to ask.. :-)
<shawarma> Aw0L: Any luck?
<Innatech> I was using PuTTY, actually. So unless there's a bug in the latest build...
<Aw0L> shawarma, no
<shawarma> Aw0L: The final dot didn't help?
<Aw0L> basically, to get it to do what I want I'd have to set up a zone for that domain, which I don't want to do 
<Aw0L> and no, it didn't :(
<shawarma> Aw0L: I think I have a dnsmasq running on my router. I can try a few things. hang on.
<Aw0L> ok, thanks
<Innatech> hrm. I'm going to try it from another company's LAN, down the hall. (Oh, the suspense!) 
<shawarma> Aw0L: Hmmm. it works just fine here.
<Aw0L> really?
<Aw0L> are you pinging from the router though?
<Aw0L> or from another machine?
<shawarma> Aw0L: No.
<shawarma> Another machine.
<shawarma> Although..
<shawarma> Ah, I think I know what your problem is.
<shawarma> $ host www.google.com
<shawarma> www.google.com is an alias for www.l.google.com.
<shawarma> www.l.google.com has address 209.85.129.99
<shawarma> www.l.google.com has address 209.85.129.104
<shawarma> www.l.google.com has address 209.85.129.147
#ubuntu-server 2007-04-05
<shawarma> .. If you've added an entry to the hosts file on the dnsmasq server, you'll have a line like
<shawarma> www.google.com has address 127.0.0.1
<shawarma> at the top.
<shawarma> Right?
<Aw0L> eh?
<Aw0L> my 127.0.0.1 is set for localhost
<Aw0L> if that's what you mean
<Aw0L> ?
<shawarma> Well, not necessarily 127.0.0.1, but whatever you've set www.google.com to point to, of course.
<Aw0L> okay?
<Aw0L> I just added "192.168.1.x google.com google"
<shawarma> Not an actual "x", right?
<Aw0L> what do you mean by the alias stuff?
<Aw0L> right :)
<shawarma> Good. :-)
<shawarma> Try running "host www.google.com"
<shawarma> You'll see.
<shawarma> It'll all make sense.
<Aw0L> in my fstab?
<shawarma> !??!?! What?
<Aw0L> huh?
<shawarma> fstab?!?
<shawarma> What?
<Aw0L> whoops
<Aw0L> typo
<Aw0L> hosts
<Aw0L> /etc/hosts
<shawarma> In a terminal, type:
<shawarma> host www.google.com
<shawarma> press return.
<shawarma> watch.
<shawarma> :-)
<Aw0L> ok, that's what it says
<Aw0L> but...
<Aw0L> what does that mean?
<shawarma> "www.google.com is an alias for www.l.google.com." means that there's a CNAME record for www.google.com pointing at www.l.google.com.
<Aw0L> wait
<shawarma> Are you familiar with CNAME records?
<Aw0L> I think it was a misconfiguration
<Aw0L> in my /etc/resolv.conf my first nameserver was 127.0.0.1 instead of the local IP
<Aw0L> that's on the dns server
<Aw0L> it works now!
<shawarma> Yay.
<Aw0L> cool!
<Aw0L> thanks for your help
<shawarma> any time.
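For anyone finding this later, the working setup can be sketched like this. The addresses are placeholders in the spirit of the chat, not Aw0L's real values:

```
# /etc/hosts on the dnsmasq box -- dnsmasq answers LAN queries from
# this file before forwarding anything else upstream
127.0.0.1      localhost
192.168.1.20   google.com google

# /etc/resolv.conf on the dnsmasq box itself -- this was the actual
# bug: the first nameserver was 127.0.0.1 instead of the box's own
# LAN address
nameserver 192.168.1.2

# after editing /etc/hosts, make dnsmasq re-read it:
#   kill -HUP $(pidof dnsmasq)
```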
<Innatech> so...I can reproduce this SSH problem from down the hall. I can hit servers on hosts other than the ones I'm SSHing into, but local ports mapped to ports on the target machine don't work. Firefox gives a "the connection was reset while the page was loading" page. I have cygwin on the machine I'm testing with. Any tips for using netcat to examine the problem? 
<Innatech> Target box is running Ubuntu 6.06, btw. 
<shawarma> Innatech: Well, a quick "netcat localhost 80" would perhaps reveal something.
<davekempe> anyone here used dbench? how long is a test with 100 clients supposed to take on decent new hardware? in real time
<Innatech> well, that cygwin install is missing netcat, but the windows version yielded some bizarre results:
<Innatech> bleh, can't find the transcript now. But here are some lines from the SSH event log...
<Innatech> 2007-04-04 15:53:09     Opening forwarded connection to 192.168.1.1:80
<Innatech> 2007-04-04 15:53:10     Forwarded port closed
<Innatech> 2007-04-04 15:53:10     Forwarded port closed
<Innatech> 2007-04-04 15:53:16     Opening forwarded connection to 192.168.1.5:80
<Innatech> 2007-04-04 15:53:16     Forwarded connection refused by server: Connect failed [Connection refused] 
<Innatech> Any ideas welcome. 
<davekempe> have tried telneting from the forwarding server to the target server?
<davekempe> on the target port
<Innatech> forwarding server is the target server. 
<Innatech> 192.168.1.5 is the machine SSH'd into. 
<davekempe> sorry, I'm coming in on the tail end of this problem
<Innatech> and yes, local browsing is OK. 
<Innatech> no, good question. I didn't include the first lines of the SSH log so as not to spam the channel. 
<davekempe> don't hesitate to reach for a packet sniffer if you are scratching your head
<Innatech> yeah....sigh. I tend to get a little lost trying to do that, but it's worth a try. 
<davekempe> just filter out only the ports you need
<Innatech> so....it appears to be some kind of problem with HTTPS. As in, it's not working. If I specify plain HTTP over 8080, then it works. Same thing on the local host, too. Bizarre. 
<Innatech> *local host in that sense being the box running the httpd I want to hit, and into which I'm SSH'd and tunneled. 
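A sketch of the equivalent OpenSSH command line for the forwards in that PuTTY log, plus the netcat checks discussed above. The user and host names are made up; the 192.168.1.x addresses are the ones from the log:

```shell
# -L local_port:target_host:target_port -- the target is connected to
# from the remote (office) end of the tunnel, so the 192.168.1.x
# addresses resolve on the office LAN. user@office.example.com is a
# placeholder.
ssh -N \
  -L 8080:192.168.1.5:80 \
  -L 8081:192.168.1.1:80 \
  user@office.example.com

# in another terminal, test each hop with netcat instead of a browser:
nc -v localhost 8080   # tunnel to the SSH server's own httpd
nc -v localhost 8081   # tunnel to the router's httpd
```

If plain HTTP works but HTTPS doesn't, also check that the local end is being addressed with the right scheme -- a tunnel to port 443 still has to be spoken to as https:// on the local port.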
<dsdg> halo, anyone here running mod_encoding?
<dsdg> I am struggling to compile it,
<dsdg> i have mod_encoding.c, how do i compile it?
<dsdg> to get the .so file?
<\sh> for apache?
<\sh> or apache2?
<\sh> oh he's gone..apt-get install libapache2-mod-encoding will help anyway ;-)
<evilkry> hello
<evilkry> I am new to the lamp server install of ubuntu - i'm going to use the 6.06 version
<evilkry> I have a question in regards to the default of mysql... what is set up after a default install.. what will I need to do as far as changing the root password and/or creating my users?
<theacolyte> well
<theacolyte> besides telling you to google something like that because the first things that pop up will tell you what
<theacolyte> mysqladmin -u root password yourrootsqlpassword
<theacolyte> mysqladmin -h server1.blahblahblah.com -u root password yourrootsqlpassword
<theacolyte> for user admin, see www.mysql.com
<ivoks> evilkry: any previous experience with mysql?
<evilkry> yes
<evilkry> i've been using it on my godaddy.com server account for a while
<ivoks> then phpmyadmin is overkill for you or a handy tool? :)
<evilkry> but i'm bringing it in house to a new server
<evilkry> phpmyadmin is handy :P
<ivoks> then install phpmyadmin too :)
<evilkry> I definitely intend to hehe
<evilkry> I hear that on ubuntu I can just apt-get install phpmyadmin ?
<theacolyte> evilkry: yes
<theacolyte> evilkry: packages.ubuntu.com
<evilkry> oh cool, thank you
<ivoks> evilkry: you should enable universe repository
<ivoks> and then install it
<ivoks> it will be on http://your_server/phpmyadmin
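The steps ivoks describes, sketched for 6.06 ("dapper"); the sed line is one rough way to uncomment the universe entries and may need hand-editing of sources.list instead:

```shell
# enable the universe repository, then install phpmyadmin
sudo sed -i 's/^# *\(deb.* dapper universe\)/\1/' /etc/apt/sources.list
sudo apt-get update
sudo apt-get install phpmyadmin
```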
<evilkry> ok, as soon as my iso image finished downloading i'll burn it and then rock it out
<evilkry> i'm trying to feel out most of my playing field here before I hit it
<ivoks> on linux, you have to know only one thing
<ivoks>  /var/log has answers to all of your problems :)
<evilkry> haha
<evilkry> I haven't run a linux server in quite some time... I believe it was slack 9 - the last I had running
<theacolyte> I think it's the other way around
<theacolyte> i.e. /var/log has all your problems
<theacolyte> hehe
<ivoks> :)
<evilkry> hrhr
<evilkry> how hard do you all think it will be to set up my server to run about 6 different IPs, some sites running on the same and some sites running on separate ips
<ivoks> not hard
<ivoks> if you understand how apache works
<theacolyte> any of you know anything about ipv6?
<evilkry> theacolyte: wish I did, the last ccna course I took barely even mentioned it :(
<ivoks> i'm avoiding learning ipv6 until my job starts depending on it :D
<theacolyte> hehe
<theacolyte> I'm going to be setting it up on my home office
<theacolyte> Have a Cisco 1811, it can do it
<evilkry> ivoks: I understand how to do IP aliasing in linux, just a little confused perhaps on how apache handles the domains... I also think I understand that I can just set up like the NameVirtualHost * and then do Servername www.somedomain.com with DocumentRoot etc
<theacolyte> ISP supports ipv6 tunnels
<evilkry> just kinda throws me when you start bringing in the "other" ip addresses
<theacolyte> evilkry: the apache side of vhosts is by far the easiest part of the whole process
<theacolyte> hehe
<theacolyte> I've done it plenty o times
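A hedged sketch of what that mixed setup can look like in apache (2.0/2.2-era syntax); all addresses, domain names, and paths below are invented for illustration:

```apache
# name-based vhosts share one address: one NameVirtualHost line,
# several <VirtualHost> blocks distinguished by ServerName
NameVirtualHost 192.168.0.10:80

<VirtualHost 192.168.0.10:80>
    ServerName www.somedomain.com
    DocumentRoot /var/www/somedomain
</VirtualHost>

<VirtualHost 192.168.0.10:80>
    ServerName www.otherdomain.com
    DocumentRoot /var/www/otherdomain
</VirtualHost>

# a site on its own IP is just a vhost bound to that address; no
# NameVirtualHost line is needed for it
<VirtualHost 192.168.0.11:80>
    ServerName www.dedicated.com
    DocumentRoot /var/www/dedicated
</VirtualHost>
```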
<theacolyte> elyob: there are no gui options, installing X is a lot of work on ubuntu-server
<elyob> theacolyte: Hi, thanks for that. I'm just figuring out my way around server at the moment. 
<evilkry> i've seen a lot in my reading today the following: mysqladmin -u root password yourrootsqlpassword
<evilkry> mysqladmin -h server1.example.com -u root password yourrootsqlpassword
<evilkry> do I really need the second part including the -h server1.example.com ?
<theacolyte> evilkry: that's what I told you to do like... an hour ago :P
<evilkry> or can I get by with just mysqladmin -u root password yourrootsqlpassword
<theacolyte> yes, it sets access for root to only work from your localhost
<evilkry> ah
<theacolyte> not doing so is a serious security risk
<evilkry> well i'm going to do that hehe
<theacolyte> good idea :)
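The two mysqladmin invocations quoted above, with a note on why there are two: MySQL stores root@localhost and root@<hostname> as separate accounts, so each needs its password set. The hostname, database, and user names below are placeholders:

```shell
# password for root connecting from localhost:
mysqladmin -u root password 'yourrootsqlpassword'
# password for the host-qualified root account:
mysqladmin -h server1.example.com -u root password 'yourrootsqlpassword'

# creating an application user is then done from the mysql client, e.g.:
mysql -u root -p -e "GRANT ALL ON mydb.* TO 'webuser'@'localhost' IDENTIFIED BY 'secret';"
```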
<katia_> hello everybody
<katia_> Has anyone installed qmail?
<katia_> Has anyone ever installed qmail?
<ivoks> not me :)
<lionel> katia_: yes, I used to install it
<lionel> but it was on Debian...
<katia_> lionel i'm trying to install qmail using kubuntu 6.06
<lionel> I do not know why qmail-src disappeared from Ubuntu...
<katia_> i'm using the qmail howto
<katia_> what distro do you recommend to use as a server?
<lionel> Ubuntu :)
<Burgwork> funny that we would recommend that here
<katia_> has anyone installed qmail using ubuntu?
<katia_> no one?
<lionel> katia_: this is probably a stupid question but... I am not going to say 'you should not use qmail', etc.
<lionel> but do you have a good reason to use qmail ? qmail is quite difficult to setup and to maintain
<lionel> it was secure and fast. But now, postfix and exim are as secure and fast
<katia_> what software do you recommend?
<lionel> postfix or exim
<katia_> ok
<lionel> I would personaly recommend postfix
<katia_> thanks lionel
<lionel> you're welcome
<jhutchins> katia_: I'd second that, postfix with postgrey.
<katia_> ok thanks guys
<ivoks> postgrey rocks
<katia_> i've spent almost a month with qmail
<ivoks> if only @ubuntu.com's MX would use postgrey ;] 
<J_P> hi all
<J_P> hey all..
<J_P> I think I found a bug on 6.10 server..
<J_P> look:
<J_P> I have two 80GB HDs. So I created this:
<J_P> /dev/sda1  - raid autodetect
<J_P> /dev/sda2 - swap
<J_P> /dev/sdb1  - raid autodetect
<J_P> /dev/sdb2 - swap
<J_P> so I created the RAID 1 in software (in the installer) and after the first reboot... 
<J_P> the machine starts up and shows: 
<J_P> mdadm: /dev/md0 has been started with 2 drives.
<J_P> and after a long time (10 minutes, more or less) it shows this message:
<J_P> ALERT! /dev/hda1 does not exist. Dropping to shell.
<J_P> And drops to a busybox shell..
<J_P> why is this?
<J_P> I don't have any IDE disk (hda1), both discs are sata..
<J_P> anyone here... ?
<Pumpernickel> bug 36829
<Pumpernickel> gah
<Pumpernickel> https://launchpad.net/bugs/32123 and 36829
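Those bug reports largely come down to the boot configuration still pointing at a device name that doesn't exist. A hedged checklist for this situation, using tools from that era's mdadm/initramfs-tools:

```shell
# does the array assemble at all?
cat /proc/mdstat
# does the bootloader still point at a vanished disk? root= should
# name the array (e.g. /dev/md0), not /dev/hda1:
grep 'root=' /boot/grub/menu.lst
# after fixing menu.lst, rebuild the initramfs so early boot agrees:
sudo update-initramfs -u
```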
<evilkry> hey guys i've successfully got my ubuntu server up and running
<evilkry> hooray
#ubuntu-server 2007-04-06
<evilkry> hey guys, have any of you installed ISPConfig on ubuntu?
<elyob> How can I fix my IP address with ubuntu-server? Thanks
<elyob> i.e. 192.168.x.x
<theacolyte> man ifconfig
<elyob> Thanks, so ifconfig eth0 192.168.x.x 255.255.255.0
<theacolyte> basically yes
<elyob> Thanks
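ifconfig works, but the setting is lost at reboot; the persistent equivalent on ubuntu-server goes in /etc/network/interfaces. Addresses below are examples:

```
# /etc/network/interfaces -- persistent version of the ifconfig above
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```

Apply it with `sudo /etc/init.d/networking restart` (the correct `ifconfig` one-liner form also takes a keyword: `ifconfig eth0 192.168.1.10 netmask 255.255.255.0`).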
<Kamping_Kaiser> is anyone here running scalix?
<Kamping_Kaiser> i'm wondering how nicely it plays with LTS
<Kamping_Kaiser> just filed a [need-packaging] bug on scalix - hope it isn't a dupe of a rejected bug :|
<ivoks> ok, debian will get zenoss packages
<ivoks> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=361253
<ivoks> i'll work with Bernd to make this happen ASAP
<iaj> i am having problems installing 6.06 LTS Server in VMWare.  after installing when it boots the first time it hangs at the following:  uncompressing linux... ok, booting kernel
<iaj> can someone help?
<theacolyte> http://www.vmware.com/community/thread.jspa?messageID=419923
<theacolyte> see if that helps
<iaj> tried it
<iaj> and several other suggestions i found on google
<iaj> still no luck
<theacolyte> =/
<theacolyte> I have no real ideas besides what was outlined in that post
<iaj> i have not delved this far into an OS before (unix or linux), including installing it myself.  Is there some sort of debugging I can use to log the point where the problem occurs?
<theacolyte> not for that, no
<iaj> ok thx
#ubuntu-server 2007-04-07
<evilkryWork> does anyone here use ISPConfig?
<shawarma> evilkryWork: 
<shawarma> whoops
#ubuntu-server 2007-04-08
<ctan> hello there
<Innatech> heya
<ctan> know any mentors for google SoC which I can convince to accept my proposal :P ?
<ctan> i have a great marketing pitch!
<fabbione> ctan: repeatedly asking in different ubuntu channels will not help
<fabbione> and you already got an answer in #ubuntu-devel
<fabbione> you have to wait
<ctan> i was asking a different question in #ubuntu-devel
<ctan> but i get the picture :P
<ctan> sorry to have bothered
<[miles] > afternoon guys
<[miles] > guys, am I going crazy, or is there no pam_ldap in LTS?
<Nafallo> [miles] : universe
<Nafallo> https://launchpad.net/ubuntu/+source/libpam-ldap/180-1ubuntu0.6.10
<[miles] > h 
<[miles] > jeje
<[miles] > ok cheers
<[miles] > forgot to add the repo
<Burgundavia> ajmitch: do we have a list of cool new things for ubuntu server in feisty?
<ajmitch> nope
<ajmitch> I can't really think of much that's cool & new
<ajmitch> maybe apache 2.2, the GFS & clustering stuff (some of which is new)
<ajmitch> you looking for release note stuff?
<Burgundavia> apache 2.2 is worth talking about
<Burgundavia> yep
<maswan> oprofile and systemtap!
<ajmitch> maswan: aha, thanks :)
<Burgundavia> systemtap?
#ubuntu-server 2008-03-31
<owh> Hiya. I'm installing a samba gutsy-server and the WinXP clients cannot connect as guests. I've manually added the group mappings according to: http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/groupmapping.html#set-group-map - using local unix groups, but I still cannot connect. Ideas?
<owh> BTW, browsing on a gutsy workstation to the server works just fine.
<sommer> owh: what's up
<owh> Hiya sommer, pulling out my hair.
<sommer> heh, I usually go to the barber
<sommer> owh: for the samba question did you set a guest password?
<owh> I don't even know where to start with this samba thing.
<owh> sommer: What do you mean?
<sommer> your windows users do they have accounts on your ubuntu box?
<sommer> or a guest user with a password?
<sommer> also what is your security option set to?
<owh> Security is set to user because at some point in the not too distant future there will be actual user accounts, rather than one single guest account.
<owh> I'm just wondering something.
<mralphabet> does it prompt you for user / pass when you browse to it from the windows machines?
<owh> There needs to be a mapping between a unix account and a local account as specified in smbusers right?
<owh> mralphabet: Yes.
<sommer> smbpasswd -a username
<mralphabet> and what user account do you type in?
<owh> mralphabet: Well, initially nothing, then I tried "guest".
<owh> smbusers contains this line: samba = guest pcguest smbguest
<owh> Meaning, the guest account is mapped to the Ubuntu "samba" account.
<owh> This account exists.
<owh> I've locked the password with passwd -l samba, I'm guessing that I should not have :)
 * mralphabet nods
<owh> So, how do I create a guest account that does not allow free access into the Ubuntu box?
<owh> Do I set the shell to false?
<sommer> owh: that's what I do
<owh> And if that's how its supposed to work, how do I set the guest (samba Ubuntu user) to nothing?
<owh> Ah, passwd -d :)
 * owh crosses fingers and wanders off to a WinXP box :)
<owh> sommer: While I'm waiting for a reboot, is there any reason that the server guide uses /etc/init.d/foo stop as examples, rather than invoke-rc.d foo stop
<foo> owh: you have an init script named after me? Aw, you're so sweet
<owh> foo: Heh.
<foo> :
<foo> )
<owh> You'll keep :)
 * owh starts contemplating tomorrow.
<sommer> owh: not really sure, from my experience /etc/init.d/foo stop is more common :)
<owh> sommer: Fair enough.
<owh> On the samba issue, no change.
<owh> So, there is a samba local Ubuntu user, it is unlocked, login shell set to /bin/false, passwd deleted. Anything else?
<owh> Is it possible that the WinXP machines are connecting with a username that I'm not expecting, or am I looking down the wrong path?
<sommer> owh: are there any errors in /var/log/syslog or /var/log/auth ?
<sommer> or any of the samba logs
<JanC> owh: AFAIK invoke-rc.d is only intended to be used by package scripts   ;)
<owh> JanC: Ah. I just find it less typing: inv[tab] foo
<JanC> well, I use 'wajig'  ;)
<owh> sommer: Only the ones about creating users and administrators.
<sommer> have you tried setting the samba password with smbpasswd ?
<sommer> er the user's samba password that is
<owh> sommer: Hmm, for which username?
<owh> sommer: The Ubuntu samba username?
<sommer> whichever one you're using guest... I think
<sommer> the ubuntu samba user is for the service I believe
<owh> sommer: You might be on to something there.
 * owh goes to check.
<owh> BRB
<owh> Nop
<owh> sommer: The ubuntu "samba" user did not exist. I created it.
<sommer> ah, did you then do a smbpasswd -a samba, then try connecting with that user?
<owh> Now I've set the passwd for the user "samba" to null in both Ubuntu and Samba.
<sommer> the "-a" adds the user to the samba user database
<sommer> maybe I'm confused on what you're trying to do?
<owh> sommer: Should I add a "guest" user?
<owh> Let's take a step back for a moment.
<owh> When I browse in WinXP to a share, which "anonymous" username does it use?
<sommer> I think I'm just confusing you as well, heh
<andguent> sorry to jump in, but this may be of interest smb.conf: guest account = nobody
<owh> andguent: Yup. I've got that set to guest account = samba
<sommer> owh: and you have a system user named samba?
<owh> Yes
<sommer> and a samba user named samba?
<owh> Yes
<sommer> and the samba user's Samba password is valid?
<sommer> as in a valid smbpasswd samba
<owh> sommer: It's set to null with smbpasswd -n samba
<sommer> owh: ya, I think it's going to need an actual password
<owh> There is also this mapping in smbusers: samba = guest pcguest smbguest
<owh> sommer: Since when does a guest account need a password?
<sommer> what is the security = ?
<owh> users
<sommer> I think for the guest it needs to be security = share
<owh> Hmm, I'm just noticing that this is commented out. I wonder what the default is.
<sommer> owh: users
<sommer> but if your goal is to not have a prompt then you'll need to set it to share
<owh> So, can I create a passwordless guest account in a "user" environment? Ultimately we'll need to move to that.
<sommer> owh: that I'm not 100% sure of, but I don't think so
<owh> We're working on migrating them all to a SOE.
<sommer> when you want to migrate you'll just need to create the users and set the security to users
<owh> sommer: So, if I login as "guest" on the WinXP box with no password, then it should work?
<owh> That is, the mapping in smbusers maps guest to samba which then connects, right?
<sommer> owh: that I'm not sure of, but it's worth a try
<owh> It doesn't work.
<sommer> did you try security = share?
<owh> sommer: No because then the users that do have an account will have issues.
<sommer> owh: ah, true
<owh> sommer: This is a setup that has been borked by many previous admins. I'm trying to start from scratch.
<sommer> so you want some shares to prompt for a password and others to not?
<owh> sommer: Yes.
<owh> sommer: But it's simpler than that.
<owh> sommer: The only ones that prompt are home directories.
<owh> Let me ask a different question.
<sommer> then I think there are options for the share definitions that can enable that
<owh> When I browse the network under Ubuntu Workstation and connect to the server I am not prompted for a password.
<sommer> is that on the server itself?
<owh> No, from my laptop.
<owh> So, there is a guest mechanism working.
<owh> And it shows me connecting as the "samba" user in the log for that machine.
<owh> Does WinXP have a different "magic" guest username?
<andguent> i can stop butting in, but I like this type of share setup: [Shared]; path =/opt/samba/shared; writable = yes; guest ok = yes; create mask = 777; directory mask = 777; force user = nobody
<sommer> that sounds reasonable to me
<sommer> what andguent said anyway
<owh> andguent: Feel free to continue to butt in :)
<andguent> that of course, would give wide open perms to that share, and make any new file saved there have wide open security, only recommended for non confidential info
<sommer> andguent: definitely feel free to, I'm sort of running out of ideas :-)
<owh> Hmm, I wonder if the "force user" is required.
<andguent> maybe not, but it will make things easier for authentication
<owh> Lemmie try that for a mo.
<andguent> if you want an area for important secured info, make another share with different perms
<andguent> recommended for that setup: valid users = larry, moe, curly
<andguent> or: valid users = @threestoges
<owh> Yeah, that's the next step, but today we just want to recreate the environment that the company had on Friday :)
<andguent> yup yup
<owh> So, the force user=samba didn't work.
 * owh checks logs
<owh> No change.
<andguent> i assume you already have guest ok = yes
<andguent> on that share
 * owh checks to make sure.
<owh> [Accounts]
<owh>         path = /home/samba/Accounts
<owh>         writeable = yes
<owh>         guest ok = yes
<owh>         force user = samba
<owh> That's all there is.
<andguent> what are the permissions on that directory? ls -lah /home/samba/Accounts
<owh> Owned and operated by the user samba.
<owh> Hmm, I wonder if it has to do with WINS support?
<andguent> can you dump the output of these commands to http://paste.ubuntu-nl.org/ ---- ls -lah /home/samba/Accounts; cat /etc/samba/smb.conf; cat /etc/passwd; cat /etc/group; pdbedit -L
<owh> andguent: I can give you the Accounts directory itself, not its content.
<andguent> thats fine
<andguent> cut out what you want
<owh> andguent: http://ubuntu.pastebin.com/m135ac5d4
<owh> Hello: https://bugs.launchpad.net/ubuntu/+source/samba/+bug/32067
<andguent> i don't think it's a big deal, but i'm grasping at straws too, it all looks fine..... can you change /bin/false to /bin/bash temporarily?
<ubotu> Launchpad bug 32067 in samba "public Samba SMB shares cannot be accessed anonymously from Windows XP, a password prompt appears" [High,Fix released]
<owh> map to guest = Bad User
<owh> That seems to be the fix for some reason.
<andguent> got it working then?
<owh> Not yet, still reading, but that looks like the fix from that bug report. I'm going to reload samba shortly.
 * owh crosses fingers
<owh> Well, under Nautilus I can no longer access the shares.
 * owh tries WinXP clients next.
<owh> Nope, "the user does not exist"
<owh> Hmm
<owh> Let me try map to guest = samba
<owh> No, but my logs are filling with: sid_to_uid for samba (S-1-5-21-2317322771-484201766-2685857054-501) failed
<owh> # net groupmap list
<owh> Domain Admins (S-1-5-21-2317322771-484201766-2685857054-512) -> administrator
<owh> Domain Guests (S-1-5-21-2317322771-484201766-2685857054-514) -> samba
<owh> Domain Users (S-1-5-21-2317322771-484201766-2685857054-513) -> staff
<owh> Crap, why is this so hard?
<andguent> still stewing it over..... you aren't using smbldap are you?
<owh> No
<owh> tdbsam
<andguent> how many times have you tried deleting the samba user in tdb via smbpasswd -x and then remaking?
<owh> Never, but I'm just thinking that the force user is when it went to pot. I'm going to remove that and leave the Bad User mapping.
<andguent> go for it
<owh> Nautilus is working again. XP is next.
<owh> BRB
<andguent> i dont have any XP computers to test with right now, otherwise I would try it from here
<owh> Whoot!
<owh> We have lift off.
<owh> Ok, so guest mapping needs to map to a bad user and force user should not be there.
<owh> When I'm back in the office I'll see what hardy does by default, but soren was already involved in the bug I mentioned, so I'm hoping this has been addressed :)
<owh> sommer: andguent: mralphabet: Thank you all for your assistance.
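Summarising the working configuration from this exchange as an smb.conf fragment (share path and account names are the ones from the chat; the decisive change was adding "map to guest" and dropping "force user"):

```
[global]
   security = user
   # failed user logins are mapped to the guest account:
   map to guest = Bad User
   # an existing system user with shell set to /bin/false:
   guest account = samba

[Accounts]
   path = /home/samba/Accounts
   writeable = yes
   guest ok = yes
   # note: no "force user" line -- adding one broke guest access here
```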
 * owh goes to test printing :)
<andguent> np, i just wish i knew how i did it before, its definitely possible
<owh> It would be great if my VPN connection worked from the office, then I could test this before coming on-site :)
<owh> andguent: Yeah, it's all working now as expected. I'll contact the authenticated users and make sure that they work as expected.
<andguent> one difference, I usually do not have low security in a domain master setup
<owh> I'm not yet sure that I need the groupmap.
<owh> andguent: And as soon as we have moved the environment the way it needs to be then that will change and the guest account will give you access to the Transfer volume :)
<owh> Thanks again.
<andguent> no prob, good luck, just know that it is possible :)
<owh> Yeah
<owh> Should I delete the group mappings?
<andguent> it shouldn't matter tooo much, if you know how to recreate the mappings, sometimes its nice to reset settings :)
<owh> Well, my first thought was that the installer should have taken care of them and because it didn't they were not required. The samba manual talks about them previously being created by default, but that they now need to be manually created, so I'm confused as to what really should happen.
<owh> I don't even know if they're needed for actual operation.
<andguent> I'm assuming you want to use a domain setup in the long term?
<owh> Yes
<owh> That makes me think, remove them now while I remember, then when we do this for real, create them properly.
<owh> I mapped them to administrator, staff, samba arbitrarily.
<owh> Though I think if they're required that it would be good if the installer did this.
<andguent> I have a similar setup to what you currently are going for in the short term, no domain master, and net groupmap list is empty, just as an FYI
<andguent> i used to have access to 30-40 different servers, so many samples to work from, i miss them greatly now :)
<owh> I think I'll delete them for now.
<owh> VMware is your friend :)
<andguent> definitely
<andguent> i just don't have those sample configs in front of me anymore, harder to remember everything off the top of my head
<owh> Hey, don't feel too bad, your probing caused me to seek further and find a solution.
<owh> Now I have to figure out why LAMP out of the box is serving php pages as text.
<andguent> that is odd
<owh> Yup, I'll say.
<owh> The mime-type mappings seem to be there.
<sommer> owh: I'd double check that the php module is enabled and restart apache
<owh> sommer: Nope, but disabling it and re-enabling it and force-reloading in between fixed it. Mine is not to ask :(
<sommer> weird...
<owh> sommer: That sums up the last 48 hours pretty well :(
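For future googlers, the module-cycling dance owh describes (php5 on that era of ubuntu; adjust the module name to your install):

```shell
# if apache serves .php files as plain text even though the module
# package is installed, disable and re-enable it with a force-reload
# in between:
sudo a2dismod php5
sudo /etc/init.d/apache2 force-reload
sudo a2enmod php5
sudo /etc/init.d/apache2 force-reload
```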
<owh> 300Gb of data with only 40Gb of real stuff. The rest left lying around by previous administrators just for fits and giggles.
<andguent> heh
<owh> And of course all of it was being backed up each day :(
<owh> At least the current disks will handle the load if the organisation ever grows that big :)
<andguent> ha, yea, time for a good nas/offline storage system
<andguent> shove the stuff out of the way and get on with things
 * owh is working on it :)
<owh> Isn't the ctrl-alt-del detection in /etc/inittab?
<andguent> working on keeping your windows admins from rebooting the server at the console? :)
<owh> Yup
<owh> I don't seem to have an inittab.
<owh> WTF is going on?
<owh> Hmm. /etc/event.*
<andguent> yea, /etc/event.d/control-alt-delete
<owh> Yeah, I was just in the process of adding that as a comment here for future googlers :)
<andguent> http://ubuntuforums.org/showthread.php?p=2181001
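For reference, the gutsy-era upstart job looks roughly like this; commenting out (or replacing) the exec line disables console reboots. The logger replacement is an illustration, not the stock file:

```
# /etc/event.d/control-alt-delete (upstart on gutsy)
start on control-alt-delete
# stock action, commented out to disable console reboots:
#exec /sbin/shutdown -r now "Control-Alt-Delete pressed"
# optional: log the attempt instead
exec /usr/bin/logger "ctrl-alt-del pressed; reboot disabled"
```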
<owh> Well, that just about concludes my pain for this location. Thanks andguent :)
<andguent> no prob, good luck
<owh> I'm making a generic user backup script. Anyone see any issues with this: http://ubuntu.pastebin.com/d75014df5
<owh> The idea is that you symlink to it with each username from cron.daily.
 * owh realises that the script is pretty trivial. I'm using the same mechanism to backup drives.
<owh> s/drives/mount points/
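A minimal sketch of that symlink-per-user idea: the script derives the username from the name it was invoked under, so one file symlinked several times from /etc/cron.daily covers every user. Here it backs up a scratch directory instead of /home/$user so it is safe to run anywhere; all paths and the naming scheme are illustrative only.

```shell
#!/bin/sh
# One backup script, many symlinks: a link named "alice.sh" backs up
# alice's data, "bob.sh" backs up bob's, etc.
set -e
user=$(basename "$0" .sh)     # username comes from the invoked name
src=$(mktemp -d)              # stand-in for /home/$user in this demo
dest=$(mktemp -d)             # stand-in for the backup destination
echo "sample data" > "$src/file.txt"
# date-stamped tarball, one per user per day:
tar -czf "$dest/$user-$(date +%Y%m%d).tar.gz" -C "$src" .
echo "backed up $src to $dest"
```

The same mechanism works for mount points: symlink once per mount, and point $src at the mount instead of a home directory.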
<rhineheart_m> hello. may I ask how to remove webmin?
<owh> How did you install it?
<rhineheart_m> mmm.. manual I guess... webmin is not in the repo right?
<ScottK> rhineheart_m: It was terminated with extreme prejudice.
<rhineheart_m> so, I believe I installed it manually..
<ScottK> rhineheart_m: Didn't you ask this same question recently?
<ScottK> The webmin web site (IIRC) provides .debs for it.
<rhineheart_m> nope. I asked but nobody answered as far as I could remember
<owh> rhineheart_m: So, yes you did ask :)
<rhineheart_m> :). Okay. Nope there I meant I asked but I didn't receive a good answer
<owh> rhineheart_m: Again, how did you install it?
<rhineheart_m> owh, a minute.. m trying to figure out how did I manage to install it
<owh> Basically, depending on how you installed it, determines how you remove it.
<rhineheart_m> I guess I installed it via wget.. and sudo sh setup.sh
<owh> rhineheart_m: Then you'll need to look for the uninstall script included in the package.
<rhineheart_m> uninstall script? where to find it owh ?
<owh> rhineheart_m: I don't understand the question. If you installed an application, then it would come with an uninstall script. If not, it should come with an install.log; if not, then you may have success re-installing it and looking for the logs. But as I've never installed things that are not packages - which you should also never do - that's all I have.
<rhineheart_m> owh, I found this one: uninstall.sh in /etc/webmin directory.. what do you think?
<owh> rhineheart_m: I'd look at it before running it.
<rhineheart_m> uhuh... are you trying to tell me to read the script first before running it?
<owh> Yes
<owh> rhineheart_m: Unless of course you have a full system backup in which case it won't matter if the uninstall script hoses your system.
<rhineheart_m> uhuh.. this it the content... http://www.pastebin.org/26398
<rhineheart_m> *is
<rhineheart_m> I just tried to execute: /etc/webmin/stop and it returned stopping...well, can you suggest heading/?
<owh> rhineheart_m: I do not know what this means: "well, can you suggest heading/?"
<rhineheart_m> owh, I mean.. will I go with these steps: cd ~ ; cd /etc/webmin ; sudo sh uninstall.sh ?
<owh> rhineheart_m: I cannot answer that. There is a serious question about liability here. If I were to say yes and you hose your system, there is an implied idea that I'm then going to help you recover your machine. That is not the case. All I can say is: "Did you backup your system first?"
<owh> rhineheart_m: I should point out that:
<owh> rhineheart_m: This is only the case because you're not using a package that comes from Ubuntu. If it did, then the liability issue becomes even more muddied :)
<owh> rhineheart_m: Personally, I'd investigate what the .pl script does and see if you think it's playing in places that it should not.
<owh> rhineheart_m: As this is a server channel, I need to assume that you're doing this on a server.
<owh> rhineheart_m: So, I'm saying - I don't know. It shouldn't hurt, but I'm not going to tell you that it will not hurt.
<owh> rhineheart_m: Do you understand what I'm saying?
<rhineheart_m> uhuh.. okay.. this is the case actually.. I just don't know if you've heard this... when I open the port for webmin and access and use it.. I experience an intermittent connection and I don't know for what reason. I've been observing the performance for the past 3 days and blocked that port.. and I didn't experience any more disruption of the internet connection..
<rhineheart_m> owh, yeah.. the server is live and two sites are hosted there.. :(
<owh> rhineheart_m: This is the same computer we spoke about a few days ago with the ADSL issue?
<rhineheart_m> owh, yeah.. this is it!
<owh> rhineheart_m: Then you need to do some serious troubleshooting. It is possible that someone has been using webmin to hack your server, but I cannot answer that. You need to determine what is causing the issue. Do the disconnects happen if the server is not connected etc. As I see it, you're just flailing in the dark at the moment.
<rhineheart_m> and based on my investigation.. webmin is causing the problem.. mmm.. or it might be.. something wrong with the network that can't handle https requests. It can handle them for a few minutes but later it can't
<owh> rhineheart_m: Or some script kiddie is running something that connects to webmin and reboots your router - who knows. Unless you investigate you'll not know the answer.
<rhineheart_m> I have logs.. and only me (IP) is accessing the port for webmin
<owh> rhineheart_m: What about network sniffing logs?
<rhineheart_m> network sniffing logs? lemme ask you. is iptables not reliable for logging incoming requests?
<owh> rhineheart_m: Before all that I'd eliminate simple things. You told us on a previous occasion that the Windows machines are also disconnected. This is more likely a router issue or an ADSL issue.
<owh> rhineheart_m: Making things more complicated by looking for things that no-one has ever heard of is unlikely to solve your problem.
<owh> rhineheart_m: I'm not saying the cause is not webmin, just that it is very far down the list of possible causes of intermittent network connectivity for a LAN.
<owh> rhineheart_m: Fundamentally you need to eliminate each failure point, one at a time.
<owh> rhineheart_m: So, connect a single Windows computer to the ADSL modem. Does it continue to work?
<owh> rhineheart_m: Then connect that same machine to the router and the router to the modem, does it continue to work?
<rhineheart_m> but for the past almost a month.. tell me why I've been experiencing intermittent connection when webmin is being used? I just used it to transfer/edit some files using its file manager.
<rhineheart_m> presently.. I've been doing the same transfer of files using winscp and I didn't experience such..
<owh> rhineheart_m: Well, you also said that the Windows computers on the same network fail intermittently. At the same time, or at a different time? Is their failure related to your failure, or are there more than one issue?
<rhineheart_m> for the past 3 days.. the internet connection never suffered the same problem, which was actually not usual for the past few weeks
<owh> rhineheart_m: You cannot just wave your hands around and point at the last thing you touched. You'll never discover the problem.
<owh> rhineheart_m: I understand that you are frustrated, but I suggest you read through the steps I've listed and proceed one step at a time.
<rhineheart_m> owh, this is how I see it... the ADSL modem and the ISP lose their sync (PPPoE) when I'm using https or maybe webmin..
<owh> rhineheart_m: That has very likely nothing to do with https. Losing sync is a layer 2 issue, not able to be influenced by what traffic occurs.
<owh> rhineheart_m: Most modems will show you what link quality there is. Perhaps you need to look at that first.
<rhineheart_m> BTW, I'm still waiting for the replacement of the wrt54g since I believe its https is faulty. Sorry to tell you.. I haven't told you before.. I just forgot
<owh> rhineheart_m: I also don't know how the ADSL filters are connected, nor other devices on the telephone network.
<rhineheart_m> it only defaults to port 143
<owh> rhineheart_m: The problem you are solving does in my opinion have nothing to do with TCP/UDP/HTTP/HTTPS/IP or any "high-level" information. This is a physical link problem from what you have told me.
<owh> rhineheart_m: The most often diagnosed problem is that the ADSL filter is incorrectly fitted.
<PanzerMKZ_> ok so is there any downside to making a dns entry for us.archive.ubuntu.com if I have a local apt-get mirror on my network for the different versions of ubuntu I normally deal with?
<rhineheart_m> owh, I had it checked already by our ISP's techs. In fact, they changed the cabling system (I mean.. the analog line) already
<owh> rhineheart_m: The second most often diagnosed problem is that there are poor internal leads.
<rhineheart_m> So I guess the problem is not in the physical set-up... but something else.
<owh> rhineheart_m: What you are describing does not ring true.
<owh> rhineheart_m: Losing sync is an ADSL level issue, not an IP level issue.
<rhineheart_m> if it is in the physical connection.. then why does it lose connectivity when I am connecting to the box and doing some revisions to the live site?
<owh> rhineheart_m: I could dream up a scenario where your machine has been hacked and each https connect causes a script to reboot your modem, but I don't buy it.
<owh> rhineheart_m: Perhaps because there is more traffic moving in the opposite direction.
<rhineheart_m> I tried chkrootkit output I didn't see any problem
<owh> rhineheart_m: Again, you're looking at higher level issues.
<owh> rhineheart_m: Give me a moment while I chew some lunch.
<rhineheart_m> or again, you might find that something on the windows machines causes the problem (like rebooting the adsl modem)
<rhineheart_m> chew some lunch? where are you from? it is lunch time here too
<owh> AU
<rhineheart_m> okay.. I believe same time here.. :)
<owh> Right. Food swallowed.
<owh> Here beginneth the lesson.
<owh> Information is shared across a network using technology. At the lowest level we have the electrons, then above that we have several layers until we get to the top where we send email and IRC around the net. With me so far?
 * owh won't continue until you've understood what I wrote rhineheart_m
<owh> I'm not trying to be an ass, I'm trying to explain the problem you're seeing and describing.
<rhineheart_m> owh, thank you for that info. I'm into it..
<owh> rhineheart_m: So, you understood that last paragraph?
<rhineheart_m> yeah.. that's clear
<rhineheart_m> troubleshooting should start from the very first level..
<rhineheart_m> and you are telling me that it should start with the most basic possible causes.. not jumping on the higher ones...
<owh> rhineheart_m: So, if the electrons are not flowing, then no matter how much diagnosing you do at the email level, is the address right, was the mail server configured properly, did I get the password right are all irrelevant because there are no electrons.
<owh> rhineheart_m: Yes. you start at level 1 and work your way up.
<owh> rhineheart_m: So, in your case, you have electrons flowing. But then it breaks for a bit sometimes.
<rhineheart_m> plainly.. that's going to be more on physical connection FIRST.. but I guess.. I exerted most of my efforts on that very side already..
<owh> rhineheart_m: So, the first thing to check at that level is what exactly is happening.
<owh> rhineheart_m: Well not enough in my opinion. The fact that you are losing sync cannot reasonably be explained by rogue software that responds to https packets. It's possible - everything is possible - but it's unlikely.
<rhineheart_m> mmm.. not unless that's the highest level a nurse could do
<owh> rhineheart_m: It would be a little more likely if your modem and your router were the same device. But they are not.
<owh> rhineheart_m: Why is the PPPoE client running on the router and not on the modem?
<rhineheart_m> yeah.. they are not.. the modem has its routing capabilities.. why should I not use it instead....what do you think?
<rhineheart_m> what are you trying to say? the PPPoE is on the ADSL modem.. the router just does the dialing.. isn't that a better set-up?
<owh> rhineheart_m: This is really only a red-herring. Losing a PPPoE connection is a higher level activity than losing ADSL sync. If the sync isn't working then the PPPoE won't, but not vice versa.
 * owh has seen too many ISPs with no clue and has little faith in their ability to diagnose sometimes very obvious problems.
<owh> rhineheart_m: Is there a central ADSL filter in the building?
<owh> rhineheart_m: Does the ADSL line share any facilities with *ANY* other devices, like faxes, eftpos machines, phones, etc.?
<rhineheart_m> owh, how I wish I could talk with you longer... I still have duty in the hospital minutes from now..  see you later.
<rhineheart_m> owh, well for that.. I have a fax machine connected to the same line
<owh> rhineheart_m: The ADSL filter needs to have the fax on the filtered outlet and the modem on the unfiltered outlet.
<rhineheart_m> Yeah.. I have that.. microfilter thing
<owh> rhineheart_m: How many outlets does it have?
<rhineheart_m> and as far as I know.. it is set-up correctly
<owh> rhineheart_m: That was not my question.
 * owh assumes everyone is an idiot and lies until proven otherwise :)
<rhineheart_m> 3. one from the line, another for the modem, and the last one for the phone
<owh> rhineheart_m: Have you personally looked at the filter and seen how it is connected?
<rhineheart_m> yeah. I did it..
<owh> rhineheart_m: Have you removed the filter and the fax and only connected the modem directly to the wall?
<rhineheart_m> mmm.. not yet.. I know where you are heading with this.. :)
<owh> rhineheart_m: Also, is that line a shared line, or does it come from the street straight to your wall socket?
<rhineheart_m> nope.. a dedicated line. since it is a business line
<owh> rhineheart_m: One step at a time you need to eliminate the assumptions you have made. Why? Because it's not working.
<owh> rhineheart_m: What about the actual cabling in the building?
<owh> rhineheart_m: Also, while an ADSL filter is simple, it can fail.
<rhineheart_m> owh, as I said earlier.. it has been changed already
<owh> rhineheart_m: That doesn't mean that it is not faulty.
<owh> rhineheart_m: You can test that by removing it and the fax and connecting the modem directly.
<rhineheart_m> owh, I better leave now.. or else I will be late in my duty.. Yeah.. I will do that. thanks anyway...
<owh> rhineheart_m: Other causes are incorrect, long, or faulty phone wires. A borked modem. Etc.
<owh> rhineheart_m: Have fun.
<rhineheart_m> but the ISP had changed the modem already as requested..
<rhineheart_m> :)
 * owh assumes everyone is an idiot and lies until proven otherwise :)
<rhineheart_m> hahahaha.. nice.. :)
<_ruben> mornin
 * owh looks over shoulder. _ruben, where, I don't see any mornin?
<owh> :)
<_ruben> stupid DST makes it feel like night overhere anyways :p
<owh> _ruben: We've just come out of that 34 hours ago :) Now the #u-s meeting is at 5am :)
<hari> i'm installing a game known as urban terror. it's in a file.tar.gz. where do i have to extract it in the ubuntu gutsy file system so all users can use it.. regards.
<_ruben> hari: i'd go for /usr/local/urbanterror/
<troofy> is it a good idea to use a server for desktop use also. (linux)?
<hari> @_ruben : the owner will be root:root, right? can it still be accessed by other users..? or do i just extract it as-is?
<hari> sorry @_ruben..
<hari> so just extract it to /usr/local/urbanterror/
<hari> then all users should be able to use it, is that right?
<_ruben> hari: i wouldnt know what it'd take for that game to be playable .. at least it should be accessible to all local users
<_ruben> troofy: the server kernel isnt tuned for interactive use, but for server use .. so it might not be optimal .. what would be the idea of using server instead of desktop?
<hari> thx.. i wil try
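A sketch of the system-wide install suggested above (the tarball name is a placeholder; adjust it to the actual download):

```shell
# Extract to /usr/local so the game sits outside any one user's home
sudo mkdir -p /usr/local/urbanterror
sudo tar -xzf UrbanTerror.tar.gz -C /usr/local/urbanterror --strip-components=1

# root:root ownership is fine as long as everyone can read and execute:
# a+rX adds read everywhere and execute only on dirs and executables
sudo chmod -R a+rX /usr/local/urbanterror
```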
<troofy> _ruben i have one pc. i have to make it server and use it as desktop
<_ruben> i'd go for the version which you'd use most .. desktop if you use it as desktop primarily and also as server .. or server if its server primarily and also used as desktop occasionally
<troofy> hm
<tmadsen> Hi, when i plug in NIC number three in my linux server, it should just start working right away yes?
<kraut> moin
<tmadsen> moin
<soren> tmadsen: Yes.
<_ruben> unless the hardware isnt supported (too new, too old, etc)
<tmadsen> the hardware is not, is there a nice command that fetches drivers and whatnot?
<soren> The hardware is not what?
<tmadsen> supported
<tmadsen> why is alsa part of the server?
<_ruben> that was part of a debate not too long ago .. had something to do with hardware detection i think .. didnt follow the complete discussion .. but i think its planned for removal in future version(s)
<soren> tmadsen: Er... Well, if the hardware isn't supported then, no, sticking it in your server will not make it work right away..
<soren> _ruben: Why remove alsa from servers?
<_ruben> soren: it was a discussion in this very channel a while ago .. having sound stuff on a server seems not-logical
<soren> telephony servers?
<_ruben> soren: well .. it wasnt about removing alsa completely .. but having alsa-base installed by default seems a bit odd .. for telephony servers (or any other service depending on alsa) all packages could be pulled in when needed i think ?
<soren> Oh, the userspace stuff? Right, yes, that rings a bell.
<soren> I think we've already removed that now for hardy.
<_ruben> hmm .. wonder if there's a cmdline tool to "analyze" vob files .. wondering if my downloaded dvds do have the subtitles i wanted :p
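One command-line way to check a VOB for subtitle streams is mplayer's identify mode (a sketch; the filename is a placeholder and the exact ID_* fields vary by mplayer version):

```shell
# Decode nothing (-frames 0), just dump stream metadata; subtitle
# tracks appear as ID_SUBTITLE_ID / ID_SID_*_LANG lines
mplayer -vo null -ao null -frames 0 -identify movie.vob 2>/dev/null \
    | grep -i '^ID_S'
```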
<Tootbatot_> i have a dsl connection. but my friends share it with me. i think giving them a line from the modem is not a good idea, as i cant limit their speed. i run ircd + host my web. should i use a third computer as router and server, or use my one pc as router (to limit their bandwidth) + as server?
<_ruben> Tootbatot_: i'd go for a dedicated machine .. but thats just me
<Tootbatot_> a machine router?
<_ruben> hmmm .. wonder if canonical will be providing vmware server 1.0.5 on their partner repo
<_ruben> Tootbatot_: hardware router .. linux based router .. depends on the avail hardware and all
<Tootbatot_> what should be alternative..
<soren> _ruben: It's not really up to us. It's up to VMWare. You should ask them.
<Tootbatot_> _ruben what should be alternative..
<_ruben> soren: ah, the package is provided by vmware ?
<_ruben> Tootbatot_: not sure what you mean?
<Tootbatot_> _ruben if i dont want to purchase a new pc to be as router
<Deeps> Tootbatot_: how do you feel about purchasing some new cisco kit?
<Tootbatot_> Deeps how much will it cost . like an old p2?
<Deeps> depends on what you buy
<Tootbatot_> i want to make up with one pc
<Tootbatot_> that does it all. i have p4 1.8g
<Tootbatot_> 1g ram
<soren> _ruben: I'm not sure who exactly makes the package, but its availability is up to VMWare.
<_ruben> soren: ic
<Tootbatot_> _ruben if i dont want to purchase a new pc to be as router
<_ruben> Tootbatot_: well .. you *could* use your computer as desktop + server + router .. nothing wrong with that when you're low on hardware .. though usually old pcs (like p2/p3) can be bought cheap or even picked up for free
<Tootbatot_> _ruben for free?
<Tootbatot_> _ruben whats wrong with making one pc do all that?
<Deeps> you're more or less ok if you use 1 machine as a server + router in a home environment
<Deeps> just remember that if you do anything on it that requires a reboot, everyone loses their net connection
<Tootbatot_> Deeps what things require reboots. i use it as desktop too.
<Deeps> using it as a desktop as well is even less than ideal, as due to the way people tend to use desktop computers, the machine would be less stable than would be appreciated by everyone else trying to use the net
<Deeps> i dont know, i dont use linux on the desktop
<Tootbatot_> and what are the chances it hangs and stops responding? i'll have to reboot.
<Tootbatot_> ok
<Tootbatot_> can i limit total bandwidth/s combined for 2 ips? by firewall. eg ip1 + ip2 should not exceed 20kb/s ?
<kesshiiii> hi, when i connect to my samba server it is slow to give the auth window, after auth everything is fast, i think it has to do with the server being in another subnet
<kesshiiii> ssh was also slow to ask password but when i added "UseDNS no" to the config it was fast
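The sshd fix mentioned above is a one-line config change; the samba auth delay across subnets is often the same reverse-DNS timeout, though that is an assumption here, not something verified for this setup:

```shell
# Stop sshd doing a reverse DNS lookup on every connecting client,
# then reload the daemon to pick up the change
echo 'UseDNS no' | sudo tee -a /etc/ssh/sshd_config
sudo /etc/init.d/ssh reload
```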
<_ruben> Tootbatot_: http://lartc.org
<Deeps> Tootbatot_: http://tldp.org/HOWTO/Traffic-Control-HOWTO/index.html
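Following the HOWTOs linked above, a combined 20 kB/s (~160 kbit/s) cap for two IPs can be sketched with HTB by steering both addresses into one shared class (interface name and addresses are placeholders; run on the LAN-facing interface to shape their downloads):

```shell
DEV=eth0
# Root HTB qdisc; traffic matching no filter goes to class 1:30,
# which we deliberately leave undefined so it passes unshaped
tc qdisc add dev $DEV root handle 1: htb default 30
# One class capped at 160kbit = 20 kB/s, shared by both IPs together
tc class add dev $DEV parent 1: classid 1:10 htb rate 160kbit ceil 160kbit
# Steer traffic destined for either IP into the shared class
tc filter add dev $DEV parent 1: protocol ip u32 \
    match ip dst 192.168.0.10/32 flowid 1:10
tc filter add dev $DEV parent 1: protocol ip u32 \
    match ip dst 192.168.0.11/32 flowid 1:10
```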
<Tootbatot_> thanks
<_ruben> bugger .. authentication is messed up for canonical's vmware server 1.0.4 package .. lets investigate
<_ruben> hmm .. looks related to my 64bits host os and the 32bits binaries of vmware
<soren> Yeah, it's a horrible mess.
<soren> I looked into it at some point, but I think it stranded somewhere.
<soren> Oh, right, it's a pam issue.
<_ruben> found a fix in lp
<_ruben> gonna test it
<soren> Er... bug nr.?
<RainCT_school> hi
<_ruben> #112937
<RainCT_school> how can I activate chroot in pure-ftpd?
<_ruben> still some errors in the logs .. but it does allow me to login
 * soren grumbles
<soren> Yeah, that'll work, but seeing as you're using the pam libraries from vmware, you're SOL if there are any vulnerabilities in there.
<_ruben> yeah
<_ruben> still getting stuff like "PAM (other) illegal module type: @include" in the logs .. tho there's no @include lines in the vmware-authd file now .. ow well ..will do for now
<_ruben> but /etc/pam.d/other does .. doh
<_ruben> lets download some jeos images
<soren> There. Updated the bug. That should be enough for someone with time on their hands to fix it.
 * _ruben hits F5
<_ruben> i only got pam_ldap.so in /lib32/security btw .. might be missing a package tho
<soren> _ruben: The modules are probably not there since they wouldn't work anyway.
<_ruben> hehe
<_ruben> sounds fair enough
<RainCT_school> (nevermind, found it. wow was that easy :))
<soren> So, in addition, libpam-modules should be added to ia32-libs.
<soren> Added a bugtask.
<_ruben> soren: thanks ;)
<_ruben> hmm .. unless im blind (wouldnt be the first time) the dutch archive mirror doesnt host jeos images
<soren> url?
<_ruben> nl.archive.ubuntu.com ? ;)
<soren> Ah, the *.archive.ubuntu.com don't usually hold the cdimages.
<soren> *shrug* It's on cdimage.
<soren> (but you can just use the server cd, too)
<_ruben> well .. i wanna taste the jeos experience this time ;)
<_ruben> cdimage sure aint the fastest server out there ;)
<_ruben> heh .. gutsy's jeos is 150% the size of hardy's jeos
<soren> _ruben: There are two cdimage servers. One seems to be really fast, the other not so much.
<_ruben> ic
<soren> If I'm unfortunate enough to get the slow one, I just disconnect and try again.
<Deeps> torrents are faster!
<Deeps> none of the european mirrors i tried (tried maybe 15?) would give me more than 180k/s, torrent maxed out my line
<_ruben> gutsy downloaded at 700K/s .. now hardy's coming in at 4-500K/s
<Deeps> i dont think i uploaded a single bit either
<Deeps> which was nice
<soren> I usually get 3-5 MB/s from cdimage.
<Deeps> guess it just sucks to be me
<soren> Yeah, being on the outskirts of the internet must suck :)
<soren> From home I get about 2 MB/s, usually.
<_ruben> hmm .. i should see similar speeds here then .. guess i was out of luck and hit the slow one
<fromport> ruben_ : ftp://ftp.het.net/linux-cd-images/ubuntu/jeos-8.04-beta-jeos-i386.iso
<_ruben> fromport: that one sounds fast enough .. lets see :)
<_ruben> ok .. something's wrong on my end .. even hetnet is giving me 500K only
<_ruben> ah .. 1.4M now .. but still
<fromport>  1110.72Kbyte/sec
<soren> sent 16.09K bytes  received 679.24M bytes  1.92M bytes/sec
<soren> from cdimage.ubuntu.com
<fromport> _ruben: it sounds like you have only 10 megabit ethernet ;-)
 * soren points at _ruben and laughs
<soren> I haven't had 10 Mbit ethernet this millennium. :)
<_ruben> the line is supposed to be 100mbit .. well 50mbit is what they told me, 100mbit is what i've seen
<soren> Doesn't help much if your own switch is 10 mbit :)
<_ruben> those are gbit ;)
<_ruben> hmm .. install failed .. during "Install the base system"
<fromport> i just installed the jeos image this morning on my kvm host, no problems there!
<soren> _ruben: This is in vmware server, is it?
<_ruben> yes
<soren> Interesting.
<_ruben> last step it tried was "installing device smth" .. only flashed for split sec
<soren> Anything on vt 4?
<_ruben> ah .. couldnt find package lvm2
<_ruben> thats odd
<_ruben> lets check iso integrity
<_ruben> its gutsy jeos btw
<_ruben> md5sum is ok
<soren> Oh.
<soren> Don't use the gutsy one.
<_ruben> figured i'd try both
<soren> WEll, feel free to use it, but e.g. lvm won't work.
<_ruben> booting hardy disc ;)
<_ruben> now if only those 32gigs of ram would arrive and the 2 extra 146G sas disks .. then finally it'll be real virtualization time :p
<fromport> kvm ? :-)
<_ruben> vmware esx
<_ruben> hrm .. hardy jeos disc refuses to boot it seems
<_ruben> ah .. problem between chair and keyboard
<soren> phew
 * soren wipes a bit of sweat off of his brow
<_ruben> sorry ;)
<soren> :)
<_ruben> it was a vmware gimmick i think .. not booting from cdrom by default after having booted once
<soren> Hehh. I hit the slow cdimage this time: sent 154.57K bytes  received 188.87M bytes  139.80K bytes/sec
<soren> _ruben: Oh, vmware does that, too?
<Deeps> afaik vmware never boots from cd as default, always places hd boot as higher priority-- reason it usually works 'first time' is because when you create a new vm and with it a new disk, there's nothing on the disk to boot from ;)
<Deeps> (i think)
<_ruben> soren: apparently .. hadnt run into this before ..
<ScottK> soren: Your FFe for ubuntu-vm-builder is approved.
 * nijaba hugs ScottK
<soren> ScottK: Thanks very much.
<fromport> soren: i sent an email to "ubuntu" on 8 of march telling them there is one slow download server
<fromport> Resolving cdimage.ubuntu.com... 91.189.88.34, 91.189.88.39:  27% [++++++++++++                                 ] 190,795,776 5.86K/s  ETA 2:26:32
<fromport> whereas: Connecting to cdimage.ubuntu.com|91.189.88.39|:80 28% [++++++++++++                                 ] 199,053,332 908.96K/s
<fromport> never heard anything back...
<soren> fromport: "ubuntu"?
<soren> fromport: How do you send an e-mail to "ubuntu"?
 * _ruben is tempted to put just "ubuntu" in the To: field of his email client
<fromport> i looked in general at the website where to report problems with the website: couldn't find that info so sent to the "general/universal" contact point: webmaster@ubuntu.com
<soren> _ruben: You do that... And tell them I said "hi".
<_ruben> soren: wil do ;-)
<_ruben> install completed .. running initial apt-get upgrade
<W8TAH> for a basic server with samba - how much space should i allocate for the / partition?
<_ruben> depends .. will there be other partitions as well ?
<soren> And what are you going to use it for?
<W8TAH> the swap partition and the rest of the drives will be LVM
<W8TAH> its a file server
<soren> If it's just to do authentication against, a few hundred MB is fine. If you want to put loads and loads of movies and stuff on it, you need lots, lots more.
<W8TAH> so if i give it 7 gb that should be plenty?
<W8TAH> its gonna contain photos and digital video stock files
<W8TAH> it has 80gb total
<W8TAH> in 2 40 gb drives that im combining using lvm
<_ruben> 7G for just system is more than enough
<W8TAH> ok
<W8TAH> thanks
<_ruben> 1 or 2 might even be sufficient .. but one good argue: better safe than sorry
<W8TAH> ya
<_ruben> s/good/could
<_ruben> damn phonetic typos :p
<W8TAH> the balance between conservative and safe
<W8TAH> LOL
<W8TAH> thanks
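Combining the two 40 GB drives with LVM, after carving out / and swap, might look like this (device names, partition numbers, and sizes are assumptions for illustration):

```shell
# Mark the spare partitions on each drive as LVM physical volumes
sudo pvcreate /dev/sda3 /dev/sdb1
# Pool them into one volume group
sudo vgcreate vg_data /dev/sda3 /dev/sdb1
# Create one logical volume for the samba share, leaving free space
# in the VG so the LV can be grown later
sudo lvcreate -L 60G -n lv_share vg_data
sudo mkfs.ext3 /dev/vg_data/lv_share
```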
<_ruben> soren: what's your favorite fs for storing virtual machines?
<soren> None.
<_ruben> raw disks eh? ;)
<soren> I don't mean that I have no preference, but rather that I prefer no fs.
<soren> Yes.
<_ruben> but if you had to chose one?
<soren> _ruben: xfs, probably.
<_ruben> soren: ic .. currently using reiserfs on most systems since thats the default in sles9 .. using ext3 on my ubuntu servers .. was contemplating to give xfs a whirl
<_ruben> might just give it a shot on my next reinstall
<soren> _ruben: reiserfs is probably the worst you can possibly use :)
<soren> _ruben: If a VM decides to have a reiserfs formatted fs, you're screwed.
<_ruben> soren: i think novell went with reiserfs because ext3 wasnt mature enough in their eyes .. and most my guests are windows anyways
<zul> _ruben: besides reiserfs murders files
<_ruben> zul: ow? care to elaborate?
<soren> The problem with reiserfs in this situation is that if you have a file that contains a valid reiserfs superblock on a reiserfs filesystem, an fsck of the "outer" reiserfs will find the inner one and assume it's part of the outer one. BOOM!
<zul> _ruben: bad hans reiser joke
<_ruben> not that i'll be deploying any new servers with reiserfs
<_ruben> zul: ah ;)
<_ruben> soren: read about that a while ago .. very scary
<soren> http://shinola.org/pages//posts/free-hans-t-shirts175.php
<soren> _ruben: So reiserfs is the worst possible choice for holding disk images unless you are in absolute, complete control of them.
<_ruben> soren: nice shirt ;)
<soren> That one still cracks me up. :)
<JaxxMaxx> You alive there, nawty_ ?  how'd that freeradius server go?  Mine still needs fine tuning
<ivoks> zul: hi
<zul> hey ivoks how goes it?
<ivoks> zul: time to build qt bacula console again :D
<ivoks> qwt is in main
<zul> ivoks: yep I forget are you core-dev?
<ivoks> no :/
<zul> ivoks: you could send me a debdiff :)
<ivoks> zul: deal
<juliux> hi
<juliux> does somebody use nagios2 and check_icmp?
<nxvl> mathiaz: done with Bug #182086
<mathiaz> keescook: you mentioned you uploaded virt-clone last week - where ?
<mathiaz> keescook: nm - found it
<zul> mathiaz: declare fanta? ;)
<mathiaz> zul: that's the example given in the help message
<zul> ah...
<faulkes-> when it rains, it pours
<faulkes-> 2gb of myisam tables corrupted at one site, followed by a dead switch at another which cascaded the entire ndbd cluster down
<faulkes-> ahhhh, good times
<faulkes-> I hate inheriting other peoples infrastructure ;)
<sommer> faulkes-: at least you didn't come to work this morning and find that your domain has been sending spam all weekend :-)
<sommer> that was my morning... woot
<keescook> faulkes-: hi, is there a beta copy of the survey up and running somewhere?  I wanted to poke at limesurvey
<keescook> faulkes-: nm, found it.  :)
<JaxxMaxx_> If there is a mention of a usergroup in a .conf file, or a user to run a process as,   do I need to create that user on the system before the daemon will run properly?  (this is dealing with FreeRADIUS and MySQL)
<faulkes-> keescook: sorry, yeah, great, you found it
<faulkes-> unfortunately, that means I have to kill you now
<faulkes-> you've defeated our security through obscurity part of the survey
<keescook> faulkes-: hehehe
<zobbo> somehow I've confused the mysql installation on a machine. Now trying to reinstall and it won't put back /etc/init.d/mysql* scripts
<zobbo> tried --reinstall
<zobbo> tried purge and install
<zobbo> tried "-f"
<zobbo> all with apt-get
<zobbo> any suggestions on what other incantations I could throw at it ?
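When init scripts under /etc have been deleted by hand, a plain --reinstall won't bring them back: dpkg treats missing conffiles as a deliberate admin decision. One incantation to try is forcing missing conffiles to be reinstalled (package name assumed for that Ubuntu release):

```shell
# --force-confmiss tells dpkg to reinstall conffiles that have gone missing
sudo apt-get -o Dpkg::Options::="--force-confmiss" \
    install --reinstall mysql-server-5.0
```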
<JaxxMaxx_> Anyone know where the object rlm_sql_mysql.so  would come from?  like what package?  FreeRadius server refusing to start with SQL support, complains that driver / shared object is missing
<zobbo> JaxxMaxx_: after a quick google "freeradius-mysql"
<JaxxMaxx_> I'm trying to follow the guides on their wiki, not having much luck
<JaxxMaxx_> bah, damn, that package is already installed...  bet it'll break all my conf edits if I reinstall it...
<zobbo> does it put the so file somewhere obscure ?
<JaxxMaxx_> I did a find / -name rlm_sql_mysql.so    for it as root  and it found nothing
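When find turns up nothing, it helps to ask which package would ship the file; apt-file searches archive contents rather than only installed packages (a sketch):

```shell
sudo apt-get install apt-file
sudo apt-file update
# Lists candidate packages whether installed or not -- for this module
# it should point at freeradius-mysql
apt-file search rlm_sql_mysql.so
# For files from already-installed packages, dpkg -S does the lookup locally
dpkg -S rlm_sql_mysql.so
```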
<jetole> hey guys, is there a way to bring ubuntu into non multi tasking, basically for diagnostic purposes?
<zobbo> jetole: "init 1" will take you to single *user* mode
<zobbo> JaxxMaxx_: curious
<zobbo> sounds like you're in the same boat as me
<JaxxMaxx_> oh, dammit.  checked aptitude, the freeradius-mysql module wasn't installed
<zobbo> trying to force files to be reinstalled, but something thinks they're already there
<zobbo> haha
<JaxxMaxx_> stupid wiki guides
<JaxxMaxx_> have you had any luck configuring FreeRADIUS on Ubuntu, zobbo?
<JaxxMaxx_> this is the first real replacement server I've done in Ubuntu, and it's not going smoothly at all.
<zobbo> JaxxMaxx_: I know the square root of sfa about it :)
<JaxxMaxx_> I'm trying to use Linux in  more and more roles, but it's beginning to get very difficult
<zobbo> I know quite a bit about Linux but nothing about Radius.
<zobbo> All I know about Radius is they used to make Mac screens 15 years back ;)
<JaxxMaxx_> I think my problem is more MySQL than the freeradius.   It's spitting up huge amounts of debug statements on the webbased admin page
<JaxxMaxx_> There, dropped the database and reimported the scripts, hopefully that'll calm it down
<Silvanov> is there a small install/footprint on linux similar to wampserver.com 's offering on windows?
<infinity> Silvanov: Grab the Ubuntu Server ISO, and install the "LAMP" option, and you'll have pretty much the same thing.
<Silvanov> alright, just downloaded it, will give it a try
<Silvanov> hardy server iso is all command line?
<ScottK> Yes
<andguen1> ubuntu server is all command line, hardy or otherwise
<andguen1> you can always install all of the server software on a desktop install, it's just a bit more memory used for the generic GUI interface
<good_dana> apt-get install webmin
<ScottK> good_dana: No.  Don't do that.
<andguen1> i've heard webmin isn't good for ubuntu, can't remember why
<ScottK> !webmin | good_dana
<ScottK> !ebox instead
<ScottK> Maybe some more information if the bot's alive.
<andguen1> ebox is a similar ..... yea what ScottK said
<Silvanov> !ebox
<andguen1> i think we are missing a bot
<Silvanov> ahh
<andguen1> maybe
<Nafallo> LjL: not you. the bot :-)
<Nafallo> hehe
<Silvanov> im a linux newb, installed kubuntu, but really wanted a lightweight lamp setup. So I just installed the hardy 8.04 server iso. familiar with dos, but not linux commands. I guess i need ftp access, and some sort of web administration?
<good_dana> yeah bot is missing
<good_dana> i know about ebox too
<good_dana> i've just never used it
<LjL> Nafallo: i know, i know :P
<good_dana> maybe i'll set that up next
<LjL> Nafallo: my bot is misbehaving too
<Nafallo> LjL: lovely ;-)
<Silvanov> my end goal is to migrate my blog, which im hosting on my main pc, via wampserver.com s offering, to this newly installed linux box, and maybe learn a bunch in the process :D
<good_dana> Silvanov: if you install a LAMP server you can administrate it pretty easily with ssh
<good_dana> and sftp
<Nafallo> and rsync via ssh :-)
<good_dana> exactly
<Nafallo> and bzr via ssh :-)
<Nafallo> hmm
<Nafallo> I should just stop there ;-)
<good_dana> everything over ssh!
<Nafallo> !ebox
<ubotwo> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See the plans for Hardy at https://wiki.ubuntu.com/EboxSpec
<Nafallo> yay!
<Nafallo> !webmin
<ubotwo> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system. See !ebox instead.
<Silvanov> i installed hardy server with the lamp server option, but no clue where to go from there. any guides to get me headed in the right direction?
<Nafallo> Silvanov: help.ubuntu.com has a server guide :-)
<Silvanov> sudo apt-get install ebox to start?
<Silvanov> https://help.ubuntu.com/7.10/server/C/configuration.html going through this atm :D
<Silvanov> installing ebox now. is there an ftp server installed by default or is there one you guys recommend?
<good_dana> i'd install ssh server
<yarddog> whats wrong with webmin?
<Silvanov> sounds like it doesnt handle packages the same as ubuntu
<yarddog> works with debian
<yarddog> strange
 * yarddog shrugs
<Silvanov> alright, got ebox installed and up and running. its not for setting up and configuring the lamp server though is it?
<incorrect> retarded question, but can an amd opteron run 32bit linux?
<Deeps> yep
<incorrect> because i am hosting games and it would appear that all i end up doing is running lib32 to support the game
<incorrect> i guess is a lot slower than running plain old 32bit
<incorrect> anyone?
<kirkland> incorrect: absolutely
<incorrect> that is what i thought
<incorrect> i think i should go back to 32bit linux
<kirkland> incorrect: i have found amd64 linux to be valuable to me in 2 situations....
<kirkland> incorrect: when I have >16G of memory
<kirkland> incorrect: and when I'm building beowulf clusters that do lots of heavy math
<incorrect> thank you :)
<kirkland> incorrect: np ;-)
<incorrect> its just nice to talk someone about this
 * kgoetz has an amd64 running etch which he builds 32 bit chroots in for dev work
<incorrect> wouldn't that be slower for hosting games?
<incorrect> than just native32bit
<Deeps> kirkland: surely it already benefits when you've got more than ~3.2gb of ram?
<kgoetz> probably
<incorrect> i have 8gb in each machine
<kirkland> Deeps: does it really?
<Deeps> afaik, 32bit os can only address ~3.2gb
<incorrect> 32bit linux can go upto 32gb can't it?
<Deeps> nope
<kgoetz> 3.2gb+kernel space (of 800mb)
<incorrect> hmmm
<incorrect> i was pretty sure linux could go past 4gb on 32bit platform
<kgoetz> with hacks it can
<kirkland> incorrect: it needs PAE
<Deeps> nope
<incorrect> yeah with hacks
<Deeps> 4gb limit for ram+swap combined
<Deeps> unless hacked some how i guess
<incorrect> kernel option tweaks
<incorrect> i was sure i saw an option
<Deeps> ah yes, PAE
<Deeps> i didnt know about that
 * Deeps learns with whisky
<Deeps> It's not exactly true about the "4G" limit on 32-bit operating systems. Most new processors support PAE (Physical Address Extension). However, the stock linux kernels do not support >4G total memory space without a kernel recompile. Windows supports >4gb on their Server products. There is some extra over-head in using PAE (virtual memory pages need to be remapped when accessing anything outside the current 4GB window), but you will fully utilize all your 
<kgoetz> later all. got to head out
<Deeps> from ubuntu forums
<nijaba_> Deeps: by default the -server kernel does support PAE
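Whether a given CPU advertises PAE can be checked directly; a small sketch (Linux-only, since it reads /proc/cpuinfo):

```shell
# Does this CPU advertise PAE (Physical Address Extension)?
# Linux lists CPU feature names on the "flags" lines of /proc/cpuinfo.
if grep -qw pae /proc/cpuinfo; then
    echo "pae: yes"
else
    echo "pae: no"
fi
```

A PAE-capable CPU still needs a PAE-enabled kernel (such as the -server kernel mentioned above) before it can address more than 4 GB.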
<Deeps> nice
<incorrect> i can compile a kernel
<Deeps> i dont have anything with that much ram anymore, so it's not something i've looked into extensively, thanks for the knowledge though :D
<Deeps> always good to learn something new before bed
<incorrect> i got given a load of amd64 opterons with 8gb
<incorrect> so i did the logical thing installed 64bit linux
<incorrect> however it would seem as my intention was to host games, i thought cool, valve do a amd64 build
<incorrect> yeah right do they, its ****
<incorrect> and then they drop support for it
<yarddog> Silvanov: i dont see where it admins apache
<incorrect> oh no its nearly april,  ok next question
<incorrect> should i just install 8.04 rcX ?
<Deeps> "how does my machine cope with april fools day? does it play any pranks on me?"
<Deeps> incorrect: not in a production environment
<yarddog> haha
<incorrect> i guess my production environment is my games servers
<incorrect> meh i can wait i guess
<Deeps> betas are betas for a reason, play with it on a dev machine or two, dont put it anywhere mission critical
<yarddog> its almost final
<Silvanov> im noticing that too, thanks yarddog.
<kirkland> incorrect: and the upgrade path from 7.10 to 8.04 shouldn't be too hard for you, come a few weeks
<yarddog> Silvanov: i installed it too, webmin looks better
<Deeps> if your stuff's important, you're probably better waiting a month or two after 8.04 is released too, so any other bugs that are encountered from upgrades are ironed out for you
<Silvanov> but it doesnt work with ubuntu?
<yarddog> yes it works
<yarddog> just a sec
<Deeps> if it's not uber important, upgrade immediately and help report any issues you may encounter!
<mathiaz> !webmin | Silvanov
<ubotwo> Silvanov: webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system. See !ebox instead.
<yarddog> Silvanov: have you been here: http://www.webmin.com/
<yarddog> debian package there
<yarddog> ebox does not handle apache
<incorrect> kirkland, yeah i just didn't want to do a job twice, figured 8.04 was nearly done
<yarddog> webmin does
<incorrect> just a few bug fixes
<incorrect> maybe i should just go with what works
<kirkland> incorrect: perhaps, but perhaps not
<Silvanov> so how do I uninstall ebox? going to try webmin now
<Deeps> apt-get dist-upgrade isnt a big job ;)
<yarddog> i would like to know what the issues are with webmin instead of just blatantly saying that.
<yarddog> Silvanov: sudo apt-get remove ebox
<yarddog> but im leaving it in
<yarddog> in a mission critical channel for servers, statements like that should include documentation rather than blind statements.
<Silvanov> im running it on a fairly old box and don't want to waste system resources on it if i dont need to.
<yarddog> gotcha
<Deeps> yarddog: 'google', webmin's not new, issues with it are well known
<yarddog> im running a p3 here
<Deeps> its like an faq page, why repeat it when the information's already there
<yarddog> Deeps: like what issues? rtfm?
<yarddog> telling ppl to google in a server channel is odd
<Deeps> yarddog: my google fu suggests "ubuntu webmin issues"
<Deeps> as a starting point
<Deeps> indeed, you'd think server admins would know how to do their own research first already, and come and ask when they hit a roadblock ;)
<yarddog> and what if a person dont have X installed on their server to google with?
<yarddog> use common sense here
<Deeps> i dont know many server admins who dont have desktops/laptops as well, the die hard cli fans i know use lynx ;)
<yarddog> the whole point of official support in an irc channel is not to tell ppl to google
<yarddog> this is lame
<yarddog> and i dont recall google or rtfm being in the ubuntu code of conduct
<Deeps> fine, i'll stfu and leave you to it
<Deeps> gl
<Silvanov> is there something you recommend instead of webmin Deeps?
<yarddog> hehe
<Deeps> Silvanov: the only web-based administration tools i've used for this kinda stuff has been when dealing with commercial webhosting, plesk, cpanel, ensim
<Deeps> none of which are free
<Deeps> all of which suck horribly in their own unique ways
<Deeps> plesk i found to be the nicest for the end-users (customers) last i used it (v7.5 i think)
<Deeps> all were a nightmare to maintain though
<Deeps> id honestly recommend installing an ssh server and learning to configure through the command line
<Deeps> gives you more flexibility, more control, while admittedly increasing the chance of breaking it all as you learn
<Deeps> but thats why God gave us dev machines/vms
<WheelsOnFire> I have an ubuntu server machine with a 3ware raid card in it. i have installed the 3dm2 3ware management software. in virtually all other OSes we use this in it is installed as an init script that can be stopped and restarted and started. How can I get this in ubuntu ?
<Deeps> WheelsOnFire: what version of ubuntu?
<Silvanov> id prefer to learn command line honestly, but im looking to make an easier transition from wampserver to a dedicated linux server box
<WheelsOnFire> 7.10 server
#ubuntu-server 2008-04-01
<WheelsOnFire> for example, in redhat after installing 3dm2 you simply run, service 3dm2 restart
<WheelsOnFire> or whatever
<WheelsOnFire> and when you init 0 or 6, the service is stopped
<kirkland> WheelsOnFire: is there a script named that in /etc/init.d/* ?
<kirkland> WheelsOnFire: RH's "service" command simply prepends "/etc/init.d/" onto the service you specify (3dm2) and passes it the action you specify (restart)
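What kirkland describes, as a sketch: "service NAME ACTION" becomes "/etc/init.d/NAME ACTION". It's written as a function so it's easy to try; INITDIR is overridable here only for that reason (the real helper is hard-wired to /etc/init.d).

```shell
# Sketch of Red Hat's "service" helper: look up the named script in
# the init.d directory and pass it the requested action.
service_sketch() {
    name="$1"; shift
    script="${INITDIR:-/etc/init.d}/$name"
    if [ -x "$script" ]; then
        "$script" "$@"
    else
        echo "service: $script: no such init script" >&2
        return 1
    fi
}

# Example: service_sketch 3dm2 restart
```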
<WheelsOnFire> yeah I get you
<WheelsOnFire> so in other words if they didn't give an init script I need to make one
<kirkland> WheelsOnFire: what format is the package you installed?
<kirkland> WheelsOnFire: a .deb ?  a .rpm ?  a tarball ?
<WheelsOnFire> =]
<WheelsOnFire> originally it's installed with a java installer. however the installer is closed source and bugged. to get it to install on ubuntu I have to install it elsewhere and then with a custom script which builds a tar from the files on that computer and installs them on the ubuntu machine.
<WheelsOnFire> but I'm pretty good with bash so I'll just script my way through it I guess
<WheelsOnFire> alright thanks
<Silvanov> http://www.ispconfig.org/ this looks similar to cpanel. not sure its a lamp config web gui like im looking for though.
<andguen1> If anyone has a second, I would love comments on https://help.ubuntu.com/community/ShorewallBasics -- especially from people just starting into command line based firewalling, and especially experienced shorewall users as well, thanks
<Silvanov> is there an ssh server installed with ubuntu server (lamp option) by default?
<andguen1> nope
<andguen1> sudo apt-get install openssh-server i believe
<Silvanov> alright, thanks andguen1.
<kirkland> Silvanov: when you choose the lamp option, ssh is in that list too, in the installer
<Silvanov> I didnt see a way to select multiple options though, and I've already completed the installation, just getting started learning commands and how to set it up
<andguen1> Silvanov: sudo apt-cache search is a wonderful command to know -- try 'sudo apt-cache search php' or 'sudo apt-cache search openssh' -- any one of those entries there can be installed
<Silvanov> nice. thank you very much. I also found aptitude, which seems like a much more 'user friendly' repository/installer gui.
<andguen1> Silvanov: There are always always always multiple ways to do it :) there wouldn't be thousands of Linux distros if we geeks didn't want some choice in the matter :)
<Silvanov> hehe
<andguen1> find a way that works and use it for a while, just be aware of what others use and check it out just in case
<andguen1> I have to confess, I'm a bit of a hypocrite, or just lazy, some of my friends like zsh, I'm staying with bash shell for now :)
<Silvanov> ive tried suse and red hat before in the past, but this will be my first serious foray into linux, and i wanted to start with ubuntu, because ill be selling and supporting ubuntu machines at my new job :D
<Silvanov> alright, much easier now. dont have to get up and change chairs, have ssh installed and logged in from my main computer :)
<andguen1> agreed, definitely a key first step. -- I have worked with Redhat and SUSE as well, I've gotten spoiled with how easy it is to install packages in Debian/Ubuntu, not to mention the speed of new releases....
<Silvanov> so far i liked the new kubuntu, everything seemed fairly simple to use, and install and configuration was by far much easier than windows 98-vista. For the first time ive tried linux, all the hardware just worked as well. Now ive reinstalled hardy 8.04 server on that machine, and just trying to learn as i go, with the motive of transferring my blog, which i host on my main pc via wampserver to this box.
<andguen1> it definitely takes a good project to keep you learning and diving further into it
<Silvanov> the only commands I know thus far are sudo, apt-get, ls and ifconfig lol
<andguent> locate is a nice one to know, grep, xargs..... hmmmm what else :)
<Silvanov> figuring I'll learn a ton as i get different software installed and try to configure them.
<owh> Of course there is the server guide :)
<owh> !guide
<Silvanov> !guide
<owh> !serverguide
<owh> Hmm
<Silvanov> bot dead again?
<owh> So much for that.
<owh> One mo.
<owh> http://doc.ubuntu.com/ubuntu/serverguide/C/
<owh> That won't answer all your questions, but it gets you started.
<andguent> if one document answered all of my questions, I wouldn't read it :P
<andguent> takes all of the fun out of life
<Silvanov> im actually reading through that atm, but is it accurate for the 8.04 release as well?
 * lamont isn't familiar with the 'ifconfig lol' command... :-)
<andguent> alias ifconfiglol='echo Come again?'
<owh> Silvanov: Some people would be insulted at that question, but yes, many hours were spent on making it so. Mind you, I'm not sure if that URL has the latest version of the docs, YMMV.
<lamont> heh
 * lamont uses 'ip' rather than 'ifconfig' anymore anyway
<andguent> owh: insulted? mmmmkay, sounds like a reasonable question to me, shrug
<Silvanov> owh: thanks, and sorry, didnt mean to insult or offend anyone.
<JanC> doc.ubuntu.com = "development" server of the docs team
<andguent> I asked this about an hour ago, but If anyone has a second, I would love comments on https://help.ubuntu.com/community/ShorewallBasics -- especially from people just starting into command line based firewalling, and especially experienced shorewall users as well, thanks
<JanC> andguent: did you test that with the new shorewall ?
<owh> andguent: Silvanov: Ah, sorry, that's my sense of humour acting up. The insulted comment was really supposed to be in quotes and I should have added >:-) to the end :)
<andguent> JanC: how new? there is always something newer, i just accept that sometimes, it should be good with the latest shorewall from the gutsy repositories
<owh> JanC: So, you're saying that its the latest version?
 * owh really, really wishes that the documentation team started including version strings on all the documents.
<andguent> JanC: I assume you are referring to shorewall 4.0 & up? No, I'm testing it on 3.4.4 currently
<JanC> andguent: according to http://packages.ubuntu.com/shorewall hardy has shorewall 4.x ?
<andguent> still gutsy on my home computers, definitely good to know, thanks for pointing it out
<JanC> you can try hardy in a VM
<andguent> yup, when the time arrives, but definitely good to try
<Silvanov> alright, ive got the lamp setup, phpmyadmin, ssh and ftp servers installed, but not configured yet.
<andguent> Silvanov: sounds like an excellent start, which ftp server did you go with? most everything else there usually configures itself
<Silvanov> vsftpd via http://doc.ubuntu.com/ubuntu/serverguide/C/ftp-server.html
<Silvanov> was able to turn it on, connect from my main pc, see there were no files, and turn it off. Figure once I get lamp setup, and need to transfer my wordpress blog over, ill figure that part out lol
<andguent> yup, ftp can definitely do that for you, but one other piece of software to be aware of is scp/ssh, scp lets you copy files from one pc to another if ssh is running at the destination (and your username can get to the destination directory)
<Silvanov> i'll check it out, but atm i don't know directory structures or default locations for anything on linux :S
<Silvanov> im so accustomed to windows, im intrigued, but feel like a new born at the same time lol
<andguent> if you know /home, /etc, and /var, the rest can be lower priority -- /home for your user's settings, /etc for global settings like server daemons, and /var is for files that change often like logs
<Silvanov> so windows analogy wise, home is like my documents, etc is like my programs, and var is like temp?
<andguent> mmm, /etc is probably closer to the registry than to program files
<andguent> var is used for things that are around for a while, but might just change a lot, there is a /tmp directory, that is VERY temporary, and gets cleaned out every reboot
<Silvanov> very good to know, and great explanations :D
<Silvanov> how about opening/reading/editing text or config files. is there a command for that?
<andguent> dozens
<andguent> nano is an easy one to learn, vi is complicated but powerful, some people really like emacs, but thats another story :)
<andguent> if you ever see documentation or menu shortcuts that say ^X -- that usually means Control+X, if you jump into nano you will see what i mean
<Silvanov> i actually just figured that out, and am playing with nano right now :D
<Silvanov> im guessing ^r (read file) is like open file
<andguent> if you are working with text files on the computer you are at, you can try gedit too -- similar to notepad
<andguent> most likely, I tortured myself and jumped right into vi/vim so i would have to learn a nano feature in order to explain it
<Silvanov> thats cool, reading the ftp conf file now, looks like i can edit it through this as well.
<Silvanov> btw, thanks for all your friendly help thus far and in advance :D im sure everything im asking is extremely newbish, but i do appreciate your answers.
<andguent> here, quick tip for you then, I found it was 100 times easier to learn which files contained which settings once I learned how to search for text in files --- 'find /etc/|xargs grep eth0' -- searches the /etc directory for anything that contains 'eth0'
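andguent's one-liner works for a quick look, but it chokes on filenames containing spaces and prints errors for unreadable files. Two safer variants of the same search, demonstrated on a scratch directory rather than the real /etc:

```shell
# Same search as 'find /etc/ | xargs grep eth0', but safe with spaces
# in filenames (-print0/-0) and quiet about unreadable files. Shown on
# a scratch directory instead of the real /etc:
etc=$(mktemp -d)
mkdir "$etc/network stuff"
echo "iface eth0 inet dhcp" > "$etc/network stuff/interfaces"

find "$etc" -type f -print0 | xargs -0 grep -l eth0 2>/dev/null

# With GNU grep the recursion is built in:
grep -rl eth0 "$etc" 2>/dev/null
```

Both commands print the path of the matching file; drop the -l to see the matching lines themselves.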
<andguent> I used to work as an IT helpdesk manager, we had a heavy amount of Ubuntu/Debian boxes, and a good deal of our techs were Windows guys, I have practice explaining this stuff, but it helps if you know at least some good 'ole DOS
<andguent> so, you're welcome :)
<Silvanov> :D im pretty comfortable in dos, wrote some dos scripts before, so I think I'll learn easier than most, I just tend to overwhelm myself sometimes lol.
<Silvanov> i remember theres a way to look up what commands do, and their syntax, is it command /? or man or something?
<Silvanov> nvm, figured it out :D
<Silvanov> man then the command
<andguent> one of the absolute greatest things of linux is.... once you get your vsftp server running, take the config file /etc/vsftpd.conf or /etc/vsftpd/vsftpd.conf or whatever, and back it up, that usually allows you to duplicate the exact same setup on another box, or break it and reset it later
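That tip fits in a two-line helper; the dated .bak naming below is just one convention, not anything vsftpd requires:

```shell
# Save a dated copy of a config file before (or after) editing it, so
# the setup can be restored later or cloned onto another box.
backup_conf() {
    cp -p "$1" "$1.$(date +%Y%m%d).bak"
}

# Example: backup_conf /etc/vsftpd.conf
```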
<andguent> command --help, command -h, man command -- all of those should work, but some programs are older than others
<pablodias> hello folks.. well. I have a question about "fsck". Is it a server question or should I move to #ubuntu?
<andguent> its fine here, just ask, we can see what we can do
<pablodias> thanks =p
<pablodias> after a power outage, my filesystem got corrupted and then it said to run fsck manually
<pablodias> I've been running it for a long time now, with "-y"
<pablodias> It's actually on the Unattached Inode 819000 (and counting)
<pablodias> it never stops
<pablodias> something I can do to fix it?
<andguent> If possible, try to keep multiple comments to one line, just in case others are talking in the middle of your explanation
<pablodias> ah, ok. i'm sorry
<andguent> how long does it stay on that inode? any weird noises from the drive itself as it hangs on that spot?
<pablodias> It started at 16:00 and is still running. I don't know about any noises because I'm on a remote connection
<pablodias> I read about "temporary files" on lost+found, is it right? If yes, is there a way to delete all those files?
<andguent> how many hours ago was it started? I'm on EasternUS time, but I hate to assume you are in the same time zone. :) Is there anyone near the box itself that could possibly hold a phone to the case as its working?
<andguent> ....or maybe just describe what noises they hear
<pablodias> it's taking about 5 hours
<pablodias> I think no at moment. maybe tomorrow
<andguent> If fsck finds parts of files and doesn't know how to repair the files, it often does drop them in lost+found, I'm not sure if that answers your lost+found question or not
<pablodias> The numbers are counting up quickly. Visually it doesn't look like it's "finding errors" but it's just an impression
<andguent> If the numbers are moving along, I would say let it do its work, it probably is aggressively checking for any errors it can find
<owh> I'd check dmesg while this is going.
<andguent> If the position on the drive has stopped, and stays at one area for a very long time, you may have a hardware problem, I would agree with owh, dmesg gives good info
<andguent> Whatever happens, always always have good backups, if your hard drive survives this, your next priority should be testing all of your backup systems, assuming you have them :)
<pablodias> I'm using a remote system I think cannot show me multiple terminals. I'll keep it running until tomorrow. Many thanks for your help, guys =)
<pablodias> haha, ok. thank you again
<andguent> if signing in via ssh, you can always start a second session, also keep in mind if you started the fsck from a remote shell, that connection needs to stay open for the command to continue
<andguent> shutting down your workstation for the night may kill your disk scan
<andguent> depending on how you started it
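For the ssh case andguent mentions, nohup (or screen) is the usual way to keep a long fsck alive after the session closes. The device name below is a placeholder, and a harmless stand-in command demonstrates the same detach pattern:

```shell
# If the fsck had been started over ssh, detaching it keeps it running
# after the connection drops. /dev/sda1 is a placeholder device -- and
# never fsck a mounted filesystem:
#   nohup fsck -y /dev/sda1 > /root/fsck.log 2>&1 &
# (screen is the other common option: start the job inside screen,
# detach, and reattach later with 'screen -r'.)
# The same detach pattern, shown with a harmless command:
log=$(mktemp)
nohup sh -c 'echo checking; echo done' > "$log" 2>&1 &
wait $!
cat "$log"
```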
<pablodias> It's not SSH because the system is not starting. It's stopping on the disk check; no services were started. I'm using a KVM remote access. Looks like VNC
<andguent> ahh, very nice --- don't suppose you have options for Ctrl-Alt-F2?
<pablodias> I tried the shortcuts menu option, but didn't find it. let me see again
<andguent> I know for a fact that some of those integrated remote access cards will not have the option, so don't kill yourself looking for it
<pablodias> Yep. No way to change terminal. And i dont know if a system that isn't started can show more terminals than the first one
<pablodias> It looks like that free space counting on Gmail =p
<andguent> if it isn't stopped, cross your fingers and go to bed :)
<pablodias> I started out manually accepting these Unattached Inodes, but I stopped after the 600th "Y" I pressed
<andguent> oh dear, 600??
<pablodias> manually. after that I started fsck with "-y" option
<andguent> i hope you have good backups man, that just doesn't sound good
<pablodias> many thanks andguent. if I remember how to get back here I tell you about the end of fsck tomorrow. thanks!
<andguent> good luck
<Silvanov> !webmin
<JaxxMaxx> How about a good recommendation for SSH client, windows based?   Right now I use putty, but that seems to be limited to a single shell.   Wonder if there's like, a multi windowed one
<rhineheart_m> hmmm.. have you tried winscp
<Silvanov> you can run multiple instances of putty as well
<Silvanov> when searching for ssh clients for windows, i found a tabbed client earlier today as well.
<faulkes-> eh?
<faulkes-> putty + screen
<JaxxMaxx> I've used WinSCP before, on my Smoothwall router... didn't seem that special
<JaxxMaxx> Windows doesn't have Screen.   I'm thinking of something like mIRC, with 4 tiled windows all to the same server
<JaxxMaxx> got enough monitor resolution to handle it
<rhineheart_m> JaxxMaxx, what would you like to accomplish?
<JaxxMaxx> Seeing whatever the Debug Output is, while keeping my command window from scrolling back would be a good start.
<JaxxMaxx> trying to troubleshoot why FreeRADIUS/dialupadmin and MySQL aren't happily married yet.
<JaxxMaxx> maybe a packet monitor to discover what the Radius packet situation really is like
<JaxxMaxx> I've got a test user that I provide the right credentials, but the result still comes back a failure, but with the success message
<owh> I'm looking for some opinions. I've got a generic user.sh backup script. The way it is intended to be used is that you symlink to it from /etc/cron.daily. The symlink name will be used to determine which username to backup, using `basename $0` - pretty straight forward. So far so good.
<owh> I've got the same for fstab mounted devices. Works in the same way, mounts stuff based on their name in fstab.
<owh> Now for the challenge.
<rhineheart_m> owh, good morning! :)
<owh> If I want to write a generic script that needs other parameters, for example an rsync server and module, or a path to backup, or I need to order the things in /etc/cron.daily, I need to do some magic with the name if I use this idea.
<owh> So, in the opinions stakes. Am I better off making a configuration file, finding a way to "split" the `basename $0` into parts, or do something else?
<owh> The nice thing about doing it this way is that ls -l /etc/cron.daily shows exactly what is happening.
<owh> And of course, the scripts are completely trivial, simple to maintain and common across all backup types.
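owh's symlink-dispatch idea in a few lines. Here it's written as a function taking the link name (the cron script itself would pass "$0"); the HOMEROOT/DESTROOT overrides and the plain cp are illustrative assumptions, the real script might rsync somewhere instead.

```shell
# Sketch of owh's pattern: one generic backup routine, selected by the
# name it is invoked under. In /etc/cron.daily that name is the
# symlink, i.e. "$0".
backup_user() {
    username=$(basename "$1")        # cron script would pass "$0" here
    src="${HOMEROOT:-/home}/$username"
    dst="${DESTROOT:-/var/backups/users}/$username"
    if [ ! -d "$src" ]; then
        echo "backup_user: no home directory for '$username'" >&2
        return 1
    fi
    mkdir -p "$dst"
    cp -R "$src/." "$dst/"
}

# Installing a new user's daily backup is then just:
#   ln -s /opt/backups/user_backup.sh /etc/cron.daily/fred
```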
<owh> Hi rhineheart_m.
<sommer> owh: seems to me that you have enough options to warrant a config file
<Silvanov> got webmin set up finally :D
<kgoetz> owh: make $1 the user name, then in the script test for $2. if $2 is there require the extra params
<owh> kgoetz: How would I do that, given that the script is run as a result of it being in the /etc/cron.daily/ directory?
<kgoetz> owh: the only way it is run is via symlink? ah.
<owh> sommer: A config file then requires parsing and other stuff. I'm not disagreeing, just trying to find the simplest solution.
<owh> sommer: More accurately, the cleanest solution.
<sommer> owh: ah, for me the cleanest would be to not use /etc/cron.daily, but an actual crontab file
<sommer> then you could use a simpler script with more arguments
<sommer> you can't ls the directory, but you can crontab -l the file :-)
<owh> sommer: I understand what you're saying, but that then requires the administrator to understand the format of the crontab file. Something which you and I take for granted, others are flummoxed.
<sommer> owh: heheheh, yep that changes things
<owh> sommer: I like your argument of it allowing you to provide parameters though.
<kgoetz> owh: replace the symlink with a script?
<owh> This discussion is precisely why I ventured here to ask for opinions :)
<owh> kgoetz: What do you mean?
<owh> Ah, create a script that calls the central code. That's possible, not as pretty, but possible.
<sommer> owh: in that sort of situation maybe bacula would be a good fit, requires more back end configuration, but has gui client
<kgoetz> owh: instead of /etc/cron.daily being a link farm put in a dumb script farm
<owh> sommer: Ah, definitely no. Going down that path then introduces waaaay more complexity, other than an rsync with mount, etc.
<sommer> heh, that's true
<owh> kgoetz: You'll lose this though: /etc/cron.daily/user -> /opt/backups/user_backup.sh
<owh> kgoetz: Which sort of tells you what is going on immediately.
<kgoetz> owh: why will you?
<owh> kgoetz: Then perhaps I do not understand what you are saying. I was showing you a line from ls -l /etc/cron.daily.
<owh> Let me ask a different question as I've already come up against a limitation of my implementation.
<kgoetz> owh: make the file /etc/cron.daily/user a shell script which calls /opt/backups/$0_backup.sh + your params
<owh> kgoetz: No, because then there would be (n * 2) + 1 scripts, rather than one script and n symlinks.
<kgoetz> (n * 2)?
<Jester45> is there any package that i can use to monitor a remote server's resources like ram/swap cpu usage and bandwidth?
<owh> My different question is. How do I order the scripts. Naming them 01-bob 00-judy is obvious, but how do I split off the numbering?
<Jester45> i know i can just use cli tools via ssh but i would love to use a different tool that i can add into conky
<owh> kgoetz: You are suggesting a script in /etc/cron.daily/ for each user, one in /opt/backups/ for each user and the central backup script.
<kgoetz> cut? sed? depends where they are being trimmed
<Silvanov> jester45: webmin or ebox might be what your after
<owh> kgoetz: Hmm, yes, I'm familiar with the concept, is there a cleaner way?
<kgoetz> owh: perhaps i didnt understand your symlink then. i take it "/opt/backups/user_backup.sh" isnt your master backup script then?
<kgoetz> Jester45: depends what you want. theres lots of options though
<Jester45> ill look at ebox but i dont want webmin id rather keep it a bit more secure and use ssh+htop/iftop
<owh> kgoetz: Let me show you a more accurate cron listing: /etc/cron.daily/kgoetz -> /opt/backups/user_backup.sh
<Jester45> ebox looks like the same webui stuff
<kgoetz> owh: cleaner way would depend on how broken my suggestion re sed/cut was ;)
<rhineheart_m> Jester45, have you tried phpmyinfo?
<rhineheart_m> Jester45, have you tried phpsysinfo rather?
<kgoetz> owh: ah. and what happens in user_backup? it uses $0 to say 'backup kgoetz'?
<owh> kgoetz: Well to be precise it uses USERNAME=`basename $0`, but yes.
<Jester45> thats looks better
<kgoetz> owh: and for some reason your going to need extra info per user?
<Jester45> thanks rhineheart_m do you know any simpler ones? or maybe something like conky? cli only server
<owh> kgoetz: Well, no, not for the user script, but for a path backup script, yes. That is, now I want to backup /home/fred/accounts/debtors, but I really don't care about fred's photos.
<Silvanov> are you looking for webbased guis?
<kgoetz> Jester45: a one off or lots of servers?
<owh> kgoetz: And similarly, I want to backup to an rsync server with a named module.
<Jester45> kgoetz, just one
<rhineheart_m> Jester45, mmm Nagios
<kgoetz> Jester45: for multiple i'd suggest nagios but not for one
<kgoetz> owh: mmm. i see
<owh> At that point it becomes more and more viable to use sommer's suggestion of config files. I could name them after $0 perhaps.
<Jester45> Silvanov, if you were talking to me im not looking for webuis, its just that they seem to be the only good ones, i think all i really want is ram usage, bandwidth and cpu usage in text, file based or via a service/pipe
<kgoetz> a config file somewhere will be almost required
<owh> Jester45: Then why not set up a password-less ssh and run some remote commands?
<owh> kgoetz: Yuk. but yeah, it's beginning to look like that.
<Jester45> owh, do they connect faster than passworded ssh? so i could include it in conky?
<kgoetz> owh: the reason i thought of using a script rather than a symlink is because you can drop extra info into them
<owh> Jester45: Alternatively you could run MRTG.
<owh> kgoetz: Yeah, but it leaves stuff all over the place, making it harder to maintain.
<kgoetz> owh: yeah.
<owh> kgoetz: It's not when you set it up the first time, it's when you set it up the next time. For example, if I wanted to add a new user to backup, I just create another symlink and off it goes. If I did it with an extra script, I'd need to copy it, then rename it, then edit it, check it for typos, etc.
<kgoetz> owh: now think symlink+config file and its hardly any less complex
<kgoetz> if you have a standard config that most clients use you wont have to edit the script each time anyhow
<owh> kgoetz: Ah, but I can make it fail if there is no config file and report back. That way I get told it's borked.
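A sketch of that fail-loudly config lookup; the conf.d location and the SRC/DST variable names are invented for illustration. Since cron mails a job's output to the admin, the error message doubles as the report-back owh wants:

```shell
# Each symlink name maps to a config file, and a missing file aborts
# with a message on stderr (which cron will mail to the admin).
load_backup_conf() {
    name=$(basename "$1")            # cron script would pass "$0" here
    conf="${CONFDIR:-/opt/backups/conf.d}/$name.conf"
    if [ ! -r "$conf" ]; then
        echo "backup '$name': missing config $conf" >&2
        return 1
    fi
    . "$conf"                        # expected to set e.g. SRC and DST
}
```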
<kgoetz> updating is harder then the symlink method though
<kgoetz> thats a feature? ;)
<owh> So, are there any dissenting/alternative opinions around?
<kgoetz> make users do their own backups *mwhahahah*
<kgoetz> </ot>
<owh> kgoetz: Yeah, no.
<kgoetz> lol
<owh> Would this be evil: ln -s /opt/backups/path.sh /etc/cron.daily/home--fred--accounts--debtors
<owh> Similarly, ln -s /opt/backups/rsync.sh /etc/cron.daily/hostname--module
<kgoetz> -- would be more trouble than its worth. i'd think that would be a fairly fragile way to do it in general though
 * owh just did "locate '--'" with no hits.
<kgoetz> think parsing it
<owh> kgoetz: Yes, it's not pretty.
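Parsing the proposed "--" names is indeed mostly string work; a sketch (the helper name is made up):

```shell
# Decode a cron.daily link name like "home--fred--accounts--debtors"
# back into "/home/fred/accounts/debtors". sed does the replacement,
# since plain sh parameter expansion can't substitute globally.
link_to_path() {
    printf '/%s\n' "$(basename "$1" | sed 's|--|/|g')"
}

# The rsync variant ("hostname--module") splits with POSIX expansions:
name="backuphost--homes"             # example value
host=${name%%--*}                    # -> backuphost
module=${name#*--}                   # -> homes
```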
 * kgoetz expects theres no --'s for a reason
<owh> kgoetz: So, that's good then :)
<kgoetz> if your after a uniq string it should be :)
 * kgoetz tries to work out making directories in perl
<owh> kgoetz: What, mkdir isn't good enough for you?
<sommer> kgoetz: `mkdir dirname` :)
<kgoetz> owh: i'm assuming its harder than that :p
 * owh is with sommer :)
 * owh doesn't speak perl :)
 * kgoetz neither ... yet
<sommer> kgoetz: http://perldoc.perl.org/functions/mkdir.html
<sommer> needs a module though
<kgoetz> yegad. its not that hard o_0
<owh> Well, the POSIX.pm has a mkdir function :)
<owh> kgoetz: And the manual refers to Perl's built-in mkdir function as well :)
<kgoetz> perldoc -q mkdir didn't find anything, so i assumed it was going to be veeery hard
 * owh grepped :)
<owh> kgoetz: Very evil: locate perl | while read n ; do grep mkdir $n ; done
 * kgoetz wonders how many binaries owh just grepped
 * owh didn't worry about it.
<kgoetz> lackadaisical fiend
<owh> sommer: I'm going with your suggestion of the config files until I come across a better idea. Thanks.
<owh> kgoetz: Nah, if a binary matched, it said so :)
<owh> kgoetz: Sometimes close enough is good enough.
<owh> kgoetz: Of course that will only be true if it actually works, but by then you'll have used google :)
<sommer> owh: np
<kgoetz> mkdir "wikimangle"; ftw!
<kgoetz> sommer: cheers mate
<owh> ROTFL
<owh> sommer: Did you get a reply about the Guide?
<owh> sommer: Or did they try hard not to laugh?
<sommer> oh ya, I did, since it's so far after SF, committing the changes will mess with the translators
<sommer> but, we can commit just the spelling changes right before release
<sommer> which will be after the translators are done
<sommer> just need to make sure the translation doesn't change :-)
<owh> So, that means we'll have it translated *and* spell checked?
<sommer> yeppers
<owh> sommer: So, do you want me to give you one without example.com, but leave the rest in?
<sommer> I think I created one, did I forget to attach it when I replied?
 * owh still thinks there should be a standard for example urls.
 * owh checks.
<sommer> sure, you just created the standard heh
<owh> sommer: I mean across all the documentation, not just our little guide :)
<owh> Yes, there was a .diff attached. I'm checking it now.
<sommer> you mean not just the server guide?
<owh> Yes
<sommer> um for the diff for the standard?
<owh> Huh?
<sommer> for the using example.com as a standard it would probably be a good idea to post to the doc ml, but probably after hardy is released
<JaxxMaxx> isn't example.com  THE  example URL?
<owh> Yes. I think that needs looking at in more detail. Hostnames, user names, example users, urls, etc.
<owh> JaxxMaxx, Yes, in very small examples, but not across the board.
<sommer> sure, I'm sure other members of the doc team would agree, plus there are many "student" documentors that could handle that
<owh> For example, what do you name the localhost's FQDN? What about a generic mail server? What username do you give?
<JaxxMaxx> I prefer Testy McTesterson myself
<JaxxMaxx> localhost.example.local?
<JaxxMaxx> Opinions on SecureCRT?
<owh> sommer: Your diff seems to have lots removed. I've not got time right now to check, but I'll have a look-see.
<JaxxMaxx> Hmmm, any way I can get Alt-Fn   to work in Putty/  =]
<sommer> owh: sure, whenever you get a chance
<owh> JaxxMaxx, that makes no sense. Alt-Fn, in the context of consoles is hardware specific. You're better off using screen.
<sommer> well I'm off to sleepy time, have a good one all
<owh> Later sommer
<JaxxMaxx> ooh, I remember screen.   not how to use it, mind, but I recall the command from University Unix shell days...
<owh> Ctrl-A Ctrl-D = detach, Ctrl-A Ctrl-C = create, Ctrl-A Ctrl-N = next.
<owh> Have fun.
<JaxxMaxx> sounds like orphaned processes to me.... =]
<jetole> does anyone know what the ubuntu developer channel is?
<ScottK> lamont: Here's a reminder about Bug #207526.  I don't think we want to skip fixing this one before the release.
<JaxxMaxx>  is it -devel   or -development ?
<JaxxMaxx> what command do I use to display what a symlink points at?   trying to find where stuff in init.d  is pointing
<jetole> devel
<jetole> ls -l symlink
<jetole> ls -l /dir will show long for all files including where symlinks point
<JaxxMaxx> is green a symlink?
<jetole> no
<jetole> executable
<jetole> sym links are light blue
<jetole> if a file has x in its permissions then it will be green
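Rather than reading ls colors, readlink answers JaxxMaxx's question directly; a demonstration in a temp directory (file names here are made up):

```shell
# readlink prints a symlink's target; "ls -l" would show it as link -> target.
tmp=$(mktemp -d)
touch "$tmp/real-script"
ln -s "$tmp/real-script" "$tmp/S20link"
readlink "$tmp/S20link"
```

The same command works on the /etc/rc2.d entries, e.g. `readlink /etc/rc2.d/S20ssh`.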
<JaxxMaxx> Hmm.  Then Ubuntu packages put the executables into the init.d  dir?
<jetole> they should be
<JaxxMaxx> how do I tell what folder the conf files are in?
<jetole> /etc
<jetole> these are standard linux details
<jetole> /etc/init.d is scripts that run when your computer starts
<JaxxMaxx> Sorry for being such a fresh n00b .  =]
<JaxxMaxx> most I've done with Linux before is a Smoothwall Express box
<jetole> however they do not run just by being in that directory; folders like /etc/rc2.d link to them
<jetole> I have never used it
<jetole> smoothwall
<jetole> "You call that a firewall? This is a bloody firewall"
 * jetole points to custom iptables from hell
<JaxxMaxx> I don't have the time for custom iptables =]
<jetole> yes you do, you just don't know how
<JaxxMaxx> I used to
<JaxxMaxx> way back
<jetole> it is not a slow process when you know how to use it
<JaxxMaxx> but work wasn't eating up the time back then.
<jetole> I can configure it quicker with iptables than any gui
<JaxxMaxx> Too many simple boxes you can drop in for firewall duty.  You never truly know how they work exactly, but the bosses believe they do the job
<jetole> and gui's lack features
<jetole> well security is my job
<JaxxMaxx> I'm not lucky to be so focused.
<themime> i installed ubuntu-server on my laptop because it only has 128MB ram and i wanted a very basic install.  i installed fluxbox on top of it, and now i want to run a network manager that supposedly comes with ubuntu-desktop.  i thought i might have already apt-get'd it, is there a command i can use to run it to see?
<JaxxMaxx> current boogeyman is a Ubuntu LAMP server hosting FreeRADIUS/DialupAdmin  for a Nomadix captive portal
<jetole> I have unused IPs in public address space, spaced between real IPs; if any packet goes to one of those addresses the sender is blacklisted, and if someone port scans a system they are temp blocked. find me a gui that allows me to do that
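A decoy-address rule like the one described could be built with iptables' `recent` match. This is a sketch of the idea, not jetole's actual ruleset: 192.0.2.50 is a documentation address standing in for an unused public IP, and the rules need root, so this is shown as a fragment only:

```shell
# Any source that sends a packet to the unused decoy address gets recorded
# in the "tripwire" list (and the packet is dropped)...
iptables -A INPUT -d 192.0.2.50 -m recent --name tripwire --set -j DROP
# ...and all further traffic from recorded sources is dropped for an hour.
iptables -A INPUT -m recent --name tripwire --rcheck --seconds 3600 -j DROP
```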
<jetole> themime: apt-get install ubuntu-desktop -y
<jetole> sudo if need be
<themime> whats -y?
<jetole> yes
<themime> i don't want ubuntu desktop though
<jetole> then why did you just ask for it?
<themime> sorry, i was referring to network manager
<themime> can i run it command line?
<jetole> apt-get install -y network-manager
<jetole> apt-get install -y network-manager-gnome
<JaxxMaxx> Is my  mysqld   supposed to be running all the time with --skip-external-locking ?   I fear I changed a conf file trying to reset the 'sa' password
<themime> jetole: im using fluxbox, will that require a bunch of gnome crap i don't need?
<jetole> well, first off there is no 'sa' password
<jetole> themime: probably but that is one of the network-manager guis
<jetole> the other option is the kde gui
<jetole> JaxxMaxx: there is no 'sa' password
<JaxxMaxx> it doesn't ask for a baseline password when you install MySQL?  probably my root sql login then
<themime> is there a non gui version? my question of "how do i run it" was not the install, but to run network manager, cause i think i may have installed it
<jetole> JaxxMaxx: no, and you do not need one when you install it but lemme look at my sql
<jetole> yes it is supposed to have that option
<jetole> I just checked on 3 systems
<jetole> the option for password is something like --skip-grant-tables
 * jetole looks for sure
<jetole> yes, that was the exact option
<JaxxMaxx> ah, right
<jetole> but if you install mysqld from apt-get then there is no root password
<JaxxMaxx> Where does the Debug Log  end up?  Supposedly there are messages in there useful for troubleshooting
<jetole> if you install it from the server then it prompts you, I mean from the CD
<jetole> during OS install
<jetole> themime: ubuntu server also has a kernel you don't want, if I were you I would install ubuntu desktop and do an apt-get remove ubuntu-desktop -y
<jetole> add --purge onto the end
<jetole> the server system is configured to be a server
<jetole> themime: apt-get install linux-image-generic -y && update-grub
<jetole> themime: reboot and choose the new kernel
<jetole> and then do a apt-get remove linux-image-server && update-grub
<JaxxMaxx> Here's what may be a silly question:  If something is told to authenticate to MySQL as a specific username, does that username have to exist in the Linux subsystem, or is it specific to the SQL server?
<jetole> specific to sql
<jetole> grant all privileges on table to 'user'@'host' identified by 'password';
<jetole> I think that is the syntax
<jetole> and host is optional
<jetole> host is also the host of the sql server
<JaxxMaxx> Hmm, maybe I'm not specifying the @host part in this conf file...
<jetole> no
<jetole> that applies to SQL only
<jetole> if host is 'localhost' then they can only connect through localhost etc
<JaxxMaxx> ah, no domain on SQL specific accounts...
<jetole> it's possible, login to sql manually and run => select user, host from mysql.user;
<jetole> it will tell you what users you have defined and what host is associated with them
<jetole> if you want it to be associated with any host then change the host to %
<jetole> update mysql.user set host = '%' where user = 'my_fscking_user';
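Put together, the statements above look like this. The database, user name, and password are invented examples, the `IDENTIFIED BY` form of GRANT is the MySQL 5.x syntax of this era, and it all needs a running server and root credentials, so it is shown as a fragment:

```shell
mysql -u root -p <<'SQL'
-- create an account scoped to local connections only (made-up names):
GRANT ALL PRIVILEGES ON radius.* TO 'radius'@'localhost' IDENTIFIED BY 'secret';
-- list defined users and the client hosts they may connect from:
SELECT user, host FROM mysql.user;
-- open an account up to any client host:
UPDATE mysql.user SET host = '%' WHERE user = 'radius';
FLUSH PRIVILEGES;
SQL
```

Note the host part names the *client's* host, which is why an app told to connect to the box's public IP can fail against a 'localhost'-only account, as jetole explains below.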
<JaxxMaxx> nah, only things on localhost should talk to this MySQL server
<JaxxMaxx> just hard to tell if it's actually succeeding.   the web based admin keeps showing SQL DEBUG statements at the top of frames
<jetole> JaxxMaxx: yes but localhost is a proper name as well which represents 127.0.0.1, so if you tell your app to login to an ip that is not the localhost ip and localhost is defined as the login host then it will fail
<jetole> JaxxMaxx: have you tried logging in locally with that user name and password to the ip that you are specifying
<jetole> ?
<JaxxMaxx> like with SSH, or via mysql
<jetole> mysql
<jetole> mysql user names do not have ssh access, they are not system users
<JaxxMaxx> I've got the prompt  (never been able to fathom this properly)
<jetole> mysql root user has no password by default but imagine if ubuntu shipped where anyone could login to ssh as root with no pass
<jetole> from bash => mysql -u user -p database
<JaxxMaxx> I've set a root pass in MySQL
<jetole> that says launch mysql as user for database and prompt for password
<jetole> if there is no password then leave -p off
<JaxxMaxx> yeah, that works, and I can see the tables the scripts imported...
<JaxxMaxx> now to make sure freeRadius uses those credentials
<jetole> well that is freeradius specific but now you know mysql is setup properly
<jetole> you're using sql.conf in the /etc/freeradius directory?
<JaxxMaxx> how about restarting one of the init.d  scripts, without rebooting the whole server?  I've just been doing shutdown -r   and waiting
<jetole> /etc/init.d/script restart
<JaxxMaxx> I'm guessing so, that's where I've put the credentials
<jetole> I have never used the software but I just installed it and am looking it over
<JaxxMaxx> it's one of the most popular RADIUS servers available
<jetole> actually fuck that, I am too tired to look it over
<kgoetz> jetole: better watch the language - ubuntu channel
<jetole> right
<jetole> actually fsck that, I am too tired to look it over
<jetole> better?
<JaxxMaxx> Heh.  Usage of Linux should allow for the occasional invective.
<jetole> JaxxMaxx: http://ubuntuforums.org/showthread.php?t=151781
<JaxxMaxx> Yeah, I've been "looking it over" for around a week now, starting to get tired myself
<jetole> http://ubuntuforums.org/showthread.php?t=151782
 * kgoetz hates freeradius auth setups
<jetole> nothing on linux is impossible, some things just take a lot of determination but feel good when they are done and offer more options than commercial apps on graphical smiley operating systems
<kgoetz> i doubt setting up radius is easy on any OS tbh
<jetole> plus graphical smiley operating systems got their a$$es owned this year at CanSecWest / Pwn 2 Own
<jetole> I have never done it
<kgoetz> i have. its a pita :)
<jetole> I honestly am not sure what radius offers, I know it is a central point of authentication, is that about it?
<JaxxMaxx> Huzzah for replacing servers other people configured!
<jetole> JaxxMaxx: my life revolves around that to a degree
<JaxxMaxx> virtually every appliance box  security thingy can base off RADIUS authentication
<kgoetz> jetole: it provides basic accounting and authentication
<JaxxMaxx> lots of ISPs use it for PPPoE accounts
<JaxxMaxx> yeah, the accounting side is the big one
<kgoetz> basic is a key word ;)
<jetole> hmm
<jetole> so what do you need it for JaxxMaxx
<jetole> ?
<JaxxMaxx> I need to make a user-friendly interface for adding usernames to a Captive Portal device (Nomadix)
<JaxxMaxx> authenticates people on a Customer access WLAN
<jetole> ah
<jetole> like wifi for mcdonalds?
<JaxxMaxx> currently there's an old linux install on a Dell box providing FreeRADIUS already
<JaxxMaxx> larger scale, but yeah.
 * jetole nods
<JaxxMaxx> it's a Convention venue, wireless access for exhibitors and other customers
<jetole> whats wrong with the dell box?
<JaxxMaxx> it's getting old, fear of hardware failure
<JaxxMaxx> it's an old Dimension desktop
<jetole> oh
<jetole> ouch
<kgoetz> is it server or pc?
<JaxxMaxx> now they have a "proper" pizza box server
<kgoetz> oh, pc *heh*. ugly
<JaxxMaxx> and I'm attempting to recreate
<JaxxMaxx> with more support for accounting and tracking
<jetole> I have a bunch of those in my office but they are used as desktops
<JaxxMaxx> hence the MySQL integration
<JaxxMaxx> yeah, the previous linux guy here was...   odd
<jetole> we just installed 3 new dell poweredge 2950's in a data center
<jetole> those are nice
<jetole> with one huge fscking flaw
<JaxxMaxx> "Let's throw an essential service on this POS dell desktop"
<jetole> dell DRAC which is sold on them from the dell.com/linux site isn't linux compatible
<kgoetz> heh
<JaxxMaxx> Buy Dell servers WITHOUT OS PRELOADED.    Golden Lesson.
<jetole> it's barely windows compatible to be honest and the DRAC is honestly a joke in both my opinion and generally in the public
<kgoetz> *always* buy servers clean
<jetole> JaxxMaxx: DRAC is a client access device
<jetole> OS preload is irrelevant
<JaxxMaxx> DRAC?  that some sort of remote admin card?
<JaxxMaxx> hurrrrr
<JaxxMaxx> Dell Remote Admin Card
<JaxxMaxx> me so smarty
<jetole> which reminds me, speaking of DRAC, anyone know of a device that I can install on the server that will give me IP KVM that allows me to access bios etc and gives me virtual media so that a CD in my drive at my office appears present in the server in the data center?
<jetole> JaxxMaxx: no but it claims to be
<jetole> it's an over priced managed PDU
<JaxxMaxx> generally those have to be vendor specific, jetole
<JaxxMaxx> I like the IBM and HP ones
<jetole> JaxxMaxx: they shouldn't be
<JaxxMaxx> And in a perfect world they wouldn't.
<kgoetz> open firmware + alom \o/
<JaxxMaxx> but, they do have to interface with the BIOS, so that is all super sekret tech
<jetole> it is something that can be done generically in principle, I mean the bios over kvm can be done with a device that appears as a video card to the server, and there are usb cdroms and pci drive adapters so if the card manages the over the internet part it is fine
<jetole> the remote reboot capability can be done through a managed PDU
<jetole> JaxxMaxx: no it isn't
<jetole> can IP KVM cards not do that already?
<jetole> I mean there is nothing secret about it
<JaxxMaxx> There may be PCI based IP KVM cards, but I'm not familiar with generic ones...  only specific addons from the server vendor
<jetole> you're emulating a screen, keyboard and mouse, bios doesn't have to know what it is connected to
<jetole> JaxxMaxx: they are out there but I have never used one
<jetole> it's the virtual media which I thought would be less likely
<JaxxMaxx> the one I've used worked well for loading my FLASH drive remote to server, and let me watch screen across a reboot
<jetole> JaxxMaxx: thats what DRAC claims
<JaxxMaxx> silly RAID error refusing to pass a "push a key" prompt
<JaxxMaxx> yet DRAC won't load the media into the Linux OS?
<JaxxMaxx> doesn't play nice with umount or whatever
<jetole> after hours of tech support and on site dell technicians who didn't get it we finally realized that with a highly tuned windows machine it works some times, and the techs argued with each other about whether linux works
<JaxxMaxx> Most of the ones I've come across emulate you plugging the device in via USB
<jetole> this one is supposed to do that also
<jetole> DRAC 5
<jetole> worst case scenario, if the server fails then I am driving to downtown miami to fix it
<JaxxMaxx> I don't do Dell that much, honestly.
<jetole> my boss was adamant about dell
<JaxxMaxx> Stupid boss.
<jetole> my boss is a software guy though
<JaxxMaxx> HP and IBM both will special bid Dell price on anything not bottom barrel
<jetole> he was cautious about buying non dell computer monitors wondering if they would be compatible
<JaxxMaxx> if you've got a good VAR
<JaxxMaxx> Hmmm.  Sounds like your boss needs some reprogramming.  I'll fetch the BOFH cattle prod
<jetole> yeah well, my boss is a good programmer but doesn't know shit about hardware
<JaxxMaxx> Find out who in the area does the Onsite server hardware calls for IBM and/or HP.  they'll get you good pricing, they want to get in instead of Dell
<jetole> JaxxMaxx: we already have the dell servers on site and live
<JaxxMaxx> I'm lucky enough to work for the company that does it in my City.  =]
<JaxxMaxx> yeah, I feel your pain.
<kgoetz> sun > ibm > hp > * > dell
<JaxxMaxx> Tell Dell to fix their crappy remote admin cards
<jetole> honestly if I can find a good card I may be happy, the dell computers do kick butt otherwise
<JaxxMaxx> tbh, that would be interesting.  Addon PCIe card that replaces video controller with a passthru to IP KVM instead of video device...
<jetole> http://www.avocent.com/What-is-KVM-over-IP.aspx
<jetole> that looks decent except for windows
<jetole> but they mention virtual media
<jetole> JaxxMaxx: http://en.wikipedia.org/wiki/KVM_switch  <== browse down to the kvm over ip section
<JaxxMaxx> They are very handy devices.
<jetole> seems like I'd be set if I can find a DRAC-like one that works, likes linux and is hardware indifferent
<jetole> lol @ http://okvm.sourceforge.net/links.html
<jetole> see if you can spot it
<JaxxMaxx> Heh.  really Open Source, build your own PCI interface card
<jetole> I was actually refering to rdesktop x2
<JaxxMaxx> oh, heh.  hurray for volunteer proofed pages
<JaxxMaxx> ugh, blargh,  why are DEBUG statements showing up in the PHP based pages for dialupadmin
<jetole> probably because it is enabled somewhere
<JaxxMaxx> would that be a SQL or apache thing?
<JaxxMaxx> I can't find the debug statements anywhere else
<jetole> http://www.avocent.com/DSR_Switches.aspx
<jetole> it would be a dialupadmin thing
<jetole> it would be in a configuration file somewhere that the dialupadmin php parses; when it sees the sql debug option enabled, it displays the sql debug output
<JaxxMaxx> now to stop the debug statements...
<JaxxMaxx> ewwww, it might be  because DialupAdmin was written with PHP4 in mind, and now everything is PHP5
<jetole> nope
<jetole> thats not a good feature but doesn't explain the debug statements, there is a config file somewhere that has them enabled
<jetole> i am going to bed
<JaxxMaxx> good night
<JaxxMaxx> stupid other packages depending on php5, and php5 breaking when I install PHP4
<rhineheart_m> hello.. is this article true? http://article.gmane.org/gmane.comp.version-control.git/78613
<kraut> moin
<foo> moin
<spiekey> Morning!
<spiekey> Can you bind netcat to multiple ports?
<_ruben> spiekey: dont think so, but you could run netcat multiple times
<spiekey> _ruben: hmm..okay ;)
<spiekey> has someone an idea whats going on here? http://pastebin.ca/965845
<spiekey> it doesnt make sense to me at all :-/
<soren> spiekey: Hm... Looks like fun :)
<soren> spiekey: Oh, I know.
<soren> spiekey: It doesn't respond to ping, so nmap skips it.
<soren> To change this behaviour pass -PN (used to be -P0 (and you put -PO, not -P0)).
<twb> Hi.  I just tried to set up user quotas on a test machine (on the / filesystem, because I forgot to create a separate /home), and the quota command is listing values that are clearly wrong.  / is mounted -o usrquota according to /proc/mounts and /aquota.user exists, but "quota al" claims I have only twelve thousand (12340) blocks used when du -sch /home/al clearly reports 700MB of usage, all owned by al.
<twb> So, the quota file was generated correctly by quotacheck (as part of /etc/init.d/quota), but it hasn't been correctly updated -- which is why quotas aren't enforced.  What can I do to isolate the fault?
<twb> Probably unimportant details: the server is running hardy, and the users in question exist in LDAP (getent passwd, i.e. nss, can see them) and Kerberos (they can log in, i.e. pam_krb5 can see them).  /home is exported by NFS 3 and /export and /export/home are exported by NFSv4.  /home is -o bind mounted to /export/home.
<spiekey> soren: i used 0 as in zero.
<spiekey> soren: -sT seems to solve it.
<soren> spiekey: Oh, so you did. The font on pastebin misled me.
<twb> It mysteriously started working.
<spiekey> soren: when i open the port 1234 with netcat: nc -l -p 1234 -u -k   and i scan this port with nmap my netcat dies.
<spiekey> http://www.networksecurityarchive.org/html/Security-Basics/2008-02/msg00354.html --> they confirm my option flags
<spiekey> any idea why netcat dies?
<spiekey> this is on gutsy
<twb> spiekey: dumb question: is it because nmap opens the connection, then hangs up?
<twb> netcat will exit when the other end hangs up.
<spiekey>  but i want nc to stay alive :)
<soren> spiekey: netcat only handles one connection.
<soren> spiekey: And then dies.
<spiekey> this sucks :D
<spiekey> how would i then be able to test a udp connection with nmap and netcat?
<soren> while true; do netcat ; done
<spiekey> ah! :)
<spiekey> okay, lets assume i want to open up 20 udp ports with netcat... all in while loops... how will i be able to kill them all afterwards?
<fromport> killall nc
<spiekey> fromport: nc is running in a loop
<spiekey> while true;do nc...;done
<soren> Kill the shell that's running the loop.
<_ruben> spiekey: why not use smth like xinetd to do the listening?
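Killing the shell that runs the loop scales to many ports if each loop runs in its own background subshell and the PIDs are recorded. A sketch of that pattern, with sleep standing in for the netcat listener so it runs anywhere:

```shell
# Each loop runs in its own background subshell; recording the subshell
# PIDs lets us kill every loop at once. The sleep is a stand-in for
# "nc -l -u -p $port" so the sketch does not need netcat or open ports.
pids=""
for port in 5001 5002 5003; do
    ( while true; do sleep 1; done ) &
    pids="$pids $!"
done

kill $pids                # stop all the loops at once
wait $pids 2>/dev/null    # reap them so the PIDs are really gone
echo "loops stopped"
```

For real listeners, xinetd (as _ruben suggests) avoids the loop management entirely.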
 * faulkes- makes a mental note to have the ubuntu forum team killed
 * kgoetz wonders what faulkes- is plotting
<faulkes-> I appreciate that it's april fools but what they've done deserves nothing less than death
<kgoetz> ah... i wont look
<faulkes-> best not
<soren> Oh, dear.
<faulkes-> soren: git my scattergun, I'ma goin huntin forumpossums
<zul> hmm...I seem to have a deer in the backyard of my house how odd
<soren> Er... what?
<zul> a deer like bambi
<faulkes-> zul: used to see that all the time when I lived in boulder, co
<zul> faulkes-: yeah but this in the middle of the city, kind of
<faulkes-> very odd the first time you see it if you're used to living in the city
<faulkes-> zul: well, no choice, get out the steak knives, start preparing lunch
<zul> ick
<JaxxMaxx> can't believe anything you hear on 4/1
<troofy> if i dont have dns control, what hurdles will i face with my .com?
<soren> what?
<troofy> i wana have a .com
<troofy> some dont give full dns control. right?
<soren> That doesn't mean anything.
<soren> Well, that's not entirely true. It's wildly ambiguous, though.
<soren> You want to buy a .com domain? Is that it?
<troofy> ya
<JaxxMaxx> All the good domain hosts will let you have control over the full DNS zone
<troofy> soren someone said go for provider that gives good dns control
 * soren is curious what this has to do with Ubuntu
<soren> troofy: Well, yeah, some sort of dns control would be useful :)
<troofy> ubuntu server can be used to have websites hosted with apache
<soren> troofy: It all depends on what you're going to use the domain for.
<troofy> soren what is this 'some sort' ?
<Deeps> godaddy.com, coupon code OYH3, $6.95 .com domain
<troofy> soren domain will be used as email server, web server, ircd server.
<Deeps> afaik thats the cheapest you can get
<Deeps> (as an individual)
<troofy> k
<troofy> Deeps godaddy can close my websites? for spamming?
<soren> Owning a domain is useless if you have no control over it, and what you want to do is almost the simplest thing in the world (from dns management perspective).
<Deeps> if you're planning on spamming, you're better off going elsewhere
<Deeps> ie, ask irc.spam.net in #spam
<soren> I'd be surprised if someone offers a dns service so amputated that you can't even set up a web and mail server.
<Deeps> and not in here
<troofy> how are domains shutdown. for what reasons?
<soren> If they suck too much.
<sommer> lol
<troofy> like?
<Deeps> troofy: ask the registrar
<Deeps> troofy: nothing to do with ubuntu
<soren> troofy: Dude.. This channel is about Ubuntu server.
<troofy> .coms are closely linked with servers. and i like ubuntu:)
<soren> I like liquorice. That doesn't make Ubuntu server on-topic in #liquorice, either.
<troofy> can any one tell me off the record?
<Deeps> ask the registrars, nothing to do with us
<troofy> godaddy say it can shutdown for no reason at all
<soren> troofy: Dude. Go somewhere else.
<Deeps> troofy: http://www.icann.org/registrars/accredited-list.html go through that list
<Deeps> those are all the people that'll sell you domains
<Deeps> have a nice day now :)
<troofy> i wil :)
<Deeps> good bye! :)
<troofy> bye..
<troofy> Deeps arent you out yet?
<_ruben> sweet .. will be getting a test san tomorrow or the day after .. an equallogic one .. they're giving a seminar nearby and will be dropping one off here so we can play with it for a while
<henkjan> "Hardy will be delayed by 3 months"
<henkjan> from #ubuntu+1
<_ruben> so it'll be 8.07 then i guess?
<travisb> I set log_errors = On and error_log =/filename in php5 and it's not logging anything to that file.
<ivoks> restart apache
<travisb> I did
<travisb> still nothing
<travisb> apache logs correctly
<soren> travisb: I suspect /filename is not the real path?
<ivoks> travisb: try with /tmp/somefile
<jetole> holy hell
<ivoks> www-data can't write into /
<soren> jetole: ?
<jetole> userfriendly.org is down, damn april fools joke I hope
<travisb> correct, it's file /var/log/php.log and www-data owns it
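The checklist implied by this exchange, written out as a sketch: the paths are the ones named in the thread, the php.ini location is the usual one for Apache's PHP5 on Ubuntu of this era, and the commands need root, so this is a fragment rather than something to run blind:

```shell
# Create the log file and hand it to the web server user so PHP can write it.
sudo touch /var/log/php.log
sudo chown www-data /var/log/php.log
# In the php.ini Apache actually loads (e.g. /etc/php5/apache2/php.ini):
#   log_errors = On
#   error_log  = /var/log/php.log
# Then reload so the settings take effect:
sudo /etc/init.d/apache2 restart
```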
<ivoks> jetole: we had this as index page on ubuntu-hr.org: http://ubuntu-hr.org/jebemti.html
<ivoks> jetole: total panic :D
<bdmurray> soren: somebody mentioned bug 207526 to me
<ubotu> Launchpad bug 207526 in postfix "default main.cf.tls causes syslog warnings" [Medium,Confirmed] https://launchpad.net/bugs/207526
<jetole> huh, well since I have never been to the site before I would not be too worried, but uf.org?
<lamont> bdmurray: meh
<jetole> fsck me! I can't go to work without reading a little sys admin comics and so far I have only seen dilbert, thats like a half dose dude
<lamont> bdmurray: I'll figure out something with it today (and no, changing /var/spool/postfix into a postfix-owned dir is probably not the right answer...
<bdmurray> lamont: okay, thanks!
<jetole> uf.org is more like 3/4 of what a sysadmin needs daily and now it's gone?
<nxvl> how is it that i get a security update uploaded into the stable releases
<nxvl> using the SRU procedure?
<mathiaz> nxvl: if it's a security update, you shouldn't follow the SRU process
<mathiaz> nxvl: IIRC keescook or jdstrand will sponsor your debdiff
<jdstrand> nxvl: hi.  what is the bug number?
<nxvl> jdstrand: Bug #210175
<ubotu> Launchpad bug 210175 in openssh "[openssh] [CVE-2008-1483] allows local users to hijack forwarded X connections" [Undecided,Confirmed] https://launchpad.net/bugs/210175
<jdstrand> nxvl: thanks
<nxvl> does anyone know something about this -> http://blog.drinsama.de/erich/en/linux/debian/2008040101-renaming-directories
<mindframe-> nxvl, oh jeez
<mindframe-> i can't say i'm a fan of that change
<mindframe-> probably better for usability purposes though
<nxvl> well, for the users it will be better, but for sysadmins it will be hell
<mindframe-> agreed
<mindframe-> why not just do symlinks
<nxvl> well. it will be easier for sysadmins to do some symlinks than for users
<nxvl> so they don't have these directories they don't understand
<henkjan> w00t
<henkjan> installing hardy in kvm
<akincer> I've been bashing my head against a wall with bind9 for about an hour and I'm pretty sure apparmor is my problem. Can anybody here give some quick pointers on how to address an issue with apparmor?
<akincer> Ok, so how do I configure apparmor to let named access zone files in /etc/bind/zones?
<akincer> I could lament how dumb THAT is, but I'll refrain since I hope there is an easy fix
<mindframe-> akincer, maradns to the rescue
<akincer> ?
<mindframe-> just spamming my preference of dns server, sorry
<akincer> more like /etc/init.d/apparmor stop to the rescue
<mindframe-> hah
<mindframe-> maybe set it to 'learning' mode for a bit
<akincer> I'm sure it's great, but not letting bind read zone files treads on absurdity
<akincer> As soon as someone explains that one to me and how to fix it, I'll think more highly of it. Until then, I consider it a nuisance to be turned off
<akincer> or point me to some documentation
<akincer> Googling apparmor ubuntu bind9 doesn't bring up anything promising
<akincer> Ahh, how cute. Found this in /etc/apparmor.d/usr.sbin.named: # Dynamic updates needs zone and journal files rw. We just allow rw for all in /etc/bind, and let DAC handle the rest
<akincer> Sorry, this gets a big FAIL
<akincer> I'll fix it since DAC seems to be failing all on its own
<jdstrand> akincer: where are you storing your zone files?
<jdstrand> /etc/bind?
<akincer> in /etc/bind/zones but not to worry. Adding /etc/bind/zones/* rw, in the usr.sbin.named fixed it
 * jdstrand nods
<akincer> I shouldn't HAVE to do that
<Deeps> i suspect apparmor might have expected your zone files to be in /var/cache/bind/
<jdstrand> apparmor is configured for /etc/bind and /var/lib/bind
<Deeps> (which is the default behaviour on ubuntu systems)
<akincer> None of the how-tos out there use that convention
<jdstrand> Deeps: it's /var/lib/bind
<Deeps> jdstrand: my ubuntu 7.10 box says differently
<slangasek> there's no /var/cache/bind used for slave zones?
<jdstrand> it's /var/lib/bind on hardy for sure
<Deeps> /etc/bind/named.conf.options:
<Deeps> directory "/var/cache/bind";
<Deeps> by default
<Deeps> on gutsy
<jdstrand> which is what I assumed we were talking about, since hardy is the first release with an enforcing profile
<Deeps> and on debian etch
<slangasek> jdstrand: that's, mmm, wrong then :)  slave zones are cache data, and should be stored in a separate dir
<jdstrand> slangasek: talk to lamont-- I didn't do it ;)
<jdstrand> let me check /etc/bind/named.conf.options...
<lamont>  /var/cache/bind is for slave zones
<slangasek> right
<lamont>  /var/lib/bind is for zones that you have nsupdate hitting
<lamont>  /etc/bind/ is for zones that you master
<slangasek> so the proper apparmor policy is to allow both
<jdstrand> slangasek: and it does
<Deeps> i guess all my zones are in the wrong place then, as all i do for my zones is file "zone", no extra pathing
<lamont> realistically, named should not need write access to /etc/bind
<Deeps> (which dumps them all in /var/cache/bind)
<Deeps> (whoops? heh)
 * jdstrand nods, but acquiesced in the knowledge that some people configure it that way
 * lamont points at "Configuration Schema" in /usr/share/doc/bind9/README.Debian.gz
<lamont> there  are also people who put everything in /var/lib/bind, and I mean everything
<akincer> there doesn't seem to be any good docs on wiki.ubuntu.com for this
<Deeps> ah
 * Deeps learns more
<akincer> http://ubuntuforums.org/showthread.php?t=236093
<akincer> That's what I followed
<akincer> It seems to me extremely unwise to put an enforcement mechanism on the server edition without some documentation on what basic assumptions it makes.
<akincer> And I think that is being generous when I say that
<lamont> akincer: I wasn't consulted before they uploaded the apparmor crack
<lamont> er, stuff
<sergevn> is there an opensource alternative for DirectAdmin?
<akincer> I gotcha. Somewhere someone made some assumptions and those assumptions haven't, so far as I can tell, been documented. I had to look in a config file to find out? VERY bad form
<lamont> akincer: actually, they follow the documentation in README.Debian.gz
<jdstrand> akincer: it is documented that apparmor is in enforcing mode in README.Debian
<Deeps> is there a way to read the README.Debian without having to gunzip it first?
<slangasek> zless
<Deeps> (as typically it comes .gz)
<jdstrand> akincer: I am not 100%, but I believe it's in the server guide too
<Deeps> ah
<Deeps> nice
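For the record, reading a gzipped doc in place looks like this; zless and zcat ship with gzip, and the path is the bind9 README mentioned above:

```shell
# Page through a gzipped README without unpacking it first:
zless /usr/share/doc/bind9/README.Debian.gz
# Or dump it to stdout and pipe it wherever:
zcat /usr/share/doc/bind9/README.Debian.gz | head
```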
 * lamont uses "vi" (== vim) 
<Deeps> vi can gunzip on the fly?
<Deeps> well, vim
<jdstrand> akincer: and we have https://wiki.ubuntu.com/DebuggingApparmor for debugging profile bugs
<jdstrand> akincer: which you hit-- that line in usr.sbin.named should be /etc/bind/** rw,
<akincer> http://doc.ubuntu.com/ubuntu/serverguide/C/dns-configuration.html
<akincer>  	      Now use an existing zone file as a template to create the /etc/bind/db.example.com file:
<akincer> sudo cp /etc/bind/db.local /etc/bind/db.example.com
<akincer> but the read/write isn't recursive? Does that REALLY make sense?
<jdstrand> lamont: can you change the apparmor profile to have '/etc/bind/** rw,' instead of '/etc/bind/* rw,' when you upload -9
<incorrect> hello, sorry to go on about this again, but does anyone know of any comparisons between running 32bit apps on 64bit platform vs running them on a native 32bit
<jdstrand> akincer: it doesn't make sense.  it's a bug
<lamont> -  /etc/bind/* rw,
<lamont> +  /etc/bind/** rw,
<lamont> like so?
<jdstrand> yep
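The one-character diff matters because in AppArmor globs `*` matches a single path component while `**` also descends into subdirectories. A sketch of applying the fix by hand, assuming the profile is in Ubuntu's default location:

```shell
# Sketch: widen the glob so paths like /etc/bind/zones/db.example
# also match, then reload the profile without rebooting.
profile=/etc/apparmor.d/usr.sbin.named
sudo sed -i.bak 's|/etc/bind/\* rw,|/etc/bind/** rw,|' "$profile"
sudo apparmor_parser -r "$profile"
```

The `-i.bak` keeps a backup of the original profile in case the edit needs reverting.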
<incorrect> i am pretty convinced that running 32bit apps on 64bit platforms is a waste of time
<jdstrand> lamont: that'll fix akincer's issue
<lamont> incorrect: it depends on the app, I rather expect
<jdstrand> lamont: do you think it would be worthwhile to do the same for /var/cache/bind and /var/lib/bind?
<jdstrand> lamont: in thinking about it, I do
<lamont> yeah, it does
<jdstrand> lamont: can you do that as well?
<lamont> sed -i 's/\*/**/' :-)
<lamont> done
<jdstrand> lamont: thanks!
<incorrect> i expect taking the 32bit emulation down to the silicon should be a lot faster than using lib32
<slangasek> really would be better if it could be /etc/bind/** r, /var/lib/bind/** rw...
<akincer> Are there plans to write up a tutorial on apparmor if one doesn't already exist?
<slangasek> a bit less protection if your daemon is allowed to overwrite its own config files, which is what /etc/bind is supposed to be
<jdstrand> slangasek: I agree, but to not break people's configurations who are doing the wrong thing there, we did 'rw'.
<lamont> slangasek: I could make it read only for /etc/bind... it'd break more than one common-but-well, wrong installation class though
<jdstrand> slangasek: remember that apparmor respects unix perms, so the default install is still ok
<lamont> these are the same people who scream every upgrade because postinst makes  /etc/bind 644 root:bind :-)
<akincer> I'd be happy to stop doing the wrong things there. But I think this should be documented unambiguously. So far, I'm unconvinced that it is.
<slangasek> heh :)
<slangasek> akincer: in a sense, this is documented in the FHS; but I agree that this could be made a bit more explicit
<lamont> jdstrand: likewise, I'm not terribly averse to putting a comment above the '/etc/bind/** r' entry that points to README.Debian.gz :-)
<Deeps> if /etc/bind is supposed to be read only by named, and also where you're supposed to keep your master zone files, where do you keep your dynamic zone files? /var/lib/bind?
<jdstrand> lamont: that would be most welcome
<slangasek> Deeps: yes
<Deeps> dynamic zones that you're master for*
<akincer> slangasek: I'm a documentation nazi. To me, it's binary. Either something is documented unambiguously or it isn't.
<lamont> slangasek: yeah - the series of bugs that eventually led to /etc, /var/cache, and then /var/lib are citing FHS
<jdstrand> slangasek: if the release manager says go for 'r' on /etc/bind, I'm cool with it-- but there will be bugs on it
<lamont> jdstrand: was that a +1 for making /etc/bind "r"??
<lamont> or just the comment?
<jdstrand> lamont: I always wanted it to be 'r', but was trying not to break that common misconfiguration
<jdstrand> lamont: it's a 'correct' vs 'pragmatic' kinda thing
<slangasek> jdstrand: hey now, I'm not speaking as release manager when I say that. :)
 * lamont looks at one bind9 instance he cares about, and finds: include "/var/lib/........conf";
<lamont> so that one breaks in any case.
<jdstrand> lamont: the '**' wouldn't fix it? It's not in /var/lib/bind/...
<lamont> Deeps: dynamic master zones ==> /var/lib/bind
<lamont> it's /var/lib/$somewhereelse
<jdstrand> lamont: ah-- well yes. we have also talked about having a comment in the config files about non-default locations
<lamont> my stuff uses /etc/bind/pri for primary zones
<slangasek> akincer: the FHS unambiguously documents what the hierarchy is supposed to be for files, and Debian policy references the FHS, and Ubuntu references Debian policy... so it's not ambiguous, it's just not self-evident :-)
<akincer> How about sticking a README in /etc/bind with some clarity so hopefully someone like me would read it
<jdstrand> eg, my.cnf now has a warning in it about needing to change usr.sbin.mysqld if the default paths are changed
<jdstrand> akincer: docs are https://help.ubuntu.com/community/AppArmor
<akincer> LOL, perhaps unambiguous isn't the word I'm looking for . . .
<jdstrand> akincer: https://wiki.ubuntu.com/DebuggingApparmor
<lamont>   # Dynamic updates needs zone and journal files rw, use /var/lib/bind
<lamont>   # /etc/bind should be read-only for bind
<lamont>   # See /usr/share/doc/bind9/README.Debian.gz
<lamont>   /etc/bind/** r,
<lamont>   /var/lib/bind/** rw,
<lamont> jdstrand: how's that look?
<jdstrand> akincer: that is not in reference to your README suggestion
<jdstrand> lamont: what about /var/lib/cache?
<lamont> akincer: I'm pretty sure that READMEs don't go in /etc
<jdstrand> /var/cache/bind
<lamont>   /var/cache/bind/** rw,
<lamont> ah, yeah.  in the comment
<lamont>   # /var/cache/bind is for slave/stub data, since we're not the origin of it.
<lamont>   # /etc/bind should be read-only for bind
<lamont>   # /var/lib/bind is for dynamically updated zone (and journal) files.
<lamont>   # /var/cache/bind is for slave/stub data, since we're not the origin of it.
<lamont>   # See /usr/share/doc/bind9/README.Debian.gz
<lamont> there
<lamont> and moved /var/cache/bind up as well
<lamont> akincer: and I really don't want to modify named.conf* unless I have to, since they're almost always modified by the admin, and it's sad to make the upgrade prompt them for the diff
<jdstrand> lamont: those comments are in the apparmor profile?
<lamont> yes
<lamont> http://people.ubuntu.com/~lamont/bind9.apparmor
<akincer> Hey, it was me that used a howto on ubuntuforums
<lamont> is the (uncommitted) file
<jdstrand> I like them and your changes to the profile (though I still think 'r' might get us in trouble-- but upgrades are covered properly in postinst, so probably not too bad)
<lamont> jdstrand: if I upload today, we should hear about it this week, yes? :-D
<jdstrand> lamont: looks great
<akincer> Interestingly, had the rw been recursive, I wouldn't have had the problem to begin with, naughtiness of me putting zones in /etc/bind/zones aside
 * lamont has had mixed results with ubuntuforums howtos....
<jdstrand> lamont: do you have an opinion on putting a comment in the non-apparmor config files?
<akincer> That's the first time I've had a failure. But to be fair, it would have worked had apparmor not gotten in the way
<lamont> jdstrand: <lamont> akincer: and I really don't want to modify named.conf* unless I have to, since they're almost always modified by the admin, and it's sad to make the upgrade prompt them for the diff
<lamont> OTOH, dapper smacked them around in an upgrade, iirc
<akincer> I am not suggesting you do so
<lamont> -security upgrade
<lamont> so y'all already made it painful for some upgrades... thanks. :-P
<akincer> hehe
 * jdstrand doesn't recall that
<slangasek> akincer: "would have worked", but it was still recommending usage that was contrary to the FHS :/
<akincer> There are times (like today) that your first concern is to get it working. Then you go back and nice it up
<slangasek> so it was only a matter of time before the advice in that howto was brought up short by reality
<jdstrand> lamont: I haven't looked at its conffile/config file handling-- I was mostly concerned about a new install there
<lamont> jdstrand: IIRC, query-cache crappage that I was ignoring since the default changed in-source in 9.4
<jdstrand> lamont: if that's too hard, no problem
<slangasek> (e.g., storing nsupdate zones in /etc/bind will also fail for users who have read-only root filesystems)
<akincer> And yes, I could have used 7.10 server, but I would have to upgrade soon anyway
<akincer> figured I'd save some time
<lamont> slangasek: that howto doesn't actually do anything wrong that I saw... other than totally not mentioning dynamic updates and what to do with the zone file
<slangasek> lamont: ah, fair enough
<lamont> and if a package delivers a README.Debian file, it's _always_ a good idea to read that file...
<lamont> if for no other reason than to find out what crack the maintainer is on
<lamont> which reminds me... did we ever decide if it was just dund or all 3 that I'm adding back into bluez-utils?
<slangasek> I think just dund :)
<lamont> stevenK disclaimed knowledge on the subject
<slangasek> would be nice to get Marcel's input
<lamont> slangasek: sounds good to me
<lamont> Marcel == debian maint?
<slangasek> Marcel == upstream
<slangasek> debian maint just introduces gratuitous deltas to the Ubuntu packaging, I don't think he'll have any relevant input ;)
<lamont> heh
<lamont> ok.  I'll turn on dund and fire email at upstream then. :-)
<slangasek> (Marcel was at UDS Boston; dunno if he's coming again to Prague)
<lamont> slangasek: I'm gonna not be in Prague either
<slangasek> lamont: aww
<MountainX>  I need to know what will happen when I change a Group's ID. I want to change my admin GID from 114 to 113. There is another group with ID 113 at the moment. So, how do I proceed? (This is part of setting up an NFS server...)
<mok0> MountainX: you need to move the other users to a different group first
<MountainX> there are no users in GID 113 (name=adm). So if I change admin group to GID 113, then can I change adm group to GID 114 in a second step?
<MountainX> is there a howto for manually syncing passwd and group files?
<MountainX> on #ubuntu they recommended I just try it and see what happens. I would rather read up on the details first however ;)
<mok0> MountainX: then it doesn't matter, go ahead and change it. No howto afaik
<MountainX> mok0 - thx
<kirkland> MountainX: are you going to use groupmod to do it?
<mok0> MountainX: It's really no big deal... only edit the /etc/passwd file
<kirkland> MountainX: or were you planning on editing /etc/* ?
<kirkland> MountainX: man groupmod
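A sketch of the three-step swap groupmod allows, using the GIDs from the question (admin=114, adm=113); the temporary GID 1130 is an arbitrary free value, not anything standard:

```shell
# Swap GIDs 113 (adm) and 114 (admin) via a free temporary GID,
# since groupmod refuses to assign a GID that's already in use.
sudo groupmod -g 1130 adm      # park adm on an unused GID
sudo groupmod -g 113  admin    # admin takes the now-free 113
sudo groupmod -g 114  adm      # adm takes admin's old 114
# NOTE: files on disk keep their old numeric GIDs until you chgrp them
```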
<MountainX> I have no idea how to do this. My goal is to have all GIDs and UIDs sync'd up on my half dozen computers. (I'm setting up NFS.) I will do the best way that is recommended. I had planned on editing /etc/*
<mok0> MountainX: that's fine
<kirkland> MountainX: I'd probably use NIS
<MountainX> If I edit passwd and make a version that I like (say with admin GID = 113) can I just copy that to all the other computers without wreaking havoc?
<kirkland> MountainX: are all of the machines going to have identical groups?
<mok0> kirkland: if you use nis, you have to be aware that it doesn't serve uids < 1000 afair
<MountainX> Once I do this step, I think I'll tackle OpenLDAP next. But I want to sync everything up first so all users have same UID on all machines. And I want the same for groups. SO yes, I will set up the same groups on each machine I think.
<kirkland> MountainX: if you have different distributions (fedora, ubuntu, etc), or even different versions of the same distribution (edgy, hardy), or even different packages installed on different machines with the same distribution, you might have issues
<mok0> MountainX: how many users?
<MountainX> I have some Gutsy and some Hardy atm. And I'm looking for a simple solution to get NFS working. I have about half a dozen users and about the same number of computers. It is a home office. (We have more computers than cars ;)
<mathiaz> MountainX: I'd suggest to use ldap to centrally manage your uid and gid
<mok0> MountainX: just create a passwd and a group file and copy them to all of the workstations
<mathiaz> MountainX: mok0 suggestion ^^ is also worth a try if you want something working now
<MountainX> OK. I will do both :)
<MountainX> I will organize my users first and copy a consistent passwd file to all computers. Then I will try LDAP next.
<mok0> MountainX: get it to work first, then worry about ldap later
<mathiaz> MountainX: once you have ldap running you won't need the passwd and group file synchronization
<kirkland> MountainX: do you care about system users, or only real human users?
<mok0> right, at that point you need to remove the changes to /etc/passwd and /etc/group
<MountainX> I want my admin account to have the same GID on all machines and I want my real human users to each have the same UID on each machine for starters.
<kirkland> MountainX: I've done something similar in the past, syncing only users >= 1000
<mok0> In Ubuntu the paradigm is that the first user belongs to the admin group, and can do sudo (sudo -i)
<mok0> kirkland: yes UID >= 1000 must be adhered to, otherwise a lot of stuff doesn't work for users
<mok0> for example users don't have access to certain devices
<kirkland> MountainX: that helps if you have a situation, such as one workstation running, say, MythTV, but that user/group doesn't exist on your main server you're syncing from
<kirkland> MountainX: you'd erase the mythtv user/group on the clients that have them
<kirkland> MountainX: use your imagination, replacing mythtv with mysql, postgres, something-more-near-and-dear-to-your-heart
<MountainX> So I understand that I can pick one passwd file that I like and edit it a bit (only being concerned about UID >=1000) and then copy it to all clients. I am concerned then about the resulting changes. Will users be able to log in after I copy the new passwd file to the machine?
<mok0> MountainX: sure.
<mok0> MountainX: I suggest you make a passwd file just containing the 6 users and append it to each passwd file
<mathiaz> MountainX: hum... Not sure it's a good idea to copy a complete password file around
<mathiaz> MountainX: you may have specific system account created on some computers so that services can run correctly
<mok0> right
<mathiaz> MountainX: if you copy the complete password file around you may end up in situation where services are not running anymore
<mok0> so it's better just to append the "users" part of the passwd file
<MountainX> ok. I will just change the 6 human users (all with UID>=1000). Then I will append to existing passwd on each machine. (And I assume I will delete the pre-existing lines in each passwd file for those 6 users before saving.)
<kirkland> MountainX: don't overwrite, append
<kirkland> MountainX: yup, i suggest using grep
<mok0> MountainX: yes, exactly
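The extract-then-append approach being described, as a hedged sketch; paths are examples, the UID 1000 cutoff is Ubuntu's convention for human accounts, and `nobody` (UID 65534) has to be excluded explicitly:

```shell
# On the reference machine: pull out the human users (UID >= 1000,
# skipping nobody at 65534).
awk -F: '$3 >= 1000 && $1 != "nobody"' /etc/passwd > /tmp/humans.passwd
# Build anchored patterns like ^alice: for the users being synced:
cut -d: -f1 /tmp/humans.passwd | sed 's/.*/^&:/' > /tmp/humans.pat
# On each target machine: drop any stale lines for those users, then
# append the canonical entries. Written to a .new file so it can be
# reviewed before moving into place (vipw is safer still).
{ grep -v -f /tmp/humans.pat /etc/passwd; cat /tmp/humans.passwd; } \
    | sudo tee /etc/passwd.new > /dev/null
```

The same pattern applies to /etc/group and /etc/shadow, which also carry per-user lines.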
<MountainX> thank you everyone
<mok0> Good luck MountainX
<MountainX> and for making the admin group have the same UID on all machines, are there any gotchas?
<mok0> sounds like fun
<mok0> MountainX: say you have UID 1000. Then make sure that you belong to group "admin", and put that in /etc/suderes
<mok0>  /etc/sudoers
<mok0> It is probably there by default, if it's an Ubuntu system you have
<MountainX> my problem is that admin group has GID 110, 113, 114 etc on different machines. I want to make admin GID the same on all machines.
<MountainX> the reason I want admin GID to be the same is because of the PITA Windows Service for Unix running on my file server.
<mok0> admin has 110 on my machines, it appears
<MountainX> I checked all mine and they range from 110 to 114. I need them to be the same. But I am concerned about gotchas when I change them.
<mok0> MountainX: it doesn't really matter, as long as user "MountainX" belongs to the appropriate admin group on each machine
<MountainX> I am finding that it matters for Services for Unix. At the moment I have SFU all set up but I am stumped by permission denied errors so I'm working through that. This effort to make all admin GIDs the same is part of my effort to fix it.
<mok0> THen I suggest you make gid = 110 on all machines
<mok0> for group admin
<MountainX> (My next step would be to remove Windows and install Ubuntu on the file server, but that is about a 1 week job at least. I have a great backup solution running on the Windows box and I don't know enough yet to get the same running under Linux... but thats off topic.)
<kirkland> rsync ;-)
<MountainX> I know about rsync, but I have to give myself more than a week to learn it well enough to rely on it.
<mok0> rsnapshot -- based on rsync but with a layer on top to keep daily snapshots
<MountainX> so that's why I'm sticking wth the PITA services for unix (I hate it)
<mok0> MountainX: where does that come from? Never heard of it
<MountainX> http://en.wikipedia.org/wiki/Microsoft_Windows_Services_for_UNIX
<mok0> eeek
<MountainX> here's my thread on the difficulties getting the NFS server part working. I still don't have it solved... http://www.interopsystems.com/community/tm.aspx?m=14379
<MountainX> eeek is right :)
<mok0> But you have an Ubuntu system?
<MountainX> all computers except file server run ubuntu (hardy or gutsy)
<MountainX> file server will run ubuntu as soon as I learn more
<mok0> MountainX: normally, you have to export the file systems you want to serve in the file /etc/exports
<mok0> Perhaps you need something similar on SFU
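On the Linux side, a minimal /etc/exports sketch; the network range and options are illustrative only, and root_squash (on by default) is why root can't use the mounts:

```shell
# Sketch: export /home read-write to the LAN, then tell the NFS
# server to re-read /etc/exports. Adjust the subnet to your network.
echo '/home  192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
```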
<rgl> hi.
<MountainX> so what happens to permissions if I go to a ubuntu computer and change admin GID from 113 to 110? Can I really just make that change without breaking anything? (That is, assuming GID 110 has no users assigned at the time of the change).
<rgl> is there a work arround for running hardy beta inside virtualbox?
<mok0> MountainX: the only thing that happens has to do with permissions to read/write to directories
<mok0> so you need to make sure that all files that "belong" to the old gid get owned by the new one
<MountainX> say a user has write permissions because they belong to admin group (when GID = 114). Then I change admin group GID to 110. Does that user lose write permissions?
<mok0> MountainX: yes, but you can change the gid of the file/directory
<MountainX> mok0 - OK, thanks.
<mok0> MountainX: I can now see that the admin group has different gid's on my machines, and it doesn't matter
<MountainX> mok0 - it only matters because my file server is running Windows Services for Unix.
<mok0> hm, ok.
<mok0> I guess you can't trust Microsoft to implement anything correctly
<MountainX> yes, I'm getting that MS software off the file server as soon as I can. But there is a lot to learn in the transition.
<Deeps> why not just use samba in the mean time?
<MountainX> I can't use smbfs or cifs because of the gedit/cifs bug
<nxvl> MountainX: you don't need tu use smbfs, you can use samba
<MountainX> when I set up fstab, I thought I had to specify either smbfs, cifs, or nfs (I'm not considering sshfs).
<nxvl> you can use smbclient on your init
<nxvl> instead of smbfs+fstab
<mok0> That's easy to test
<MountainX> if smbfs and cifs both have the bug with gedit and similar apps, wouldn't samba have the same problem?
<nxvl> MountainX: it sounds logical, but you never know
<nxvl> MountainX: samba is an app, smbfs a kernel module
<MountainX> I'm more than a week into getting NFS to work. I think I'll stick with NFS until I either get it to work or I hit a deadend.
<MountainX> I suspect smbfs and samba use the same (or very similar) protocol
<nxvl> yep, but one uses the kernel, the other doesn't
<MountainX> and I suspect it will cause the same bug that made me switch to nfs
<nxvl> i can be a bug involving not only smbfs, but the kernel also
<nxvl> s/i/it
<MountainX> nxvl - thx. Good to know the difference, but I still think either one will have the same gedit problem. The problem is that samba/smbfs when connected to shares on a Windows server don't allow an open file to be moved/renamed.
<MountainX> therefore, gedit doesn't work.
<nxvl> MountainX: well, that can be a gedit bug also, i'm not saying there isn't a bug with those configurations, just that you don't know :D
<MountainX> the gedit/cifs bug has been discussed for two years. I decided to just resolve the problem by moving to nfs.
<nxvl> i will think that the bug in this case
<nxvl> is windows
<nxvl> :D
<nxvl> if a microsoft product is involved it is always its fault
<nxvl> :P
<mok0> bug #1
<MountainX> haha
<ubotu> Launchpad bug 1 in ubuntu "Microsoft has a majority market share" [Critical,Confirmed] https://launchpad.net/bugs/1
<nxvl> anyway i don't use gedit
<MountainX> yeah, I'm doing my part to solve bug #1
<nxvl> i'm a shell man
<MountainX> is there an easy way to replace GID on all files (regardless of location) on the local disks on my server -- for only those files that have MountainX:admin as the owner?
<nxvl> shell scripting!
<mok0> find . -uid 113 -print
<mok0> sudo find / -uid 113 -print
<mok0> or gid
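mok0's find can be pointed straight at the ownership fixup after a GID change; a sketch assuming old GID 114 and new GID 113, as in the earlier question:

```shell
# Re-group every local file still carrying the old GID. -xdev keeps
# find on the local filesystem (off NFS mounts), and `-exec ... +`
# batches many files into each chgrp invocation.
sudo find / -xdev -gid 114 -exec chgrp 113 {} +
```

Run it with `-print` instead of `-exec` first to review what would change.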
<nxvl> ls -l | grep MountainX | cut -$(i don't remember :P)
<MountainX> nxvl - as a newbie I'm in that situation where everything I need to do has a step that leads to something else I don't know how to do ;)
<mok0> MountainX: then come back here and ask again :-)
<nxvl> MountainX: it is the best situation
 * nxvl loves not to know
<mok0> hopefully your IRC client still works ;.)
<nxvl> that makes you learn new things
 * nxvl loves to learn
<mok0> MountainX: but look at "find" it is a great tool
<MountainX> yeah, I am having a good time learning Linux. I am never going back to Windows. Although when I get really frustrated, I think about it for a few minutes before I come to my senses
<nxvl> MountainX: man is your friend
<nxvl> what i'm most grateful to linux for is how it has made me an investigative person
<nxvl> and made me learn things i had never imagined before
<mok0> yeah that's true
<MountainX> what I am most grateful for is the sense of freedom of choice and ability to get to the bottom of anything.
<nxvl> MountainX: if you are going to do sysadmin work, you MUST learn some scripting language, i recommend bash, because it's what you will use most
<nxvl> MountainX: so, find some bash books and start reading
<nxvl> walk before run
<MountainX> nxvl - OK. I will
<nxvl> MountainX: getting to the bottom of anything makes you learn and see things you never imagined were there
<nxvl> also
<nxvl> can't you use sshfs?
<nxvl> if all of your clients are linux, and server is linux
<mok0> hey, sshfs, what's that?
<MountainX> sshfs I am told is not good for large files. I copy videos, virtual machine images, etc.
<nxvl> sshfs is easier and safter to use
<nxvl> MountainX: mounting remote folders via ssh :D it rocks!
<nxvl> MountainX: well, i haven't tried it with large files IIRC, so i can't tell
<mok0> cool
<MountainX> nxvl: do you recommend it for files as large as 2 GB? I heard it was slow and prone to errors on large files.
<nxvl> MountainX: but for quick things it rocks
<MountainX> ok
<nxvl> mok0: is like scp, but as a fs or something like that
<mok0> cool
<kirkland> sshfs has high cpu overhead on both client and server, due to encryption of *everything*
<nxvl> MountainX: also, why is that you need gedit that hard?
<mok0> kirkland: well that's ssh
<nxvl> kirkland: encryption of everything rocks!
<nxvl> kirkland: you are talking to a man that tunnels everything via ssh, so it doesn't really matter :P
<kirkland> nxvl: :-)
<MountainX> this wasn't supposed to be hard when I started... I just installed Hardy on a computer and set it up the way I set up Gutsy before. But then gedit and other apps would not edit any files. (All files are on the Windows file server.)
<kirkland> nxvl: me too, except when I'm backing up 1TB of data from one machine, to another sitting right next to it
<nxvl> i will eat all my CPU one way or another encrypting everything
<mok0> It's not really encrypted... only during network transfer
<MountainX> I thought switching from cifs to nfs would be easy ;)
<kirkland> nxvl: in which case, I've seen a 40% improvement using NFS rather than rsync+ssh
<nxvl> kirkland: i only copy txt files
<mok0> I will be looking at openafs shortly
<nxvl> kirkland: the large files i copy are logs, and i rotate them always
<kirkland> nxvl: i have some very large qemu vm images that don't compress well
<nxvl> MountainX: hardy is still beta, report it to launchpad and ping here for a solution
<nxvl> kirkland: vmware server, it is free :D
<mok0> kirkland: what format are the images?
<kirkland> nxvl: free as in beer
<nxvl> kirkland: yep
<nxvl> i like beer
<nxvl> :D
<kirkland> nxvl: kvm/qemu free as in freedom (and beer)
<nxvl> ssh -X virtualbox :P
<MountainX> nvxl - the gedit/cifs bug has been around for more than 2 years. gedit devs won't fix it because they say it is a file system problem. The cifs/samba people won't fix it because they say that a file system shouldn't allow an open file to be renamed.
<nxvl> MountainX: an open file shouldn't be renamed, and that's how it works
<nxvl> whether that's samba or a local fs
<nxvl> i'm not a samba expert
<MountainX> nxvl - tell that to the gedit devs ;)
<mok0> MountainX: can't you use another editor?
<nxvl> as you maybe have notice i'm a crypt/security man
<nxvl> :D
 * nxvl loves vim
<MountainX> yes, but the problem happens with other apps too. I thought it would be easiest to get rid of cifs/samba.
 * mok0 loves emacs
<nxvl> since i don't have X server running on servers
<MountainX> this problem is on the clients
<nxvl> mok0: emacs is nice for long editing sessions, but for quick edits, it sucks
<mok0> nxvl: yeah, I use vim for those as well
<nxvl> ok, going out for a wile
<nxvl> while*
<mok0> hehehe
<nxvl> or however it should be written :S
<nxvl> bbl
<mok0> going out for a wife
<MountainX> ^ that's how I read it  lol
<mok0> I read
<nxvl> mok0: not even joke about it!
<nxvl> i'm still too young
<mok0> nxvl: there's still time :-)
<mok0> MountainX: usually root is not allowed to access NFS mounted shares
<MountainX> yes, that's how I have it set up
<mok0> MountainX: that might explain why you could not cd to /mnt
<mok0> (c.f. your posting on that SUA board)
<MountainX> mok0 - let me show you something....
<MountainX> it may take a few minutes...
<mok0> sure
<MountainX> mok0 - here is my current problem:
<MountainX> sudo cp /tmp/Basket/ /home/user/Documents/Baskets/
<MountainX> cp: accessing `/home/user/Documents/Baskets/': Permission denied
<MountainX> ... /home/user/Documents/ is an NFS mount
<mok0> MountainX: ls -ld /home/user/
<MountainX> drwxrwx---  2 user admin           64 2008-04-01 02:14 Documents
<mok0> hm
<mok0> MountainX: ls -ld /home
<MountainX> mok0 - that was ls -la
<MountainX> should I repeat with ls -ld?
<mok0> ok
<mok0> -d just means not to enter the directory
<MountainX> ok
<MountainX> here is ls -ld /home == drwx------ 48 user user  4096 2008-04-01 15:56 user
<mok0> ... and you are currently logged on as "user"?
<MountainX> yes
<mok0> I think /home should be owned by root
<mok0> and have mode 755
<MountainX> I changed all that based on several Ubuntu security guides.
<mok0> then directory /home/user should be owned by user:user
<MountainX> OK, so maybe I changed it one level too high...
<mok0> and have mode 751
<mok0> here's my home:
<mok0> drwxr-x--x 245 mok mok 36864 2008-04-01 23:00 /u/mok
<mok0> and:
<mok0> drwxr-xr-x 3 root root 0 2008-04-01 14:05 /u
<MountainX> maybe I pasted the wrong thing earlier. My /home is the same as yours:
<MountainX> ls -ld /home/
<MountainX> drwxr-xr-x 5 root root 4096 2008-04-01 11:55 /home/
<mok0> looks ok
<MountainX> and here is /home/user again (for user "user")
<MountainX>  ls -ld /home/user/
<MountainX> drwx------ 48 user user 4096 2008-04-01 15:56 /home/user/
<mok0> I'd make that mode 751
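Mode 751 decoded, as a runnable sketch ("user" is the example account from the transcript):

```shell
# 751 = owner rwx, group r-x, others --x. The lone execute bit for
# "others" allows traversing /home/user to reach shared subdirs
# (like an NFS-mounted Documents) without being able to list it.
sudo chmod 751 /home/user
stat -c '%a' /home/user   # verify the mode took
```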
<MountainX> OK. I can try that change. But I'm not sure how the chmod command will work given that /home/user/Documents is an NFS mount using Windows SFU.
<mok0> no
<mok0> but worth a try
<MountainX> ok
<mok0> you may have to do it under SFU
<MountainX> can I change just  /home/computeruser/Documents/Baskets/ to test? Will access be granted if the parent has more restrictive permissions? I guess not.
<mok0> right
<mok0> If you are not permitted to traverse a directory, for instance (the x bit)
<MountainX> ok
<mok0> you need to check all directories in the path
<MountainX> ok
<mok0> from your posting, you have some very strange uid/gid's: 4294967294 ??
<MountainX> I figured out those strange gid's
<mok0> ok, good
<MountainX> They are when SFU has no GID assigned. It is tough to get rid of them because all new files/folders automatically get created with the Administrators group as owner. But that doesn't translate to SFU. So everything ends up with no valid GID until you do chown on it. But when I do chown, then the NTFS permissions seem to disappear...
<mok0> hmm
<MountainX> what causes "omitting directory" when trying this cp command? ~$ cp /tmp/Basket/ /media/Shared/Basket/
<MountainX> cp: omitting directory `/tmp/Basket/'
<mok0> eerh. You can't copy a directory with cp, you  need cp -r
<MountainX> ok. finally I asked a question with an easy answer ;)
<MountainX> thx
<mok0> heh I feel good now
<MountainX> when I mount NFS shares under /media/ for user myuser, are the following permissions OK: drwxr-xr-x 12 root root 4096 2008-04-01 00:17 /media/
<mok0> great
<MountainX> mok0-thx
<mok0> so SUA and Linux agree on the uid's
<MountainX> mok0  - I have been working to make that the case. Services for Unix has a user and group mapping tool I've been using. I'm mapping my Ubuntu group admin to the Windows group Administrators. That's why I need Linux group admin to have the same GID on all my computers.
<mok0> OK, I understand
<MountainX> and the Linux group root will not be mapped at all.
<mok0> That might be uid 0, though
<mok0> ah
#ubuntu-server 2008-04-02
<Deeps> heh, in this time you probably could have installed an ubuntu server, had nfs serving from there, and be half way to finding your backup solution too
<Deeps> oh, you've got it working now, nice :)
<MountainX> Deeps - I was thinking the same thing
<Deeps> key search terms would be rsync incremental backup
<MountainX> No, it is not working now. I thought it was, but I still have permissions issues.
<Deeps> oh
<Deeps> heh
<MountainX> My goal is to put Ubuntu on that file server eventually. This might be the right time to start. This morning I thought I would begin by testing NFS on another server running Ubuntu as a first step. That led me to start asking a bunch of new questions about NFS version 4...
<mok0> MountainX: Just use whatever comes with Ubuntu
<Deeps> apt-get install nfs-server
<MountainX> mok0 - true. I tend to make things too complicated.
<Deeps> or, probably, apt-get install nfs
<Deeps> you use what your package manager gives you unless you want the headache of manually maintaining your software
<MountainX> I think I need nfs-common, portmap and nfs-server-something
<mok0> MountainX: it's better to get something working first
<Deeps> nfs / nfs-server will be dependent on those packages, and will install them automatically
<MountainX> Deeps - OK
<Deeps> Package nfs-server is a virtual package provided by: unfs3 0.9.17+dfsg-1 nfs-user-server 2.2beta47-23 nfs-kernel-server 1:1.1.1~git-20070709-3ubuntu1
<MountainX> thx
<Deeps> so that'd be nfs-kernel-server i guess
<MountainX> Deeps - yes, that's the one I was thinking of
<mok0> yup, that's the one
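A minimal sketch of the server-side setup being discussed; the export path `/srv/nfs` and the client subnet are placeholder values:

```shell
# Install the kernel NFS server (pulls in nfs-common and portmap as dependencies)
sudo apt-get install nfs-kernel-server

# Publish a directory; /srv/nfs and 192.168.1.0/24 are example values
echo '/srv/nfs 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports

# Re-read /etc/exports without restarting the daemon
sudo exportfs -ra
```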
<MountainX> I would still like to know if it supports nfs version 4. just curious.
 * mok0 sheds a tear from watching "extreme makeover" 
<mok0> :-)
<MountainX> is anyone here watching Oprah's "A New Earth" online? I think it's good.
<mok0> Online?
<MountainX> mok0 - http://www.oprah.com/obc_classic/webcast/archive/archive_download.jsp
<MountainX> the player doesn't run on Linux, so I don't watch live. I just download and play in VLC later.
<mok0> Hmm, I'm on my Powerbook and it doesn't work here either. Booh, Oprah
<MountainX> I think Ubuntu does support NFS version 4. On the client, the mount is like this: mount -t nfs4
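For the `mount -t nfs4` case mentioned above, a hedged example (server name, export, and mount point are placeholders); note that NFSv4 mounts are relative to the server's pseudo-root, so the path is often just `/`:

```shell
# One-off mount; "fileserver" and /mnt/share are example names
sudo mount -t nfs4 fileserver:/ /mnt/share

# Or the equivalent /etc/fstab line:
# fileserver:/  /mnt/share  nfs4  rw,hard,intr  0  0
```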
<MountainX> I am going to eat. Thank you everyone! I appreciate all the help -- and the encouragement to go ahead and load Ubuntu on my file server too :) I will be back another day.
<MountainX> (or maybe later today)
<mok0> Huh? Oprah just crashed Safari :-D
<mok0> MountainX: see you
<MountainX> mok0 later
<MountainX> and thx
<mok0> np
<mok0> No, Ubuntu NFS is v3
<BarryToeman> How different is a Ubuntu command-line install vs base Ubuntu Server install (no extra packages installed)?
<Deeps> if i'm right, server install install ubuntu-server package (and all the wonders that come with it)
<Deeps> if i'm right, server install installs the ubuntu-server package (and all the wonders that come with it), while cli only installs ubuntu-standard
<Deeps> whoops
<Deeps> see packages.ubuntu.com for details as to what's in there
<Deeps> (or apt-cache)
<BarryToeman> Deeps: thanks.
<teamcobra> I've got a colocated server running hardy, and I set up ldap today.... now the system hangs at starting kernel log daemon
<sommer> teamcobra: it may be bug #155947
<ubotu> Launchpad bug 155947 in libnss-ldap "ldap config  causes Ubuntu to hang at a reboot" [Undecided,Confirmed] https://launchpad.net/bugs/155947
<sommer> I believe there are work arounds in the comments
<teamcobra> ok, it booted properly when removing "ldap" in /etc/nsswitch... will ldap-created users be able to log into the machine, and if not, is there a workaround? (it is an nxserver among other things)
<sommer> I think so, but you'll want to test that
<sommer> that bug is a priority so, hopefully it'll be fixed before hardy release if not sooner :)
<genii> Just wanted to know the new package which will be replacing webmin function, if anyone may know
<sommer> !ebox
<ubotu> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See the plans for Hardy at https://wiki.ubuntu.com/EboxSpec
<teamcobra> sommer: nope, ldap-created users can't log in :/
<genii> sommer: Thanks
<sommer> genii: np
<sommer> teamcobra: ya, I think the best work around is to rearrange the start order of openldap
<sommer> there should be details in the comments to that bug
<sommer> I've been working on setting up a test environment for that bug myself, but have had issues with finding the time lately
<teamcobra> that's what I was thinking, looking into it
<flyback> hey
<flyback> apparently I canucked up and 6.06 lts does support smp, the installer just installed the wrong kernel
<flyback> what do I need to do to force all the smp-supporting kernel packages to install (headers, etc)? I want to make sure I have all the steps right so I can repeat this 9 more times
<flyback> also is there anything in the supplied kernel that would conflict with vmware server
 * flyback is starting to like vmware server 2 beta 2 for windows
<teamcobra> back, going to try to start slapd earlier in the bootprocess
<teamcobra> will report how it goes
<sommer> cool
 * flyback bbl
<teamcobra> sommer: I don't think I did it properly.... do you have any spare time?
<sommer> some
<sommer> fire away
<teamcobra> what's the easiest way to get slapd running before klogd (or any other daemons that need to run under their own user)?
<teamcobra> back ;p
<sommer> teamcobra: adjust the scripts in /etc/rc*.d
<sommer> I think you want to look at /etc/rc3.d first
<teamcobra> ok
<sommer> lower numbers start first :)
<sommer> I'd try starting slapd right after the networking
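The ordering rule sommer describes is just lexical sorting of the `S<NN><name>` symlink names. A sketch in a scratch directory (the numbers here are examples, not the real slapd/klogd defaults):

```shell
# SysV init starts scripts in lexical order of their S<NN><name> links.
RCDIR="$(mktemp -d)"                 # stand-in for /etc/rc2.d
touch "$RCDIR/S13slapd" "$RCDIR/S19klogd"

# Whatever sorts first starts first
FIRST="$(ls "$RCDIR" | head -n 1)"
echo "$FIRST"

# On a real box the change is just a rename, e.g. (numbers hypothetical):
#   sudo mv /etc/rc2.d/S19slapd /etc/rc2.d/S13slapd
```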
<teamcobra> it seems like networking starts before klogd as I can ping the device (static ip setup)
<teamcobra> changing the other rc folders now
<teamcobra> hrm, still didn't fix it
<teamcobra> but looking at the log... it doesn't look like slapd started after networking where I told it to ;p
<sommer> what if you change nsswitch.conf similar to: https://bugs.launchpad.net/ubuntu/+source/libnss-ldap/+bug/155947/comments/41
<ubotu> Launchpad bug 155947 in libnss-ldap "ldap config  causes Ubuntu to hang at a reboot" [Undecided,Confirmed]
<teamcobra> one sec ;)
<teamcobra> still hanging :/
<sommer> teamcobra: did you try this order: https://bugs.launchpad.net/ubuntu/+source/libnss-ldap/+bug/155947/comments/49
<ubotu> Launchpad bug 155947 in libnss-ldap "ldap config  causes Ubuntu to hang at a reboot" [Undecided,Confirmed]
<teamcobra> yes, will make sure it is still set at s13 after this fsck
<teamcobra> but pretty positive it is
<sommer> flyback: I would just install anything package with -dev in the name... I guess, not sure about vmware, but I've heard it works well in gutsy and hardy
<flyback> thx
<sommer> teamcobra: maybe try this: https://bugs.launchpad.net/ubuntu/+source/libnss-ldap/+bug/155947/comments/36
<ubotu> Launchpad bug 155947 in libnss-ldap "ldap config  causes Ubuntu to hang at a reboot" [Undecided,Confirmed]
<flyback> now I just gotta find out if I can have a xvnc session off the desktop ubuntu livecd :P
<teamcobra> when making a new file in rcS.d, should I just touch it as root?
<sommer> maybe, but I'd think it'd work either way
<teamcobra> 1 sec, rebooting again
<teamcobra> S13 didn't fix it
<sommer> have you tried comment 36 of that bug?  maybe it can use the cached creds
<teamcobra> 1sec
<teamcobra> still hangs
<sommer> I guess maybe try this guys script: https://bugs.launchpad.net/ubuntu/+source/libnss-ldap/+bug/155947/comments/42
<ubotu> Launchpad bug 155947 in libnss-ldap "ldap config  causes Ubuntu to hang at a reboot" [Undecided,Confirmed]
<sommer> teamcobra: that script is pretty large, I'd look through it and make sure that's what you want to do before using it
<sommer> I'll try to get a test machine setup tomorrow, and hopefully confirm a workaround
<flyback> ugh wish I could figure this out :/
<LiENUS> what package contains nbackup for firebird 2?
<teamcobra_> hrm, looks like it booted up after running that script, but ssh logins don't work
<teamcobra_> still
<kirkland> sommer: teamcobra_: i was actually trying to reproduce that bug today
<kirkland> sommer: teamcobra_: i could get hardy to hang on login, but not on boot
<teamcobra_> kirk: install gutsy, and follow the first 2 pages of this:
<teamcobra_> it'll reproduce
<kirkland> teamcobra_: hmm, i was trying with hardy
<teamcobra_> http://www.howtoforge.com/openldap-samba-domain-controller-ubuntu7.10
<teamcobra_> erm, I meant hardy
<teamcobra_> sorry, my brain is starting to scramble
<teamcobra_> w00t, just ssh'ed
<teamcobra_> thank goodness, sommer.... you've saved me a huge headache
<teamcobra_> and ldap does indeed work ;)
<kirkland> teamcobra_: which comment was your workaround?
<kirkland> teamcobra_: for my reference?
<teamcobra_> kirk: the script, that's gz'ed
<teamcobra_> 1 sec
<sommer> teamcobra_: good news
<teamcobra_> I did have to manually edit my ldap conf afterward to reflect my ip
<sommer> ya I noticed that script had a hard coded one
<sommer> did that work around fix it?
<teamcobra_> yep
<sommer> kirkland: it's attached to comment 42
<teamcobra_> https://bugs.launchpad.net/ubuntu/+source/libnss-ldap/+bug/155947/comments/42
<ubotu> Launchpad bug 155947 in libnss-ldap "ldap config  causes Ubuntu to hang at a reboot" [Undecided,Confirmed]
<teamcobra_> beat me to it ;)
<sommer> heh, link is better
<kirkland> sommer: thanks
<teamcobra_> ack, that sudo-ldap that gets installed broke my sudo however
<teamcobra_> which is a biiig problem
<sommer> should be able to remove that package, I'd think
<teamcobra_> trying now
<teamcobra_> wooowoowooo, all better now (removed sudo-ldap and reinstalled sudo from the recovery kernel)
<teamcobra_> thank goodness for kvm-over-ip
<sommer> party!
<teamcobra_> yeah, party after 8h of headaches :D :D
<sommer> heh, time to drown the woes
<teamcobra_> heheheh, or smoke them out of their holes ;p
<sommer> lol
<sommer> that's good
<teamcobra_> this should be an easy fix to implement then before the final release, since that script did the trick ;) just a matter of cleaning it up and such
<sommer> ya, I imagine the fixes will be applied directly to whichever packages need adjusting
 * teamcobra_ <3's seeing bugs squashed ;)
<foo> teamcobra_: You're sick
<foo> teamcobra_: :)
<teamcobra_> heeheh
<themime> how do i partition my disk via command line
<foo> themime: look into cfdisk
<themime> thanks
<themime> ubuntu created 3 partitions, a linux, an extended and a swap.  whats the difference between the ext and the linux partition? why 2? whats the other meant for?
<themime> (i mean other one besides swap, i know what that ones for)
<kgoetz> extended is for logical partitions to go in
<themime> ?
<kgoetz> the extended partition is for logical partitions to go in.
<kgoetz> the 'linux' is ext3 (the third extended filesystem), but a primary partition
<themime> i don't understand what you mean by logical partition.  why did ubuntu create it? what did it place in there (more specific than "logical partition")
<kgoetz> the swap is inside the extended partition
<themime> ah ok
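The background to that exchange: an MBR disk table only holds four primary partitions, so one slot can be an "extended" partition that acts purely as a container for any number of logical partitions. A hedged illustration (the device name is an example):

```shell
# List the partition table; /dev/sda is an example device
sudo fdisk -l /dev/sda

# Typical Ubuntu default layout this would show:
#   /dev/sda1   primary    Linux (ext3)   - the root filesystem
#   /dev/sda2   extended                  - container only, holds the logicals
#   /dev/sda5   logical    Linux swap     - logical numbering starts at 5
```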
<themime> i have an lzm of a kernel from "slax" that i want to use instead of the normal kernel.  is this compatible with ubuntu somehow?
<flyback> I can't seem to figure out which is the stock kernel for ubuntu server 6,06lts
<flyback> when looking thru a list of packages
<flyback> linux, linux-image, kernel, wtf?
<soren> flyback: I'm not sure what you're really looking for?
<soren> themime: No clue. Try.
<soren> themime: Why, though?
<teamcobra> hrmm, after I got ldap working, none of my users get panels once they log in (no top/bottom panel, just background image)... not even newly-created ones
<teamcobra> and it appears that all necessary directories are created upon login
<teamcobra> looks like the panel is segfaulting
<soren> You're not likely to get qualified help for gnome panel issues here.
<teamcobra> k.....
<tonyyarusso> soren: What?  You don't run a GUI on your servers?  I'm shocked!
<Jeeves_> OMG!?
<tonyyarusso> :P
<soren> tonyyarusso: Not if anyone asks, anyway :)
<tonyyarusso> haha
<_ruben> heheh
<teamcobra> well, this is a rdesktop server ;p ;p
<teamcobra> looks like it's not initializing the HAL either upon gui logins :p
<teamcobra> heh, what a catch-22, this problem falls right between server and desktop ;)
<kraut> moin
<teamcobra> here's the hal error, if anyone has an idea: Could not init PolicyKit context: (null)
<soren> teamcobra: There's no catch 22. It's a desktop thing.
<soren> What are you using to serve rdesktop, by the way?
<soren> xrdp?
<teamcobra> nx
<soren> nx does rdp these days?
<teamcobra> nop, just nx
<teamcobra> sorry for the confusion ;)
<soren> no worries :)
<faulkes-> gonna be a long day
<faulkes-> but I finally have access to our facilities to get at that damnable iscsi box
<_ruben> faulkes-: nice :)
<_ruben> just an equallogic reseller representative give a demo of one of their san devices .. quite impressive
<_ruben> +had
<Deeps> any benefits to ipv6 yet, other than 'because you can' and ipv6porn?
<akincer> Unless your provider supports IPv6, not really AFIK other than getting your feet wet in it
<Deeps> i guess i'll be going for a dip for the fun of it then
<Deeps> forced me to learn how to handle the ripe db already, which is nice
<Deeps> and anyone here have any idea how to produce logarithmic traffic graphs with mrtg/rrdtool?
<Deeps> eg, i have a 100mbit link that usually sits around 100kbit
<Deeps> until i do an apt-get update+upgrade
<Deeps> then the graphs become worthless cuz of the spike
<Deeps> google finds a logconvert.pl patch for mrtg that 404s
<Deeps> rrdtool indicates there are command line flags that can be issued, but i'm using mrtg-rrd to generate the pages and i cant work out how to alter that, and suspect that the change has to be made within mrtg rather than the website
<Deeps> and/or scrapping mrtg entirely and using rrd on its own to poll the interfaces + update its db
<Deeps> nm, just discovered logscale option
<Deeps> spent 3 hours on google yesterday with no joy :(
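For reference, when driving rrdtool directly the spike problem can also be handled with its logarithmic y-axis; a sketch, assuming a `traffic.rrd` with an `in` data source (both names hypothetical):

```shell
# --logarithmic switches the y-axis to log scale, so a one-off apt-get spike
# no longer flattens the everyday ~100kbit baseline into an unreadable line
rrdtool graph traffic.png \
    --logarithmic --units=si \
    DEF:in=traffic.rrd:in:AVERAGE \
    LINE1:in#0000ff:"inbound"
```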
<mzungu> hi ppl - i'm trying to use apt-proxy on my pc so that other pc's will update through mine instead of directly. i have a slow pipe, and my pc is kept up to date, so only my pc should have to fetch the updates.  I'm having a problem with apt-proxy-import - which seems to be flagged under bug #4844.  Does anyone know how to get the .debs already in the apt cache into apt-proxy manually??
<ubotu> Launchpad bug 4844 in apt-proxy "apt-proxy-import says "no suitable backend found"" [Medium,Confirmed] https://launchpad.net/bugs/4844
<blue-frog> mzungu: will not be easy as they are put in folders if memory serves but you could install apt-cacher in a split second, configure it even faster and copy all your existing deb into apt-cacher folder
<mzungu> ah - so you recommend that over apt-proxy?
<blue-frog> then sources.list have to be tweaked with http://apt-cacher-host:3142/archive.ubunt...
<mzungu> ok
<blue-frog> I used apt-proxy for a long time until I got fed up with minor things but lots of them
<mzungu> similar to apt-proxy, i guess
<mzungu> ok - so apt-cacher is sold?
<blue-frog> same stuff maybe less options but well works well
<mzungu> s/sold/solid ;)
<blue-frog> so far so good.  you need to tweak 2 files
<mzungu> thanks - lemme go try that instead - and dump apt-proxy :(
<blue-frog> one in /etc/default to make it start automatically at boot
<mzungu> ok
<blue-frog> and eventually its conf file if you want to move its homedir folder
<mzungu> ok
<blue-frog> (don't forget to restart the service if you do so..)
<mzungu> he he - yes
<mzungu> many thanks, blue-frog  - i'll give another shout later if i have probs
<blue-frog> then you will need to copy all deb in /path/to/apt-cacher/packages/
<mzungu> sure
<blue-frog> might need to do chown -R ww-data  to have all the deb readable
<blue-frog> www-data
<mzungu> i'll google apt-cacher as well to see what pops up ;)
<blue-frog> it's straight forward anyway...
<mzungu> yep
<mzungu> let's hope so....
<mzungu> apt-proxy was straight forward too, until it came to the import - which is the whole point of the exercise ;)
<fromport> blue0frig/mzungu: you can either tweak the sources _or_ use the apt.conf line Acquire::http::Proxy "http://[cacheserver]:3142";
<mzungu> this is for apt-cacher or apt-proxy?
<fromport> my experience with apt-cacher is positive, but dont combine ubuntu/debian machines. That will go wrong. I ended up with apt-cacher-ng, which works well with both distros but has a memory leak
<fromport> mzungu: apt-cacher
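The `Acquire` line fromport mentions can live in a file of its own on each client; a sketch with a hypothetical cache hostname:

```shell
# Point apt at the cache for all HTTP fetches (apt-cacher listens on 3142 by
# default); "cacheserver" is a placeholder hostname
echo 'Acquire::http::Proxy "http://cacheserver:3142";' | \
    sudo tee /etc/apt/apt.conf.d/01proxy
```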
<mzungu> ok
<mzungu> i only have ubuntu machines - all gutsy
<mzungu> well - in reality, there's my pc, and now the wifes ;) - so i want that to update through mine, as mine is always up-to-date - and we have a slow internet connection
<mzungu> i did also just notice that apt-cacher is not in the repositories - so prolly i'll have to get the source anyway
<fromport> did you enable universe/multiverse in your setup ?
<fromport> apt-cache search apt-cacher |wc -l   gives me 4 lines ;-)
<mzungu> where i'm at is: stuck importing debs from apt cache into apt-proxy. so blue-frog suggested apt-cacher - i've yet to install it
<mzungu> ah - ok - yes, i have uni/multi
<mzungu> but lemme check... something may have got screwed up with apt-proxy :(
<mzungu> i reverted my sources.list - but seems it now needs an update...
<mzungu> yes - now it's fetching the whole list again ;(
<mzungu> DAMN apt-proxy ;)
<fromport> so it can only get better ;-)
<mzungu> yup! ;)
<mzungu> all i'm trying to do is save downloading all the updates since gutsy came out!  mebbe there's a moral here - perhaps it should be an install option to (a) have a machine act as proxy, and (b) the installer ask if there's already a proxy on the lan - this can't be an uncommon situation
<mzungu> this would also lighten the load a bit on canonical's servers
<fromport> just copy al *.deb files from /var/cache/apt/archives/ from your first updated machine to any other machine will also do the trick
<mzungu> ah - ok - didn't know that
<mzungu> so the apt cache is searched first - i thought there would be complications in terms of the packaged.db or something
<fromport> nope
<fromport> when you do a "apt-get update" it will see the files allready "downloaded" and apt-get dist-upgrade wont download those files again
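fromport's trick, spelled out; the hostname is a placeholder:

```shell
# Seed the second machine's cache with the debs already downloaded on the first
scp /var/cache/apt/archives/*.deb otherhost:/tmp/debs/
ssh otherhost 'sudo mv /tmp/debs/*.deb /var/cache/apt/archives/'

# On otherhost, apt now treats those packages as already downloaded
ssh otherhost 'sudo apt-get update && sudo apt-get dist-upgrade'
```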
<mzungu> ok - so in my case, this should work - for now - but still, to keep copying before updating is a pain
<fromport> yep , that's why i use apt-cacher-ng (about 6 machines to update)
<mzungu> you mentioned a memory leak? - how bad is that?
<mzungu> well, the update did the trick, and i now have apt-cacher - lemme go play with it, and i'll give you a shout later
<mzungu> many thanks for the help, and info
<spiekey> hi
<spiekey> i am wondering how this "security" bug is affecting opensshd systems http://www.securityfocus.com/bid/28531/discuss  ?
<spiekey> I did some googling and found out that in some cases it executes ~/.ssh/rc, but if it's your own rc file it doesn't really matter. Does it?
<lamont> spiekey: well, if I gave you only specific commands that you could  access via forcedcommands in ~/.ssh/authorized_keys, and also gave you write access to .ssh/rc, well then you could bypass my restrictions, and that might not be good. :-(
<lamont> OTOH, if I'm locking down ssh and I leave you with write access to .ssh/rc, then I'm not very good at locking things down, am I?
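An example of the kind of lockdown lamont describes: a single entry in `~/.ssh/authorized_keys` restricted to one forced command (the command, key material, and comment here are illustrative):

```shell
# One line in ~/.ssh/authorized_keys: whatever the client asks to run, sshd
# executes the forced command instead, and the extra options close off
# port forwarding, X11 forwarding, and interactive ttys
command="/usr/bin/rsync --server --daemon .",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA...example... backup@client
```

This is exactly why a writable `~/.ssh/rc` matters: it runs on login before the user's shell or forced command, so it can bypass restrictions like these.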
<rhineheart_m> hello.. Is there an .asp server for ubuntu-server?
<zul> mathiaz: hey I saw your samba patch for ucf it looks good
<mathiaz> zul: thanks. slangasek is going to upload a new version of samba that fixes another bug
<zul> mathiaz: sounds good
<spiekey> lamont: who would leave ~/.ssh/rc writeable by anyone??
<lamont> spiekey: muppets
<spiekey> this is more an architecture bug, not really security related :)
<dthacker-work> specifically Fozzy
<spiekey> nothing to worry about i guess. right?
<lamont> spiekey: well, as long as you're not a muppet. :-)
<lamont> I expect that there are enough muppets in the world that we'll see a backport of the fix
<lamont> OTOH, I'm less concerned that I was when I first read the securityfocus non-information bulletin
<spiekey> hehe
<spiekey> i actually came here for some udp+nmap question. If i want to scan a udp port in the same subnet, it seems to work. But if the destination is a diffrent network then i keep seeing arp requests and no udp packets
<spiekey> and no arp reply
<spiekey> any idea why?
<lamont> sounds like you have b0rked routing
<lamont> if it's off-subnet, then you won't see an arp-reply
<spiekey> so arp replys only work for same subnets?
<lamont> arp is a discovery mechanism whereby machines on the local subnet translate layer 3 (IP) addresses into layer 2 (ethernet) addresses.
<lamont> for off-subnet destinations, you should know the IP of the router (in the routing table), and then you should see an ARP request for the router's IP, not the destination IP
<lamont> ARP requests, being layer 2, are not forwarded off subnet
<lamont> OTOH, if the router is configured to do proxy-arp, then it'll respond for any address that is off-subnet, with its ethernet addr.
<lamont> that's not generally how it's done in most places, though
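Two quick checks that make lamont's point visible on the scanning host (addresses and interface name are examples):

```shell
# Which next hop (and therefore whose MAC) is used for an off-subnet target
ip route get 192.0.2.10

# The ARP cache: off-subnet destinations never appear here, only the
# default router and on-link neighbours do
ip neigh show

# Watch the ARP traffic itself while the scan runs; eth0 is an example
sudo tcpdump -n -i eth0 arp
```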
<spiekey> lamont: thanks a lot!
<spiekey> i get the idea
<rhineheart_m> hello.. Is there an .asp server for ubuntu-server?
<sommer> rhineheart_m: I wouldn't hope for much seeing as .asp is a Microsoft technology, but I found this thread: http://forums.burst.net/archive/index.php/t-2132.html
<zul> rhineheart_m: you might try libapache-asp-perl
<rhineheart_m> thanks..
<glycoknob> hi folks
<glycoknob> i have some questions regarding security - the wiki was not a great help - is propolice enabled for packages in server-feisty? and what about grsecurity/pax, are there pre-built kernel images or source patches? or maybe someone can throw some links at me
<lolichan> Can anyone recommend some sort of server statistic packagey thing? As in that'll show graphs and stuff of bandwidth/cpu usage/etc. ( ¯3¯)
<dthacker-work> !mrtg
<ubotu> Sorry, I don't know anything about mrtg - try searching on http://ubotu.ubuntu-nl.org/factoids.cgi
<dthacker-work> lolichan: cacti, zabbix, nagios, or roll your own with mrtg and rrd
<lolichan> oh ho, those look pretty good.
<lolichan> thanks :3
<dho_ragus> so, what is the best way to get centralized credentials across multiple linux systems?
<dho_ragus> and what is the best place for me to find information on how to implement features of that nature in ubuntu?  i assume ubuntuguide.org won't be the place...  their LDAP section is incomplete.
<dho_ragus> ...is LDAP the only/best way to do it?
<good_dana> dho_ragus: LDAP is the best way to do it
<good_dana> i'm having a problem with setting up my SSH server, i cant access it from remote networks, however the port is forwarded
<dho_ragus> good_dana: any info in ssh -vvv ?
<good_dana> neither ssh nor sshd accept -vvv or -v as a command switch
<dho_ragus> ssh should...
<sommer> 4
<dho_ragus> -vvv will spit out the version, then debug: [1234] lines
<good_dana> http://pastebin.ca/967890
<dho_ragus> good_dana: oh, i mean ssh -vvv user@host
<good_dana> haha
<good_dana> i'm just connecting through putty
<dho_ragus> what's the IP# you're trying to connect to?
<good_dana> i sent it to you in a pm
<good_dana> i think
<good_dana> haha
<good_dana> dho_ragus: 64.200.16.140
<dho_ragus> port 22 connection refused, so we know at least that much.
<dho_ragus> alternate port?
<good_dana> i'm connecting from a different external ip address
<good_dana> in the 64.200.16.x range
<good_dana> so it's definitely on that port
<dho_ragus> well port 22 is closed for me
<dho_ragus> can you ssh to the local IP# from the machine itself?
<good_dana> yeah
<good_dana> wait whats the question?
<good_dana> can it ssh to itself?
<dho_ragus> yes, that is the question
<good_dana> i can ssh to it from my ip (which is a different external ip, but in the same class c), i can ssh to it from the private network its on, i cant connect to it from anything else
<dho_ragus> ah.  what kind of gateway to the internet do you have?  perhaps a bridged firewall preventing inbound connections on 22?
<good_dana> cisco 2600
<good_dana> http://pastebin.ca/967906
<good_dana> local ip
<dho_ragus> well, it's possible that something is blocking inbound connections from the internet on 22.
<dho_ragus> i'd try an alternate port.
<good_dana> well, i have control over every router between it and public internet, so i'm going to see if i can fix the acl
<dho_ragus> oh, wait, you *can't* connect to it on its public IP#?
<good_dana> thanks for your help
<good_dana> if i use the private ip it works
<pr0le> good_dana: yeah, I'd watch your acl hits when trying to connect - it's probably just buried in your acls
<dho_ragus> yeah, definitely sounds like a router problem.
<pr0le> *something* buried in your acl, I should say
<good_dana> dho_ragus and pr0le: i figured it out, i had an explicit deny on 22 for every host that wasnt mine!
<pr0le> ah, that would do it :)
<spiekey> does anyone know how a nmap udp scan works since a handshake (SYN, ACK, FIN) does not exist?
<Deeps> spiekey: google: nmap udp scan, 5th link http://www.linuxsecurity.com/content/view/117695/171/ says
<Deeps> Nmap will send a 0-byte UDP packet to each port. If the host returns a "port unreachable" message, that port is considered closed. This method can be time consuming because most UNIX hosts limit the rate of ICMP errors. Fortunately, Nmap detects this rate and slows itself down, so not to overflow the target with messages that would have been ignored.
<mindframe-> spiekey, udp scans are super slow and inaccurate.  it's best to use something like nessus that uses "udp scripts" to determine if a udp service is responding.
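A hedged example of the scan type being discussed (target and ports are placeholders):

```shell
# UDP scan of a few common ports; -sU sends the empty UDP probes described
# above. Ports answering with ICMP port-unreachable are reported closed;
# silence shows up as open|filtered, which is why UDP results are fuzzy
sudo nmap -sU -p 53,123,161 192.0.2.10
```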
<mzungu> fromport: update- all working now - thanks :D
<zul> ergh meeting in 5 minutes?
<fromport> mzungu: good job! ;-)
<spiekey> thanks Deeps and mindframe
<akincer> it seems that on Hardy, /etc/init.d/bacula-sd restart is broken
<akincer> I had to issue a start command to get it to run
<zul> akincer: please open a bug in launchpad and I will take a look at it when I get a chance thanks..
<akincer> ok
<akincer> evidence will be slim as the logs just don't show anything useful
<akincer> I may issue a bug report on this as well, but the default configs have the FDAddress, DirAddress and SDAddress as 127.0.0.1. This breaks any remote bwx-console or bat connections
<mathiaz> owh: I've closed the udev task for the status action script
<mathiaz> owh: and remove the init script from the bug attachment
<mathiaz> owh: udev maintainer said it was really a bad idea and doesn't want to see such a thing
<J-_> what permissions do I have to change to access /var/www as user? So I can drag and drop files, etc.? And, what do I need to do?
<foo> J-_: ls -ld /var/www
<J-_> foo: what will that do?
<foo> J-_: tell you the perms you need to know to answer your question
#ubuntu-server 2008-04-03
<J-_> foo: that gave, drwxr-xr-x 3 root root 4096 2008-04-02 03:01 /var/www
<foo> J-_: http://www.dartmouth.edu/~rc/help/faq/permissions.html
<foo> J-_: Alright, read that and it'll tell you how to read that output so you can get to what you want to
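Where that `ls -ld` output leads in practice: /var/www is root-owned and only root-writable, so the usual fix is a shared group plus a group-writable, setgid directory. A sketch against a scratch directory so it needs no root (the real group name, e.g. www-data, is up to you):

```shell
# Stand-in for /var/www so the demo needs no root
WEBROOT="$(mktemp -d)"

# 2775 = rwxrwxr-x plus the setgid bit: group members can write, and new
# files inherit the directory's group rather than the creator's group
chmod 2775 "$WEBROOT"
touch "$WEBROOT/index.html"

PERMS="$(stat -c %a "$WEBROOT")"
echo "$PERMS"

# On the real system (as root, with your user a member of www-data):
#   chgrp -R www-data /var/www
#   chmod 2775 /var/www
#   usermod -aG www-data youruser
```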
<kirkland> owh: fwiw, diffs usually end in .diff, rather than begin with them :-)
<kirkland> owh: at least vim will syntax highlight as a diff/patch if they end in .diff or .patch ;-)
<owh> kirkland: Tah. I tried calling you about it last week but you were not around and I forgot to follow it up. The apache patch I sent you was the wrong way around, that is from new to old :(
<owh> kirkland: I fixed it on the bug report :)
<kirkland> owh: it's not a big deal, just thought I'd mention
<kirkland> owh: the actually code being there is more important
<owh> kirkland: Yup. Tah, It was more a function of my automated script. It allowed me to parse/filter/etc.
<kirkland> owh: fair enough
<owh> kirkland: Yeah, we're discussing how to deal with Debian in ubuntu-bugs atm.
 * faulkes- is not happy
 * faulkes- is not happy about missing todays meeting
<kgoetz> :(
 * faulkes- is not happy about a missing 2600 cisco and a missing 2900 switch
<kgoetz> never mind. i missed it too
<faulkes-> on the plus side, after spending far too much time, I did manage to finally our iscsi box on the network so it can be configured
<faulkes-> 6.0TB in it too
<kgoetz> "to finally our" ... missing 'get'?
<faulkes-> just a very long day of being wedged between a 2 ton AC unit, a rack cabinet and the white noise of 2k sq of servers
<faulkes-> yes, forgive me, I missed the get ;)
<Deeps> ouch
<faulkes-> yeah, it was all kinds of awesome
<faulkes-> apparently nobody knew that we had a cisco 7505
<Deeps> shit like that is why i quit being a dc monkey, heh
<kgoetz> hehehe
 * faulkes- is *not* a dc monkey
<Deeps> you spent the day in a dc, sounds like a monkey to me ;)
<faulkes-> unfortunately, I was forced to go in
<kgoetz> "i was forced to! my pants kept falling down."
<faulkes-> I'm not a politician ;)
<Deeps> heh, monkey for a day aint so bad
<faulkes-> more like janitor
<faulkes-> because 2 people were fired over this entire mess
<Deeps> whoops
<kgoetz> :O
<Deeps> bleh, .ca
<faulkes-> yeah well, add it to their lessons learned
<kgoetz> thats a pretty messy mess
<Deeps> (im unemployed currently and it sounds like whereever you work might be hiring now ;))
<faulkes-> heh
<Deeps> but .ca's too far
 * faulkes- recalls we do have a fl.us head office
 * faulkes- is waiting for the .uk office though
<Deeps> i'm in .es
<Deeps> and looking for telework atm
<faulkes-> depends, what is your skill set like
<Deeps> umm, 2-4 years linux experience, java, php, sql, bash, basic python, ccna level (but not yet certified) cisco skills
<Deeps> a bachelors degree but few people tend to care about that already despite it being only 8 months
<rhineheart_m> how to determine if fastcgi has been installed to my server?
<rhineheart_m> can anybody here guide me on how to install fastcgi?
<PanzerMKZ> fastcgi in blender?
<rhineheart_m> fastcgi for apache
<PanzerMKZ> go to #apache
<kgoetz> rhineheart_m: tried running `apt-cache search fastcgi` yet?
<rhineheart_m> not yet. that would if I have it installed already or not, right?
<kgoetz> `dpkg -l |grep cgi` to list packages related to cgi
<rhineheart_m> kgoetz, I got this one: ii  libfcgi-perl                          0.67-2                   FastCGI Perl module
<kgoetz> rhineheart_m: now `apt-cache show libfcgi-perl`
<kgoetz> rhineheart_m: dont PM me
<rhineheart_m> kgoetz, that was the content
<kgoetz> rhineheart_m: i dont care. its for your information
<rhineheart_m> kgoetz, okay. it doesn't help
<Silvano> are there any noticable improvements from ubuntu server 7.10 to 8.04?
<kgoetz> LTS support
<andguent> I'm curious to know the same, all I know for sure is that when possible, it seems best to keep up with the latest, Ubuntu developers move too fast to leave your system completely alone for years :)
<kgoetz> with luck it'll be more stable too (gutsy isn't renowned for its stability)
<kgoetz> although that is mainly desktopland. serverland seems to be ok
<flyback> oh really?
<flyback> thx
<flyback> you just solved a worrysome issue with 2 of my sisters
<flyback> err systems
 * flyback hates trying to type few mins after waking up from a deep sleep
<kgoetz> heh
<flyback> ugh
<flyback> brain rattle
<Silvano> so if i've already got 8.04, and am about to reinstall on a different harddrive, there should be no reason or benefit to go with 7.10 then?
<flyback> thyat was a seriously wacky dream too
<flyback> although beats the last 5 yrs of paxil nightmare hell :/
<flyback> anyways
<Silvano> lol
<andguent> if 8.04 works, use it, it will be officially released soon enough
<flyback> I have had a hell of a time trying to put the right kernel in 6.06lts
<Silvano> sounds good andguent. thanks for your advice :D
<flyback> ended up just having to remove all the stuff with kernel or linux in it then apt-get the right one based on the dpkg list I got from a temp vm I setup
<kgoetz> flyback: if you can afford a month of 'work in progress', then go with 8.04
<flyback> na it's just to run vmware server
<flyback> i've decided on fedora at home guys no offence but some stuff rubbed me the wrong way
<flyback> but stuck with 6.06lts at work :P
 * kgoetz feels no pain
<kgoetz> (then again, i dont have to suffer fedora)
<flyback> heh I started with redhat 2.1 back in the day
 * flyback beats kgoetz with his walker
 * kgoetz swats at flyback 
<andguent> Silvano: when it is officially released and stable, do an apt-get update; apt-get upgrade (or other similar function from another favorite package manager)
<flyback> not a good idea :)
<flyback> a flyback is a type of high voltage pulse transformer
<flyback> turns 100-200 volts into 35,000
<flyback> it's what energizes a picture tube
<kgoetz> flyback: :) do want
 * flyback bzzzzz kgoetz
 * kgoetz smokes
<kgoetz> (not literally, fwiw)
<flyback> na it's about 0 current, you can't boost voltage magically for free :P
<flyback> but it will still knock you on your ass
<flyback> how the hell do I tell what kernel to use when I've got dual p4 xeon blades
<flyback> and why is there likes 10 packages
<Silvano> i almost don't want to reinstall, got things together fairly nice and the way I want so far, minus my actual site up and running. But its on a 5 gig hd, and ive acquired a 40 gig for this box :S
<flyback> so ghost it
<kgoetz> flyback: not sure i follow your qustion
<kgoetz> rsync ;)
<rhineheart_m> I guess webmin is still being maintained
<flyback> ok apparently the installer being scripted by this pxe boot vm installed a kernel that can't do smp
<flyback> so I had to force in the right one
<Silvano> rhineheart_m: supposedly, it doesnt work well with ubuntu, i've got it up and running, and it seems fine.
<flyback> and I just had a hell of a time knowing what was what I finally resorted to doing a clean install to a vm and making a script to apt-get those files
<rhineheart_m> Silvano, yeah.. I agree with that.. if you just know how to handle config files.. then webmin is really a great stuff! I love its file manager.. it uses https and can be configured what port you will be using.. so I found it more secure this time than before
<flyback> after I dpkg --get-selections |grep kernel to a file and again made it into a script that removed all that
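The approach flyback describes — dumping `dpkg --get-selections` and filtering out the kernel packages — can be sketched roughly like this. The package names below are made-up sample data so the snippet runs anywhere; on a real box you would pipe the live `dpkg --get-selections` output instead of the `printf`.

```shell
# Hypothetical sample of `dpkg --get-selections` output (placeholder
# package names); substitute the real command's output in practice.
selections=$(printf '%s\n' \
  'linux-image-2.6.15-51-386	install' \
  'linux-image-2.6.15-51-server	install' \
  'bash	install' \
  'linux-headers-2.6.15-51	install')

# Keep only kernel-related packages currently marked "install".
kernel_pkgs=$(printf '%s\n' "$selections" | awk '/^linux-/ && $2 == "install" {print $1}')
printf '%s\n' "$kernel_pkgs"
```

From a list like this you can build an `apt-get remove`/`apt-get install` script, which is what flyback ended up doing by hand.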
<andguent> Silvano: there are multiple good drive clone utilities out there, sometimes a new disk in retail package actually has the software included, there are definitely ways around having to start over. -- A second option is learning how to grab the appropriate configs from the old drive and placing them on the new drive to see what you can "restore from backup"
<flyback> nm
<flyback> my brain is fried
<kgoetz> does 'just install the smp kernel' help?
<flyback> I think I just did like kernel-server and that worked
<Silvano> thanks andguent, but although I don't want to, I think reinstalling will be good for me, as Im finally starting to understand some key commands and linux concepts :D
<flyback> I think the virtual machine I used that takes a iso of several os's and scripts it for pxe installs did something
<andguent> Silvano: just keep that old drive intact, you can always mount it as a secondary drive and take a peek at your ftp daemon config file or whatever
<flyback> they are headless blades
<Silvano> great idea...didnt think about that!
<flyback> it's just there was like 3 ways to get the same thing when I was looking at packages to install for the kernel
<flyback> and no explanation
<rhineheart_m> what does this warning mean (chkrootkit output): Checking `lkm'... find: WARNING: Hard link count is wrong for /proc: this may be a bug in your filesystem driver.  Automatically turning on find's -noleaf option. Earlier results may have failed to include directories that should have been searched.
<kgoetz> what part? Isn't it self-explanatory? :|
 * kgoetz suspects its a JFGI
<flyback> = *CANUCKED*
 * flyback goes to cook a steak, bbl
 * kgoetz wonders what a canucked is
<kgoetz> bbs though
<flyback> it's a verb :P
<J-_> I know a canuck is a Canadian, I am one.
<J-_> flyback is cooking a steak in 2 channels. /me giggles
 * flyback bites J-_
<flyback> canuck
<flyback> canuck
<flyback> canuck
<flyback> canuck
<flyback> that's for william shatner's greatest hits, and other canadian acts of terrorism!
 * J-_ hides
<foo> hehe
<foo> J-_: Get that perms issue sorted?
<J-_> foo: It was for a friend =) She got it sorted. =)
<foo> J-_: ah, gotcha :)
<stiv2k> why is mysqld_safe taking up ALL my resources ALL the time?
 * flyback bbl
<stiv2k> flyback: you're just on every freakin channel aren't ya? :P
<kgoetz> lol
<kgoetz> hes a spy
<stiv2k> lol
<stiv2k> so, anybody know?
<kgoetz> no i dont. i just kill -9'd it
<flyback> yes
<flyback> do I know you?
<stiv2k> flyback: not personally
<stiv2k> flyback: #electronics
<flyback> oh ok
<stiv2k> flyback: do you know what this mysqld_safe is?
<flyback> not really
<stiv2k> hm
<JaxxMaxx> it's the safe mode of the SQL daemon
<JaxxMaxx> No idea why it would be running instead of plain mysqld
<foo> what do you see?
<JaxxMaxx> Hmm, work PC must be having network issues
 * flyback notes getting 91% isopropyl alcohol into a wound where the skin is split in 1/2, *really* *really* *REALLY* freaking hurts
<Silvano> what ftp server do you guys reccomend for a lamp server?
<foo> I like proftpd
<foo> Everyone will tell you something different, though :)
<foo> your options: pureftpd, proftpd, vsftpd
<Silvano> I've tried vsftpd, and wasn't too sure on configuring it (im a linux newb) so I think i will try proftpd as I was browsing through http://ubuntuforums.org/showthread.php?t=79588 this tutorial when you mentioned it :D
<calcmandan> hi guys. i'm about to pull an old box out of the closet and play with mail servers for our small business.  anyone have a good recommendation? something where my brother can comfortably use his precious outlook.
<calcmandan> the workstations around the shop are xp pro. he has a unix server used for diagnostics, and untouchable.
<kgoetz> if all you want is mail any mailer will do
<Silvano> thunderbird >outlook imo :D
<kgoetz> * > outlook
<Silvano> lol
<calcmandan> kgoetz: yeah, he wants to use his calendar and shit.
<calcmandan> Silvano: thunderbird > outlook. kontact > outlook...
<kgoetz> calcmandan: so he wants more then a mail server
<calcmandan> kgoetz: well, i'm looking at a server that is specifically for mail, and maybe a few other things.
<foo> calcmandan: Outlook doesn't depend on which mail servers you want, you just need something with IMAP/POP.
<foo> calcmandan: personally, I'd just setup google hosted for him and do something else with your time.
<calcmandan> yeah, I'm aware of that.
<kgoetz> foo: depends how you feel about your data
<foo> (unless there is a good reason not to use it)
<foo> kgoetz: right
<kgoetz> calcmandan: its the 'other things' that are really casual and easy to throw in the sentence that make the difference
<kgoetz> bbs
<calcmandan> kgoetz: sorry bro.
<Silvano> how do i access a local ubuntu box by hostname from a windows box?
<foo> Silvano: you could hard code it in your hosts file on windows, or tell windows to use linux box as DNS server and have the hostname resolve to the linux box
<Silvano> foo: theres no way to automatically broadcast it?
<foo> Silvano: hm, not sure to be honest... let me know if you find a way
<Silvano> will do. I honestly cant remember setting hostnames for my windows boxes in the router, but they are there. I assumed that they were broadcasted somehow, and was just wondering if it was possible with the linux box. itd make it much easier than typing the ip tons of times a day :D
<foo> what are you typing it for?
<Silvano> ssh http testing ftp testing etc. just trying to get it set up, configured and tested
<kgoetz> Silvano: setup key based ssh :)
<kgoetz> Silvano: avahi advertises hostnames, fwiw
<kgoetz> avahi/mdns
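kgoetz's suggestion of key-based ssh amounts to generating a keypair on the client and appending the public half to `~/.ssh/authorized_keys` on the server. A minimal sketch, using a temp directory so it runs without touching a real account (in practice you'd use `~/.ssh` and `ssh-copy-id`):

```shell
# Sketch of key-based SSH setup. All paths under a temp dir for the
# demo; on a real system the key lives in ~/.ssh on the client and the
# public key is appended to ~/.ssh/authorized_keys on the server.
keydir=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$keydir/id_rsa" -q   # empty passphrase, demo only

# The server-side .ssh directory must be owned by the user and mode
# 0700, or sshd will refuse key-based logins.
mkdir -p "$keydir/server_ssh"
chmod 700 "$keydir/server_ssh"
cat "$keydir/id_rsa.pub" >> "$keydir/server_ssh/authorized_keys"
chmod 600 "$keydir/server_ssh/authorized_keys"
```

After that, `ssh serveracct@host` authenticates with the key instead of a password.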
<Silvano> itd take 10 secs to set in the router, just trying to see if it was something simple I didnt know about
<Silvano> kgoetz: avahi looks nice...will try it out in the morning.
<kgoetz> Silvano: if your into that sort of thing its nice. cant say i like it
<Silvano> off to bed, thanks for your advice and conversation, and see you guys tomorrow.
<kgoetz> sleep well mate
<Silvano> might make things more 'user friendly' for me is all :D
<foo> Silvano: oh...
<foo> Silvano: yeah, you should just change c:/windows/drivers/etc/hosts I think it is
<foo> anywho, see ya
<kgoetz> for whoever was asking about calednaring: http://trac.calendarserver.org/projects/calendarserver
<Silvano> thanks, and ttyl :D
<kgoetz> i'm hopeful for that project, just not tried it yet
<kgoetz> foo: Silvano windows/system23/etc
<kgoetz> *32
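For reference, the path the two of them are converging on is `C:\Windows\System32\drivers\etc\hosts`. An entry is just IP, whitespace, hostname (the address and name below are placeholders):

```text
# C:\Windows\System32\drivers\etc\hosts  -- example entry, IP/name are placeholders
192.168.1.50    ubuntubox
```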
<foo> kgoetz: ah, it's official, you're a windows user.
 * foo bans kgoetz 
<foo> :)
<kgoetz> :o
 * kgoetz stabs foo for spreading such evil thoughts about him
<foo> hahaha
<kgoetz> hehe
<Jeeves_> Morning
<kgoetz> morning
 * flyback right eye is blood red and almost swelled shut, yep it's springtime
<kgoetz> :/
<kgoetz> almost winter here
 * flyback goodnight
<_ruben> morning :)
<kraut> http://www.youtube.com/watch?v=3KANI2dpXLw
<\sh> moin moin
<\sh> guys, has anyone run the ubuntu-server daily (from yesterday or older) on a machine with BCM57xx gbit NICs? I just installed it, and d-i tells me I don't have any NICs anymore ;) lspci says otherwise
<kraut> moin
<fromport>  \sh nope e1000 here
<\sh> fromport, I wonder where the tg3.ko is hiding...means: which udeb it's in
<\sh> gnarf
<fromport> find /lib/modules -name tg3\*
<fromport> does that find something ? are the modules there ?
<\sh> fromport, the problem is in d-i
<\sh> it loads the udebs from the cd
<\sh> and somehow it can't find the tg3.ko
<\sh> could be a glitch in cd making...or a packaging bug...
<fromport> ah .. i misread, the daily cd image.... sorry...
<fromport> i installed one a few days old and keep it up to date daily...
<n_a_u_t> hi. is there a chance to use a promise tx2000 ata raid with ubuntu server 7.1?
<dthacker-work> n_a_u_t: you should probably google that question.
<youngmusi1> hey. I have a server with a 100Mbit and a Gigabit ethernet card. But i don't know which is which... Is there a command to see that?
<Deeps> youngmusi1: dmesg should shed some light, as should ethtool (supported link speeds)
<_ruben> ethtool
<dthacker-work> what _ruben said
<foolano> youngmusi1: mii-tool may help you too
<AnRkey> I am trying to setup a software raid 5 here. How do I remove all raid configurations from the drives?
<AnRkey> whenever I try to recreate the raid setup during the installation of ubuntu server and I try to configure the raid, the installer adds the old raid partitions to the volumes and I can't remove them
<AnRkey> i want to start fresh by wiping the drives
<AnRkey> any help would be cool
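What AnRkey is running into is mdadm metadata left on the partitions from the old array, which the installer keeps detecting. The usual fix is `mdadm --zero-superblock` on each old member. Since that is destructive, here is a dry-run sketch that only prints the commands it would run; the device names are placeholders for whatever your old RAID members actually are.

```shell
# Dry-run sketch: print the commands that would wipe stale software-RAID
# metadata. /dev/md0 and /dev/sdX1 are placeholders -- substitute your
# real array and member partitions, and drop the "echo" prefixes only
# once you are certain of the list (--zero-superblock is destructive).
wipe_cmds() {
  echo mdadm --stop /dev/md0                  # stop the stale array first
  for dev in /dev/sda1 /dev/sdb1 /dev/sdc1; do
    echo mdadm --zero-superblock "$dev"       # erase md metadata on each member
  done
}
wipe_cmds
```

With the superblocks gone, the installer's partitioner sees plain blank partitions and lets you build the RAID 5 fresh.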
<youngmusi1> Deeps: ok, thank you
<Treenaks> soren: have you seen bug 205011, and do you think it's likely to be fixed before release? (I almost have confirmation here, as soon as mkfs of 1TB of disk is done :))
<ubotu> Launchpad bug 205011 in lvm2 "LVM2 doesn't recognise 'virtio' virtual disks (/dev/vd*)" [Undecided,New] https://launchpad.net/bugs/205011
<AnRkey> is there anyone available to help me with my software raid?
<zul> hey foolano
<foolano> hi zul
 * AnRkey is now an mdadm mastah!
<W8TAH> hi folks -- this may not be the right place to ask this question -- but im looking for a simple / easy to operate utility to quantify the speed of one of my network links here -- can someone suggest such a program
<_ruben> W8TAH: you mean measuring your network usage?
<_ruben> if so: iftop
<W8TAH> hi _ruben no - -i mean in terms of what the line can transmit -- its a bit of a far run - i have 1gb switches at both ends - -but i need to know in general terms if the line is actually running 1gb or something less
<W8TAH> im considering moving a number of our servers to the far end of the line and need to know if speeds are going to be badly impacted
<_ruben> W8TAH: ah .. i ran into a nice tool for that quite some time ago .. cant recall its name tho :(
<W8TAH> ok
<_ruben> but as long as you stay within the specs you should be ok i'd say .. 100 meters is the max for cat5/5e/6 afaik
<W8TAH> ya - the run is about 230 feet --
<faulkes-> yes, 100m for cat5 is the bicsi spec
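For the record, W8TAH's 230-foot run is comfortably inside that spec; the conversion (1 foot = 0.3048 m) can be checked in one line:

```shell
# 230 feet versus the ~100 m cat5/5e/6 limit discussed above.
run_m=$(awk 'BEGIN { printf "%.1f", 230 * 0.3048 }')
echo "230 ft = ${run_m} m (limit ~100 m)"
```

So cable length alone shouldn't prevent the link from negotiating 1 Gb.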
<W8TAH> i just dont need issues -- and before i commit to a datacenter move i wanna be sure the line can handle it
<_ruben> the only real test is a live test ...
<W8TAH> ok
<W8TAH> it was worth a try i guess
<c0ldfusion> I don't know if any of you are directly responsible, but thanks anyway for such an excellent project. I have two servers in production (so far) running ubuntu server. I love it!
<foo> c0ldfusion: Awesome!
<amen51> hi, a commond line question:
<amen51> how to pipe a file to a command that usually accepts the file as an argument
<amen51> e.g. evince file1
<amen51> we want to pipe file1 to evince
<amen51> anybody?
<leonel> you need that the program  can  read from pipes .
<amen51> is there a command-line related channel?
<leonel> in that case  maybe evince can be controled by gnome / corba / dbus or something
<amen51> can you give a specific command line example
<amen51> e.g. cat file1 | evince does not seem to work
<amen51> thanks anyways
<c0ldfusion> you know you can redirect standard input
<c0ldfusion> to the program
<c0ldfusion> not sure if that would help
<amen51> i guess something like evince do not read from standard input (not sure though)
<c0ldfusion> try evince < file1
<c0ldfusion> yeh maybe not
<c0ldfusion> I don't even know what that is ;)
<amen51> thanks again
<dthacker-work> program < fileyouareinputting
<dthacker-work> too late, of course
<c0ldfusion> heh
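One trick nobody in the thread mentioned: many programs that insist on a filename argument will accept `/dev/stdin` as that filename (and bash's `<(...)` process substitution works similarly), which turns a pipe back into a path. Whether this works for evince specifically is not guaranteed — some viewers need to seek in the file — so the sketch below demonstrates the idea with `md5sum` instead:

```shell
# Feed piped data to a command that expects a filename, via /dev/stdin.
tmpfile=$(mktemp)
printf 'hello\n' > "$tmpfile"

direct=$(md5sum "$tmpfile" | cut -d' ' -f1)            # filename argument
piped=$(cat "$tmpfile" | md5sum /dev/stdin | cut -d' ' -f1)  # pipe, same data

[ "$direct" = "$piped" ] && echo match
```

For seek-happy viewers, writing to a temp file (as above) and passing that path remains the reliable fallback.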
<nxvl> mathiaz: i update the patch you ask me to update
<nxvl> mathiaz: Bug #159371
<ubotu> Launchpad bug 159371 in sysvinit "Default MOTD for server should point to documentation URL" [Medium,Confirmed] https://launchpad.net/bugs/159371
<flyback> http://www.digitimes.com/mobos/a20080401PR205.html
 * flyback bbl going to go get my allergy shot
<flyback> quick question
<flyback> what's the main kernel package to install when installing the default kernel
<flyback> apparently the installer script I used goofed and installed one that doesn't do smp
<flyback> I kinda figured out manually how to do this but I am stuck with a lot of stuff listed in dpkg as "deinstalled" and not sure if it's 100%
<infinity> "linux-server" for servers, "linux-generic" for desktops.
<flyback> ok thx
<flyback> anything you know of in the server kernel that might clash with vmware server?
<infinity> Nah.
<flyback> thx :)
 * faulkes- prepares iscsi
 * flyback bbl, going to clean some circuit boards 
 * flyback is getting fed up with ubuntu and gnome bugs
<nxvl> do we have any certification of hardware?
<nxvl> i mean from IBM or HP servers
<Loosewheel> Hi everyone
<silvanozzzz> hello
<Loosewheel> silvano: do you know how to create a link in the index.html file? I'm trying to link a Open Office Presentation ?
 * flyback stuffs his freshly washed circuit boards into a silver static bag and places the opening near a heater to dry aka "shake-n-bake"
<silvano> hmm, not sure. what extension is it?
<Loosewheel> Well I put it together with oo, and the exported it as .html. I can link to www using oo writer, but can't figure out how to make a link to grab this file from the web page.
<silvano> if its just an html document, you could try html to link it http://www.quackit.com/html/codes/html_link_code.cfm
<Tatster> Hi all. I'm trying to recover from a disk failure from a raid 1 mirror.  The outputs from my various LVM commands are at http://pastebin.ca/970152 - the issue is my logical volumes are showing as "Not available"  and I can't work out why/how to fix!
<Tatster> Can anyone suggest any ideas?
<RPC> Hello, everyone.
<Loosewheel> hi
<RPC> Is there someone in here who would be willing to basically hold my hand through a linux install?
<Loosewheel> silvano: Thanks, I'll chew on that for a while.
<silvano> np, hope it helps
<RPC> guess that's a no
<silvano> RPC: it's fairly simple if you're installing ubuntu 8.04 desktop. I believe it's a series of 7 steps, asking names, passwords, timezones, keyboard and region layouts etc.
<RPC> Yea, until somethign gives an error...
<Tatster> i think there may also be some good guides on www.howtoforge.com
<silvano> you're getting an error during the install?
<Tatster> If anyone can help with LVM issues I'd be really appreciative
<RPC> no, im having a lot of things not work after install
<Tatster> RPC: what things are not working after your install ?
<RPC> wifi, vid card, that i've noticed
<RPC> volume switch
<Tatster> Which version of Ubuntu are you installing ?
<RPC> ubuntu 7.1
<RPC> but i actually have been trying with Mandriva most recently
<RPC> with the same issues
<Tatster> the desktop or server version of 7.10 ?
<RPC> desktop, i assume
<Tatster> ok,  I literally arrived in this channel a few minutes ago, and someone more regular please correct me if I'm wrong - this channel is aimed at Ubuntu server specific issues, desktop talk is normally discussed in #ubuntu
<Loosewheel> Tatster: Sorry I can't help....but http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html?page1, may help.
<Tatster> Loosewheel: that looks like a good reference article.  Doesn't cover my issues but good reading none the less
<Loosewheel> Thought maybe the commands on page 2 might get you there.
#ubuntu-server 2008-04-04
<teamcobra> hi everyone
<owh> Hiya, can someone tell me how far away we are from the Hardy GM?
<kgoetz> GM?
<owh> Golden Master
<owh> kgoetz: The final actual bits that make it onto the CD's that will be shipped.
 * owh is trying to work out some time-lines.
<kgoetz> owh: the last RC (i think ~20th) is intended to be the gold release. they'll only change things if they absolutely have to after that
<kgoetz> about a week after universe freezes it should settle down though (iirc the 10th?)
<owh> Hmm.
 * kgoetz looks on wiki
<owh> URL?
 * owh couldn't find it.
<kgoetz> locating ...
<kgoetz> https://wiki.ubuntu.com/HardyReleaseSchedule
<owh> Merci
<kgoetz> 17th is RC, 24th is gold
<owh> Crap, so much for that idea. Ah well. Thanks.
 * owh was trying to figure out hardware purchases vs. release dates.
<kgoetz> oh, heh
<ajmitch> install with a RC, perhaps?
<owh> ajmitch: Yeah, but that will just give me kittens with bugs :)
<kgoetz> wtf
<owh> kgoetz: Huh?
<kgoetz> owh: sorry. deja vu
<kgoetz> ajmitch: said that in exactly this context last release
<owh> kgoetz: What, "all over again"?
<kgoetz> out of the blue. like that.
<kgoetz> owh: yes
<ajmitch> kgoetz: why is that a problem?
<owh> ajmitch: and I, we're twins :)
<kgoetz> ajmitch: its not. its just ... disturbing
<kgoetz> this may even be 3 releases in a row its happend
<ajmitch> owh: there won't be many fixes going in between RC & release
<ajmitch> kgoetz: that what has happened?
<kgoetz> *would have to check logs*
<kgoetz> ajmitch: you've given me deja vu
<ajmitch> oh well
<owh> kgoetz: If you wait a bit, I have them lying around.
<owh> ajmitch: Yeah, I'm crossing my fingers for that.
<kgoetz> all the freezes have happened now
<owh> kgoetz: Was that in #ubuntu-server, or somewhere else?
<kgoetz> owh: i dont remember. it was #ubuntu-* . if it involved ajmitch it was probably here
<ajmitch> it probably wasn't me
<owh> kgoetz: I have two kittens in the logs.
<owh> 20071003.ubuntu-server.txt:[11:46] <soren> jdstrand: the register_shutdown_function trickery kills kittens.
<owh> 20080111.ubuntu-server.txt:[08:59] <owh> Hows this as a disclaimer: Note that this just makes it run, we haven't done any configuration, haven't confirmed we can actually use it, that it won't fill up your hard disk or kill kittens.
<owh> Enough fun and games already :)
<owh> kgoetz: In case you care: http://www.google.com.au/search?q=kittens+site%3Airclogs.ubuntu.com
<owh> :)
<sboysel> i've had problems in the past setting up a static IP, does anyone know a safe way to do it?
<owh> sboysel: What do you mean by safe?
<sboysel> like whenever i try to do it it will sometimes mess up my ethernet connection
<sboysel> no internet
<owh> sboysel: That is most likely a carbon error. That is, it has nothing to do with a static IP address, but the details you use to configure it with.
<sboysel> i wanted static IP so i could try to set up a server
<owh> sboysel: A static address is only static if you are given it by your ISP.
<kgoetz> does your ISP have you alocated a static ip?
<sboysel> no i think it's DHCP
<owh> sboysel: Then allocating a static address is a recipe for failure.
<owh> sboysel: Do you understand the concepts behind DHCP and Static IP?
<sboysel> no way to do it without dealing with my ISP?
<sboysel> not completely
<owh> sboysel: Well, there are services like dynamic dns.
<sboysel> DHCP finds what ever IP it can
<sboysel> and static always has the same one
<owh> Well no.
<sboysel> oh
<owh> DHCP uses an address as supplied by an appropriate DHCP server. Often run by your ISP.
<owh> Most likely the DHCP address supplied to you is not visible from the wider Internet.
<sboysel> right which is why i would need a domain name?
<sboysel> and static IP?
<owh> That means that from my workstation here, it is unlikely that I can reach your IP address.
<owh> No
<owh> Stop jumping in for a bit.
<sboysel> sorry
<owh> For me to get access to your computer, my computer needs to be able to "reach" your computer.
<owh> Most addresses allocated by an ISP are local to that ISP.
<owh> For example, my ISP on my satellite modem is: 10.24.7.25
<owh> s/ISP/IP/
<owh> That address was given to my modem by my ISP's DHCP server.
<owh> In a static world, I would allocate that same IP address, and all the other settings to my "server".
<owh> It would make my address "static".
<owh> But it would not give you access.
<owh> You cannot get to the IP address I supplied.
<owh> It is internal to the ISP's network. With me so far?
<sboysel> yes
<sboysel> why can't i access your server with a static ip?
<owh> Because the IP address is not "visible" to you.
<owh> Specifically, the 10.x.x.x network range is called "non-routing".
<sboysel> because it has been allocated?
<owh> There are others, like 192.168.x.x and 127.x.x.x
<owh> No, not because it has been allocated.
<lamont> 192.168.0.0/16 and 172.16.0.0/12, specifically
<owh> Tah lamont.
<owh> sboysel: So. To "get" to your server, a few things need to be in place.
<lamont> and it's "private address space" - it _routes_ just fine
<lamont> well, other than the fact that the core doesn't accept routing advertisements for it
<sboysel> i have 192.168.....
<owh> lamont: Yeah, but I figured that would make it more complicated :)
<lamont> within your organization, you can do whatever you want with it.
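The private ranges owh and lamont have named — 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 — can be checked with a rough shell pattern match. This is a sketch, not a full RFC 1918 parser (the 172.16–172.31 block needs the explicit case arms), but it answers the "could anyone outside reach this address?" question:

```shell
# Rough check for RFC 1918 private address space.
is_private() {
  case "$1" in
    10.*|192.168.*)                          return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)  return 0 ;;
    *)                                       return 1 ;;
  esac
}

is_private 10.24.7.25 && echo "10.24.7.25 is private"   # owh's satellite modem
is_private 8.8.8.8    || echo "8.8.8.8 is public"
```

If your server's address matches one of these ranges, it is not directly reachable from the wider Internet, which is exactly sboysel's situation.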
<owh> sboysel: Yes, so hold on for a bit.
<lamont> and explaining what's wrong with the terms later is less confusing?
<lamont> :-)
<owh> Hey, I'm all for complete explanations, but I figured baby steps would be a good idea, given the questions asked.
<sboysel> amen
<lamont> (people still refer to "class A networks" even though they haven't existed since 1993, for example..  and then wonder why people are confused about routing...)
<owh> lamont: But we're not talking about routing here are we :)
<lamont> not particularly, no.
<owh> sboysel: So, for you to have your server visible to me, the address that it has needs to be related to the server in such a way that the address is advertised on the wider Internet. I won't go into how that really works, because lamont will chew off my fingers and your head will explode.
<lamont> OTOH, my ISP hands out 192.168.168.0/24 addresses, and I talk to lots of infrastructure machines on those addresses just fine...  including from the big-bad-internet (via a proxy, of course)
<owh> Now, you cannot just "randomly" choose an address.
<lamont> owh: hehe
<lamont> not and expect that address to work...
 * lamont goes back to lurking
<owh> sboysel: Given what you've told us so far, you won't actually be able to advertise your server on the net unless some specific things are done.
<sboysel> ok
<owh> sboysel: Simplest to achieve, but likely associated costs, is to ask your ISP for a static external IP address. It's unlikely that they'll give you one. Don't despair.
<owh> sboysel: The second way of doing this, which millions of users use each day, is a service like dynamic dns.
<lamont> which still requires a publicly routed address, and may cost money
<owh> sboysel: What happens is that you run some software on your server, which tells your dynamic dns service what your internal IP address is.
<sboysel> i see
<owh> lamont: I'm not sure what you're referring to.
<owh> sboysel: Now, the way the service works is that because it is external to your ISP's network, it can match your internal IP address with the IP address associated with your ISP.
<sboysel> like a medium
<owh> sboysel: So, you establish some form of connection to your dynamic dns, which then accepts connections on your behalf, sending them to you.
<owh> sboysel: Like a middle-man.
<owh> The final piece in the puzzle is how to advertise this.
<sboysel> isnt dynamic dns a web based service?
<owh> It can be,
<sboysel> or software i would install?
<owh> You may receive a domain name like bob.ddns.org.
<sboysel> oh
<owh> So, I could then surf to that address and see the stuff on your server.
<lamont> owh: dyndns still requires a publicly routed address, and many ISPs will charge you for that even.
<owh> sboysel: Now, in addition to that, you could map a domain name over the top of that.
<owh> lamont: I would have thought that it would use the external ISP address and tunnel back to your server.
<sboysel> oh so like www.bob.org translates to bob.ddns.org
<owh> sboysel: Yup
<sboysel> so i have to use a Dynamic DNS web service
<owh> sboysel: Go to a terminal and type: apt-cache show ddns3-client and see what it tells you.
<sboysel> so there's a package in the hardy repos?
<owh> Yup
<owh> lamont: So, am I sending sboysel on a wild goose chase, or does it in fact tunnel?
<lamont> all the dyndns stuff I've seen wants you to have an IP.
<sboysel> should i install that package?
<lamont> owh: then again, I have a /24 that follows me around, so I've never actually bothered with any dynamic dns services....
<lamont> owning your own network does tend to make you lazy that way.
<owh> sboysel: No, the first step is to check what the requirements are for each of the services named.
<owh> lamont: Yeah, I'll lay claim to owning a mobile LAN :)
<sboysel> services being what my server is going to be doing?
<owh> Yup
<sboysel> maybe a file server, web server, wordpress
<lamont> heh.   just passed the 9-year anniversary of registering that network
<sboysel> all for fun nothing harcore
<sboysel> how should i go about setting those up?
<owh> sboysel: Well, visit http://www.dyndns.com/ and start reading.
<sboysel> if you dont mind me asking what services do you use your server for?
<lamont> sboysel: my primary server has postfix and bind9
<owh> sboysel: I'm not sure it's relevant, but LAMP, rsync, ssh, samba, mail, ftp, upnp to name a few.
<lamont> and provides imap service to the house
<owh> Yeah, that too :)
<sboysel> yeah i installed LAMP, ssh, samba stuff like that
<lamont> heh.  I didn't even consider ssh as a service... :-)  of course it's there - how else would I login??
<sboysel> i was hoping to build an ftp, web server
<lamont> the ftp server lives on the machine where the ubuntu mirror lives
<sboysel> any thoughts?
<owh> sboysel: That is outside the scope of assistance I'm willing to give you, but https://help.ubuntu.com/7.10/server/C/ will give you the server guide which will set you on your way.
<lamont> apache2 (and apache2-mpm-worker, actually), and vsftpd are what I use
<sboysel> thank you so much guys
<owh> lamont: You could run an nmap :)
<lamont> generally speaking, setting up a server and exposing it to the internet as a learning exercise is welcomed by the script kiddies, who will help you generate traffic once they compromise your machine... :-(
<owh> lamont: Also, dyndns seems to say nothing about external IP static addresses that I can see.
<lamont> owh: for extra points, look at who the maintainer of nmap is. :)
<owh> lamont: That was a wild goose chase, packages.ubuntu.com/nmap -> ubuntu-core-developers -> https://launchpad.net/~ubuntu-core-dev/+members + /whois lamont ==> LaMont == ubuntu-core-developers. That's a lot of work, just to make a point :)
<owh> Ah, Denver, that explains it all >:-)
<lamont> well, it also led to me noticing that there was an ubuntu upload, which means there is still a bug, and I need to upload the fix to debian and get it synced to ubuntu
<owh> lamont: So, I'm curious, how do you have a network that follows you around? I carry a vSat around which gives me the portable LAN :)
<lamont> 192.0.0.0/7 --> "The swamp"
<lamont> predating CIDR, they are all grandfathered, transportable networks
<owh> lamont: Yeah, I carry that one around too :)
<lamont> as in I don't pay ARIN for it, and if I switch ISPs, the new ISP gets to advertise a route for me.
<owh> lamont: Hold on, you're telling me something different aren't you.
<kgoetz> i thought 192 was scientific/research purposes
<owh> lamont: Yeah, I thought I was too quick on the [Enter] key :)
<lamont> 192 was the base of "class C" (back when there were classes...) the allocation method was simple: what's the next one up.  here you go.
<lamont> they were in 193 somewhere when CIDR came along, and they stopped allocating from 193
<owh> lamont: By the look of your membership page you're not that busy then :)
<owh> sboysel: Just for your information, some domestic routers/modems provide a dynamic dns client built in which might simplify your quest. I suggest you read the appropriate manual for your device(s).
<sboysel> yeah i think buying new hardware is out of the question but thanks
<owh> sboysel: No, I'm saying check out the stuff you have already.
<lamont> heh
<lamont> some days are busier than others
<lamont> of course, sometimes living in the sticks with rural internet can be, um, annoying
<owh> lamont: My solution to that was to become mobile and carry a 1M/256K satellite link around. All good :)
<lamont> satellite is too laggy
<owh> No, ssh is like the 1200/75 baud days :)
<lamont> heh
<owh> I'm trying to figure out if I can get screen to give me a buffered input line to make editing a little less painful :)
<lamont> I have always enjoyed having people get weirded out when I'm fixing typos faster than the keystrokes are echoing back
<owh> Yeah, get's them going every time :)
<owh> s/'//
<owh> Watching an .iso download at 117KB/s is like watching paint dry.
<owh> 56%, 47 minutes remaining
<lamont> no, 14KB/s is watching paint dry. :-(
<owh> The flip-side of course is that you can setup anywhere: http://itmaze.com.au/locations/ - http://itmaze.com.au/locations/sa/middle.of.nowhere
<owh> lamont: No, that's like growing old :)
<kgoetz> sa *is* the middle of no where ...
<owh> kgoetz: It's not that bad.
 * kgoetz rhubarb
 * kgoetz suspects owh doesnt live in sa
 * owh accuses kgoetz of living in sa.
 * kgoetz is guilty as charged
<owh> kgoetz: I have spent many months in your fair state as the map will attest.
<kgoetz> so i see
<owh> In fact, I was there in October to run the web site for the World Solar Challenge :)
<kgoetz> :)
<owh> kgoetz: The give-away was a German name in an Australian TimeZone :)
<owh> 90% sa :)
<kgoetz> owh: heh. i would have thought the massive dislike was ;) (i'm not from here, i'm actually from tassie)
<owh> kgoetz: Ah, now I know what's wrong with you :) -- just kidding.
 * owh can't talk. Born in VIC, stayed a week, got on the Indian Pacific, moved to WA, then to Holland, then back to WA :)
<kgoetz> lol
<owh> Still have scars from VIC :)
<kgoetz> hehe
 * kgoetz just realised that Sun buying MySQL might mean OO.o's db will suck less
<owh> kgoetz: Perhaps not in our lifetimes though.
<kgoetz> owh: harsh :)
<kgoetz> owh: do you have a ham licence? or are you just doing sat wifi?
<owh> Sat WiFi only.
<kgoetz> ok
 * kgoetz thought it was worth asking :)
<lamont> I wouldn't run internet over ham radio - I really have objections to not ssh-encapsulating things, you see...
<lamont> and cleartext passwords are Teh Suck.
 * lamont declares bedtime
 * ScottK declares lamont old and weak.
<lamont> work day starts at 0600 local.. 7 hours from now.
<lamont> hrm.. given a bug in the debian bts, is there a trivial way to import that to launchpad?
<lamont> or is it easier to just file a new one?
<ScottK> OK.  Mine starts at 7 and I'm two hours east of you.
<lamont> yeah
<ScottK> You link the Debian bugs, you don't import them, so you still have to file a bug.
<lamont> ScottK: win!
<ScottK> lamont: I've been told by senior launchpad developers that because I liked the pre-beta LP U/I better than the current one and every one KNOWS the new one is better because it has CSS instead of tables my opinions on Launchpad aren't credible.
<ScottK> As a result, I neither file nor comment on Launchpad bugs and really do my best not to have any opinions about it at all.
<lamont> heh
<ScottK> Not kidding you know.
<lamont> yeah
<dennister> g'morning channel, anyone awake? scripting newbie could use some assistance for my server configuration :)
<kgoetz> !ask
<ubotu> Please don't ask to ask a question, ask the question (all on ONE line, so others can read and follow it easily). If anyone knows the answer they will most likely answer. :-)
<dennister> g'morning kgoetz :)
<dennister> r u any good at bash shell scripting?
<kgoetz> somewhat. ask in #bash if you need the pros :)
<dennister> i'm reading the docs, tutorials, etc., but...
<dennister> kgoetz: ty, i will
<kgoetz> dennister: np :D
<dennister> kgoetz: not having much luck so far in #bash, i'm afraid; it's like mythtv-users, where people don't want to help, they just criticize you for your questions and ask you to justify your decisions/needs forever
<_ruben> dennister: do have any specific problem/question?
<dennister> _ruben: i did have a problem, but it seems like someone in #bash has come through for me after all :) "bash"-ed them too soon
<_ruben> hehe
<omnz0r> Hi, apt-get transfers on port 80?
<_ruben> if you have http sources, yes, which is the most common (theoretically you could also have ftp or maybe even rsync sources)
<_ruben> see /etc/apt/sources.list
<omnz0r> ofc, thanks _ruben :)
<rhineheart_m> how to show the ports used in ubuntu server gutsy?
<faulkes-> netstat -ant
<rhineheart_m> imap and pop3 are needed to receive mails from yahoo.com to the mail server right?
<kraut> s/and/or
<kraut> as a client?
<rhineheart_m> I have postfix and courier-imap installed... it was actually working before.. it can't accept emails from the outside after I changed my router..
<rhineheart_m> I port forwarded 143 and 110 already..
<rhineheart_m> did I miss something?
<_ruben> inbound 143 and 110 only let a remote client fetch mail from your box .. if you want remote servers to deliver mail to your box: open port 25 (smtp)
<rhineheart_m> how to check what ports are configured for postfix and courier?
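faulkes-'s `netstat -ant` suggestion also answers the postfix/courier question: add `-p` (as root) and the owning process appears next to each socket. A sketch of pulling out just the listening ports; the sample output below is canned so the pipeline is reproducible, on a live box pipe the real `netstat -ant` in instead:

```shell
# Canned netstat -ant style output (illustrative values only):
sample='Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN
tcp        0      0 10.0.0.5:45678          93.184.216.34:80        ESTABLISHED'

# Keep LISTEN rows, split the local address on ':', print the port:
echo "$sample" | awk '$6 == "LISTEN" { n = split($4, a, ":"); print a[n] }'
# → 22
# → 25
```

On the real box, `sudo netstat -antp` should additionally show `master` (postfix) and `couriertcpd` (courier-imap) next to their ports.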
<zul> mathiaz: so there is a bug in the dovecot-common.postinst in the ssl creation stuff
<mathiaz> zul: where?
<zul> basically it checks for an ssl certificate by grep, but since the line is commented out by default it never creates an ssl certificate
<zul> mathiaz: around line 38
<mathiaz> zul: hum... you mean that ssl_cert_file is commented in dovecot.conf ?
<zul> yep
<mathiaz> zul: it shouldn't matter - the grep isn't anchored to the beginning of the line, so it matches the commented line too
<_ruben> hrm .. whats the 'proper' way of automagically adding network routes? putting a script in ifup.d? or is there a "nicer" way
<mathiaz> zul: the reason why the ssl creation code is never run in the postinst is that dovecot depends on ssl-cert
<zul> mathiaz: well the check is wrong it spits out "You already have ssl certs for dovecot" when you clearly dont :)
<mathiaz> zul: which will already generate the certificate
<zul> ssl-cert will generate the certificate for dovecot?
<mathiaz> zul: the default certificate file set in dovecot.conf is the same as the default one generate by ssl-cert
<_ruben> ah .. found smth
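_ruben doesn't say what he found; one common Debian/Ubuntu answer is to attach the route to the interface stanza in /etc/network/interfaces, which ifup/ifdown run automatically (interface name and addresses here are placeholders):

```
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    # run when the interface comes up / goes down
    up   route add -net 10.0.0.0 netmask 255.0.0.0 gw 192.168.1.254
    down route del -net 10.0.0.0 netmask 255.0.0.0 gw 192.168.1.254
```

Dropping an executable script into /etc/network/if-up.d/ works too, and is the mechanism ifupdown itself uses.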
<mathiaz> zul: you can try to install ssl-cert, but not dovecot
<mathiaz> zul: remove the default cert generated by ssl-cert and install then dovecot
<mathiaz> zul: you should see the ssl cert creation done in the postinst
<zul> ah gotcha now
<zul> meh :)
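The false positive zul describes can be reproduced in two greps (a sketch; the real postinst logic differs): an unanchored grep matches the commented-out default, so a naive check concludes a cert is already configured.

```shell
# The Gutsy default dovecot.conf ships the option commented out:
conf=$(mktemp)
printf '#ssl_cert_file = /etc/ssl/certs/dovecot.pem\n' > "$conf"

grep -c 'ssl_cert_file' "$conf"    # → 1: the commented line matches too
grep -c '^ssl_cert_file' "$conf"   # → 0: anchoring at line start skips comments

rm -f "$conf"
```

As mathiaz points out, in practice the cert exists anyway because dovecot depends on ssl-cert, which generates the same default certificate before dovecot's postinst ever runs.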
<JaxxMaxx_> Is there such a thing as a "debug log" in 7.10 ?   I'm trying to troubleshoot a FreeRADIUS installation, and the docs keep mentioning the Debug output, but I'm not sure where to find it
<zul> JaxxMaxx: syslog maybe
<zul> JaxxMaxx: /var/log/freeradius as well
<JaxxMaxx_> heh, people trying to access PHPMyAdmin scripts...  silly kiddies.
<good_dana> JaxxMaxx: you might have to restart the service with a different level of verbosity
<JaxxMaxx_> I'm guessing that init.d scripts don't take additional parms?
<zobbo> Hardy Heron Server on latest VMWare workstation - always freezes at partitioner (IDE or SCSI). Anyone else seen this?
<good_dana> zobbo: are you running the jeOS version?
<zobbo> good_dana: no - just bog standard server
<zobbo> gosh, never knew about jeOS
<good_dana> zobbo: might want to try that, otherwise, i do know that partitioning virtual disks takes a pretty stupid long time in my experience.
<zobbo> good_dana: Thanks for the feedback. I've been building images like mad for the last six months with 7.10. Not urgent to get this running. I'll drop a note in the ubuntu forums and see if anybody else has seen this.
<ScatterBrain> Does anyone know why mod-security isn't available in Gutsy?
<keescook> ScatterBrain: afair, it has no upstream any more.
<ScatterBrain> keescook: So it's no longer in Debian either?
<ScatterBrain> wow.
<keescook> ScatterBrain: http://packages.qa.debian.org/liba/libapache-mod-security.html
<keescook> "removed"
<keescook> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=352344
<ubotu> Debian bug 352344 in ftp.debian.org "RM: libapache-mod-security -- RoM; undistributable due to licence conflict (APL/GPL)" [Normal,Open]
<keescook> so, I misremembered -- it's a license issue, not an upstream-is-dead issue
<ScatterBrain> thanks keescook
<gourgi> hi, i created a local repository for my LAN with apt-mirror, i added a public key, but when i upgrade from my other pc the packages are not verified/authenticated, what do i have to do?
<gourgi> i did "apt-key add" but i get the 'not authenticated' message
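gourgi's symptom usually means apt can't find a valid Release.gpg for the repo, or the imported key doesn't match the one that signed it. If apt-mirror serves the upstream Ubuntu indexes unchanged, importing the Ubuntu archive key on the client should suffice; if you generate your own indexes, you must sign them yourself. A sketch (key ID, paths, and URL are placeholders):

```
# On the mirror, after generating the package indexes:
apt-ftparchive release . > Release
gpg --default-key DEADBEEF --armor --detach-sign -o Release.gpg Release

# On each client, import the matching public key and refresh:
wget -qO- http://mirror.lan/repo-key.asc | sudo apt-key add -
sudo apt-get update
```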
<Cahan> I have a headless server that I SSH into using putty, how would I keep a user logged into the server between SSH sessions?
<sommer> Cahan: screen is probably what you're looking for
<Cahan> sommer, thank you, I'll check it out
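For Cahan's question, the usual screen workflow looks like this (the session name is arbitrary):

```
screen -S work      # start a named session inside the SSH login
#   ... run your programs, then detach with Ctrl-a d ...
screen -ls          # later, list detached sessions
screen -r work      # reattach from a new SSH session
```

If the link drops while attached, the session survives; `screen -d -r work` detaches the stale attachment first and then reattaches.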
<logist> heya, can anyone help me i have searched the web and i m going crazy, i have ubuntu server on a ibm blade center hooked up to the DS3400 storage via optic fiber.. how do i mount a drive from that storage im going crazy here
<logist> or any other SAS storage via FC for that matter
<logist> anyone even here?
<logist> im using ubuntu server 7.10.. just dont know how the heck to mount or make te lun wich i created via storage manager to make ubuntu SEE and format the partition
<logist> ayone? heeeeelp kinda loosing my mind
<logist> at least a hello would be nice :D anyone here ? :D
<Cahan> ofc
<logist> ah!
<logist> thank god :D
<logist> a human :D
<logist> any idea where could i search for this problem of mine? really keeping me going cuz i cant seem to figure out how to simply make a drive from SAS storage system visible in ubuntu
<Cahan> unfortunatly I have no idea, sorry
<logist> thanks.. i guess ill keep asking here the next couple days see if i can get some expert on ubuntu server.//
<zul> logist: just a stab in the dark if you run dmesg do you seee the drives?
<faulkes-> logist: open-iscsi?
<logist> sec..
<faulkes-> I believe is what you want
<logist> didnt try openiscsi yet.. isnt that supposed to run over tcp/ip ethernet card?
<faulkes-> yeah, ip based, although, if you are doing FC, hrmm, I guess it depends on if the card is supported and recognized
<logist> the blade is connected to storage via optical fiber switch.. qlogic
<logist> yeah i can see the fiber card fine..
<faulkes-> iirc last I dealt with FC was w/ qlogic cards and they were supported
<logist> ok terying dmesg
<faulkes-> so I suppose you have to make available the disks as array / logical drives
<faulkes-> the card should pick up the luns and you should be available to mount them
<logist> uf alot of text dmesg returned..
<logist> well heres the deal.. on the other windows platform i have the storage manager up and running by wich i defined a lun, parititoned it.. set it up all that.. now just how to look if the linux box sees the lun and can use it.. well format it firstly i guess
<zul> logist: you will have to look through the dmesg to see whats going on
<logist> looking..
<logist> [  112.676504]  QLogic Fibre Channel HBA Driver: 8.01.07-k7
<logist> [  112.676506]   QLogic IBM FCEC -
<logist> the card is there..
<logist> scsiadd -p returns:
<logist> Host: scsi0 Channel: 00 Id: 00 Lun: 00
<logist>   Vendor: IBM      Model: 1726-4xx  FAStT  Rev: 0617
<logist>   Type:   Direct-Access                    ANSI  SCSI revision: 05
<logist> Host: scsi1 Channel: 00 Id: 00 Lun: 00
<logist>   Vendor: IBM      Model: 1726-4xx  FAStT  Rev: 0617
<logist>   Type:   Direct-Access                    ANSI  SCSI revision: 05
<logist> Host: scsi4 Channel: 00 Id: 00 Lun: 00
<logist>   Vendor: IBM-ESXS Model: MAY2073RC        Rev: T107
<logist>   Type:   Direct-Access                    ANSI  SCSI revision: 05
<logist> Host: scsi4 Channel: 00 Id: 01 Lun: 00
<logist>   Vendor: IBM-ESXS Model: MAY2073RC        Rev: T107
<logist>   Type:   Direct-Access                    ANSI  SCSI revision: 05
<logist> Host: scsi4 Channel: 01 Id: 00 Lun: 00
<logist>   Vendor: LSILOGIC Model: Logical Volume   Rev: 3000
<logist>   Type:   Direct-Access                    ANSI  SCSI revision: 02
<logist> Host: scsi5 Channel: 00 Id: 00 Lun: 00
<logist>   Vendor: MATSHITA Model: UJDA775 DVD/CDRW Rev: CA02
<logist>   Type:   CD-ROM                           ANSI  SCSI revision: 00
<logist> all is LUN 00 dont see the lun 01 i specified on the storage
<zul> logist: you might want to try sudo fdisk -l as well, and please use pastebin
<logist> tryed that... doesent show up... paste bin?  :P
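One thing worth trying for logist's missing LUN 01 (a sketch; the host numbers are placeholders, check /sys/class/scsi_host/ first): the qla2xxx driver can rescan its bus without a reboot, and the `- - -` wildcard asks for all channels, targets, and LUNs.

```
ls /sys/class/scsi_host/                          # find the FC HBA hosts
echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan
echo "- - -" | sudo tee /sys/class/scsi_host/host1/scan
dmesg | tail                                      # new sd* devices show up here
sudo fdisk -l                                     # and here, ready to partition
```

If the LUN still doesn't appear, check on the storage side that the host mapping actually presents LUN 1 to this blade's WWPN.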
<spiekey> hi
<spiekey> does anyone know if you can set up netcat so that it always echos back?
<teamcobra> are there any adverse effects to assigning all new users to the group "users" instead of a new group being created with the user's name (the default)? I'd like to do a systemwide quota this way
<dthacker-work> teamcobra: not that I'm aware of.  We've done it that way for years.
<teamcobra> dthacker: could you point me in direction of the easiest way to do this (I have the quota all set up, I just need to have it auto-assign all new users to the group 'user')... I've been googling, but without much success
<sommer> teamcobra: how are you creating your users?
<JaxxMaxx_> !pastebin | logist
<ubotu> logist: pastebin is a service to post multiple-lined texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu-nl.org (make sure you give us the URL for your paste - see also the channel topic)
<teamcobra> sommer: ldap
<sommer> teamcobra: setting the gidNumber attribute to the group's number should do it
<dthacker-work> teamcobra: oh. I was using plain old useradd.
<teamcobra> dthacker: it'd like to have it working w/ useradd too, for kicks ;)
<teamcobra> sommer: thanks, will look into that
<sommer> np
<blue-frog> teamcobra: for useradd you need to tweak /etc/useradd.conf
<dthacker-work> useradd -g GID assigns the primary group, or follow the blue-frog ^^
<blue-frog> sorry /etc/default/useradd
<dthacker-work> keep following :)
<teamcobra> uid=30004(exe5) gid=100(users)id: failed to get groups for user `exe5': No such file or directory  :/
<teamcobra> after doing an id on the user
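Following blue-frog's pointer: for plain useradd the default primary group lives in /etc/default/useradd, while Ubuntu's adduser wrapper has its own switches in /etc/adduser.conf (gid 100 is the conventional "users" group). teamcobra's "failed to get groups" error often means the group behind gid 100 isn't resolvable through NSS at all, e.g. a missing group entry in LDAP.

```
# /etc/default/useradd
GROUP=100          # default primary group for `useradd` without -g

# /etc/adduser.conf (the adduser wrapper ignores the file above)
USERGROUPS=no      # don't create a per-user group
USERS_GID=100      # put new users in gid 100 ("users") instead
```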
<ScottK> lamont: Your fix for "request to update table btree:/var/spool/postfix/smtpd_scache in non-postfix directory" does not repair existing installations with the table already in the wrong place.  Dunno if you meant to or not, but thought I mention it.
<lamont> ScottK: correct
<ScottK> OK.  Just checking.
<lamont> I've adopted a rather hands-off policy on config files over time.
<lamont> because it sucks to do otherwise.
<ScottK> I wanted to make sure before I fixed mine by hand.
<ScottK> Agreed.
<lamont> even if it's a change that _MUST_ be made before the daemon will run again, you must prompt the user and get permission before changing his config file for him.
<lamont> consequently, my philosophy these last few years has been to only do it when it causes the daemon to not run if I don't (warnings are left as an exercise), and make the permission check happen in preinst and fail the install if they tell me 'no'.
<lamont> it's the traditional 'I WIN' approach to the problem. :-)
<ScottK> Heh.
<ScottK> Yesterday I approved a Universe Feature Freeze exception.
<lamont> it makes postinst much easier to write, though:  "I got here, so therefore I must have permission. kthx"
<ScottK> The person who asked for it was a bit cautious about it and asked me not to approve it if I wasn't sure.
<ScottK> Yeah.
<ScottK> I told him not to worry.  Either the upgrade would go well and the distro would be better or it would go badly and I'd get to torture him into fixing it.  Either way I win.
<lamont> heh
<lamont> yep
<lamont> besides, it's universe. :-)
<ScottK> There is that.
<ScottK> Would someone who understands sql please look at Bug 211915 and tell me if it's really insecure or if the program is just whining?
<ubotu> Launchpad bug 211915 in amavisd-new "Insecure dependency when using sql for Log Reporting" [Undecided,New] https://launchpad.net/bugs/211915
<Scott08> Anyone had any luck getting mod_mono working?
<sommer> ScottK: what database is he using?
<ScottK> sommer: All I know is what's in the bug.
<sommer> ScottK: I'd probably ask for that info, should help determine the security risk better... but I'm definitely no security expert so others may have a better idea
<sommer> almost looks like a perl thing, to me anyway
<ScottK> sommer: Would you please?  I'm about up to my eyeballs is stuff and know very close to zip about sql.
 * ScottK knows less about Perl than sql.
<sommer> ScottK: sure no problem
<ScottK> Thanks.
<elventear> Hello. I have a problem with RAID and LVM for /. My main OS is on RAID. If I boot with a degraded RAID (one drive removed, for testing), the LVM on the RAID is not recognized. Does this happen because the degraded RAID won't assemble at boot time? Is there some way around this?
<fromport> elventear: what raid setup do you use ? could you give more info ?
<elventear> fromport: I am using RAID 1. I have 3 raid partitions. One for /boot and two for LVM
<elventear> In one LVM VG I have a LV for /
<elventear> and /home
<elventear> The other is for other stuff
<elventear> The system boots fine. Until I remove one of the drives
<elventear> Then the LVM partition for / cannot be found
<fromport> okay, that sounds serious. I haven't tried that to be honest.
<fromport> i used http://www.howtoforge.com/software-raid1-grub-boot-debian-etch  & http://ubuntuforums.org/showthread.php?t=408461
<LeChacal> can anyone tell me a reason why I wouldn't want to make my server auto update by the use of this crontab file http://paste.ubuntu-nl.org/62149/
<faulkes-> err, uhh, if you put sudo in a crontab, it's going to expect a password
<faulkes-> you should instead put it into /etc/cron.daily iirc following the conventions of the files which already are there
<LeChacal> faulkes: no, I edit the crontab with sudo so it doesn't ask for it
<LeChacal> faulkes-: sorry, better English: I edited the crontab file with "sudo crontab -e" and then it doesn't ask for a password when I use sudo in it
<faulkes-> if this is something you intend to do for a long period or as a management technique, I would still suggest using /etc/cron.daily and such as the preferred system rather than going from a user based crontab (i.e. sudo crontab -e)
<faulkes-> I spoke a little French too, but it's been almost twenty years since I last used it
<LeChacal> faulkes-: I understand, but I live in the United States and normally speak English; the screen name is from my favorite book "The Day of the Jackal"
<LeChacal> faulkes-: so other then moving to /etc/cron.daily you don't see any problem with what i am doing, i am doing this because i am setting up a server at school that i am leaving next year and no one else know linux
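LeChacal's pastebin link has expired, but per faulkes-'s advice the same job fits /etc/cron.daily, where it already runs as root and needs no sudo. A sketch only, with hedged flags: --force-confold keeps existing config files when an upgrade wants to replace one, which matters on a box no one will babysit.

```
#!/bin/sh
# /etc/cron.daily/auto-apt-upgrade -- unattended upgrade sketch.
# cron.daily runs as root, so no sudo; noninteractive stops dpkg
# prompts from hanging the job.
export DEBIAN_FRONTEND=noninteractive
apt-get -qq update
apt-get -qq -y -o Dpkg::Options::="--force-confold" upgrade
```

If it's available for your release, the unattended-upgrades package does the same thing with more safety checks and is worth considering for a server that will be left unmaintained.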
<slicslak> i just installed 7.10 ppc on an old mac.  i think i'm having trouble with screen resolutions as nothing shows on boot.  are virtual consoles still available?  should i be able to reach it via ctrl-alt-f1?
<kgoetz> slicslak: sure this is a server question?
<slicslak> yes, because server guys know more than the regulars.  :p
<kgoetz> so its not ...
<slicslak> virtual consoles in 7.10?  that's a server question
<Deeps> you can in 7.10 cli + server
<kgoetz> PPC is not offically supported in 7.10, so i'm guessing a search engine will be your best bet
<Deeps> dunno about the desktop install, but i'd assume so
<Deeps> would be remarkably silly to get rid of them
<slicslak> that's what i figured.  i guess i got other issues then.  ok thanks guys.
<Deeps> try killing x
<Deeps> ie, stop gdm/kdm/xdm/whatever display manager you're using
<good_dana> slicslak: do you get nothing at all, or nothing only when it switches to X?
<Deeps> and then try flicking between terminals
<Deeps> if it's not working within X, then you know where abouts the vague area of the problem is, makes googling easier
<slicslak> well that's the thing.  i get yaboots, after that the monitor turns off.  so hit ctrl-alt-f1 and the monitor turns on but i don't see a prompt (yes, i tried adjusting monitor).
<slicslak> i think i'll try booting off ubuntu-server next and manuall editing xorg config.
#ubuntu-server 2008-04-05
<teamcobra> sommer, do you have any more information on using gidNumber to make the quota through ldap work?
<teamcobra> I haven't had much success finding _good_ info on it
<sommer> teamcobra: are you using OpenLDAP?
<teamcobra> yes sir :)
<teamcobra> I was told that a slapd overlay might be the way to go, but info on that is kind of scarce too
<sommer> and you have user entries in your directory?
<sommer> I use objectClass=posixAccount for my user entries and I think gidNumber is part of that
<teamcobra> yes, but only a couple... and I don;t care if the quota doesn't apply to them
<sommer> so are you migrating existing user's into LDAP?
<sommer> I guess I don't remember where or when I started using gidNumber, but it must have been part of some default config :)
<sommer> seems like user posixAccounts need both a uidNumber and gidNumber
<teamcobra> actually, no... liferay (www.liferay.com) creates the users in ldap
<sommer> for permissions and such
<teamcobra> and I have autodir working
<Silvanov> anyone know anything about IP PBX systems? I am trying to do some research on upgrading our office (one phone line) into multiple phones, extensions, and voicemail. Ive heard theres a linux distro or app that i should look into.
<teamcobra> so each user gets a home
<teamcobra> and can be added to the nxserver/other services
<teamcobra> Silv: I've set up one before
<teamcobra> asterisk is the app, and I'll find the distro
<teamcobra> asterisk is actually a lot of fun ;)
<Silvanov> awesome! thanks teamcobra.
<teamcobra> http://slast.org/ looks nice ;)
<sommer> teamcobra: ah, never seen liferay before... can you get a list of attributes that they use in their LDAP entries?
<sommer> do you configure your server to use their LDAP for auth, permissions and suck?
<sommer> such rather
<sommer> heh
<Silvanov> teamcobra: mind if I PM?
<teamcobra> sure, give me a couple minutes and I'll make a pastebin
<teamcobra> silv: no problem
<keithclark>  If I log in to a remote machine in a terminal using ssh -X username@xxx.xxx.xxx.xxx and then start a program such as firefox, is the program actually running on the remote machine?
<sommer> tes
<sommer> or yes
<keithclark> Wow, so simple and powerful
<sommer> be careful with the powe... it's intoxicating :)
<sommer> or power... heh
<keithclark> hahahaha, yes....
<kgoetz> keithclark: don't know what sort of link you're using but ssh -CX would be good on slow links
<keithclark> kgoetz, what is the difference?
<kgoetz> keithclark: -C is compression
<keithclark> ah, let me try that!
<kgoetz> if you have a slow cpu at either end you may find (or a fast link, ofr that matter) that compression causes more overhead then it saves (just a warning)
<keithclark> It is working about as fast a local!
<kgoetz> :)
<andguent> if just running on a LAN, -C is useless, if running over slow-medium grade internet, it's great :)
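The -X/-C distinction in one place (user and host are placeholders):

```
ssh -X user@host           # forward X11: programs run on host, draw locally
ssh -CX user@host          # same, plus compression: a win on slow links,
                           # extra CPU overhead on a fast LAN
ssh -CX user@host firefox  # run a single remote program directly
```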
<teamcobra> the next step is to make hpn-ssh the default in ubuntu
<teamcobra> now _that_'d be killer
<keithclark> Ah, I am running local.
<teamcobra> multithreaded ssh, ftw
<keithclark> local network that is.
<keithclark> I am running firefox on a remote machine and it is just as fast as it is running on the local machine.  (too many local uses to confuse things)
<andguent> now play with ssh keys, and setup 2 dozen linux workstations, keep everyones home directories and setting on the central server :) a few ssh tweaks, and some adjustments of desktop shortcuts, and everyone runs off the server without knowing it :)
<keithclark> andguent.....That was kind of the idea, on a much smaller scale.
<keithclark> andguent, I'm just learning here.
<andguent> I know I know, I like thinking big, kinda shows the power of it :) use it in whatever way works best for you, therein lies the greatest power -- flexibility
<keithclark> I have a lot to read and learn, but I got this far and am proud and pretty happy.  I just wanted to confirm that it is actually working as I think it is.
<andguent> It can take years to feel fully comfortable in it, just learn to back things up, then break them, then teach yourself why it is broken. The information is out there on how to teach yourself anything Linux, just not always in one concise place...
<keithclark> Oh yeah, I understand that all too well.  I have my main server machine that I don't fool with.  I have this machine I'm on now that I use as the beta test machine.  It is running 8.04 now.  I have two other laptops and a kids machine that runs pclos with xfce.  Lots of machines to learn with!
<keithclark> Just how does this ssh work though?  Does everything run on the remote machine, and then send it to the local machine over the network?  Even the graphics?
<keithclark> Maybe I should be using client/server terminology instead of local/remote?
<teamcobra> keith: what is it?
<teamcobra> yeah, ssh works like that, I use nx on servers to serve up remote desktops
<keithclark> teamcobra, it is just so fast.
<teamcobra> apps run on the server, graphics/mouse are tunneled through ssh to keep it encrypted, very nice
<keithclark> teamcobra, yeah, I can invest in the one server machine and use cheapies for the rest of the machines.
<teamcobra> sommer: had to fix another problem, my layout went crazy... fixed it now, getting those ldap settings
<teamcobra> keith: I'm launching a platform for business to do that w/o buying the server... hopefully tonight/tomorrow ;)
<teamcobra> 1 more thing to iron out
<keithclark> teamcobra, exciting stuff!
<mralphabet> I am apparently missing something with ntp, I install ntp with 'apt-get install ntp', I point a windows machine to the ubuntu machine to sync time and the windows machine says "the peer's stratum is less then the host's stratum", any suggestions?
<keithclark> teamcobra, so all I really need is just a basic ubuntu installation on the client machines then?  No apps really?
<teamcobra> right... some apps are better suited to being on the machine, but only really heavy stuff
<keithclark> I don't really use heavy stuff....No big gaming.  Just one flight sim that I can run on the server when nobody is looking
<keithclark> teamcobra, I'm actually thinking about re-doing my network here....moving the machines around but I'm unsure what machine would be best and where!
<teamcobra> well, not even gaming... more like blender, etc
<teamcobra> heavy gimp usage
<teamcobra> vid editing
<keithclark> teamcobra, not too much.  DVD copying from time to time and DVD creation.
<teamcobra> sommer, sorry for the long wait, here's the screenshot of the ldap confg page in liferay: http://img127.imageshack.us/img127/4503/liferayldaptp5.png
<teamcobra> the admin dn is just cn=admin,dc=mydomain,dc=com
<sommer> teamcobra: I could be wrong, but it looks like you need to add objectClass=posixAccount
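A sketch of what sommer is suggesting, using the uid and numbers from teamcobra's earlier `id` output (the DNs are placeholders): the user entry carries objectClass posixAccount with gidNumber 100, and a matching posixGroup entry must also exist or `id` cannot resolve the group.

```
# user entry (placeholder DN)
dn: uid=exe5,ou=people,dc=mydomain,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: exe5
cn: exe5
sn: exe5
uidNumber: 30004
gidNumber: 100
homeDirectory: /home/exe5
loginShell: /bin/bash

# the group that gid 100 points at
dn: cn=users,ou=groups,dc=mydomain,dc=com
objectClass: posixGroup
cn: users
gidNumber: 100
```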
<rhineheart_m> quick question: how do I find out why mail from outside won't arrive in the local mailbox, when sending and receiving locally works fine? I am using postfix and courier imap with squirrelmail. thanks!
<sommer> that should give you the gidNumber attribute, but there maybe another attribute you can use for what you're trying to accomplish
<teamcobra> and I add that to the Import Search Filter?
<teamcobra> under groups
<sommer> ya, I'm not too sure
<sommer> you could definitely try that
<sommer> my experience with LDAP is administering my own server, which gives you the control to add those type of things :)
<teamcobra> sommer: aha, that does show groups now
<sommer> but I'm sure if you ask the admins they'll be able to help you out
<sommer> cool
<teamcobra> heh, unfortunately, I'm the one adminning this ;p
<jjesse> evening :)
<teamcobra> and well, I've been having a weeklong crash-course in ldap ;p
<teamcobra> evening jjesse
<sommer> uhhh, maybe then I don't understand, are you liferay?
<sommer> hey jjesse
<teamcobra> ohh, no
<sommer> teamcobra: as in are you hosting liferay
<sommer> or a customer of liferay?
<teamcobra> haha, no... and I've been searching the forums for that
<teamcobra> just a liferay user... admin of the server that is being configged
<teamcobra> sorry for the confusion there ;)
<sommer> okay, I'm with ya, you should then just need to configure your server to use it: look into libnss-ldap
<sommer> if you haven't already :-)
<teamcobra> I believe it's set up properly, ldap works fine
<keithclark> Sorry to bother again....I just deleted a user and wanted to recreate him again but it does not seem to allow me to do so.
<teamcobra> http://ubuntuforums.org/showthread.php?t=689229, should help
<teamcobra> I'm lagged
<rhineheart_m> quick question: how do I find out why mail from outside won't arrive in the local mailbox, when sending and receiving locally works fine? I am using postfix and courier imap with squirrelmail. thanks!
<keithclark> teamcobra, no, not working...hmmmm
<teamcobra> hrmm
<keithclark> It allows me to recreate, but the new user just does not appear....no errors.
<teamcobra> odd
<keithclark> maybe a reboot (to quote Windows)
<teamcobra> heheh
<keithclark> In Windows it seems to solve every problem!
<keithclark> I have not rebooted my main server for like 3 months!
<keithclark> No, sorry....I lied....two months
<keithclark> Actually, I should not make fun, I once went about 2 months with XP
<keithclark> ONCE
<keithclark> Ok, reboot does not work.
<teamcobra> hrmm
<teamcobra> not sure, will keep looking in a sec, fighting w/ my own user battle (should be done soon)
<keithclark> (my earlier ramblings were meant as comedy relief btw)
<andguent> Every time I see "A reboot fixed it" I can't help but think of WinNT, a reboot fixed everything, and everything required a reboot, it felt like 2-3 dozen reboots just to patch it all the way from SP0 fresh install
<keithclark> andguent, oh yeah....everything windows is "reboot"
<jjesse> my windows servers I only reboot for patches
<andguent> keithclark: remember the days where changing your IP address required a reboot? yea....
<jjesse> everything else runs stable and rock solid
<andguent> I don't miss that
<teamcobra> heh, I stopped using windows w/o a vm when I did an xp sp1 install that was rooted 5 mins after being connected via dialup (I didn't even get to go to any websites
<teamcobra> so not even an IE pwning... it was scanned and rooted before any patches could be applied ;p
<keithclark> andguent....a software install required a reboot!!!!!!!!!!!!!!!!!!!!!!!!1
<keithclark> andguent....an actual software install.....I cannot believe that.  Buy me, install me and stop whatever you are doing and restart your machine so you can use me
<jords> lol I love it how all the installers still tell yoy to shut down all other programs on your computer before installing
<andguent> yes, but that still happens today, even Windows 2000 can handle an IP address change without rebooting
<keithclark> Windows 2000 was ok
<jjesse> win 2k3 doens't have that problem
<keithclark> I did not say great, I said ok
<andguent> I hope all of the silent 20yr+ computer vets are easily amused by the rantings of young computer users, we never had to walk in the snow up hill both ways :)
<keithclark> hahahhahahahahahah.........
<keithclark> with no shoes
<rhineheart_m> quick question: how do I find out why mail from outside won't arrive in the local mailbox, when sending and receiving locally works fine? I am using postfix and courier imap with squirrelmail. thanks!
<keithclark> rhineheart_m, in all my years I've learned that there are no quick questions!
<rhineheart_m>  okay.. so it is not.. :)
<keithclark> no, just an observation, sorry
<rhineheart_m> mmm.. so have an idea? keithclark?
<keithclark> thinking......
<keithclark> searching.......
<rhineheart_m> hehehehe.. that's great! :)
<jords> When I have users fail sudo authentication (eg not in sudoers), a email gets sent to the root account... how can I change the email address the message is sent to?
<keithclark> Sorry, had to protect my son's computer with dansguardian.....no more "Monster" sites....
<teamcobra> :)
<kgoetz> rhineheart_m: look in your bloody logs for once
<kgoetz> jords: edit /etc/aliases and edit the root: line
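kgoetz's suggestion in full (the address is a placeholder):

```
# /etc/aliases
root: admin@example.com
```

The aliases database has to be rebuilt after editing: run `sudo newaliases` so the MTA picks up the change.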
<kgoetz> keithclark: no more wedding sites either ;)
<kgoetz> or any site that talks about "cocktails"
<rhineheart_m> kgoetz, I cannot see any logs there telling me errors why it can't accept mails from yahoo.com
<teamcobra> bahaha ;)
<kgoetz> teamcobra: you can laugh :p
<keithclark> kgoetz:????
<kgoetz> keithclark: :)
<keithclark> Why would flash not work on a user when it works on an administrator?
<teamcobra> the cocktails one was punny ;)
<kgoetz> because it wasnt intalled globally?
<teamcobra> I needed a lauch, watching Hitman and working w/ ldap, 2 very grim things
<teamcobra> laugh
<keithclark> kgoetz, and to do so?
<kgoetz> keithclark: no idea. i dont use flash
<kgoetz> althought it sounds like an #ubuntu question to me
<keithclark> teamcobra, kgoetz, yes, cocktail was funny
<keithclark> do I sound like a straight nerd here?
<teamcobra> everyone has a nerd moment sometime ;p ;p
<keithclark> Ok, my son just wants to play a game that requires flash........and it does not.
<keithclark> Ok, nevermind, it's his bedtime.....maybe nextime!  Thanks.
<teamcobra> keith: using gnash?
<keithclark> gnash?
<teamcobra> gnash doesn't play _everything_ yet.... it's getting there, but not quite
<teamcobra> yeah, the gpl flash plugin
<keithclark> Hey, we got so far today.....let's see what tomorrow brings.
<keithclark> Thank you all for the help
<teamcobra> for max compatibility, I'd say just use the lame adobe one
<teamcobra> k, gnit
<teamcobra> gnite
<keithclark> No, I'm not tired
<keithclark> Just so much fun here
<teamcobra> :)
<keithclark> I still cannot believe that this is a connection between this feeble machine and the server upstairs.
<teamcobra> :) webdav is the next step
<teamcobra> it'll rock your world
<teamcobra> ;)
<dthacker> ghostnob: hostname -f returns the fully qualified domain.  What were you expecting?
<rhineheart_m> can plesk be installed in ubuntu? is it supported by cannonical
<ghostnob> i was expecting hostname to return ghost1.mydomain.com
<Silvanov> isnt plesk commercial/not open source?
<rhineheart_m> Silvanov, it is commercial
<ghostnob> the tutorial said: "Both should show server1.example.com. If they do not, reboot the system:" when you run hostname and hostname -f
<dthacker> hostname (no params) only returns the short hostname, so in the example you just gave it would return "ghost1"
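dthacker's point can be checked without touching a real machine's hostname: the short name is just the FQDN cut at the first dot. A sketch, using the hypothetical ghost1.mydomain.com from the discussion:

```shell
# "hostname" prints the short name; "hostname -f" prints the fully
# qualified one. The short form is the FQDN up to the first dot:
fqdn=ghost1.mydomain.com   # hypothetical FQDN from the discussion
short=${fqdn%%.*}          # strip everything from the first dot onward
echo "$short"              # -> ghost1
```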
<dthacker> !commercial
<ubotu> Sorry, I don't know anything about commercial - try searching on http://ubotu.ubuntu-nl.org/factoids.cgi
<ghostnob> o ok... thanks dthacker
<dthacker> ghostnob:  you may want to look at "man hostname" to check out the options.
<ghostnob> ok...
<ghostnob> and I can't run GUI in the server edition. when I type startx it gives me error even though I installed init
<rhineheart_m> ghostnob, do you really want the GUI?
<ghostnob> yeah...
<dthacker> rhineheart_m: I did a quick search on packages.ubuntu.com, and I don't see plesk.
<rhineheart_m> dthacker, thanks for your fingers for doing the search..
<dthacker> ghostnob: use the command line force. let go of your gui-ness ;)
<ghostnob> just want to see how it looks before I setup everything finally on my second computer
<ghostnob> coolll
<ghostnob> I just force
<ghostnob> I should just type force
<dthacker> rhineheart_m: you might want to bookmark that page, very handy
<dthacker> !restricted
<ubotu> For multimedia issues, this page has useful information: https://help.ubuntu.com/community/RestrictedFormats - See also http://help.ubuntu.com/ubuntu/desktopguide/C/common-tasks-chap.html - But please use free formats if you can: https://help.ubuntu.com/community/FreeFormats
 * dthacker shakes his head and mutters.
<patoe1> hello
<patoe1> is anyone here?
<foo> patoe1: Ask your question, always.
<patoe1> ok im thinking about downloading and installing ubuntu server for this computer, but i was wondering how different it is from kubuntu and xubuntu
<foo> ubuntu server doesn't have X, it's all command line
<patoe1> oh...
<patoe1> lol that sounds really difficult
<patoe1> well whats the best linux os for a less experienced person
<foo> Ubuntu server is meant to be run on a server
<foo> "best" is subjective. kubuntu or ubuntu should be fine
<patoe1> is xubuntu good too?
<patoe1> i find the Xfce better :p
<foo> Or, that too. Again, "best" is subjective. Try them, see what you like, and stick with it :)
<patoe1> ok :D
<patoe1> and can i run like mac programs on linux? or windows programs on linux or does it have to be linux based (made for)
<foo> Depends. You can run some windows programs with WINE
<foo> I haven't ever tried / needed to run mac programs on linux, so I'm not sure... but there might be a project out there
<patoe1> ok thanks
<murray98> hi guys, I'm doing an upgrade of torrentflux-b4rt and running into a wall repeatedly.
<murray98> whenever I enter " tar -xjvf torrentflux-b4rt_1.0-beta2.tar.bz2"
<murray98> I get "bzip2: (stdin) is not a bzip2 file.
<murray98> tar: Child returned status 2
<murray98> tar: Error exit delayed from previous errors"
<murray98> would be eternally grateful if anyone can offer a smidgen of advice!:)
<sommer> are you sure the file isn't corrupted?
<sommer> the bzip file that is
<sommer> try file torrentflux-b4rt_1.0-beta2.tar.bz2
<sommer> and see what it says
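The `file` check sommer suggests is easy to rehearse with scratch files: a failed download that actually fetched an HTML error page is identified immediately (filenames here are throwaways, not the real torrentflux tarball):

```shell
# scratch directory so nothing real is touched
cd "$(mktemp -d)"

# a "tarball" that is really an HTML page -- what you get when a download
# link points at a server-side wrapper script instead of the file itself
printf '<html><body>404 not found</body></html>\n' > bad.tar.bz2
file bad.tar.bz2        # reports an HTML document, not bzip2 data

# a genuine bzip2-compressed tarball for comparison
echo hello > payload
tar cjf good.tar.bz2 payload
file good.tar.bz2       # reports bzip2 compressed data
```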
 * rhineheart_m wants to ask sommer if he knows a better combination of webhosting panel for gutsy server 
<sommer> what's a webhosting panel?
<murray98> hey sommer, it says it's an HTML document text. I used Wget to dl it - what am I doing wrong?
<murray98> here is the website I'm dling from: https://developer.berlios.de/project/showfiles.php?group_id=7000&release_id=14392
<sommer> ya, seems like it didn't actually download a bzip file
<sommer> have you tried a regular browser?
<sommer> you could then copy the file to your server or whatever
<murray98> well, I'm trying to dl it to my Ubuntu Server via SSH via my terminal
<sommer> sure, but sometimes downloads are "wrapped" by server side scripts... which probably confuses wget
<murray98> i see
<sommer> you can use scp to copy the file from a client to your server
<murray98> ok - i will look up the  man page on scp. But there's no way to dl it via my terminal onto my server?
<sommer> not sure try wget --help
<sommer> I don't mean to rtfm you, but I'm not that familiar with wget's options
<murray98> totally cool. I major appreciate you taking the time!
<sommer> np
<murray98> i will check out the help section, see if there's some insight there.
<murray98> thanks a lot
<sommer> murray98: welcome
<sommer> rhineheart_m: what web hosting panel are you looking for?
<rhineheart_m> sommer, the one that could manage basic services for webhosting. like that of plesk
<sommer> I guess the only one I know of would be eBox
<sommer> do you host websites?
<rhineheart_m> sommer, yeah I do
<sommer> ah, cool
<sommer> I hate to say it, but I think at this point there may not be too many options for that kind of thing
<sommer> but there's always the configuration files :-)
<teamcobra> sommer: ever use lamdaemon?
<sommer> nope, can't say I have
<teamcobra> ok
<sommer> is that a management app?
<teamcobra> yes, and it can handle quotas through a script supplied
<sommer> cool
<sommer> I've been contemplating cfengine and puppet myself
<sommer> lately anyway :)
<teamcobra> but it keeps telling me that my admin user (ldap admin dn) must be a valid unix account to work... and admin is a user
<teamcobra> :/
<sommer> oh woops, didn't realize you were talking about LDAP
<sommer> is that the groups thing?
<teamcobra> yea
<sommer> ya, that's kind of a hard situation for me... I've only dealt with situations where I control the LDAP server and the client server
<sommer> are the attributes in the screenshot the only ones you get?
<teamcobra> yeah, I'm having a near aneurysm ;p
<sommer> heh, I can relate to that
<sommer> I think the issue is that you don't have the attributes for what you are trying to accomplish
<teamcobra> yes, but if I can make a group (ldap or posix), I can force each new user to be a member of a group in liferay
<sommer> sure
<sommer> so depending on how you create the user would determine how they are added to the group
<teamcobra> yes, but there's a configuration box to add users to a group automatically
<sommer> but is that group LDAP or posix?
<teamcobra> pretty sure it's ldap, 1 sec
<sommer> are you familiar with Active directory?
<teamcobra> ldap defined posixGroup
<teamcobra> not very :/ a little bit, I've gotten auth working, and creating users on 2 different machines
<sommer> cool, I was just going for an analogy to help explain
<sommer> either way, what you have is an LDAP group and a local system (posix) group, and the Group ID number needs to match between the two
<sommer> if that makes sense
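sommer's point about matching IDs can be sketched like this (hypothetical names throughout): the `gidNumber` in the LDAP posixGroup entry and the GID in the client's local group database have to agree for the two systems to describe the same group.

```
# hypothetical LDAP entry
dn: cn=staff,ou=groups,dc=example,dc=com
objectClass: posixGroup
cn: staff
gidNumber: 10000

# matching line in /etc/group on a client (same GID, 10000)
staff:x:10000:
```

With libnss-ldap configured to consult LDAP for groups, `getent group staff` should then resolve regardless of which side the entry lives on.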
<teamcobra> ahh, I wasn't sure if they could share numbers
<teamcobra> that clears up a _LOT_
<teamcobra> brb
<sommer> sure they can, they're two different systems :-)
<sommer> at least that's the way I think of it
<sommer> but they can serve the same function, so it can be confusing
<teamcobra> and the ldap group will inherit the posix group's quota?
<sommer> basically
<sommer> yes
<sommer> if you configure the system to look at ldap for group information
<sommer> which is where nss-ldap/libpam-ldap comes in
<sommer> err, libnss-ldap
<teamcobra> ok, I do have that installed, will look at its conf
<sommer> there's also a guide in the ubuntu wiki, not sure how up to date it is though
<sommer> should point you in the right direction
<creAtion> interesting to see that Sun are looking to Ubuntu Servers for the development in the future
<creAtion> http://seekingalpha.com/article/71011-sun-microsystems-next-linux-move
<teamcobra> also interesting that they've grabbed virtualbox
<creAtion> yeah I hope MySQL and virtualbox don't change their licensing
<sommer> is virtualbox sun?
<teamcobra> it is now
<sommer> eh, didn't know that, kinda cool
<creAtion> if Sun want to work more closely with Ubuntu Server hopefully we can get things like ZFS ported over
<sommer> seems to me that sun sees the value of the community around MySQL so I don't think they'd change the license
<sommer> zfs would be cool too :)
<teamcobra> raidz would be very cool ;)
<teamcobra> Cannot write to `radiofreefinland_20051030_Matti_Hautsalo_KEPA_global_trade_WTO_32kbps.mp3' (Disk quota exceeded).
<teamcobra> :D :D now to try it with liferay, brb, victory cigarette is in order ;)
<sommer> I'm with that
<murray98> hey summer - so I found my dl link thanks to you and got that sorted. Now I'm having a bit of trouble with my db-config file
<murray98> oops, sommer!
<murray98> bleary eyed misspelling
<sommer> db-config?
<sommer> for you site?
<murray98> yes, this file: inc/config/config.db.php
<murray98> for my TF-b4rt
<sommer> ah, what db are you using?
<murray98> mysql (did I answer that right? Sorry, kind of a noob with this)
<sommer> yep, sounds about right
<sommer> murray98: have you setup a database and database user for your site?
<murray98> yes. I had tf-b4rt running great on my server. I have since tried to upgrade to a newer version and am having probs. In the upgrade instructions, they say "Restore your db-config-file (inc/config/config.db.php)"
<sommer> ah, did it get overwritten in the upgrade? if so do you have a backup?
<sommer> of the config file that is
<murray98> well, it seems like my "upgrade" only consisted of dl'ing one file, replacing my html dir with the new one, then restoring this config file.
<murray98> i don't have a backup. nothing super valuable on there, fwiw.
<murray98> not to throw a manual your way (!) but here are the 4-step instructions, if you have a sec: http://tf-b4rt.berlios.de/forum/index.php/topic,1200.0.html
<murray98> i'm merely upgrading
<murray98> what I did was change the config.db.php.dist file to "config.db.php."
<murray98> i saw that mentioned in a previous forum so I tried it out. Would that be the source of my problem?
<sommer> ah, did you try renaming config.db.php. back to the original name?
<murray98> no!
<murray98> ok, lemme try now.
<sommer> I'd do a copy instead of a mv... so that you have a backup :)
<murray98> k
<murray98> now reboot?
<sommer> not sure, probably shouldn't need to
<sommer> maybe try refreshing your browser
<ghostnob> can someone help. I need to run my ubuntu server edition in GUI mode. How can I do that?
<sommer> !servergui
<ubotu> Ubuntu server does not install a desktop environment or X11 by default in order to enhance security, efficiency and performance.  !eBox provides a GUI system management option via a web interface.  See https://help.ubuntu.com/community/ServerGUI for more background and options.
<sommer> ghostnob: the above site should document the process
<ghostnob> ok.. thanks... I'm just testing thing and I will surely go through the documentation... thanks
<sommer> no problem
<murray98> darn it. still says my Tf-b4rt is not found.
<murray98> how can I verify that I have the correct settings for my config.db.php file?
<zylstra555> Hello, my Ubuntu server is having problems staying connected. I honestly dont have the slightest idea of where to start. Can someone help me with this?
<sommer> is "Tf-b4rt" the name of your site?
<murray98> the URL I'm accessing when I use tf-b4rt, from my other computer, is http://192.168.1.124/torrentflux/index.php
<sommer> murray98: does it work from a browser? or just from a torrent client?
<sommer> as in should it work from a regular browser?
<murray98> sommer: yes, it works from a browser.
<sommer> did your apache configuration change?
<murray98> not intentionally :( . How can I verify I have the correct settings in place? All I was trying to do was delete my docroot path
<sommer> yaaa, that could be an issue... which files did you delete?
<sommer> the default is /var/www
<sommer> zylstra555: what do mean by staying connected?  network?
<zylstra555> sommer: Its suddenly not accessible to the outside for an unknown period of time
<sommer> is there anything in the logs?
<sommer> /var/log/syslog ?
<zylstra555> sommer: I am not sure where they would be located
<murray98> sommer: I deleted /usr/share/torrentflux/www
<murray98> that should be /usr/share/torrentflux/www/
<zylstra555> sommer: Theres a slight problem... I cannot access it at at the moment.. (I usually am right next to it, but, I am currently attempting remote access) and its not responding..
<sommer> murray98: ya, I'd bet that apache is still looking for that directory
<zylstra555> sommer: I'm pinging it continuously in hopes that it might come back up.
<sommer> murray98: you should try extracting the new version into there
<sommer> zylstra555: ya, could be anything then... hardware, network, etc
<murray98> sommer: extract the entire original tarball into there?
<sommer> murray98: from what I can tell that's what the upgrade instructions say... then copy your db.config.php, or whatever, into there
<murray98> or just the html dir?
<murray98> hmmm. interesting. will give it a whirl and report back. Thanks!
<sommer> np
<zylstra555> sommer: Well, during any time, it is accessible to the local network that its connected to. Its something to do with how the external connection is being kept, I dont think its a router issue though. I wonder if its my ISP dropping the connection
<sommer> zylstra555: possibly, but it's hard to tell without input from the box itself
<sommer> how long has it been inaccessible?
<zylstra555> sommer: Thats the other problem, I dont know how long exactly it stays connected for.
<sommer> okay, because you're not connected now? or because you can't connect?
<zylstra555> sommer: I am not able to connect at the moment. (I know I should have waited until I could, but, I was hoping to get some ideas of what might be the problem)
<sommer> how do you connect ssh?
<zylstra555> sommer: Yes. Its a webserver, its not responding to the following (that I have tested so far) Pinging, FTP, SSH, or just attempting to visit it
<sommer> for me that would mean something serious, but depending on your ISP they could be blocking I guess
<sommer> the next time you have access you could try changing the ssh port to something non-standard... that should help determine if the ISP is doing something anyway
<zylstra555> sommer: I read their policy, theres nothing against home servers, and its a small ISP (I doubt they even monitor bandwidth, which, I hardly use any of as well)
<rhineheart_m> can anybody here help me to diagnose why I am not receiving mails from yahoo.com?
<concatenate> sure, describe the problem
<concatenate> you have your own domain and do your own DNS and MTA?
<rhineheart_m> yeah.. actually.. it did work before..
<murray98> sommer: thx for your advice. I've got to figure out more info about what my docroot path even is. I may have deleted some valuable stuff and not known it. Anyway - more tomorrow. thanks for your help.
<rhineheart_m> how would I know if it is related to firewall configurations.
<sommer> murray98: welcome, I think I'm done for night as well
<rhineheart_m> it just started when I changed router.. but I port forwarded already 143 and 110
<concatenate> do you get mail from anyone rhineheart_m?
<rhineheart_m> concatenate, within the system I can receive mails.. like mail sent from another user in the same domain
<rhineheart_m> but from yahoo.. it won't
<concatenate> rhineheart_m: you have a dumb home router like a linksys? So you have MX records pointing to your domain?
<concatenate> if so what's the domain?
<rhineheart_m> yeah.. I have it.. linksys
<concatenate> rhineheart_m: I want to make sure that you're talking about a full internet mail setup where anyone can send mail to your domain, and you have a mail server like postfix or sendmail or exim that accepts it and delivers it
<concatenate> you only mentioned POP/IMAP so far
<rhineheart_m> concatenate, I have postfix and courrier-imap running in my box
<concatenate> rhineheart_m: ok so what's the domain? I can test it remotely quite easily
<rhineheart_m> concatenate, can I send it in PM?
<concatenate> sure
<concatenate> or just email me at nate at campin.net and I'll see it
<rhineheart_m> how about just telling me how to test it? I am working remotely..
<concatenate> rhineheart_m: you should have MX records or an A record for the domain, then ppl should be able to telnet to port 25 on the host and see a SMTP server there accepting mail for the domain
<concatenate> rhineheart_m: I normally type in the "rcpt to:" and "mail from" and such right into telnet to test it out - send an email manually
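The manual test concatenate describes looks roughly like the session below (`S:` lines come from the server, `C:` lines are typed by you; the hostnames and addresses are placeholders, and the response codes shown are typical of Postfix):

```
C: telnet mail.example.com 25
S: 220 mail.example.com ESMTP Postfix
C: HELO client.example.org
S: 250 mail.example.com
C: MAIL FROM:<tester@example.org>
S: 250 2.1.0 Ok
C: RCPT TO:<user@example.com>
S: 250 2.1.5 Ok
C: DATA
S: 354 End data with <CR><LF>.<CR><LF>
C: Subject: manual delivery test
C:
C: test body
C: .
S: 250 2.0.0 Ok: queued
C: QUIT
```

If the connection to port 25 never opens at all, the problem is the firewall/port-forward rather than the mail server itself.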
<rhineheart_m> concatenate, you mean.. I need port 25 to be open not just ports 110 and 143?
<concatenate> rhineheart_m: correct that's the SMTP port
<rhineheart_m> concatenate, that's the port I missed to open
<concatenate> sometimes you'll open up some higher ports for non STARTTLS encrypted mail too, but that's not required or even normal
<rhineheart_m> so, if ports 110 143 are only open.. it can't receive mails from the outside, concatenate?
<concatenate> rhineheart_m: correct
<concatenate> 110 and 143 are POP and IMAP
<concatenate> you don't really want those unless you force STARTTLS - those aren't encrypted
<rhineheart_m> concatenate, and they are for fetching mails from postfix only.. correct?
<concatenate> you want the SSL wrapped version
<concatenate> correct,from courier or whatever you're using
<concatenate> postfix is just SMTP and delivery, strictly speaking
<unop_> rhineheart_m, you need port 25 forwarded if your SMTP server is ever to receive mails from another
<rhineheart_m> concatenate, within the LAN
<concatenate> it doesn't do POP or IMAP
<concatenate> rhineheart_m: once your mail works just do away with POP and IMAP and use a SSL version (or wrap with stunnel) and use the SSL ports for them
<rhineheart_m> concatenate, that's a great idea! how to do it?
<concatenate> egrep 'imaps|pop3s' /etc/services
<concatenate> you'll see the port numbers
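The same lookup, run here against a small excerpt of typical /etc/services entries (copied into a scratch file so the sketch is self-contained):

```shell
# excerpt of typical /etc/services lines, written to a temp file
svc=$(mktemp)
cat > "$svc" <<'EOF'
pop3    110/tcp    pop-3
imap2   143/tcp    imap
imaps   993/tcp              # IMAP over SSL
pop3s   995/tcp              # POP-3 over SSL
EOF
# the command from the discussion: pull out the SSL-wrapped service ports
egrep 'imaps|pop3s' "$svc"
```

On a real system, drop the temp file and just run `egrep 'imaps|pop3s' /etc/services`; the ports it reports are 993 (imaps) and 995 (pop3s).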
<concatenate> on ubuntu server? I seem to only have Debian boxes handy ;)
<rhineheart_m> I am using ubuntu... egrep 'imaps|pop3s' /etc/service gives me an error
<concatenate> with stunnel it's easy, and surely documented on the stunnel site, it was there years ago anyways, sure to still be there, this is really common
<concatenate> oh /etc/services
<concatenate> sorry
<concatenate> rhineheart_m: oh I had it right, you musta cut-n-pasted and missed a char
<rhineheart_m> I got 993 and 995
<concatenate> rhineheart_m: yup that's them, now google for how to configure courier to do SSL or for how to wrap it with stunnel
<concatenate> the first it better, since it can do STARTTLS for nicely behaved clients
<concatenate> but the wrapper thing works fine
<concatenate> then don't even support unencrypted POP or IMAP
<concatenate> those days are over, don't even do it
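A minimal stunnel wrapper for the two SSL ports might look like the fragment below (hypothetical paths; generating mail.pem and enabling courier's native SSL/STARTTLS are covered in their respective docs, and the native route is preferable, as concatenate notes):

```
; /etc/stunnel/stunnel.conf -- hypothetical minimal wrapper config
cert = /etc/stunnel/mail.pem

[imaps]
accept  = 993
connect = 143

[pop3s]
accept  = 995
connect = 110
```

Each section accepts SSL connections on the secure port and forwards the decrypted traffic to the plain-text daemon on localhost.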
<rhineheart_m> thanks.. what client you use to fetch mail?
<concatenate> rhineheart_m: depends on whether it's work or home - at work I use fetchmail to get mail from exchange then a local postfix/procmail and I read it with mutt
<concatenate> personal used to be postfix and local mutt on my mail server
<concatenate> these days it's actually hosted by google after my mail server died :(
<concatenate> and I just use the web interface for now
<rhineheart_m> do you host multiple websites concatenate ?
<concatenate> rhineheart_m: well just for me and family, my job is running web sites though, but not a hosting company, online advertising
<rhineheart_m> do you use web based control panel ?
<concatenate> rhineheart_m: no, it's only me administering it, so no need
<rhineheart_m> okay.. I have plans actually to host some sites...maybe 4
<concatenate> it's just for family so I do all the work for them, at work it's all done by SA's so it's file-based changes, pushed by cfengine
<rhineheart_m> but I need web interface for them...got an idea?
<concatenate> rhineheart_m: I wouldn't actually know, I don't deal with those - I hear about cpanel all the time but I don't even know if it's open source
<rhineheart_m> concatenate, yeah.. its not free
<concatenate> rhineheart_m: webmin might be able to help, but I've really never used that either
<concatenate> you should find some mailing list full of web hosting types and ask them
<rhineheart_m> where?
<concatenate> rhineheart_m: check the debian lists - I seem to remember a debian hosting/ISP list or something
<rhineheart_m> concatenate, have you heard about ISPConfig?
<dennister> hey guys...hoping someone can help me restore a postgres db...never done this b4 and don't even know if I can
<dennister> situation is that i had a working db on one server, then I rebuilt the server and have the environment ready to restore the dump I'd done from old server...hostname is different tho, so ???
<dennister> <-------total newb when it comes to postgres
<dennister> pg_restore ...???
<stefg> Hi, i could use some opinions on whether to use lvm or not when setting up a new server (quad xeon, 4gb, 4x 500 GB, probably in some raid 5 setup). Anyone use lvm ?
<theunixgeek> I ordered a Ubuntu server CD via shipit. Where can I see a photo of what the CD case looks like?
<faulkes-> stefg: yes and no is the basic opinion
<faulkes-> LVM is great, if you are familiar with how it works and how it handles RAID
<faulkes-> however, I see many many problems with admins who are unfamiliar with it when a disk issue arises
<faulkes-> as such, I would suggest reading up on LVM and what LVM can do for you before making a decision to use it
<faulkes-> (note, especially in a raid1 config)
<stefg> faulkes-: haha .... so is the extra complexity worth the benefits.... and that's the point: what happens in case of disk failure. i know i will have to familiarize myself with it, but what about the other admins, who are challenged enough with windows
<faulkes-> the most typical problem I see with LVM is folks who use it in a raid1 config, unfortunately there is an issue right now, in that if there is a dirty disk in raid1 config, it won't mount
<stefg> faulkes-: i see...  i'm envisioning raid 5 anyway,
<faulkes-> s/dirty/failed/etc.. as raid would think of the disk, not in general if it just requires an fsck
<faulkes-> stefg: a good starting point would be the official forums to see some of the issues, which may be more specific to your own case
<faulkes-> http://ubuntuforums.org/forumdisplay.php?f=7
<faulkes-> are the server forums
<faulkes-> i.e. are you using dell perc, etc..
<stefg> faulkes-: yup, i skimmed over some forum posts already, and generally understand what it is about. i just have a hard time judging if the added complexity and (possible) i/o overhead pays off in my case
<faulkes-> depends on your case ;)
<stefg> faulkes-: samba/ftp and LAMP only for Intranet purposes. some vmware virtual machines. not likely to need more storage space in the next 2 years (so being able to enlarge volumes is nice to have, but not mandatory)
<faulkes-> i/o overhead is generally overrated in my opinion except in very specific situations
<faulkes-> and in those situations you just mentioned, even with heavy use on the samba/ftp/lamp side of things, the i/o overhead is minimal
<faulkes-> as it would seem you are configuring for disaster situation in which a disk dies and such
<stefg> the underlying bad feeling i have is: i'd like to have an idea what data go where in physical terms, and i feel i'll never know on which disk an array is actually placed when using lvm
<faulkes-> a not unreasonable fear
<faulkes-> but not one I can address
 * faulkes- is not an lvm expert 
<faulkes-> then again though, you do take backups right? ;)
<stefg> faulkes-: real men ftp their stuff and let the world mirror it :-) .... of course i'll have a backup server in another building
<symtab> any ideas how i can get root privileges on ubuntu server?
<symtab> 7.10
<symtab> sudo /bin/bash -i
<symtab> from the user i created when i installed system
<symtab> returns
<symtab> user not in the sudoers file
<lamont> the initial user should be in sudoers
<lamont> although these days, I think that's via %admin or such
<symtab> %admin?
<symtab> whats that?
<lamont> the admin group
<lamont> see man 5 sudoers :)
<lamont> worst case, rebooting in single user mode (recovery), or from a livecd will allow one to fix the disk
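The %admin mechanism lamont mentions is a single line in /etc/sudoers (always edit it with `visudo`, which syntax-checks before saving):

```
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
```

So the usual fix is to add the account to that group, e.g. `adduser symtab admin` from a root shell or recovery mode, then log out and back in.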
<mralphabet> sudo su -
<lamont> mralphabet: that assumes that the user is in sudoers....
<mralphabet> true
 * lamont -> yard work
<fromport> otherwise: reboot with live disk , chroot into disk environment and set rootpasswd "passwd root"
<Death_Sargent> Need help with php5-mysql not working properly
<Death_Sargent> trying to use e107 cms and all i get back is "e107 requires PHP to be installed or compiled with the MySQL extension to work correctly, please see the MySQL manual for more information."
<symtab> i managed to get root
<symtab> now how do i configure  network card to use dhcp
<symtab> ?
<symtab> is there a tool for this?
<symtab> or do i have to manually edit a file?
<symtab> like /etc/network/interfaces?
<ivoks>               iface eth1 inet dhcp
<ivoks> man interfaces
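ivoks' line goes in /etc/network/interfaces; a minimal stanza looks like this (eth1 as in his example; substitute your interface name):

```
# /etc/network/interfaces
auto eth1
iface eth1 inet dhcp
```

The `auto` line brings the interface up at boot; after editing, `sudo ifup eth1` (or restarting networking) picks up the change.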
<Death_Sargent> Need help with php5-mysql not working properly
<Death_Sargent> trying to use e107 cms and all i get back is "e107 requires PHP to be installed or compiled with the MySQL extension to work correctly, please see the MySQL manual for more information."
<ivoks> did you install php5-mysql?
<Death_Sargent> acording to the admin yes
<ivoks> was apache restarted?
<Death_Sargent> he says he ran apt-get install php5-mysql
<Death_Sargent> and it stated it was installed
<Death_Sargent> not sure about that
<Deeps> create a phpinfo page to confirm the module is loaded correctly
<Death_Sargent> Hod do i do that
<Deeps> create a php file containing: <? phpinfo(); ?>
<Deeps> and open that in your browser
<Deeps> and search for mysql
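Creating the test page from the shell (writing to a scratch directory here; on a real server it would go in the web root, typically /var/www). Note the full `<?php` tag: the short `<?` form Deeps shows only works when PHP's short_open_tag setting is enabled:

```shell
webroot=$(mktemp -d)                          # stands in for /var/www in this sketch
printf '<?php phpinfo(); ?>\n' > "$webroot/info.php"
cat "$webroot/info.php"                       # prints the one-line page source
```

Then browse to http://your-server/info.php and search the output for a "mysql" section; remove the file once you're done, since phpinfo leaks configuration details.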
<Yahooadam> Hey, my server has just started giving me issues
<Yahooadam> when it boots, it hits rc.local, and then crashes - before it crashes its pingable by the network, but then it just disappears again, and wont accept keyboard input (but still displays a picture on the screen)
<Yahooadam> is there any particular log i should look at that might help? (it works fine in recovery mode, and memtest looked ok for the 5 or so mins i ran it for)
<Deeps> what's in your rc.local?
<Yahooadam> nothing
<Yahooadam> its a tiny bit after rc.local (when the boot process is finished i guess) that it actually stops responding
<ivoks> then start your system in rescue mode
<ivoks> and start services one by one
<Kahan> my server just froze or something (became un pingable and unreachable via ssh at least) so I plugged the old CRT back in and rebooted it, now it just freezes at "loading GRUB" : /
<Deeps> Kahan: sounds like a disk problem, try a livecd and an fsck, if it makes it through that, try reinstalling grub on the disk
<Kahan> Deeps, just fsck with no additional args?
<Deeps> Kahan: well you'll need to specify the target file system
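An fsck run can be rehearsed safely on a throwaway filesystem image before touching the real partition (which must be unmounted, e.g. from a live CD, targeting something like /dev/sda1). This sketch assumes e2fsprogs is installed:

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=4 2>/dev/null
mkfs.ext2 -F -q "$img"     # tiny throwaway ext2 filesystem inside the file
fsck.ext2 -f -n "$img"     # -f: force a full check, -n: read-only, no repairs
```

On the real disk you would drop `-n` so fsck can actually repair, and only run it while the filesystem is not mounted.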
<Deeps> Death_Sargent: http://i28.tinypic.com/2z3nmg3.png you'll want to see something like that in your phpinfo output
<Deeps> Death_Sargent: or rather, within the output, they'll be more sections than just the mysql one
<Kahan> Deeps, kk thanks, I'll check the man page, gonnae give that a try (AFK)
<Death_Sargent> Deeps, ok I am just having trouble with making the php document as web design is new for me
<Yahooadam> how can i work out what starts at boot?
<Deeps> Yahooadam: look in /etc/rc2.d. links starting with S are started at runlevel 2, which is the default multiuser runlevel in ubuntu
<Deeps> Yahooadam: the number following the S denotes priority, and the order in which services are started
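Deeps' description of /etc/rc2.d can be sketched with a scratch directory (on a real system the entries are symlinks into /etc/init.d; plain files stand in for them here):

```shell
# only S-prefixed entries are started when entering the runlevel,
# and they run in lexical order: S10 before S20 before S90
d=$(mktemp -d)
touch "$d/S10sysklogd" "$d/S20ssh" "$d/S90mysql" "$d/K20someservice"
ls "$d" | grep '^S' | sort     # the start order for this runlevel
```

K-prefixed entries (kill scripts) are skipped on startup, which is why only the three S entries are listed.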
<Yahooadam> ooo
<Yahooadam> it appears mysql just crashed my computer
<Yahooadam> yep, mysql did it ...
<Yahooadam> ok, so how do i stop mysql crashing my computer, preferably without losing all my data ....
<Deeps> look through the logs in /var/log (syslog + mysql's)
<Kahan> thanks Deeps, that fixed it
<Deeps> your databases should be in /var/lib/mysql
 * Yahooadam goes to reboot his server
<Deeps> make a backup of that
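Deeps' backup suggestion as a sketch (a scratch directory stands in for /var/lib/mysql here; on the real box, stop mysqld first so the table files are consistent):

```shell
datadir=$(mktemp -d)                  # stands in for /var/lib/mysql
echo 'placeholder' > "$datadir/ibdata1"
# sudo /etc/init.d/mysql stop         # on the real server: stop the daemon first
backup="$(mktemp -d)/mysql-backup.tar.gz"
tar czf "$backup" -C "$datadir" .
tar tzf "$backup"                     # verify the archive lists the data files
```

Keep the archive somewhere off the troubled machine before experimenting further.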
<Deeps> Kahan: good stuff! :)
<ivoks> mysql crashed it?
<Yahooadam> ivoks - yep
<ivoks> i don't see how mysql could crash it
<ivoks> mysql could consume your CPU, or RAM, or rape your disk, but if it crashed, you have a hardware problem
<Yahooadam> well i started mysql and the server stopped responding ...
<ivoks> that doesn't mean a thing
<ivoks> start it in rescue mode and run tail -f /var/log/kern.log
<ivoks> and wait for 5 minutes
<Yahooadam> right, ive started everything i had last time except mysql
<Yahooadam> waiting 5-10 mins
<Deeps> if nothing occurs in the next 5-10min, try an fsck
<Yahooadam> still responding ok
<Yahooadam> how would i run fsck on the root filesystem?
<Yahooadam> (i mean, due to it being mounted ...)
<unop__> Yahooadam, it's best done offline
<Yahooadam> like, use a livecd or something
<Yahooadam> all fs's are clean, but mysql is starting now
<Yahooadam> so it mustve been something else
<ivoks> check logs
<ivoks> they might have some information
<youngmusicorg> I got something strange going on here. Since a server reboot yesterday, I get about 20 of these messages when i log in: -bash: [: !=: unary operator expected
<youngmusicorg> furthermore, ls gives this errror: ls: unrecognized prefix: do
<youngmusicorg> and rm simply does not work
<youngmusicorg> Haven't tried other commands, but i doubt they're all ok
<youngmusicorg> obviously something is damaged on my system, but what?
<sommer> youngmusicorg: all are your partitions mounted?
<youngmusicorg> yes, they are
<youngmusicorg> df -h
<sommer> mmmm... did you see any error messages during boot?
<youngmusicorg> I'm on ssh, but let me check the messagelog
<sommer> youngmusicorg: you also might check /var/log/boot
<youngmusicorg> dmesg seems normal
<youngmusicorg> and log/boot is empty
<youngmusicorg> :q
<youngmusicorg> (sorry about the commands)
<youngmusicorg> it seems nothing has been logged since that new startup
<sommer> what happens if you start another shell using the bash command?
<youngmusicorg> I started sh yesterday. I can do that and it doesn't give an error. But commands like ls still give the same errors.
<sommer> the other commands give the same error?
<youngmusicorg> ah, i see my system name has also disappeared. The prompt now shows yvan@(none):~$
<youngmusicorg> well, not every command gives an error
<sommer> it almost seems like something didn't get executed during boot, can you reboot the system?
<sommer> it might be better to have physical access to see any errors
<youngmusicorg> touch works, but rm gives a segmentation fault
<youngmusicorg> problem is, if i reboot the machine and it doesn't come up, I'll have to travel 2 hours to get there :-(
<sommer> ya, I figured that would be an issue
<youngmusicorg> but it might be my only option indeed
<sommer> it just sounds like there are major problems with the system
<youngmusicorg> that's what i thought. But it's a bit strange because nothing really changed on it
<sommer> if it was one command that would be one thing, but since it's multiple commands...
<sommer> could be a hardware issue, as well
<youngmusicorg> wel, i'd better get over there in the morning.
<sommer> you could always try rebooting and hope :)
<sommer> are other services working?
<youngmusicorg> thanks for trying to help though
<sommer> you're welcome
<youngmusicorg> i need ssh to this server to reach the rest of the servers. And still have work on those. So it's best to leave it up for now
<youngmusicorg> Since there isn't an ubuntu-server+1, i'll ask this here also. I don't know if it has anything to do with this being a hardy machine anyway. But I tried to start apache on a new machine just now, and it couldn't start because port 80 is in use by a program called "iroffero". I've never heard of that though. Does anyone here know what it is?
<Nafallo> http://forum.joomlafacile.com/showthread.php?p=280880
<youngmusicorg> that's in french...
<youngmusicorg> but there is indeed a joomla site on this server.
<keithclark> Is it possible to do a ssh session over the internet through a firewall?
<Nafallo> yes
<keithclark> Nafallo, do I have to modify the firewall settings on the server?
<Nafallo> keithclark: how do you expect me to answer that question?
<youngmusicorg> keithclark: if i'm not mistaken, ssh listens on port 22. So you'll have to make sure your firewall allows access to that port
<keithclark> Nafallo, not at all, thanks.....I'll keep searching.  I'm new to this ssh and firewall stuff and I thought it was a legitimate question.
<Nafallo> keithclark: all firewalls are different.
<keithclark> youngmusicorg, thanks.....I'll check that!
<keithclark> Nafallo, understood.  I was just looking for port advice and such.  I guess I didn't really provide enough information
<keithclark> hmm...I am trying to start an ssh session over the internet and the terminal just sits there with no response.  Any ideas what I'm doing wrong?
<keithclark> ah, ok, timeout.  This is where the firewall is taking effect.
<keithclark> ok, firewall solved
<keithclark> next to the ssh service
<keithclark> hmm, firewall disabled and ssh enabled yet I cannot make contact, any ideas?
<keithclark> hmm, I can't seem to get connected
<keithclark> let me try rebooting into pclos so that I will be pclos to pclos machines
<keithclark> ok, that worked, I guess pclos likes pclos and not ubuntu
<keithclark> but not in ubuntu...hmmm
<keithclark> ok, I guess I'm stuck in pclos when networking between these two machines....
<keithclark> No big deal
<keithclark> no other solution?
<Loosewheel> Tatster: How did you do with the RAID 1  LVM problem?
<Tatster> Haven't had another chance to look at it yet.
<Tatster> I'm choosing to tread very carefully because I'm very keen to keep the data safe that's on there, at least until I can get an up to date backup from it!
<Loosewheel> I've been trying to get my 'index.html' file to link to an OO Presentation file...with no luck
<Tatster> under apache ?
<Loosewheel> Yes
<Tatster> does apache serve the OO file correctly via direct URL ?
<Loosewheel> I can get it to link at the server, but not from another machine.
<Tatster> how about from an IP address based URL rather than a name ie http://1.2.3.4/name-of-oo-file.ext
<Loosewheel> I've set it up on the LAN only with 192.168.x.x. (I don't have a domain name yet). I can link to www. ok
<Tatster> ok, so your webserver is somewhere on the LAN and you are trying to access via http://192.168.x.x./nameoffile
<Loosewheel> I've been trying to access http://192.168.x.x/......and have tried to make the link to file that i've put in /home, and /var/web directories
<Loosewheel> I'm so new at this it's hard to ask a good question. Just hammering away trying to learn something
<c0ldfusion> Loosewheel: so you don't have a link to the document when you access http://192...
<c0ldfusion> is that what u mean
<Loosewheel> yes
<c0ldfusion> Do you know how to write html code?
<Loosewheel> I used oowriter to make the page and can link to http://www.somewhere but have problem trying to link to the presentation file.         and no I don't
<c0ldfusion> OK; so oowriter created your index.html?
<c0ldfusion> what did you save the file oowriter created as?
<Loosewheel> Where should I put the oo file, (directory)? Yes
<Loosewheel> I save as html
<c0ldfusion> you need to name the file "index.html" and place it in the /var/www/ directory
<c0ldfusion> unless we're talking about your presentation
<Loosewheel> ok. I have one index.html file with a  link to the internet. Works. I shoved the presentation file in a folder under /var/www
<c0ldfusion> ok now you need to edit your index.html and create a link to the presentation
<c0ldfusion> so what is the path/filename of the presentation
<Loosewheel> I tried
<c0ldfusion> I'm going to give you an example but I need the path/filename
<Loosewheel> /var/www/Security/system-security.html
<c0ldfusion> in your index.html you will find a line that begins with <body>
<c0ldfusion> on the next line, for example, put this:
<c0ldfusion> <a href="Security/system-security.html">My Presentation</a>
<Loosewheel> Ok, and thank you. I'll fire it up and give that a whirl
<c0ldfusion> yw
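For anyone following along, c0ldfusion's pieces assembled into a complete file might look like this; a minimal sketch assuming the paths mentioned above (/var/www/index.html linking to /var/www/Security/system-security.html):

```html
<!-- minimal sketch of /var/www/index.html; the Security/ path
     comes from the conversation above -->
<html>
  <head>
    <title>Home</title>
  </head>
  <body>
    <a href="Security/system-security.html">My Presentation</a>
  </body>
</html>
```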
<ksclarke> I just installed 7.10; when I boot it goes through all the messages but stops before it gives me a prompt... once I hit a key I get a prompt... how do I set it so it doesn't wait for that key stroke?
<ksclarke> is there something in the bios or is it something in ubuntu server?
<Cahan> ksclarke, I'm afraid I don't know, does that for my 7.04 as well
<zylstra555> sommer: You dont happen to still be there, do you?
<sommer> zylstra555: just came back :)
<zylstra555> sommer: Would you mind kind of resuming from last [night]? I can access my server again
<sommer> sure, I need some refreshing though
<zylstra555> sommer: Okay, randomly, my server just goes offline for unknown periods of time
<zylstra555> It just starts timing out
<zylstra555> this only happens to the external network, the localnetwork can continue to access it fine
<sommer> ah, I remember now...
<sommer> did you try changing the ssh port?
<zylstra555> sommer: That would be through the router (which, I do not have access to remotely)
<sommer> ah, gotcha
<sommer> my thought then is to look through the logs for network errors
<zylstra555> sommer: But, I do know one thing, its definitely not a port being blocked by the ISP. I have a web application that I access via a port that is not known to be dedicated to a specific service
<zylstra555> sommer: Which logs specifically?
<sommer> try /var/log/syslog, and /var/log/daemon.log
<sommer> maybe dmesg as well
<sommer> grep -i ssh /var/log/syslog will check for ssh issues, for example
<zylstra555> sommer: Well, it's not a problem specific to SSH. It's everything, it just becomes completely inaccessible. (grep -i ssh /var/log/syslog didn't return anything though)
<sommer> mmm... and clients on the lan can access the server when it's unavailable from the Internet?
<zylstra555> sommer: That is correct
<sommer> does your router have any errors?
<zylstra555> so, that causes me to suspect three things, router, ISP, or an unlikely but possible messed up configuration file
<zylstra555> sommer: It does have a log, I have checked it before, it didnt say anything out of the norm.
<sommer> I agree, I'd look into the router next
<sommer> ah
<sommer> DNS maybe?
<zylstra555> sommer: What could DNS possibly be causing?
<sommer> when you can't access the server from the Internet, can it access services on the Internet?
<sommer> DNS unavailable perhaps?
<sommer> from the server's perspective that is
<zylstra555> sommer: I am pretty sure that it can access the internet while users cannot connect to it. (Since I frequently run sudo apt-get update and that works fine)
<sommer> do you run that from a cron job?
<zylstra555> sommer: No, I usually run the update command manually
<zylstra555> (the server is usually literally right behind me, I am in a remote location at the moment)
<sommer> ah, so even when access from outside is unavailable you can still apt-get update?
<zylstra555> sommer: Yes, when I am there, I am able to run apt-get update when its not able to access anything from the outside
<sommer> hrmmm... that does then seem to be something with the ISP to me
<zylstra555> sommer: Perhaps. Is there any way to rule that out?
<sommer> maybe contact your ISP when the service is unavailable and get their input
<sommer> maybe they can access the service since they're closer on the network, kind of thing
<zylstra555> This is the only error in the daemon.log file that I see having to possibly do with the internet: Apr  3 21:15:28 ubuntu dhclient: can't create /var/lib/dhcp3/dhclient.eth0.leases: Permission denied
<sommer> are you using the Ubuntu Server Edition or server applications installed on the desktop edition?
<sommer> I guess I've assumed that the server has a static IP
<zylstra555> sommer: I am using Ubuntu Server 8.04 Hardy Heron. I had the same problem when using 7.10
<sommer> and it's configured with a static IP?
<zylstra555> sommer: Well, its a dynamic IP that never changes
<zylstra555> (if that makes any sense)
<sommer> ah, configured in your dhcp server?  are you sure it never changes ;)
<zylstra555> The server itself connects to the router (directly) and obtains the same address every time: 192.168.1.7
<sommer> just to rule that possibility out I'd recommend setting a static IP on the server
<zylstra555> All the necessary ports are forwarded. I can guarantee you the address never changes though
<zylstra555> http://66.172.101.25
<zylstra555> (www.zylstrablog.co.nr)
<zylstra555> Which, is running right now
<sommer> is that a static IP from your ISP ?
<zylstra555> sommer: No, not static. I believe the main reason the address never changes, besides the fact that they always reassign the same address as most ISPs do, is that connecting to my ISP requires signing in via PPPoE, which is configured in the router. (It's a fiber-optic connection)
<sommer> from my experience I usually get the same IP from my ISP, but if my router goes down due to power outage, or whatever, I sometimes get a new one.
<zylstra555> sommer: I never do (and I just had a 2.5 hour power outage)
<sommer> ah, I'd bet it's either something with your ISP, or with your IP address, public or private
<sommer> some ISPs will block you from running a website on a home type of account... at least in the US
<zylstra555> sommer: I suppose I can call them. (I'll just tell them it's a remote FTP backup from my computer that keeps disconnecting, in case they don't want home servers, which isn't against their policy)
<zylstra555> sommer: Thanks for your help
<sommer> some require a business level account which means more $$$ you have to pay them
<sommer> zylstra555: no problem
<zylstra555> sommer: Its a small ISP, they have business accounts, but, the only difference is a higher price. Its the equivalent of the home account, but, with a bit more bandwidth
<BarryToeman> Does a default Ubuntu Server install automatically apply security updates?
<zylstra555> BarryToeman: The last time I checked, no. You can schedule it to do so through cron jobs (which, I dont personally use, so, you would have to ask someone else)
<BarryToeman> zylstra555: thanks.
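For the record, Ubuntu can also handle this without a hand-rolled cron job via the unattended-upgrades package; a minimal sketch (package and file names are an assumption here, verify them on your release):

```shell
# install the updater, then enable periodic runs
sudo apt-get install unattended-upgrades

# /etc/apt/apt.conf.d/10periodic (create if absent):
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
```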
#ubuntu-server 2008-04-06
<zylstra555> sommer: Is there a way to cause the computer to reconnect every, say, 45 minutes or so?
<zylstra555> sommer: Like, to drop its current connection, and reconnect?
<sommer> zylstra555: from your ISP?  you could setup a cron job to restart networking I guess...
<zylstra555> sommer: But, it is connected through a router, so that probably wouldn't work
<zylstra555> sommer: What if I set up a computer in a remote location to just sign into its FTP server every 45 minutes, and then disconnect? Do you think it could perhaps keep the connection up?
<sommer> ya, what you might try though is setting up a cron job to ping an Internet host every so often... to make sure the connection is working
<sommer> that's a good idea too
<zylstra555> sommer: How would I do that?
<sommer> zylstra555: you would need to have access to another outside computer I guess
<zylstra555> sommer: Rather, what is the ping command? (I can get the Cron job up, I use Webmin for things like that, which, is my way of cheating CLI)
<sommer> ah
<sommer> zylstra555: this page covers it pretty well: https://help.ubuntu.com/community/CronHowto
<zylstra555> sommer: Is there a difference between scheduled commands, and scheduled cron jobs?
<sommer> not sure what "scheduled commands" are... do you configure that through the gui?  I'd think they are the same things though
<sommer> or they could be at jobs :)
<zylstra555> sommer: Ill just go with Cron jobs, it should work fine
<sommer> cool
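The cron idea above, sketched concretely; the host and interval are placeholders, not from the conversation:

```shell
# crontab -e, then add a line like this: every 45 minutes, send a few
# pings to an outside host to keep the link active (example.com is a
# placeholder)
*/45 * * * * ping -c 4 example.com > /dev/null 2>&1
```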
<keithclark> This may be a stupid question, but I have to ask.  I have ssh working but is there a graphical front end for it instead of using the terminal to start programs?
<sommer> gnome-terminal :)
<keithclark> :)
<sommer> you could create a shell script with the commands you'd like to execute, then create a launcher
<keithclark> Yeah, good idea
<sommer> you'd probably want to setup ssh-keys if you haven't, to avoid having to enter a password
<keithclark> No, I've not done that.  Is that easy to do?
<zylstra555> sommer: Once again, thanks. Hopefully, pinging will fix the problem
<sommer> yep, here's some instructions: http://doc.ubuntu.com/ubuntu/serverguide/C/openssh-server.html
 * zylstra555 over and out
<sommer> zylstra555: you're welcome
<keithclark> Awesome, thanks
<sommer> np
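The key setup sommer links to boils down to a couple of commands on the client; user@server is a placeholder:

```shell
ssh-keygen -t rsa        # generate the keypair; accept the default path
ssh-copy-id user@server  # append the public key to ~/.ssh/authorized_keys on the server
ssh user@server          # should now authenticate with the key
```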
<keithclark> Now, If I could just figure out port forwarding on my router
<Deeps> www.portforward.com
<keithclark> deeps, thanks....you guys are just a never ending information pool!
<keithclark> Deeps, amazing database!
<keithclark> Is there a way to distribute the computation of copying a dvd?
<Cahan> somehow rtorrent is corrupting my filesystem and causing my server to freeze up (Ubuntu 7.04), and I need to boot a live cd and run fsck to fix it. rtorrent is run from a ReiserFS partition and is saving to an Ext3 partition if that helps
<mindframe> Cahan, you using the most recent stable?
<Cahan> whichever one apt-get installs, I only installed it this morning
<mindframe> bad idea
<mindframe> http://libtorrent.rakshasa.no/
<mindframe> compile/install the latest stable
<Cahan> mindframe, I see, thank you, brb then, need to fix the server first
<kgoetz> if the rtorent in ubuntu is broken a bug should be filed
<mindframe> if its not the most current stable then there are bugs :)
<kgoetz> there are bugs in the most current stable too
<mindframe> nothing that i've noticed so far
<mindframe> and i use it quite a bit
<Cahan> mindframe, you think I should remove the current install before compiling the latest one?
<mindframe> yes
<mindframe> apt-get remove --purge
<kgoetz> Cahan: if you're building from source checkinstall may be helpful to you.
<kgoetz> (dont know if you've built stuff from source or no)
<Cahan> not by hand no, but I've read a guide once :p
<kgoetz> also, backporting might work if you're so inclined
<Cahan> mindframe, is purge required? or would a remove suffice?
<mindframe> well a few of the config syntax changed somewhere between those versions so yes
<kgoetz> if you build from source you'll use different config file anyway
<kgoetz> s/will/should
<Cahan> righto
<mindframe> i mean .rtorrent.rc
<mindframe> version from apt might not even produce that
<kgoetz> user configuration? i doubt its patched to remove stuff
<kgoetz> !info rtorrent gutsy
<ubotu> rtorrent (source: rtorrent): ncurses BitTorrent client based on LibTorrent. In component universe, is extra. Version 0.7.4-2ubuntu2 (gutsy), package size 285 kB, installed size 768 kB
<kgoetz> what came before gutsy? i think thats what 7.04 will be
<mindframe> feisty
<mindframe> !info rtorrent feisty
<kgoetz> !info rtorrent feisty
<mindframe> heh
<kgoetz> snap ;
<kgoetz> ;)
<ubotu> rtorrent (source: rtorrent): ncurses BitTorrent client based on LibTorrent. In component universe, is extra. Version 0.6.4-1 (feisty), package size 314 kB, installed size 860 kB
<mindframe> yeah that's a terrible version to use
<kgoetz> !info rtorrent hardy
<ubotu> rtorrent (source: rtorrent): ncurses BitTorrent client based on LibTorrent. In component universe, is extra. Version 0.7.9-1 (hardy), package size 329 kB, installed size 924 kB
<mindframe> i remember having quite a few problems with it when feisty was current
<kgoetz> wonder if theres a backported version
<kgoetz> or how hard a backport from hardy would be
<mindframe> probably easier just to compile
<kgoetz> i prefer to backport if i can, but each to their own :)
<Cahan> tar -xvf right?
<mindframe> yeah its nice to keep everything in-house
<mindframe> zxvf
<mindframe> then cd into libtorrent dir.  ./configure && make && sudo make install
<mindframe> then cd to rtorrent dir and do the same
<kgoetz> use checkinstall if you can. makes it easier to remove the thing later
<mindframe> they both have an uninstall option in the makefile
<mindframe> make uninstall
<kgoetz> you have to hang onto the source though
<mindframe> true
<mindframe> im gonna try out checkinstall
<mindframe> looks neat
<kgoetz> it works 'well enough' for small things. not sure i'd try to checkinstall OO.o or Linux, but not tried :)
<Cahan> huh, i have no C compiler installed
<kgoetz> Cahan: not by default no
<kgoetz> !be
<ubotu> Sorry, I don't know anything about be - try searching on http://ubotu.ubuntu-nl.org/factoids.cgi
<kgoetz> bah
<kgoetz> !build-essential
<ubotu> Compiling software from source? Read the tips at https://help.ubuntu.com/community/CompilingSoftware (But remember to search for pre-built !packages first)
<Cahan> thanks kgoetz
<kgoetz> as factoids go, thats pretty useless
<kgoetz> !b-e
<ubotu> Compiling software from source? Read the tips at https://help.ubuntu.com/community/CompilingSoftware (But remember to search for pre-built !packages first)
<kgoetz> alias
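Pulling the thread together, the build procedure mindframe and kgoetz describe looks roughly like this; version numbers are placeholders, and checkinstall is the optional step kgoetz suggests in place of a raw "make install":

```shell
sudo apt-get install build-essential checkinstall
tar -zxvf libtorrent-X.Y.Z.tar.gz && cd libtorrent-X.Y.Z
./configure && make
sudo checkinstall        # builds and installs a removable .deb
cd ../rtorrent-X.Y.Z
./configure && make
sudo checkinstall
```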
<SwissPhoenix> Hi there, I'm just toying around with hardy and noticed that any eth interface other than eth0 is being renamed to eth#_rename. I tried adding more interfaces to the 70-persistent-net.rules file, but that doesn't do the trick....
<DsB> hi to all
<DsB> how can i manage a proxy server install in ubuntu from the web browser, is there some pakage that will do that?
<dthacker> ebox?
<faulkes-> proxy server? depends on the proxy server
<faulkes-> squid has a web ui built in with it iirc
<Diogo_79> hi to all
<Diogo_79> can i post some questions that i have about ubuntu-server
<Diogo_79> ?
<faulkes-> that is what the channel is for, you have no need to ask, to ask questions
 * fromport like humble ;-)
<BCMM> Is there a way to see the messages that scrolled on boot, over an SSH connection/
<BCMM> the machine has no working monitor
<Nafallo> cat /var/log/dmesg
<BCMM> and some kind of error is happening during boot up
<BCMM> hmm
<BCMM> isn't that just teh same as dmesg output?
<BCMM> i need to see the output of init
<Nafallo> ah
<Nafallo> serial?
<BCMM> ah, how do you do that?
<Nafallo> you have to configure that beforehand IIRC
<Nafallo> there is a page about it on help.u.c/community :-)
<BCMM> thanks
<BCMM> hmm what exactly do you mean by "serial"? what should i search for?
<BCMM> does it require extra hardware?
<Nafallo> yes
<Nafallo> rs232
<faulkes-> basically you are connecting a serial console cable to another device which can see those messages
<faulkes-> i.e. a console server (2511, 2611 in cisco land) or other manufacturer, but it could be another linux box, wyse terminal, etc..
<Diogo_79> is squid capable of block ports?
<Diogo_79> how can i configure ubuntu to block msn and porno sites with squid?
<faulkes-> that is best answered by going to the primary squid site
<faulkes-> however, you can only block http based msn stuff, if you wish to either proxy or filter msn traffic, that is something you want to do with ip tables
<faulkes-> as for porno sites, that's a bit trickier, squid allows you to do url regex filtering
<faulkes-> so, you could tell it any url that contains "porn" or "sex" would be disallowed
<faulkes-> I'm unsure if it does content level filtering
<faulkes-> you might want to look at dans guardian for that (which is squid based iirc)
<Diogo_79> ok, faulkes thanks for the help
<faulkes-> squid is a bit tricky to learn for configuration but generally once you get the syntax down, you'll be good
<faulkes-> the primary squid site has some good howto/material
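As a rough illustration of the url_regex filtering faulkes- mentions (the patterns are toy examples, not a usable blocklist), a squid.conf fragment could look like:

```conf
# deny any URL matching these case-insensitive patterns
acl blocked_urls url_regex -i porn sex
http_access deny blocked_urls
```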
<Diogo_79> tell me one thing
<Diogo_79> is there a good web administration utility for configuration of squid on ubuntu server
<dthacker> Diogo_79: there are some lists you can pull in that list most of the adult sites, but you'll have to read the logs.
<dthacker> and add sites as needed
<faulkes-> Diogo: squid itself has a built in web management facility, I'm not sure about how much it covers as I don't use it
<faulkes-> iirc it's just a cache administrator function
<faulkes-> dans guardian may have more, you would have to investigate
<dthacker> Diogo_79: no idea on the web utility. I just use the command line
<Diogo_79> ok
<faulkes-> and of course, it all depends on what you want / require in the way of a "web management facility"
 * faulkes- is a firm believer in vi being the management facility
<Diogo_79> web management only for a local computer with ssh access on ubuntu
<dthacker> faulkes-: ++
<faulkes-> "web management" is a very broad topic, you'd have to be more specific
<Diogo_79> manage squid with the web browser on a client computer
<faulkes-> and yes, you can configure web based management utilities for local/local lan only access
<Diogo_79> sorry my bad english
<faulkes-> I'm going to assume when you say "manage" you mean the ability to configure squid as required (i.e. add new rules, etc..)
<Diogo_79> yes
<Diogo_79> you are right
<faulkes-> on that, I'm not sure what exists, although I can imagine that stuff does, in ubuntu in particular, I could not say, other than squid does have its own administrative server portion which is web accessible
<faulkes-> to what extent it will meet your needs, you will have to look at it
<faulkes-> primary site will give you that information I imagine
<faulkes-> and no need to apologize for your english, this is a multi-national channel
<Diogo_79> thanks
 * faulkes- returns to beating on a rebranded bastardized version of IOS on a particular vendors switch
 * faulkes- grumbles about it
<Diogo_79> tell me faulkes, is squid a firewall? what i mean is that squid can filter or block inside traffic to the internet
<Diogo_79> but it cannot block outside traffic to the inside local area network?
<Tatster> Diogo_79: You may also want to have a look at Ebox (http://ebox-platform.com/ ) it's kind of like a web management framework.
<chimp___> during installation of ubuntu server i didn't select the LAMP option, but i want to retrospectively, is it worth reinstalling the server (its a fresh install) or is there a package that will install them together seamlessly like the LAMP option is supposed to
<Cahan> chimp___, you're better off installing the packages separately imo
<chimp___> Any reason Cahan?
<Cahan> saves you installing things you don't need
<chimp___> Ok :)
<dthacker> what's an example of something LAMP loads that's typically not needed?
<Cahan> apache ;p
<Cahan> lighttpd ftw
<dthacker> for specific use cases.....
<Cahan> I don't know, I installed things as I needed them
<chimp___> Basically im very new to all this, so if installing them seperately is difficult, then would the lamp option be simpler?
<Cahan> chimp___, I did it for the first time a couple of days ago, there are good resources on the ubuntu site
<dthacker> chimp___: tell you what, try it separately once, then if you find you are spending too much time installing, use LAMP next time.
<dthacker> If you are in a hurry, use LAMP
<chimp___> I imagine that doing it myself will at least teach me :)
<mralphabet> chimp___: sudo tasksel
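mralphabet's hint spelled out: tasksel can apply the LAMP task after the fact, installing the same package set the installer option would have:

```shell
sudo tasksel install lamp-server
```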
<soren> ScottK: The postfix documentation claims that the default for virtual_alias_domains is $virtual_alias_maps. my virtual_alias_maps is set to hash:/etc/postfix/virtual, and I have a few @my-domain.com addresses in there. However, postfix rejects e-mails destined for whatever@my-domain.com if I don't explicitly add my-domain.com to virtual_alias_domains..
<soren> ScottK: Am I misreading the docs, are they faulty, or is postfix misbehaving?
<soren> ScottK: Never mind. I apparently need more hand holding from the documentation than everyone else :/
<ScottK> soren: Glad you got it figured out.
<ScottK> soren: Upstream for Postfix often suggests that the documentation is written with the advanced Postfix user in mind.  It's easy to get cross-threaded in there.
<soren> ScottK: Yeah. The problem turned out to be that I needed a line like "ubuntu-dk.org dummy-value" in my virtual file.
<soren> ScottK: The docs sort of led me to believe that it'd magically work if I just put a "foo@ubuntu-dk.org destination@address.org" in there, but that was not the case.
<soren> I understand why, though.
<ScottK> Upstream is reasonably accepting of patches to improve clarity of the documentation.
<sainzeo> hello - i have an install of ubuntu server 7.10 on a parallels VM and it stalls when booting at running local scripts...any ideas?
 * faulkes- grumbles at incorrect labelling
#ubuntu-server 2009-03-30
<dustin> what is the purpose of LTSP chroot?
<owh> dustin: Huh?
<dustin> setting up ubuntu server 8.10 and there is an option to enable LTSP chroot and I was wondering if I need it
<dustin> well if nobody here uses it I will assume that it is not necessary
<owh> I'm just having a little look-see. Gimmie a mo.
<twb> dustin: LTSP is the Linux Thinclient Server Project
<twb> dustin: unless you are setting up a thinclient server, you don't need LTSP
<owh> dustin: I'm not familiar with it, but it appears to relate to the LTSP client builder. I'm *guessing* that it is to allow you to chroot to the client image, so you can update it and modify it without affecting the server itself, but I may be wrong.
<owh> dustin: As twb says, if you're not using LTSP, then no.
<dustin> ok that makes sense
<dustin> is there a command line method to fix my grub? it didn't burn right onto the disk
<twb> dustin: there is.
<twb> dustin: are you sure grub is at fault?  What are the symptoms?
<dustin> during install the disk flagged an error loading grub
<dustin> as in installing
<dustin> but if I can run a live cd and fix it after installing I think that I can work it out
<twb> Hmm, are you still in the installer?
<dustin> yes
<twb> Switch to vt4.
<twb> Can you see anything about grub there?
<dustin> ah point!!
<dustin> brb
<dustin> beautiful, grub didn't get installed on the disk. my burner is doing some of the weirdest stuff
<dustin> at least this copy had the base system on it :)
<owh> Perhaps you're trying to install grub on the wrong device?
<dustin> it didn't offer an option but I would think that its target is md0 which is my root
<twb> The target should be the disk itself, not a partition or md device
<dustin> wow I have had so many difficulties with this install........................................but it will be well worth the effort
<owh> And as a trick for new players, if you need to be able to boot from either drive if the array fails you need to manually copy the boot-block across too.
<owh> Unless there is a better way that I don't know of :)
<dustin> lemme guess.........I shoulda partitioned for two virtual md devices and dedicated one as a /boot partition
<dustin> that way I would have my /boot on two drives in its own little place
<dustin> or am I misreading you?
<twb> dustin: are you using RAID5 or RAID1?
<dustin> raid1
<twb> dustin: then it ought to work...
<dustin> this is a very small server
<dustin> wow I just found out that apt-get isn't available in the install bash
<dustin> ok I have a command line how do I install and configure grub from there
<dustin> I also have apt-get
<twb> dustin: chroot into the root filesystem (probably /target).
<twb> Oops, sorry.
<twb> Don't do that.
<twb> Instead, grub-install --root-directory=/target /dev/sda, where sda is the appropriate disk.
<dustin> lol
<twb> You may need to use --recheck.
<mattt> you guys have any suggestions for creating minimal chroot environments for users?
<mattt> debootstrap seems a bit overkill
<twb> mattt: define `minimal'.
<mattt> twb: ls, cat, find, tar, vi,e tc.
<twb> mattt: debootstrap creates a chroot that contains the minimum necessary to be a policy-compliant Debian system.
<infinity> mattt: debootstrap --variant=buildd probably gets you what you want, more or less.
<twb> mattt: though I think it defaults to standard, not minimal
<twb> infinity: thanks.
<mattt> ok, thanks guys
<twb> infinity: is that also what pbuilder uses?
<mattt> otherwise, i've seen jailer, which seems another option
<infinity> twb: Probably.  I don't use pbuilder.
<twb> mattt: you should be aware that chroot(2) offers *zero* protection to your host system against the root user within the chroot.
<infinity> twb: (As the buildd maintainer, I use sbuild and chroots identical to the buildds...)
<twb> infinity: fair enough
<dustin> twb: bash grub-install: not found
<twb> dustin: yeah, OK, so you haven't got grub.  You do need to chroot into target
<twb> mattt: if you want a secure chroot-like system, I suggest you look at xen and/or openvz.  For more complete virtualization, there is kvm.
<mattt> twb: heh, well ... this is actually going to be used on a domU :P
<twb> mattt: OK, no worries
<mattt> twb: well, there are still issues ... because the domU runs stuff that i don't want the chroot to see
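A sketch of infinity's debootstrap suggestion; the release name, target directory, and mirror are examples:

```shell
sudo apt-get install debootstrap
sudo debootstrap --variant=buildd hardy /srv/chroot/hardy \
    http://archive.ubuntu.com/ubuntu/
sudo chroot /srv/chroot/hardy /bin/bash   # enter the new chroot
```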
<dustin> twb: this may sound stupid but what is the command for the chroot
<twb> dustin: chroot /target, I think
<dustin> thats what I thought but wanted to confirm b4 messing up
<dustin> twb: bash: chroot: cannot change root directory to /target: no such file or directory
<twb> dustin: OK, you need to work out where it's mounted
<twb> dustin: look at /proc/mounts
<dustin> twb does this install have you aggravated yet ;)
<twb> dustin: nope.
<dustin> well this might-- I am root and do not have permission to /proc/mounts
<dustin> should I cat /proc/mounts
<twb> That's what I said.
<dustin> ok then I misread u
<dustin> I see things that are wrong???????????
<twb> Insufficient data.
<dustin> md0 is showing as ext2 and it is formatted to ext3
<twb> That is OK.
<dustin> ok as long as u agree with it I will overlook that part. does it matter that it is reporting errors?
<twb> dustin: what errors?
<twb> dustin: also, please use punctuation and capitalization appropriately.
<dustin> line reads /dev/md0 / ext2 rw, errors=continue 0 0   (maybe I am freaking out and misreading)
<twb> dustin: that says "if you see an error, continue".
<twb> dustin: it doesn't mean there ARE errors.
<dustin> I have been staring at this screen for 12 hours now so it is entirely possible that I am just freaking out
<twb> I am now confused as to what environment you are in.
<twb> I thought you were in the installer still.
<dustin> after installer failed to load grub I started in rescue mode so that I could access a command line
<twb> dustin: you mean that you ran the install CD in rescue mode?
<dustin> as far as I can tell I am in a "safe mode" command line
<twb> Please just answer the question.
<dustin> I am in rescue mode "NOW"
<dustin> would you like me to start from the beginning and reload from the start because I have nothing to lose by doing so
<dustin> twb: I am sorry for any confusion I may have caused jumping around
<dustin> I am having a bit of a day
<twb> dustin: OK, please confirm that /usr/sbin/grub-install doesn't exist at the moment.
<dustin> confirmed
<twb> dustin: OK, is /usr/sbin mentioned in "echo $PATH"?
<dustin> yes
<twb> That's weird
<twb> 12:29 <dustin> twb: bash grub-install: not found
<infinity> twb: He said it wasn't installed.
<infinity> twb: (He confirmed its nonexistence...)
<twb> infinity: if it's not installed, then why is it in his path?
<twb> That's a pretty fundamental contradiction
<infinity> 19:45 < twb> dustin: OK, please confirm that /usr/sbin/grub-install doesn't exist at the moment.
<infinity> 19:46 < dustin> confirmed
<infinity> He confirmed that it doesn't exist.
<twb> Oh sorry, brain fart
<twb> dustin: can you check if lilo exists in any of the directories in your $PATH?
<twb> dustin: don't run it, I just want to know if it's there
<dustin> its ok I have had to triple confirm these errors to myself because I don't believe it
<twb> dustin: actually, you can just do "dpkg -l \*lilo*" and see if it has "ii" on the left
<dustin> output of $PATH= /sbin:/usr/sbin:/usr/bin
<dustin> "| status=not/inst/Cfg-files" ect
<dustin> last was dpkg output
<twb> dustin: yes, there should be a line at the bottom of dpkg's output saying either "ii" if it's installed or something like "pn" if it isn't.
<dustin> is "un" the one u want?
<twb> OK, interesting.
<twb> dustin: that means you have somehow managed to install this system without ANY bootloader.
<dustin> if I seem choppy or inattentive its because I run both my server and my desktop on the same monitor
<dustin> yes!!
<dustin> that is correct
<twb> dustin: well, that wasn't established before.
<twb> dustin: it might have been installed, but not installed into the MBR properly.
<dustin> sorry I am a bit off today
<twb> No problem.
<dustin> it might be the lortabs after my knee surgery
<twb> dustin: what you need to do now is get grub installed.  That means getting either the install CD or the network working, then doing an "apt-get install grub".
<dustin> I will apt-get it
<dustin> brb
<dustin> "grub has no installation candidate"..........*moans*
<twb> dustin: edit /etc/apt/sources.list.
<twb> dustin: there should be a commented-out reference to the CD.
<owh> twb: Perhaps he didn't like your bed-side manner?
<twb> owh: hmm?
<owh> twb: The person you've patiently been helping for the past hour and a half.
<matthew-21> Hi, how do I unmount a harddrive?
<owh> matthew-21: In what context?
<owh> matthew-21: As-in, what are you trying to do. The unmount command is umount, but if you're asking, I'm guessing that's not what you're looking for.
<dustin> my isp is about to get an earfull
<twb> dustin: do you think that will help?
<dustin> twb: u still on?
<dustin> I can't exit my editor... I have never used it before
<dustin> no I was typing into irc and found out that I was disconnected
<twb> Describe the editor.
<dustin> how do I save and exit vim?
<twb> I wish to know if it is nano or vi.
<dustin> its the newer version of vi
<friartuck> dustin :x
<twb> Type ESC :q RET
<friartuck> dustin save and exit=:x  . exit without save=:q!
<dustin> I still cant install grub even with cd enabled
<dustin> I think it is missing from the disk
<owh> dustin: After uncommenting the source, did you run apt-get update?
<dustin> I felt like I missed something
<twb> Sorry, yes, you will need to run apt-get update.
<twb> You may also need to run apt-cdrom, IIRC it is particularly stupid about that.
<dustin> when I ran that update I was informed about apt-cdrom
<dustin> grub is unpacking, hooray!!
<dustin> ok now to configure grub
<dustin> how do I check it to verify its current settings
<dustin> twb: can you walk me through grub setup on the command line
<owh> dustin: May I suggest that you do some reading on the subject?
<twb> dustin: you should be able to just run "grub-install /dev/sda", where sda is the appropriate disk
<dustin> ok
<twb> Actually, that might not work.  Try it and see.
<owh> twb: With an array?
<dustin> it seems to have liked that command
<dustin> I am going to reboot and see how it goes
<dustin> well that's a no-go, but I am going to rub my eyes and take ten; after that I will read the grub man page and try again
<dustin> thank all of you for your help and tolerance
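The recovery steps twb walks dustin through above, gathered into one sketch. This assumes the rescue shell is running inside the installed system (or a chroot of it), that the install CD is in the drive, and that /dev/sda is the boot disk; adjust the device to match your layout.

```shell
# 1. Re-enable the CD as a package source and refresh the indexes.
#    (Either uncomment the "deb cdrom:..." line in /etc/apt/sources.list,
#     or let apt-cdrom write it for you, as twb suggested.)
apt-cdrom add
apt-get update

# 2. Install the missing bootloader package:
apt-get install grub

# 3. Write GRUB to the master boot record of the boot disk:
grub-install /dev/sda

# 4. Generate /boot/grub/menu.lst so GRUB has something to boot:
update-grub
```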
<matthew-21> How do I give users user quotas for home dir?
<twb> matthew-21: you need to install the quota package, then mount /home with -o usrquota, then generate an initiate quota database.
<owh> twb: You mean initial right?
<matthew-21> Okay, I've installed the quota package, now how do I give them a certain quota to use? lol
<matthew-21> I mean so they can only upload a certain amount of stuff.
<twb> owh: yes
<twb> matthew-21: oh yes, you also need to allocate each user a quota -- otherwise it won't be enforced
<matthew-21> What are the commands to do this?
<twb> quota or edquota, IIRC
<twb> dpkg -L quota | grep bin/ will tell you
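The sequence twb outlines (install the quota package, remount with usrquota, build the initial database, then set per-user limits) can be sketched as follows; the /home mount point, the fstab device, and the user name bob are examples, not from the conversation:

```shell
apt-get install quota

# The filesystem's /etc/fstab line needs the usrquota option, e.g.:
#   /dev/sda3  /home  ext3  defaults,usrquota  0  2
mount -o remount /home

# Build the initial quota database and switch enforcement on:
quotacheck -cum /home
quotaon /home

# Set soft/hard block limits for one user (opens an editor):
edquota -u bob

# Report usage against limits for everyone on the filesystem:
repquota /home
```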
<owh> matthew-21: You could just click on the first link of this google search "linux home dir quotas" and read the whole thing from start to end.
<matthew-21> would this quota system work if I was using an external harddrive? like giving them a quota on the harddrive.
<owh> matthew-21: Well, it likely depends on how you've mounted that drive.
<owh> matthew-21: If the external hard drive changes, you're possibly going to run into issues identifying which drive it is. If the drive is just in an external case, but always there, it makes no difference.
<matthew-21> I typed this.  "mount -t vfat /dev/sdb1 /var/www/tb/"
<twb> The quota system doesn't care WHERE the drive is, only that it is mounted with -o usrquota
<twb> Quotas may not work with vfat.
<owh> I suspect that it doesn't support vfat.
<matthew-21> and then link it to a folder in the users home dir.
<twb> Do not use FAT, as it is a bloody awful filesystem
<twb> matthew-21: quotas do not span disks.
<twb> matthew-21: if you're trying to put quotas on, say, ~user/public_html, and public_html is a symlink, then you need to set up quotas for the place that public_html points to.
<owh> matthew-21: Let me suggest that you take a step back and actually describe what you're trying to do and how it's currently setup, because from what I'm reading here, there are some serious problems.
<matthew-21> I have an external harddrive that I am leaving plugged in to the server all the time. I want to set up quotas on it, and so far I've mounted it using the command "mount -t vfat /dev/sdb1 /var/www/tb". I'm just wondering how to prevent users from taking up the entire harddrive *laughs*.
<owh> matthew-21: That does not appear to be the whole story because you're mounting it related to the web-root but sym-linking to a user.
<matthew-21> I haven't done that yet, I was checking here before I did anything else.
<owh> matthew-21: So, how are the user accounts related to the web-root?
<owh> matthew-21: Is the external drive ever used anywhere else on another machine?
<matthew-21> it was before, but I'm trying to make it into a server drive.
<owh> matthew-21: What about the user home directories and their relationship to the /var/www/tb tree?
<matthew-21> I want the files that users upload to be accessible on the internet, that's why I add in web-root.
<owh> matthew-21: I'm guessing there are multiple users?
<matthew-21> yes
<owh> matthew-21: Are all their files going to be uploaded into the same directory?
<matthew-21> no
<owh> matthew-21: So, how does that answer relate to linking their home directories with a sym-link to /var/www/tb ?
<matthew-21> I would create separate folders as I gave users space.
<owh> matthew-21: So, how would the structure look?
<matthew-21> like if there was a user named bob, the tree would be /var/www/tb/bob
<owh> matthew-21: In addition to that structure, /var/www/tb/{username}, do the users also have /home/{username}
<matthew-21> ah, how would I get quotas for each folder though?
<owh> matthew-21: It's per drive, per user/group
<owh> matthew-21: Read this: http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch28_:_Managing_Disk_Usage_with_Quotas
<matthew-21> is there a way to do this without partitioning?
<owh> matthew-21: Well, the external drive is a partition all by itself.
<matthew-21> ah, I see. do I need to edit the /etc/fstab file?
<owh> Yup
<matthew-21> or can I just say external harddrive is partition.
<owh> matthew-21: fstab is a mechanism to automatically mount a drive. I suppose you could manually mount it each time you reboot, but after a power-failure/reboot, the mount would not be there.
<twb> Normally you should refer to external drives' filesystems by UUID.
<twb> For FAT, I believe this is an eight-byte string XXXX-XXXX.
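twb's suggestion to mount by UUID can look like this; the UUID value below is invented for illustration:

```shell
# Find the filesystem's UUID (FAT volumes get the short XXXX-XXXX form):
blkid /dev/sdb1

# /etc/fstab entry that survives the drive coming up as sdb, sdc, etc.
# (UUID and ownership options are made-up examples):
#   UUID=4A1B-2C3D  /var/www/tb  vfat  defaults,uid=www-data,gid=www-data  0  0
```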
<owh> I gotta say that using FAT for a server-mounted drive that is intended to support quotas and be served as a web-volume makes little sense to me.
<matthew-21> I would format the drive, but I have stuff backed up onto it and cannot put the files anywhere else.
<twb> matthew-21: if you aren't using RAID, then you have a SPOF already
<matthew-21> ? what do you mean.
<twb> You ought to go buy a second disk, and RAID1 them.  During that transition you could also convert the filesystem.
<owh> SPOF == Single Point Of Failure
<twb> matthew-21: I mean that if that hard drive dies (the Single Point of Failure) then you have lost that data forever
<matthew-21> ah
<matthew-21> okay, I would like to ask a different question if that's okay with you, how would I authorize my website so people would need a username and password to log in to the actual website?
<matthew-21> I think that it is possible.
<matthew-21> But I am not sure.
<matthew-21> thank you for your help though, I really appreciate it.
<twb> matthew-21: that depends on a large range of factors.
<matthew-21> I don't want anything really secure, just a way to secure apache and my site a bit.
<twb> Those phrases are rather contradictory.
<friartuck> matthew-21 http://ubuntu-tutorials.com/2007/10/06/limiting-access-to-websitesdirectories-with-htaccess/
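The approach in friartuck's link boils down to HTTP basic auth; a minimal sketch, assuming Apache 2 and example paths/usernames:

```shell
# Create a password file with one user (-c creates the file;
# drop it when adding further users):
htpasswd -c /etc/apache2/.htpasswd alice

# Then protect the directory, either in the vhost config or in an
# .htaccess file (the latter needs AllowOverride AuthConfig):
#   AuthType Basic
#   AuthName "Restricted"
#   AuthUserFile /etc/apache2/.htpasswd
#   Require valid-user
```

Note this only gates access; without SSL the password crosses the wire essentially in the clear, which is part of why twb asks about the threat model below.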
<owh> matthew-21: The moment your server is connected to the 'net, it's waiting to be compromised. A "little bit" of security is not a sensible statement.
<twb> You could, for example, only allow access to the website when connecting over an encrypted VPN (which you would set up).
<twb> You need to ask questions like "what is the threat model?"
<owh> twb: I'm not sure that what you're saying is meaningful in the context. I agree, but I don't think it helps. I've been struggling to communicate these same concepts in other channels, "How do you help those without any meaningful background."
<centaur5> I'm trying to find out which port LTSP listens on cause I thought tftpd-hpa has to be running but I can't find an instance of that process. Anyone have some info?
<owh> twb: It's the phenomenon of: "This cannot be that hard, look, I can run the installer and it all just works."
<owh> twb: Unfortunately computing isn't quite yet as developed as say driving a car.
<owh> centaur5: Isn't it run by an inetd process - in which case, it's likely in /etc/services
<centaur5> owh: Interesting, tftp is port 69 but netstat doesn't show anything waiting on that port. How does that work?
<twb> owh: haha, "explanations are hard so we resorted to car analogies"
<owh> centaur5: Hmm, does it work? As in, if you telnet to port 69, do you get a response?
<twb> centaur5: inetd should be listening to that port.
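owh and twb's point can be checked directly: when tftpd runs from inetd, the listener on udp/69 belongs to inetd, not in.tftpd, because inetd only spawns the tftp daemon when a packet arrives. That is why grepping for a tftpd process finds nothing.

```shell
# Which process, if any, owns the tftp port?
sudo netstat -ulnp | grep ':69 '

# The tftp entry in inetd's config is what makes inetd listen there:
grep tftp /etc/inetd.conf /etc/services
```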
<PhotoJim> twb: car analogies can be used for almost anything :)
<owh> twb: If you have a better analogy, I'm all for it.
<twb> "security analysis isn't as easy as falling over"
<twb> "Just because you can catch a ball doesn't mean you understand differential calculus"
<owh> twb: Right, that's all fine, but how do you explain that to someone who comes to you with a differential calculus question without the knowledge to understand the answer?
<twb> owh: well the comments above are first meant to help the guy understand that he really doesn't know differential calculus
<twb> One they understand that they don't understand, you can move onto phase #2: gettin' some schoolin'
<PhotoJim> I lecture at a university.  I find analogies to be highly effective at times.
<owh> And did you see evidence of this "moving to phase #2"?
<twb> PhotoJim: lecturing to filthy, delinquent undergrads, or lecturing to humans?
<owh> When I spend time in IRC, I find I'm trying for a balance between telling the person asking the question what the answer is and explaining where to find the answer. It's easy just to give the answer, but over time it takes more time.
<twb> Nod.
<PhotoJim> twb: All human, few filthy, very few delinquent.
<owh> I used to run a helpdesk and I spent many months arguing that teaching users was cheaper than helping them. Over time management began to see a drop-off in the number of calls because users began to get a clue. Until this happened though, the call stats were abysmal.
<PhotoJim> owh: short term pain for long term gain, as they say.
<twb> PhotoJim: lucky bastard.
<PhotoJim> twb: some students are a challenge of my patience, and some are a true pleasure.  but that is true of most groups of humans.
<owh> twb: I respect that you are trying to show that a VPN is one approach, all I was doing was questioning if the person whom you were giving the advice to understood even the words, let alone the concepts.
<owh> PhotoJim: Yup.
<owh> A few months ago I started composing an email to ubuntu-devel-discuss about this phenomenon. As Ubuntu becomes more popular, we run the risk of being drowned in requests for help.
<PhotoJim> the Internet is a good example of that sort of effect.
<owh> That is, it might get to the point where we couldn't help despite our best efforts because there was too much need to get help.
<owh> PhotoJim: Yeah, and it's getting worse.
<PhotoJim> yes, that's true, although in defense of it, we wouldn't have multi-megabit cheap broadband if hardly anyone were on the Net.
<owh> PhotoJim: More and more people yammering for help, less and less actual help available.
<PhotoJim> yes, true.
<owh> PhotoJim: And more and more dis-information.
<owh> The internet is beginning to be a race to the bottom.
<twb> PhotoJim: here undergrads are mostly at university to drink and have promiscuous sex, AFAICT :-/
<twb> They should all be locked up.
<PhotoJim> twb: I teach fourth-year business students.  the horny clueless ones have dropped out by then. :)
<owh> I saw a forum recently where the answer voted by the forum members as being the most helpful was in-fact incorrect and the answer that was actually correct was voted down as being not relevant.
<Rafael> i have an ubuntu server and i am a newbie. i have some questions and hope somebody can help?
<PhotoJim> owh;  heh.  that's discouraging.
 * owh was gob-smacked.
<owh> Rafael: Sure, ask away.
<twb> owh: well, forums are for people who are too stupid to use usenet.
<twb> (Assuming you mean web forums.)
<owh> twb: Hmm, I'm a list-moderator on a large list and we continually get requests to "upgrade" to a web-forum.
<twb> Smack them with gmane.org
<owh> twb: "No, don't you understand, that's not the same." "We want a web-forum with whim's and upload and..."
<mattt> any bash gurus here?  :)
<Rafael> 1) i am trying to connect the server to 3 windows computers.. i am assuming i have to use samba.. my first question is the following... can a network storage adapter be connected to a router and make backups of the server's data as windows documents?
<owh> mattt: That just depends on the level of guru required - specifically, what the actual question is :)
<mattt> owh: any idea why this doesn't work?  bins="cp,ls"; for x in /bin/{$bins}; do echo $x; done
<owh> mattt: Separate it with a space.
<owh> mattt: The delimiter isn't a comma.
<Rafael> 2) any advice on a network storage enclosure that will connect to the server without any problems? and if i would like to connect one at home, how can i do this?
<mattt> owh: what i'm looking for it to do is echo /bin/cp and /bin/ls, not /bin{cp,ls}
<mattt> err /bin/{cp,ls}
<owh> mattt: Yes, I understand that, bins="cp,ls" is delimited with a comma.
<mattt> owh: space doesn't work ... and this does work "for x in /bin/{cp,ls}; do echo $x; done"
<PhotoJim> Rafael: few network enclosures, especially at the low end, support Linux networking.  A cheaper way to accomplish that goal is to use a USB2 enclosure that you can use with native Linux filesystems.
<owh> mattt: bins="a b" ; for a in $bins ; do echo $a ; done
<PhotoJim> Rafael: there are higher-end drive enclosures that are called Network Attached Storage that will do Linux filesystems and networking natively, but they cost a lot more than ones that do Windows networking.
<mattt> owh: but if i have a common prefix (/bin in this case), /bin/{cp,ls} is a bit tidier
<twb> USB2 is pretty slow, though.
<mattt> owh: but i see what you're getting at
<twb> eSATA sounds sexy, but I haven't tried it myself
<PhotoJim> twb: the SMB drive enclosure I have is slower than USB2.
<twb> PhotoJim: haha
<owh> mattt: echo /bin/{cp,ls}
<PhotoJim> eSATA is great, if you can get compatible stuff.  the only time I've tried it, the drive I got, combined with the enclosure and SATA card, wouldn't talk.
<PhotoJim> twb: could be the speed Linux does SMBFS, perhaps.  but the enclosure is painfully slow.
<twb> ITYM CIFS
<Rafael> PhotoJim: sorry for my ignorance.. the server (linux) is going to save for example word documents, then if i am using samba, shouldn't it be windows compatible so it can save in word format? ie.. the server goes down, i can have a windows computer look into the enclosure and still read documents?
<owh> twb: I use rsync, USB2 is plenty fast :)
<owh> <grin>
<PhotoJim> USB2 isn't great but it's not awful either.  USB1.1 is awful. :)
<twb> owh: I'm comparing the speed of USB2 to the speed of e.g. the SATA bus.
<owh> twb: Sure, depends on usage requirements though.
<twb> Nod.
<mattt> owh: bins=`echo /bin/{cp,ls}`; for x in $bins; do echo $x; done
<mattt> owh: is that what you mean?  cuz that does ... seem to work.  :)
<PhotoJim> the nice thing about USB2 (assuming one doesn't have a working Linux NAS or eSATA device) is that you can backup your whole disk into one device, and then remove it for safekeeping.
<owh> mattt: I was just showing you what different methods of expansion are available. Glad to see that you have what you need.
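For reference, mattt's original puzzle has a tidy explanation: bash performs brace expansion *before* variable expansion, so at the point braces are processed, /bin/{$bins} contains no comma and the braces survive literally. A sketch of two working alternatives:

```shell
#!/bin/sh
# Brace expansion runs before $bins is substituted, so the comma
# hidden inside $bins is never seen by the brace-expansion step;
# that is why /bin/{$bins} echoed the literal braces.

# Alternative 1: space-separated list, prefix added in the loop.
bins="cp ls"
for x in $bins; do
    echo "/bin/$x"
done

# Alternative 2: printf repeats its format once per argument.
printf '/bin/%s\n' $bins
```

mattt's backtick variant works for the same reason in reverse: the braces in `echo /bin/{cp,ls}` are literal in the source text, so they really are expanded.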
<twb> PhotoJim: another guy I know has a udev rule to do that automatically when it detects the drive (by UUID).
<PhotoJim> twb: oh, that's slick.
<mattt> owh: cheers
<twb> PhotoJim: so he just plugs it in when he gets home, essentially
<owh> twb: Did he document this anywhere - I mean, it's all nice and well fixing stuff, but if it ain't written down, it didn't happen.
<PhotoJim> I recently got a pair of terabyte drives and put them in RAID1.  I think I'm going to get another pair, one for a spare RAID1 drive (online) and another in an enclosure for a removable backup drive.
<twb> owh: yeah, probably on a blog
<twb> owh: I can't be bothered finding the reference, sorry.
 * owh is reminded to make a post about MYOB running off a server drive, even if MYOB tells me that it doesn't work :)
<owh> twb: That's cool :)
<twb> It was probably Russell Coker, if you wanna google
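The udev trick twb describes isn't spelled out in the log, but a rule along these lines would do it; the UUID, rule filename and script path are invented for illustration:

```shell
# /etc/udev/rules.d/99-usb-backup.rules -- run a script when a
# filesystem with a known UUID appears:
#   ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="4A1B-2C3D", \
#       RUN+="/usr/local/sbin/backup-to-usb"

# udev kills long-running RUN children, so the script should only
# *schedule* the real copy rather than do it inline, e.g.:
#   echo "rsync -a /home/ /media/backup/" | at now
```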
<Rafael> do not want to overwhelm the chat, but this is what i want to do: i am a doctor and built an ubuntu server to store data (word and pdf documents). i want to make backups of this onto an external drive (network attached enclosure), in a way that if the server fails, the backups can be read from the enclosure. at the same time i would like to do the same at home, so basically have 2 backups... where can i read or learn about this?
<Rafael> also, if i want to connect the network attached storage from home to the office, any suggestion on a good router for the office that will make this connection easily and securely
<centaur5> Is tftp (port 69) the only thing that LTSP requires or are there more ports that need to be open?
<owh> Rafael: You are setting up a whole lot of interdependent processes there. These are the ones I can see off the top of my head.
<owh> 1. Ubuntu Server,
<owh> 2. Samba server to serve word and pdf documents to users
<owh> 3. Network storage mount
<owh> 4. Server backups to same
<owh> 5. Network storage fail-over
<owh> 6. Remote access to LAN
<owh> 7. Remote access to remote storage
<owh> Crap
<owh> Sorry 'bout that. Seems <shift-enter> is a separate post.
<loginhelp> hello to all. just poppin in to ask if anyone knows a very simple way of setting up a network where a client computer boots up and authenticates to a server which will then load up the users home folder and desktop settings?
<friartuck> Rafael no, that cannot be done easily. you need a network admin for rent or just use one usb drive and take it with you to work and home.
<owh> loginhelp: The edubuntu server will do that out of the box.
<loginhelp> owh: does edubuntu server have lamp setup as well? right now i'm trying to config ubuntu server 8.04
<owh> Rafael: This stuff is not complicated to learn, but it will test your patience. My question would be: "Is it cheaper to do this yourself, or is it cheaper to pay someone to do this?"
<owh> loginhelp: Well it's the same project. I suspect it also has a LAMP task. Of course you can also install LTSP into ubuntu-server.
<owh> Rafael: To give you a car analogy: "Do you maintain your own car, or do you pay a mechanic?"
<friartuck> ha
<owh> Someone was paying attention <grin>
<Rafael> owh: i like computers and to learn, that is a hobby for me, even though you might be right since linux is completely new for me.... i built the ubuntu server and have installed samba already, testing with no problems... i am also playing with webmin and ebox, but would like to have some type of security, so that is why i am trying to do the backups
<friartuck> Rafael if you want security then uninstall webmin, that is very insecure.
<twb> Rafael: backups are "business continuity" or "disaster recovery".  The term "security" usually means security against other people.
<twb> Yes, webmin is absolutely to be avoided.
<Rafael> owh: i do maintain all my windows computers but i am learning linux.. so far it is fun, as long as i learn and progress, but you are right that if it becomes problematic then it can be a headache
<owh> Rafael: Well, as you've no doubt found out at this point, the questions you've already solved are not documented in one single place. The more you add, the bigger the resources you need to understand and build.
<loginhelp> twb: is webmin not good?
<Rafael> twb: sorry, when i say security i mean.. the server crashes and the data is safe.. sorry for the grammar
<friartuck> loginhelp no, webmin is not good. it is full of vulnerabilities.
<owh> Rafael: As twb points out, "business continuity" is what you're really asking about and if we gave you "partial" information, there would potentially be liability issues. Don't get me wrong, and I suspect the same is true for twb, we are happy to help, but be mindful of the landscape you're stepping onto.
<loginhelp> any other alternatives to managing a server with gui on a client?
<owh> loginhelp: landscape and/or ssh
<Rafael> well, i know that webmin is not in the repository and ebox is, but i have configured the box on raid 1 and raid 5, and after erasing one of the 3 hard drives i can start the rebuild process in less than 1 minute.. i agree from reading that it is vulnerable to other things
<owh> Rafael: There is a reason it's not in the repository, the ubuntu-server team made that choice, specifically. We've discussed it several times even.
<twb> It's probably easier for #ubuntu-server to help with specific problems such as "how do I make <app> do <feature>" compared to broad things like "how can I make my server secure and fast" or "how can I share files on the internet?"
<owh> Fair summary.
<twb> For those kinds of broad questions, you probably want to talk to either a local professional, or to a local linux user group.
<Rafael> thanks guys for your help
<owh> Rafael: To note, there are amateurs and professionals in this channel, so don't be afraid of asking.
<twb> Regarding webmin, I have personally audited both parts of its core, and some modules.  The code is absolute crap, and it WILL be full of horribleness.
<owh> May I observe that this is another example of the same phenomenon. The questions are legitimate, the need is real, but as a community we may not have the tools to help such a need. That worries me.
<Rafael> do not worry, but there are different types of doctors, some that do not know how to turn on a computer and some that like to go deep into learning computer issues.. when we divert the question to liabilities we lose the purpose of having amateurs and professionals.. that is fine, and one more time, thanks for the help
<friartuck> owh i disagree. people screw-up their cars all the time because they think they can fix it. (another car analogy.)
<twb> In my professional opinion as a sysadmin, *none* of the web-based administrative tools I have seen are of even remotely good quality.  Admittedly, I have not closely looked at ebox.
<owh> Rafael: No, I think you mis-understand.
<Rafael> or they can learn mechanics and fix cars
<twb> For the simplest things such as "I want to add another user", I would probably try to deploy gnome-system-tools over VNC, only accesible from the local (i.e. trusted) network.  I have not actually trialled this yet.
<loginhelp> so i am guessing there is no way i'm going to find a step A to step Z on how to get a school network going in 3 hours?
<owh> Rafael: I was trying to comment on the size of the question you were asking. It's not that I don't believe you can do it, or that we cannot help you achieve it, it's that there are *many* issues that will come up while you're doing it. Some of those relate directly to the running of your company.
<friartuck> loginhelp no, it's deep topic.
<owh> loginhelp: Boot from the edubuntu live cd.
<twb> loginhelp: you could certainly get *something* running, but it probably wouldn't be safe to deploy.
<Rafael> owh: you're right and we should not go into webmin, i was just mentioning 1 feature that is great, but i have learned that the rest is a disaster. but well.. my question was that i am building this slowly and was wondering how to do what i asked at the beginning... thanks anyway
<owh> loginhelp: Safe to demo, not safe to run for real.
<Rafael> owh: will the forum be a better place.. or where should i start...
<friartuck> Rafael http://www.amazon.com/Beginning-Ubuntu-Server-Administration-Professional/dp/1590599233
<loginhelp> i think after a month of reading ldap,nfs,nis i realize i should have stepped in here first.
<owh> Rafael: Let me suggest something to you. You can do with it what you will. Start a web-page. Write down what you're trying to achieve and document what you learn. Structure the document as a project plan, then complete the steps. As you learn, you will understand the landscape you are operating in. I'm sure that we'll be here to answer specific questions along the way.
<owh> Rafael: You may also start with reading the ubuntu server guide, it won't be complete, but it touches on many concepts: http://doc.ubuntu.com/ubuntu/serverguide/C/index.html
<owh> loginhelp: The reading was not a waste of time. It gives you an understanding of what you ask.
<Rafael> Owh: thanks for your advice
<owh> Rafael: Pleasure.
<Rafael> thanks and good nigth
<Rafael> owh: can all this http://doc.ubuntu.com/ubuntu/serverguide/C/index.html be downloaded or found as a pdf file
<loginhelp> thanks. i'm gonna try the edubuntu server. apart from the authentication, i'm also hoping to have mail, wordpress, gallery2, twiki, so that the school can have their own online manual, news, a place to display their works and blog. any words of wisdom so this attempt can be more successful?
<owh> Rafael: You can install it locally and move it to a portable drive. It's in HTML: apt-get install ubuntu-serverguide
<owh> Rafael: I don't know of a PDF version.
<friartuck> loginhelp mail will take the most work, security.
<Rafael> owh: last question for today.. any advice on a router (thinking about securely connecting from home to the server) and on any brand of network storage adapter
<friartuck> Rafael you should consider a network admin to setup the vpn connection.
<loginhelp> oh, another thing, is it too taxing for a PIII server if i have a media server on it as well?
<owh> Rafael: A router that supports VPN would be smart. A network storage adapter that supports Windows File Sharing, since you want to be able to serve clients in the case of failure. The question really isn't one of "What do I buy?" - even though you think it is.
<owh> loginhelp: That depends on what it's doing, how much RAM it has and whom it's serving with what. Ie, that's like asking: "Is a Ferrari a good car to buy?"
<Rafael> owh: so what should the question be? and thanks for the response...
<loginhelp> actually its a sony vaio, p4 1.2 GHz, 128 Mb
<owh> Rafael: I'm struggling to even form a coherent response to your question. The interdependencies are too great to give a meaningful answer. Things like: "What kind of existing LAN is there?" "What kind of users are there?" "What kind of internet connection is there?" "How much data is there?" "How often does it change?" "What size documents are there?" "How old is the existing server hardware?" - these are just questions that each go i
<owh> Rafael: This kind of conversation is normally done one-on-one by an expert. A forum like IRC is a potential place where you might have such a conversation, but I for one charge for that process. I'm willing to help you resolve problems, but I'm not able to justify providing answers like this because I'm not sure how I can make that sustainable.
<friartuck> Rafael you can find a local Cisco vendor and get an ASA for $800 and have it setup for about $500. security and expertise are important here.
<Rafael> owh: do not feel bad... i could answer this question but will do so when i really need it in the future. so far i am in the testing phase.. playing and slowly learning.. so if no problems, i continue.. if it complicates, then i drop it.. so far it is within my hobby, and if it works and develops into "business continuity" then fine, if not i will continue as i am or look for a professional.. my mistake was to mention i am a doctor, then liability was
<Rafael> mentioned.. and so on.. just imagine i am an amateur trying to build what i mentioned... but very honestly.. thanks for your answers, believe it or not they help a lot.. no hard feelings
<owh> If others here have ways that they can think of where I sit here online and help like that and gain an income to pay the rent, I'm happy to entertain the notion.
<owh> Rafael: The doctor and the liability have no relation in my mind. The issue is because you are doing this in your business, regardless of you being a doctor.
<owh> Rafael: The same is true for others coming here and asking questions about backups and security. I shudder at the issues related to their "toy" being used for real and breaking.
<Rafael> owh: like i said.. no hard feelings and thanks one more time.. believe it or not your help is appreciated.. will keep playing and at the end will use or discard this project....
<owh> For every complex problem there is an answer that is clear, simple, and wrong. --H L Mencken
<JanC> owh: For every complex problem there are even more answers that are confusing, complicated & wrong. --JanC
<loginhelp> owh: is landscape not free?
<JanC> ;)
<owh> JanC: And I am sure that I am responsible for some of those :)
<owh> loginhelp: Nope.
<owh> loginhelp: Uh, that should be yes :)
<owh> loginhelp: As in: "Yes, it's not free."
<JanC> well, a part of landscape (the client) is
<owh> Fat lot of good that will do you :)
<JanC> but that's not really useful on its own probably  ;)
<loginhelp> does mac&win  have ltsp support?
<Kamping_Kaiser> huh?
<goksu> hello again. :)
<mattt> evening
<jahor> morning ;o)
<kraut> moin
<rags> 'night
<Ethos> hi guys
<Ethos> anyone setup a connection to mssql before from ubuntu server?
<_ruben> !info freetds-dev
<ubottu> freetds-dev (source: freetds): MS SQL and Sybase client library (static libs and headers). In component main, is optional. Version 0.82-3ubuntu1 (intrepid), package size 411 kB, installed size 1224 kB
<_ruben> never used it though
<Ethos> I've tried installing some bits and bobs and following a few guides but nothing seems to work
<domas> hehe
<domas> actually I did first 'freetds' package for an opensource distribution
<domas> that was ages ago, for freebsd
<domas> no other OS had a freetds package :)
<domas> funny though, I had some people telling me how they use freetds, and they were using my package without knowing it :)
<Ethos> heh
<Ethos> that's cool
<Ethos> So surely you must be able to guide me? :DD
<tomy> hello
<domas> Ethos: I can guide MySQL stuff way better nowadays :)
<goksu> question: is it possible to connect to a betrieve 6.5 database from ubuntu?
<goksu> btrieve that is. like odbc.
<goksu> where should I ask that question? which channel?
<goksu> domas: is it possible to connect to a btrieve 6.5 database from ubuntu? I am asking you because you have worked on a similar project years ago you said.
<domas> goksu: *shrug*, if there're linux drivers, yes :))
<domas> but I don't see any packaged
<goksu> domas: I am using primavera planning software. the backend is a btrieve 6.5 db engine. runs on windows. But I want to do my work under linux.
<goksu> where can I ask?
 * domas points to http://www.pervasive.com/developerzone/platforms/linux.asp
<domas> I guess you can use JDBC
<domas> (or linux-odbc)
<goksu> domas: I hope JDBC or linux-odbc connects to that old a db engine. thank you very much for the information. :)
<domas> goksu: use mysql!
<goksu> domas: I am using mysql for most of my work. primavera uses btrieve6.5 and that does not work under wine.
<uvirtbot> New bug: #351562 in mysql-dfsg-5.0 (main) "mysql server install failed" [Undecided,New] https://launchpad.net/bugs/351562
<jahor> anyone could confirm "bacula in dapper (2.2.8) catalog upgrade from hardy (1.3.6)" https://bugs.launchpad.net/ubuntu/+source/bacula/+bug/347206 ?
<uvirtbot> Launchpad bug 347206 in bacula "bacula in dapper (2.2.8) catalog upgrade from hardy (1.3.6)" [Undecided,New]
<ivoks> it's probably valid
<ivoks> there is a script for upgrade
<jahor> btw i don't know if it should be fixed when hardy is almost 1 year here
<ivoks> the fact is that bacula wasn't supported before hardy
<ivoks> for a reason
<ivoks> so we changed it's package scripts to get it included
<ivoks> its
<jahor> so it looks i missed that it was from universe ;(
<ivoks> well, we should've thought about the upgrade
<ivoks> but the problem was that it was too complicated to do it
<ivoks> since bacula in dapper used dbconfig for database management
<ivoks> and in hardy we used custom scripts
<ivoks> still, bug is valid
<orudie> is there a way to make it so that both mydomain.com and www.mydomain.com would both show as www.mydomain.com in the browser ?
<jahor> ivoks: ok. for now bacula looks that its working now
<ivoks> jahor: there are update scripts
<ivoks> irc
<ivoks> iirc
<ivoks> /usr/share/bacula-director/update_mysql_tables
<\sh> orudie: not using serveralias in apache but having a separate vhost for mydomain.com and Redirect / http://www.mydomain.com/
<ivoks> what's wrong with serveralias?
<\sh> ivoks: ServerName www.mydomain.com + ServerAlias mydomain.com won't give "www.mydomain.com" as result when accessing "mydomain.com" ,-)
<ivoks> of course
<ivoks> why would someone require that?
<stickystyle> orudie: You can use a mod_rewrite rule.
<\sh> ivoks: don't ask me...ask orudie ;)
<ivoks> content is what counts, not the location bar
<ivoks> orudie: you really want to change URL in location bar or just render same web page?
<stickystyle> ivoks: Well the location can count with SEO.
<orudie> ivoks, change URL
<stickystyle> orudie: http://httpd.apache.org/docs/2.0/misc/rewriteguide.html#url
<ivoks> 100 people, 100 ideas :)
<stickystyle> ivoks: That's what linux is all about ;)
<\sh> depending on what someone wants to achieve...using mod_rewrite could be expensive ...
<\sh> especially when mod_rewrite is used in .htaccess ,-)
<orudie> stickystyle, RewriteCond is in which file ?
<stickystyle> \sh: Yeah, well having AllowOveride on on a server is a performance hit in itself.
<\sh> orudie: in /etc/apache2/sites-available/<your vhost file> or in .htaccess under your docroot
<stickystyle> orudie: In the conf file for the site.
<\sh> stickystyle: yepp :)
<orudie> Invalid command 'RewriteCond', perhaps misspelled or defined by a module not included in the server configuration
<orudie> stickystyle ^
<\sh> orudie: ls -al /etc/apache2/mods-enabled/ check for rewrite.conf
<\sh> or rewrite.load
<orudie> \sh ^
<\sh> normally not enabled by default
<stickystyle> sounds like you don't have mod_rewrite loaded $sudo a2enmod rewrite
<ivoks> a2enmod rewrite
<\sh> orudie: have a look at https://help.ubuntu.com/8.04/serverguide/C/web-servers.html (when you use hardy)
<orudie> ok no error after sudo a2enmod rewrite , apache2 reload , but doesnt do the job still
<ivoks> orudie is an ex-win admin; they don't read :D
<\sh> ivoks: lol
<ivoks> apache2 force-reload
<ivoks> not reload
<ivoks> or restart
<orudie> i did restart
<orudie> same
<ivoks> RewriteEngine on
<ivoks> before RewriteCond or RewriteRule
<ivoks> then reload
<orudie> same
<ivoks> now you are lying
<orudie> no
<orudie> domain is selsovet.com
<ivoks> so what?
<ivoks> :)
<stickystyle> orudie: just worked for me.
<stickystyle> As in I went to selsovet.com and it redirected me to www.selsovet.com
<orudie> you typed selsovet.com in the browser and it turned into www.selsovet.com ?
<ivoks> yep
<orudie> why doesnt it work for me :( ?
<orudie> oh it just did
<ivoks> cause you are using internet explorer :)
<orudie> yay !
<Nafallo> orudie: because your browser caches stuff.
<orudie> ivoks, no firefox
<Nafallo> ;-)
<ivoks> another lie
<ivoks> this orudie guy... lies all the time :D
<orudie> awwww come on
<ivoks> hehe
<zul> hey ivoks
<ivoks> zul: hi there!
<orudie> so it should be ServerName selsovet.com and below it ServerName www.selsovet.com so that they both point to the same dir ?
<Nafallo> ServerName www.selsovet.com\nServerAlias selsovet.com
<ivoks> um...
<Nafallo> that's how I would do it anyway.
<ivoks> right
<ivoks> where '\n' is enter
<ivoks> :D
<Nafallo> newline
<orudie> yay !
<orudie> good stuff
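Putting the exchange together: a vhost sketch showing the ServerAlias-plus-rewrite combination that ended up working. Domain and paths are placeholders, and it assumes mod_rewrite has been enabled (`sudo a2enmod rewrite` plus an Apache restart, as above). A separate bare-domain vhost with a plain `Redirect`, as \sh suggested, is an equally valid approach.

```apache
<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com
    DocumentRoot /var/www/example

    # Send bare-domain requests to the www hostname so the
    # location bar always shows www.example.com
    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com$1 [R=301,L]
</VirtualHost>
```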
<ivoks> orudie: web site is awsome :)
<orudie> which one selsovet.com ?
<ivoks> yes
<orudie> :)
<ivoks> omg... raid5 with 8 drives
<ivoks> disaster waiting to happen
<goksu> is this IRC channel logged? can I get a copy of yesterdays comments? I need to reach the comments I used yesterday.
<yann2> ivoks > why?
<yann2> I got a raid5 on 5 drives, you're scaring me :)
<yann2> 6 sorry
<ivoks> yann2: raid5 allows one failed drive
<yann2> got 2 hot spares :]
<ivoks> with 8 disks, the chance of having two failed drives at the same time isn't small
<ivoks> hot spares don't help here
<ivoks> http://www.hardwaresecrets.com/article/314/2
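ivoks' warning can be made concrete with a toy model: treat each surviving drive as failing independently during the rebuild window with some small probability p, and the chance that RAID5 loses a second drive before the rebuild finishes grows with array size. The 2% figure below is purely an illustrative assumption, not a measured failure rate.

```python
# Toy model: probability that at least one of the n-1 surviving drives
# fails during the rebuild window, given per-drive failure probability p
# for that window (p here is an illustrative assumption, not real data).
def second_failure_prob(n_drives: int, p: float) -> float:
    survivors = n_drives - 1
    return 1 - (1 - p) ** survivors

# More drives in the array -> higher chance of a second failure
# before the rebuild completes.
print(second_failure_prob(4, 0.02))
print(second_failure_prob(8, 0.02))
```

With these numbers an 8-drive array is more than twice as likely as a 4-drive array to hit a second failure mid-rebuild, which is the gist of the hardwaresecrets article linked above.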
<jahor> ivoks: back here .. there are update scripts for bacula catalog database, but not for that big version dump (i solved it by copying it from non-ubuntu package)
<ivoks> jahor: this one i pasted is for 1.3 to 2.4 upgrade
<beawesomeinstead> ivoks: do you have any plans on improving your mail stack before the jaunty release? the reason i ask is that dovecot was updated to 1.2beta4 in intrepid and i'm a bit afraid that the jaunty mail stack will be moved to 1.2beta4 too
<ivoks> beawesomeinstead: dovecot in intrepid is 1.1.4
<ivoks> and 1.1.11 in jaunty
<beawesomeinstead> $ dovecot --version     => 1.2.beta1 on my desktop, weird
<ivoks> you are pulling that from somewhere else
<jahor> ivoks: its for 1.38 to 2.0, in dapper (universe) was 1.36 and that is the root of the problem
<ivoks> jahor: SQL interface in bacula has different version
<ivoks> jahor: there is 8 and 9, iirc
<ivoks> no, 9 and 10
<ivoks> 1.38.x used 9, while 2.0.x used 10
<ivoks> you are right, that script isn't enough for dapper->hardy
<jahor> ivoks: do not miss mi notice of 1.38.x vs 1.36 ;)
<ivoks> :)
<ivoks> beawesomeinstead: anyway, there are only packaging changes that should get into jaunty today or tomorrow
<ivoks> beawesomeinstead: no version updates are allowed in jaunty any more
<jahor> but i know that in dapper LTS it was in universe and so it was unsupported by LTS
<ivoks> still, it's a bug
<ivoks> we should've upgraded it
<ivoks> we shouldn't make that mistake with 8.04->10.04
<ivoks> my mistake
<jahor> ivoks: ok i will try to prepare a fix and append it to the bug (maybe my first contribution to ubuntu ;o)
<ivoks> that would be great
<beawesomeinstead> oh cool
<jwstolk_work> I would like to power down (cleanly) my ubuntu-server when its power button is pressed. (like ubuntu desktop). All I can find about this is that I probably need to install ACPI.
<jwstolk_work> but the "acpi-support" package also pulls in things like x11-xserver-utils...
<jwstolk_work> Is there a simple way to start a shutdown script when the power button is pressed?
<ivoks> you need acpid
<ivoks> not acpi-support
<ivoks> basically, you just need to load kernel modules
<ivoks> acpid will do that for you
<jwstolk_work> ok thanks.  aptitude listed that one as "displays information on ACPI devices" so I was wondering if it actually did something :)
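For reference, acpid's event handlers are plain text files under /etc/acpi/events/. The acpid package normally ships a power-button handler already; if one is missing, a minimal handler might look like the following (file name is arbitrary, and the event pattern follows acpid's usual conventions):

```
# /etc/acpi/events/powerbtn -- shut down cleanly when the power button is pressed
event=button[ /]power
action=/sbin/shutdown -h now
```

After adding or editing an event file, acpid has to be restarted (or sent SIGHUP) to pick it up.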
<uvirtbot> New bug: #351648 in mailman (main) "update mailman to 2.1.12" [Undecided,New] https://launchpad.net/bugs/351648
<Iceman_B^Ltop> anyone familiar with 8.10 server? http://pastebin.ubuntu.com/140815/ <-- what do the lines 2-4 mean ?
<ScottK> kirkland: I think ^^^ is up your alley.
<erik1> Hello, I used an Airlancer MC-11 (orinoco) wifi card during the install of Ubuntu Server. After a successful install the interface for the airlancer card does not show up. What can be wrong?
<kirkland> scfh: Iceman_B^Ltop: the two ecryptfs warnings are benign
<kirkland> ScottK: ^
<kirkland> scfh: sorry
<friartuck> Iceman_B^Ltop friends in China? http://www.geoiptool.com/en/?IP=125.81.125.80
<kirkland> ScottK: Iceman_B^Ltop: and are fixed (removed) in jaunty
<ScottK> kirkland: Thanks.
<kirkland> Iceman_B^Ltop: ScottK: i don't know about the UDP errors
<kirkland> ScottK: thanks for the heads up ;-)
<ScottK> kirkland: You're welcome.
<Iceman_B^Ltop> kirkland: I can explain the UDP errors
<Iceman_B^Ltop> I wanted to know the other things :)
<kirkland> Iceman_B^Ltop: ah, okay, yeah those are two benign warnings
<Iceman_B^Ltop> okido
<Iceman_B^Ltop> friartuck: friends, I wish
<kirkland> unknown items that should be scrubbed from the ecryptfs mount string
<Iceman_B^Ltop> they are torrent noise
<Iceman_B^Ltop> those UDP things
<kirkland> if they're not scrubbed, the kernel says "i don't know what to do with these"
<Iceman_B^Ltop> okay
<kirkland> and drops them
<Iceman_B^Ltop> how can I perform Wireshark-like tasks from the command line?
<friartuck> Iceman_B^Ltop tcpdump on linux, snoop on solaris.
<Iceman_B^Ltop> Im having random disconnections when I'm SSH-ed into my server, from the local network
<Iceman_B^Ltop> this wasnt the case with 8.10 Desktop but ever since I installed server... I've had them
<Iceman_B^Ltop> nobody with similar experiences?
<Iceman_B^Ltop> I've also had them when I ssh from a node that's physically connected to the server machine
<Iceman_B^Ltop> and its very annoying :/
<ScottK> Iceman_B^Ltop: As friartuck says, tcpdump is the package you want.
<sbeattie> Iceman_B^Ltop: I second tcpdump, but another option would be tshark, which is the text/cli version of wireshark.
<Iceman_B^Ltop> okay
<Iceman_B^Ltop> I have no GUI, in case that matters
<Iceman_B^Ltop> i'll install both Tshark and tcpdump
<jkakar> Is there any documentation describing how one uploads KVM images to a Eucalyptus cloud?
<jkakar> I followed the instructions in soren's blog on the weekend, but haven't managed to figure out how to upload an image.
<friartuck> Iceman_B^Ltop tcpdump is probably already there: http://www.tcpdump.org/tcpdump_man.html
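A typical capture-then-inspect workflow with the tools named above (interface name, packet count, and filter are illustrative; the filter keeps the SSH session itself out of the capture):

```shell
# capture 1000 packets on eth0 to a file, excluding the ssh session itself
sudo tcpdump -i eth0 -c 1000 -w capture.pcap not port 22

# read the capture back later; -nn skips name/port resolution
tcpdump -nn -r capture.pcap | less

# or inspect the same file with tshark, the CLI version of wireshark
tshark -r capture.pcap
```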
<Iceman_B^Ltop> okay
<Iceman_B^Ltop> "software caused network about" is the message I keep getting from putty, and I can't ping my server right now
<Iceman_B^Ltop> and now it just started responding to pings again, this is too strange
<Iceman_B^Ltop> sorry, the putty error message reads "Network error: software caused connection abort"
<Iceman_B^Ltop> does anyone know if the OpenSSH package that ships on the Server cd differs significantly, config-wise, from the package you can get through apt-get ?
<JanC> I think there shouldn't be any difference except for security or serious bug fixes
<Iceman_B^Ltop> okay
<Iceman_B^Ltop> then I have no clue but I think my server is a bit dodgy, it keeps dropping the connection
<ScottK> Iceman_B^Ltop: It's identical unless there have been post release updates (as JanC says).
<Iceman_B^Ltop> alright
<Iceman_B^Ltop> so it can't be that
<Iceman_B^Ltop> I've already asked in #ubuntu if setting the card to half-duplex would make a difference
<ScottK> Iceman_B^Ltop: What kind of connection do you have?
<friartuck> Iceman_B^Ltop half duplex not good. look for errors on the nic: ifconfig -a
<Iceman_B^Ltop> ScottK: my current setup is [this laptop(XP)]------[switch]-------[router]------[server]
<Iceman_B^Ltop> all ethernet, 100Mbit
<ScottK> No firewall in there?
<Iceman_B^Ltop> there is also a modem connected to the router.
<genii> Iceman_B^Ltop: Maybe check that both ends of your network cable are wired 568-B compliant
<Iceman_B^Ltop> no, everything should be bridged, its all LAN
<Iceman_B^Ltop> genii: I had 0 problems with the Desktop install of Ubuntu, just that it used all 256 megs of ram in that machine
<Iceman_B^Ltop> right now, even when I SSH into the router, and from there to my server, I get dropping connections
<ScottK> Do you get packet loss when you ping?
<genii> Iceman_B^Ltop: I've had this half-duplex problem previously, it ended up being cable that was ok at short distances and not 568-B wired... then on longer cable runs dhcp kept dropping, half-duplex, etc with same wiring order as short cord. It specifically had to be wired in the order 568-B standard requires
<Iceman_B^Ltop> when my connection breaks, yes. otherwise I can ping fine. I do get more pingdrops from my laptop than I get when I ping from my router
<Iceman_B^Ltop> oh, the cable is a factory-made Cat6 I think, but again, it worked fine with ibex desktop
<jmedina> Iceman_B^Ltop: could you pastebin the output from ethtool ethX from the server and desktop?
<Iceman_B^Ltop> I can only give you the output from the server, cause I dont have the desktop installed anymore
<Iceman_B^Ltop> hang on
<Iceman_B^Ltop> jmedina: http://pastebin.ubuntu.com/140874/
<ivoks_> mathiaz: any chance you could look at that dovecot-postfix bug/patch?
<mathiaz> ivoks_: hi - I've already look at it once. I still need to think about it a bit more.
<ivoks_> ok
<mathiaz> ivoks_: how important is it for the release?
<ivoks_> er... very :)
<mathiaz> ivoks_: right - right now the package works correctly.
<mathiaz> ivoks_: except for a specific use case.
<ivoks_> not quite
<mathiaz> ivoks_: not quite?
<ivoks_> on reinstall or new version, ucf is ignored
<ivoks_> and smtp-auth with outlook doesn't work
<mathiaz> ivoks: oh right. These should be fixed for release then.
<mathiaz> ivoks: I was only referring to the case where dovecot.conf local changes weren't taken into account.
<ivoks> ah, well, i've added that to that patch
<ivoks> it's not big deal to do it
<ivoks> and it would be great addition for users
<ivoks> since some will upgrade their intrepid server and would like to have dovecot-postfix
<ivoks> merging their config in would be a big plus
<Nafallo> hmm
<Nafallo> how advanced is that dovecot-postfix thing?
<Nafallo> can it do postgresql backend? :-)
<mathiaz> ivoks: right. Doing so, since we already have the logic to merge in place, I though why not use the default dovecot.conf?
<ivoks> Nafallo: it uses shadow as backend
<mathiaz> ivoks: ie to merge the dovecot-postfix.conf file *into* the existing dovecot.conf?
<ivoks> mathiaz: we aren't allowed to do that
<ivoks> dovecot.conf is from another binary package
<Nafallo> ivoks: oh gah. not what I want then. thanks :-)
<ivoks> and -imapd and -pop3d are not doing a good thing changing dovecot.conf
<ivoks> Nafallo: dovecot-postfix is just configuration for dovecot and postfix, nothing else
<mathiaz> ivoks: hm. I wonder if the fact that we use ucf to handle config changes wouldn't help.
<Nafallo> ivoks: yeah. mostly wondered if it had a dpkg-reconfigure wrapper for some more advanced configurations as well :-)
<mathiaz> ivoks: for -pop3d and -imap, I agree
<ivoks> mathiaz: my proposal was to use ucf and merge diff into dovecot
<ivoks> mathiaz: but cjwatson said that's wrong
<mathiaz> ivoks: I always wanted to look if it would be possible to split this configuration into its own configuration file
<mathiaz> ivoks: ok.
<mathiaz> ivoks: ie - have a configuration file to enable pop3
<mathiaz> ivoks: another one to enable imapd
<mathiaz> ivoks: basically having one configuration file per service
<mathiaz> ivoks: or daemon
<mathiaz> ivoks: rather than having one monolithic configuration file.
<ivoks> mathiaz: that would be great... but this is something i'd rather see upstream doing
<Iceman_B^Ltop> jmedina: any clue ?
<mathiaz> ivoks: right. I think that the dovecot configuration supports include files.
<ivoks> mathiaz: iirc, not for every part of configuration
<jmedina> Iceman_B^Ltop: everyting looks ok, but I dont know if that is from desktop or server
<ivoks> only for ldap and sql
<jmedina> I asked for both
<mathiaz> ivoks: hm - in the case of pop3 and imapd we'd be interested in the protocol command line
<mathiaz> ivoks: hm - in the case of pop3 and imapd we'd be interested in the protocol option
<Iceman_B^Ltop> jmedina: what do you mean? I can only provide you with data from the server. I have no desktop. If you mean this machine, it's an XP Laptop
<mathiaz> ivoks: I wonder if something similar to the master configuration of postfix would be useful
<jmedina> Iceman_B^Ltop: well looks fine, did sniff with tcpdump for any problems?
<mathiaz> ivoks: anyway - these are just thoughts.
<ivoks> mathiaz: hm... it might... maybe we could do something with ucf and wrapper tool for config
<Iceman_B^Ltop> I would love to, but I'm new to this. meaning I have no idea what to sniff for
<mathiaz> ivoks: one day I'll look into what the dovecot configuration engine can exactly do
<mathiaz> ivoks: meanwhile I'
<mathiaz> ivoks: meanwhile I'll have another look at your patch for dovecot-postfix.
<ivoks> we should contact upstream
<mathiaz> ivoks: definitely.
<ivoks> (i should contact upstream) :)
<ivoks> mathiaz: i know it might be late in the release schedule, but those changes are important - i worked on that patch for a couple of days and tested it
<ivoks> mathiaz: so, it should be ok
<mathiaz> ivoks: well - these are clearly bug fixes
<mathiaz> ivoks: so we can include them in jaunty
<cuba> hey
<cuba> I'm giving an IP by dhcpd to 5 hosts, and only ubuntu-server doesn't react on dhcpd responses after dhclient requests....the packets are there, but the ubuntu server's dhclient ignores them
<cuba> it simply doesn't set the interface at all
<cuba> it is fresh installation
<ivoks> [dhclient interface] doesn't work?
<cuba> it is eth0, up and running
<cuba> in multicast
<ivoks> so, dhclient eth0 doesn't work?
<cuba> listening on LPF/eth0/08:00:27:95:0c:a6
<cuba> sending on LPF/eth0/08:00:27:95:0c:a6
<cuba> isn't that weird ?
<ivoks> no, that's normal
<ivoks> nothing after that?
<cuba> discover on eth0 to 255.255.255.255 port 67
<cuba> but no result
<ivoks> logs on server?
<cuba> but tcpdump is catching the dhcpd response packets
<ivoks> dhcp server
<Iceman_B^Ltop> jmedina: any idea what I should sniff for? or should I capture everything for a minute or 3 ?
<cuba> ivoks, I need to go now, I'll be back later
<centaur5> Does LTSP require that you have desktop packages on the server or does ltsp-build-client install desktop packages in the /opt/ltsp/ folder?
<olcafo_> is there an access control system in place for the iscsi service? I need to know if I could set something up with multiple iscsi initiators accessing the same target. just doing preliminary research at the moment.
<olcafo> any good resources on the internet for implementing iscsi in linux/ubuntu would be great.
<ivoks> http://www.cuddletech.com/articles/iscsi/
<ivoks> ?
<stickystyle> olcafo: you want more than one initiator to access the same target at the same time?
<olcafo> yup
<olcafo> great, something just came up. I'll be back.
<Iceman_B^Ltop> oh hey
<Iceman_B^Ltop> my server loses connections to the internet as well
<stickystyle> olcafo: Just wanted to make sure you were aware that you can't just do that with iscsi, you need to throw a clustered file system into the mix also - like GFS.
<ivoks> maybe he doesn't want it mounted
<zul> mathiaz: ping - you know all of those bugs where people dont choose a password for mysql? why dont we show a big fat warning when they dont choose a password and let it continue, or log it somewhere so that when they report a bug we can ask users to check whether they entered a password or not
<mathiaz> zul: are you triagging mysql bugs?
<mathiaz> zul: I'm doing the same :)
<zul> mathiaz: some of them
<mathiaz> zul: I'm going through the New bugs
<mathiaz> zul: which one are you doing?
<zul> im going through some of the old ones and newer ones
<juliux> hi can somebody take a look at the phpmyadmin package in hardy? see http://www.phpmyadmin.net/home_page/security/PMASA-2008-7.php
<juliux> hardy still has 2.11.3-1
<drbobb> hello, i have an LVM question, just started to play around with it
<drbobb> why would lvextend refuse to grow my logical volume?
<drbobb> the message is: device-mapper: reload ioctl failed: Invalid argument
<ivoks> invalid argument
<genii> What arguments did you use with the lvextend command?
<drbobb> lvextend -l +100%FREE /dev/VG0/OPT
<ivoks> try without +
<drbobb> ok this is one of two lv's in its volume group
<drbobb> and my idea is to expand it to fill all remaining free space
<ivoks> then run vgdisplay
<ivoks> and check how much free extents are there
<ivoks> and then extend it
<drbobb> Free  PE / Size       19022 / 74.30 GB
<drbobb> that's what you wanted to know?
<ivoks> lvextend -l +19022 /dev/VG0/OPT
 * genii ponders if it's -l or -I
<ivoks> it's -l
<ivoks> small L
<drbobb> well i didn't make up that command, i followed the manpage
<genii> ivoks: I don't use it enough to know, thanks
<ivoks> l and I are different letters
<ivoks> you should consider different font :)
<drbobb> oh, exactly the same output
<drbobb> using number of extents made no difference
<ivoks> "lvextend  -L  +54 /dev/vg01/lvol10 /dev/sdk3"
<ivoks> hm
<ivoks> is the filesystem mounted?
<drbobb> i tried both ways, mounted and unmounted
<drbobb> made no difference
<ivoks> it must be unmounted
<drbobb> it still doesn't work
<jmedina> unmounted?
 * jmedina always resizes with a mounted FS
<drbobb> nowhere do the docs say it must be unmounted
<drbobb> by i tried both ways anyway
<ivoks> right, that's true
<drbobb> s/by/but/
<ivoks> filesystem is mounted, not the partition
<drbobb> right, and to resize a jfs, it must be mounted anyway
<ivoks> same with xfs
<drbobb> well i have jfs
<ivoks> how about
<drbobb> would it be a problem that the other lvm on this volume group houses my root fs?
<drbobb> s/lvm/lv/
<ivoks> lvextend -L100%FREE /dev/VG0/OPT
<ivoks> capital L
<ivoks> bah
<ivoks> ignore that
<drbobb> nope
<ivoks> lvextend -L+74.30G /dev/VG0/OPT
<drbobb> same
<drbobb> no difference
<ivoks> huh?!
<drbobb> same message
<ivoks> ls -dl /dev/VG0/OPT
<drbobb> oh i forgot to say, there's one more line:
<drbobb> Failed to suspend OPT
<drbobb> it's a symlink, /dev/VG0/OPT -> /dev/mapper/VG0-OPT
<drbobb> btw this is all on a freshly installed ubuntu-server 8.04
<ivoks> there weren't any lvm partitions on that disk before?
<drbobb> i repartitioned the whole disk at installation time
<genii> Maybe you need to use lvchange to have it mounted ro first
<ivoks> drbobb: never mind that, were there lvm partitions before installation?
<drbobb> ivoks: hey, now i don't recall, i wiped the previous system clean
<drbobb> i think it was a redhat 9
<ivoks> note that formatting a disk (or even creating a new partition table) doesn't do anything
<ivoks> metadata from previous partitions can be preserved even if the whole disk is repartitioned
<drbobb> well it does rewrite the partition table, doesn't it
<ivoks> it does
<ivoks> but you can recover partitions
<ivoks> so, clearly, not everything is formatted :)
<drbobb> yes
<drbobb> but i've overwritten much of the drive with new data by now
<ivoks> but, back to the problem...
<drbobb> well, about 80GB out of 200
<ivoks> vgscan
<Iceman_B^Ltop> jmedina: still there?
<drbobb> yeah, worked fine
<Iceman_B^Ltop> any idea how long I should let the tcpdump run for ?
<drbobb> nothing extraneous found
<drbobb> Found volume group "VG0" using metadata type lvm2
<drbobb> etc.
<ivoks> vgdisplay
<ivoks> paste it on pastebin
<ivoks> Iceman_B^Ltop: not too long :)
<ivoks> Iceman_B^Ltop: otherwise, you'll have very big file :)
<jmedina> Iceman_B^Ltop: yeap, but im busy doing real work :S, it's not about running it and creating a big file, it's about analyzing the dumped data and looking for problems
<drbobb> ivoks: as you like
<drbobb> http://paste.ubuntu.com/140947/
<drbobb> nothing unusual
<drbobb> the second vg is on another physical drive
<ivoks> drbobb: lvdisplay
<drbobb> yes, what do you want to know about it?
<drbobb> http://paste.ubuntu.com/140949/
<ivoks> drbobb: everything
<drbobb> there it is
<ivoks> well, hm, it should wor
<ivoks> k
<drbobb> yeah i thought so too
<drbobb> brb
<Iceman_B^Ltop> jmedina: ah okay
<Iceman_B^Ltop> ivoks: I let it run for like 15 mins, and I have a 23k file
<ivoks> nice
<Iceman_B^Ltop> ivoks: I gave sudo tcpdump -c 1000 -w tcpdump_30mar now
<drbobb> ivoks: ok, so i've been able to extend the lv by smaller increments
<drbobb> but at the point where there are 249 free PE's, i can't extend it any more
<drbobb> so why would LVM insist that 996 MB of my VG must be wasted?
<uvirtbot> New bug: #322647 in mysql-dfsg-5.0 (main) "mysql-server fails to instal with apparmour errors" [Undecided,Incomplete] https://launchpad.net/bugs/322647
<ivoks> drbobb: metadata
<drbobb> ok, it does seem like a lot though
<drbobb> and it still is kind of puzzling
<drbobb> on my second VG, which hosts a single LV, vgdisplay shows 0 free PEs
<drbobb> kind of inconsistent, isn't it
<ivoks> free PS should be 0
<ivoks> PE
<drbobb> yeah what i mean
<drbobb> i see 249 PEs on VG0
<ivoks> i really don't know where's the problem
<drbobb> free PEs that is
<drbobb> but LVM won't let me expand any LV
<drbobb> well i don't know what the problem is
<drbobb> but it sure soesn't make sense to me
<drbobb> s/soesn't/doesn't/
<drbobb> i don't think i've seen it mentioned in the docs anywhere, that you should account for some PEs being taken by metadata when computing by hom much you can grow your LVs
<drbobb> s/hom/how/
<drbobb> man how terribly i type
<ivoks> free PEs already have included metadata
<ivoks> so, at the end, you shouldn't have any free PEs
<ivoks> why you can't achieve that, i can't tell
<drbobb> and it's not just a little, it's 249 PEs
<drbobb> actually i thought the metadata area/s is/are separate, and not included in the count of PEs
<ivoks> it's not, it is separate
<orudie> ivoks, i'm in a terrible terrible mood today
<orudie> i feel like ripping someone's head off and then kicking it around
<drbobb> so, it ought to be possible to allocate all available PEs to LVs
<drbobb> hm, i've managed to create another LV out of the 249 free extents
<drbobb> hey maybe i do worry too much, but it makes me uneasy when something as essential as storage management doesn't work as expected
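The extent arithmetic in this thread can be sanity-checked directly: vgdisplay reported 19022 free PEs as 74.30 GB, which implies the common 4 MB extent size (the LVM2 default), and 249 extents at 4 MB is exactly the 996 MB figure mentioned above. A quick check:

```python
# Sanity-check the LVM physical-extent figures quoted in the thread above.
PE_SIZE_MB = 4  # implied by 19022 PE ~= 74.30 GB; 4 MB is also the LVM2 default

def extents_to_mb(extents: int, pe_size_mb: int = PE_SIZE_MB) -> int:
    """Convert a physical-extent (PE) count to megabytes."""
    return extents * pe_size_mb

print(extents_to_mb(19022) / 1024)  # in GB, should match vgdisplay's 74.30 GB
print(extents_to_mb(249))           # the leftover extents that would not allocate
```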
<kansan__> how do i add a user such that they will only have access to a given directory and all subdirectories & files underneath that directory?
<ivoks> 10.04 will have nice name
<ivoks> ten-o-four
<ivoks> :)
<User777> Okay so I have installed and configured (mostly) proftpd on a newish ubuntu server installation and the person i need to FTP in to this box is experiencing error 550.  Could anyone tell me what I need to do to make the entire filesystem just 'open' to his user.  I understand that may not sound secure but its what i need to happen. please help?
<jmedina> User777: dont chroot ftp users
<User777> okay, so what can id o to let this FTP user upload/download anywhere in the filesystem?
<User777> i do^
<User777> i am not a Linux expert by any stretch of the imagination
<Deeps> terrible idea, but ftp in as root user
<User777> hmm
<User777> there would a line in the config file that I would have to change right?
<User777> i assume thats turned off by default
<Deeps> i have no idea i'm afraid, manuals come into play here
<User777> thanks
 * User777 is looking around the config
<User777> connection established waiting for welcome message.  Error: Could not connect to Server
<User777> i put in what the manual says is the correct line...(RootLogin yes)
<User777> restarted the server..and I get that
<User777> any ideas?
<beawesomeinstead> User777: did you change the Port in sshd_config?
<beawesomeinstead> User777: shouldn't it be PermitRootLogin yes?
<Iceman_B^Ltop> okay, I have run tcpdump and I have a file now, where to go from here ?
<Iceman_B^Ltop> what do I do with the file ?
<User777> i havent touched sshd_config....and according to the sites docs its RootLogin...I will try with PermitRootLogin now..whats this about sshd_config?
<beawesomeinstead> User777: where did you set RootLogin yes then?
<User777> proftpd.conf
<User777> was that not the correct place?
<beawesomeinstead> User777: ah, i thought you set similar option for SSH server, nevermind then
<User777> i tried (PermitRootLogin on) and restarted..i am greeted with the same error  in my ftp client "Unable to connect to server"
<User777> I shouldnt have to add root to anything right?
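For the record: ProFTPD's directive is `RootLogin on`, but even with it set, root is normally still rejected because proftpd consults the /etc/ftpusers deny list (controlled by `UseFtpUsers`, on by default, and root is listed there) - which may explain the failures above. A config-fragment sketch, assuming ProFTPD 1.3.x; enabling root FTP logins is a bad idea anywhere the traffic isn't trusted:

```
# proftpd.conf fragment -- allow root to log in (dangerous; trusted LAN only)
RootLogin   on
# stop proftpd from rejecting users listed in /etc/ftpusers (root is listed there)
UseFtpUsers off
```

If the daemon refuses connections after an edit, checking the config with `proftpd -t` before restarting helps catch syntax errors.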
<olcafo> is anyone using GFS? is it well supported in Ubuntu?
<ivoks> i do
<ivoks> no problems at all
<ivoks> 'night
<olcafo> ivoks: I'm thinking of using it with iSCSI
<olcafo> ha, maybe catch you later then.
<ivoks> i use it with drbd
<User777> okay so i just got fed up and uninstalled proftpd
<User777> can anyone recommend an ftp server that is actually easy to configure?
<olcafo> User777: little late here, but ProFTPD works out of the box on a ubuntu install.
<User777> apparently not
<User777> got to go
<olcafo> chances are that if it doesn't work, then there is something external happening that will prevent any other FTP server from working.
<olcafo> I need to know if I'm going down the right path here. I ultimately need to have SMB shares that are ever increasing in size (TB wise) and also perhaps host virtual images through it. I'm thinking 10GbE, iSCSI, LVM, GFS with a couple of VM hosts accessing the VM images.
<olcafo> ok, so that was confusing. there's really two separate things I want to do with it.
<olcafo> 1. an SMB server connected to the targets, 2. a couple of KVM hypervisors connected to targets with the VM images.
<owh> That's clear as mud and twice as thick.
<olcafo> haha! :)
<jmedina> that looks fine, with that setup you can do livemigrations
<olcafo> that's the idea. I'm not off my rocker then, good!
 * jmedina is preparing a similar environment for a xen course, well less the 10GbE network
<olcafo> wow, that's exactly what I wanted to hear.
<jmedina> that is a common environment for virtualized datacenters
<olcafo> then why am I having such a hard time finding documentation? I've sort of pieced all this together from different sources, but I've found nothing that talks about it as a whole.
<jmedina> olcafo: well it's a good time to start documenting this scenario, I can help
<olcafo> that's a great idea. where do we start?
<jmedina> probably with the goals of the project
<olcafo> I'll be working on this for deployment sometime next year.
<jmedina> I read a document from suse describing a scenario like this, it is from 2006
<olcafo> jmedina: yes, what I meant is what is the forum, or official place to do this online.
<jmedina> google for suse xen live migration
<jmedina> olcafo: not sure, but I think you can use the wiki
<jmedina> I think that is the place for community contributions
<olcafo> well, I guess I'll finally have to set up an account ;)
<jmedina> olcafo: have you tested KVM live migrations?
<jmedina> olcafo: I havent used KVM, only Xen, but it looks like it's not too different in the implementation
<olcafo> no, not yet. although live migrations are not really one of the requirements for my deployment environment, it would be interesting to test out though.
<olcafo> jmedina: I currently don't have the hardware to try such a thing. most of the stuff in this office is pre-KVM compliant. Hence the big upgrade next year.
<jmedina> ok, that is why we use xen
<olcafo> I have zero experience using xen. limited VMware and a good amount of KVM is what I've been exposed to.
#ubuntu-server 2009-03-31
<olcafo> xen's live migration sounds pretty awesome...
<olcafo> kvm's live migration sounds equally awesome... now I'm going to have to set something up so I can try it!
<arrrghhh> anyone use mt-daap (aka firefly)?  i installed the version from the repo's, and that failed when it tried to add a file.  so i figured i would compile the newest from source, and i can't seem to compile it without libid3tag dependency...
<dustin> is there a place where we can send recommends for next puppy version?
<arrrghhh> puppy?  what does this room have to do with puppy?
<dustin> I think a search bar in the package manager would be stellar
<dustin> ah heck I am in the wrong tab
<dustin> sorry
<arrrghhh> i was confused there for a second lol
<dustin> using puppy to rescue my server and irc has like 10 tabs and everyone in each has had something to offer
<dustin> I am so glad for irc
<olcafo> does anyone here have certifications? If so, which ones?
<arrrghhh> still not sure what that has to do with ubuntu-server... or even ubuntu.
<olcafo> ubuntu has server certification I believe
<arrrghhh> doubt it.  at least not an 'official' cert.
<arrrghhh> nothing like RHEL or SLES.
<mathiaz> kees: jdstrand: what's your opinion on bug 293258?
<uvirtbot> Launchpad bug 293258 in mysql-dfsg-5.0 "mysql user has home directory writable by mysqld" [Undecided,Confirmed] https://launchpad.net/bugs/293258
<arrrghhh> anybody use mt-daapd or firefly on their ubuntu-server?
<olcafo> arrrghhh: what about http://www.ubuntu.com/training/certificationcourses
<arrrghhh> olcafo, yes, but i don't think those are like the RHEL or SLES certs.
<olcafo> arrrghhh: I suppose not, now that I'm looking at it. Still would be interesting to hear about.
<arrrghhh> certainly.  but they won't have the same type of clout the other certs will (unless you KNOW the company wants ubuntu certs, which I have never run into, unfortunately.)
<uvirtbot> New bug: #351254 in mailman (main) "Need version bump - 2.1.11 broken with Python 2.6 (dup-of: 351648)" [Undecided,New] https://launchpad.net/bugs/351254
<infinity> mathiaz: FILE privs are often considered inherently insecure in the first place, but it might be better for the mysql user to have "/nonexistent" as its home directory to at least prevent the dotfile attack vector.
<infinity> kees, jdstrand: ^^
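infinity's suggestion can be sketched as a one-liner; this is a hedged aside, assuming the stock Debian/Ubuntu `mysql` system user exists on the box:

```shell
# Sketch of the suggestion above: point the mysql system user's home
# at /nonexistent so dotfiles can't be planted in a writable homedir.
sudo usermod -d /nonexistent mysql
getent passwd mysql    # verify the home field now reads /nonexistent
```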
<LumpToe> Strange issue:  I can access my server from a remote network and my server can access all devices locally but not any of the external ip addresses?  Is this a gateway setup issue?
<goofey> LumpToe: maybe a DNS issue?
<goofey> LumpToe: oh, wait, can't access IP addresses - sorry, ignore me
<goofey> LumpToe: can you traceroute or mtr from the server to see where it fails?
<LumpToe> Yeah dig manages to resolve the addresses but tracepath stops at the router
<goofey> that does sound like a gateway issue
<LumpToe> This is a brand new install and a DHCP issued address from the router
<LumpToe> How can I see the gateways used on my ubuntu box
<goofey> I was just wondering that
<owh> LumpToe: route -n
<LumpToe> 0.0.0.0
<LumpToe> brb  nature calls
<goofey> LumpToe: my server has 2 lines:
<goofey> 192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
<goofey> 0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 eth0
<lwizardl> how can you tell which raid controllers are supported for doing installs?
<LumpToe> back
<LumpToe> goofey: My server has the same two lines
<LumpToe> goofey: It was my router.  Strange.  I went through some of the route tables and wiped out anything with the same IP address.
<Sam-I-Am> anyone here ever use the dhcp3 package w/ ldap patch?
<twb> !anyone
<ubottu> A large amount of the first questions asked in this channel start with "Does anyone/anybody..."  Why not ask your next question (the real one) and find out?
<Sam-I-Am> lol
<Sam-I-Am> so... when i give it credentials for its account on ldap, it breaks with the error of "success" ... when i give it the wrong password, it seems to re-bind anonymously and works except it can't write to the ldap dn, so it fails.
<Sam-I-Am> binding with ldapsearch with or without credentials works fine
<twb> Sam-I-Am: does libnss talk to LDAP?
<twb> That is, does your /etc/nsswitch.conf use ldap?
<Sam-I-Am> not on the ldap server
<Sam-I-Am> i dont think the patch uses libnss
<Sam-I-Am> seems to have all of its config in dhcpd.conf
<twb> Hmm, OK.
<Sam-I-Am> the debug mode doesn't provide any useful information... and the documentation is scarce.  pretty much had to reverse engineer the schema file to figure out what i should have in my ldap tree.
<twb> I hates LDAP
<Sam-I-Am> i love ldap...
<Sam-I-Am> so, if i could figure out how to make the package even work... i'd consider writing useful documentation for it.
<Sam-I-Am> might need to find its maintainer...
<Sam-I-Am> the guy who wrote it is nowhere to be found
<Sam-I-Am> its a universe package
<aranyik> hi :)
<aranyik> how can I troubleshoot my settings if I did everything as "How-To: Set up a LAN gateway with DHCP, Dynamic DNS and iptables on Debian Etch" said, and I still cannot ping the NIC that connects to the internet?
<twb> aranyik: Etch is not Ubuntu
<aranyik> ok
<aranyik> but its debian
<twb> aranyik: this is not a Debian support channel.
<twb> aranyik: try #debian on irc.debian.org (OFTC).
<aranyik> ok..
<aranyik> then how can i do the same in ubuntu?
<aranyik> i know dhcp3-server is supported
<aranyik> how about bind9?
<aranyik> i think it is also
<twb> aranyik: I don't know what you're trying to achieve.
<twb> aranyik: for Ubuntu you probably want to read the Server Guide.
<aranyik> i tried it at first
<aranyik> and it wasnt working
<aranyik> then every thread in ubuntu will show similar ways
<Sam-I-Am> twb: doing some digging... seems like intrepid grabbed a buggy package version from debian... might be fixed in lenny... which means it'll probably be in jaunty
<aranyik> but its still not working
<twb> !enter
<ubottu> Please try to keep your questions/responses on one line - don't use the "Enter" key as punctuation!
<aranyik> a router is a very easy thing to set up, but i never had so many problems until i tried on ubuntu
<twb> Get:2 http://au.archive.ubuntu.com hardy-updates/main ubuntu-docs 8.06.1 (tar) [42.5MB]
<twb> ...ugh, forty megabytes?
<twb> What, did some jackass forget to "make clean"?
<Sam-I-Am> lol
<Sam-I-Am> or gzip
<twb> Sam-I-Am: no, it's gzipped
<twb> However I don't understand why Ubuntu 8.04's version is 8.06-1...
<twb> They ought to call it 8.04.1-1 or something.
<Sam-I-Am> yeh
<Sam-I-Am> probably a typo
<twb> Sam-I-Am: er, not likely
<twb> Sam-I-Am: more likely that they version the docs based on when they are released, and 8.06 is from -updates
<twb> Hmph.  The source directory of the tarball is ubuntu-docs-8.04.2~hardy
<twb> I bet lintian doesn't like that
<Sam-I-Am> lol
<twb> apt-cache policy ubuntu-serverguide says:
<twb>      8.06.1       0 500 http://mirror.internode.on.net hardy-updates/main Packages
<twb>      8.04.2~hardy 0 500 http://mirror.internode.on.net hardy/main Packages
<twb> OK, so what happens in that file, AFAICT, is that there are canonical English .xml files, and then .po files that contain each English paragraph and its translation (i.e. each paragraph occurs 1 + 2*(number of translations) times).  Then on top of *that*, I think the autogenerated translated .xml files are reproduced in the source?
<twb> Check this shit out:
<twb> fgrep -rl 'You can use the CVSROOT environment variable to store the CVS root' * | wc -l
<twb> 83
<Sam-I-Am> huh...
<twb> That's right, *eighty-three* copies of the same text in the source tarball
<Sam-I-Am> well ain't that special
<twb> I'm sure someone is just being lazy.  That can't be the only way to do it
<twb> Maybe it's something horrible like Canonical builds its docs using its unpublished internal code, so the source package actually contains postprocessed files.
<oh_noes> is there a package from repo i can install for VMWare Tool on Ubuntu Server?
<oh_noes> Or do I have to manually install it
<twb> oh_noes: do you mean so you can run ubuntu-server as a guest inside vmware?
<twb> oh_noes: on Debian there is open-vm-tools, I can't see it in 8.04
<twb> oh_noes: that is the "install VMware Tools" thing built properly in a .deb
<oh_noes> it's not, atm it's all manual, and you have to install your kernel source
<oh_noes> I wasn't sure if Ubuntu had a package for it, but that's ok
<twb> It's in 8.10
<twb> http://packages.ubuntu.com/open-vm-tools
<twb> I guess you could backport it
<Sam-I-Am> and the 8.10 package works flawlessly
<Sam-I-Am> although vmware is a little weird on detecting if the guest is running vmware tools...
<twb> I'd certainly be more inclined to trust backporting the intrepid package over using the shitty virtual CD full of scripts and sharballs that vmware-server itself mounts.
<Sam-I-Am> yeah, also known as... binary blobs
<Sam-I-Am> and m-a just makes it so easy to get modules in open-vm-tools
<twb> Sam-I-Am: actually, no
<twb> Sam-I-Am: the vmware tools .iso contains pre-compiled .ko files only for RHEL kernels
<twb> Sam-I-Am: there's also the module source in there, which would be used on an ubuntu guest
<Sam-I-Am> and some others... like sles i think
<Sam-I-Am> true
<Sam-I-Am> but their builds break on 'modern' systems
<Sam-I-Am> thanks to module build dependencies
<Sam-I-Am> i made a patch some time ago for it...
<Sam-I-Am> other folks seem to combine vmware-tools and open-vm-tools
<Sam-I-Am> almost seems like ubuntu is getting along better with vmware these days than redhat, since they're pushing their own vm system
<twb> Eh, vmware can FOAD as far as I'm concerned.
<Sam-I-Am> i really like virtualbox
<twb> qemu -curses alone makes qemu beat it, not to mention stuff like -tftp
<Sam-I-Am> but trying to sell it to management is difficult
<Sam-I-Am> "what, we dont have to pay for it?"
<twb> Well, virtualbox has a non-free edition
<twb> That's part of the reason I mistrust it
<twb> Sam-I-Am: you can sell it to your management by calling the OSE "the demo version"
<Sam-I-Am> lol
<oh_noes> thats not a valid argument
<Sam-I-Am> well, my first thing is trying to convert management from centos/rhel to ubuntu
<oh_noes> Ubuntu has a non-free edition (support) so you can't mistrust it for that
<Sam-I-Am> its quite the uphill battle
<twb> oh_noes: I do mistrust canonical.
<twb> oh_noes: I would VASTLY prefer Ubuntu to be a Debian blend or subproject rather than a fork that syncs irregularly.
<twb> oh_noes: but providing support is quite different to having two separate versions of a product.
<twb> It's not like Ubuntu has a RHEL and a CentOS version
<oh_noes> agreed, but they only charge for the features businesses want.  For example the non-free edition adds USB support, remote desktop etc.
<Sam-I-Am> yeah, the packages arent 3 years old :)
<twb> oh_noes: that's precisely my point.
<twb> oh_noes: it's the same business model as cedega has.
<twb> It's based on treating the wider community as second-class citizens.
<Sam-I-Am> welp, time for bed here
<Sam-I-Am> laters
<oh_noes> I disagree.  It's giving the wider community what they want for free, and charging business for anything additional they need.
<oh_noes> Sure, if they start removing functionality used by the wider community, then they break this
<oh_noes> but currently, what they are doing functionality wise I think is a great compromise
<twb> Apart from the fact that they're deliberately taking away features I want but am not prepared to pay for.
<twb> It means that if I want those features I have to add them into a fork of the product.  It's a divisive business model.
<twb> If they took a "sell consulting" approach, then all the code could be open, and everybody would be working on the same codebase.
<quizme> how can i tell if I'm using x86_64 or AMD64 architecture ?
<quizme> what is an unstripped build ?
<quizme> am i allowed to use the multiverse directory if I'm on 8.04 ?
<p_quarles> quizme: they're the same architecture
<p_quarles> I don't know what an unstripped build is, and yes, there's a multiverse repo for every Ubuntu version
<quizme> p_quarles: I mean  how can i tell if i'm 32 bit or 64 bit
<p_quarles> quizme: the CPU or the kernel?
<quizme> i'm not sure
<quizme> "x86_64 or AMD64" is that cpu or kernel ?
<p_quarles> quizme: again, x86_64 and AMD64 are the SAME THING; what I'm asking is whether you're trying to figure out if your hardware is 64-bit capable, or if the OS you're running is 64-bit
<p_quarles> or (this might be easier) what's your real question? why do you need to know?
<quizme> my real question is
<quizme> how do i install ffmpeg
<quizme> for 8.04
<p_quarles> quizme: sudo apt-get install ffmpeg
<p_quarles> the dependencies and architecture questions are automatically resolved by the apt-get program
<quizme> how about the codecs and libraries?
<quizme> i have 8.04 on my server
<quizme> https://wiki.edubuntu.org/ffmpeg  <--- i found this
<quizme> it looks like there is a bunch of other commands i need to do also
<quizme> Unstripped build of FFmpeg for Ubuntu 8.10 Intrepid  <--- like what does that mean?
<p_quarles> quizme: okay, I can see where your questions came from now, and let me just say that they are meaningless outside of that context; so start with the "real question" next time, okay? :D
<p_quarles> now, to answer: follow those commands exactly and you should be good
<p_quarles> to find out if you're running 64-bits, run in the terminal: uname -4
<p_quarles> oops, that should be uname -r
<p_quarles> if it's 64 bits, it will contain the term "x86_64" in the output
<p_quarles> as for "unstripped build", that just means a copy of the codec pack as Fluendo distributes it, rather than the modified way Ubuntu ships it
<quizme> 2.6.21.7-2.fc8xen
<quizme> that's my uname -r
<quizme> who is Fluendo ?
<p_quarles> so it's a Xen virtual machine? anyway, not 64 bits, so you can skip the section in question
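For reference, a hedged sketch of the architecture check: `uname -m` (not used above) reports the machine type directly, which is more dependable than grepping `uname -r`, since a release string like 2.6.21.7-2.fc8xen carries no architecture at all:

```shell
# Distinguish a 32-bit from a 64-bit kernel by machine type.
arch=$(uname -m)
case "$arch" in
  x86_64) echo "64-bit kernel" ;;
  i?86)   echo "32-bit kernel" ;;
  *)      echo "other: $arch" ;;
esac
```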
<quizme> Unstripped build of FFmpeg for Ubuntu 8.10 Intrepid  <----- i'm running 8.04 though
<quizme> i am running on AWS / EC2
<quizme> so is that a dangerous command to run ?
<p_quarles> what command?
<quizme> sudo apt-get install libavcodec-unstripped-51 libavdevice-unstripped-52 libavformat-unstripped-52 libavutil-unstripped-49 libpostproc-unstripped-51 libswscale-unstripped-0   <--- this command
<p_quarles> no
<quizme> ok
<quizme> but why does it say 8.10 ?
<p_quarles> oh, looking again, it appears to say those packages are available through apt-get only in 8.10
<p_quarles> for older versions, you'll need to use the instructions below
<quizme> can i upgrade my whole system to 8.10 ?
<quizme> is that safe ?
<quizme> from 8.04 to 8.10
<p_quarles> safe is relative; if you're asking "can it break?" the answer is yes; if you're asking
<p_quarles> "is it supposed to break?" the answer is no
<p_quarles> "safety" in my view is having a backup plan, and not relying on unfamiliar (to you) software to make things flawless; the latter is almost always unrealistic
<quizme> good advice
<p_quarles> anyway, the majority of version upgrade experiences are pretty smooth, but there is a significant minority that runs into big bumps during the process
<uvirtbot> New bug: #352154 in openssh (main) "ssh-agent stops responding" [Undecided,New] https://launchpad.net/bugs/352154
<rags> I want my ubuntu server to act as a gateway... what I understand is I have to enable routing (net.ipv4.ip_forward=1). Is there anything more I have to do? this server is connected to another router.
<rags> do I need to setup iptables and NAT?
<simplexi1> rags: depends which kind of sharing you want to configure
<simplexi1> rags: options are transparent bridge and NAT, google those
<rags> simplexi1: thx.. will check... I just want net access for the client machines behind the ubuntu server... I suppose that means a transparent bridge.
<simplexi1> rags: nat if you don't want access to it from outside, or bridge if you want to access it from somewhere other than the server
<rags> will the forwarding work with just what I have done... since nat is already present on the router?
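A minimal sketch of the NAT-gateway setup being discussed. The interface names (eth0 facing the internet, eth1 facing the LAN) and the exact rule set are assumptions, not from the log:

```shell
# Enable routing for this boot; persist by setting
# net.ipv4.ip_forward=1 in /etc/sysctl.conf.
sudo sysctl -w net.ipv4.ip_forward=1
# Masquerade LAN traffic out the internet-facing interface,
# and allow the forwarded flows in both directions.
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

If the upstream router already NATs, the MASQUERADE rule simply adds a second translation layer; it is still what lets the clients behind the ubuntu box out when that box routes (rather than bridges) their traffic.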
<quizme> if you say:  apt-get install a b c d e f ....... to reverse that can you say: apt-get remove a b c d e f ......... and that will bring the state of your system to exactly where it was before ?
<rst-uanic> quizme: if a had a dependency h, h would not be removed after step 2
<rst-uanic> but if h was a dependency only of a, it would be listed as a package that is no longer needed, and you can remove it with sudo apt-get autoremove
<quizme> oh i c
<rst-uanic> well
<rst-uanic> if you do sudo apt-get install pack1 pack2 pack3, it will give the list of all the packages that would be installed
<rst-uanic> you can save it
<quizme> what if c was installed before step 1 ?
<rst-uanic> it would not be listed in the packages that would be installed
<rst-uanic> so you won't delete it after
<quizme> do you mean uninstalled ?
<rst-uanic> look
<quizme> basically i am wondering if i uninstall a b c d e f I don't want it to wreck anything else that may want it there
<rst-uanic> if you had pack1 installed already, and you ran sudo apt-get install pack1 pack2 pack3, pack1 would not be listed among the packages that would actually be installed
<rst-uanic> so you will know that you should not remove it :)
<quizme> the problem is
<quizme> i already ran apt-get install
<quizme> so i can't see  that list
<rst-uanic> heh)
<twb> quizme: /var/log/dpkg*log
<quizme> twb: i c.... i have to dig in there .... thanks
<quizme> oh boy
<quizme> i need a bubble bath
<rst-uanic> :) just look at the timestamp
<twb> quizme: had you used aptitude, there would be /var/log/aptitude, which is more readable
<quizme> i c
<quizme> hmm
<quizme> ok
<quizme> i'll use aptitude from now on
<quizme> this is going to be hell
<rst-uanic> quizme: there is /var/log/apt/term.log
<rst-uanic> quizme: you would see the output of apt-get you ran before
<quizme> basically to get my system back to a state S0 at time t0, i should remove all packages installed after t0 if they weren't already in the system before t0.
<quizme> then type in apt-get autoremove
<quizme> it seems like that could be automated with a script
<rst-uanic> quizme: look at the /var/log/apt/term.log
<twb> That still is not guaranteed to get you back to what you had before.
<twb> In particular, removing (instead of purging) will not remove config files
<twb> And some buggy packages will leave stuff in /etc or /var even after you purge them
<twb> database packages sometimes do that to avoid data loss.
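The dpkg-log approach twb points at can be sketched like this. The log excerpt below is a made-up example in `/var/log/dpkg.log`'s format, and the date is illustrative:

```shell
# List what dpkg recorded as installed on a given day, so the set
# can be reviewed before purging.
log='2009-03-31 10:00:00 install libavcodec-unstripped-51 <none> 3:0.svn20080206.3-0ubuntu1
2009-03-31 10:00:01 install libswscale-unstripped-0 <none> 3:0.svn20080206.3-0ubuntu1
2009-03-30 09:00:00 install vim-nox <none> 1:7.1-138+1ubuntu3'
printf '%s\n' "$log" | awk '$1 == "2009-03-31" && $3 == "install" {print $4}'
# Live equivalent:
#   awk '$1 == "2009-03-31" && $3 == "install" {print $4}' /var/log/dpkg.log
```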
<quizme> it would be cool if there was a program that could bring your software and library state to a certain time point by dragging a "scrubber control" like in a video player.
<rst-uanic> quizme: it is called time machine in mac os x ;)
<quizme> twb: so there is no clean way to make a time machine with apt-get ?
<twb> It has been discussed, but does not exist yet.
<twb> In particular it will be easier once btrfs is in production, as it (like ZFS) supports snapshots.
<twb> In general, removing packages and purging them (aptitude purge ~c) should be sufficient
<twb> It's just not GUARANTEED to be identical
<quizme> ok
<quizme> so i'll just aptitude purge all the packages i installed today
<quizme> then reinstall what i was supposed to
<quizme> hopefully that doesn't break anything else
<quizme> aptitude purge libavcodec-unstripped-51 libavdevice-unstripped-52 libavformat-unstripped-52 libavutil-unstripped-49 libpostproc-unstripped-51 libswscale-unstripped-0   <--- this looks pretty safe doesn't it ?
<rst-uanic> quizme: sudo apt-get purge
<Counterspell> I'm on Ubuntu 8.04.2 LTS and for some reason I can't get vim installed correctly. The package installs but vim complains about missing features (such as no syntax highlighting). Anyone know what's going on?
<friartuck> Counterspell did you customize ~/.vimrc?
<Counterspell> yes
<friartuck> Counterspell rename it and give it a try without it.
<Counterspell> why is the vim build screwed up?
<Counterspell> ok
<Counterspell> of course that will work
<Counterspell> but i want those features
<friartuck> Counterspell no, I think you messed up your .vimrc
<friartuck> :)
<friartuck> syntax
<Counterspell> no vimrc is ok
<Counterspell> i just copied it from my other box
<Counterspell> nothing wrong with it
<friartuck> hm
<friartuck> regular sudo apt-get install vim?
<Counterspell> someone think the build for server would be 'more stable' without syntax highlighting?
<friartuck> no
<Counterspell> yes normal sudo apt-get install vim
<friartuck> Counterspell are you invoking with vi? maybe try with vim?
<Counterspell> nope; let me see i just did apt-get update and now it looks like i can install vim-full
<friartuck> I don't have an 8.04 box. np with 8.10.
<Counterspell> i think i'm all set now
<Counterspell> thanks man
<Counterspell> fyi; installing vim-nox is the way to go
<Counterspell> where are packages downloaded to again? i want to delete some downloaded packages
<Counterspell> Counterspell: /var/cache/apt/archives
<Counterspell> Counterspell: thanks
<friartuck> spaces in file names...wrote a simple script to inventory the permissions for files and dirs with full path. all works except file names with spaces. some help? http://pastebin.com/m11102c41
<friartuck> pointer?
<kraut> moin
<_ruben> friartuck: i assume something like this would work (not-tested) : find ~/.nx -name "*" -print0 | xargs -0 ls -Alhd
<friartuck> _ruben interesting, that may work better. Thx! I'm working with sed to figure out the first script.
<Counterspell> does apt-get build-dep only install the dependencies of a package?
<friartuck> _ruben yours fixed the space problem anyway.
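_ruben's NUL-separator idea, sketched with a throwaway directory (the temp dir and file names below are examples, not from friartuck's script):

```shell
# NUL-separated names survive spaces because neither find nor xargs
# splits on whitespace.
dir=$(mktemp -d)
touch "$dir/plain" "$dir/with space"
find "$dir" -type f -print0 | xargs -0 ls -ld
# Equivalent with no xargs at all:
#   find "$dir" -type f -exec ls -ld {} +
```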
<Counterspell> how do I install only the dependencies of a package?
<espacious> hi i installed gallery2 but i can't get the relative paths right... i tried almost everything...
<espacious> http://gallery.menalto.com/node/77317
<espacious> i followed that
<espacious> can anyone take a look?
<espacious> gallery2 path is /usr/share/gallery2
<espacious> wordpress is in /var/www/wordpress
<espacious> domainname.com is linked in /var/www/wordpress
<uvirtbot> New bug: #299455 in mysql-dfsg-5.0 (main) "mysql init script fails if debian-start is not executable" [Low,Triaged] https://launchpad.net/bugs/299455
<jdbrowne> During installation, tasksel offers the 'virtualisation' option. When this option is checked, the installer does not put the main user in the libvirt group. The installer automatically puts the main user in the 'admin' group; it would be desirable to put the main user in libvirt as well, so that the user can use virsh out of the box.
<uvirtbot> New bug: #352321 in mysql-dfsg-5.0 (main) "mysql queries "lose" results" [Undecided,New] https://launchpad.net/bugs/352321
<jdbrowne> Additionally, the default network is broken at install: its default settings make it fail on startup. Better to deactivate it (not autostarted) than to ship a broken default configuration. Another reason the default network should not be autostarted: it is a NAT configuration, which is not suited to several use cases. When in doubt, better to let the user choose than to make a choice for him that he will need to de-configure.
<Ethos> how do I create a launch to a executable?
<rst-uanic> is this irc room logged somewhere?
<jpds> !logs | rst-uanic
<ubottu> rst-uanic: Official channel logs can be found at http://irclogs.ubuntu.com/ - For LoCo channels, http://logs.ubuntu-eu.org/freenode/
<rst-uanic> jpds: thanks :)
<orudie> ivoks, hi
<orudie> question. how can I give user permissions to actually write to /var/www directory ?
<orudie> without chown, cause that just screws things up
<giovani> orudie: are you familiar with linux permissions?
<giovani> do an ls -ld /var/www
<giovani> and paste the output (should only be one line) here
<orudie> drwxr-xr-x 6 root root 4096 Mar 30 01:52 /var/www
<giovani> ok, typically, root does not own /var/www
<giovani> in ubuntu/debian, www-data does
<giovani> did you change it?
<orudie> nope
<giovani> do you have a webserver installed?
<orudie> yeah
<giovani> which one?
<orudie> installed it with tasksel, apache2
<orudie> with 4 active vhosts
<giovani> ok ...
<giovani> and who/what needs to write to this directory?
<orudie> oh
<orudie> i need to upload files with ssh
<orudie> i mean sftp
<giovani> well typically that's not done directly in /var/www
<giovani> you might create a directory like /var/www/user/
<giovani> and then chown that directory for your user
<orudie> giovani, oh so i have /var/www/site1, so can i do chown username /var/www/site1? won't this mess things up with apache's permissions?
<yann2> yes it would
<giovani> orudie: you can have your user, and apache's group own the directory
<yann2> giovani > you could put www-data in the group that owns the directory
<yann2> in the group of the directory I meant
<giovani> yann2: I just said that
<yann2> ok I misunderstood :) thought you told him to put www-data as group
<yann2> sorry
<giovani> I did, I must have just misunderstood you :)
<giovani> why create ANOTHER group?
<yann2> :) I would create a group like "website1", and put user and www-data in it
<giovani> what's the advantage of that, in this situation?
<yann2> more flexibility if there are several people working on the website?
<yann2> could give access to some people to one website but not the other one
<yann2> non sense if he is the only user :)
<giovani> ok ... he hasn't said anything about that
<giovani> but yes, in that situation, that would be more flexible, it's just more complex if it's not required
<yann2> I always found web permissions to be extraordinarily complex and unsatisfying :(
<orudie> yann2, can you help me create a group ?
<orudie> yann2, that will let me do what you are talking about
<giovani> yann2: linux permissions being very lacking don't help -- which is why real ACLs are usually brought in :)
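yann2's per-site-group idea could be sketched roughly like this; the names website1, username and /var/www/site1 are placeholders, not from the log:

```shell
# Create a group per site, add both the human uploader and apache's
# user to it, then hand the docroot to that group.
sudo addgroup website1
sudo adduser username website1      # the human uploader
sudo adduser www-data website1      # apache's user
sudo chgrp -R website1 /var/www/site1
sudo chmod -R g+rw /var/www/site1
sudo chmod g+s /var/www/site1       # new files inherit the group
```

The setgid bit on the directory is what keeps newly uploaded files in the site group, so apache can still read them without further chowning.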
<ScottK-palm> ivoks: We're looking at leaping to clamav 0.95 before release. There's a draft package in ubuntu-clamav PPA. Could you test it with amavisd-new?
<ivoks> i might...
<ivoks> i just can't tell when :)
 * ivoks wishes cloning were allowed :)
<ScottK-palm> I probably have about two days to decide.
<ScottK-palm> ivoks: I don't know anyone else I'd trust to do it and I'm pretty tied up working on porting libclamav rdepends.
<ivoks> i'll test it
<ScottK-palm> ivoks: Thanks.
 * ScottK-palm gets back to $WORK.
<Fenix|work> Greetings
<Fenix|work> I need some suggestions with TAR
<Fenix|work> I have an old version of tar that doesn't support inline bzip and gzip, that also splits archives once they hit 2048 MB, and the folder I'm trying to archive is 2078 MB.
<Fenix|work> how can I tar and bzip simultaneously?
<giovani> Fenix|work: run "tar --version" for me
<giovani> I don't know what "old version" means exactly
<Fenix|work> not supported :)
<Fenix|work> hehe
<Fenix|work> that old
<giovani> you're running ubuntu server?
<Fenix|work> I run several... but this one is not.
<giovani> this is #ubuntu-server
<Fenix|work> it's an antiquated BSD derivative.
<giovani> you'd need to read the manpage for the version you have
<Deeps> hi, i'm having a problem with an old version of winzip ;)
<giovani> as for bziping ... you can simply tar it and pipe that to bzip
<Fenix|work> giovani, I do, and I have... but there are bright minds here and it appeared noone was doing anything 'pressing' so I thought to ask.
<giovani> but beyond that ... clearly you'd have to refer to documentation that came with your version of tar ... it's not ubuntu, and it's not supported here
<Deeps> tar to stdout, pipe to bzip
<giovani> so the manpage makes no mention of file limit?
<Fenix|work> nope
<Deeps> have bzip create the file rather than tar
<giovani> Deeps: sounds like the advice I just gave :)
<Deeps> indeed
<Deeps> hopefully you can advise me with my winzip problem next ;)
<Fenix|work> giovani, to give Deeps credit, you mentioned to tar it and pipe into bzip... he suggested just to bzip without the tar
<ivoks> use cpio
<ivoks> don't use tar if it's old
<Fenix|work> Deeps, I'd be glad to help you with your winzip problem.
<ivoks> move it to another machine and tar it :D
<Deeps> Fenix|work: umm, actually, i suggested the exact same thing that giovani did
<Fenix|work> I was just thinking about mounting it via NFS to one of my ubuntu boxes
<giovani> heh
<Fenix|work> I missed the tar to stdio... just saw the 'have bzip create the file
<Deeps> bzip doesn't create archive files
<giovani> Deeps: I believe you have to replace the winzip flux capacitor
<Deeps> which is why you tar it first
<Deeps> if you tar to stdout, it shouldn't be splitting anything as it's not creating any files, it's simply being piped to bzip to create instead
<Deeps> (which is what giovani suggested :))
<giovani> of course, that relies on tar supporting STDOUT redirection, which, considering it doesn't support printing its version, might be a stretch
<Fenix|work> poke fun that the poor soul who has to administer some old piece of crap...
<giovani> however, it MIGHT evade the 2GB limit
<giovani> depending on its cause
<Deeps> for all we know, the system's so old it doesn't support files > 2gb ;)
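The piping approach giovani and Deeps describe, sketched for a tar that at least supports writing to stdout. "srcdir" stands in for the 2078 MB folder; a tiny demo copy is created here so the sketch runs anywhere:

```shell
# Stream tar to stdout and let bzip2 write the file, so tar itself
# never creates (or splits) an archive on disk.
mkdir -p srcdir && echo demo > srcdir/file.txt
tar cf - srcdir | bzip2 > srcdir.tar.bz2
# Verify the result without extracting:
bzip2 -dc srcdir.tar.bz2 | tar tf -
```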
<Fenix|work> 2.6GB of source code should compile pretty small
<Fenix|work> compress
<Fenix|work> jeeze
<Fenix|work> what is wrong with my brain today
<ivoks> sounds usuall to me :)
<giovani> all that dust you've been breathing in that's been stuck in that computer since the 1980s
 * Deeps gets back on with his windows MCE install
<Deeps> although given that it's off-topic hour, anyone know a linux alternative that'll work with an xbox360?
<giovani> "work with"?
<Deeps> the 360 has a MS-bodged upnp-av stack
<Deeps> so it'll only read networked media if it's coming from WMPv11 or a WinMCE (XP-MCE, Vista)
<giovani> talk to the folks at LinuxMCE
<Fenix|work> Deeps, have you visited the xbox-linux.org site?
<giovani> #linuxmce
<giovani> good project
<giovani> also http://smart-home-blog.com/archives/836
<Deeps> Fenix|work: just did, that's for running linux on the xbox, not reading network media from an xbox360 (totally different machine)
<Deeps> giovani: ta
<giovani> xbmc seems to have some support
<Deeps> yeah, all this stuff's for the xbox, not the xbox360
<Deeps> nm, </offtopic>
<giovani> Deeps: just talk with #xbmc and #linuxmce
<giovani> they'll know more than us
<Deeps> aye
<Deeps> < Deeps> although given that it's off-topic hour || was the only reason i asked ;
 * Fenix|work thinks most people know more than him
<Deeps> ;)
<giovani> most people can barely operate a computer
<giovani> so, given that you know what "tar" is ... I figure you're already in the top 1%
<Fenix|work> I get paid to barely operate several servers... the advantages of knowing that little bit extra.
<Fenix|work> They get their retribution by giving me this crusty old BSD derivative called QNX.  And not even the new version.
<ivoks> qnx?
<ivoks> ah lol
<giovani> QNX rocks!
<Fenix|work> I'm having fun trying to port over GCC 3
<Fenix|work> so I stand a chance at porting over some more up-to-date tools.
<Fenix|work> giovani, 6 yeah... 4, not so much from an administrative point of view
<giovani> you should run the "QNX is cool!" application
<giovani> http://upload.wikimedia.org/wikipedia/en/f/fd/Qnx_floppy.gif
<giovani> right next to "Towers of Hanoi"
<giovani> :)
<Fenix|work> hehe
<Deeps> if anyone's interested, the correct answer to my question was GeeXboX uShare ;)
<Fenix|work> Deeps, will keep that in mind
<giovani> Deeps: yeah, I figure I have to use MS MCE
<giovani> to get all the features I need
<giovani> nobody else supports QAM decryption with CableCards
<Deeps> fun
<giovani> because Linux = evil
<giovani> clearly
<Fenix|work> linux = scary... like 'the earth is flat' scary.
<Fenix|work> most devs don't want to fall off the edge of the earth, so they stay home
<giovani> exactly
<Fenix|work> and management doesn't want to use linux because they think they have to release their source code.
<giovani> haha
<ivoks> Fenix|work: talk to them
<ivoks> take the lead
<Fenix|work> they'll tell me... what's tar?
<ivoks> tar is a program we use every day - on linux it works, here it doesn't
<ivoks> and i have to backport real tar, which takes a couple of hours
<ivoks> that's why you have to pay me more
<ivoks> simple as that :)
<Fenix|work> ivoks, and where is 'here'?
<Fenix|work> :)
<ivoks> Fenix|work: at your company
<Fenix|work> ?
<ivoks> you said management doesn't know what tar is and is afraid of linux
<ivoks> just let them know that with linux everything would be cheaper, and you'll get the green light
<ivoks> time to go...
<ZipmaO> Someone think that they can help me with a mail-sending batch script not running correctly when run as a cron job?
<psyferre> hey folks, I've got some servers that have a pair of gigabit ethernet adapters each.  I bonded the nics and just found that they are all negotiating a 10 Mb connection instead of 1000.  Can anyone give me a shove in the right direction to fixing that?  My google-fu is failing me here... i must be searching for the wrong things
<acicula> let me check my magic 8 ball...( just ask your question)
<ZipmaO> acicula?
<giovani> psyferre: you use ethtool to try and negotiate at 1000mbps?
<psyferre> giovani: i'd been looking at mii-tool, at the -F options, but they only appear to support up to 100baseT
<psyferre> giovani: looking at ethtool now
<acicula> ZipmaO: just ask your question, or describe the problem, if someone knows they'll give you an answer
<psyferre> giovani: looks like ethtool -s bond0 speed 1000 is all i need, correct?
<giovani> psyferre: try it :)
<psyferre> giovani: :D  sorry, i'm a *nix novice and am trying to solve a production server problem quickly... we didn't realize the problem until an hour ago and are frantically trying to resolve it :)
<psyferre> giovani: i'll try to find something "safe" to try it on
<giovani> psyferre: the reason I say try it ... is because I haven't had the problem before -- I'm giving you my best advice
<giovani> but I can't be sure of what will work
<psyferre> giovani: i understand, thank you very much for the advice :)
<giovani> you can run ethtool bond0
<giovani> to find out some basic info
<giovani> that's harmless
<psyferre> giovani: okay, thank you :)
<greenfly> I'm not sure that ethtool will be effective against the bond0 device, it may have to be run against the individual nics
<giovani> that's a good point, greenfly
<psyferre> hmmph.  "No data available"
<greenfly> another issue is that I thought that gigabit ports required autoneg
<giovani> since it's interacting directly with the MII
<greenfly> so if you are getting 10mbit it's possible the switchport isn't set up properly
<giovani> just confirming what greenfly said -- yes, autoneg is required for 1000Mbps (had to look it up)
<psyferre> greenfly: i guess that's possible, though most of the switch ports are setup exactly the same way
<greenfly> so I'd be looking at the switch ports first
<greenfly> and make sure they are set to gig and autoneg
<giovani> however, it seems a number of PHYs support forcing 1000
<giovani> but it's non-standard
<greenfly> because otherwise you'll ultimately have to set your server's nics to autoneg in which case they'll possibly negotiate down to 10mbit again
<psyferre> greenfly, giovani: yes, the switch ports are set to autonegotiate and max capacity
<greenfly> maybe try hard-coding the switch ports themselves to gig?
<psyferre> they currently report 1000 mbps full duplex on those two ports
<greenfly> are they actually gig ports?
<giovani> what indicated to you that you had negotiated at 10Mbps?
<mathiaz> sommer: is https://wiki.ubuntu.com/JauntyServerGuide up-to-date wrt to the sections that need to be reviewed?
<greenfly> psyferre: if you /did/ want to hard-code an ethernet port to 1000 and turn off autoneg this is how you would do it (as root):
<psyferre> if i run mii-tool -v bond0 it reports the link speed at 10mb
<greenfly> psyferre: ethtool -s eth0 speed 1000 duplex full autoneg off
<greenfly> don't run it against the bond0 interface, but test eth0
<psyferre> greenfly: okay
<greenfly> psyferre: note that sometimes when I've run ethtool it hasn't disrupted service--other times it has
<greenfly> also, this won't persist after a reboot so ideally you'll figure out some way for autoneg to work
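Since ethtool against the slave nics is the diagnostic of choice here, a minimal sketch of pulling the negotiated speed out of its output; the text below is a hypothetical capture of `ethtool eth0` (in practice, run ethtool against eth0 and eth1 directly, not bond0):

```shell
# Hypothetical snippet of `ethtool eth0` output, stored as a string so the
# parsing can be shown without real hardware; field names match real output.
sample='Settings for eth0:
  Supported ports: [ TP ]
  Speed: 1000Mb/s
  Duplex: Full
  Auto-negotiation: on'
# Split on ": " and print the value after "Speed:".
speed=$(printf '%s\n' "$sample" | awk -F': ' '/Speed:/ {print $2}')
echo "$speed"   # 1000Mb/s
```

The same awk works against live output, e.g. `ethtool eth0 | awk -F': ' '/Speed:/ {print $2}'`.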
<dustin_> how do I activate my ftp server on ubuntu server 8.10? is there a page I can visit?
<sommer> mathiaz: yes, it is now
<greenfly> dustin_: there are a few guides around but the main way is to figure out what ftp service you want to run and use the package manager to install it
<mathiaz> sommer: thank ya
<psyferre> greenfly: okay, i wonder... when i created the bond0 interface i used this line from a tutorial: options bonding mode=0 miimon=100
<psyferre>  ... maybe there's another option i should have used?
<greenfly> psyferre: no, that doesn't affect the speed of the interface, just how it's bonded and what timeout it uses to determine when to failover
<psyferre> greenfly: okay, thank you
<greenfly> but I wouldn't run miitool or ethtool tests against bond0
<psyferre> greenfly: is there a better way that you would recommend to find at what speed the bond is operating?
<dustin_> greenfly: which ftp service is easiest to configure from command line?
<greenfly> psyferre: either ethtool against eth0 and eth1 (or whatever your two nics are) or actual speed test (ie using rsync or scp to transfer a file)
<psyferre> greenfly: they both report 1000baseT full duplex
<genii> balance-rr often confuses other machines you connect to
<greenfly> psyferre: then it sounds to me like your interfaces are actually at the correct speed
<psyferre> greenfly: according to ethtool anyway
<giovani> well, ethtool should be reading directly from the chipset
<genii> Doesn't the bond interface use a pseudo intel e100 driver ?
<dustin_> greenfly: pureftpd or proftp? which has the least setup?
<dustin_> greenfly: I know how to use both with gui tools but not command line
<greenfly> dustin_: if either is packaged in main it should have a pretty straightforward setup
<jmedina> dustin_: use pure-ftpd
<greenfly> if you just want a simple one, try to find one that can use local unix accounts
<giovani> genii: if it did, how would it supply more than 100Mbps from multiple bonded 100Mbps interfaces?
<genii> giovani: Yes, thats just what I was thinking about
<giovani> but we know that it does ...
<jmedina> pure-ftpd is controlled by arguments on the command line, and you can use puredb to create virtual users; you can control quotas, bw limits, and access by hours.
<dustin_> jmedina: where can I find a man online for pure-ftpd? this will reduce my questions in chat ;)
<giovani> dustin_: first google hit for "pure-ftpd"
<jmedina> dustin_: you can read pure-ftpd(8) and for ubuntu pure-ftpd-wrapper (8)
<dustin_> doing that atm but I am getting a lot of roundy-rounds ;(
<psyferre> giovani, greenfly: I am utterly failing at getting a transfer speed out of scp... could you give me a hint?  I tried -v and got loads of debugging messages, but i don't see anything that indicates the speed of the transfer
<giovani> psyferre: when it's transferring a file it gives the speed on the right, afaik
<greenfly> yeah same here
<greenfly> otherwise you could use rsync with --progress
<psyferre> giovani: heh, i see nothing that isn't directly in front of my face, that is.  *sigh*  Sorry about that...  27.3MB/s
<psyferre> it was a 28 mb file... maybe i should try something larger?
<giovani> psyferre: that's definitely not 10Mbps :)
<giovani> yes, something larger would help
<psyferre> giovani: yup! :)  at least i know that much :)
<giovani> dd if=/dev/zero of=/testfile bs=1024k count=512  --  that should do it
<giovani> keep in mind, scp has significant overhead
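giovani's dd line can be folded into a quick local throughput check; a sketch (count=8 here so the demo is fast; the file name /tmp/testfile and the local cp standing in for the scp hop are assumptions):

```shell
# Create a test file with dd as suggested (count=8 -> 8MiB for a quick demo;
# use count=512 for the real 512MiB test), then time a copy and print MB/s,
# roughly the figure scp's progress meter shows for a network transfer.
dd if=/dev/zero of=/tmp/testfile bs=1024k count=8 2>/dev/null
start=$(date +%s.%N)
cp /tmp/testfile /tmp/testfile.copy
end=$(date +%s.%N)
awk -v mb=8 -v s="$start" -v e="$end" 'BEGIN { printf "%.1f MB/s\n", mb / (e - s) }'
```

A local disk copy only bounds the disk, of course; for the network number, scp or rsync --progress between the two hosts is the real test.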
<psyferre> hmm... just transferred an ubuntu iso... transfer speed hovers around 20 mbps
<giovani> you mean 20MBps?
<giovani> you reported 27.3MBps just a minute ago
<giovani> that's very different from 20Mbps
<psyferre> yes, sorry... lazy shift key
<psyferre> :)
<greenfly> that's still more than 100Mbit
<psyferre> that's true... i've never been good at thinking in megabit terms... so i must be good to go!
<PhotoJim> psyferre: divide by 10 for MB from Mb... it's not exact.  but it'll get you in the ballpark.
<PhotoJim> psyferre: it worked in the modem days.  with start-stop bits, it was 10 bits per byte.
<PhotoJim> psyferre: with overhead, that overstates it a little but it's still reasonable.
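PhotoJim's divide-by-10 shortcut in one line (1000 Mbit/s is just gigabit as the example):

```shell
# Rule of thumb: Mbit/s / 10 ~= MByte/s, since 8 data bits plus framing and
# protocol overhead come out to roughly 10 bits on the wire per byte moved.
link_mbps=1000
echo "$(( link_mbps / 10 )) MB/s"   # ballpark usable throughput on gigabit
```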
<psyferre> PhotoJim: thanks :)
<psyferre> greenfly, giovani, and everyone else who commented: Thank you very much for helping a novice figure out what the heck is going on.  I really appreciate it.
<Iceman_B^Ltop> is there any difference in TIA-568B and TIA-568A wired cabling, besides the pin order?
<Iceman_B^Ltop> they should perform equally good, right ?
<genii> Iceman_B^Ltop: Yup. Just use same order on both ends
<christian_> hello
<genii> Iceman_B^Ltop: I generally use B
<christian_> Does anybody use a mail server with multiple domains??
<giovani> christian_: of course, it's a common setup
<christian_> giovani do you have a mail server with postfix???
<giovani> christian_: yes
<christian_> and various domains??
<giovani> yes ...
<christian_> I have a mail server
<jmedina> I use postfix with virtual domains in ldap and mysql
<christian_> I do not understand how to use ldap and mysql
<christian_> in my mail server
<giovani> christian_: neither are required for virtual domains
<giovani> but postfix provides great documentation on setting it up, if you'd like
<christian_> yes, i saw that information, but i dont understand how to use my domain1 with my domain2
<christian_> I have squirrelmail
<christian_> and the users, how do they check their mail?
<jmedina> christian_: with a simple plain setup you map mail addresses to local users, and if you want foo@domain1.com and foo@domain2.com with different mailboxes, you need to create two different users and use a map
<jmedina> if you want both domains go to the same mailbox, just add domain2 to mydestination
<christian_> what is the setup??
<jmedina> for more info read Postfix Virtual Domain Hosting Howto: http://www.postfix.org/VIRTUAL_README.html
<christian_> I read about the configuration of postfix
<jmedina> I use postfix+mysql for virtual hosting for different customers
<Iceman_B^Ltop> genii: I'm looking at a factory sealed cable that says 568-A but apparently is wired up as 568-B
<Iceman_B^Ltop> but on both ends, so that shouldn't be a problem
<giovani> Iceman_B^Ltop: yep, a non-issue
<Iceman_B^Ltop> okay
<giovani> 568-B is far more common
<giovani> -A is considered obsolete
<mathiaz> kirkland: does kvm/libvirt support snapshot?
<jmedina> christian_: for a simple setup without mysql or ldap this howto looks good:
<jmedina> http://www.akadia.com/services/postfix_separate_mailboxes.html
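For reference, a minimal sketch of the plain-file style jmedina describes, following the VIRTUAL_README pattern; the domain and user names are placeholders, not anything from this channel:

```
# /etc/postfix/main.cf (fragment)
virtual_alias_domains = example.net
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual -- foo@ each domain lands in a different local mailbox
foo@example.com   foo1
foo@example.net   foo2
```

Run `postmap /etc/postfix/virtual` after editing the map; if both domains should share one mailbox, adding domain2 to mydestination (as jmedina says above) is enough, no map needed.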
<kirkland> mathiaz: yes, much better in kvm-84
<mathiaz> kirkland: is this feature available from virsh?
<jmedina> how does kvm handles snapshots?
<mathiaz> kirkland: here is my scenario:
<christian_> jmedina, why use mysql for clients
<christian_> is it necessary?
<kirkland> mathiaz: i have no idea about virsh
<jmedina> you dont use mysql for clients, you only store mail accounts in a database
<kirkland> mathiaz: let's talk to aliguori in #ubuntu-virt
<jmedina> I prefer mysql because you can use a web based frontend like postfixadmin
<kirkland> mathiaz: doh... he just checked out
<kirkland> mathiaz: here is fine
<mathiaz> kirkland: I'd like to run my jaunty base vm all the time (named j-base) and when I need to create a test vm based on jaunty, I would run a command (create_vm.sh j-base t-dovecot) that will snapshot the j-base vm and create the t-dovecot vm
<jmedina> with postfixadmin you can manage virtual domains, different admins, mail quotas, mail forwarding, and aliases
<mathiaz> kirkland: and then I would ssh into t-dovecot
<mathiaz> kirkland: do all my testing, and when I'm done I would just delete_vm.sh t-dovecot
<mathiaz> kirkland: for now I'm using lv to hold the j-base filesystem and lvm snapshot to handle the snapshoting
<mathiaz> kirkland: however I can only create a snapshot if the j-base vm is *not* running
<mathiaz> kirkland: for consistency
<kirkland> mathiaz: see -snapshot in http://manpages.ubuntu.com/manpages/jaunty/en/man1/qemu.1.html
<mathiaz> kirkland: which means that my j-base vm doesn't run most of the time.
<mathiaz> kirkland: thanks for the pointer. I'm gonna have to think about this a bit more.
<kirkland> mathiaz: me too ....
<kirkland> mathiaz: i think using that -snapshot option to kvm, you should be able to master off of your base vm, and snapshot your testing to an auxiliary file
<mathiaz> kirkland: right. That seems like a good option.
<mathiaz> kirkland: however, how would you handle a live vm running from the master file?
<mathiaz> kirkland: could suspending the master vm work?
<mathiaz> kirkland: take a snapshot of the root block device and boot from there?
<mathiaz> kirkland: in my current setup I'm also doing that, except that the master vm is always off.
<mathiaz> kirkland: and I need to boot it once in a while to update the system correctly.
<mathiaz> kirkland: to boot the master vm
<mathiaz> kirkland: I would like to avoid that
<kirkland> mathiaz: hmm, there is a "saveback" command you can issue
<kirkland> mathiaz:         Ctrl-a s
<kirkland>             Save disk data back to file (if -snapshot)
<mathiaz> kirkland: right - could the guest issue a saveback command?
<mathiaz> kirkland: it seems that the guest is the one that knows when it's safe to be snapshotted
<mathiaz> kirkland: you don't want to take a snapshot of the master vm in the middle of an apt-get upgrade
<kirkland> mathiaz: right
<kirkland> mathiaz: looks like you want this ctrl-a s command when you *know* you want to saveback
<mathiaz> kirkland: right - something like a checkpoint command
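A sketch of the -snapshot workflow from the qemu manpage kirkland linked; the image paths are placeholders, and the qemu-img overlay at the end is an extra option not raised above, included only as a possible route:

```
# Boot the master read-mostly: guest writes go to a temp file, not the image.
kvm -snapshot -hda /var/lib/vms/j-base.img -m 512
# In the (serial/-nographic) console, Ctrl-a s saves the temp writes back to
# the image -- only when you *know* the guest is quiescent, per the manpage.

# Possible alternative: give each test vm a copy-on-write overlay of the base,
# so t-dovecot can be created and deleted without touching j-base.img:
qemu-img create -f qcow2 -b /var/lib/vms/j-base.img /var/lib/vms/t-dovecot.qcow2
kvm -hda /var/lib/vms/t-dovecot.qcow2 -m 512
```

The usual caveat applies either way: a backing image must not change while overlays still depend on it, which is the same consistency problem mathiaz raises about mid-upgrade snapshots.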
<Iceman_B^Ltop> giovani / genii: Im posting about my problem on the Ubuntu forums. My server keeps dropping SSH connections, and I found that internet connections lag out too at that point
<giovani> Iceman_B^Ltop: what makes you think it's ubuntu-related?
<giovani> you're probably suffering bad packet loss
<Iceman_B^Ltop> giovani: I had Ubuntu 8.10 desktop on that same machine up till a week ago
<Iceman_B^Ltop> same hardware, except the hdd
<Iceman_B^Ltop> no problems at all
<giovani> well ... things can change, cables can be bad, hardware can go bad
<Iceman_B^Ltop> except that Ibex Desktop has a GUI which I dont use on a headless machine, and it ate all 256 megs of ram
<giovani> ubuntu server and ubuntu desktop are almost identical at lower levels
<giovani> but, alright
<Iceman_B^Ltop> giovani: how high is the chance of that coinciding with the switch to a different OS ?
<giovani> it's not a different OS
<giovani> I'd say it's almost nil
<giovani> the ethernet driver will be the same, unless you were using an old kernel before, and have updated now
<giovani> is it possible that it's related to the server kernel? yes ... but I'd think it's damn unlikely
<Iceman_B^Ltop> I have not the slightest idea. I thought it would be my network at first
<Iceman_B^Ltop> take a look here if you want http://ubuntuforums.org/showpost.php?p=6985576&postcount=55
<Iceman_B^Ltop> and in the thread itself I posted a small update after that
<giovani> I'd forget this application-specific diagnosis
<giovani> do a long ping test
<giovani> and establish that packet loss is the issue
<jmedina> Iceman_B^Ltop: could you pastebin the output from: ip -s link
<Iceman_B^Ltop> i'll try
<Iceman_B^Ltop> http://pastebin.ubuntu.com/141603/
<Iceman_B|SSH> server here /o/
<Iceman_B^Ltop> connection dropped...
<Iceman_B^Ltop> there it goes
<billyk> how can I list all jpeg files in the home folder without being in that directory?  (tried ls -aR /home/*.jpg and it didnt work for me).  A step further, how could I list only jpg's with an underscore in the filename e.g. *_*.jpg
<billyk> sorry, I meant home folder and subdirectories
<sommer> billyk: probably something like find /home/$user -name "*.jpg"
<billyk> I should have said I'm trying to use this with mogrify
<billyk> I can do mogrify *.jpg if i'm in that directory, but I have a bunch of subdirectories I want to resize images in in multiple home directories
<Deeps> find /home/$user -name "*.jpg" | while read file; do mogrify $file; done
<giovani> Deeps: don't you think using -exec would be better?
<billyk> awesome
<billyk> noob question but what does $user do?
<billyk> and $file
<billyk> variables?
<billyk> like a shell script?
<giovani> a shell script is just what's interpreted by bash
<giovani> everything you run in bash is a script
<friartuck> billyk good intro: http://tldp.org/LDP/abs/html/
<giovani> $user there is not a defined variable -- I think he just used it as a placeholder for you to fill in
<giovani> $file is a variable, as it's referenced by the while loop
<friartuck> I think it's $USER and not $user
<giovani> well, for the current user, sure
<billyk> will that do all the user accts?
<giovani> who knows if he wants that :)
<giovani> billyk: no
<giovani> you'd need to wrap it in a for loop
<giovani> for all the dirs in /home
<billyk> just all subdirectories in the home folder
<billyk> no easier way to do that than a loop?
<giovani> sure, just back out the find execution to /home
<giovani> that'll apply to any directory in home
<billyk> cool thanks!
<billyk> gonna go read that bash guide now
<Deeps> giovani: could be, i like while loops ;)
<billyk> so the *.jpg in quotes is a regex?
<giovani> no, that's not regex
<billyk> so could I put "*_*.jpg"
<billyk> oh
<Deeps> thats still not regex, but yes
<giovani> regex would be something like ".*?.jpg"
<Deeps> ".*_.*\.jpg"
<giovani> but yeah, what you want to do will work
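giovani's -exec suggestion from earlier, sketched on a throwaway tree (all paths are hypothetical, and echo stands in for mogrify so nothing gets modified):

```shell
# Toy stand-in for /home; run find from the top so it recurses into every
# user's directory, matching "*_*.jpg" (a shell glob, not a regex).
mkdir -p /tmp/homedemo/alice /tmp/homedemo/bob
touch /tmp/homedemo/alice/img_1.jpg /tmp/homedemo/bob/cat.jpg /tmp/homedemo/bob/note.txt
# -exec avoids the while-read loop and survives filenames with spaces;
# swap in `mogrify -resize 800x600` for echo to actually resize.
find /tmp/homedemo -name "*_*.jpg" -exec echo would-process {} \;
# -> would-process /tmp/homedemo/alice/img_1.jpg
```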
<MagicFab> dendrobates, :)
<antdedyet> win 26
<antdedyet> lose 27, heh
<uvirtbot> New bug: #351378 in dhcp3 (main) "dhclient fails for virtual interfaces (IP aliases)" [Undecided,New] https://launchpad.net/bugs/351378
<genii> Probably if the master interface already has an IP from same dhcp server, likely
<genii> (since MAC would not change)
<antdedyet> anyone here got a canonical partner sales contact for a partner?
<antdedyet> ours is out of office
<antdedyet> and the temporary counterpart has been unresponsive
<billyk> bash syntax question - this obviously doesnt work, but it probably best explains what I'm trying to do- if (! mogrify -identify 1.jpg | grep 800x600) (newline) then mogrify -resize 800x600 1.jpg (newline) fi
<billyk> mogrify -identify 1.jpg | grep 800x600 only outputs data if the image is the right size.  I want the -resize command to only be run if it's not the right size
<billyk> for some reason mogrify -resize still changes an image's hash even if it's already the right resolution (bad for rsync)
<jesperronn> Hey anybody able to help me with a preseed question? (isolinux.cfg)
<jesperronn> I'm currently working on creating an unattended preseeded ubuntu server install
<giovani> billyk: try using || between the grep command and the second mogrify command
<giovani> it means the 3rd command will only run if the grep fails
<billyk> cool
<jesperronn> However, first thing that comes up (in front of the installer menu) is Language selection. Question is How do I remove that language selection? Which command could I add to isolinux.cfg?
<jesperronn> Here is my current isolinux.cfg:
<jesperronn> include menu.cfg
<jesperronn> default Brownpaper
<jesperronn> prompt 0
<jesperronn> timeout 0
<jesperronn> gfxboot bootlogo
<jesperronn> label Brownpaper
<jesperronn>   menu label ^Brownpaper customized installation
<jesperronn>   kernel /install/vmlinuz
<jesperronn>   append file=/cdrom/brownpaper.seed locale=en_US console-setup/layoutcode=us initrd=/install/initrd.gz quiet --
<giovani> jesperronn: FAR too much pasting -- use pastebin next time
<jesperronn> (sorry for the many lines) -- thanks for tip @giovani
<giovani> billyk: that work out ok?
<jesperronn> Here it is in pastebin: http://pastebin.com/d2e568e5
<jesperronn> My challenges: 1) bypass the Language selection menu. 2) make the "Brownpaper" menu item start automatically, if possible.
<giovani> jesperronn: your question is pretty specific, and not common knowledge for someone to have -- so wait around
<billyk> giovani: yeah.  Thanks! :-)  if the grep doesnt fail though, it outputs the result of that command to the terminal.  will that be okay for a shell script?  or do I need > /dev/null or something?
<jesperronn> @giovani: thanks for your tip! I presume this is the best forum for the question even it's specific. Any links to documentation/api or examples is appreciated
<giovani> billyk: || is not similar to | -- || = OR  and | = pipe
<giovani> so, nothing is being passed to the last command
<giovani> it's just only being run if grep fails
<giovani> if you wanted to run a command only if grep succeeded you'd use &&
<giovani> jesperronn: the wiki, google, and ubuntuforums.org probably have a good bit of info on the topic
<giovani> billyk: and just if you're curious, the way that bash knows whether or not grep "succeeded", it's based solely on exit status -- it doesn't read grep's output or anything else
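The exit-status rule is easy to see with grep -q, which prints nothing at all, yet still drives && and ||:

```shell
# bash decides && / || purely on exit status -- it never reads grep's output.
echo "800x600" | grep -q 800x600 && echo "already right size"
echo "640x480" | grep -q 800x600 || echo "would resize here"
# prints "already right size" then "would resize here"
```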
<billyk> ah
<billyk> trying to digest all that :-)
<giovani> billyk: yeah ... don't worry about digesting it all at once
<giovani> I'm really far from a bash expert -- you just pick up a few things every time you try something new
<giovani> mastering piping and output/input redirection are the most important bash skills
<giovani> in my opinion
<billyk> yeah, it's obviously really useful
<giovani> you feel comfortable with those?
<billyk> not yet
<giovani> i.e. < and > and >> and | ?
<billyk> haha
<giovani> well, and 2> :)
<giovani> ok, quick recap ... `programname < filename` takes everything in the file 'filename' and sends it to the input of 'programname'
<billyk> mogrify -identify logo.png | grep 668x476 || mogrify -adaptive-resize 668x476! logo.png still shows grep's output
<giovani> billyk: "shows" you mean it prints to the console?
<giovani> is that a problem?
<billyk> will it be if I have that line in a .sh?
<giovani> it'll print to the console ... nothing bad
<giovani> you can fix that though if you need
<billyk> okay.  when you execute a shell script from cron though, where would that output go?
<Deeps> email
<giovani> most people would send console output to /dev/null (basically, discard it) instead of printing it when using cron, so that it doesn't get emailed back to the user
<giovani> mogrify -identify logo.png | grep 668x476 > /dev/null || mogrify -adaptive-resize 668x476! logo.png
<giovani> should do it
<giovani> try it out
<giovani> the other option (specifically with grep) is to run it with the -q option
<giovani> it suppresses all output
<giovani> mogrify -identify logo.png | grep -q 668x476 || mogrify -adaptive-resize 668x476! logo.png
<billyk> okay
<billyk> why doesnt it work with > /dev/null at the end?
<giovani> but that'll only work with grep -- not all apps have options to not output anything -- so knowing about > /dev/null is important
<billyk> yeah
<giovani> billyk: why doesn't what work?
<billyk> mogrify -identify logo.png | grep 668x476 || mogrify -adaptive-resize 668x476! logo.png > /dev/null doesnt suppress output
<giovani> because > /dev/null is applying to the command to the left of it
<giovani> which, in your case, is mogrify, not grep
<giovani> so it needs to go after grep -- since it's grep that has the output you want to suppress
<billyk> ooh
<billyk> I thought the output was just piped to the -resize command
<giovani> billyk: nope, remember || is NOT a pipe
<giovani> it's a special OR operator, despite looking similar to pipe :)
<giovani> so, because it's not a pipe, grep's output is going directly to the console
<giovani> (unless you redirect it with > /dev/null)
<giovani> billyk: make sense? or still not clear?
<billyk> giovani: no, I got it :-)
<giovani> awesome :)
<billyk> now I'm curious about the 2> though
<giovani> ah, well, that's simple enough to cover
<billyk> is that on http://tldp.org/LDP/abs/html/ ?
<giovani> so, when we say "output" we mean STDOUT
<giovani> and when we say "input" we mean STDIN
<giovani> so, STDOUT is >
<giovani> STDIN is <
<giovani> there's one more ... STDERR -- which is 2>
<giovani> which is supposed to only be used for error-related info, and not general output
<billyk> ok, remember some of that from basic C programming
<billyk> so STDOUT is what's output to the terminal, or what's passed in a pipe? or both?
<giovani> STDOUT by default goes to the terminal, unless it's redirected with > or |
<giovani> > being used to output to files, and | to pass it to the STDIN of the next application after the pipe
<billyk> cool
<giovani> 2> takes just the STDERR, and outputs it to a file
<giovani> in many cron jobs, people want to either collect both info and error messages in one place or discard them both; they do this with `program &> filename`
<billyk> if you use 2> where does the stdout go?
<giovani> wherever you instruct it to
<giovani> i.e. `programname 2> myerror.log`
<billyk> so I can do command -argument  2> error.log > output.txt ?
<giovani> yep
<giovani> or, let's say, for example, you wanted to pipe both STDERR and STDOUT to another program
<giovani> you'd use redirection to accomplish that
<giovani> `programname 2>&1 | secondprogram`
<giovani> the 2>&1 takes STDERR and pushes it into STDOUT
<giovani> and then pipe takes all STDOUT (which now includes STDERR) and passes it to STDIN of secondprogram
<billyk> can the secondprogram differentiate the STDERR from the STDIN?
<giovani> nope
<giovani> the &1 may seem arbitrary, but, in reality, each of the three file descriptors (STDIN, STDOUT, and STDERR) have numbers, 0, 1, and 2
<giovani> so 1> is the same as > which is STDOUT
<giovani> and 2> is STDERR
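The three descriptors in a quick, harmless demo (file names are arbitrary):

```shell
# fd 1 is STDOUT, fd 2 is STDERR; > and 2> split them into separate files.
{ echo normal; echo oops >&2; } > /tmp/out.txt 2> /tmp/err.txt
cat /tmp/out.txt    # normal
cat /tmp/err.txt    # oops
# 2>&1 folds STDERR into STDOUT, so a pipe (or &>) sees both streams:
{ echo normal; echo oops >&2; } 2>&1 | wc -l    # 2
```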
<billyk> cool
<christian_> hi giovani...
<christian_> help me please
<billyk> it might be pointless to do this, but how would you save stderr to a file and then pipe stdout to a command?
<christian_> i cant get the email server working with two domains
<giovani> billyk: in that case, you'd use the 'tee' command
<giovani> which both reads its STDIN, writes it to a file, and also sends it to STDOUT
<giovani> so, `programname | tee outputfile | secondprogram`
<giovani> would take the STDOUT from 'programname', write it to 'outputfile' and also send it to 'secondprogram'
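Worth noting: billyk's exact ask (STDERR to a file, STDOUT down a pipe) needs only 2>, since a pipe carries STDOUT alone; tee is for when you additionally want STDOUT saved en route. A sketch with placeholder commands:

```shell
# STDERR to a file while STDOUT flows down the pipe -- just 2>, no tee needed:
{ echo data; echo warn >&2; } 2> /tmp/stderr.log | tr a-z A-Z    # DATA
# tee fits when you also want a copy of STDOUT written to a file on the way:
{ echo data; echo warn >&2; } 2> /dev/null | tee /tmp/stdout.log | tr a-z A-Z
```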
<giovani> now I'm off
<giovani> later
<billyk> giovani: Thanks so much!
<MatBoy> damn I have a problem
<billyk> MatBoy: what is it?
<MatBoy> billyk: I love myself.... :/
<baffle> dustin__: Your screen profiles rock btw. I've used screen for over 10 years, but never got around to actually making myself a proper profile. :)
<Iceman_B|SSH> hmm, capturing from my laptop only reveals SSH packets...
<jmedina> Iceman_B|SSH: still having network problems?
<Iceman_B^Ltop> jmedina: yup, still
<jmedina> Iceman_B^Ltop: please paste output from "ip -s link"
<cjwatson> sigh, if only jesperronn had stuck around another hour I could have answered his question
<cjwatson> (the answer is to put a language code of your choice, e.g. "en", in /isolinux/lang on the CD)
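cjwatson's fix as a command, assuming the remastered CD tree is unpacked at a placeholder path /tmp/cdroot:

```shell
# Put a language code in /isolinux/lang on the CD so the installer skips
# the language menu; /tmp/cdroot stands in for wherever the tree lives.
mkdir -p /tmp/cdroot/isolinux
echo en > /tmp/cdroot/isolinux/lang
cat /tmp/cdroot/isolinux/lang    # en
```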
<PhotoJim> baffle: thanks for mentioning those profiles.  I had no idea they existed.  I'm going to install them and play with them.
<dustin__> ok kinda embarrassed but I didnt know I had a profile??? :S
<dustin__> or did baffle have the wrong guy?
<Iceman_B^Ltop> jmedina: http://pastebin.ubuntu.com/141730/
<PhotoJim> the right Dustin is on as user Kirkland
<jmedina> Iceman_B^Ltop: looks fine, no errors, dropped, or overruns
<kirkland> PhotoJim: ?
<Iceman_B^Ltop> jmedina: okay
<baffle> dustin__: Maybe the wrong guy. :-)
<dustin__> baffle: where can I go to see that great profile I never made?
<dustin__> :)
<dustin__> brb I is gonna fix my name :D
<baffle> dustin__: I assumed you were Dustin Kirkland.
<mds58> ahhh so much better
<PhotoJim> kirkland: Baffle was commenting that he really likes your screen-profiles package.
<baffle> mds58: But you should apt-get install screen-profiles then. :)
<Iceman_B^Ltop> jmedina: well, I have no clue then. apart from installing Ubuntu desktop, and seeing whether or not the problems cease
<Iceman_B^Ltop> if they dont, it might be hardware
<Iceman_B^Ltop> I have a tcpdump output as well
<jmedina> Iceman_B^Ltop: have you tested in a livecd?
<kirkland> PhotoJim: oh, sweet
<kirkland> baffle: thanks!
<Iceman_B^Ltop> jmedina: can't say I have
<Iceman_B^Ltop> but the server is running headless, can I still use a live cd then ?
<Iceman_B^Ltop> or do I really need a screen
<Iceman_B^Ltop> and keyboard
<PhotoJim> Iceman_B^Ltop: screen and keyboard are still useful.  if things go wrong, it is often useful to be able to do a console login from the machine itself.
<baffle> kirkland: Tried looking into 256 color profiles? Nice color shading etc.
<baffle> kirkland: As in http://www.frexx.de/xterm-256-notes/
<Iceman_B^Ltop> PhotoJim: just the 2 things I dont have
#ubuntu-server 2009-04-01
<mds58> I am kinda new to php, if I write a php document to load onto my server will I need the libraries on the client computer that I am loading the file from?
<mds58> or am I overthinking this?
<baffle> mds58: Only if you want to test it locally.
<baffle> mds58: Which is usually a good idea.
<mds58> ok ty baffle that really cleared it up for me
<kirkland> baffle: cool
<kirkland> baffle: what do we need to do to take advantage of that?
<kirkland> baffle: different compile options?
<baffle> kirkland: According to that webpage, it's already supported in screen.
<kirkland> baffle: i dorked around with it a little and couldn't get it to work
<baffle> kirkland: Gnome-terminal supports 256 colors out of the box.
<baffle> kirkland: But not all terms (like xterm?) do.
<kirkland> baffle: if you can demo a couple of lines in an .screenrc file that work, i'm *all* ears ;-)
<baffle> kirkland: Also, you *might* need to have "ncurses-term" installed, as I believe the 256-color terminfo files are in there.
<Iceman_B|SSH> clear
<Iceman_B|SSH> er
<Alysum> hi - I'd like some advice on how to move /var/log to another location without breaking things, ta
<PhotoJim> Alysum: to another location...?
<Alysum> yes i.e. /mnt
<Alysum> another partition
<PhotoJim> Alysum: I would strongly recommend against physically moving it.  but if you want it on another device, just mount that device partition at /var/log ...
<PhotoJim> Alysum: copy the contents of /var/log somewhere else first, mount the partition, then copy everything back.
<Alysum> I'd put a symlink /var/log => /mnt/var/log
<PhotoJim> Alysum: if you prefer another location, I strongly recommend against it.  but you could make a soft link to point to it, and put that anywhere.
<Alysum> I did that but then nothing was logging
<jmedina> Alysum: did you restart syslog?
<twb> Alysum: why do you *want* to move it?
<Alysum> well /dev/sda has a few hundreds MB free...
<jmedina> syslog trunks files
<jmedina> *truncates
<Alysum> symlink is allright isnt it?
<Alysum> I know it's risky but lots of people put the whole OS on just a 5GB partition
<Alysum> i.e. Amazon Web Services EC2
<jmedina> I have moved /var/log to another partition with the following procedure
<Alysum> yes?
<twb> 4GiB is heaps for the root filesystem of a server.
<jmedina> rsync /var/log to the new location, temporarily stop sysklogd (at night), mount the new location as /var/log, and then restart sysklogd
<twb> For a desktop I might allocate 8GiB.
<twb> Of course, I would normally use LVM so it is trivial to grow the space allocated.
<Alysum> /dev/sda1             4.0G  3.1G  713M  82% /
<twb> Alysum: cd / && sudo du -mx | sort -nr | head
<twb> Alysum: that will tell you where all the space is going
<PhotoJim> twb: it isn't trivial if you don't use LVM, but it's not impossible.  I just resized partitions on an ancient system I have.  pulled the drive, hooked it up by USB to my laptop, about a half an hour and it was done.
<Alysum> so you mounted /var/log on top of /var/log with the contents in there already
<twb> PhotoJim: granted.
<twb> PhotoJim: it's a lot harder if you have to shuffle partitions around, though.
<PhotoJim> twb: agreed.  but it isn't something a person has to avoid if they didn't use logical volumes, and their partitions are inappropriately sized.
<twb> In any case, my "rule of thumb" is still: use LVM
<PhotoJim> if you go to single user mode, does logging still occur?  I swapped /home and /usr in single user mode once.  but not sure that'd be wise with /var.
<twb> Particularly since most of the machines I deal with are in other countries.
<Alysum> I need to do this on a live server so it's quite critical to come up with a good plan :(
<jmedina> twb: I agree, lvm rules for remote systems, and of course grub fallback :D
<Alysum> btw to stop syslog do I just kill the process?
<jmedina> Alysum: well it is not needed to restart
<PhotoJim> twb: that does add to the difficulty level :)
<Alysum> no, but if anything can't log it might crash
<jmedina> I would sync, add new entry to /etc/fstab then
<jmedina> invoke-rc.d sysklogd stop && mount -a && invoke-rc.d sysklogd start :D
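jmedina's steps can be sketched end-to-end. This is a dry-run sketch, not a tested procedure; the device name /dev/sdb1 and the ext3 filesystem are assumptions:

```shell
#!/bin/sh
# Dry-run sketch of moving /var/log to its own partition, following the
# steps above. Each command is printed, not executed; swap the body of
# run() for `sudo "$@"` to do it for real. /dev/sdb1 is an assumed device.
run() { echo "+ $*"; }
run rsync -a /var/log/ /mnt/newlog/                             # copy logs first
run sh -c "echo '/dev/sdb1 /var/log ext3 defaults 0 2' >> /etc/fstab"
run invoke-rc.d sysklogd stop                                   # brief logging gap
run mount -a                                                    # mounts new /var/log
run invoke-rc.d sysklogd start
```

Mounting on top of /var/log hides the old files underneath, which is why the copy happens before the mount.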
<twb> You can use "lsof | grep var/log" to find out what, if anything, is using that directory subtree.
<Alysum> yes ta
<Alysum> alright I found one non-rotated log of 1.9GB that is going to be fixed so no need to move /var/log :)
<twb> Alysum: hooray
<Alysum> lol
<twb> Alysum: now write a logrotate rule for that file
<Alysum> indeed
<twb> Out of curiosity, what created that file?
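A rule like twb suggests would live in /etc/logrotate.d/; the path and limits below are examples, not taken from the log:

```
# /etc/logrotate.d/bigapp -- rotate once the file passes 100M and keep
# four compressed generations (path and sizes are illustrative):
/var/log/bigapp.log {
    size 100M
    rotate 4
    compress
    missingok
    notifempty
}
```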
<Sam-I-Am> moo
<twb> Sam-I-Am: this IRC channel does not contain easter eggs.
<Sam-I-Am> haha
<Sam-I-Am> not yet :P
<WebsiteEbayLike> can i have some help? im making a web site and what version of Apache should i download for ubuntu server 8.10
<JanC> there is only one version of Apache in Ubuntu...
<toothy> hi guys... is ebox the web-based mgmt tool of choice for intrepid?
<WebsiteEbayLike> can you give me a link? 1st time using ubuntu server
<toothy> i just saw in the docs that it's totally broken, but i was told that Ubuntu sides with ebox
<JanC> WebsiteEbayLike: I suggest you read the server guide first
<WebsiteEbayLike> ok sorry
<JanC> http://doc.ubuntu.com/ubuntu/serverguide/C/
<fabio> hi everyone
<fabio> i'm fabio
<twb> fabio: do you have a question?
<fabio> where can i find ubuntu developers?
<JanC> everywhere
<fabio> does someone here understand the concept of gconf-editor
<WebsiteEbayLike> could i use this one for the ubuntu server http://www.archicentral.com/mirrors/apache/httpd/httpd-2.2.11.tar.gz
<WebsiteEbayLike> 8.10
<fabio> and ldap authentication?
<JanC> WebsiteEbayLike: you don't seem to understand how Ubuntu works, so please go read the server guide first
<JanC> (and no, that's not the one you need)
<WebsiteEbayLike> ok im sorry i just needed a little bit of help )-:
<JanC> WebsiteEbayLike: you're making things way more complicated than needed, you'll see that when you read the docs
<fabio> the server already comes with apache
<JanC> fabio: depends, not after a minimal server install
<JanC> but maybe he has it already, true
<fabio> i'm using 8.04 yet
<fabio> guys where are you from?
<fabio> i'm from brazil
<JanC> Belgium, so *really* time to go to bed  ;)
<fabio> nice dreams
<fabio> janc? female or male?
<PhotoJim> JanC: might as well just stay up now ;)
<JanC> or just sleep late  ;)
<WebsiteEbayLike> i got it off ubuntu.com, the server iso, its about 700mb so does it already have it in it
<JanC> fabio: it's not really customary here to ask that
<fabio> dude i m new here
<JanC> WebsiteEbayLike: it has apache on the CD yes
<WebsiteEbayLike> ok thank you
<fabio> i have to meet you guys
<fabio> belgium is a nice place to live
<JanC> WebsiteEbayLike: but you would know that when...
<fabio> who uses ldap server?
<JanC> if you like cold & rainy places yes
<twb> JanC: and fries
<fabio> i like well developed places
<fabio> and blondies
<PhotoJim> JanC: in my part of Canada, if the precipitation is falling as rain, it can't truly be cold. :)
<fabio> here in rio is too hot
<JanC> PhotoJim: I can imagine  ;)
<PhotoJim> JanC: although one of the coldest days in recent memory for me was a September day in Edinburgh :)
<fabio> who has tested win7
<fabio> ?
<PhotoJim> I'm more the type to get a 2.6.29 kernel running on a 486 than to test bleeding-edge Windows.
<fabio> good answer
<fabio> arch kernel'?
<JanC> linux kernel I suppose  :P
<PhotoJim> Yup.  Debian in this case, but only because that's what that machine has had on it for almost a decade.
<fabio> i did this test with gentoo
<PhotoJim> I migrated to Ubuntu on my GUI-equipped machines, and now I have Ubuntu on my new server.  (this 486 was my server in 2002.  experiment.)
<fabio> 4 days compiling
<incidence> anybody knows if there is a better way than suexec? to run apache2 with the file owners permissions
<PhotoJim> fabio: my 486 takes 4 days just to compile the kernel.  building the userland would probably take the better part of a month.
<PhotoJim> incidence: sorry, I don't.  hopefully someone else does.
<fabio> a fully compiled installation is a lot of fun
<fabio>  i love to do that
<JanC> PhotoJim: biggest slowdown is lack of memory, I guess?
<PhotoJim> JanC: and the ISA bus. :)
<PhotoJim> JanC: I have 32 MB on that machine, which is a ton for a 486 built in 1991, but not a lot by today's standards to say the least.
<JanC> yeah, mine had only 4 MiB IIRC
<JanC> never ran linux on that though
<PhotoJim> 4 is theoretically possible, but it would not be fun.
<PhotoJim> my stripped-down 2.6.27 kernel (the latest I actually have running) is 1.6 MiB.
<toothy> hi guys... is ebox the web-based mgmt tool of choice for intrepid?  (sorry for double post here... inet is going in and out)
<twb> toothy: it seems to be the one endorsed by Ubuntu Server.
<toothy> twb, do you know if it is functional on 8.10?
<toothy> the docs say its no good
<toothy> but the bug says a fix was released
<twb> I don't know.
<friartuck> how do i add a time stamp to stderr with tar? quick test: "tar -cvf test.tar /var/run 2>> tar.log" generates socket errors.
<twb> { tar -foo; date; } >tar.log 2>&1
<friartuck> twb eh, I only want the errors. tried { tar -foo; date; } 2>>tar.log  and I'm getting the errors, but no timestamp
<twb> friartuck: that's because date is printed to stdout
<friartuck> ah
<twb> 2>> is a bashism, btw
<twb> Oh, maybe not.
<friartuck> just want to append...
<_ruben> possibly something like (again not tested): tar -foo 1>/dev/null 2>&1 | while read line; do date; echo $line; done >> tar.log
<twb> _ruben: you have those redirects the wrong way around
<friartuck> _ruben no dice,
<friartuck> I'll switch the redirects
<twb> You also need to suppress the trailing newline from date, which is hard
<twb> Better would be just to pipe it into logger
<twb> i.e. go via syslog
<friartuck> twb yeah, I could break it out to separate file with syslog-ng.
<twb> IIRC syslog-ng isn't in the main category.
<twb> Between syslog-ng and rsyslog I'd recommend the latter, because that's the new Debian default (and therefore I expect Ubuntu to follow suit in the next LTS release.)
<friartuck> twb haven't seen rsyslog, thx. looking into that.
<_ruben> like i said .. didnt test it ;) .. i never seem to accurately remember in which order to place those redirects :p
<friartuck> _ruben np. seems like it should be doable. I'm still fiddling with it.
<_ruben> as for the new line after date .. there's several more or less nasty tricks for that :)
<_ruben> echo -n `date` being one of em :p
<kraut> moin
<uvirtbot`> New bug: #352934 in likewise-open5 (universe) "Painful" [Undecided,New] https://launchpad.net/bugs/352934
<friartuck> _ruben twb i got the time stamp prepended to tar's stderr...but now I lost my stdout. where did stdout go? http://pastebin.com/m77797a8d
<_ruben> you redirect stuff to the while loop from 2 sides, that cant be right
<friartuck> _ruben well, it works
<_ruben> your stdout gets redirected to the while loop .. but you also redirect the contents of tmperr to it .. not sure what would be the result .. one might be in the way of the other
<_ruben> but instead of fiddling with code snippets, why dont you outline what you want exactly ;)
<friartuck> _ruben it's an academic exercise. I'm making an incremental backup program with tar and saw the socket errors. I know I can ignore them...I just kinda fell down a rabbit-hole.
<Ethos> anyone know much  about  php5-mssql_5.2.6-2ubuntu4.1_i386.deb?
<_ruben> friartuck: well .. which info (stdout/stderr/etc) do you want to end up where, etc?
<friartuck> _ruben I found the stdout. I put 1>&0 before the stderr redirect. http://pastebin.com/m7c621642 . stdout to the screen, stderr to a text file with prepended time-stamp.
<friartuck> it works ^^
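friartuck's fd shuffle can be written a little more compactly. A sketch, with a demo function standing in for the real tar invocation:

```shell
#!/bin/sh
# Prepend a timestamp to each stderr line while stdout passes through
# untouched. demo stands in for "tar -cvf test.tar /var/run".
demo() { echo "file added"; echo "socket ignored" >&2; }

# fd 3 carries the original stdout past the pipe; 2>&1 sends stderr
# into the pipe, then 1>&3 routes stdout back out to the screen.
{ demo 2>&1 1>&3 | while IFS= read -r line; do
      printf '%s %s\n' "$(date '+%F %T')" "$line"
  done >> tar.log; } 3>&1
```

tar.log then holds only the timestamped stderr lines, with no trailing-newline tricks needed.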
<incorrect> I was thinking about building an ipsec vpn server that would work with the vpn wizard,   are there any docs/howto's for this?
<friartuck> incorrect here's a start: https://help.ubuntu.com/community/OpenVPN  ipcop and smoothwall would be easier.
<incorrect> friartuck, I think i will want to do a very specific config,  I am not a fan of either,  i use pfsense for my firewalls, but i've found ipsec support somewhat limited
<emma7> hi guys
<emma7> can any of you help me with setting up a gui on ubuntu server?
<emma7> come on guys
<friartuck> emma7 google "ubuntu server gui"
<Ethos> sudo apt-get install ubuntu-desktop
<emma7> thanx Ethos, i just typed that command but the output is negative. "E: couldn't find package ubuntu-desktop" please help
<Ethos> try sudo apt-get update then the command again
<emma7> let me try that
<emma7> the result is still negative, it says "W: some index files failed to download, they have been ignored, or old ones used instead."
<emma7> do i need to have the machine connected to the internet?
<friartuck> emma7 yes.
<Ethos> yes :\
<emma7> but how do i verify that my server is connected coz yes i have plugged in my LAN cable and the lights are on. can you please give me a command that will show my ip config.
<friartuck> emma7 ifconfig -a
<emma7> ok, i think i have to manually assign an ip add to this machine, how do i do that?
<friartuck> emma7 try first: sudo /etc/init.d/networking restart
<emma7> it says "No such file or directory"
<emma7> friatuck are you still there?
<friartuck> yea
<emma7> thanx, do i have any chance of making this work?
<friartuck> emma7 did you have network when you did the install?
<emma7> the cable was unplugged. why?
<emma7> do i have to redo the installation with the network cable plugged in?
<_ruben> if you lack /etc/init.d/networking, then you (at least) lack the netbase package .. you could try to install it with your install cd/dvd present (using sudo apt-get install netbase)
<emma7> let me try that ruben
<friartuck> emma7 you will probably need to edit /etc/apt/sources.list and uncomment the first line "#deb cdrom...."
<_ruben> incorrect: if you want plain ipsec (so not l2tp over ipsec like windows does for instance, and osx as well), look into openswan or strongswan (openswan being my personal favorite)
<incorrect> thanks
<emma7> it says permission denied friatuck
<friartuck> sudo vi /etc/apt/sources.list
<emma7> good i am getting somewhere now fraituck
<emma7> now what must i edit
<friartuck> emma7 the line that starts with "#deb cdrom...". uncomment that.
<friartuck> should be the first line
<emma7> got it
<friartuck> emma7 try: sudo apt-get install netbase
<emma7> how do i uncomment it?
<friartuck> ha
<friartuck> emma7 put the cursor over the # and hit x. that should remove it. then do:  :x   and hit enter.
<emma7> i am a dummy in linux, please help, i have tried using the backspace on my keyboard but it doesn't work
<emma7> great the x did it
<friartuck> to exit vi do:  :x [enter]
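Once uncommented, the CD entry should look roughly like this (the bracketed label varies by disc; this one is illustrative):

```
# /etc/apt/sources.list -- the install-CD line with the leading "#" removed:
deb cdrom:[Ubuntu-Server 8.10 _Intrepid Ibex_ - Release i386]/ intrepid main restricted
```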
<ivoks> anyone with jaunty -server kernel here?
<_ruben> ivoks: i upgraded my fileserver to jaunty beta last night
<ivoks> _ruben: you have -server kenel?
<_ruben> yeah
<emma7> thanx i am back in config mode
<ivoks> _ruben: modinfo drbd?
<friartuck> emma7 try: sudo apt-get install netbase
<_ruben> ivoks: http://paste.ubuntu.com/141956/
<ivoks> _ruben: it works?
<ivoks> hm
<ivoks> is anyone here using clustered filesystem?
<_ruben> not yet, its still somewhere down there on my todo list :/
<ivoks> _ruben: did you pick one or still looking at options?
<_ruben> ivoks: vmfs would be my choice, as it would be used by esxi nodes :)
<ivoks> o, i was asking about linux filesystems
<_ruben> i know
<_ruben> i looked into those briefly ages ago
<emma7> friatuck netbase is still the newest version
<_ruben> then you either did something wrong earlier, or messed up during the install
<_ruben> you could try: sudo apt-get install --reinstall netbase .. not sure if that'd break anything though
<emma7> friatuc are you gone?
<friartuck> emma7 looking for easy way...might be easier to just reinstall. would you lose any data?
<emma7> no
<_ruben> and further more, installing (ubuntu-)desktop on your server install, makes it a desktop really, not a server ;)
<friartuck> yes, good point. emma7, do you want a desktop? better to use the desktop version.
<emma7> but will u still be available like 15 minutes from now coz i really find you helpful
<friartuck> ha
<emma7> no i need a server not a desktop
<friartuck> emma7 then learn server and don't install a gui.
<_ruben> then why ask how to install a desktop ?
<emma7> ok i bet i have to
<friartuck> emma7 start reading here: https://help.ubuntu.com/8.10/serverguide/C/index.html
<emma7> thanks friatuck, by the way i am from Zambia, Africa.
<friartuck> emma7 nice, I'm from Texas, America. :)
<emma7> bye
<ivoks> ok... i would really like to replace gfs with ocfs
<ivoks> :)
<ivoks> and rhcs with linuxha
<_ruben> linux-ha i do use, still v1 though, v2 has quite a steep learning curve imo
<_ruben> works like a charm for our loadbalancers and firewalls .. anyways, im off for a bit
<uvirtbot`> New bug: #352446 in tomcat6 (main) "libtomcat6-java depends on libecj-java (dup-of: 347393)" [Undecided,New] https://launchpad.net/bugs/352446
<ivoks> is anyone using redhat cluster suite on ubuntu... at all?
<jussi01> Hrm, anyone know which file I need to edit to change the http upload size limit?
<ivoks> http doesn't have an upload size limit
<ivoks> you are probably thinking of php
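The PHP limit ivoks means lives in php.ini (e.g. /etc/php5/apache2/php.ini on Ubuntu); the values below are examples:

```
; php.ini -- the directives that cap HTTP uploads (values are examples):
upload_max_filesize = 32M
post_max_size = 33M    ; must be at least as large as upload_max_filesize
```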
<ivoks> that's it... i' moving to fedora.
<ivoks> sommer: ping
<ivoks> sommer: bah, never mind :)
<sommer> ivoks: ok :-)
<emretemp> hi all, I want to run my tomcat web application server as a non-root user (named tomcat). at the command prompt I can do it by typing "su - tomcat /opt/tomcat/bin/start.sh". everything is cool with this unless I mark the tomcat user with "nologin" in the /etc/passwd file. well what I want is to run tomcat as a non-root user, and I also don't want that user to be able to log in with ssh from outside. any solutions to this?
<Doonz> Hey Guys im running VMWARE-Server 2.0 on my system. I have ports 902,8222,8333 all forwarded through the router. Anyone know why i cant seem to access the web gui on the VMWARE server?
<genii> emretemp: Perhaps set the shell to something like /bin/false
<simplexio> emretemp: change /bin/bash to /bin/false in /etc/passwd
<genii> I think also chsh might work
<AnRkey> is there a way to get OOo to use M$ formats by default via preseeding or script?
<emretemp> genii, simplexio  thx all, bin/false is the way to go
<AnRkey> bin/false is a life saver :P
<genii> emretemp: When there is no viable shell, login is impossible even with password, etc
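The mechanics are easy to demonstrate, and a shell override lets the service start anyway; the su -s line below is a sketch using the paths from the question:

```shell
#!/bin/sh
# /bin/false ignores its arguments and exits 1, so "su - tomcat <cmd>"
# (which runs "$SHELL -c <cmd>") can never execute anything:
/bin/false -c 'echo should not print' && echo ran || echo blocked
# prints: blocked

# To launch tomcat despite the disabled shell, override it for this one
# invocation (illustrative, requires root; paths from the question):
#   sudo su -s /bin/sh tomcat -c '/opt/tomcat/bin/start.sh'
```

The -s override only affects that invocation, so interactive and ssh logins for the user stay blocked.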
<ivoks> AnRkey: i guess you can do that
<AnRkey> ivoks, do what?
<ivoks> AnRkey: i would start looking at /usr/lib/openoffice/basis3.0/presets/config
<AnRkey> ahh, ta
<emretemp> genii,  yup, simple logic, dont ya all love linux.
<ivoks> AnRkey: or somewhere there
<AnRkey> thanks, looking now
<beniwtv> hi all... on a NFS exported directory, I'm getting Input/Output error when executing a script. Any ideas?
<giovani> beniwtv: it's failed in some way? network, etc
<giovani> did you soft or hard mount it
<beniwtv> giovani: In fstab, I'm using: <ip>:/media/RAID/internal/machines/vm01 /media/DATA     nfs     timeo=14,intr
<beniwtv> giovani: But I'm suspecting it is something I need to configure...
<beniwtv> Maybe by default you can't execute things? All other things seem to work fine, like ls, cp and so on...
<giovani> you have execute permissions?
<giovani> you could also try adding the "soft" option to your /etc/fstab line
<giovani> a soft mount gives up after its retries and reports an error to the application
<giovani> rather than retrying forever in the background the way the default "hard" mount does
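For reference, the fstab entry from earlier with "soft" added might look like this (a sketch; note that timeo is in tenths of a second):

```
# /etc/fstab -- the earlier NFS entry with "soft" added; a soft mount
# returns an error to the application after its retries instead of
# retrying indefinitely like the default "hard" mount:
<ip>:/media/RAID/internal/machines/vm01  /media/DATA  nfs  soft,timeo=14,intr  0  0
```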
<beniwtv> giovani: it is: -rwxr-xr-x 1 root root 195 2009-03-31 19:54 /media/DATA/Virtual/copy-tgz.sh
<beniwtv> the soft option didn't change anything, though :/
<giovani> (you'd have to remount it after changing an option, of course)
<beniwtv> Anyway, it shouldn't be a network problem, as both servers are on the same switch of the same lan
<giovani> but yeah, it might be another problem
<beniwtv> giovani: yeah, I did unmount and remount
<giovani> it's bash reporting an input/output error?
<giovani> or the script you're trying to run?
<beniwtv> giovani: -bash: /media/DATA/Virtual/copy-tgz.sh: Input/output error
<giovani> and if you try and access the file locally on the nfs host it works fine?
<beniwtv> giovani: Oops... It did work fine before, but now it doesn't. Not even a "cat"! *sigh* So that's it... thanks
<giovani> heh
<giovani> possibly a corrupted fs or bad drive
<beniwtv> giovani: I have had problems with the RAID.... the test server I'm using doesn't really like my 4U RAID array, so yeah, bad day I guess :(
<beniwtv> giovani: I used lots of workarounds to get it working, but it falls apart... Time to get a new RAID card... :)
<giovani> good luck
<EAGLE914> hi I have problems setting up a samba server on my network. I followed the Ubuntu documentation, but when I try to find the server from a windows computer I cannot find it.
<giovani> EAGLE914: if you try using the IP directly? i.e. \\IP.OF.SERVER\ does that work?
<EAGLE914> This is my first time using linux so. How do I find the server ip? The server run Ubuntu 8.10
<giovani> oh boy
<giovani> on the server, run "ifconfig"
<giovani> the IP address should be printed there
<giovani> but if you don't know the IP of your server ... setting up samba is probably too complex right now
<EAGLE914> I wrote ipconfig in terminal and the it said that the command does not exist
<giovani> I said ifconfig
<giovani> not ipconfig
<EAGLE914> oh sorry. My bad
<EAGLE914> yes i found the ip adress for the server
<giovani> right, so you need to address it directly, in windows
<giovani> \\192.168.1.101
<giovani> for example, in windows explorer
<EAGLE914> ok
<EAGLE914> I can't find it
<EAGLE914> i tried to ping it from cmd and it was no response
<farmorg> hi, anyone here know about sparc? have been trying to install 8.04 server on an ultra 60 & getting the "Booting Linux" prob - ie it's the last thing displayed on ttya. been googling & found ppl talking about it but no fix. i have input-device & output-device set to ttya in openboot & have also tried booting with install video=atyfb:off mem=384 as per docs but no luck :(
<Iceman_B^Ltop> is there any way to reinitialize eth0 ?
<Iceman_B^Ltop> perhaps that will help
<giovani> Iceman_B^Ltop: sudo ifdown eth0
<giovani> Iceman_B^Ltop: sudo ifup eth0
<giovani> what's wrong though?
<Iceman_B^Ltop> same problems
<genii> That won't usually re-initialize it, just bring it down and back up
<EAGLE914> I cannot connect to the server
<Iceman_B^Ltop> check out my SSH clone here in the channel
<giovani> EAGLE914: then you've probably misconfigured things
<genii> Iceman_B^Ltop: Try instead:  sudo /etc/init.d/networking restart
<EAGLE914> I thought so
<giovani> EAGLE914: it sounds like you need to tackle some linux basics before you go configuring samba -- which isn't terribly simple
<Iceman_B^Ltop> genii: thanks, I'll try that
<EAGLE914> I have a friend that is good at programming and he is helping me
<EAGLE914> but he has never created a server before.......
<Iceman_B^Ltop> Samba...configuration
<Iceman_B^Ltop> *shudder*
<Iceman_B^Ltop> does Samba support file locking and such a la Windows XP ?
<EAGLE914> thanks for the help, but I need to go now
<giovani> genii: reading /etc/init.d/networking -- all it does is issue ifup and ifdown, with a few options ...
<giovani> is there something I'm missing that makes it more complete?
<Iceman_B^Ltop> genii: nope, that didnt help. I'm still having network drops on my server. apparently the entire connection just...."jams" cause it's not only my SSH connection on the LAN, but also its connection to irc
<giovani> Iceman_B^Ltop: you were the one with no packet loss reported on the interface though, right?
<Iceman_B^Ltop> define packetloss. if I do "ip -s link" everything seems fine
<Iceman_B^Ltop> its just that at random intervals, the eth0 stops responding or something
<Iceman_B^Ltop> a few minutes later its back up again
<giovani> ok
<Iceman_B^Ltop> but yeah, that was probably me, ive been here with the same issue for a few days
<giovani> are you able to sit physically at the machine, or use some other method of accessing it during these times?
<Iceman_B^Ltop> unfortunately not. Im getting a monitor later tonight. Im still short a keyboard
<Iceman_B^Ltop> so everything I do is via putty
<Iceman_B^Ltop> the server itself doesnt reboot btw
<giovani> ok, it could be a bad network card, could be a bad driver
<giovani> could be something non-network related, like cpu/mem/fragmentation
<giovani> what kind of hardware is it?
<Iceman_B^Ltop> Dell Dimension 8200
<genii> I had a Realtek that would drop the connection every time a download started
<Iceman_B^Ltop> P4 1.6GHz, 256MB ram, 100Mbit eth card, 160 GB drive
<giovani> Iceman_B^Ltop: you know what kind of card is in there?
<Iceman_B^Ltop> hmm, can I find out over the console?
<genii> Could also be mtu mismatch, but unlikely
<giovani> Iceman_B^Ltop: yep -- run "lspci" from the console
<giovani> and pastebin the output
<Iceman_B^Ltop> remember, 8.10 desktop had no such qualms, the only difference in hardware was the hdd
<Iceman_B^Ltop> alright
<giovani> oh yes ... we went over this
<giovani> it's highly unlikely that the server kernel is the cause here
<genii> All else being equal, likely the -server driver for eth0 is different from the -generic one
<Iceman_B^Ltop> http://pastebin.ubuntu.com/142135/
<giovani> 3com, hmm
<Iceman_B^Ltop> another stupid thing could be that I tapped the eth card while I was working on switching the hdd
<giovani> genii: well, we could have him install the generic kernel and see how that functions ...
<Iceman_B^Ltop> maybe I knocked it loose or something, though unlikely
<genii> giovani: Would narrow it down quick if it was the case. I've seen this before with restricted drivers
<giovani> Iceman_B^Ltop: how often does this happen?
<Iceman_B^Ltop> eversince I installed 8.10 server
<Iceman_B^Ltop> half a week ago
<giovani> no
<giovani> how often
<Iceman_B^Ltop> oh
<giovani> as in ... frequency
<Iceman_B^Ltop> everytime you see Iceman_B|SSH drop
<Iceman_B^Ltop> a few times / hour
<giovani> ok
<Iceman_B^Ltop> seemingly random intervals too
<giovani> I'd recommend installing the -generic kernel, just to see if that's the cause
<giovani> "sudo apt-get install linux-image-generic"
<Iceman_B^Ltop> I'll try that, if someone can walk me through it. never switched kernels before, or built one
<Iceman_B^Ltop> ...that was easy :p how long will it take?
<giovani> you don't need to build it, it's a package, it's the kernel that would be installed with ubuntu desktop
<giovani> depending on your net connection ... 10-15 min?
<Iceman_B^Ltop> oh okay
<Iceman_B^Ltop> its running
<giovani> you'll need to reboot
<giovani> to boot into the kernel
<giovani> and then when it boots back up type "uname -a" to make sure that it's running -generic rather than -server
<Iceman_B^Ltop> okay
<giovani> and then sit back and wait for the problem to happen again -- if it never happens again (wait a while before concluding that), then, the problem's been isolated
<Iceman_B^Ltop> wait, will that show up in GRUB? or rather, do I need to select that kernel in grub ?
<giovani> it should make itself the default grub kernel, it won't replace your old one
<Iceman_B^Ltop> (it's installing now)
<giovani> if you remove it later, it'll remove itself from grub
<Iceman_B^Ltop> okay. so when I issue a reboot command, I could let the machine stand unattended and it should boot into the generic kernel?
<giovani> yes
<giovani> but confirm that it has with "uname -a" (which prints which kernel the machine is running)
<Iceman_B^Ltop> alright
<Iceman_B^Ltop> "sudo shutdown -r now"?
<Iceman_B^Ltop> or reboot
<giovani> same thing
<Iceman_B^Ltop> giovani, Im checking out the menu.lst before I reboot
<giovani> Iceman_B^Ltop: k
<Iceman_B^Ltop> it says "default 0" but the generic kernel is listed third
<Iceman_B^Ltop> should I change the 0 into 2 ?
<giovani> Iceman_B^Ltop: yes
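GRUB legacy counts menu.lst entries from 0, which is why the third entry is "default 2". A sketch of the edit, demonstrated on a throwaway copy rather than the live file:

```shell
#!/bin/sh
# Demonstrate the menu.lst change on a copy; on a real system the file
# is /boot/grub/menu.lst and the edit needs sudo.
printf 'default 0\ntimeout 3\n' > menu.lst.demo
sed -i 's/^default[[:space:]]*0/default 2/' menu.lst.demo
grep '^default' menu.lst.demo    # prints: default 2
```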
<ScottK> Anyone here running amavisd-new on Jaunty that could test a new clamav package?
<cemc> ScottK: no, but I have a vm and could set it up quickly
<ScottK> cemc: That'd be great.  It should work fine with 0.95, but I want to make sure.
<Iceman_B^Ltop> giovani: I think its rebooting now
<cemc> ScottK: it's in the PPA ?
<ScottK> cemc: No.  The amavisd-new from the archive should just work.
<cemc> ah, right, ok
<Iceman_B^Ltop> ravi@Rin-chan:~$ uname -a
<Iceman_B^Ltop> Linux Rin-chan 2.6.27-11-generic #1 SMP Thu Jan 29 19:24:39 UTC 2009 i686 GNU/Linux
<Iceman_B^Ltop> guess it worked
<Iceman_B^Ltop> giovani: I suspect the problem still remains, but we'll see
<giovani> Iceman_B^Ltop: what version of ubuntu are you running now?
<Iceman_B^Ltop> Linux Rin-chan 2.6.27-11-generic #1 SMP Thu Jan 29 19:24:39 UTC 2009 i686 GNU/Linux <-- I just installed that onto 8.10 server
<Iceman_B^Ltop> like you suggested
<giovani> ok, so, 8.10
<giovani> but you were running 8.04 before
<giovani> so either this is a driver change from 8.04 to 8.10, or your hardware has become damaged in the meantime, which is what I first suspected
<Iceman_B^Ltop> nono, I wasnt running 8.04
<Iceman_B^Ltop> 8.10 desktop before the hdd switch
<giovani> oh ok
<Iceman_B^Ltop> okay, perhaps it is hardware related...
<giovani> well the kernel might've changed slightly -- possible that a bug was introduced
<giovani> but unlikely
<Iceman_B^Ltop> hmm
<Iceman_B^Ltop> I'll take my server apart and check everything. maybe I broke something. I'll try another cable first though. it seems unlikely but maybe the cat6 is the culprit. I dont know how much difference another kernel makes for the eth card
<genii> Many kernel modules differ between kernels.
<Iceman_B^Ltop> different cable doesnt change a thing
<genii> Iceman_B^Ltop: Does apt-cache policy linux-restricted-modules-server    show that it is installed?
<Iceman_B^Ltop> W: Unable to locate package linux-restricted-modules-server
<Iceman_B^Ltop> genii ^
<genii> Hm
<genii> Seems on my Hardy to be in repos -proposed/universe -updates/universe and -security/universe
<Iceman_B|SSH> k, I SSH-ed to an external point, and from there through my router, into my server
<kees> kirkland: you uploaded powerman ?
<kirkland> kees: i did
<kees> some feedback: it should have been versioned -0ubuntu1
<kirkland> kees: ah, right ...  it was still in Debian ITP at the time
<kees> kirkland: sure, but all the more reason to have it be -0ubuntu1 so we could sync -1
<kirkland> kees: yup, agreed
<Iceman_B^Ltop> genii / giovani: the problem remains. When I SSH into the server from an external point, that too gets dropped
<kirkland> kees: is that it?
 * kirkland was expecting a more thorough tongue lashing :-)
<kees> kirkland: nothing really to do about it.  :)  I just wanted to point it out in case you're faced with a similar situation in the future.  :)
<genii> Iceman_B^Ltop: What driver does:  lsmod|grep 3c               report? 3c59x or another?
<kirkland> kees: yes, thanks
<kirkland> kees: were you examining the MIR?
<kees> kirkland: am still, yes.
<Iceman_B^Ltop> genii: http://pastebin.ubuntu.com/142184/
<genii> Reading
<kees> kirkland, nijaba: bug 337226 updated (a bit more work is needed...)
<uvirtbot> Launchpad bug 337226 in powerman "MIR - include powerman" [High,Incomplete] https://launchpad.net/bugs/337226
<genii> OK, driver is right. Used to sometimes use wrong driver for TX versions (3c905 driver)
<Iceman_B^Ltop> okay
<genii> Iceman_B^Ltop: Are you ssh'ing in over a lan-lan or lan-internet-lan path?
<kirkland> kees: cool, thanks.
<genii> (some routers etc don't like martians)
<kirkland> kees: just looked over the feedback, looks good.  i'll punt to aquette for fixage
<Iceman_B^Ltop> genii: both
<Iceman_B^Ltop> both give the same strange behaviour
<Iceman_B^Ltop> I'm doing lan-lan now
<cemc> ScottK: amavisd-new seems to work ok, tested with clamd and clamscan, too
<ScottK> cemc: Great
<ivoks> cemc: thanks :D
<ivoks> there's no reason why it shouldn't though...
<cemc> yep
<ivoks> take care...
<Big_Ham> can I call phpinfo() from the shell?
<Big_Ham> without writing a PHP page and serving it up?
<genii> php cli?
<Big_Ham> well, I'm having a GD issue and I saw a pretty verbose output of php handlers with a phpinfo () page
<Big_Ham> my last task on the web side of this server ... Joomla has a plugin that handles image resizing for the purposes of thumbnails and it's not working
<Big_Ham> it says GD is the problem ... and I've already seen that I have GD installed, so I'm looking for some more verbose output
<genii> php -a       will take you into interactive, you can issue any php based command there
<Big_Ham> hmmmm, php5-cli wasn't installed
<Big_Ham> interesting
<ball> Is it difficult to set up DHCP service on an Ubuntu Server box?
<Big_Ham> where are PHP log files kept?
<giovani> ball: nope, dnsmasq has a really simple, and easy-to-configure DHCP daemon
<ball> giovani: thanks, I'll look for that.
<ball> hello McKinley
<giovani> ball: google around for some instructions on setting it up -- but it's simple
<giovani> (it also provides dns, if you want that)
<ball> I don't, DNS is a whole separate nightmare.
<giovani> that's fine ... dns is optional
<giovani> but not a nightmare
<giovani> (dnsmasq isn't for running authoritative dns ... it's designed as an easy caching server for your internal network)
<ball> Okay.
<ball> That might be useful then.
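A minimal DHCP-only dnsmasq setup is only a few lines of /etc/dnsmasq.conf; the interface, range, and lease time below are examples:

```
# /etc/dnsmasq.conf -- DHCP only (interface, range and lease time are
# examples):
interface=eth0
dhcp-range=192.168.1.50,192.168.1.150,12h
port=0    # port 0 disables the DNS component entirely
```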
<Bergcube> I am considering looking into the Citrix XenServer.  (http://www.citrix.com/English/ps2/products/feature.asp?contentID=1686939)  As far as I can tell the management tool XenCenter is m$ windows only.  Are there any good Open Source alternatives for the management tool?
<giovani> Bergcube: I think xenserver is different from normal open-source xen
<giovani> there are some control panels for xen
<McKinley> Hi ball and others, I am trying to setup a router for 2 networks on the same machine.  I have ufw and all the masquerading working. I think I am having some route problems. I haven't setup up any explicite routes.  In this interfaces file, http://pastie.org/433917 , I am able to use the network on 192.168.2.1 to reach the Internet, but the network on eth1:1 doesn't route and thus never reaches my POSTROUTING rules.  What routes 
<Bergcube> giovani~  I am sure it is somewhat different.  But I am a wuss.  The Citrix XenServer comes in one easy install-cd, whereas the help pages for xen under ubuntu left me feeling like the largest human "huh?" in history...
<giovani> Bergcube: maybe ask in #xen?
<Bergcube> giovani~  Yeah, I guess.  Old habits die hard I guess.  I run a number of Ubuntu servers directly on the iron, and usually I've gotten great help in here.  Sorry for the offtopicness.
<Iceman_B^Ltop> =_=
<Iceman_B^Ltop> graaah
<JessicaParker> can anyone assist with the following? googled it, lots of people with the issue - no solution though........This kernel requires the following features not present on the CPU: 0:6 Unable to boot - please use a kernel appropriate for your CPU.
<ball> JessicaParker: is that with a kernel that shipped with Ubuntu Server?
<JessicaParker> yes just burned the iso straight to the cd and overwrote everything
<JessicaParker> completed set up then on re-boot the problem happened
<JessicaParker> ball: any ideas ?
<ball> JessicaParker: what cpu?
<JessicaParker> not sure, it's an intel centrino bought in 2002
<ball> JessicaParker: Might be a Pentium M then, or something
<ball> What version of Ubuntu?
<JessicaParker> its the latest server LTS edition, just downloaded it today
<JessicaParker> http://releases.ubuntu.com/hardy/ubuntu-8.04.2-server-i386.iso
<giovani> that's the latest server LTS release, the latest server release is 8.10
<giovani> oh sorry, you did say LTS
<giovani> my mistake
<JessicaParker> do u think 8.10 may be better
<giovani> it's possible that the newer kernel in 8.10 might help
<giovani> it's worth a try
<JessicaParker> its not for production as yet anyway just trying some things out.
<JessicaParker> but i trust there is no easier way to get over this issue that anyone here knows of.........in which case i will try 8.10 to start with
<giovani> the issue is not one I've seen before -- and with a regular intel cpu ... it seems most likely to be a bug
<giovani> if you've asked for help elsewhere, and not gotten any response ... I'd try 8.10
<JessicaParker> thanks i will try that as a start
<McKinley> Can anyone help with basic routing? In this format: up route add [-net|-host] <host/net>/<mask> gw <host/IP> dev <Interface> How do I state that I want all non 192.168.20/24 traffic to route to address 64.244.99.99 on eth0?
<McKinley> I am confused on the net and host parts
<friartuck> McKinley the default route is what you are looking for. that is specified as 0.0.0.0 0.0.0.0.
<McKinley> post-up route add -net 192.168.2.0 netmask 255.255.255.0 gw 64.244.99.99? Is this right?
<giovani> or "route add default gw 64.244.99.99 eth0"
<giovani> McKinley: no ... you said you want "non 192.168.20" traffic to be routed
<giovani> now you're entering a route to route that traffic
<McKinley> friartuck: I have aliases on each adapter.  So, default route won't be enough, right?
<friartuck> McKinley default means....anything that is not explicitly specified. the default route usually points to the internet. you only need one default route. not sure about your question.
<McKinley> friartuck: I have a real ip on eth0 and another real on eth0:1. I need eth1 to route to eth0 and eth1:1 to route to eth0:1. In this case I can't use a default route. I need two different explicit routes, but I don't know how.
<genii> Hm. Could I use a bonding driver for dhcp-server to lan?
<friartuck> McKinley do you have two internet connections? dual-homed?
<ball> I was thinking yesterday about dual-homing
<ball> If I had one wired and one satellite connection, it would be nice to send video and voice over the wire and things like sftp and rsync up into space
<ball> ...with failover in both cases
<Deeps> source based routing needed here?
<ball> ...not sure how hard that is to do on Ubuntu though
<Deeps> from srcip1 route via gwip1, from srcip2 route via gwip2?
<McKinley> friartuck: Yes, they are 66.244.88.88/29 and 66.244.99.99/29 for argument's sake. They are up on eth0 and a ping -I and a traceroute -s works just fine. wget --bind-address to whatsmyip.us shows that I am getting out on either address just fine with expected results.
<McKinley> ball: my networks are staying logically separate, fyi.
<ball> McKinley: ok
<McKinley> ball: no failover or anything. Just 2 routers in one box with two NICs only.
<ball> I think I need a nap
<poningru> anyone alive?
<poningru> quick question regarding vmbuilder man pages
<poningru> and file:///home/poningru/Dev/ubuntu/ubuntu-doc/build/serverguide/C/jeos-and-vmbuilder.html
<poningru> err
<poningru> damn it
<poningru> sorry
<poningru> cant find the doc for jeos and vmbuilder online
<poningru> anyway
 * ball is almost alive
<poningru> question is what does --net= stand for?
<poningru> and does --addpkg= handle dependencies?
<poningru> I think \sh is the person to ask but he seems afk
<infinity> vmbuilder is soren's baby.
<poningru> obtained with vmbuilder
<poningru> yeah
<infinity> (not \sh)
<poningru> oh
<poningru> sorry
<poningru> doh
<toothy> Using vsftp, is it possible to jail a user to a directory other than his home dir ?
<friartuck> McKinley I think you need policy based routing. I know how to do this with Cisco, it looks like it can be done with iproute2, but I'm not familiar. take a look here: http://lartc.org/howto/
<Deeps> i can help with that
<friartuck> McKinley this one is more definitive, I think this is what you want: http://www.linuxhorizon.ro/iproute2.html
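For the two-uplink setup McKinley describes, the approach in those links boils down to one extra routing table per uplink plus source-based rules. A minimal sketch as /etc/network/interfaces post-up lines -- the table names, the gateway addresses (.89 and .97), and the second internal subnet are assumptions for illustration, not values from the discussion:

```
# /etc/iproute2/rt_tables -- register two extra table names (IDs assumed free):
#   100 uplink1
#   200 uplink2

# /etc/network/interfaces -- route by *source* address; substitute the real
# next-hop gateways for the assumed 66.244.88.89 / 66.244.99.97:
post-up ip route add default via 66.244.88.89 dev eth0 table uplink1
post-up ip route add default via 66.244.99.97 dev eth0 table uplink2
post-up ip rule add from 192.168.2.0/24 table uplink1
post-up ip rule add from 192.168.3.0/24 table uplink2
```

Traffic from each internal subnet then consults its own table (and so its own default gateway), while everything else falls through to the main table.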
<McKinley> friartuck: and simply doing route add -host won't do it?  Will the host 192.168.2.1 not be able to talk to the rest of 192.168.2.0?  I'm checking out the link.
<Deeps> i dont properly understand the problem though, making it tricky to try and help, heh
<friartuck> McKinley hm, I think I misunderstood your question.
<infinity> toothy: AFAIK, vsftp isn't that flexible.  You're either chrooting to home, or not at all.
<McKinley> friartuck: I'll reask it in a bit. Reading up, thanks.
<infinity> toothy: proftpd is much more flexible in that regard, but not technically "supported"... I'm not sure what other ftp daemons allow that level of customisation.
<infinity> toothy: What I tend to do is just chroot users to their home directory, and bind-mount the bits I want them to have access to.
<infinity> toothy: (ie: bind-mounting /srv/domain.com/www to /home/user/www.domain.com/)
<toothy> infinity, sorry, but what is bind mount ?
<toothy> can a symlink do the same?
<infinity> toothy: "mount -o bind /srv/domain.com/www /home/user/www.domain.com"
<infinity> toothy: And no, symlinks won't work.  vsftpd won't follow a symlink from a chrooted location outside the chroot, for obvious reasons.
<maxb> Is there any difference between "-o bind" and "--bind" ?
<infinity> toothy: (If symlinks worked, users could just use them to break out)
<infinity> maxb: Other than the fact that I'm an onld UNIX hack who refuses to admit that mount has any switches other than "-t" and "-o", probably not.
<infinity> s/onld/old/
<toothy> ah that makes sense
<poningru> infinity, lol
<toothy> infinity, should i add this mount to fstab?
<poningru> toothy, yes if you want it to be held in restart
<poningru> as in to be there after you restart the machine
<poningru> fs type is none btw
<infinity> maxb: In all honesty, I just prefer "-o loop" and "-o bind" because it reminds me what to put in the options field in fstab if I want to make it persistent.
<toothy> poningru, infinity thanks guys... ill give a shit
<Deeps> --bind is just an alias to -o bind
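Putting infinity's recipe and the fstab details together (the paths are the hypothetical ones from the discussion):

```
# one-off, as infinity showed:
sudo mount -o bind /srv/domain.com/www /home/user/www.domain.com

# persistent across reboots -- /etc/fstab entry (fs type "none", option "bind"):
/srv/domain.com/www  /home/user/www.domain.com  none  bind  0  0
```

After adding the fstab line, `sudo mount -a` applies it without a reboot.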
<toothy> *shot
<toothy> :X
<poningru> lol
<toothy> i give a shit! lol thanks again
<McKinley> friartuck: that second link is very clear. Thanks. I'll see if it works.
<Iceman_B^Ltop> yay, I got a monitor
<ball> Servers shouldn't have monitors ;-)
<Iceman_B^Ltop> I know
<Iceman_B^Ltop> they "should" not
<toothy> Servers shouldn't have iTunes :/
<poningru> toothy, heh
<poningru> mpaad
<poningru> ftw
<poningru> wow this is cool vmbuilder I mean, used to build from iso
<Iceman_B^Ltop> can I force output to the vga port when I'm connected via SSH ?
<giovani> Iceman_B^Ltop: they're different terminals
<Iceman_B^Ltop> okay
<giovani> so, no, I don't believe that's possible
<uvirtbot> New bug: #352779 in dhcp3 (main) "Bad MTU for eth0 in 9.04 amd64" [High,Incomplete] https://launchpad.net/bugs/352779
<giovani> your local terminals are tty1-6, usually, and your ssh terminals are pts/0-X
<Iceman_B^Ltop> okay
<poningru> anyone alive?
<poningru> creating stuff with vmbuilder
<poningru> http://pastebin.com/m7155f84d
<poningru> the command I used
<poningru> I cant get it to list stuff under virsh -c qemu:///system
<poningru> err nm
<centHOGGr> hi, I can't start my xserver coming from just a ubuntu server install.. aptitude install LXDE .. reboot .. no X
<toothy> infinity, question about jailing an FTP directory...  since vsftp can jail to a home directory, does it make sense to simply make the dir i want to jail him into his home dir?    something like   sudo useradd -d /www/web/whatever username  ?
<infinity> toothy: You *could*... But if he'll ever access the system in any other way (for mail, for instance, or whatever), that's bound to cause you headaches.
<toothy> got it, thank you
<centHOGGr> hi, how can you change xserver to only use the vesa driver? thx
<ball> Anyone happen to know what (if any) package contains whetstone and dhrystone benchmarks?
#ubuntu-server 2009-04-02
<sergevn> are the read error raid values something to worry about?
<sergevn> http://pastebin.ubuntu.com/142410/
<foo> Having some NFS problems. Any tips? I can't seem to mount the share. The services are running. How can I query the NFS server to make sure it's running?
<Deeps> rpcinfo
<Deeps> you may find your firewall is blocking, or your hosts.allow/hosts.deny files
<foo> rpcinfo -p 192.168.0.2 is showing me something, but I'm not sure I'm using it properly. Any way to have rpcinfo spit out the shares?
<Deeps> exportfs -v
<foo> yeah, I see that on the server... any way to query the server from the client?
<foo> I do see the shares on the server
<foo> thanks Deeps
<Deeps> rpcinfo from the client will let you know if you can reach the remote nfs server
<Deeps> exporfs will show you what shares you have defined on the server, with what ip restrictions (if any)
<Deeps> exportfs*
<foo>  /public         192.168.0.0/24(rw,wdelay,insecure,root_squash,no_subtree_check,anonuid=1000,anongid=1001)
<foo> I see that on the server
<foo> I wonder if this is a version conflict or something
<Deeps> dunno if there's anything you can run from the client to see what shares the server has
<foo> rpcinfo -p theip returns     100003    3   tcp   2049  nfs
<foo> with some other stuff...
<foo> So I think it sees nfs
<foo>  sudo mount public
<foo> mount: special device 192.168.0.2:/public does not exist
<foo> but I mount and get that
<Deeps> what's your line in your fstab?
<foo> this has been working for years, can't think of anything that has changed other than maybe versions during an upgrade
<foo> 192.168.0.2:/public /home/matt/public auto defaults 0 0
<Deeps> you've not defined a filesystem type
<sbeattie> foo: showmount -e <server> will query the server from the client as to what's exported.
<Deeps> that may be whats causing a problem
<foo> mount: wrong fs type, bad option, bad superblock on 192.168.0.2:/public, missing codepage or helper program, or other error
<foo> Deeps: nah, thought that was it
<Deeps> try from cli, mount -t nfs -o soft 192.168.0.2:/public /home/matt/public
<sbeattie> anything in dmesg?
<foo> errrrrrrr, I needed nfs-common installed. How the heck did that get uninstalled...
 * foo mumbles under breath
<foo> thanks
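To recap the client-side debugging sequence for the next reader (server address 192.168.0.2 as in the discussion; paths are foo's):

```
rpcinfo -p 192.168.0.2          # is the server's portmapper/nfsd answering?
showmount -e 192.168.0.2        # list the exports the server offers
sudo mount -t nfs -o soft 192.168.0.2:/public /home/matt/public
# a "wrong fs type, bad option ... missing codepage or helper program" error
# from mount often means the client-side NFS helpers are missing:
sudo apt-get install nfs-common
```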
<Doonz> interesting. when i copy files from a usb drive on my ubuntu server my network connection on it pretty much dies. this normal?
<twb> Doonz: it is not.
<Doonz> thats what i thought
<twb> Doonz: I would guess (GUESS, mind you) that the USB bus and the NIC are on the same PCI interrupt, so they fight.
<Doonz> oh
<twb> I once had a similar problem where the NIC and sound card were on the same interrupt, so downloading stuff caused the audio to sputter.
<Doonz> that would make sense since both these are on board
<twb> You should also run dmesg and look for anything suspicious, particularly at the bottom of the output
<Doonz> ill have to wait till the file copy is done cause i cant ssh into the box
<twb> You can also use top and iostat (iowait?) to find out if you're simply "filling up" the machine's resources.
<Doonz> i have htop
<twb> Whatever.
<Doonz> and its a 2x dual core xeons with 8gb ram and only one processor shows any activity and its only 6% and ram is only 500mb of 8gb
<twb> hand waving
 * Doonz confused
<twb> I've given you my initial analysis, you need to do further investigation to isolate the fault
<Doonz> ok i like your response better than the last one
<Doonz> thnx
<Doonz> twb you know anything about software raid?
<twb> Yes.
<Doonz> say i build a raid 5 array with 4 disks. Later i want to add a 5th to that array. is it hard to grow the array or even possible?
<twb> I haven't done that myself.  I think it is easy.
<Doonz> ok
<twb> It would probably be slow to rebalance, though
<Doonz> which is fine
<Doonz> this is just for pure storage (media server)
<twb> If you have 5 disks, you might want to consider having two parity disks instead of one (sometimes misnamed "raid 6").
<oh_noes> If I installed Ubuntu server in English, can I still create filenames in Chinese (or any other .. ie internationalization) format?
<twb> oh_noes: yes.
<twb> The filesystem should use UTF-8 by default
<Iceman_B|SSH> YOU GUYS WERE RIGHT
<oh_noes> including SSH/vt100?
<Iceman_B|SSH> excuse my caps
<Iceman_B|SSH> genii: giovani : I think I figured out what the problem was
<Iceman_B|SSH> it turned out that my Wii was configured with the same IP address as my server
<Iceman_B|SSH> >_<
<Iceman_B|SSH> Wireshark pointed it out, I saw ARP packets with the MAC belonging to Nintendo. At first I didnt think much of it, but then decided to check my Wii settings
<Iceman_B|SSH> and there it was. So the dropping connections -should- be over
<twb> oh_noes: er, not, UTF-8 is still used everywhere by default
<twb> oh_noes: you will probably just see ? or gibberish on your terminal if you use a vt100, though.
<twb> s/not/no/
<twb> Iceman_B|SSH: this is why I tend to configure the netire network with a top-down DHCP setup.
<twb> *entire (sheesh)
<twb> I just use /etc/ethers to ensure that servers get a fixed IP from the DHCP server.
<twb> That way if I want to change settings, I only have to change the appropriate DHCP server, rather than both the router and the server's static IP configs.
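The file twb is describing is just MAC-to-address pairs; dnsmasq will hand those hosts fixed leases when started with its --read-ethers option. A sketch with made-up MACs and addresses:

```
# /etc/ethers -- one "MAC  address" pair per line; dnsmasq (run with
# --read-ethers) gives each listed host a fixed DHCP lease
00:16:3e:aa:bb:cc   192.168.1.10    # server
00:24:1e:dd:ee:ff   192.168.1.20    # wii
```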
<oh_noes> twb: thanks, so all aspects except for terminal support UTF-8?
<twb> oh_noes: most terminals should support some/all of Unicode, too.
<twb> For example, xterm will display the ISO 8859 parts correctly but will struggle with CJK.
<twb> urxvt and mrxvt respectively can do a good job of CJK and bidi.
<leonel> scottK  python-clamav  is  done ??  I was going to start  to fix it for clamav 0.95
<Iceman_B|SSH> twb: I know, and that's clever. Maybe I should put my Wii in DHCP mode as well. I started putting in Manual IP's for my own devices and tracking them in a textfile. Since I dont have THAT many devices.
<twb> Iceman_B|SSH: /etc/ethers can be that text file, and still manage everything via DHCP
<Iceman_B|SSH> The reason I use static IP's is that I want a fast p2p connection on my laptop(NAT) and a stable line when Im playing on my XBOX360
<Iceman_B|SSH> ah, but I have a DD-wrt router that does the DHCP-ing
<Iceman_B|SSH> hm, now that I think about it,I think I put in static leases
<Iceman_B|SSH> which is probably what I should do instead of manually assigning an IP
<twb> If dd-wrt uses dnsmasq for DHCP, then /etc/ethers will Just Work
<twb> I do that on openwrt.
<Iceman_B|SSH> it does. but I configure it using the webinterface
<twb> Ah well, I wouldn't know about web UIs
<Iceman_B|SSH> well, I imagine that they have the same effect in the end
<twb> Except that drawing continuously updating usage graphs with ajax and svg is massively sexy.
<Iceman_B|SSH> that sir, is correct
<Iceman_B|SSH> I dont know where dd-wrt saves its network config, the dirlayout is all funky
<twb> That's one reason to use openwrt instead ;-)
<Iceman_B|SSH> when I look in Aptitude, I see 3 kernels: linux-image-2.6.27-11-generic, linux-image-2.6.27-11-server and linux-image-2.6.27-7-server.
<Iceman_B|SSH> I'm currently using generic, but I want to switch back to server
<Iceman_B|SSH> what packages can I remove after a reboot and how do I do that?
<kraut> moin
<lukehasnoname> morning
<mattt> evening
<genii> Is there some limit on number of post-up directives allowed in /etc/network/interfaces ? I had quite a few in there to correct my routing but only the first few were being executed for some reason. Then I copy/pasted them out to a script and ran that from a post-up directive which worked. No order was changed, etc.
<uvirtbot> New bug: #330883 in samba (main) "Browsing samba shared printers show nothing" [Undecided,New] https://launchpad.net/bugs/330883
<uvirtbot> New bug: #353642 in smb4k (universe) "smb4k fail to show shared printers (dup-of: 330883)" [Undecided,New] https://launchpad.net/bugs/353642
<naymyo> hi
<naymyo> why are fewer users online?
<naymyo> where to ask about ubuntu server administration and maintenance?
<Jeeves_> Yes, here. :)
<naymyo> ok i have questions about minimum hardware requirement for Ubuntu server
<Jeeves_> PII, 128MB ram
<Jeeves_> or so
<Jeeves_> but what do you want to do?
<rst-uanic> https://help.ubuntu.com/community/Installation/SystemRequirements
<naymyo> bare minimum is so low , isn't it? ok for server?
<rst-uanic> depending on what you want to do
<naymyo> ok thanks brother
<naymyo> the other thing is as i have no internet on my test lab server at home, i have to use APTonCD which i made at work
<naymyo> so for all commands - apt-get install ******** from CD - i have to put a deb cdrom entry in sources.list, right? how to put that entry in detail?
<naymyo> or .iso file i made
<zox> naymyo: maybe apt-cdrom --cdrom /media/cdrom will help
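Spelling out zox's suggestion: apt-cdrom writes the "deb cdrom:..." line into sources.list itself, and a plain .iso can be loop-mounted first (the mount point, iso filename, and package name below are placeholders):

```
sudo mount -o loop aptoncd.iso /media/cdrom   # only needed when starting from an .iso
sudo apt-cdrom -d /media/cdrom add            # scans the disc, appends "deb cdrom:..." to sources.list
sudo apt-get update
sudo apt-get install somepackage
```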
<serwou> hi
<serwou> I've reported this NIS server bug : https://bugs.launchpad.net/ubuntu/+source/nis/+bug/353698 Anyone already had the issue before? I would like to fix the problem asap :-/
<uvirtbot> Launchpad bug 353698 in nis "Ypserv segfault on Ubuntu 8.10 Intrepid" [Undecided,New]
<mrwes> I know clamav does realtime monitoring of incoming email, but can it do realtime monitoring of folders too?
<uvirtbot> New bug: #353759 in openssh (main) "ubuntu 9.04 beta: ssh-agent doesn't work" [Undecided,New] https://launchpad.net/bugs/353759
<mat1211> Hi, I was wondering if anyone knew how I could give a user who is not an admin permision to upload to a certain directory?
<ivoks> acl
<loginhelp> Good evening to all. I just installed an ltsp server and am having problems with a thin client. It boots from the nic, gets to the login screen then it waits and goes blank.
<loginhelp> i dunno what i just did right now but i was able to go into the shell and login from there. It says the password is incorrect. Is the ltsp user list different from the one entered into the users manager in ubuntu?
<orudie> is Simple Directory Listing any good ?
<mat1211> Hi, I have a question about file permissions.  I have a dirrectory, called uploads.  Lets say that the usergroup is named uploaders, how would I give users in the uploaders group permission to write to the uploads dir?
<jpds> mat1211: chmod 0770 /path/to/uploaders && chown user:uploaders /path/... ?
<mat1211> Ah.  What do you mean path to uploaders? lol
<jpds> No, uploads, sorry.
<mat1211> and then for the chown command, its the username:groupname /path to dir I want to set write perms for?/
<jpds> Yep.
<mat1211> ah, thanks.
<mat1211> When I do chown it says operation not permitted.
<mat1211> What can I do so it doesn't say that? lol
<jpds> mat1211: Put 'sudo' in front of the command so root does it.
<mat1211> I did.
<jpds> And it still doesn't let you?
<mat1211> said not permitted, maybe because its /var/www?
<mat1211> its really strange.
<jpds> mat1211: Which command are you running exactly?
<mat1211> The exact command is, with different username and dirs, "sudo chown user1:uploaders /var/www/uploads
<mat1211> I think it should work but...
<mat1211> Would it matter that the dir is on an external hd?
<jpds> I don't think so, is it mounted over NFS or SSHFS or something?
<mat1211> vfat
<mat1211> only thing I've got :P
<mat1211> Don't think that would make a difference though.
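It does make a difference: vfat stores no owner/group metadata at all, so chown/chgrp on it fail with exactly that "Operation not permitted" error, and ownership is instead fixed for the whole mount via the uid=/gid= mount options. On a POSIX filesystem, jpds's group-writable upload directory looks like this (the directory and group names below are stand-ins):

```shell
#!/bin/sh
# Sketch: group-writable upload directory. Only works on a POSIX filesystem;
# on vfat, chgrp/chown fail because FAT has no ownership metadata.
set -e
dir=$(mktemp -d)/uploads      # stand-in for /var/www/uploads
mkdir "$dir"
grp=$(id -gn)                 # stand-in for the "uploaders" group
chgrp "$grp" "$dir"
chmod 2770 "$dir"             # owner+group rwx; the setgid bit (the leading 2)
                              # makes new files inherit the directory's group
stat -c '%a' "$dir"           # prints 2770
```

The setgid bit is the useful extra over a plain `chmod 770`: anything the uploaders create inside ends up group-owned by `uploaders` automatically.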
<loginhelp> does anyone know why an ltsp client will not login to an edubuntu server?
<ball> How much disk space does Ubuntu Server need?  I'm thinking of putting it on a flash stick
<_coredump_> ~700mb iirc
<serwou> ball: /dev/sda1             1.7G  459M  1.2G  28% /
<ball> Okay, that could work
<ball> I should probably tell it not to swap though
<mib_e205oh19> hi i have a wifi problem can anyone help me please
<CrummyGummy> Hiya, I installed Ubuntu server with a password-protected LUKS LVM partition. Any ideas how I can make it boot without a prompt?
<ball> un-password-protect it?
<CrummyGummy> aaaah, the root of my question. I don't want to reinstall, just clear the password.
<CrummyGummy> so how do I do it?
 * ball shrugs
<quizme> For ubuntu, apache 2, where in the directory structure is the appropriate place to put an .conf file for a subset of websites hosted on that server?  I want to put all of my mod_rails configurations in one place.  in mods-available?
<orudie> what is the syntax of a command of the 'find' utility to search for files owned by a particular user ?
<yann2> man find ? :)
<yann2>        -user uname
<yann2>               File is owned by user uname (numeric user ID allowed).
<orudie> please help me out
<orudie> can i use it after the user had been removed along with his home directory
<Deeps> yes, using the numeric user id
<orudie> and how can i know what the user id is , i already removed this user
<ball> orudie: can you look at one of the files that he or she created?
<orudie> nope
<giovani> orudie: yeah, if you find one file of the user's do ls -n filename
<orudie> thats the thing
<giovani> and it'll list the uid:gid for the file
<Deeps> look in one of your backups, from before you deleted the user
<orudie> sudo find -user 'theuseriremoved' - does not return any output, so that means no files are located on the system
<giovani> orudie: incorrect
<orudie> errr
<giovani> it has to have an entry in /etc/passwd to equate the username to a UID ... since you've deleted the user, it's not there
<Deeps> plus thats only searching within the current relative path, you need to find /
<giovani> the filesystem doesn't know anything about usernames -- only UIDs ... so unless you can find the UID of the user, this won't help
<Deeps> (unless you're sitting in / already)
<giovani> orudie: I know this doesn't help now, but in the future "deluser --remove-all-files" will do what you want
<giovani> while also deleting the user
<orudie> so what if i recreate the user with the same user name and do deluser --remove-all-files
<giovani> that will only work if it has the same UID
<giovani> once again ... filesystems know nothing about usernames -- it's simply using the entry in /etc/passwd to map a name to a UID
<giovani> so if the UID is different, that won't help you
<orudie> so how can i find out his UID now, find user username from / returns one file in cron jobs, which is really bad :(
<ball> How can I tell whether Ubuntu Server found any temperature sensors?
<orudie> thats the user that had been hacked
<orudie> lol
<ball> That's "cracked" btw orudie
<ball> hello mathiaz
<giovani> heh
<giovani> no need for silly semantics
<orudie> so which one of you did it !? lol
<giovani> orudie: if you found a file owned by the user, then do ls -n filename
<giovani> and then it'll list the UID of the file
<orudie> giovani, can i pm you ?
<giovani> orudie: I don't see the need
<orudie> heh ok
<giovani> if you have an ubuntu support question, ask here
<orudie> ok so the output of ls -n /file/path is -rw------- 1 1004 108 228 Mar 25 20:31
<giovani> ok
<orudie> which is the UID ?
<giovani> so the UID of the user who owns that file is 1004
<orudie> ok
<orudie> so how can i remove all the files associated with this UID ?
<giovani> well, find them first
<giovani> sudo find / -uid 1004
<orudie> yup, still shows only that 1 file
<orudie> in cron
<giovani> ok
<giovani> well, that's it
<orudie> so just remove that file ?
<orudie> or i have to look at cron configs as well
<ball> Oh well, I'm off out to buy some lightbulbs and listen to the Ubuntu UK podcast
<orudie> ball, sounds exciting
 * ball grins
<ball> I'll have to take a tape measure because I can't think in inches
<orudie> can i disable ssh client connections for some of my users and if yes will they still be able to use the email accounts ?
<orudie> how can i disable ssh client logins for some of my users ?
<jpds> orudie: DenyUsers user1 user2 in /etc/ssh/sshd_config
<jpds> orudie: See: man sshd_config
<orudie> it all be in one line
<orudie> separated by spaces ?
<jpds> Yes, separated by spaces as the manpage says.
<orudie> :)
<orudie> is it wise to deny root ?
<orudie> if one of my users is in the sudoers file
<jpds> That would be: "PermitRootLogin"
<orudie> PermitRootLogin yes , what if i change that to no ?
<jpds> You won't be able to login as root.
<orudie> ok
<jpds> However, Ubuntu doesn't have root enabled by default, so they can't unless you've enabled root.
<orudie> yeah root is now enabled
<jpds> If you want to deny root logins use PermitRootLogin
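jpds's two answers, combined as an /etc/ssh/sshd_config fragment (user1/user2 are placeholder account names; restart sshd after editing):

```
# /etc/ssh/sshd_config
DenyUsers user1 user2     # space-separated list of accounts sshd will refuse
PermitRootLogin no        # direct root logins refused; an account listed in
                          # sudoers can still log in as itself and use sudo
```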
<uvirtbot> New bug: #353594 in apache2 (main) "Apache trying to start before network interfaces are up" [Undecided,Incomplete] https://launchpad.net/bugs/353594
<cemc> can i use /dev/shm to store temp files, or should I create another ramdisk ?
<uvirtbot> New bug: #348126 in openssh (main) "ssh are using ssh-userauth but ignores private key" [Undecided,New] https://launchpad.net/bugs/348126
<Doonz> Hey guys i have a headless 8.10 server. What I'm trying to do is install a desktop on it so i can vnc from my xp box to that box's desktop. can anyone point me to a guide that i can follow
<leonel> a desktop on a server ????
<olcafo> I use Putty to ssh into my headless ubuntu servers.
<olcafo> from windows
<Doonz> yes
<cemc> Doonz: just install vncserver
<Doonz> sudo apt-get install vncserver
<cemc> I have vnc4server
<cemc> and run the server with -localhost, then connect through putty/ssh with port forward, then vncviewer localhost:2 ;)
<Doonz> this server is in a dc
<Doonz> so nothing will be local
<Doonz> trying to set it up for a new ubuntu user
<Doonz> trying to make it like it was with his window both a
<cemc> ummm... don't think it's the best idea :)
<Doonz> box*
<cemc> vnc is slow :0
<Doonz> 100mbit both links
<Doonz> not worried about speed
<cemc> will be still slow trust me :)
<cemc> that's not a good ubuntu experience for a new user, through vnc ;)
<cemc> IMHO
<Doonz> and the terminal is much better
 * jmedina recomends FreeNX, it is much faster than VNC and it uses SSH for secure connections :D
<jmedina> the only problem I see is that it open new X sesion, AFAIK it can't attach to DISPLAY 0
<orudie> jmedina, hi
<jmedina> orudie: hi
<sendric> hello
<sendric> somebody can help me ?
<pmatulis> sendric: what do you desire?
<sendric> i would like to install and configure postfix to my laptop with ubuntu 8.10 at home, to be able to send email via php
<sendric> i have a dynamic ip address
<lamont> postfix does not send mail via php. it sends it via smtp.  (just sayin'....)
<sendric> ouuu
<sendric> :d
<sendric> i'm new in linux
<sendric> you can understand :p
<sendric> but still i need to configure my laptop to be able to send email via php
<lamont> I understand that there is a steep learning curve around mailers, yes.  what exactly do you mean when you say "send email via php"?  - where does php come into the equation, since it's totally separate from anything email
<olcafo> php has a sendmail plugin.
<sendric> i mean if i use this code:
<sendric> <?php
<sendric> $to = "myemailaddress@hotmail.com";
<sendric> $subject = "Test mail";
<lamont> so I rather expect that you want to have php send email, which would put it out of my knowledge base and into the php-guys
<sendric> $message = "Hello! This is a simple email message.";
<sendric> $from = "support@server.com";
<sendric> $headers = "From: $from";
<sendric> mail($to,$subject,$message,$headers);
<sendric> ?>
<lamont> other than the experience of blocked dialup addresses, and your ISP blocking the port, the default install of postfix, as an internet site or internet site with relay should just work.
<sendric> hmm
 * jmedina recomends using php-pear Mail function, it supoprts SMTP-AUTH
<pmatulis> sendric: yes, first get to send email from the command line and move on after that to the other stuff
<tinjaw> syslog question
<tinjaw> I have bandwidth messages showing up in two places: /var/log/syslog and /var/log/bandwidth
<tinjaw> I'm looking at syslog.conf
<tinjaw> I see kern.=debug   -/var/log/bandwidth
<andol> mathiaz: Really getting serious on the mysql bug triage? :)
<tinjaw> and, earlier, I see  *.*;auth.authpriv.none   -/var/log/syslog
<mathiaz> andol: me ? no... - my bot ? yes :)
<tinjaw> so, they are going to /var/log/syslog because of *.*     correct?
<mathiaz> andol: just kidding - It was time for some spring cleaning activities there
<tinjaw> they are going to /var/log/bandwidth because of kern.=debug    correct?
<andol> mathiaz: Well, kind of impressive anyhow :)
<tinjaw> so how do I exclude them from /var/log/syslog ??
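tinjaw's question never got answered in-channel. For the record, sysklogd selectors support negation with `!`, so the kernel debug messages can be kept out of the catch-all. A sketch of the two relevant /etc/syslog.conf lines (restart sysklogd after editing):

```
kern.=debug                             -/var/log/bandwidth
*.*;auth,authpriv.none;kern.!=debug     -/var/log/syslog
```

`kern.!=debug` means "except kernel messages at exactly debug priority", so those now land only in /var/log/bandwidth.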
<mathiaz> andol: now the next step is to get all my recurring questions turned into apport scripts
<mathiaz> andol: so that next time bug description will be even more accurate
<andol> mathiaz: make sense
<mathiaz> andol: most of the bugs are related to mysql failing to start/stop
 * andol makes a mental note to look into apport capabities in that regard
<mathiaz> andol: so this is the first target for apport integration and improvement
<andol> mathiaz: Talking about aport magic...
<andol> mathiaz: Lots of time you see automated responses in say France, or some even more obscure language :) How possible would it be to have apport or launchpad do some magical translation?
<andol> s/France/French/
<beawesomeinstead> does postfix in dovecot-postfix/jaunty allow sending emails through it to other servers without authentication?
<beawesomeinstead> *default configuration
<giovani> heh
<tinjaw> syslog-ng is what is installed by default on 8.10 server. correct?
<uvirtbot> New bug: #347108 in lyricue (universe) "package lyricue 1.9.4-0ubuntu2 failed to install/upgrade: dependency problems - leaving unconfigured" [Undecided,Confirmed] https://launchpad.net/bugs/347108
<bittin`> https://wiki.ubuntu.com/MobileTeam/Meeting/2009/20090402
<centHOGGr> hi, anybody here use webmin to view syslogs?
<JordiGH> I've got a server running F.
<JordiGH> No way a dist-upgrade to latest I-I willw work, right?
<JordiGH> Uhm, can I say "fucked" here?
<centHOGGr> ha
<centHOGGr> yeah if it's about server
<centHOGGr> if you do a dist-upgrade now what happens
<centHOGGr> any upgrade?
<JordiGH> Well...
<ewook> running F ? centHOGGr yes and no
<JordiGH> I haven't tried yet.
<JordiGH> FF, whatever it's called.
<centHOGGr> <jaunty alpha
<JordiGH> Fabulous Fox or whatever.
<ScottK> Beta is out now
<centHOGGr> yeah waiting till the full comes out
<centHOGGr> hi, anybody here use webmin to view syslogs?
<JordiGH> Ubuntu 7.04
<JordiGH> Wanna bring it to 9.04
<JordiGH> Or whatever seems reasonable.
<ScottK> JordiGH: You have to upgrade to Gutsy first.
<mattt> centHOGGr: no, why?
<centHOGGr> what is the uname -a
<JordiGH> So... FF -> GG -> II?
<JordiGH> Er, missed HH in there.
<centHOGGr> mattt: hi, i just want to look at the logs and I use webmin
<JordiGH> Do I have to do four dist-upgrades? :-(
<ScottK> JordiGH: Yes.
<centHOGGr> JordiGH: unless you are dealing with older hw
<centHOGGr> but I'm using alpha with a 500mhz super 7
<centHOGGr> pretty well
<JordiGH> Server has one gig of RAM...  Intel(R) Core(TM)2 Quad CPU           @ 2.40GHz
<JordiGH> Doesn't seem too old, right?
<centHOGGr> nah your fine :)
<centHOGGr> JordiGH: what is the uname -a on your server... maybe you can install the latest kernel
<JordiGH> 2.6.20-17-generic
<JordiGH> Why would I want the latest kernel?
<centHOGGr> i thought that was an upgrade
<centHOGGr> <2.6.28-11-generic
<JordiGH> The only reason I want to upgrade is so that I can install mercurial on the server.
<JordiGH> FF's repos are gone.
<centHOGGr> oh yeah
<JordiGH> Oh, are they releasing JJ soon?
<centHOGGr> this month
<centHOGGr> every 4th month and 10th month
<JordiGH> What if it isn't ready?
<mattt> centHOGGr: i'm pretty sure it's possible, but it's been a while since i've touched webmin and i always try to avoid it if possible.  :)
<centHOGGr> mattt: ok, how can you look at logs thru SSH? with nano or something better?
<jmedina> centHOGGr: tail+grep
<centHOGGr> ok
<jmedina> I like multitail for colorized logs
<jmedina> I prefer syslog-ng + php interface
<centHOGGr> so 'log' tail+grep
<centHOGGr> syslog-ng
<mattt> centHOGGr: i like less, if the file's not too large ... or tail is good to look at a file as it's being written to
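[Editor's note: a minimal sketch of the tail/grep/less workflow jmedina and mattt describe, demonstrated on a throwaway file so it runs anywhere; on the server the file would be /var/log/syslog, which on Ubuntu is readable by the 'adm' group.]

```shell
# Create a small sample log to work with (stand-in for /var/log/syslog):
printf 'Apr  3 10:00:01 host sshd[42]: Accepted publickey for bob\nApr  3 10:00:02 host cron[43]: job ran\n' > sample.log

grep sshd sample.log      # keep only lines mentioning sshd
tail -n 1 sample.log      # last line; 'tail -f' would follow the file live
less sample.log           # page through a large log without loading it all
```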
<chrisadams> hi guys, what's the keystroke to organise in order of memory being used again?
<chrisadams> when you're in top?
<mfoster> just press h and it will tell you
<Iceman_B^Ltop> how do I scroll within top ?
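[Editor's note: for reference, the common sort keys inside a running top session; these have been stable across procps versions, but pressing h shows the authoritative list for your build, including whether your version scrolls with the arrow keys.]

```shell
# Inside 'top' (interactive, so shown as a cheat sheet):
#   M   sort by resident memory (%MEM)
#   P   sort by CPU usage (the default)
#   T   sort by cumulative time
#   h   built-in help for your version
man top     # full key reference
```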
<uvirtbot> New bug: #319515 in mysql-dfsg-5.0 (main) "package mysql-server-5.0 5.0.67-0ubuntu6 failed to install/upgrade: subprocess pre-installation script returned error exit status 1" [Undecided,Invalid] https://launchpad.net/bugs/319515
<Baversjo> hi
<Baversjo> In my iptables configuration i have allowed connections to source port 22, is there any way i can access my server? I was supposed to allow connections on destination port 22 but got it wrong. Please help :)
<Baversjo> Anyone? I'm completely locked out of my server as i confused myself with source and destination port. Can I connect to my server with the "source" port 22?
<Alblasco1702> is that server at home?
<Baversjo> No :*(
<Baversjo> In france
<Baversjo> Am I f**ked? :(
<olcafo> I think you might be.
<olcafo> I make a point of not messing with IP tables if I only have SSH access to a server.
<Baversjo> okey :P
<Alblasco1702> it is to late for that point now
<Baversjo> YEE :(
<Baversjo> Damn, I had a server with 100 mbit/s dedicated internet..
<olcafo> Alblasco1702: yeah, I suppose so.
<Baversjo> for like 10 minutes
<Baversjo> But what is source port? I've read that your computer takes a random source port or something and send a connection request to the server
<Baversjo> can't I modify the source port and access my server?
<Baversjo> Can't find anything really useful on google..
<Baversjo> Damn, it must suck to be a Network admin dealing with these problems all the time
<olcafo> Baversjo: what you're describing sounds like FTP, not SSH. But I could be wrong.
<olcafo> Not really my area of expertise.
<Alblasco1702> do you have something running on destination port?
<Baversjo> Yes, I have a SSH server listening on port 22 at that server. The problem is that I'm allowing SOURCE port of 22 to the server, while it instead should be destination port.
<Alblasco1702> olcafo ftp = port 21
<Baversjo> When I applied the changes I was like nooooo and I still had a ssh connection up with the server so I searched google for a way to reset IPTABLES but after 2 minutes the connection was aborted and now I'm locked out forever :(
<olcafo> Alblasco1702: yes, but does SSH assign a random port for connections like FTP.
<olcafo> Alblasco1702: that's all I meant. Like I said above though, I'm no expert in this area.
<Baversjo> So I can try to connect 60000 times and if I'm lucky it will choose port 22 and I will be able to connect? :P
<Alblasco1702> you mean secure ftp?
<Baversjo> SSH
<Baversjo> SSH
<olcafo> sftp is SSH. I just talking about regular old ftp... I'm not the one with the problem though, so it's not really important.
<Baversjo> sftp is not SSH. Sftp is FTP over SSH :)
<olcafo> :P
<Iceman_B^Ltop> isnt sftp an alltogether different protocol than FTP ?
<Alblasco1702> Baversjo i think that you have to contact the administrator on the collocation and ask of they have something as a remote insight.
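[Editor's note: for anyone hitting the same mistake, the rule Baversjo needed matches the *destination* port, because the client's source port is an ephemeral one chosen at random per connection. A sketch, shown as a fragment since it needs root; assumes the default filter table.]

```shell
# Inbound SSH: match the server-side (destination) port 22.
iptables -A INPUT  -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
# Replies leave *from* source port 22 back to the client's ephemeral port.
iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

# When experimenting on a remote box, schedule a flush first as a safety net:
#   echo 'iptables -F' | at now + 10 minutes
```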
#ubuntu-server 2009-04-03
<Iceman_B^Ltop> what is the default sudo timer ?
<Vog-work> Hello all, I'm getting -Alert! /dev/mapper/ddf1_RAID on a 8.10 server install, booting of of a fake Intel SATA raid
<Vog-work> I can run a dmraid -ay and exit out of the shell and ubuntu will finish loading.
<Vog-work> But I still can't figure out why it can't boot normally. Is there some sort of module you need to have loaded in order to get your array recognized?
<Vog-work> looks like https://bugs.launchpad.net/ubuntu/+source/linux/+bug/314395 is experiencing the same problem.
<uvirtbot> Launchpad bug 314395 in linux "Unable to boot Ubuntu 8.10 w/ RAID 1" [Undecided,New]
<Vog-work> running  apt-get upgrade to see if any updates takes care of this...
<mat1211> Hi, how do I disable the root user? lol
<mat1211> enabled it by mistake.
<jtaji> mat1211: sudo passwd -l root
<uvirtbot> New bug: #354188 in mysql-dfsg-5.0 (main) "Add apport hook to gather relevant information" [Wishlist,Triaged] https://launchpad.net/bugs/354188
<Vog-work> updating the system does not solve the booting problem for bug 314395
<uvirtbot> Launchpad bug 314395 in linux "Unable to boot Ubuntu 8.10 w/ RAID 1" [Undecided,New] https://launchpad.net/bugs/314395
<Vog-work> could bug 220493 have anything to do with it? I'm running a raid 1 not 4 or 5
<uvirtbot> Launchpad bug 220493 in linux "[Hardy][Regression] dmraid45 target missing in latest kernel" [Medium,Fix committed] https://launchpad.net/bugs/220493
<Vog-work> Time for dinner bbl.
<uvirtbot> New bug: #352841 in openssh (main) "SCP over IPv6 address is very Slow. Takes Hours" [Undecided,New] https://launchpad.net/bugs/352841
<centaur5> Do xserver packages have to be installed on the LTSP server in order for the client to login to X?
<oh_noes> I have a .deb file I wrote and its doing something weird in the post install script.   Is it possible to turn on debugging during "apt-get install" to output all the pre/post install scripts with debug (bash -x) ?
<mattt> oh_noes: dpkg has some debugging switches i believe?
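[Editor's note: mattt is right that dpkg has debug switches; a sketch, with the package name as a placeholder. -D takes an octal mask and bit 2 covers invocation and status of maintainer scripts; `dpkg -Dh` prints the full list.]

```shell
sudo dpkg -D2 -i mypackage.deb    # show maintainer-script invocation/status

# For a full bash trace, the blunt approach is to add 'set -x' near the top
# of DEBIAN/postinst before building, or re-run the installed script by hand:
sudo sh -x /var/lib/dpkg/info/mypackage.postinst configure
```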
<Maelaian> I'm looking at switching over a group of servers on fedora and a handful of VM's in vmware esx. I was wondering how lightweight/minimalistic a basic ubuntu server install was compared to other distros, and how it faired performance wise, specifically when being virtualized, so that I could get the most out of the shared resources. I'd love to do a gentoo stage1 install or something like LFS, which I've done before for personal us
<ivoks> well, no open ports
<ivoks> meaning - no services by default
<_ruben> ubuntu runs like a charm virtualized
<ivoks> oh, virtualized
<_ruben> as for minimalistic, look into JeOS .. disk footprint isnt much smaller than standard server install though
<ivoks> yeah... it even has kernel specialized for running as a guest
<ltsphelp> can anyone help with ltsp and thin-clients not booting?
<Maelaian> ouch no 64 bit?
<ivoks> ltsphelp: well, i don't know exactly how ltsp works, but i'm guessing it relies on common tools; so, what's the problem?
<ivoks> Maelaian: ?
<friartuck> Maelaian there is a 64-bit version.
<Maelaian> Of JeOS?
<friartuck> Maelaian oh, I thought you were talking about ubuntu-server.
<ivoks> jeos isfor appliances
<ivoks> i need new keyboard :/
<Maelaian> Well a base for appliances, but it seems to be ubuntu tuned for what I am looking to do.
<ltsphelp> the thinclient fails at authentication. the auth logs say the user cannot be found
<lukehasnoname> Anyone used Grails?
<ivoks> ltsphelp: you get that from logs? could you paste on pastebin the exact error
<Maelaian> Having a 64bit VM is kind of silly I suppose why JeOS doesnt seem to offer it, but I have 2 very specific apps that only come in 64 bit versions.
<ivoks> Maelaian: use ubuntu-server with linux-virtual kernel and strip it down
<Maelaian> Ok, what would stripping it down involve?
<ivoks> removing wireless-tools :)
<Maelaian> I know ubuntu has the uhh similar to yum tool, would it be done through that utility basically?
<ivoks> yes
<ivoks> ubuntu-server is already quite bare system...
<Maelaian> And specifying the kernel would be done post install using the same utility?
<ivoks> right
<ivoks> apt-get install linux-virtual
<Maelaian> Apt, thats right.
<Maelaian> Alright, I think I can do some installs and performance testing and comparisons done then and go from there, thanks for the info.
<ivoks> np
<ltsphelp> paste-bin? i'm on another computer... goes sumfin like this
<ltsphelp> sshd[5729]: pam_lwidentify(sshd:auth) PAM config: global:krb5_ccache_type 'FILE'
<ltsphelp> sshd[5729]: pam_lwidentify(sshd:auth): failed to get GP info
<ltsphelp> sshd[5729]: pam_lwidentify(sshd:auth): getting password (0x00000000)
<ltsphelp> sshd[5729]: pam_lwidentify(sshd:auth): request failed
<ltsphelp> sshd[5729]: pam_lwidentify(sshd:auth): User 'xxx' is not known
<ivoks> pam_lwidentify is for active directory, iirc
<ivoks> what do you need it for on ltsp?
<ltsphelp> sori. i installed the alternate cd and added edubuntu thinking that the clients will just connect. dunno where the pam things come into it.
<ivoks> have you tried in #edubuntu?
<ltsphelp> thanks. i'll check
<_ruben> the -virtual kernel is pretty much the same as the -server kernel, but with less kmods available (this applies to 8.10 and newer, with 8.04 the kernels differ a bit)
<_ruben> i just use -server kernels for my vms .. has paravirtualization and all
<kwork> how can i fix packages that have failed at configuration because i manualy removed some files, otherwise the application is working
<kwork> can i somehow mark package configured manualy ?
<p_quarles> kwork: sudo dpkg-reconfigure package_name
<kwork> p_quarles, tnx
<kraut> moin
<pi_> hello all. I use ubuntu-vm-builder to generate VM on a Hardy server (64bit). The host disk is running RAID1 and a LVM partion on top of RAID. After generate (log at http://viettug.org/attachments/download/148/kvm.log) i cannot boot into VM (the guest grub doesnot work). Any idea?
<atomic__> does someone know how to route SMTP traffic through a specific interface with ip rule ?
<cemc> how can I unde a revoke-full in openvpn ?
<cemc> undo*
<dayo2> i'm following this guide: http://ubuntuforums.org/showthread.php?p=7004774#post7004774  2. Add a proxy entry to the apt system. This is for the gui Synaptics. How do I add a proxy entry on the server?
<ewook> dayo2: 2 add a proxy entry to the apt system. 3 is only for synaptic.
<dayo2> ewook: awesome. thanks!
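[Editor's note: for the archive, on a headless server the apt proxy goes in a file under /etc/apt/apt.conf.d/; the hostname and port below are placeholders.]

```
# /etc/apt/apt.conf.d/01proxy
Acquire::http::Proxy "http://proxy.example.com:3128/";
```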
<owh> Hi all. Over the weekend I intend to upgrade a 7.10 server to 8.04. I realise I'll need to use do-release-upgrade. The server is remote and on a medium speed link. I'd like to have it download packages while I sleep. How do I do that?
<yann2> you can run the upgrade in a screen and hope it wont ask too many questions ;)
<owh> Not really the answer I was looking for.
<owh> I'd love it to actually go about downloading all the stuff and actually running it when I'm watching.
<yann2> may be an argument of apt-get
<yann2> I think there was like --download-only
<yann2> let me check
<owh> Does do-release-upgrade have a download only option?
<yann2> -d, --download-only
<yann2> sudo apt-get upgrade -d
<yann2> or do-release-upgrade, didnt know about that one
<yann2> does do-release-upgrade really exist? it's not in the man
<owh> The recommended process is using do-release-upgrade, it takes care of all kinds of magic behind the scenes, fixing known transition problems etc.
<jpds> update-manager-core: /usr/bin/do-release-upgrade
<yann2> right
<yann2> well I am just upgrading to jaunty with a dist-upgrade so I hope it'll be fine :)
<yann2> so your problem is that it's actually undocumented
<yann2> jpds > if I were you I'd run a sudo apt-get dist-upgrade -d , and then the day after a do-release-upgrade
<yann2> I bet that it'll work
<jpds> That's what I was thinking.
<jpds> d-r-u doesn't have a download-only option in the source.
<owh> Now that's a canny thought. I wonder if it will work, or just delete all the packages it just downloaded.
<yann2> ubuntu has a slight habit of creating tools and forgetting about the man sometimes
<yann2> owh > it's worth a try :P
<owh> Hmm, just realised that dist-upgrade will only work if I change the sources.list
<owh> That looks like asking for trouble :(
<owh> It's amazing how conservative you become if your server is not in the same room :-)
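[Editor's note: a sketch of yann2's two-step idea. The caveat owh raises stands: apt-get dist-upgrade only sees the new release if sources.list already points at it, and whether do-release-upgrade then reuses the cached packages is the open question in the discussion.]

```shell
# Overnight: fetch everything, install nothing.
sudo apt-get -d dist-upgrade      # -d / --download-only

# Later, interactively, with /var/cache/apt/archives pre-populated:
sudo do-release-upgrade
```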
<embrik> I would really like to use ubuntu on my workstations - but I can't figure out how to setup server with "roaming profile"
<ivoks> what's that?
<embrik> roaming profiles is a windows-expression I think - in practical use - my pupils can log into any workstaion at school and get their own desktop and so on
<ivoks> each workstation should mount /home from NFS server
<ivoks> and you could use LDAP for username/password
<embrik> ivoks: I've been reading about nfs, but I find it a bit difficult. I'm in need of a howto which explains it step by step. Yes, you're right. Each ws must mount home/%user%
<ivoks> have you looked for howtos?
<embrik> ivoks: yes, I find howtos about nfs and ldap - but I can't figure out what to do. They don't explain how to mount home from nfs-server. I'm not very technical, but have no problems with following a howto :-)
<ivoks> mount -t nfs server_ip:/exported/path /home
<embrik> ivoks: ok - but where comes the username?
<ivoks> why do you need username?
<ivoks> export whole /home
<embrik> ivoks: I don't understand. How does nfs know that it is jamesk's home which are supposed to be mounted?
<_ruben> you dont
<_ruben> you mount *all* homes
<_ruben> file permissions take care of the rest
<ivoks> embrik: jamesk's home is /home/jamesk
<ivoks> embrik: if you mounted /home, then anything on top of it will be there
<ivoks> embrik: do you understand concept of home directory on unix?
<embrik> _ruben: ok - I see, but when jamesk opens home folder ( a shortcut on his desktop) he ends in his own homefolder?
<ivoks> embrik: imagine My Documents
<embrik> ivoks: I understand the concept .'-)
<ivoks> er... My Documents is wrong example
<ivoks> maybe that's why you are confused
<ivoks> what's the name of the directory were all the data of all users is stored in Windows?
<embrik> all users i think
<ivoks> nope
<ivoks> top of that is...
<ivoks> Documents and Settings?
<embrik> documents and settings?
<ivoks> right
<ivoks> So, Documents and Settings = /home
<ivoks> if you mount /home, then all user's data is there
<ivoks> Documents and Settings/Administrator = /home/jamesk
<ivoks> so, if you share Documents and Settings from server and mount it on clients as Documents and Settings
<ivoks> then all users have their data on all computers - right?
<embrik> ivoks: you and ruben may have enlightened me a bit today :-) What you have told me now may get me started
<embrik> ivoks: i follow you - ubuntu server edition has got both nfs and ldap?
<ivoks> yes
<dayo2> ivoks: do u have any good links for nfs and such?
<ivoks> dayo2: man exports :D
<embrik> what do you think about this: https://help.ubuntu.com/community/SettingUpNFSHowTo
<incorrect> is it possible to set a different text mode for the installer?
<dayo2> ivoks: embrik: thanks, that's a good start
<ivoks> embrik: good start; it might get you all the way
<ivoks> incorrect: ?
<ivoks> embrik: notice the: /home 192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)
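[Editor's note: ivoks' pieces gathered into one sketch; the IPs and subnet are examples, and the packages involved are nfs-kernel-server on the server and nfs-common on the workstations.]

```shell
# --- server ---
# /etc/exports (the line ivoks points out above):
#   /home 192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)
sudo exportfs -ra                  # re-read /etc/exports

# --- each workstation ---
sudo mount -t nfs 192.168.0.1:/home /home
# or permanently, via /etc/fstab:
#   192.168.0.1:/home  /home  nfs  defaults  0  0
```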
<incorrect> ivoks, I am pxe installing my servers,  I want a larger text console during installation so i can read the output
<embrik> ivoks: i've got an exisitng server with ldap and about 200 users. Could i export them and import them into the new server?
<ivoks> embrik: yes
<ivoks> embrik: there are two ways
<_ruben> replication comes to mind
<ivoks> embrik: one is slapcat/slapadd - you do this when slapd is offline
<_ruben> (which is something ive been meaning to look into)
<ivoks> embrik: creating ldif file and importing it - you do this when slapd in online
<ivoks> incorrect: i belive you can change it
<embrik> ivoks: great, I must save this log :-)
<ivoks> incorrect: default is 80x24, iirc
<_ruben> incorrect: you can probably just add an appropriate vga= line to the boot cmdline
<ivoks> incorrect: like vga = 773
<ivoks> that's 1024x768x256 :D
<ivoks> embrik: take you time
<ivoks> embrik: get familliar with nfs and ldap before doing anything
<_ruben> assuming framebuffer is available during install
<embrik> ivoks: I have a test network :-) I will not get into production before I've had an expert to look into it :-)
<ivoks> embrik: but once you do it, you'll feel good about your self, cause you'll know a lot more than you thought it's possible :)
<ivoks> nfs/ldap does some strange things to humans :)
<ivoks> i belive we have some helper apps in ubuntu
<embrik> ivoks: right, well thank you - have to finish dinner - bye
<ivoks> auth-client-config - pam and NSS profile switcher
<embrik> ivoks: are you talking to me? I'm on my way to the kitchen...
<ivoks> yes
<embrik> ivoks: are the gui-apps?
<ivoks> no, ubuntu server has 0 gui apps
<ivoks> it doesn't have gui at all
<embrik> unless you install desktop
<ivoks> that doesn't change a thing
<aurax> sup all, i have networking question maybe someone can help out...
<aurax> hello, i need help with weird problem that i'm experiencing in my network. i have two juniper 4200EX switches (48-poe) i have random disconnect of client from the switch and it's looks like negotiation problems. some times the connection breaks and sometime it re-negotiate at 10mbit, i have disabled stp,rstp protocols just to make sure that if there's a network loop stp won't disconnect clients. any idea?
<ivoks> app of the year: apache directory studio
<mat1211> Hi, I have a question.  When I try to give a group of users permission to write to a directory, how do I do this? I can't figure out how to get the chown command to work properly lol
<maxb> chown username:groupname (or chgrp groupname)
<BlueT_> maxb: or, chmod g+rw /the/path/to/dir/
<BlueT_> mat1211: or, chmod g+rw /the/path/to/dir/
<BlueT_> maxb: sorry, wrong person
<maxb> Indeed, both are part of the solution.
<maxb> g+s may also be advisable
<maxb> Sadly Linux provides no way to grant write permissions to a group, and prevent users from writing files writable only by themselves individually in that directory
<ivoks> ?
<embrik> maxb: chmod -R 760 /name_of_directory (owner: all permissions, group: read+write, anybody else: no permissions)
<maxb> embrik: You've omitted group traverse permission, which is almost certainly a mistake, and that still doesn't stop users in the group from creating files not writeable by the group.
<embrik> Now I understand. A user creates a new document which will be read-only for other users in the same group. Yes, that's annoying. Must run a cron job every 15 minutes to fix it
<jpds> embrik: chmod -R 7... <- won't that make all files exectuable?
<embrik> jpds: yes 7= r+w+x
<jpds> Exactly. :) Probably not something one wants to do.
<embrik> jpds: maybe not, but I always give the owner rwx, don't know why.
<maxb> You should almost always have r and x set as a pair on directories
<jpds> Something like: find . -type d | xargs chmod 0770 - would be better.
<jpds> s/./"/path/to/dir/"/
<mat1211> Okay, sorry I got disconnected for a sec.  What I want to do is give a group of users write permissions for only one directory, and that dir is /var/www/uploads.  I try and do this but when I use the sudo chown command it says operation not permitted.  Is there another way?
<ivoks> what's the name of the group?
<mat1211> the name of the group is hmm, lets say uploaders
<stickystyle> mat1211: If your user account is not the owner, and your not in the group 'uploaders', then you will not be permitted to make that change.  Use sudo.
<mat1211> I do use sudo.
<ivoks> sudo chgrp /var/www/uploads
<ivoks> bah
<ivoks> sudo chgrp uploaders /var/www/uploads
<ivoks> sudo chmod g+rwx /var/www/uploads
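[Editor's note: the same recipe demonstrated in a throwaway directory so it runs without sudo; for /var/www/uploads you would run ivoks' chgrp/chmod lines as root. The leading 2 in the mode is maxb's g+s suggestion: with setgid on the directory, new files inherit the directory's group.]

```shell
dir=$(mktemp -d)            # stand-in for /var/www/uploads
chmod 2770 "$dir"           # rwx for owner and group, nothing for others, setgid
touch "$dir/upload.txt"     # files created here inherit the directory's group
stat -c '%a' "$dir"         # prints: 2770
```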
<ivoks> looks like IBM really owns Sun
<ivoks> $7 billion
<stickystyle> ivoks: what site has coverage?
<ivoks> phone...
<mat1211> thx
<ivoks> http://www.nytimes.com/2009/04/03/technology/business-computing/03blue.html?_r=2&ref=technology
<stickystyle> Ah, so it's not 100% final just yet.
<ivoks> it's not, but this would mean that big blue is back
<ivoks> with a bang :)
<stickystyle> I'm not exactly sure Sun provides much in the way of 'bang' these days.
<stickystyle> If takes place though I would like to see the PR mess that is becoming MySQL get cleaned up.
<ivoks> stickystyle: it's not the Sun that will bang, but the whole profile of IBM
<ivoks> almost full control of UNIX
<mat1211> When I do the chgrp thing it still says operation not permitted.  would it matter if I was using an external hd?
<ivoks> mat1211: what filesystem is that?
<PhotoJim> mat1211: if you're using a Windows filesystem, absolutely yes.
<stickystyle> mat1211: send a pastebin of $mount
<ivoks> probably FAT
<mat1211> I think I may be using a windows fs, its vfat
<PhotoJim> mat1211: you'll have to reformat it with EXT3 (or another Linux-specific filesystem).
<mat1211> arrgghh lol
<ivoks> but all his pr0... data is there!
<ivoks> :)
<mat1211> how do I reformat it with a linux fs? and if I do that will windows recognize it?
<ivoks> windows is ego-centric
<mat1211> ?
<PhotoJim> mat1211: Why do you need Windows to recognize it?  No, Windows won't recognize it.
<ivoks> it knows only about its own filesystems
<mat1211> I have a windows computer, I am getting a apple comp soon but for now I may need windows comp to work with harddrive.
<mat1211> is there a driver I can install onto my ubuntu server that will allow me to do these things?
<ivoks> FAT doesn't support users
<PhotoJim> mat1211: if you're using it for web hosting on your Linux machine, you won't be able to use it on your other machines anyway.  I think you'd be better off to get a different hard disk for this other use.  They are getting cheap enough.
<ivoks> so, you can't set up users on FAT
<ivoks> this has nothing to do with OS
<mat1211> hmm, what is the command to reformat the disk with the right filesystem?
<mat1211> sudo umount /mnt
<mat1211> woops
<mat1211> wrong window lol
<PhotoJim> mat1211: mkfs.ext3 (assuming you want to use EXT3, it's a good, commonly used filesystem).  but you need to know how the drive is partitioned first.  and if you do this, you'll erase everything on it.  do some googling before you begin.
<mat1211> its only one big disk
<mat1211> so...
<ivoks> mat1211: there's a ext3 driver for windows
<ivoks> and i think OSX supports ext3 anyway
<PhotoJim> what is Linux calling the drive?  it should be sdx.
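[Editor's note: a sketch of PhotoJim's advice; /dev/sdb1 is a placeholder and this ERASES the partition, so double-check the device name first.]

```shell
sudo fdisk -l                 # identify the external drive (e.g. /dev/sdb)
sudo umount /dev/sdb1         # must be unmounted before formatting
sudo mkfs.ext3 /dev/sdb1      # destructive: creates a fresh ext3 filesystem
sudo mount /dev/sdb1 /mnt
```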
<PhotoJim> ivoks: I'm pretty sure OS X supports it.  but I think his solution is NFS, not Windows-compatible file systems.
<ivoks> nfs?
<PhotoJim> Network File System.
<PhotoJim> i.e. networking.
<PhotoJim> If you need to be able to write to this disk from other systems, enable NFS on the Linux machine, and mount the disk as a remote file system on the client machine.  That works on Linux or OS X.
<PhotoJim> and if you enable Samba, you can mount it on Windows too.
<ivoks> oh, right
<PhotoJim> I export /var/www as an NFS file mount.  and I enable it in Samba.  I mount it to my Windows machines as W:.
<PhotoJim> that way I can copy web content right to it from any machine on my LAN.
<incorrect> in my preseed file I have a disk recipe,  that is cool but i want a second one for the other hard drive, is this possible?
<mat1211> what is nfs? :P like networking?
<incorrect> mat1211, some people say its like magic
<mat1211> ......
<incorrect> some people say if you close your eyes for just long enough and wish really really hard miracles happen
<jpds> kirkland: ping.
<jpds> kirkland: ecryptfs is freaking out on me: http://pastebin.com/f144931b7
<stickystyle> PhotoJim: Windows servers can mount NFS also.
<mat1211> whats better about nfs than say ext3 or whatever the other is?
<incorrect> mat1211, you don't actually know what nfs ?
<incorrect> mat1211, or are you actually just trolling?
<mat1211> no, I actually don't know nfs, I'm quite new at this stuff.
<incorrect> ok nfs is a protocol that allows you to export your local file system
<incorrect> your local file system could be, ext2,3,4,jfs,xfs etc
<mat1211> I, see...
<mat1211> and could I do this nfs thing without reformatting my harddrive?
<mat1211> lol
<incorrect> yes
<incorrect> are you using ubuntu?
<mat1211> yes im using ubuntu.
<incorrect> why do you think you want nfs?
<mat1211> How do I set my external hd to use nfs?
<mat1211> So I can get my external harddrive to work with users, or would I still need ext3 for that
<incorrect> http://ubuntuforums.org/showthread.php?t=249889
<incorrect> what file system is your external hard drive?
<mat1211> fat
<incorrect> you probably want samba then
<incorrect> can't say i've ever tried to nfs export fat
<stickystyle> mat1211: The NFS solution was proposed as  a way for you to be able to share the data on that FAT drive between the three different OS's you mentioned.  So you would need the drive formated with a filesystem that supports users and POSIX permissions, then you would be creating a network share by way of NFS or samba that your windows or mac could mount over the network.
<mat1211> ah, I see.
<mat1211> Thanks lol
<incorrect> how can i seed partman to partition 2 drives
<PhotoJim> stickystyle: Windows servers can?  didn't know that.  how about Windows clients?
<mathiaz> kirkland: hey - I've got some feedback on kvm 84 on hardy
<mathiaz> kirkland: I've been running it for a few weeks now
<mathiaz> kirkland: it's stable for my usage pattern
<PhotoJim> mat1211: yeah, NFS has nothing to do with the file system on your disk.  what it lets you do is read and write data to and from that disk, without you having to disconnect it from the Ubuntu server.  any machine on your network could write or read data to or from that disk.
<mathiaz> kirkland: however I've noticed some performance changes
<PhotoJim> mat1211: you wouldn't need to disconnect it to put stuff on it.  just put data on it over the network.
<mathiaz> kirkland: especially on the host load
<stickystyle> PhotoJim: Win2k, WinXP (didn't relize it could also) can both use 'services for unix' from MS. 2k3 has it built in.
<PhotoJim> stickystyle: I didn't realize that was even an option.  good to know.
<mathiaz> kirkland: if I do a dist-upgrade (for example) in a guest the host load goes way up (8 to 10)
<PhotoJim> stickystyle: I tend to just stick external EXT3/whatever drives on my server and access them over the network, but it's good to have options.
<mathiaz> kirkland: and the guest can become unresponsive for a couple of seconds
<stickystyle> PhotoJim: Options are what the whole linux game is all about :)
<PhotoJim> stickystyle: true indeed!
<mathiaz> kirkland: unfortunately I don't have any metrics to backup this claim - it's just my perception of using guests.
<PhotoJim> stickystyle: although there are enough options that some of the options are unnecessary much of the time, so one has to learn about them serendipitously :)
<mathiaz> kirkland: but something has definitely changed performance-wise
<stickystyle> PhotoJim: also true indeed.
<mathiaz> kirkland: with two or three guests running at the same time, the load on the host can go up to 20/30 sometimes
<uvirtbot> New bug: #354568 in likewise-open5 (universe) "Likewise Open5 does not unregister pam-auth-update profile when removed" [Medium,Triaged] https://launchpad.net/bugs/354568
<mathiaz> kirkland: If I install packages in all the guests at the same time
<mathiaz> kirkland: what do you think about that?
<mathiaz> ivoks: hi - did you have some time to test the evolution-mapi plugin?
<ivoks> mathiaz: nope, the exchange environment is broken :(
<mathiaz> ivoks: you mean that you cannot test it or that the plugin is broken?
<ivoks> other reported that it works, so i belive it is working
<ivoks> i cannot test
<mathiaz> ivoks: ok - I was thinking about writing a call for testing
<mathiaz> ivoks: to get more coverage on the plugin
<mathiaz> ivoks: on the ubuntuserver blog
<ivoks> sure... i still don't see it as a server topic, but well... :)
<ivoks> it's an enterprise topic :D
<mathiaz> ivoks: right - I'd say that ubuntu server users are more likely to have access to an exchange environment
<uvirtbot> New bug: #354578 in likewise-open5 (universe) "Joining/leaving the domain leaves a modified SSH config" [Low,Confirmed] https://launchpad.net/bugs/354578
<genii> Interesting bug
<uvirtbot> New bug: #354580 in likewise-open5 (universe) "Joining/leaving the domain leaves backup files everywhere, even after purge" [Low,Confirmed] https://launchpad.net/bugs/354580
<ttx> genii: nothing like etckeeper to reveal naughty packages.
<Vog-work> Hey there has anyone figured out a fix for https://bugs.launchpad.net/ubuntu/+source/linux/+bug/314395
<uvirtbot> Launchpad bug 314395 in linux "Unable to boot Ubuntu 8.10 w/ RAID 1" [Undecided,New]
<uvirtbot> New bug: #354585 in mysql-dfsg-5.1 (universe) "package mysql-server-5.1 5.1.31-1ubuntu2 failed to install/upgrade: sub-processo post-installation script retornou estado de sa?da de erro 1" [Undecided,New] https://launchpad.net/bugs/354585
<mathiaz> jdstrand_: regarding qrt and README.multi-purpose vm - is there a reason to use bind+dhcpd rather than dnsmasq?
<jdstrand_> mathiaz: mostly because bind and dhcpd are the ISC reference implementations and in wider use
<mathiaz> jdstrand: I'm looking at automating the process of creating a multipurpose vm
<mathiaz> jdstrand: in order to make easier to setup a test environment
 * sbeattie votes for dnsmasq 
<mathiaz> jdstrand: and it seems that using dnsmasq as the dns/dhcp server in such an environment is easier
<jdstrand> mathiaz: totally agree with ease of use
<mathiaz> jdstrand: OTOH dnsmasq is in universe, while bind+dhcpd are in main
<ScottK> JDStone: Did you get my ping on clamav updates?
<ScottK> Err sorry JDStone.
<ScottK> jdstrand: ^^^
<jdstrand> mathiaz: I wonder if you will have all the functionality required when using dnsmasq though. eg dnssec, tsig, dynamic updates, ...
<mathiaz> jdstrand: right - I'm looking at the dnsmasq man page.
<jdstrand> mathiaz: we (I) started that document so that I could test security updates and functionality against a fully loaded vm. that may be a different use case from what you have
<mathiaz> jdstrand: dynamic updates are automatic since dnsmasq does both dhcp and dns
<jdstrand> ScottK: no I didn't
<ScottK> [19:40:59] <ScottK> jdstrand: Would you please have a look at Bug #354190 - it's both security fixes and apparmor profile fixes.  I think it's ready to go.
<mathiaz> jdstrand: right - IIUC the multipurpose vm is a system that runs in your testing environment and provides standard services
<uvirtbot> Launchpad bug 354190 in clamav "Security fixes from clamav 0.95 need backport" [Medium,In progress] https://launchpad.net/bugs/354190
<mathiaz> jdstrand: it's not supposed to be the system to be tested
<jdstrand> mathiaz: is dnsmasq able to do all the dhcpd goodies? like ntp-server, etc? do you care?
<ScottK> That was in #ubuntu-hardened last night.
<mathiaz> jdstrand: ntp-server -> handing out the ntp-server option?
<jdstrand> ScottK: ack. thanks
<jdstrand> mathiaz: yes, and others like tftp, etc
<ScottK> jdstrand: No problem.
<mathiaz> jdstrand: yes.
<mathiaz> jdstrand: everything related to Dynamic updates is not needed for dnsmasq
<mathiaz> jdstrand: it's included OOTB
<jdstrand> mathiaz: and I suppose it'll do all the SRV records that can be used with kerberos (this isn't in that document yet, but planned)
<mathiaz> jdstrand: now IIUC dnssec is not supported by dnsmasq
<jdstrand> mathiaz: honestly, if it greatly speeds development to use dnsmasq, I'm not sure dnssec is enough of a reason not to use it
<mathiaz> jdstrand: SRV and TXT records are supported
<jdstrand> mathiaz: if you do use dnsmasq, can I request that you update README.multipurpose-vm to include it
<jdstrand> ?
<jdstrand> I'd like to have more than your script for documentation ;)
<mathiaz> jdstrand: sure - I'll give it a shot
<jdstrand> mathiaz: cool, thanks
<jdstrand> nxvl: hey, have you been coordinating with ScottK on clamav? specifically bug #354190?
<uvirtbot> Launchpad bug 354190 in clamav "Security fixes from clamav 0.95 need backport" [Medium,In progress] https://launchpad.net/bugs/354190
<oruwork> hi, i need help with sshd key
<ScottK> jdstrand: We've been talking about clamav stuff, but I don't recall if we discussed that one.
<jdstrand> nxvl, ScottK: I'll get intrepid going-- just thinking about hardy and earlier
<ScottK> jdstrand: In the bug I make recommendations about how to deal with the earlier releases.
<ScottK> nxvl is working on libclamav rdepends for Jaunty right now.
 * jdstrand nods
<nxvl> yup
<nxvl> once we are finished with jaunty i was going to start with the SR stuff
<jdstrand> I just didn't see nxvl referenced in the bug, so wanted to know what was happening there
<jdstrand> cool. thanks nxvl!
<jdstrand> and ScottK! :)
<ScottK> He's in the ubuntu-clamav team so he gets all the bugmail.
<jdstrand> ok cool
<ScottK> Actually, maybe he doesn't
<nxvl> actually i don't
<ScottK> I think that just goes to me now that I consider it.
<nxvl> the team is not subscribed
 * ScottK needs to look into that.
<oruwork> should i change the ssh listen port from 22 to 2222?
<oruwork> or can i change it to any other port ?
<Deeps> you can change to any port you want
<Deeps> moving away from port 22 reduces the risk from brute force attacks, but increases inconvenience
<oruwork> will the hacker be able to tell which port sshd is listening on ?
<stickystyle> oruwork: Yes, anyone can tell what port ssh is open on by scanning all available ports on your box, looking for the one that sshd answers on.  However most bots that are scanning these days go for the low hanging fruit and just focus on seeing if ssh is open on port 22 (and port 2222 more recently)
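Moving sshd to another port is a one-line server-side edit (2222 here only because it is the number from the discussion):

```
# /etc/ssh/sshd_config
Port 2222
```

After restarting sshd (`sudo /etc/init.d/ssh restart` on releases of this era), clients connect with `ssh -p 2222 user@host`, or put a `Port 2222` line in `~/.ssh/config` to make it the default.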
<oruwork> is there a way to jail users in their home directoires ?
<stickystyle> oruwork: for ftp/sftp usage or shell?
<oruwork> for shell
<oruwork> and for any
<oruwork> but shell primarly
<stickystyle> Sure, you probably *could* do that, but it would be a major pain to administer. Let's step back and ask *why* you want to do this.
<oruwork> my system had been compromised
<oruwork> one of the users had a really weak password
<Deeps> ubuntu forums has a relatively straightfoward guide on how to do it if you want to jail users into a shared jail
<Deeps> if you want each user in their own jail, it's basically the same as described in the forums, but creating a new jail for each user
<oruwork> do you have a url Deeps ?
<Deeps> better served would be enforcing more secure passwords though, i think you can do that with a pam module
<genii> Deeps: In that case /home is their root?
<genii> (group jail)
<Deeps> genii: /home/jail/home/$user
<Deeps> you can have unjailed users too
<Deeps> oruwork: nope, google ubuntu user jail should give you relevant hits though
<Deeps> genii: so the jail root would be /home/jail
<oruwork> Deeps, is this what you are talking about? http://ubuntuforums.org/showthread.php?t=248724
<genii> Deeps: Interesting
<Deeps> oruwork: that looks relevant too, yep
<Deeps> oruwork: although it's a bit old  (sept 2006?)
<oruwork> yeah
<genii> I wonder how that would work with hashed usernames
<Deeps> searching the forums directly may be better than googling, and will give results in date order too
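Deeps' suggestion of enforcing stronger passwords via a PAM module can be sketched with pam_cracklib (install the libpam-cracklib package first); the numbers are arbitrary examples, tune them to your policy:

```
# /etc/pam.d/common-password -- replace the default pam_unix line with:
password required pam_cracklib.so retry=3 minlen=10 difok=3
password required pam_unix.so use_authtok md5
```

`minlen` sets the minimum length, `difok` how many characters must differ from the old password, and `use_authtok` makes pam_unix reuse the password that pam_cracklib already checked.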
<jdstrand> ScottK: hmmm. I see that the intrepid debdiff for -security has apparmor profile fixes. Those shouldn't be part of the security update. I think I should strip that out, upload to -security and then add them back in for a separate upload to -proposed after the security update goes out
<jdstrand> ScottK: while the changes are easy to see as correct, it is policy to not correct non-security bugs in -security
<ScottK> jdstrand: Your call.  For clamav I'd call people turning off apparmor due to profile problems a security issue, but up to you.
<jdstrand> ScottK: heh. ok, they could also try the -proposed update or modify their profile...
<jdstrand> ;)
<jdstrand> ScottK: I'd be happy to do the upload to -proposed
<ScottK> My major fear is we get no takers to verify and then we have two versions to maintain for a long time.
<ScottK> I do recommend staring at it a bit and seeing if you can convince yourself it's a security issue.
<jdstrand> ScottK: I see your point and am tempted by bug #312695, but ultimately I feel this is a regular bug as it does not cross privilege boundaries or cause data loss. I'm going to split it out
<uvirtbot> Launchpad bug 312695 in clamav "freshclam blocked by apparmor" [Medium,Fix released] https://launchpad.net/bugs/312695
<ScottK> jdstrand: OK.  Your call.
<zul> mathiaz: ping
<goofey> !seen Keyser_Soze
<ubottu> I have no seen command
<oruwork> i downloaded jailkit-2.6.tar.bz2 , how can i install it ?
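Building a source tarball like that generally follows the usual configure/make dance. A sketch only; check the README/INSTALL files inside the tarball for the project's actual instructions:

```
tar xjf jailkit-2.6.tar.bz2    # -j because it's a .tar.bz2
cd jailkit-2.6
./configure
make
sudo make install              # or use checkinstall to build a removable .deb instead
```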
<mathiaz> zul: hi
<zul> mathiaz: debian unstable has php 5.2.9 isnt that something we might want for jaunty even though its a bit late
<mathiaz> zul: hm - jaunty is at 5.2.6 now
<zul> with a lot of backported patches
<mathiaz> zul: right. It would be a two minor release bump ( .7 and .9)
<mathiaz> zul: mostly bug fixes
<mathiaz> zul: is there an ABI bump?
<zul> im not sure i only was aware about it this morning
<zul> i think it might break packages in universe though
<mathiaz> bdmurray: sbeattie: is there a standard reply for marking a bug invalid because the reporter is unable to provide the requested information?
<bdmurray> mathiaz: unable or has taken too long w/o responding?
<mathiaz> bdmurray: unable - bug 322647
<uvirtbot> Launchpad bug 322647 in mysql-dfsg-5.0 "mysql-server fails to instal with apparmour errors" [Undecided,Incomplete] https://launchpad.net/bugs/322647
<mathiaz> bdmurray: he wiped his system and doesn't have the log anymore
<bdmurray> mathiaz: no standard reply for that
<mathiaz> bdmurray: ok. I'll make something up
<ivoks> mathiaz: still interested in ldap stuff? :)
<mathiaz> ivoks: it depends - what's your offer?
<ivoks> mathiaz: management tool that beats everything seen before
<mathiaz> ivoks: I'm your man - shoot!
<ivoks> mathiaz: http://directory.apache.org/studio/
<ivoks> it's just too beautiful to be true
 * jmedina loves apache directory studio
<ivoks> and they have screenshots made in ubuntu!
<ivoks> how cool is that?! :D
 * jmedina also has ads screenshots
<jmedina> in ubuntu of course
<jmedina> it is really cool, you can do batch operations
<ivoks> i've been using it for couple of days... i still think i'm dreaming
<jmedina> jojojo
<jmedina> it has everything, it is really functional and it has good GUI
<ivoks> schema editor
<jmedina> and it is nothing slow
<jmedina> yeap
<jmedina> log operations
<ivoks> yeah... it's snapier than some browsers ;)
<jmedina> you can see ldif like operations
<ivoks> jmedina: an ultimate tool
<jmedina> the only thing I didn't like is the first time you want to use 3 panels
<jmedina> I really dont know how I did it :S
<mathiaz> ivoks: how schema and DIT independent is it?
<ivoks> mathiaz: how can it be dependet at all?
<ivoks> mathiaz: it pulls DIT and schema from server
<mathiaz> ivoks: does it require LDAP knowledge or can it be used by ordinary users (ie can a secretrary use it to update the phonebook)?
<ivoks> mathiaz: well, it's for admins, but after 2 hours of introduction, a secretary could use it too
<ivoks> it makes openldap much easier
<mathiaz> ivoks: ok
<ivoks> for secretary it has export to excel and import from it
<jmedina> of course with good acls
<jmedina> :D
<ivoks> :)
<jmedina> yeap import/export rules
<mathiaz> ivoks: well - I'm not interested in having an excel import
<ivoks> :)
<ivoks> csv
<ivoks> ldif
<mathiaz> ivoks: I'd rather have one tool to be used by the end user
<mathiaz> ivoks: so that the secretary doesn't need to use excel to update the phonebook
<ivoks> this one could be used by the end user, if acls are set up right and operator gets an itroduction
<ivoks> introduction
<ivoks> click on name on the left side, double click on the phone, enter it and press enter
<mathiaz> ivoks: are ACI taken into account with displaying attributes?
<ivoks> how hard can that be? :)
<ivoks> mathiaz: i haven't tried that yet
<mathiaz> ivoks: ie - if the logged in user doesn't have access to a specific attribute, it should be displayed at all
<ivoks> but it bolds musthave attributes
<mathiaz> ivoks: ie - if the logged in user doesn't have access to a specific attribute, it should *not* be displayed at all
<ivoks> i know what you ask
<ivoks> i haven't tried that yet
<ivoks> i might now :)
<mathiaz> ivoks: if that's supported then it can be used by any end users
<jmedina> I used Mandriva Directory server when was called Linbox directory server
<ivoks> mathiaz: err...
<mathiaz> ivoks: So that the UI would actually be configured by ACI and the LDAP administrator
<ivoks> mathiaz: if the openldap server doesn't return attributes which are hidden, how can ads show them?
<jmedina> I like because you can create your own plugins, Im trying to create a plugin to manage amavisd-new attributes via web interface
<ivoks> jmedina: other have done it already :)
<jmedina> ivoks: a plugin for MDS?
<ScottK> jdstrand: Clamav 0.95.1 (bug fix only) will be out on Tuesday.  I'm travelling next week, so I'd appreciate it if you could hang out on #debian-clamav and coordinate geting the tarball from them, uploading, etc.
<ivoks> jmedina: no, a web interface
<mathiaz> ivoks: if ads supports building a dynamic UI component based on the returned attributes that would fit the use case
<ScottK> I may have internet access, but not for certain.
<ivoks> mathiaz: dynamic ui?
<mathiaz> ivoks: yes - according to the logged in user, the UI will have different attributes showed
<ivoks> mathiaz: as i said, it shows what ldap passes
<mathiaz> ivoks: great - I think should just take a look at it ;)
<ivoks> mathiaz: so, if ldap doesn't provide userPassword for some user, then that attribute won't be in the ui
<ivoks> mathiaz: go with the full suite, not a plugin for eclipse
<jdstrand> ScottK: you are talking about for Jaunty?
<mathiaz> ivoks: full suite?
<ivoks> mathiaz: http://directory.apache.org/studio/downloads.html
<ivoks> mathiaz: there's plugin and application
<ivoks> mathiaz: go with the application
<ivoks> plugin seems to be broken for jaunty's eclipse
<mathiaz> ivoks: well I'm downloading 73M - that must be the full suite
<ivoks> yes
<Maelaian> crucial sent me dual rank ram, but never gave me an option to choose between single/dual rank. How does one normally distinguish between the two?
<ivoks> jmedina: have you tried editing ACL's in ADS?
<jmedina> ivoks: nop, I rarely edit acls
<ivoks> ok
<jmedina> Im still getting used to cn=config
<ivoks> yeah, me too
<jmedina> most because I only use hardy for production servers :S
<jmedina> so most of the time I use slapd.conf, but cn=config is a big thing, afaik it was requested by hp when they wanted to migrate their directory infrastructure to openldap
<jmedina> at that time it was not possible, so hp and symas sat to work together and created all the required overlays, including cn=config, constrains and others
<jmedina> then in 2008 they migrated everything to openldap
<jmedina> ivoks: have you used ebox for directory?
<ivoks> nope
<jmedina> what I like about ebox its samba integration and granular acls to shares
<genii> Hm. If I have DSL routers to a bond0 (which gets a LAN ip) how would I go about port forwarding to some box on the lan?
<Deeps> come again?
<genii> Deeps: Currently I have lan-eth0-nat'd to bond0-dsl routers                  But if I want forward port 80 for instance inwards to a web server on lan, it becomes sticky
<ivoks> mathiaz: fwiw, i can cofirm that acls do work
<ivoks> mathiaz: attributes hidden from the user don't show up in GUI
<mathiaz> ivoks: awesome
<mathiaz> ivoks: that means that any end user could use it without having to figure out what all the attributes are
<ivoks> mathiaz: correct
<ivoks> mathiaz: secretary could just have first and last name and the phone number
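The attribute hiding ivoks just confirmed is done server-side by the OpenLDAP ACLs, not by the client. A hardy-era slapd.conf sketch; every DN below is invented for illustration:

```
# slapd.conf -- under cn=config the same values go into olcAccess attributes
access to attrs=userPassword
        by self write
        by anonymous auth
        by * none
# the secretary may edit phone numbers; other users may only read them
access to attrs=telephoneNumber
        by dn.exact="cn=secretary,ou=people,dc=example,dc=com" write
        by users read
access to *
        by users read
```

With rules like these, a client such as Apache Directory Studio never receives userPassword for ordinary users, so it simply doesn't display it.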
<mathiaz> ivoks: so now the next step is whether there is a mechanism in studio to be able to customize the UI representation for a specific attribute
<ivoks> to show description instead of the name
<Deeps> genii: i still dont really understand, but iptables -t nat -A PREROUTING -i bond0 -p tcp --dport 80 -j DNAT --to ip.of.natted.machine.with.webserver
<ivoks> 'Full name' instead of cn
<mathiaz> ivoks: ex: for the phone number use another label instead phone number
<Deeps> genii: may or maybe all you need
<mathiaz> ivoks: something like that (useful for translation)
<Deeps> genii: may or may not*
<mathiaz> ivoks: or if the corporate culture calls it differently
<jmedina> genii: you also need to enable IP forwarding
<jmedina> echo 1 > /proc/sys/net/ipv4/ip_forward
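Putting Deeps' DNAT rule and jmedina's forwarding switch together, a minimal port-forward from bond0 to an internal web server might look like this (run as root; the internal address is an example on genii's 192.168.0.x LAN):

```
# enable forwarding now (persist it with net.ipv4.ip_forward=1 in /etc/sysctl.conf)
echo 1 > /proc/sys/net/ipv4/ip_forward
# rewrite the destination of incoming port-80 traffic arriving on bond0
iptables -t nat -A PREROUTING -i bond0 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.0.10:80
# make sure the forwarded traffic is actually allowed through
iptables -A FORWARD -p tcp -d 192.168.0.10 --dport 80 -j ACCEPT
```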
<ivoks> mathiaz: i'm not sure if that's possible :/
<genii> jmedina: I have ipv4 forwarded already, thanks
<ivoks> mathiaz: haha! it is :)
<ivoks> mathiaz: and it has built in support for different languages
<mathiaz> ivoks: my point being that an end user should see things like carLicense, employeeType, jpegPhoto
<mathiaz> ivoks: my point being that an end user should *not* see things like carLicense, employeeType, jpegPhoto
<mathiaz> ivoks: or any of the attribute name
<ivoks> ?
<ivoks> should or shouldn't? :)
<mathiaz> ivoks: should *not*
<mathiaz> ivoks: it's computer jargon - it should have a descriptive name
<ivoks> right... instead of carLicense he would see License of user's car
<mathiaz> ivoks: for the end user
<mathiaz> ivoks: yes - something like that.
<ivoks> that's possible
<mathiaz> ivoks: by changing the schema?
<mathiaz> ivoks: and editing the DESC ?
<ivoks> let me check
<mathiaz> ivoks: that would be the most natural place
<ivoks> right, choose an attribute
<ivoks> hit f6
<ivoks> and - rename it :)
<genii> Deeps: I'll try that laetr, thanks
<ivoks> that's editing desc in schema
<mathiaz> ivoks: and then the UI reflects it when you edit an object?
<ivoks> yes
<genii> Deeps: I suppose I'll require to forward from each DSL router port 80 to ip of bond0 then
<mathiaz> ivoks: awesome
<ivoks> it just looks silly
<Deeps> genii: you're doublenatting? yuck
<ivoks> maybe i'm doing something wrong:
<ivoks> displayName;lang-hr-imeiprezime
<genii> Deeps: When I had eth0 and bond0 on same lan range didn't work. So I have eth0/lan on 192.168.0.x and bond0/DSL routers on 192.168.1.x with nat from eth0 to bond0, currently
<Deeps> genii: doublenat, ugly
<genii> Deeps: I agree
<Deeps> genii: unless.... you can forward ports to 192.168.0.x on your dsl routers
<Deeps> and add a static route on your routers to route 192.168.0.x via the lan ip of bond0
<genii> Deeps: I tried that but they are crappy routers with no route adding capability
<Deeps> this really sounds like bargain basement bonding lol
<genii> Deeps: This co bought 4 DSL connections then called me to try and aggregate them. So the dsl modems were bridged and bond0 had issues trying to bond ppp0 ppp1 etc etc. So added routers between and got it going
<genii> Deeps: Yeah they are pretty cheap there too
<Deeps> bargain basement bonding
<uvirtbot> New bug: #354498 in likewise-open5 (universe) "Leaving a domain breaks NetworkManager DHCP" [High,Confirmed] https://launchpad.net/bugs/354498
<ivoks> anyway... it's a good start :)
<oruwork> how can I list hidden files ?
<oruwork> with ls command
<genii> oruwork:  with -a
<ivoks> bye all
<oruwork> I have smtpd.csr file, and i think its a public certificate file
<oruwork> ivoks, hi, bye brother !
<oruwork> and every time i use thunderbird to send out an email, its telling me to view the certificate
<jmedina> orudie: csr files usually are Certificate Signing Requests, so it is not a public cert
<ScottK> jdstrand: yes.
<oruwork> question. I have a public certificate for my mail server, and every time i use thunderbird to check or send mail, it is asking me to view it
<kirkland> jpds: pong
<kirkland> jpds: what's your underlying filesystem?  ext4?
<kirkland> mathiaz: okay, regarding kvm-84 and your performance issues....
<kirkland> mathiaz: are you using virtio on either disk or network?
<kirkland> jpds: there's a #ecryptfs channel on irc.oftc.net
<kirkland> jpds: i recommend going there to discuss this
<kirkland> jpds: ping me and tyhicks there
<embrik> I've use webmin in debian for some years - is e-Box a similar app?
<jpds> kirkland: It was ext4, but now I've reinstalled...
<kirkland> jpds: i've encountered some nastiness on ext4 as wekk
<kirkland> well
<kirkland> jpds: we're interested in recording those, if possible
<kirkland> jpds: but I, too, reinstalled with ext3
<jpds> kirkland: I decided to go with encrypted-private instead -home this time, I'll let you know if anyting happens.
<kirkland> jpds: cool, cheers
<oruwork> question. I have a public certificate for my mail server, and every time i use thunderbird to check or send mail, it is asking me to view it - how can i stop this ?
<jmedina> oruwork: you need a certificate, I guess you are using self-signed cert
<mathiaz> kirkland: I'm using virtio on both
<kirkland> mathiaz: my guess would be that virtio accelerates the guests so much, that they max out the processing on the host more quickly
<kirkland> mathiaz: and it's not throttled
<oruwork> jmedina, yeah i followed the guide to set up the mail server, its working but mozilla is bothering me about a certificate every time
<oruwork> mosilla thunderbird that is
<jmedina> oruwork: again, what type of cert?
<oruwork> and MS outlook 2003 is not asking anything
<mathiaz> kirkland: that is probably the case
<jmedina> mozilla's cert management *ucks
<mathiaz> I'm using lv in the same vg that has only RAID1 pv
<mathiaz> kirkland: ^
<mathiaz> kirkland: or I'm using files located on the same filesystem
<mathiaz> kirkland: is there a way to tell virtio to be more laid back?
<oruwork> not sure, i followed this guide to create a certificate https://help.ubuntu.com/8.10/serverguide/C/postfix.html#postfix-smtp-authentication
<oruwork> jmedina, digital certificate for TLS
<oruwork> orudie, thats how they called it in server guide
<jmedina> oruwork: :S
<jmedina> well probably it is a self-signed
<oruwork> yeah
<jmedina> well I really dont like how thunderbird plays with self signed certificates, I always build my own CA
<jmedina> Im not sure if there is easy solution about that
<jmedina> probably someone else
<jmedina> ask ivoks i think he uses thunderbird
<jmedina> I only use kontact and does the job :D
<oruwork> yeah i tried both outlook and thunderbird, outlook doesnt say anything about the certificate
<oruwork> sends and receives mail silently without any errors
<oruwork> thunderbird however, i have to click accept 3 times after pressing the send button
<oruwork> its annoying kinda
<jmedina> :d that is annoying
<oruwork> yeah
<oruwork> lol
<oruwork> ivoks left
<oruwork> he helps me out a lot :)
<jmedina> oruwork: well time to google, I would go to create your own CA, or use startssl free certs
<oruwork> CA ?
<jmedina> which is the same, you still have to import the root cert to your clients
<jmedina> Certificate Authority
<oruwork> yeah , i'm in the section of importing a certificate, just dont know where to get it from
<oruwork> and why its bothering me for it
<oruwork> jmedina, https://www.startssl.com/ ?
<jmedina> oruwork: I think you better ask in a mozilla or thunderbird channel, this has nothing to do with server
<jmedina> oruwork: yeap they issue free certs for mail clients or servers
<jmedina> you can also subscribe to cacert.org, they only provide 6-month free certs with your own domain
<jmedina> you can create your own certs
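Generating a fresh self-signed cert, so there is something concrete to import into the mail client, is a one-liner. A sketch: the hostname is a placeholder and must match whatever name clients actually connect to:

```shell
# Self-signed certificate + key for the mail server, valid one year.
# CN is a placeholder -- use the real hostname your clients connect to.
mkdir -p /tmp/demo-cert && cd /tmp/demo-cert
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout smtpd.key -out smtpd.crt -days 365 \
    -subj "/CN=mail.example.com"
# Inspect what we produced; importing this cert (or the CA that signed it)
# into the client's certificate store stops the repeated prompts.
openssl x509 -in smtpd.crt -noout -subject
```

Pointing postfix at `smtpd.crt`/`smtpd.key` and importing the cert into Thunderbird's certificate store is what makes the "view certificate" dialog go away.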
<kirkland> mathiaz: ionice?  nice?
<kirkland> mathiaz: I'm not sure, honestly
<Maelaian> Are there any plans for a 64bit JeOS?
<trondkla> somebody have a mail program to recommend? For sending mail out from a web server :)?
<Maelaian> sendmail?
<mathiaz> trondkla: postfix is the default mail server in Ubuntu.
<Nafallo> Maelaian: there are no plans for further JeOS'es after 8.04 I don't think.
<trondkla> ok, thanks :) will check out both
<Maelaian> You mean 8.10?
<Nafallo> Maelaian: there is a minimal server install now however.
<Maelaian> Oh? Does that allow 64bit?
<Nafallo> yes.
<Nafallo> Maelaian: and no, I meant 8.04
<Maelaian> Well hell, where were you last night.
<Nafallo> I was visiting pubs. why?
<OscarTgrouch> is there a 64bit ubuntu version that will work on intel Core2?
<mathiaz> Maelaian: You can install a minimal server by hitting F6 at the boot prompt when installing Ubuntu Server
<Maelaian> I see it, but on f4
<mathiaz> Maelaian: oh right - F4 then
<mathiaz> Maelaian: this option will install what used to be called JeOS
<Maelaian> Good, I didn't like the name.
<mathiaz> OscarTgrouch: there is only one version of 64bit Ubuntu Server and it should work on intel Core2.
<OscarTgrouch>  is there any benifit to running VMware server on ubuntu server 64 bit over ubuntu 32 bit when running multiple windows xp 32 bit systems?
<Maelaian> So I F4, hit enter, the menu goes away, and then use the install ubuntu like normal?
<Maelaian> It didn't really give an indication that hitting f4 then enter did anything
<infinity> OscarTgrouch: Assuming VMware is happy running on 64-bit, then yes, the benefit would be better memory management and a generally more responsive system... (Unlike all other 32/64-bit variants, x86_64 has more registers than x86_32, and generally performs faster, despite the more bloated memory usage)
<mathiaz> Maelaian: that should do it
<JDStone> someone said my name
<Maelaian> and does using the minimal for virtual negate the apt-get install linux-virtual for the kernel?
<OscarTgrouch> thanks
<mathiaz> Maelaian: how did you diagnose that?
<Maelaian> I wanted to know if it was still necessary to do install it, or if it was the default.
<mathiaz> Maelaian: it's the default
<Maelaian> Ok, this is exactly what I wanted.
<Maelaian> I knew it had to exist.
<uvirtbot> New bug: #335341 in apache2 (main) "package apache2-utils 2.2.9-7ubuntu3 failed to install/upgrade: package apache2-utils is already installed and configured" [Low,Incomplete] https://launchpad.net/bugs/335341
<yeason1> quick question... is there a way to run a command, ex: virtual machine, from ssh and keep it running even after I disconnect...?
<infinity> yeason1: Background it, and then disown it, so losing the parent shell doesn't kill it.
<yeason1> ah, I know how to background it, how do I disown it?
<infinity> yeason1: "help disown" in a shell.
<yeason1> fair enough, thnx =)
<yeason1> infinity: thanks for the info, got what I needed
<Keyser_Soze> screen does that too
<Keyser_Soze> even allows you to grab the shell from another computer via ssh
<Deeps> screen++
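All three suggestions side by side as a runnable sketch, with `sleep` standing in for the real long-running command:

```shell
# 1. background + disown: the shell forgets the job, so closing the SSH
#    session no longer sends it SIGHUP (disown is a bash builtin)
sleep 300 &
pid=$!
disown "$pid" 2>/dev/null || true   # tolerate shells without disown
echo "$pid" > /tmp/bg.pid

# 2. nohup: immune to SIGHUP from the start, output captured to a file
nohup sleep 300 > /tmp/vm.log 2>&1 &

# 3. screen: run the command inside a named session, detach with C-a d,
#    then reattach later from any SSH session:
#      screen -S vm        # start and name the session
#      screen -r vm        # reattach to it
```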
<andol> Just noticed bug #205996. Is it too late to have it "fixed" for Jaunty? The matter of changing the default ServerTokens should be fairly trivial I guess? How much discussion is required to find the proper one? (Myself I kind of like "ServerTokens OS").
<uvirtbot> Launchpad bug 205996 in apache2 "ServerTokens Full in apache2.conf (security risk?)" [Wishlist,Triaged] https://launchpad.net/bugs/205996
<andol> Well, guess I should have checked its actual status in Jaunty before I said anything :) Just a minute
<andol> yes, "ServerTokens Full" is still the default in Jaunty.
#ubuntu-server 2009-04-04
<owh> Salutations. I've just completed a do-release-upgrade from gutsy to hardy. This server will only boot if acpi=off. I've manually edited menu.lst and added that back in, but I have two questions. 1) How do I make it automagically add it? 2) Do I have to run something to make it "stick", like with lilo?
<owh> Hmm, I also recall that this is running an array. In the past I had to manually copy the boot block across with dd, do I still have to do that?
<owh> Finally, what checks do you recommend before I reboot this remote machine into its new OS?
<jmedina> owh: nothing
<owh> jmedina: Nothing?
<jmedina> grub reads menu.lst each time, boot info is not stored in MBR or boot partition
<owh> Cool, so that takes care of q's 2 and 3. 1 is for later, what about 4?
<infinity> owh: Find the line starting with "# kopt" in /boot/grub/menu.lst
<infinity> owh: Mine looks like "# kopt=root=UUID=b42d1082-e4b3-4eb0-98ce-f1ac84027c7e ro"
<owh> infinity: Yup
<infinity> owh: If you add "acpi=off" to the end of that, then subsequent runs of "update-grub" (which is what kernel installs do) will auto-append it to all your kernel lines.
<owh> Right below that is kopt_2_6, does that have any bearing?
<owh> It's showing my boot device.
<owh> # kopt_2_6=root=/dev/md2 ro
<infinity> owh: You shouldn't need _2_6 or similar, unless you are trying to have specific options for specific versions.
<owh> I've never tweaked it, but the fact that it's showing my array seems significant.
<infinity> owh: I can't think of anything that would automatically create versioned kopt stuff, except maybe some sketchy third-party configs.
<infinity> owh: But if the regular kopt isn't configured correctly, then either fix that, or use _2_6...
<owh> This was a virgin install gutsy and I don't install third party stuff - makes my job way too hard.
<infinity> owh: The more specific version will "win" when creating the boot stanzas.
<infinity> owh: (So _2_6_24 beats _2_6 beats _2 beats kopt...)
<owh> Seeing that it's running the current kernel, I'm thinking stick with kopts_2_6
<infinity> owh: Yeah.  The only issue with _2_6 is that it'll explode amazingly on upgrade to 3.0.x... But that's probably not around the corner anyway. :)
<infinity> owh: Anyhow, it's not like any of it's irreversible.
 * owh is guessing that there will be other challenges than a kopt_2_6 stanza :)
<owh> Final question. I recall in a previous kernel upgrade the machine chose the wrong network interface and I had to actually travel to the console to fix it. Anything I should check?
<infinity> owh: Modify kopt* how you like, then run update-grub, and diff /boot/grub/menu.lst{~,}
<infinity> owh: If it did what you wanted, yay.
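infinity's recipe as commands, for owh's acpi=off case (run as root, and keep a backup of menu.lst first):

```
cp /boot/grub/menu.lst /boot/grub/menu.lst.orig
# append acpi=off to the "# kopt=" line only (leaves the kopt_2_6 variants alone)
sed -i '/^# kopt=/ s/$/ acpi=off/' /boot/grub/menu.lst
update-grub
# update-grub keeps the previous file as menu.lst~, so diff as suggested:
diff /boot/grub/menu.lst~ /boot/grub/menu.lst
```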
<owh> infinity: Whoot!
<infinity> owh: NICs should ideally be statically mapped via udev rules... See /etc/udev/rules.d/70-persistent-net.rules
<owh> One problem down.
<owh> infinity: Well, the udev rule seems to map the correct MAC address to the correct device name.
<owh> So I'm guessing it won't swap eth0 and eth1 on me this time.
<infinity> owh: Then it should be just fine from here on.
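A typical hardy-era line from that udev file, pinning an interface name to a MAC address (the address here is a placeholder):

```
# /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:aa:bb:cc", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```

As long as each NIC's MAC maps to the name you expect, a reboot won't swap eth0 and eth1.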
<owh> Well, let me do the reboot thing and see if I come back cursing :-)
<owh> It's the time between "Connection closed by remote host." and the login prompt that kills yah!
<infinity> Takes my sparc buildds about 5 minutes to POST, I feel your pain.
<owh> At the moment I'm getting "Connection refused" :-)
<owh> I've got to invest into a KVM over IP.
 * owh is hoping that it's doing an fsck because this seems to be taking a long time.
<infinity> Well, "connection refused" tends to mean it's half booted, but no sshd running yet.
<infinity> Since not being online at all will usually warrant a "no route to host" or similar.
<owh> Yeah, that's what's keeping me from shouting :)
 * owh is going to step away from the keyboard and come back after a nice soothing cup of tea to see what gives. Thanks infinity for your help.
 * owh releases a sigh of relief.
<owh> Cups of tea can solve server problems :)
<infinity> owh: Good to hear.
<owh> Hmm, just  tried an ssh tunnel for RDP and it tells me that it "Could not request local forwarding." - is that my end or the server end?
<infinity> X11Forwarding disabled in /etc/ssh/sshd_config ?
<infinity> (Assuming it's an X11 client you're trying to forward)
<owh> Nah, it turns out to be on this end. The ssh process didn't close properly when the server went away.
<owh> Well, that's a win for remote admin. XP SP3 install complete across RDP, followed by a gutsy-hardy upgrade over ssh.
<Kamping_Kaiser> owh, tried ssh -vv ?
<owh> Kamping_Kaiser: No, it was a local ssh process that was holding open the local end of the tunnel.
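The tunnel owh is using would look roughly like this; ports and hosts are illustrative:

```
# forward local port 3390 to the RDP port (3389) of a LAN machine behind the server
ssh -L 3390:192.168.0.10:3389 user@server.example.com
# then point the RDP client at localhost:3390
#
# "Could not request local forwarding" usually means the local port is
# already bound -- here, by a stale ssh still holding the old tunnel:
#   lsof -i :3390     # find the culprit, then kill it
```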
<owh> Nothing like a non-responding server to get your heart going :0
<owh> Beats all the extreme sports I can think of :)
<twb> Inline skateboarding while hanging off the back of a tram is pretty fun.
<uvirtbot> New bug: #354430 in ubuntu (main) "samba mount does not work at boot time" [Undecided,New] https://launchpad.net/bugs/354430
<ULFfuntu> hi
<mat1211> Hi, how would I reformat my external harddrive with the hfs filesystem?
<mat1211> without reformatting the original harddrive that is in the server.
<twb> mat1211: HFS or HFS+?
<mat1211> does it matter?
<twb> They are quite different filesystems.
<mat1211> what are the differences, I just need something that works well with ubuntu and can support users and stuff.
<twb> I don't recall offhand.  I expect user-visible changes regarding case folding, at the very least.
<twb> For OS X, you should definitely use HFS+, not HFS.
<twb> HFS ought to be used only if you are dealing with MacOS 7 or earlier, I think.
<twb> Neither should be used unless you're dealing with OS X.
<twb> Even then NTFS (with ntfs-3g) might be a better interchange format...
<twb> I know from personal experience that if an HFS or HFS+ filesystem becomes corrupt, you can't repair it (i.e. fsck) from Linux.
<mat1211> With hfs+, can I change the owner of a directory? because I cannot do this with the fat32 fs I have now, also would the hfs+ work with windows?
<twb> Neither HFS nor HFS+ work with Windows.
<mat1211> do drivers exist that I can download for windows? anyway, can I change the owner of dirs? and what is actual code to reformat.
<twb> HFS (not HFS+) has a 2GiB file size limit, limits file names to 31 characters, and does not support Unicode in filenames.
<twb> If you are dealing with Windows I strongly encourage you to use ntfs3g over hfs+, as the former should be more mature and robust, and (I believe) works on both OS X and Windows out of the box.
<mat1211> I will probably use hfs+ then
<twb> "On Windows, a fairly complete filesystem driver for HFS+ exists as a commercial software package called MacDrive. This package allows Windows users to read and write HFS+ formatted drives, and read Mac format optical disks. ^[10]"
<twb> http://en.wikipedia.org/wiki/HFS%2B
<mat1211> ah, well then that changes things.
<twb> It seems that HFS+'s journalling support is not implemented on Linux, either.
<mat1211> ntfs3g you say?
<twb> Yes, ntfs-3g.
<twb> The kernel-based ntfs driver is essentially read-only.
<mat1211> How would I set my hd up with this fs, and does ubuntu come with drivers.
<twb> apt-get install ntfs3g or ntfs-3g, I forget which.
<mat1211> and I can write to ntfs3g, right? read only would be bad, im trying to set up a web server so...
<twb> Yeah, ntfs3g has write support.
<twb> If this disk is going to live in the Ubuntu server, it would be much better to use ext3 and provide write access to OS X and Windows desktops using Samba.
<mat1211> ah, and how would I put this on my usb harddrive? is there a reformat command :P
<twb> There are several commands to manipulate a disk's partition table, including gparted, cfdisk, parted and sfdisk.
<twb> Once the disk is partitioned, you would determine the partition's name (e.g. /dev/sde1) and use the mkfs(8) command to create a filesystem on it.
<mat1211> I just have a disk with one big partition, which I just want to change the filesystem of.
<twb> Note that changing the filesystem will destroy any data currently on that partition.
<mat1211> yeah, im backing it up atm.
<mat1211> What is the syntax for mkfs?
<twb> Check the manpage.
<mat1211> manpage doesn't work, the ssh client im using for windows doesn't support it, im googling it atm.
<twb> Er, you're using putty?
<mat1211> yeah
<mat1211> unfortunately, I use a screen reader and its the only one I can find that even vaguely works
<twb> putty's great IMO
<twb> Maybe man doesn't work because you're using less as the pager?
<twb> Try PAGER=more man mkfs
<twb> You know if you have a braille reader you could probably hook that up to the server itself with good results.
<mat1211> no, I am blind and my windows screen reader isn't working properly with the page.
<mat1211> tried that, doesn't work also
<mat1211> lol
<twb> OK, there might be a trick to it that I don't know about; you'd have to ask the brltty group.
<twb> You can do /msg specbot man foo, to get the OS X manpage for foo.
<twb> Unfortunately specbot doesn't know about Linux manpages.
<mat1211> hmmm
<mat1211> How would I just format this with ext3 fs?
<mat1211> or is that a bad idea.
<twb> mkfs -t ext3 /dev/sde1
<twb> ext3 is the best choice for use by Ubuntu
<twb> But it is not supported well by OS X or Windows, except via a network fs such as CIFS (Samba).
<mat1211> ah.
<mat1211> btw isn't it sdb not sde? when I mount its usually mount -t /dev/sdb1
<twb> It depends
<twb> I used a high number so you didn't accidentally run it and destroy one of your internal disks
<twb> A high letter, rather
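The partition-then-mkfs sequence twb describes can be rehearsed safely against a file-backed image instead of a real disk; the paths below are invented for the demo, and on the actual drive you would substitute the real partition name (e.g. /dev/sdb1) only after double-checking it in cfdisk:

```shell
# Practice target: an 8 MB file instead of a real partition,
# so a typo cannot wipe an internal disk.
dd if=/dev/zero of=/tmp/fake-partition.img bs=1M count=8 status=none

# mke2fs will happily format a plain file when forced with -F.
mkfs.ext3 -F -q /tmp/fake-partition.img

# On the real drive this becomes:  sudo mkfs -t ext3 /dev/sdb1
```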
<NaOH> can i bounce a postfix+sasl question off someone?
<twb> NaOH: #postfix?  I mean, you can ask here too :-)
<mat1211> Ah, what does the letter have to do? I didn't know it mattered.
<NaOH> cool, well i got my clients authenticating on a workaround, i have to do a ln -s /var/spool/postfix/var/run/saslauthd /var/run
<NaOH> for it to work properly
<NaOH> i see whats happening, and i'm assuming i have to point a config in the right direction to get this working kosher?
<twb> mat1211: letters are assigned to disks in the order in which they are found, starting at a.
<twb> mat1211: sda1 means SCSI disk a (first disk), partition 1 (first partition).
<twb> mat1211: just about everything is treated as a SCSI disk nowadays, but they used to also have hda1 for PATA disks.
<Maelaian> I just did a dist-upgrade, how do I get rid of the old kernels?
<p_quarles> Maelaian: apt-get remove
<Maelaian> Hmm, how do I tell if I am using the -virtual or -server kernel? uname -a?
<p_quarles> Maelaian: uname -r will tell you that
<Maelaian> It says server, I'm using the x64 server disk, and I did the F4 to the virtual minimal install. I did a dist-upgrade and it appears I'm using the -server now, when I want to be using -virtual.
<Maelaian> yea it server.
<p_quarles> so, is the server a virtual machine?
<Maelaian> It is.
<p_quarles> xen?
<Maelaian> ESX
<p_quarles> not familiar with that type of virtualization
<Maelaian> vmware.
<p_quarles> vmware doesn't need a special kernel
<Maelaian> Vmware recommends using the -virtual kernel as its tuned to run better as a guest.
<Maelaian> Which I do believe the virtual minimal install uses, but i believe it went to -server when I dist-upgraded, is there a way to let it know I always want -virtual and to switch over to that now?
<sbeattie> Maelaian: the -virtual kernel is the same as the -server kernel with a smaller set of modules (and thus self-identifies via uname|/proc/version as -server); "dpkg -l linux-image-*" will show you what you have installed.
<sbeattie> they were separate kernels with independent configs in hardy, but that changed in intrepid and jaunty.
<Maelaian> Alright. From this dpkg -l list, what shows the entry as being the one im using ?
<sbeattie> the version that correlates to the contents of /proc/version_signature
<Maelaian> Ok, yea looks right, just got confused by the uname output.
<sbeattie> yeah, I'd rather it was identified itself as -virtual, to prevent exactly this kind of confusion.
<Maelaian> All I know is that this time I didn't have to take out wireless-tools and wpa_supplicant ;)
<Maelaian> My old kernel is listed in dpkg --get-selections with deinstall next to it. How do I get it out?
<twb> vmware needs a special kernel *module*, however.
<Maelaian> yea the vmware tools?
<Maelaian> like the vmxnet driver etc.
<twb> Maelaian: I meant for the host OS
<twb> Sorry, I didn't read the scrollback too carefully.  Maybe I misunderstood.
<Maelaian> Oh. Sorry, yea, I'm using ESX server.
<rags> anyone know of any log analysis package that has a web interface?
<ropetin> What kind of logs?
<rags> I know of logshow and logcheck, but they just mail you the logs
<rags> genreral logs...
<rags> syslog...just want  a better interface, rather then a text file...
<rags> with some searching and analysis thrown in...
<rags> I know of splunk...but tht is an overkill..I want something light.
<ropetin> I just found something called phpLogCon, not sure if it's any good or not though
<rags> I remember some package called phplog analyser or something..but can't get it's name...
<ropetin> The one I use is even more over kill than splunk!
<ropetin> http://www.syslog.org/wiki/Main/LogAnalyzers
<rags> aha..I think tht's the one..wht do you use btw?
<ropetin> One of those will surely do it
<rags> wonderful!
<ropetin> It's an in-house developed web-app (event analysis is what I do for a living, at least sometimes)
<rags> phpLogCon seems to be the perfect tool...
<mattt> dang wiki is down
<uvirtbot> New bug: #354243 in samba (main) "Nautilus fails to browse any share on network to which Iomega Home Network Hard Drive (MDHD500-N, firmware K108.W15) is attached." [Low,New] https://launchpad.net/bugs/354243
<Deeps> trying to remount / as ext2 (is ext3), `mount -t ext2 -o remount /dev/sda1 /` doesn't appear to be working, any ideas?
<uvirtbot> New bug: #355151 in kerberos-configs (universe) "suarez" [Undecided,New] https://launchpad.net/bugs/355151
<Noble> Hi, I'm looking for a way to get all the computers at home (7 or so) to log into a server and get their permissions and storage from there. Is this a samba domain thing? How should I go about it?
<jpds> Anyone know why etckeepers keeps wanting to write to my users's ~/.bzr.log instead of roots?
<jmarsden> jpds: Perhaps you are running etckeeper from sudo or su and it respects the $HOME variable which is still set to point to your user's home directory?
<jpds> jmarsden: Yeah, that's what I thought, but it's not happening on another system I have it.
<jmarsden> jpds: Try su -   and then run etckeeper from there and see if if then does what you want?
<jpds> jmarsden: It does with sudo -i.
<jmarsden> Then that has to be it... sudo -s leaves the $HOME alone, sudo -i changes it...
<jpds> Yep, but it works fine when I install a new package on the other computer with sudo apt-get ...
<ivoks> apt-cacher?
<jpds> ivoks: No, I'm having problems with etckeeper+bzr and it wanting my ~/.bzr.log file.
<jmarsden> Well, as a test you can try something like   sudo HOME=/root whatever-command-you-test   # ... does that work?
<ivoks> ah... :)
<jpds> That works.
<jmarsden> Therefore, $HOME is the culprit.  Why it is Ok on the other machine ... I have no idea :)
<ivoks> ghosts
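The $HOME mechanism jmarsden pinned down can be sketched without sudo at all: bzr writes its log to $HOME/.bzr.log, "sudo -s" keeps the caller's HOME while "sudo -i" resets it to root's. The paths below are just illustrative:

```shell
# Simulate the two sudo cases with a plain environment override:
HOME=/home/jpds sh -c 'echo "bzr would log to $HOME/.bzr.log"'
HOME=/root      sh -c 'echo "bzr would log to $HOME/.bzr.log"'
```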
<mib_d75wrq0u> hi guys.  I am trying to get a command workign so i can use it in php. "sudo -u sap useradd -p test1 test1" returns an error about the account being locked
<mib_d75wrq0u> useradd: unable to lock password file  ---------- to be exact
<mib_d75wrq0u> yet if i do not do the -u it works fine...
<ivoks> what do you want to achive?
<ivoks> add a user?
<mib_d75wrq0u> yes, to the server
<mib_d75wrq0u> but it will be done through php
<ivoks> sudo -u sap = do next command as sap user
<mib_d75wrq0u> my thought is that this can be don using the sudo -u... but im having extremely inconsistant results
<ivoks> can sap user create new users (i guess not)
<mib_d75wrq0u> yes they can
<mib_d75wrq0u> thats whats weird
<mib_d75wrq0u> i can do it as the user with no -u in the command
<jpds> jmarsden: Deleted the bzr files for bzr in /root and my own and now: http://paste.ubuntu.com/144334/
<jpds> Awesomeness.
<ivoks> mib_d75wrq0u: if you run it as 'sudo whatever' you'll run whatever as root
<ivoks> mib_d75wrq0u: 'sudo -u sap whatever' will run as sap user
<mib_d75wrq0u> its not workign though
<mib_d75wrq0u> but when i sudo su into the user sap, and try it
<mib_d75wrq0u> it does
<ivoks> 'sudo su' will make you a root, not sap
<mib_d75wrq0u> sportman1280@illidan:~$ sudo -u sap useradd -p test5 test5 useradd: unable to lock password file
<jmarsden> jpds: Yes, but if you prefix with HOME=/root then it works, right?  So this is just the same thing... it uses $HOME ?
<mib_d75wrq0u> sudo su sap*
<jpds> jmarsden: Yep. Which is odd.
<jmarsden> Well, it's a fairly common thing for a shell script to do, use $HOME to find the home dir...
<ivoks> mib_d75wrq0u: have you configured sudoers?
<mib_d75wrq0u> yes
<ivoks> so, 'sudo -u sap ls' doesn't work?
<jmarsden> jpds: What is odd to me is the *other* one that works for you without setting $HOME !
<mib_d75wrq0u> sudo -u sap ls does
<mib_d75wrq0u> useradd does not :(
<jpds> True.
<ivoks> mib_d75wrq0u: are you sure 'sap' user can create new users?
<jmarsden> mib_d75wrq0u: sap does not have the privs to run useradd -- should it have them?
<mib_d75wrq0u> here is visudo:    sap ALL=(ALL) NOPASSWD:ALL
<ivoks> mib_d75wrq0u: (i doubt it can)
<mib_d75wrq0u> sportman1280@illidan:~$ sudo -u sportman1280 useradd -p test5 test5 useradd: unable to lock password file
<ivoks> this only means that sap user can do whatever it wants without prompting password
<mib_d75wrq0u> i cant do it myself either
<ivoks> that's improper use of sudo
<ivoks> sudo - make ma root
<mib_d75wrq0u> well yea... but it was test
<mib_d75wrq0u> lol
<ivoks> so sudo -u sportman1280 - make me a sportman1280
<ivoks> of course you can't create a user
<ivoks> only root can do that
<ivoks> so, sap user needs to run 'sudo useradd bla bla bla'
<mib_d75wrq0u> hmmm ok so i think i see where your going
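ivoks's point is that the failure comes from the effective uid, not from sudoers: useradd must lock /etc/passwd and /etc/shadow, which only uid 0 can do, so "sudo -u sap useradd ..." (running as sap, not root) fails while "sudo useradd ..." works. A harmless sketch of the uid check, touching no real accounts:

```shell
# Only uid 0 can lock the password files that useradd needs.
if [ "$(id -u)" -eq 0 ]; then
    echo "uid 0: useradd can lock the password file"
else
    echo "uid $(id -u): useradd fails with 'unable to lock password file'"
fi
```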
<ScottK> ivoks: Did you get a chance to try out clamav 0.95 yet?
<ivoks> ScottK: haven't someone ack that it works?
<mib_d75wrq0u> well i cant do that with php
<ScottK> ivoks: Yes.  And I uploaded it.  I'm curious how it does for you.
<mib_d75wrq0u> thats wehre this is stemming from... i need to run it during php
<ivoks> mib_d75wrq0u: php cli apps are running as www-data user
<ivoks> unless you changed something
<ivoks> ScottK: eh... ok, i'll test tomorrow
<mib_d75wrq0u> ivoks: hence me trying to run sudo as sap.  or thats what i was thinking at least
<ivoks> why would you run it as sap?
<ivoks> you need sudo to delegate root privileges, not sap
<mib_d75wrq0u> ivoks so the entire server isnt running with the ability to sudo things without passwords?
<ivoks> you should stop with everything you are doing now
<mib_d75wrq0u> haha
<mib_d75wrq0u> ok
<ivoks> and *read* about sudo
<ivoks> cause you are on a way to do something very bad to your server
<mib_d75wrq0u> well i think i might have been getting desprate and went off the right path then
<mib_d75wrq0u> what would you recommend for running a sudo command in php then?
<mib_d75wrq0u> or where should i look? i cant really find good directions on the process either...
<ivoks_> so:
<ivoks_> 20:04 < ivoks> if you need to create users from web application
<ivoks_> 20:04 < ivoks> you shouldn't use shadow for authentication
<ivoks_> but sql or ldap
<mib_d75wrq0u> its to administer the server
<mib_d75wrq0u> this is just one part of the commands we will be adding
<ivoks_> you are doing a very bad thing
<ivoks_> onw way to solve it is to create apache+fastcgi instead of apache-mpm-prefork
<andol> mib_d75wrq0u: ldap is perfectly fine for system accounts.
<ivoks_> even better than shadow
<mib_d75wrq0u> the project is to create something like cpanel ultimately. so we will be working with mysql and other tools too
<ivoks_> anyway, bye
<mib_d75wrq0u> ok thanks
<mib_d75wrq0u> :)
<andol> mib_d75wrq0u: You know, you really should listen to ivoks_.
<ivoks_> if you are creating cpanel-clone
<ivoks_> take a look at ispconfig
<ivoks_> it populates sql
<mib_d75wrq0u> ok thanks :)
<ivoks_> and then with cron pulls from it
<ivoks_> that's also cr@p
<ivoks_> mib_d75wrq0u: do it with ldap and everybody would love you
<ivoks_> since ldap is more flexibile than shadow
<ivoks_> under ldap account you can define everything you need, not just username and password
<mib_d75wrq0u> ivoks_ i have nothing against ldap, we use it at work. i was just also thinking outside of the username and password realm into the other sytem command
<mib_d75wrq0u> ivoks_ im downloading the ispconfig now :)
<ivoks_> mib_d75wrq0u: web application that can execute root commands without password is best way to disater
<ivoks_> disaster
<ivoks_> even more if it's PHP based
<ivoks_> time for real life :D
<ivoks_> take care
 * ScottK wonders what is this 'real life' that ivoks mentions.
<socialist> are there any stock server kernels that don't require PAE?  I need to do some xen stuff and I've got some old machines without PAE, but all the xen support appears to be in the server kernels
<Deeps> socialist: the -virtual kernels dont work with xen?
<socialist> those are server kernels w/ PAE
<socialist> I'm just building a custom with PAE turned off, no biggie
<mat1211> Hi, what is syntax to reformat a disk in /dev/sdb1 with ext3 fs?
<mat1211> lol
<Deeps> mkfs.ext3 /dev/sdb1
<mat1211> thanks
<giovani> probably requires a prepended sudo, no?
<PhotoJim> mat1211: to help you remember that command, think of mkfs as meaning "MaKe FileSystem"... that's how I remember it
<mat1211> nod
<mat1211> this will probably take a long time lol, also while I am here, what is the command to extract a .rar file to a certain dir?
<giovani> unrar e filename.rar
<giovani> it requires the unrar utility of course
<PhotoJim> mat1211: there is probably syntax to force it to a directory, but I just cd /path/to/directory  and then unrar e filename.rar
<giovani> there is ... append the path
<giovani> unrar e filename.rar /path/to/extract/to/
<mat1211> What if the rar file is in a different dir than I want to extract it in?
<mat1211> ah
<giovani> mat1211: read the manpage
<giovani> it's clearly spelled out there
<PhotoJim> unrar e /path/to/rarfile.rar
<mat1211> I usually would read the manpage, but my screen reader and putty don't work well together. :P
<PhotoJim> try: export TERM=screen  ... and then see if man works better.
<giovani> probably misconfigured putty then
<Deeps> and try using a different pager, i find most displays the colours in manpages properly
<giovani> securecrt > *
<mat1211> The one for windows? nah its just one file.
<giovani> mat1211: putty has dozens of session options
<giovani> but, ok
<mat1211> maybe I downloaded the smaller version by mistake.
<giovani> nope, same version ... it still has options
<Deeps> chances are he's using the default options then, which is usually fine
<mat1211> meh, I'm getting an apple comp soon, which works far better using voiceover and stuff.
<mat1211> so I won't need to ask as many questions on here :P
<wizardslovak> i want to edit my conf file "gedit /etc/mysql/mt/cnf" i am getting error --->gtk-warning cannot open display
<wizardslovak> i want to learn mysql and ubuntu server well but i cant open config file
<wizardslovak> any ideas?
<Deeps> 2 problems, 1- gedit is a graphical editor, it needs a GUI, ubuntu server doesn't use a GUI, so you need a different tool, like nano, pico, vim, or emacs (console text editors)
<Deeps> 2- /etc/mysql/mt/cnf is probably not the correct path nor file, and you probably want /etc/mysql/my.cnf
<wizardslovak> i made mistake my.cnf
<beechbone> I've got a postfix/courier mail server setup. It now requires sasl authentication to send emails to other domains. But I can still send messages from smtp within the server. Any way to stop this?
<wizardslovak> so which text editor do you prefer?
<Deeps> wizardslovak: i prefer vim, but it has a steep learning curve. you're probably best suited to nano or pico
<wizardslovak> ok so i will use nano
<wizardslovak> can i ask more questions?
<jpds> Sure.
<jpds> !ask
<ubottu> Please don't ask to ask a question, simply ask the question (all on ONE line, so others can read and follow it easily). If anyone knows the answer they will most likely reply. :-)
<wizardslovak> i am new to ubuntu server and i never could understand how to set up a server which is behind a router
<Deeps> there are pages on the wiki that describe how to setup an ubuntu server as a network router
<wizardslovak> can you redirect me there?
<Deeps> https://wiki.ubuntu.com/
<jpds> help.ubuntu.com
<jpds> wizardslovak: Do you mean behind a router?
<jmarsden> Deeps: I think the issue may be more of the server being "behind" a router i.e. NAT port forwarding in a SOHO router ?
<Deeps> the man says "as a", i'll take the english interpretation of that
<wizardslovak> yes , i mean i got couple PCs and i want to use one as server (for studies)
<wizardslovak> i have reouter with dd-wrt
<wizardslovak> so far i am using ubuntu server on my laptop on virtualbox
<wizardslovak> i cannot find nothing about server and router
<jpds> wizardslovak: http://en.wikipedia.org/wiki/Port_forwarding
<jmarsden> wizardslovak: Sounds like you will: (1) set the Ubuntu server to run whatever service you want to offer; (2) make sure you have the virtualbox stuff set up so that other PCs on your LAN can get to that service; (3) Set up a port forward in your dd-wrt router so that a port on the public Internet interface of it is routed through to the port on the Ubuntu server for your desired service.
<wizardslovak> right now i dont want anyone outside to see my servers, but my LAN
<jmarsden> Then ignore part (3) for right now.
<wizardslovak> sorry i know i might be pain in the a** but i am interested in servers and books are not enough
<wizardslovak> better to get support from people which actually use it
<jmarsden> In a sense, virtualbox is making this harder for you than it would be if you had Ubuntu Server on its own real hardware PC... because you have to get its networking stuff right too.  Do you have a spare real PC you can install Ubuntu Server on?
<wizardslovak> yes i do
<jmarsden> Then that might be a good way to go to get started.
<wizardslovak> its old one with 2.4 celeron but it should be enough
<jmarsden> RAM may be more important than CPU for this sort of learning stuff... do you have 512MB or more on the celeron PC?
<wizardslovak> i though ill get used to server on Vbox and then
<wizardslovak> 512
<jmarsden> You can do the Virtualbox thing, but it is one more set of configuration things to get right... 512 should be good.  I'd try that.  See if you can get Apache working on the Celeron PC running Ubuntu Server as a first step, maybe?
<wizardslovak> good idea
<wizardslovak> although i wont do it today
<wizardslovak> for web server, i have to forward 80 or can i forward different port?
<wizardslovak> if i will forward 80 will i be able to browse internet?
<jmarsden> if you forward TCP port 80 to the server other people from the Internet will be able to browse to your server, yes.
<jmarsden> If you set up the server on your LAN you will be able to browse to it using its LAN address from your laptop without doing any port forwarding at all.
<wizardslovak> will my computers be able to browse internet(not my server)
<jmarsden> Yes, you don't need to do any port forwarding for that, normally, your router will do NAT for you and it will "just work".
<dug_> Do you have to have multiple ips if you want to run multiple servers on different virtual machines?  Or can you forward based on virtual hosts or port numbers
<wizardslovak_> ok i am back
<jmarsden> dug_: You can forward different port numbers to different LAN Ip addresses, so you only need one public IP.
<jmarsden> wizardslovak_: I said: Yes, you don't need to do any port forwarding for that, normally, your router will do NAT for you and it will "just work".
<wizardslovak_> ok
<wizardslovak_> so lets say i want to connect to my server from different location, ill type my ip adress and then my server LAN ip ,correct?
<jmarsden> wizardslovak_: No, for that you will forward a port on your router to a port on the LAN IP of your server.  Then from outside you will just connect to that port of your public IP address.
<wizardslovak_> ooo
<wizardslovak_> and i couldnt figure out how to do that
<jmarsden> But earlier you said you didn't want access from the outside... are you changing your mind?
<wizardslovak_> nah not now
<wizardslovak_> i am curious
<jmarsden> One step at a time is easier... get it working on the LAN first.
<wizardslovak_> exactly
<wizardslovak_> first install , and get use to it
<wizardslovak_> so i figure out you work with ubuntu server
<wizardslovak_> how long did it take for you to learn it
<jmarsden> Well, I've worked with Linux since late 1992... :)
<wizardslovak_> i was trying to find some linux server course but no luck
<wizardslovak_> so i've decided to learn by myself
<jmarsden> They do exist... do you run Ubuntu on your desktop/laptop?  That can be a good way to learn Linux in general...
<wizardslovak_> i used to run suse linux for year+ , and then switched for  kubuntu
<wizardslovak_> i have dual boot , windows for games and kubuntu as main OS
<jmarsden> OK.  One place offering Ubuntu Server training stuff is http://beginlinux.com/server_training but I have no idea how good or bad it is...
<jmarsden> Ubuntu Server is really just a server-oriented set of Ubuntu packages, and a kernel configured for server type workloads.  If you are comfortable with KUbuntu and at a shell prompt, you really don't have much new to learn for Ubuntu Server at all.
<wizardslovak_> well i am getting to know shell yet
<dug_> thanks jmarsden
<wizardslovak_> i love apt-get better then yast in suse
<jmarsden> dug_: No problem.
<jmarsden> wizardslovak_: Sounds good.  When I started there was no X on Linux, so I *had* to learn the shell :)
<wizardslovak_> i was fascinated by shell loong time ago
#ubuntu-server 2009-04-05
<wizardslovak_> its much faster to use shell then gui
<wizardslovak_> in gui i prefer kde
<jmarsden> BTW if you want to "go for it", installing Ubuntu Server on the old Celeron machine from CD should only take you 30 minutes or so... and yes, I find the shell is more powerful than a GUI for many tasks... once you know it!
<wizardslovak_> i am trying to unrar .rar file but i am getting "command not found" error
<wizardslovak_> i installed rar with apt-get but still "command not found"
<jmarsden> What exactly are you typing?
<wizardslovak_> sudo unrar e file.rar
<jmarsden> You shouldn't need sudo for that... and... did you install unrar?  rar and unrar are different things.    sudo apt-get install unrar   # should work
<wizardslovak_> ooo i did "apt-get install rar
<jmarsden> Try  sudo apt-get install unrar-free
<wizardslovak_> ok now it works
<jmarsden> Good :)
<wizardslovak_> "apt-get install unrar" and it works
<jmarsden> BTW rar is not a common archive format in the Unix/Linux world, it is more common to use tar.gz or tar.bz2 ; .rar is more common under Windows.
<wizardslovak_> ok done
<wizardslovak_> now is there command to burn .iso into cd
<jmarsden> Yes... more than one... I use wodim  so you can sudo apt-get install wodim
<wizardslovak_> ok got it
<wizardslovak_> what is command for it?
<jmarsden> man wodim will get you the man page :)  Try just     wodim somefile.iso   # to burn an ISO to a CD-R
<wizardslovak_> "wodim file.iso " ??
<jmarsden> sure.  if your .iso file is called file.iso, type in the command     wodim file.iso
<jmarsden> If it is called junk.iso, type in    wodim junk.iso   :)
<wizardslovak_> actually i am trying to burn ubuntu server
<wizardslovak_> this is faaaaaaaaaaaaast
<jmarsden> OK, so type in   wodim  ubuntu-8.10-server-i386.iso
<wizardslovak_> much faster then using gui software for it
<jmarsden> Yes.  BTW, you might want to read http://tldp.org/LDP/intro-linux/html/intro-linux.html and  http://rute.2038bug.com  for some general Linux and command line learning.
<wizardslovak_> thx man
<jmarsden> No problem.
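One habit worth adding to burning an install ISO: check the image's md5 against the MD5SUMS file published on the Ubuntu mirror before wasting a disc. A throwaway file stands in for the ISO in this sketch (the filenames are invented):

```shell
# Demo: hash a stand-in file; the output format is "HASH  FILENAME".
printf 'pretend iso contents\n' > /tmp/demo.iso
md5sum /tmp/demo.iso

# Real usage:  md5sum ubuntu-8.10-server-i386.iso
# then compare the hash against the mirror's MD5SUMS entry.
```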
<wizardslovak_> i am actually writing new command to my txt file so like tht i wont forget commands i already should know ;)
<wizardslovak_> can i add you as freind or something here?
<jmarsden> There's no real concept of "friend" on IRC in terms of telling the IRC client software that, that I know of.  It all depends on your client.  Xchat has a "Friends list" you can add me to if you want :)
<wizardslovak_> xchat? i dont have it i think
<jmarsden> wizardslovak_: Konversation may have a similar thing, I'm just not familiar with it.
<wizardslovak_> well ill be coming here anyways
<jmarsden> OK.  Have fun installing Ubuntu Server :)
<wizardslovak_> heheh
<wizardslovak_> so what os u using?
<jmarsden> Right now Ubuntu 8.10 64bit, with Jaunty in a VM and Debian in another VM and Hardy Server in yet another VM :)
<wizardslovak_> lol
<wizardslovak_> i have dual core cpu and actually i never tried 64bit
<wizardslovak_> is it better then 32?
<jmarsden> I love it... I got a cheap desktop with an E5200 dualcore CPU and 8Gb RAM, and ... I can run lots of VMs :)
<wizardslovak_> u use  vmware or vbox?
<jmarsden> virtualbox-ose
<wizardslovak_> be right back
<dug_> yeah i got a quad-core server with 8gigs, trying to get some virtual machines set up using vmbuilder (command line instead of gui)
<wizardslovak_> i got dual core with 4gbs and works pefrect
<jmarsden> dug_: If you have any issues, ask in #ubuntu-virt and the folks there can probably help you out.  Nice to have a CPU that can do KVM... I just couldn't justify the extra $100 for an E8400 over the E5200 when I bought this hardware...
<wizardslovak_> i paid $300 for this laptop lol
<jmarsden> Sounds like a good deal.  Did the Server CD burn OK?  Are you installing on the old Celeron PC now?? :)
<wizardslovak_> not yet
<wizardslovak_> server cd is ok , i did cd check in vbox and it passed
<wizardslovak_> ;)
<mat1211> How should I mount a drive with an ext3 fs?
<mat1211> also, why didn't the driver change when I typed fdisk -l? I changed the fs :P
<andresmujica1> hmmm just found that ubuntu doesn't support acl over nfs4... :(
<mat1211> If I change the fs of the entire harddrive instead of one partition will anything bad happen? :P and also, how do I repartition it with just a single partition.
<twb> mat1211: hard drives normally do not have filesystems directly, only on their partitions.
<twb> e.g. /dev/sda is not normally a filesystem
<mat1211> how do I partition a harddrive then?
<twb> Ubuntu will happily put a filesystem on sda and mount it, but other OSes might balk
<twb> As I said yesterday, you can partition a drive using gparted, cfdisk, parted, sfdisk, or some other partitioning tool
<mat1211> cause I reformated a harddrive with ext3, but when I type fdisk -l it still says fat, I was thinking of just making one partition that is the full drive.
<mat1211> in terminal as well?
<twb> fdisk reports what is in the partition table, which might be lies
<mat1211> cause this thing has no screen or anything laugh
<twb> Try using cfdisk to change the partition's type to 83 (Linux)
<mat1211> so I won't be able to do any gui things
<twb> Don't use fdisk unless you know EXACTLY what you're doing, it is for experts.  Prefer cfdisk.
<twb> fdisk will happily make partitions that other OSes won't like
<mat1211> what is the syntax for cfdisk, it said couldn't open drive when I typed it.
<twb> sudo cfdisk /dev/sdb
<twb> Manipulating disks at a low level requires root access (or, possibly, to be in the "disk" group).
<mat1211> hmmm
<mat1211> how would I go about deleting a partition? it still says sdb1 is there, but I'm sure I deleted it with mkfs.
<twb> mkfs cannot delete or create partitions.
<twb> mkfs only changes what filesystem is on a partition.
<twb> To delete a partition you would open the disk's partition table with cfdisk /dev/sdb, then select the appropriate partition and choose "delete"
<mat1211> strange, when I type stuff it still says sdb1 is still there, but at the same time when I enter commands having to do with it, it doesn't exist.
<twb> You then need to write the partition table by choosing "write"
<mat1211> hold on, I'll try something.
<twb> A partition "exists" according to the kernel if it is listed in /proc/partitions.  There may or may not be a device node for it in /dev, which is a separate issue.  Both SHOULD be updated automatically on a modern system such as 8.04.
<mat1211> it didn't work, lol
<mat1211> is there a way to force it to update these things?
<mat1211> because in things such as fdisk -l and cfdisk, I see that a partition is listed, but when I try and interact with them, nothing.
<mat1211> ah, there we go
<andresmujica> partprobe
<andresmujica> reloads partition table in kernel
<andresmujica> with parted is possible too, and with echo something > /proc/..... don't recall the rest..
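twb's /proc/partitions remark and andresmujica's partprobe tip combine into a quick check; partprobe itself needs root and a real disk, so only the read side is executed here:

```shell
# What the kernel currently believes about disks and partitions:
cat /proc/partitions

# After rewriting the table (cfdisk's "write"), resync the kernel with:
#   sudo partprobe /dev/sdb
```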
<mat1211> ah, also, how do I change the extention of a file? I have an incomplete rar archive, and the extention is .rar.filepart.  How do I remove the .filepart?
<PhotoJim> mat1211: you just rename the file... but since it's incomplete, that's not likely to be of much benefit.
<mat1211> it still works.
<mat1211> do I just type sudo rename to do that? lol
<PhotoJim> if you're using a shell, you just use the mv command that usually moves files.
<PhotoJim> but if you move it to the same place with a different name... it works as rename.
<PhotoJim> mv oldname.txt newname.txt
<PhotoJim> you may or may not need to use sudo depending on the read/write permissions
<mat1211> ah, that's different, but thank you.
<PhotoJim> it's slightly counterintuitive but one gets used to these things :)
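PhotoJim's mv-as-rename, applied to mat1211's .filepart case with throwaway files (the filenames are made up for the demo):

```shell
cd "$(mktemp -d)"                      # scratch directory
touch archive.rar.filepart             # stand-in for the partial download
mv archive.rar.filepart archive.rar    # a rename is just mv to a new name
ls archive.rar
```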
<Iceman_B|SSH> PhotoJim: I still wonder why there isnt a seperate "ren" command ._.
<ScottK> What would that do that mv doesn't?
<Iceman_B|SSH> be not counter-intuitive?
<PhotoJim> Iceman_B|SSH: you could just make a softlink called "ren" and link it to "mv" :)
<mat1211> softlink? right... strange.
<mat1211> lol
<Iceman_B|SSH> lol, I hadnt thought about that
<mat1211> how would you set that up?
<Iceman_B|SSH> so would that be "touch red" and then make a symlink from ren to mv ?
<Iceman_B|SSH> *ren
<chaverma> i'd like to set /etc/ssh/ssh_config to the default.  where can i find the default file?
<PhotoJim> Iceman_B|SSH: no, you don't "touch" it first.  that creates an empty traditional file.  symlinks are a different type of file that work basically like a pointer to another file.
<Iceman_B|SSH> I see. so there is no need to create an empty file, just, directly create the link itself ?
<ScottK> Yes
<ScottK> man ln for details.
<PhotoJim> Correct.
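PhotoJim's softlink idea, spelled out; the link lives in a scratch directory here, whereas you would normally put it somewhere on $PATH such as ~/bin:

```shell
cd "$(mktemp -d)"                # scratch dir instead of ~/bin
ln -s "$(command -v mv)" ren     # a symlink named ren pointing at mv
touch old.txt
./ren old.txt new.txt            # behaves exactly like mv
ls new.txt
```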
<mat1211> what is the dir for the mv command, and how would it be a file?
<PhotoJim> "whereis mv" will tell you.
<PhotoJim> I'm not sure how to answer the second half of your question.
<ScottK> which mv works too
<Iceman_B|SSH> wow. I didnt know about whereis, I always try find and locate
<Iceman_B|SSH> but I cant get them to work the way I want to
<chaverma> yeah, find and locate are kind of Big Hammers for a small thing like finding binaries
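Alongside whereis and which, the POSIX shell builtin command -v answers the same "where is this binary" question with no locate-style database at all:

```shell
# Where does "mv" live?  Three common answers:
command -v mv    # what the shell will actually execute (builtin, always there)
# whereis -b mv  # searches the standard binary directories
# which mv       # searches $PATH
```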
<ScottK> zul: Did we make libmysqlclient-dev go NBS on purpose.  It has a lot of reverse build-deps: http://people.ubuntu.com/~ubuntu-archive/NBS/libmysqlclient-dev
<twb> There *is* a rename(1) command, intended to do pattern-based bulk renaming.
<twb> e.g. instead of for i in ?.jpg; do mv $i 0$i; done
<PhotoJim> so there is.
<twb> I haven't actually used it myself because I'm pretty proficient at sh.
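twb's zero-padding loop, run against scratch files; note that the target name must not re-append .jpg, or you'd get names like 0a.jpg.jpg:

```shell
cd "$(mktemp -d)"
touch 1.jpg 2.jpg 3.jpg
for i in ?.jpg; do mv "$i" "0$i"; done   # 1.jpg -> 01.jpg, and so on
ls
```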
<dug_> well vmware is about 800 times easier to install than kvm/vmbuilder :)
<dug_> https://help.ubuntu.com/community/VMware/Server/AMD64
<twb> I don't know about vmbuilder, but kvm is MUCH easier to install than vmware-server.
<twb> For one thing, to install vmware-server properly you need to install vmware-package from Debian, download the source tarball, turn it into a deb, install gdebi, then install the vmware-server .deb.
<twb> And even then, you need to manually monitor vmware's website to detect new security and feature releases, and go through the same process for those.
<twb> ...whereas kvm is a normal apt-gettable package.
<twb> There's also the problem that vmware-server packages are illegal to redistribute and need to be "activated" with a gratis serial code, and aren't subject to the same Debian Policy requirements as official packages, nor can bugs be reported using reportbug(1).
<twb> Oh, and vmware-package doesn't support vmware-server 2.x yet.
<mat1211> what is kvm?
<jmarsden> kvm is a virtual machine environment, so you can run multiple virtual computers on one physical machine.  See https://help.ubuntu.com/community/KVM
<jmarsden> BTW, a google search for    ubuntu kvm      should have found you that kind of info?
<twb> Once the province of Big Iron, now you can get it on a $300 laptop...
<twb> jmarsden: in #emacs our bot is actually trained to respond to "what is <keyword>?" the same way as "!<keyword>" :-)
<jmarsden> twb: Nice... can you program the bot here to do the same?
<twb> Sorry, I know nothing about your bots.
<dug_> !kvm
<ubottu> kvm is the preferred virtualization approach in Ubuntu. For more information see https://help.ubuntu.com/community/KVM
<twb> Emacs' fsbot is written in elisp, so you can't just copy its code.
<jmarsden> dug_: Yes... is there a way to get a list of all the ! commands the bot recognizes? !dictionary or something??
<dug_> not sure
<twb> For cases where you really only want a virtualized filesystem, process tree and network stack, how does kvm weigh up against openvz or xen?
<twb> jmarsden: usually there's a webpage
<twb> !bot
<ubottu> Hi! I'm #ubuntu-server's favorite infobot, you can search my brain yourself at http://ubottu.com/factoids.cgi - Usage info: http://wiki.ubuntu.com/UbuntuBots
<Doble> hi folks
<Doble> I'm new to ubuntu and I'm trying to set up a home DNS server using BIND9 and its giving me grief, can anyone help ?
<Doble> it's probably something really obvious but since I dont seem to get any error its hard to troubleshoot
<Doble> i think i've just screwed up the config files so much that I can't get them working again, heres the output from /var/log
<Doble> Apr  5 16:52:13 ubuntu named[4480]: starting BIND 9.5.0-P2 -u bind
<Doble> Apr  5 16:52:13 ubuntu named[4480]: found 2 CPUs, using 2 worker threads
<Doble> Apr  5 16:52:13 ubuntu named[4480]: loading configuration from '/etc/bind/named.conf'
<Doble> Apr  5 16:52:13 ubuntu named[4480]: /etc/bind/named.conf:12: unknown option 'zone'
<Doble> Apr  5 16:52:13 ubuntu named[4480]: /etc/bind/named.conf:20: unknown option 'zone'
<Doble> Apr  5 16:52:13 ubuntu named[4480]: /etc/bind/named.conf:25: unknown option 'zone'
<Doble> Apr  5 16:52:13 ubuntu named[4480]: /etc/bind/named.conf:30: unknown option 'zone'
<Doble> Apr  5 16:52:13 ubuntu named[4480]: /etc/bind/named.conf:35: unknown option 'zone'
<Doble> Apr  5 16:52:13 ubuntu named[4480]: /etc/bind/named.conf.local:9: unknown option 'zone'
<Doble> Apr  5 16:52:13 ubuntu named[4480]: /etc/bind/named.conf:41: '}' expected near end of file
<Kamping_Kaiser> gah. dont flood here
<Doble> Apr  5 16:52:13 ubuntu named[4480]: loading configuration: unexpected token
<Doble> Apr  5 16:52:13 ubuntu named[4480]: exiting (due to fatal error)
<Kamping_Kaiser> !paste
<ubottu> pastebin is a service to post multiple-lined texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu.com (make sure you give us the URL for your paste - see also the channel topic)
<Doble> sorry, didn't know it would split it into multiple messages
<Kamping_Kaiser> Doble, ^^
<Doble> here's the pastebin - http://paste.ubuntu.com/144687/
<twb> ubottu doesn't auto-kick flooders?
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<Doble> sorry for the spam before - I'll start again, im trying to configure bind9 and i think i've messed up the config files but I don't know how to fix them, and now bind fails to start - my config files: http://paste.ubuntu.com/144690/ and error log: http://paste.ubuntu.com/144687/
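For reference, Doble's "unknown option 'zone'" errors are the classic symptom of zone statements having ended up inside the options block (or of an unclosed brace above them), which the final "'}' expected near end of file" also suggests. A sketch of the expected top-level layout, with a hypothetical zone name and file path:

```
options {
        directory "/var/cache/bind";
};      // options must be closed (brace AND semicolon) before any zone

// zone statements live at top level, not inside options { }
zone "example.lan" {
        type master;
        file "/etc/bind/db.example.lan";
};
```

`named-checkconf /etc/bind/named.conf` reports such structural errors without having to restart the daemon.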
<mrout> Hello
<mrout> How do I install an IRC server on my Ubuntu Server (It's currently running a LAMP server)
<mrout> ??
<mrout> Is anyone there?
<mrout> ow do I install an IRC server on my Ubuntu Server (It's currently running a LAMP server)H
<mrout> *How
<mrout> *server)
<p_quarles> mrout: if somebody knows, they'll answer; don't keep repeating the question
<mrout> Sorry
<mrout> It's just that a new person came along.
<friartuck> got google?
<mrout> Yes...
<mrout> lol
<mrout> oh, I see what you mean. (I'm a bit slow)
<friartuck> http://www.google.com/search?hl=en&q=ubuntu+irc+server&btnG=Google+Search&aq=f&oq=
<mrout> Thank you.
<obst> or try wikipedia http://en.wikipedia.org/wiki/IRCd
<mrout> ty
<JessicaParker> hi can anyone assist with getting postfix configured with gmail ?
<aurax> hello, is there a content/web filtering application that's not working as a proxy?
<LMJ> what's the problem with a proxy aurax  ?
<aurax> i have 8 adsl lines and 5 vlans
<aurax> managing proxy with multiple networks and routings seems really difficult
<aurax> i mean ,proxy will kill my routing table
<JessicaParker> hi can anyone assist with openssl config, i am getting an error and i think it is down to the location of my /demoCA/private/cakey.pem
<Hamzifer> any suggestions on how to install a quicker (and i guess, less random) pseudo-random number generator? /dev/urandom's too slow :/
<giovani> Hamzifer: /dev/urandom is the fast one
<ewook> Hamzifer: why don't you just cache something from /dev/random?
<giovani> well, before we go /dev/random ... we should know the application
<Hamzifer> attempting to fill a disk with random data
<giovani> why?
<Hamzifer> ..because i am
<giovani> you've probably been misled about the efficacy of random data for securely wiping drives
<giovani> it's entirely unnecessary
<Hamzifer> nm, google to the rescue, dd if=/dev/zero | gpg --symmetric --passphrase `dd if=/dev/urandom bs=4 count=8 2>&1 | sha256sum | head -c 64` - > target
<giovani> oh god ...
<Hamzifer> produces random data at around 23 MB/s, compared to /dev/urandom's 1.8 MB/s
<Hamzifer> in case anyone else is interested
<Hamzifer> although if anyone has any idea how to configure a "less random" prng, e.g. /dev/prandom, that would be useful to know, google's not being too helpful there :/
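A commonly cited alternative to the gpg pipeline above is an AES-CTR keystream over /dev/zero. This sketch assumes a reasonably recent OpenSSL (the ctr mode in `enc` is newer than what 2009-era Ubuntu shipped) and writes just 1 MiB for demonstration:

```shell
# Derive a throwaway hex key from the kernel RNG, then let AES-256-CTR
# expand it into a fast pseudo-random stream.
key=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
openssl enc -aes-256-ctr -nosalt -pass "pass:$key" </dev/zero 2>/dev/null \
  | head -c 1048576 > /tmp/random.bin
```

For a whole disk, replace the `head -c`/output file with something like `dd of=/dev/sdX bs=1M` (device name hypothetical).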
<giovani> I suggest you save yourself a lot of time in the long run and read up on why Guttman's hypothesis -- which your desire to use random data is no doubt based on -- is false
<Hamzifer> Thanks, I'll keep that in mind when your concerns are relevant to my situation. Thanks for all the help
<giovani> haha
<giovani> they're concerned with any situation involving wiping drives
<giovani> where you don't enjoy wearing tinfoil hats
<Hamzifer> Indeed, and when I'm wiping a drive, I'll keep that in mind.
<giovani> <Hamzifer> attempting to fill a disk with random data
<giovani> what are you doing then?
<Hamzifer> A quick google for "Guttman's hypothesis", however, leads to mostly books.google.com results about psychology. Got any useful links?
<Hamzifer> Not wiping a drive :)
<acicula> filling a disk for encryption i suppose then
<giovani> for encryption? maybe to hide an encrypted partition
<giovani> but, there's no need other than that that I can think of
<giovani> Hamzifer: it was my bad spelling -- Gutmann is correct
<giovani> paper: http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html
<giovani> his "research" is the basis for the modern recommendation of using multiple passes of random, and non-random data
<Hamzifer> Good to know I've not been doing it wrong then!
<giovani> what have you been doing?
<acicula> only thing i can think of is that if you can guess the plaintext pattern on the disk to be something standard like null you learn a bit more
<Hamzifer> I've been trying to stay relevant to my original question, which was trying to get a faster pseudo-random number/noise generator going! :)
<giovani> well we want to know why! :)
<Hamzifer> I'm afraid if I told you, I would have to kill you! ;)
<giovani> prepare the tinfoil hats, folks!
 * acicula goes off to dust the thumbscrews
<Iceman_B^Ltop> my machine rebooted unexpectedly while I was asleep, where can I look to find out what happened around and up till that point?
<PhotoJim> Iceman_B^Ltop: poke around /var/log.  dmesg and daemon.log might have useful info, or syslog.
<JessicaParker> can anyone help with the openssl error in ubuntu /demoCA/serial: No such file or directory
<Iceman_B^Ltop> PhotoJim: im looking at syslog. when I count back from the current uptime, I find this line "Apr  5 14:53:49 Rin-chan syslogd 1.5.0#2ubuntu6: restart."
<PhotoJim> Iceman_B^Ltop: hmm.
<giovani> JessicaParker: what's the context of the error? what were you doing when you received it?
<Iceman_B^Ltop> PhotoJim: and I'm getting a lot of these: http://pastebin.ubuntu.com/144935/
<JessicaParker> giovani: http://www.marksanborn.net/linux/send-mail-postfix-through-gmails-smtp-on-a-ubuntu-lts-server/ ultimate goal gmail smtp
<giovani> wtf
<giovani> why do you need a CA to talk to gmail
<JessicaParker> giovani: this is the command line that i get the issue from: openssl ca -out FOO-cert.pem -infiles FOO-req.pem
<giovani> that's absurd
<PhotoJim> Iceman_B^Ltop: I'm sorry, I have no idea what that is about.  the libpolkit stuff looks more worrisome than the sigfile stuff.  the rest of it is innocuous.
<JessicaParker> giovani: i think you cant use port 25 and need to use 587 and requires a certificate
<giovani> yes ... but you don't need your own CA
<giovani> postfix ships with unsigned certs
<Iceman_B^Ltop> PhotoJim: perhaps http://www.bergek.com/2008/11/24/ubuntu-810-libpolkit-error/
<giovani> this is why these howto guides are worthless
<JessicaParker> giovani: i'm not all that familiar with this.......but looking around it looks like you need to create a self-signed certificate to use gmail's smtp service...
<PhotoJim> Iceman_B^Ltop: good catch.  sounds like that's the thing to try.
<JessicaParker> giovani: i dont think there are any how to guides for postfix gmail without the self signed certificate not sure though
<Iceman_B^Ltop> PhotoJim: apparently there is a bug filed for it, I'm looking at the page right now but I really have no clue beyond that. It doesn't seem like PolicyKit will break my server
<PhotoJim> Iceman_B^Ltop: one wouldn't think so.
<JessicaParker> any one any ideas on this ? i dont need a self signed certificate ?
<Iceman_B^Ltop> oh wow, setting the TCP_NODELAY option in samba just doubled my speed
<Iceman_B^Ltop> I'm getting around 5-6 MB/s over LAN now
<Iceman_B^Ltop> but that was from server to this XP laptop. the other way around is still slow, around 2.8 MB/s
<chris_d_adams> hi guys, which logs should I be looking in if I want to see what my system is doing when trying to recognise a usb device that's been attached?
<giovani> chris_d_adams: /var/log/dmesg
<chris_d_adams> giovani: hmm... I'm getting no sign at all when i plug something in there. is there another way to probe in more detail?
<chris_d_adams> ah
<chris_d_adams> syslog shows something
<chris_d_adams> never mind
<chris_d_adams> I'm good here
<uvirtbot> New bug: #355709 in mysql-dfsg-5.0 (main) "package mysql-server-5.0 5.1.30really5.0.75-0ubuntu10 failed to install/upgrade: Unterprozess post-installation script gab den Fehlerwert 1 zurÃ¼ck" [Undecided,New] https://launchpad.net/bugs/355709
<giovani> chris_d_adams: dmesg is the kernel log ... when you plug in a device, it should show up there
<mrwes> I'm trying to use the following in a incrontab /media/external IN_CREATE IN_MODIFY clamscan -av $@/$#
<mrwes> and it doesn't appear to run in htop
<mrwes> I want it to clamdscan the file that was either created or modified
<giovani> well, it may be a non-issue, but I believe IN_CREATE and IN_MODIFY are supposed to be comma separated
<mrwes> yah I made that change...
<giovani> alright, well we can only go off of what you paste here
<mrwes> nod
<mrwes> it currently reads /media/external IN_CREATE,IN_MODIFY clamscan -av $@/$#
<mrwes> is there anyway to monitor whether it ran or not? I looked in /var/log/clamav but I didn't see anything
<giovani> you try using a different command than clamscan ... to rule it out as a cause?
<giovani> I don't think it logs to /var/log unless you're running the daemon
<mrwes> well I'm running the clamd
<giovani> ok?
<giovani> and you reloaded the table, right?
<giovani> try something like ... /media/external IN_CREATE,IN_MODIFY touch /testfile
<giovani> and then use /testfile's access time to establish the last time it ran
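Concretely, giovani's sanity check is a one-line incrontab entry; note that incrond runs the command directly, without a shell, so redirections and pipes won't work in the table (the watched path matches mrwes's setup, the target file is invented):

```
/media/external IN_CREATE,IN_MODIFY /usr/bin/touch /tmp/incron-fired
```

After editing with `incrontab -e` and creating a file under /media/external, `stat /tmp/incron-fired` shows when the job last fired.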
<Iceman_B^Ltop> what's a good FTP server ?
<Nafallo>  vsftpd
<Iceman_B|SSH> so that "sudo aptitude install vsftpd"
<Iceman_B|SSH> I thought an FTPd was installed during server installation....
<giovani> Iceman_B|SSH: no, and it shouldn't be -- very few people need ftp daemons on each server ... it increases default security holes, and bloats the install
<Iceman_B^Ltop> good point
<PhotoJim> Iceman_B|SSH: scp is a lot more secure, and safer.  winscp is a good client for Windows users.
<Nafallo> sftp :-)
<PhotoJim> Iceman_B|SSH: and if you have sshd enabled, you don't have to do anything to turn it on.
<Iceman_B^Ltop> ah, right, I had forgotten about that
<giovani> heh
<giovani> yeah, ftp = bad
<giovani> only use it if you have general users who aren't capable of using scp
<Iceman_B^Ltop> well the thing is, I have my server on a 10/1 ADSL line, and it downloads a certain torrent at almost the max speed. My desktop at another location is on a 120/10 cable line
<Iceman_B^Ltop> the same torrent there goes at a snail's pace
<Iceman_B^Ltop> so I want to download the files here on my server and then transfer them back, but its 29 GB and 2100 files
<PhotoJim> traffic shaping.
<Iceman_B^Ltop> yeah, probably. Ive been following the forums but of course, the ISP denies all claims
<PhotoJim> there are ways of testing it.
<Iceman_B^Ltop> im already running on a high port with encryption on, on the cable pc
<PhotoJim> scp is the easiest (IMHO), best way to move the files, short of having a VPN connection so that you can mount nfs shares from your server onto your remote desktop.
<PhotoJim> the latter is harder to set up, but easier to use.
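For the 2100-file move above, a single tar stream over one ssh connection avoids per-file overhead. Demonstrated locally here (directories and filenames invented), with the networked form in the comment:

```shell
# Classic many-small-files trick: one tar stream, one connection.
# Over the network the right-hand tar runs via ssh, e.g.
#   tar -C /srv/torrents -cf - . | ssh user@desktop 'tar -C /data -xf -'
mkdir -p /tmp/tardemo/src /tmp/tardemo/dst
echo payload > /tmp/tardemo/src/part001
tar -C /tmp/tardemo/src -cf - . | tar -C /tmp/tardemo/dst -xf -
```

rsync -a over ssh is the other usual choice and, unlike scp, resumes cleanly if the link drops.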
<giovani> scp will add overhead, though
<giovani> especially if it's a really light cpu
<PhotoJim> that's true.
<giovani> i.e. celerons can choke under huge scp moves
<PhotoJim> but by "light" we're talking pre-PIII probably right?
<giovani> celerons are like Pentium 2s :)
<giovani> at 10x the cost
<PhotoJim> my PII-333 (about-to-be-retired server) was ok on scp.  Linux is pretty efficient.
<PhotoJim> I'm guessing it's the cache issue if the Celeron sucks so much.
<giovani> possibly ... there are many factors
<giovani> it depends on what kinds of speeds you're expecting
<giovani> if you have a celeron sitting on a 100Mbps line ... and you're trying to max that out over scp ... good luck
<PhotoJim> my broadband is a tiny fraction of that, but I could run some experiments over the lan to see.
<giovani> yeah, I'm rarely scping up from my home
<acicula> my C2D maxes out at 2.2Mb/s iirc when i use the 100Mbit connection. admittedly it's not a very fast proc
<giovani> 2.2Mbps or 2.2MBps?
<giovani> they're wildly different
<acicula> MB
<acicula> the byte version
<giovani> yeah, careful of writing that
<acicula> hehe
<giovani> Mb is Megabit
<PhotoJim> the machine I'm on right now is an Atom N270 so I'm not sure it would be that great of a test platform :)
<acicula> i always keep confusing them
<giovani> little b ... means smaller measure
<giovani> bigger B means bigger measure
<giovani> easy enough
<acicula> i shall never forget again
<giovani> :)
<giovani> so the 2.2MBps test was over scp, or something else?
 * acicula scratches memory
<acicula> scp or sftp, dont think they are treated very differently
<acicula> the one that gives you the ascii completion bar and a speed reading anyway
<Nafallo> MB/s :-)
<acicula> it's been awhile since i last copied from my desktop
<giovani> acicula: they're protocols
<giovani> not clients
<acicula> it decided life just wasnt worth living, and that i did not need the machine
<acicula> or the disk :/
<acicula> giovani: also both clients, though i wouldnt know if they use different protocol messages under the hood
<giovani> yes, to declare that the C2D was the bottleneck ... you'd need to actually look at the server load
<acicula> giovani: well i didnt do extensive benchmarking, but the machine has a disk quite capable of sustaining well over 2MB/s, it's not short on memory, and it's a switched 100Mbit local lan
<giovani> yeah, but those are guesses
<giovani> maybe that section of your disk was heavily fragmented
<giovani> who knows
<giovani> so, testing the actual system during the transfer is the only way to isolate the bottleneck, unlikely to be the cpu
<acicula> actually
<acicula> my desktop broke before i got the lappy, so must've been my old one :/, so the metrics i just gave are useless
<acicula> doh
<PhotoJim> I'm going to try an scp from my dual 1 GHz PIII server to my Atom N270 netbook for giggles
<PhotoJim> 50.9 MB/s to create, not terrible. :)
<giovani> to create?
<PhotoJim> Yep.
<giovani> what does "to create" mean?
<Nafallo> -c blowfish
<PhotoJim> I use blowfish by default.
<Nafallo> :-/
<Iceman_B^Ltop> PhotoJim, what machine is that N270 running in ?
<acicula> why the preference for blowfish?
<PhotoJim> Acer Aspire One.
<PhotoJim> It seems to be one of the most efficient encryption algorithms ssh supports.
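For the curious, the cipher choice PhotoJim mentions can be pinned per-host in ~/.ssh/config rather than passed as `scp -c blowfish` every time (the host alias is hypothetical; note that blowfish-cbc has been dropped from recent OpenSSH releases, so this is specific to OpenSSH of this era):

```
Host fileserver
    Ciphers blowfish-cbc
    Compression no
```

Turning compression off explicitly matters on a fast LAN, as the experiments further down show.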
<Iceman_B^Ltop> likeable little machine?
<Iceman_B^Ltop> im looking for a netbook to buy, not too long from now
<giovani> hp mini 2140 or dell mini 10 are the only netbooks I'd consider at this point
<Iceman_B^Ltop> have my eye on a NC10
<PhotoJim> Just over 11 MB/s to read it over the LAN
<giovani> heh, you're using these odd terms
<giovani> "read" what?
<PhotoJim> well... I wrote the file on the server
<PhotoJim> now I'm reading it and copying it to the netbook
<PhotoJim> the netbook can write it far faster than that rate, so the read speed is the issue here
<giovani> well you didn't just get 50MBps over a 100Mbps LAN
<giovani> that was a false metric if you did
<PhotoJim> no, that was local
<PhotoJim> I didn't say that was the scp speed
<PhotoJim> that was the file creation speed locally
<giovani> also not sure how you achieved 11MBps over the lan
<giovani> unless this is gigabit
<PhotoJim> 11.1 MB/s, not Mb
<giovani> ... right
<PhotoJim> that's what it's telling me, and no, it's not gigabit
<giovani> 11.1MB/s is not possible over scp on a 100Mbps lan
<PhotoJim> perhaps it's doing compression.
<giovani> 10MB/s is the reasonable, theoretical limit of a tcp connection over a 100Mbps link
<giovani> maybe 11 at maximum, with no scp overhead
<giovani> how big was the file?
<PhotoJim> 1 gig
<PhotoJim> 1024^3
<giovani> k
<PhotoJim> bytes
<giovani> did you use -C?
<giovani> that enables compression
<PhotoJim> it doesn't seem too crazy, I have a 5 Mbit broadband download rate and I get just under 600 kB/s optimally
<PhotoJim> I didn't specifically enable compression
<PhotoJim> let me try it and see what happens
<PhotoJim> Slower.
<PhotoJim> But accelerating.
<giovani> cpu bottleneck, probably
<PhotoJim> Started at about 6 MB/s
<PhotoJim> Probably
<PhotoJim> now it's at 7.3
<Iceman_B^Ltop> is this all Linux, or is there samba in between?
<giovani> the atom is a lightweight
<Iceman_B^Ltop> just curious :)
<giovani> samba ?
<PhotoJim> Linux scp to sshd
<PhotoJim> no samba
<PhotoJim> no NFS
<Iceman_B^Ltop> ok
<giovani> what world are we talking about?
<PhotoJim> I do have this partition mounted by NFS so I could copy it that way too
<PhotoJim> Ubuntu on both machines, Jaunty on my netbook (to which I'm copying), Intrepid on my server (from which I'm copying)
<PhotoJim> yes, with compression it's significantly slower
<PhotoJim> which is ironic because the file is just zeroes
<PhotoJim> obviously at 100BaseTX speeds, the compression can't keep up with the bandwidth
<PhotoJim> at least with that machine
<PhotoJim> one CPU is pinned at 100% on the remote server, the other CPU is essentially idle
<PhotoJim> so the CPU is the bottleneck on that compression.
<PhotoJim> let me try via NFS
<PhotoJim> what's the best way to time it?  scp provides rate information.
<PhotoJim> but NFS won't of course.
<PhotoJim> time I suppose
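A sketch of the timing approach PhotoJim settles on, demonstrated with a local 1 MiB copy (paths invented); for NFS, point the cp at the mount instead:

```shell
# scp prints a rate for free; for anything else, wall-clock seconds
# around the copy plus the byte count give the same figure.
dd if=/dev/zero of=/tmp/timedemo.bin bs=1024 count=1024 2>/dev/null
start=$(date +%s)
cp /tmp/timedemo.bin /tmp/timedemo.copy
elapsed=$(( $(date +%s) - start + 1 ))   # +1 avoids divide-by-zero on sub-second copies
echo "~$(( 1048576 / elapsed )) bytes/sec"
```

The +1 makes the rate a conservative estimate; for serious numbers, copy a file large enough that the transfer takes many seconds.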
<PhotoJim> both server CPUs are at about 15% load
<PhotoJim> NFS seems to do load balancing
<PhotoJim> brb, I'll report after a nature break :)
<PhotoJim> 1:36.01 to transfer 1 gigabyte
<PhotoJim> 1 gibibyte actually
<PhotoJim> 11,183,645 bytes per second
<giovani> we get it
<PhotoJim> so scp and nfs are very similar in speed
<PhotoJim> it'd be interesting to try it on a gigabit lan, to see what sort of hardware you need to saturate it
<giovani> well a regular desktop hard drive maxes out before gigabit lan does
<giovani> so unless you have raid, or "commercial" drives, or ssds
<giovani> that's your first bottleneck
<PhotoJim> a PII-333 on the same lan here is receiving the file at 3.6 MB/s, so that's clearly too little CPU to saturate the link
<PhotoJim> that's interesting.  so gigabit is going to be a fairly long-lasting technology if we can't saturate it now without enterprise-level hardware.
<giovani> this is news?
<giovani> gigabit is far from new ... we'll see 10 gigabit on desktops in 5 years
<giovani> ssds will replace hds on consumer devices within 5 years
<PhotoJim> I know gigabit isn't new.  just surprised it's still faster than current disks.
<giovani> at least for the OS
<giovani> you thought disks were more than 100MBps?
<giovani> average consumer disk is capable of 40-60MBps read speeds
<PhotoJim> I figured higher-end ones must be getting there.
<giovani> it's an rpm issue
<PhotoJim> but apparently only really high-end ones.
<giovani> there's no higher-end consumer drive
<Nafallo> tmpfs on /home/nafallo/memory type tmpfs (rw,noexec,nosuid,nodev)
<giovani> all consumer drives are 7200rpm or slower
<PhotoJim> my OS drives on my server are 10000 rpm so it'll be interesting to do some speed tests on that
<PhotoJim> unfortunately they are way too small for my data
<giovani> no need really, just look at the specs
<PhotoJim> more fun to experiment :)
<uvirtbot> New bug: #355662 in samba (main) "libsmbclient crashes with SIGABRT  (dup-of: 198351)" [Undecided,New] https://launchpad.net/bugs/355662
<giovani> not really
<giovani> but ok
<uvirtbot> New bug: #355800 in nagios3 (main) "*** WARNING: ucf was run from a maintainer script that uses debconf, but the script did not pass --debconf-ok to ucf." [Undecided,New] https://launchpad.net/bugs/355800
<Iceman_B^Ltop> how exactly does "unix password sync = yes" work in Samba? will it change my Linux password when I change my samba password? how can that work when I'm the only root user on the system?
 * Iceman_B^Ltop pokes PhotoJim
<Iceman_B^Ltop> you said earlier that you have a way to determine if an ISP is shaping p2p traffic?
<AnRkey> hi all
<AnRkey> I am trying to compile netxms for ubuntu 8.04.2 server
<AnRkey> I get this checking for gd.h... no
<PhotoJim> Iceman_B^Ltop: Yes?
<AnRkey> then it fails, anyone know what package that file belongs to?
<Iceman_B^Ltop> tell me more about it please
<PhotoJim> Iceman_B^Ltop: you aren't the root user.  the only root user is the user named root.  and if you use that option, usernames on the Windows client machines and Linux box should match exactly and things will work.
<PhotoJim> Iceman_B^Ltop: so if you change the Linux account password, you should change the Windows account password too, and there will be no prompt to enter a password when connecting to the Linux server from the Windows machine.
<PhotoJim> Iceman_B^Ltop: So, ideally, have one account on your Linux box for every discrete user that will connect to it as a server on your local area network.
<Iceman_B^Ltop> yeah, I had that figured out already. it's handy because I guess Windows tries to log onto the share with whatever user is logged into windows at that point
<PhotoJim> that's exactly what Windows does when it tries to connect to Windows servers.
<PhotoJim> you can force a separate login, but there is really no benefit to that.
<Iceman_B^Ltop> so let me get this straight, the "unix password sync =" option does not relate to keeping samba and linux passwords on the same box the same ?
<PhotoJim> you could create user accounts for your local users, and disable Linux login if you don't want them to actually be able to use the server via shell.
<Iceman_B^Ltop> how would I go about that?
<PhotoJim> Iceman_B^Ltop: hmm.  I think it does.  but now that you mention it, I did configure my Samba user names and passwords separately.  so I can't tell you that experientially.
<PhotoJim> I'm in the other window working, so my replies might be slow.
<Iceman_B^Ltop> no prob
<Iceman_B^Ltop> im watching a show anyways :)
<Iceman_B^Ltop> oh do tell me about the disabling-the-shell-login thing please
<Iceman_B^Ltop> whenever you have the time
<AnRkey> nevermind: found the sucker
<AnRkey> w000t, it's compiling at last
<AnRkey> man, this netxms is awesome
<PhotoJim> Iceman_B^Ltop: I forget how to do it off the top of my head, but it has to do with editing the shell field in the /etc/passwd file.  something to the effect of "nologin".  should be easy to find.
<Iceman_B^Ltop> alright
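What PhotoJim is describing boils down to changing the seventh /etc/passwd field; on a live box that is `sudo usermod -s /usr/sbin/nologin USER` (or `chsh -s`). Demonstrated here on a scratch copy so nothing real changes (the user name is invented):

```shell
# Fake one passwd entry, then point its shell field at nologin; Samba
# authentication is unaffected, but interactive/ssh logins get refused.
printf 'smbguest:x:1001:1001::/home/smbguest:/bin/bash\n' > /tmp/passwd.demo
sed -i 's|:/bin/bash$|:/usr/sbin/nologin|' /tmp/passwd.demo
cat /tmp/passwd.demo
```

The nologin binary lives at /usr/sbin/nologin on Ubuntu (some systems use /sbin/nologin or /bin/false).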
#ubuntu-server 2010-04-05
<Psi-Jack> jasonmchristos: Spam is off topic on Freenode.
<jasonmchristos> wouldnt it be that off-topic is spam?
<MTecknology> jasonmchristos: no - perhaps you just shouldn't spam - it only irritates users - you're free to join #defocus if you would like to express your religious beliefs
<jasonmchristos> thanks
<jasonmchristos> sorry to irritate you
<ScottK> jasonmchristos: It really is off topic for this channel.
<Psi-Jack> Anyway.
<Psi-Jack> Anyone have any thoughts about the clvm issue? ;)
<jasonmchristos> r u developers?
<jasonmchristos> for ubuntu?
<MTecknology> !u
<MTecknology> yay bot
<jasonmchristos> lol
<ubottu> U is the 21st letter of the modern latin alphabet. Neither 'U' or 'Ur' are words in the English language. Nor are 'R', 'Y', 'l8', 'Ne1' or 'Bcuz'. Mangled English is hard for non-native English speakers. Please see http://geekosophical.net/random/abbreviations/ for more information.
<jasonmchristos> just neeeded a little encouragment see
<MTecknology> jasonmchristos: no, this channel's purpose is described in the topic
<jasonmchristos> Psi-Jack: Are YOU sure the syntax is correct?
<Psi-Jack> I don't communicate with trolls, sorry.
<MTecknology> !clvm
<MTecknology> !info clvm
<ubottu> clvm (source: lvm2): Cluster LVM Daemon for lvm2. In component main, is extra. Version 2.02.39-0ubuntu11 (karmic), package size 243 kB, installed size 604 kB
<MTecknology> oh
<Psi-Jack> heh
<jasonmchristos> you just did. it would seem that it doesn't understand the locking type, so my first guess is syntax, if 3 is indeed a valid lock type
<MTecknology> Psi-Jack: where did you find locking_type = 3?
<Psi-Jack> MTecknology: /etc/lvm/lvm.conf
<MTecknology> Psi-Jack: was there a comment in there saying that's an option?
<Psi-Jack> Of course.
<jasonmchristos> if the syntax is correct i would guess that you need to update the program interpreting the config file; maybe that type is not implemented in an older version
<Psi-Jack> it's supposed to be set to 3 when using clvm.
<MTecknology> Psi-Jack: did you try in [msg]? I don't have an answer, that person isn't there, and they should know exactly
<Psi-Jack> I'm beginning to wonder if it's just ubuntu doesn't compile clvm support into lvm2, yet providing cman and clvm.. Which would be strange. ;)
<Psi-Jack> And a packing bug of course.
<MTecknology> Psi-Jack: If I had my dev system still - I'd go check right now..
<masu3701> i have an old pc...intel pentium 3 processor...797 MHz, 256 MB of ram....i was lookin for a home server...is this pc gonna be good enough
<MTecknology> masu3701: depends - what do you want it for?
<masu3701> file server,
<MTecknology> masu3701: that's it about equals my backup/logging server
<masu3701> how is it runing?
<MTecknology> masu3701: slow - but it does its job
<masu3701> the hd is only 30gb tho
<MTecknology> masu3701: my backup server |  Intel(R) Celeron(R) CPU 2.00GHz  |  Mem:       1025956    1005672      20284          0     326836     477968   Swap:      1959920         72    1959848   |  then a pair of 250GB drives
<MTecknology> masu3701: It'll work for file storage, just don't expect to stream anything from it
<MTecknology> masu3701: also - don't install a gui - that'll just kill things
<masu3701> ok
<MTecknology> masu3701: good luck and enjoy - btw, it's the cpu limiting you, the ram is fine - i have production servers running on 360MB
<dvheumen> hey, does anyone know how to add a disk to a raid1 array. They are *exactly* the same size, but mdadm refuses
<masu3701> MTechnology: so i cant upgrade the hd?
<MTecknology> masu3701: ya you can - but you'll still suffer from the limiting processor
<masu3701> yea
<masu3701> can i change the processor ?
<masu3701> i have another pc that runing amd
<masu3701> and 512 ram
<masu3701> its a lil faster then this one
<MTecknology> masu3701: best thing I can say is try it out and see what you think
<masu3701> yea
<jeffesquivel> bbl
<dvheumen> Does anyone know what causes me to have to enter my password twice before I log in? It most likely has something to do with pam settings, since I set up the connection to ADS yesterday. I'm just not familiar enough to recognize the incorrect setting myself.
<dvheumen> I mean, I can give extra information of course, but I don't know whether it is actually pam.d that's giving me the trouble
<uvirtbot> New bug: #555414 in samba (main) "package samba-common 2:3.4.0-3ubuntu5.6 failed to install/upgrade: subprocess installed post-installation script returned error exit status 10" [Undecided,New] https://launchpad.net/bugs/555414
<xperia> hello to all. i am planning to install a secure irc server on ubuntu.
<xperia> what is the best and most secure choice on ubuntu for a irc server ?
<Psi-Jack> The one you prefer.
<Psi-Jack> "best" questions are immaterial except to the person's own opinions. Rhetorical and useless to pretty much everyone else.
<xperia> Psi-Jack: i would not say this is a subjective question. programs like irc servers can be rated by security, ease of configuration, ease of setup and install, and so on ...
<xperia> based on those categories some programs are better and some worse. that gives, in the end, the ranking of the irc servers from my point of view.
<xperia> i am not speaking about client programs, i am speaking mostly about daemons that, once started on the server, run nearly all the time.
<stgraber> xperia: dancer-ircd is relatively easy to setup though its config is very similar to the others. For large scale IRC, I'd consider freenode's ircd (don't remember the name) as it seems quite stable, scales well and since the last change of ircd, supports SSL and SASL for authentication
<xperia> stgraber: thank you a lot for the tip about dancer-ircd.
<Psi-Jack> Alright.
<Psi-Jack> So what's the proper "ubuntu" way to set a service to start at boot?
<xperia> well i would say the proper way to set up a service to start at boot is using the update-rc.d command in ubuntu
<xperia> https://help.ubuntu.com/community/UbuntuBootupHowto
<Psi-Jack> update-rc.d is technically a development tool, not exactly /intended/ for normal use.
<Psi-Jack> Especially now that ubuntu uses upstart.
<lukehasnoname> Psi-Jack: Write an upstart script?
<lukehasnoname> ya
<lukehasnoname> What I don't get is that Ubuntu is the pioneer of Upstart, and has shifted entirely (right?) to it. Yet there is zero documentation in the Server Guide about it. It isn't even mentioned, AFAIK.
<Psi-Jack> heh
<xperia> well this could be very true Psi-Jack. i dont use it a lot, but in recent years update-rc.d was what got recommended in the debian world
<xperia> of course this could have changed in the meantime. i am not that up to date on these things.
<xperia> if you say upstart is now recommended in ubuntu, that is probably true.
<xperia> I myself heard the word "upstart" for the first time today
<xperia> http://wiki.linuxquestions.org/wiki/Update-rc.d#ixzz0kC7NSEyG
<xperia> update-rc.d is the Debian utility to install and remove System-V style init script links.
<lukehasnoname> Kyle Rankin? Are you in here? Document Upstart in "The Official Ubuntu Server Book 2.0" which I'm sure will come out a few months after the LTS release :)
<lukehasnoname> What an excellent book.
<Psi-Jack> rcconf was what I was looking for. The more recommended way. ;)
<Psi-Jack> Oi.
<xperia> Psi-Jack: i must say that rcconf sounds much handier than update-rc.d. i've never used it myself and can't give any advice or recommendation because of that. The thing is, if i am not wrong, that it needs to be installed separately on nearly every debian system, and it is nothing else than a frontend to update-rc.d, which is already installed.
<Psi-Jack> Now onto the KVM stuff. Seems that libvirtd is starting up its own virbr0 with an address and dhcp range pre-determined.
<Psi-Jack> And why my br0 isn't even being considered OR auto-started like it should've been.
<xperia> are you working on setting up a router ? br0 sounds like bridge 0
<Psi-Jack> kvm, actually.
<xperia> did you already look for some howtos ?
<xperia> http://www.howtoforge.com/virtualization-with-kvm-on-ubuntu-9.04
<Psi-Jack> http://www.howtoforge.com/virtualization-with-kvm-on-ubuntu-9.10
<Psi-Jack> That's pretty much how I setup my network/interfaces
<xperia> the best thing would be if you contact the person "falko", probably, as he has the most experience with that. i myself have never thought about virtualization till now. He has about 10 articles on howtoforge that describe how to get virtualization working on ubuntu
<xperia> here is another one
<xperia> http://www.howtoforge.com/virtualization-with-kvm-on-ubuntu-8.10
<xperia> as always, it is probably only some configuration line that is causing the problem
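For the br0 side of Psi-Jack's problem, the howtoforge guides linked above configure the bridge in /etc/network/interfaces along these lines. This is a sketch only; the addresses and interface names are placeholders, not his actual config:

```
# /etc/network/interfaces (fragment) -- illustrative bridge config
auto br0
iface br0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

(The bridge_* options require the bridge-utils package; with `auto br0` present, the bridge should be brought up at boot by ifupdown.)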
<ryanakca> Can someone enlighten me as to why /sbin/init and /sbin/runlevel's permissions randomly change to 000, which requires me to take a rescue disk and switch it back?
<ryanakca> This started right after a fresh install
<jlevy> I have a question about ecryptfs. I'm running jaunty server. I'm trying to create an encrypted directory. I installed ecryptfs-utils via aptitude. Now when I try to mount a folder, I get this: "Unable to get the version number of the kernel module. Please make sure that you have the eCryptfs kernel module loaded, you have sysfs mounted, and the sysfs mount point is in /etc/mtab. This is necessary so that the mount helper know
<jlevy> It lets me continue and specify the encryption type and key strength, and then: "Error attempting to evaluate mount options: [-22] Invalid argument Check your system logs for details on why this happened. Try updating your ecryptfs-utils package, and/or submit a bug report on https://launchpad.net/ecryptfs"
<jlevy> syslog shows: "Apr  5 00:40:23 drupal mount.ecryptfs: Error initializing key module [/usr/lib/ecryptfs/libecryptfs_key_mod_gpg.so]; rc = [-22] Apr  5 00:46:36 drupal mount.ecryptfs: Error initializing key module [/usr/lib/ecryptfs/libecryptfs_key_mod_gpg.so]; rc = [-22] Apr  5 00:47:42 drupal mount.ecryptfs: Error initializing key module [/usr/lib/ecryptfs/libecryptfs_key_mod_gpg.so]; rc = [-22] Apr  5 00:47:52 drupal mount.e
<jlevy> Any thoughts?
<ryanakca> (my question relates to a server running Lucid)
<pmatulis> jlevy: looks like your module isn't loaded
<jlevy> pmatulis: how do I check this or load it?
<pmatulis> jlevy: '$ lsmod | grep ecryptfs'
<jlevy> pmatulis: no luck: http://pastebin.com/X3xnaKM5
<jlevy> pmatulis: I just noticed a type in the path I was trying to mount to, but it did not change anything when I fixed it.
<jlevy> *typo
<pmatulis> jlevy: you don't have the ecryptfs module loaded (as lsmod told us) so you need to load it before continuing
<jlevy> pmatulis: ah, ok. how do I do that?
<pmatulis> jlevy: '$ modprobe ecryptfs' and then try the lsmod command again
<pmatulis> sudo will be needed to load the module
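pmatulis's check can be sketched as a script. The lsmod listing here is canned so the logic is visible without root; on a real box you would pipe the actual `lsmod` instead:

```shell
# Hedged sketch of the check above; the lsmod output is faked for illustration.
lsmod_output="Module                  Size  Used by
ecryptfs               69922  1
nfs                   270422  2"

if echo "$lsmod_output" | grep -q '^ecryptfs'; then
    status="module loaded"
else
    status="need: sudo modprobe ecryptfs"    # load it, then re-check with lsmod
fi
echo "$status"    # -> module loaded
```

On the real system the pipeline is simply `lsmod | grep ecryptfs`, followed by `sudo modprobe ecryptfs` when the grep comes up empty.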
<jlevy> pmatulis: FATAL: Module encryptfs not found.
<pmatulis> encryptfs?
<jlevy> pmatulis: I'm sorry - FATAL: Module ecryptfs not found.
<pmatulis> strange
<jlevy> pmatulis: any suggestions?
<pmatulis> jlevy: output to '$ dpkg -l ecryptfs-utils | tail -1'  perhaps
<jlevy> pmatulis: root@drupal:/home/jlevy# dpkg -l ecryptfs-utils | tail -1
<jlevy> ii  ecryptfs-utils  73-0ubuntu6.1  ecryptfs cryptographic filesystem (utilities)
<jlevy> pmatulis: sorry for the poor formatting
<jlevy> pmatulis: "ii" begins the output
<pmatulis> jlevy: sorry.  maybe try a karmic live cd to test
<jlevy> pmatulis: can
<jlevy> pmatulis: can't use a live cd, this server is in the cloud.
<pmatulis> jlevy: what kind of cloud?
<jlevy> pmatulis: rackspace cloud server
<pmatulis> jlevy: what kind of kernel is it running?  how was it installed?
<jlevy> pmatulis: the server was created from their standard jaunty image
<RoAkSoAx> kirkland, ping?
<xperia> question: i have set up the dancer-ircd server package and it runs quite well, but how can i tweak it a little? does some user howto exist for this package? maybe an irc channel would not be bad, if somebody knows one.
<jlevy> pmatulis: i'll try again another day. thanks for your help!
<RoAkSoAx> kirkland, ping?
<ScottK> !weekend | RoAkSoAx
<ubottu> RoAkSoAx: It's a weekend. Often on weekends the paid developers and a lot of the community may not be around to answer your question. Please be patient, wait longer than you normally would or try again during the working week.
<RoAkSoAx> kirkland, haha this time I pressed enter by mistake :)
<RoAkSoAx> gosh
<RoAkSoAx> ScottK, haha this time I pressed enter by mistake :)
<MTecknology> Where can I grab to 10.04 server iso?
<MTecknology> or do I just use the alternate cd?
<kirkland> RoAkSoAx: pong
<RoAkSoAx> kirkland, heya! was just about to drop you an email
<kirkland> MTecknology: cdimage.ubuntu.com
<kirkland> RoAkSoAx: what's up?
<RoAkSoAx> kirkland, thanks for the review... however I do have some questions on your suggestions
<MTecknology> kirkland: there's no server iso - that's where my question is coming from
<kirkland> RoAkSoAx: okay
<RoAkSoAx> kirkland,  * In obtain_devel_release(), you have hardcoded .cache; should use the CACHE variable
<kirkland> MTecknology: that means the build is broken
<kirkland> MTecknology: poke cjwatson
<RoAkSoAx> Im hardcoding .cache because CACHE is defined after the ISO list is generated, which requires the codename. To resolve we have two options:
<RoAkSoAx> 1. Move CACHE defaults before loading configfile, however, if we change the default CACHE on the config file, the CACHE for the codename will always be the default one.
<RoAkSoAx> 2. Move the ISO list generation to a function not in the config file.
<MTecknology> kirkland: 10.04 isn't out yet - last i knew server iso is only built for released versions
<kirkland> MTecknology: absolutely not; it's generally built every day
<MTecknology> kirkland: oh...
<kirkland> MTecknology: if it's not at cdimage.ubuntu.com, then the build is broken, and needs to be fixed
<MTecknology> kirkland: http://cdimage.ubuntu.com/daily/current/
<kirkland> MTecknology: http://cdimage.ubuntu.com/ubuntu-server/daily/current/
<MTecknology> OH!
<MTecknology> kirkland: thanks :D - sorry
<kirkland> MTecknology: np
<kirkland> RoAkSoAx: hrm
<kirkland> RoAkSoAx: this is proving complicated ...
<kirkland> RoAkSoAx: i think we will need to leave it as is for Lucid
<kirkland> RoAkSoAx: and just SRU out the new code names as the builds become available
<kirkland> RoAkSoAx: that's just two uploads per year
<RoAkSoAx> kirkland, well I'm modularizing your code anyways for testdrive-gtk and I'll have the ISO list generation in a separate function which is not in the config file, given that it creates the interface
<kirkland> RoAkSoAx: and fix this in a more well designed manner in Maverick
<kirkland> RoAkSoAx: right, let's target Maverick for this change
<RoAkSoAx> kirkland, ok then. This will be a change only for the modularized code then :)
<kirkland> RoAkSoAx: agreed
<kirkland> RoAkSoAx: as you're working on that, bear in mind those suggestions I had in the merge
<kirkland> RoAkSoAx: please get your indentation to match mine
<kirkland> RoAkSoAx: and please use native python functions rather than shell callouts
<RoAkSoAx> kirkland, I am. I actually planned to change the codename code a little bit for the modularization.
<RoAkSoAx> kirkland, indentation is already tabs in the modularization, and will do with native python functions
<kirkland> RoAkSoAx: thanks
<RoAkSoAx> kirkland, btw... do you have any test cases for all the functionality of testdrive?
<kirkland> RoAkSoAx: i don't :-(
<RoAkSoAx> kirkland, ok not a prob :) Whenever I have a cleaner modularized code we'll have to test to see what's broken, what's not working as it should and etc
<kirkland> RoAkSoAx: sounds good
<Psi-Jack> Why does Ubuntu seem to have to overcomplicate things with kvm? heh
<uvirtbot> New bug: #555510 in samba (main) "package samba-common-bin 2:3.4.0-3ubuntu5.6 failed to install/upgrade: package samba-common-bin is already installed and configured" [Undecided,New] https://launchpad.net/bugs/555510
<kobrien> zul: you around?
<RoAkSoAx> kobrien, he's surely sleeping, ping him in like 8 hours or so
<kobrien> oh right, different timezones, cool.
<RoAkSoAx> kobrien, what timezone are you into?
<kobrien> RoAkSoAx: IST
<kobrien> which is UTC +1
<RoAkSoAx> kobrien, wow so like 5am?
<RoAkSoAx> or almost 6?
<kobrien> 5:40am
<kobrien> yes
<RoAkSoAx> wow I would be going to bed by that time :)
<kobrien> I don't sleep
<RoAkSoAx> wow
<RoAkSoAx> anyways im off for the day
<uvirtbot> New bug: #555521 in postfix (main) "package postfix 2.6.5-3 failed to install/upgrade: subprocess installed post-installation script returned error code 75 on exit" [Undecided,New] https://launchpad.net/bugs/555521
<Birmaan> morning all
<lukehasnoname> It's quiet in here
<kobrien> morning...i've been here all night :)
<kobrien> and yes, it's quiet
<maxagaz> how to do a fsck on an lvm partition from a live usb ?
<maxagaz> what package needs to be installed on the usb ?
<RoyK> lvm2 iirc
<maxagaz> RoyK, but I still don't have fsck.lvm with this package
<RoyK> huh?
<RoyK> lvm isn't a filesystem
<RoyK> use lvm to gain access to your lvm partition(s)
<RoyK> on that or those, there is probably an ext3 filesystem
<maxagaz> true...
<uvirtbot> New bug: #555597 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.37-1ubuntu5.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/555597
<maxagaz> lvdisplay gives me a path of my partition in VGName, but it's not listed in /dev when I check it, why ?
<maxagaz> is there something else to do ?
<RoyK> why don't you just boot into single?
<maxagaz> RoyK, how ?
<maxagaz> RoyK, btw I found my answer... => lvm lvchange -ay /dev/Vol/Log
<maxagaz> RoyK, then I can list the partition in dev
<maxagaz> RoyK, can you explain to me how and why I should run in single mode ?
<RoyK> start ubuntu, hit escape when prompted (start of boot), choose linux ... system repair or whatever it's named (normally second choice)
<RoyK> but if you have access to the fs already, just fsck it
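Put together, the rescue sequence RoyK and maxagaz converge on looks roughly like this. ECHO=echo keeps it a dry run here (on the live USB these commands need root, and the LV path is maxagaz's, not a general one; drop the $ECHO prefix there):

```shell
ECHO=echo    # dry run; remove $ECHO to execute for real (as root, from the live USB)
$ECHO apt-get install lvm2    # the live environment needs the LVM tools
$ECHO vgscan                  # find volume groups on the disks
$ECHO vgchange -ay            # activate them; LVs then appear under /dev/<vg>/<lv>
$ECHO fsck.ext3 /dev/Vol/Log  # fsck the ext3 filesystem *inside* the LV -- lvm itself is not a filesystem
```

`vgchange -ay` activates every volume group it finds, which is usually simpler than the per-LV `lvchange -ay` maxagaz worked out.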
<maxagaz> RoyK, whatever I choose, it freezes quickly after my choice
<RoyK> freezes??
<RoyK> and usb boot works?
<maxagaz> RoyK, yes, usb boot works
<maxagaz> RoyK, I had a kernel panic
<RoyK> hm
<maxagaz> RoyK, run-init: /sbin/init: No such file or directory
<RoyK> what sort of panic?
<maxagaz> then,
<RoyK> wtf
<maxagaz> Kernel panic - not syncing: Attempted to kill init!
<RoyK> if it can't find init, something is pretty bad
<RoyK> what did you do with the system?
<maxagaz> I regularly freezes because of bond
<maxagaz> It
<maxagaz> I mean with some error about bond
<jerbob92> hi all :)
<maxagaz> RoyK, but I never figured out why
<jerbob92> have some question, how can i do multiple backup versions, like 30 max
<RoyK> bond?
<jerbob92> backup-1.tar.gz
<jerbob92> backup-2.tar.gz
<RoyK> jerbob92: use logrotate :)
<maxagaz> RoyK, it allows you to have a master and a slave server; when the master freezes, the slave acts as the master, taking its ips...
<RoyK> you mean like heartbeat?
<jerbob92> Royk, what is lograte?
<jerbob92> cant find anything about it
<RoyK> man logrotate
<RoyK> ?
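To unpack RoyK's suggestion: logrotate is not limited to log files; pointed at any path, it keeps N rotated generations. A sketch, assuming a hypothetical /var/backups/backup.tar.gz that a nightly cron job rewrites:

```
# /etc/logrotate.d/nightly-backup -- hypothetical fragment
/var/backups/backup.tar.gz {
    daily
    rotate 30        # keep backup.tar.gz.1 .. backup.tar.gz.30
    nocompress       # the tarball is already gzipped
    missingok
}
```

Note logrotate's naming is backup.tar.gz.1, .2, ... rather than the backup-1.tar.gz scheme jerbob92 sketches below.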
<usuario_> hello
<usuario_> testing
<usuario_> any question?
<maxagaz> I can't mount my system (ext3 on lvm) with usb live, fsck detects no error, but still when I reboot the machine I get a kernel panic: run-init: /sbin/init: No such file or directory
<maxagaz> although /sbin/init exists
<jerbob92> RoyK your still here?
<jerbob92> isnt logrotate only for log files?
<jerbob92> i want to backup a database and 2 folders
<jerbob92> we want to be able to go back 30 days in time
<jerbob92> so 30 files
<jerbob92> file1.tar.gz
<jerbob92> file2.tar.gz
<jerbob92> and
<jerbob92> database1.sql
<jerbob92> database2.sql
<sherr> jerbob92: if you are comfortable writing your own backup scripts look at things like rsync, rdiff-backup etc.
<sherr> Else : backuppc, bacula etc. Lots of programs around that help do backups.
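If jerbob92 prefers to script it himself, the 30-generation scheme he describes is only a few lines of shell. A sketch with illustrative names; KEEP is 3 here just to keep the demo short (it would be 30 for his case):

```shell
BACKUP_DIR=$(mktemp -d)    # stand-in for the real backup directory
KEEP=3

# shift backup-1 .. backup-(KEEP-1) up by one slot, dropping the oldest
rotate() {
    i=$KEEP
    while [ "$i" -gt 1 ]; do
        prev=$((i - 1))
        [ -f "$BACKUP_DIR/backup-$prev.tar.gz" ] &&
            mv "$BACKUP_DIR/backup-$prev.tar.gz" "$BACKUP_DIR/backup-$i.tar.gz"
        i=$prev
    done
}

# simulate four nightly runs (a real run would tar the folders and dump the db)
for day in 1 2 3 4; do
    rotate
    echo "data from day $day" > "$BACKUP_DIR/backup-1.tar.gz"
done

ls "$BACKUP_DIR" | sort    # -> backup-1.tar.gz  backup-2.tar.gz  backup-3.tar.gz
```

After the fourth run, backup-1 holds day 4, backup-2 day 3, backup-3 day 2; day 1 has aged out.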
<jerbob92> ok :)
<jerbob92> im now trying sbackup commandline
<jerbob92> but its not that configurable
<jerbob92> hmmm
<jerbob92> backuppc wont install
<uvirtbot> New bug: #555166 in nmap (main) "zenmap crashed in Lucid Lynx" [Undecided,New] https://launchpad.net/bugs/555166
<jerbob92> got backuppc running now
<jerbob92> but i cant access the web panel
<jerbob92> ahhhhhh
<xperia> hello everybody. have installed dancer-ircd based on this howto here
<xperia> https://help.ubuntu.com/community/Dancer-IRCD
<xperia> for some strange reason i have in my log
<xperia> this error message printed every 20 seconds
<xperia> Mon Apr  5 15:01:21 2010 Server Error: Closing Link: services. (No C/N conf lines)
<xperia> the thing is that i do, however, have these lines in my ircd.conf
<xperia> here is the evidence
<xperia> C:XX.XXX.XXX.XXX:password:services.
<xperia> N:XX.XXX.XXX.XXX:password:services.
<xperia> what could be the Problem ?
<areay> after installing and removing the krb5-admin-server and krb5-kdc packages several times (and attempting to install them from souce), i am completely unable to install the krb5-kdc package with apt-get... it just hangs at "Setting up krb5-kdc (1.6.dfsg.4~beta1-5ubuntu2.2) ..." and does absolutely nothing until i press ctrl+c... i've looked at various logs and i can't see anything specific so i'm not really sure what to do now. i'm
<areay> using jaunty btw
<ScottK> How long did you wait?
<areay> about 3mins? maybe i'm being impatient,..
<areay> possibly longer
<areay> it asks me all the configuration questions, and then just hangs... i just don't seem to remember it taking this long the first time (when it worked)
<ScottK> Not sure, but I doubt there are many people running Jaunty here.
<ScottK> Most either run LTS or the current release.
<areay> hmm...
<jerbob92> im running jaunty
<areay> it's the last server running jaunty, and i haven't had any problems with the karmic servers... so i might upgrade and see if that fixes the problem
<jerbob92> i just had a similar problem
<jerbob92> then i checked in webmin, the package was installed but not fully
<jerbob92> so i did apt-get --purge remove [package]
<jerbob92> then i did a install in webmin, it was succesful
<tyska> hello guys
<areay> fair enough... yeah i've been pretty liberal with --purge... but to no avail
<tyska> im trying to use the ubuntu enterprise cloud and im having some problems, someone can help me?
<areay> i think canonical/the ubuntu community should work more closely with the developers of stuff like openldap/kerberos/openafs to make the installation and configuration process less like driving nails into your own skull
<ScottK> areay: krb5-kdc is in Universe which means it's primarily supported by the community, so when you say "Ubuntu should ..." that could be you.
<ahasenack> mit kerberos is in universe? wow, didn't realize that
<ahasenack> even heimdal
<ahasenack> so there is no kerberos implementation in main?
<areay> ScottK, the only ubuntu documentation for openldap and kerberos is incomplete... i would love to contribute, but i actually have no idea how either work (mainly because of a lack of clear documentation)... the only well-documented way of networking ubuntu machines is with NIS and NFS (which any sysadmin will tell you is insecure and outdated)
<ScottK> areay: sommer is the documentation lead.  If you can experiment and make progress, he can help you get it into the documentation.
<areay> ScottK, sounds good... well i can tell this is something that would benefit a lot of people, so as soon as I get it working i'll get back to you guys... i'd need to make a guide just to remember what the hell to do next time ;)
<ScottK> areay: Excellent.  There's a wiki based community section of help.ubuntu.com that you can use to document interim progress.
<areay> ScottK, i'll upgrade to karmic first --don't wanna be posting the wrong instructions
<ScottK> Sounds reasonable.  Good luck.
<areay> thanks for your help tho.... and if there's anyone else in here that knows *anything* about kerberos/openldap/openafs in relation to ubuntu and wants to contribute, drop me an email at alistair<dot>reay<at>gmail<dot>com
<ihernandez> alvin, hello. you were looking for a way to use ldap + nss ?
<ihernandez>  + nfs?
<wack479> anyone have any suggestions for website uptime monitoring?
<joschi> wack479: pingdom.com
<joschi> wack479: or if self-hosted and small-scale: monit
<joschi> otherwise the usual suspects like nagios
<wack479> joschi: ok cool thanks
<RoAkSoAx> ivo
<ryanakca> Can someone enlighten me as to why /sbin/init and /sbin/runlevel's permissions randomly change to 000, which requires me to take a rescue disk and switch it back?
<ryanakca> This started right after a fresh Lucid install
<kobrien> zul?
<zul> yes?
<kobrien> i had a php patch to show you, just trying to dig it up now
<zul> kobrien: ok
<kobrien> zul: it's gone from launchpad, although there is a bug describing exactly the same thing. it's to do with apache2 not serving php from userdirs by default. my patch enabled it as it's expected behavior, but i see the debian guys don't like the idea for security reasons.
<zul> kobrien: i dont like it either btw, i am going to do it the same way the debian patch does
<kobrien> zul: allow them to re-enable?
<zul> kobrien: users can do whatever they want i was just going to document it
<kobrien> I find it odd that, the bug and my patch with it, is gone.
<RoAkSoAx> kobrien, probably because it is marked as fix released or invalid. Do you have the bug number?
<kobrien> i don't :( it'll be on my other machine, I'll find it later
<kobrien> RoAkSoAx: found it, fix released.
<RoAkSoAx> kobrien, could you give us the link
<RoAkSoAx> or bug number?
<kobrien> Bug: 554903
<kobrien> It was marked invalid for ubuntu
<RoAkSoAx> bug #554903
<uvirtbot> Launchpad bug 554903 in apache2 "apache2 with mod php5 does not execute index.php" [Unknown,Fix released] https://launchpad.net/bugs/554903
<kobrien> can people take a look at it?
<kobrien> zul?
<RoAkSoAx> kobrien, have you tried to reproduce this bug?
<kobrien> I couldn't view php files in my public_html through firefox
<kobrien> i patched and it fixed
<kobrien> there's the patch
<kobrien> Stefan Fritz has a point though.
<RoAkSoAx> kobrien, i would suggest you to talk to mathiaz since he marked it as invalid :).
<kobrien> hmm, yes. I just assumed it's expected behavior that if you allow a user to have a public_html on a lamp server, they should have php
<kobrien> zul: what you think?
<zul> i can see both sides but Im thinking of putting a note in the conf file
<zul> if it isnt already there
<kobrien> right, seems reasonable.
<kobrien> RoyK: you around?
<RoyK> yeah
<kobrien> bug #551211
<uvirtbot> Launchpad bug 551211 in lighttpd "can't bind to port 80" [High,Fix released] https://launchpad.net/bugs/551211
<kobrien> which patch didn't fix it? were both tried?
<RoyK> erm - both?
 * RoyK checks
<kobrien> cheers
<RoyK> the patch fixes the problem for IPv4, but it makes lighttpd listen to ::1 only for IPv6, rendering it quite useless for anything but testing
<kobrien> RoyK: fair enough, thanks.
<rbdyck> Hello everyone. Hopefully this is the right channel. I'm having trouble installing. This should be simple, but after installing from CD when I try to boot for the first time I get the "boot: " prompt. What do I do there?
<AndyGraybeal> hi guys, i'm trying to get ntp to work, i'm trying to get a 'windows' client to pick up time from the ntp server (i'm running ubuntu 8.10).  the windows client is saying roughly: the peer's stratum is less than the host's stratum.
<AndyGraybeal> it's windowsxp
<AndyGraybeal> any thoughts would be helpful
<pmatulis> AndyGraybeal: any special reason you're running 8.10?
<AndyGraybeal> well, not entirely, except that's what was around when i first started doing this.  i plan on upgrading to 10.04 when i have tested everything; pmatulis any thoughts on this are welcome.
<AndyGraybeal> 8.04 didn't have everything that 8.10 did at the time.  and honestly i'm afraid to do distro-upgrade at this point -- i'd rather install from scratch.  i'm running LTSP and KVM/Libvirt... i spent a lot of time getting it all to work - i'd hate for it to all break after i did a distro upgrade.
<pmatulis> AndyGraybeal: google the winxp error i guess, a quick search showed others seeing the same thing
<AndyGraybeal> thank you pmatulis
<AndyGraybeal> i looked at most of the articles.... i'll re-read them.
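For reference, that winxp message generally means the Ubuntu box is serving time at a worse (numerically higher) stratum than the client will accept, usually because ntpd has no reachable upstream. A classic ntpd-style /etc/ntp.conf sketch (the pool hostnames are the usual Ubuntu defaults, but treat the whole fragment as an assumption, not AndyGraybeal's actual config):

```
# /etc/ntp.conf (fragment) -- illustrative sketch
server 0.ubuntu.pool.ntp.org iburst
server 1.ubuntu.pool.ntp.org iburst

# fall back to the local clock at stratum 10 if upstreams are unreachable,
# so LAN clients still get an answer instead of a stratum complaint
server 127.127.1.0
fudge  127.127.1.0 stratum 10
```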
 * pmatulis always uses openntpd
<AndyGraybeal> debian doesn't automatically use openntpd ?
<AndyGraybeal> er i mean ubuntu
<ScottK> AndyGraybeal: Server upgrades are usually pretty safe.  As long as you use do-release-upgrade, it should be fine.
<AndyGraybeal> ScottK: thank you for the re-assurance; i'm still worried
<AndyGraybeal> ScottK: but i'll take what you said into consideration.
<ScottK> AndyGraybeal: Back up your data (you'll do this in any case) and if the upgrade doesn't go well, reinstall.  There's no downside risk except a little time.
<ScottK> Server upgrades usually go really fast.
<AndyGraybeal> yea, okay thank you.
<Zider> is there a good firewall/iptables-manager with webinterface?
<tsimpson> Zider: I think ebox has a firewall management module
<tsimpson> !ebox
<ubottu> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/eBox
<Zider> well, webmin has a module too, I was thinking more of a standalone system, like a router has
<Zider> and less bloated :P
<rbdyck> I re-installed Ubuntu server, it did install grub2. I still get a "boot: " prompt. When I press Enter I hear it loading, it displays "Loading /vmlinuz...". Then after a bunch of messages that scroll off the screen, I get
<rbdyck> VFS: Cannot open root device "sr0" or unknown-block(11,0)
<rbdyck> Please append a correct "root=" boot option
<rbdyck> Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(11,0)
<_ruben> sr0 .. that's cdrom iirc, is the install cd still in the drive?
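The panic rbdyck pasted says the kernel was handed root on sr0 (the CD device), so at that `boot:` prompt a correct root= has to be appended. Purely as an illustration of the syntax (the actual device or LV name has to be read off the system, e.g. from `ls /dev/mapper` in a rescue shell, and is a guess here):

```
boot: linux root=/dev/sda1 ro
```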
<xperia> hello to all. i have successfully set up dancer-ircd on my ubuntu-server based on this howto here https://help.ubuntu.com/community/Dancer-IRCD
<xperia> but for some strange reason i am having a few problems with the dns lookup.
<xperia> in my dancer-ircd log i have this line here several times
<xperia> [2010/04/05 18.24] DNS lookup timed out, new timeout 3, retries 2
<xperia> and in the dancer-service these 3 lines are trashing my log file every 20 seconds
<xperia> Mon Apr  5 20:34:40 2010 Connected to mydomain.com:6667
<xperia> Mon Apr  5 20:34:40 2010 Server Error: Closing Link: services. (No C/N conf lines)
<xperia> Mon Apr  5 20:34:40 2010 Read error from server: Operation now in progress
<xperia> The C/N conf lines do exist in my config files, however, so for some strange reason this is 100% related to the dns problem
<rbdyck> No. I removed it. I also got messages about sdb, which is the second logical SCSI drive. I have 2 RAID arrays: one is a pair of drives configured as RAID 1 (mirrored) which I set up as logical drive 0. That shows as device sda. The other is a pair of drives configured as RAID 0 (concatenated) which is logical drive 1. That shows as sdb. There are messages about sdb, but the messages about sda scrolled off.
<_ruben> rbdyck: odd .. my experience with grub2 is nearly none, so im afraid i cant help on this one
<rbdyck> I didn't select individual programs, just standard packages. I didn't get an option to choose a boot loader, it just went ahead with grub. Not that I have any experience with others anyway.
<_ruben> grub(2) is default unless / is on lvm, then it takes lilo
<_ruben> could try booting from the install media and select "boot from first hdd", tho that'd leave you at the same spot most likely
<rbdyck> I chose to install with LVM. But I saw a message that grub2 was installed.
<rbdyck> Ah!
<_ruben> grub2 might support / (or in fact /boot) on lvm though
<rbdyck> Got a message "GRUB loading"... Was able to log on.
<rbdyck> Since this is a first time install, I can reload Ubuntu. Should I choose not to install VLM?
<rbdyck> LVM
<_ruben> rbdyck: i prefer lvm .. but i also put /boot on a seperate non-lvm partition
<_ruben> 128MB or so usually
<rbdyck> I have a pair of 4.2GB hard drives, mirrored, that I intend to load /boot onto. The data will go on a separate RAID 5 array. Should I install Ubuntu without LVM just to get it going?
<soren> rbdyck: Is it hardware raid?
<soren> rbdyck: Oh, never mind. I see what you mean now.
<_ruben> 4G should be enough for the complete OS usually
<rbdyck> Ok, I disabled the pair of 8GB drives that I was going to remove anyway. I got messages about sda. One said "sda: sda1 sda2 < sda5 >" "sd 1:0:0:0: Attached scsi disk sda"
<rbdyck> Since the RAID controller is presenting the drives as a single logical drive, I shouldn't need LVM. Since it will not boot without the install CD, I guess I should reformat and reinstall without LVM.
<hggdh> mathiaz_: go for the UEC test rig
<mathiaz> hggdh: great - thanks
<hggdh> mathiaz: tell me when you are done, I would like to check the daily
<xperia> hello does anbody know where i can find the server package "flashpolicyd" for ubuntu ?
<rbdyck> While that is running I removed the pair of 9GB data drives (they weren't 8GB) and installed the 6 18.2GB drives into drive trays.
<sherr> xperia: flashpolicyd ... ughh. Doesn't appear to exist in Ubuntu repos.
<xperia> sherr: yeah it looks like. have found the repo http://code.google.com/p/flashpolicyd/
<xperia> it is more and more needed for flex applications
<zul> kees: ping there is a new apache module that is supposed to prevent the slowloris bug for apache that im in the middle of backporting for lucid
<mathiaz> zul: hi - what's the state of remove mysql-5.0 from the lucid universe archive?
<zul> mathiaz: there shouldn't be anything depending on libmysqlclient15
<zul> mathiaz: so it should be ok to get rid of mysql 5.0 from the archve
<mathiaz> zul: apt-cache rdepends libmysqlclient15off still shows a lot of packages
<kees> zul: neato
<zul> mathiaz: no one made me aware of libmysqlclient15off i guess thats what im doing tomorrow then
<mathiaz> zul: libpam-mysql for example
<zul> mathiaz: ill have a look at it when I get back tonight
<mathiaz> zul: ok
<rbdyck> Hmm, no difference.
<rbdyck> There is an odd thing with SCSI boot messages. This server has an onboard SCSI controller, and a separate RAID SCSI card. The backplane is served by the card, but the CD-ROM by the onboard controller. There is a message that SCSI BIOS is not loaded when there isn't a boot CD in place, but the boot sequence can find sda anyway.
<jongbergs> !hi
<ubottu> Hi! Welcome to #ubuntu-server! Feel free to ask questions and help people out. The channel guidelines are at https://wiki.ubuntu.com/IRC/Guidelines . Enjoy your stay!
<jongbergs> hi, i've recently installed 9.10 server but forgot to install the LAMP stack..at this moment my server do not have an internet connection..how do i install LAMP directly from the Server CD?
<UnixDawg> hey guys
<UnixDawg> what is the easiest way to configure the server install to add an install script?
<UnixDawg> we want to make a install iso that installs freeswitch+fusion +apache22 + sqlite and php5
<UnixDawg> I started a script but need help making it do it on install
<UnixDawg> any help ?
<rbdyck> I couldn't log onto the root account after new install. The install script asked me for a user account, and a password for the email server root account, but not a password for the OS root account. It isn't accepting any password I can think of. In case something had been carried forward from a previous install, I even low-level formatted both drives, re-established the RAID 1 (mirror) array, and fully reinstalled Ubuntu.
<ScottK> rbdyck: https://help.ubuntu.com/community/RootSudo
<rbdyck> thanks
<jongbergs> hi, i've recently installed 9.10 server but forgot to install the LAMP stack..at this moment my server do not have an internet connection..how do i install LAMP directly from the Server CD?
<Tallken> jongbergs: completely with no knowledge if this works: try apt-cdrom add, apt-get install apache2 (...)
<ScottK> sudo tasksel will take you back to where you can select LAMP after you add the CDROM.
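Combining Tallken's and ScottK's suggestions into one sequence; sketched as a dry run (ECHO=echo just prints the commands, since on the actual server they need sudo):

```shell
ECHO=echo               # dry run; remove $ECHO and prefix sudo to apply for real
$ECHO apt-cdrom add     # register the Server CD as an apt source
$ECHO apt-get update
$ECHO tasksel           # re-opens the task list; pick "LAMP server" there
```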
<ChmEarl> bug in python xml module. The parsers obj. is expat - xmlrpc is missing
#ubuntu-server 2010-04-06
<rbdyck> Last time I re-installed Ubuntu, grub said it was setting up hd0. But I have SCSI RAID, so my boot device is sda and its partition is sda1. Is this the boot problem I'm having?
<cloakable> Has anyone gotten dovecot-antispam working on 9.10, and if so, how?
<roy_> Hi, how can you remove all current iptables rules
<MTecknology> roy_: there's a flush option
<MTecknology> iptables -F i think
<roy_> I did this, it's because my website says connection refused, what would cause this if it's not related to iptables
<RoyK^> it usually means it doesn't listen to that port
<RoyK^> iptables/ufw will normally just drop the packet, not send icmp reply
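Expanding on MTecknology's answer: `iptables -F` alone flushes the rules in the filter table but leaves user-defined chains, the other tables, and the default policies untouched. A fuller reset, sketched as a dry run (ECHO=echo just prints the commands; drop it, and run as root, to apply):

```shell
ECHO=echo                          # dry run; remove $ECHO (and use sudo/root) to apply
$ECHO iptables -P INPUT ACCEPT     # open the default policies first so you
$ECHO iptables -P FORWARD ACCEPT   # don't lock yourself out of a remote box
$ECHO iptables -P OUTPUT ACCEPT
$ECHO iptables -F                  # flush rules in the filter table
$ECHO iptables -X                  # delete user-defined chains
$ECHO iptables -t nat -F           # the nat and mangle tables have their own rules
$ECHO iptables -t mangle -F
```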
<rbdyck> I finished formatting my 18.2GB SCSI drives. They were used so I guess it isn't a great surprise that 5 out of 6 work. I think the store I got them from has a couple more.
<roy_> How can i fix this
<MTecknology> Why do you guys suggest using qemu with kvm instead of say xen?
<MTecknology> oh - obvious answer found
<MTecknology> ubuntu-vm-builder is deprecated and vmbuilder is taking its place - what provides vmbuilder?
<MTecknology> oh... found it :)
<MTecknology> I look and look, ask, then find the answer :P
<ChmEarl> MTecknology, using Xen 4rc8 with pvops 2.6.32.10 kernel on karmic 9.10
<ChmEarl> also without any libvirt
<MTecknology> ChmEarl: isn't libvirt just designed to be an easy to use wrapper around virt tools like xen and kvm?
<jongbergs> Tallken, ScottK : thanks for your tip i'll try that..
<MTecknology> !libvirt
<MTecknology> !kvm
<ubottu> kvm is the preferred virtualization approach in Ubuntu. For more information see https://help.ubuntu.com/community/KVM
<MTecknology> I used vmbuilder to build VM's on my system. Then I realized I need to install libvirt-bin. I installed that and then ran virsh -c qemu:///system and in there ran list --all but none of the vm's I made show up. How can I make them show up in this list?
<ChmEarl> MTecknology,  (in virsh shell)#define /path/toVM.xml
<MTecknology> ChmEarl: any idea where the xml files sit?
<ChmEarl> MTecknology, you only ran the VM once when you installed it? Are any still running?
<MTecknology> ChmEarl: I didn't run them yet, I've always started vm's from virsh
<ChmEarl> to find the xml files, do #updatedb.. then locate <vmname>
<MTecknology> hrm.. there was a run.sh file made that has   exec kvm -m 768 -smp 2 -drive file=tmpOoaNco.qcow2 "$@"
<MTecknology> there's no xml found when I do that
<ChmEarl> maybe a log file?
<MTecknology> I used  --dest /virt/images/repono
<MTecknology> there I have run.sh and tmpxYbvEy.qcow2
<MTecknology> I'll try running the run.sh and see if I can find the xml
<ChmEarl> k, then start the VM via the script - while its running get the virsh shell and do #dumpxml <vmname>
<MTecknology> domain not found
<MTecknology> ChmEarl: since I didn't do anything useful yet; would it be out of the question to delete the vm's; recreate them; but now with libvirt installed?
<ChmEarl> that is OK, but don't the scripts make a vmname appear in virsh list now?
<ChmEarl> once the vm is running?
<MTecknology> nope
<ChmEarl> ok, something is broken
<MTecknology> Does this look ok for creating a vm?      vmbuilder kvm ubuntu --dest /virt/images/incipio --mem 768 --cpus 2 --swapsize 512 --domain incipio --ip dhcp --bridge br0
<ChmEarl> never used that builder.
<MTecknology> it looks like ubuntu-vm-builder is being deprecated
<MTecknology> the man page says to use that instead
<ChmEarl> I made a VM directly by qemu-kvm cmdline
<MTecknology> oh
<MTecknology> I'll see what happens when I make this - maybe I'll have to just do it that way too
<ChmEarl> there are 4 or more ways to make a VM for kvm
<ChmEarl> once you have a cmdline you can convert it to domxml used by Libvirt and import/define it
<MTecknology> ya.. I could do it with virt-install too, but I liked ubuntu-vm-builder; but that's going away so I figured now's the time to learn what's replacing it :P
<crazygir> I'm guessing qmail should install just fine on a default ubuntu 9.10 server?
<ChmEarl> virsh domxml-from-native xxyy
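ChmEarl's suggestions amount to two routes into libvirt; a sketch of both (the domain name `myvm` and the file paths are hypothetical, and `domxml-from-native` expects the qemu arguments in a file):

```
# while the VM is running under libvirt, export its definition:
virsh -c qemu:///system dumpxml myvm > /virt/images/myvm.xml

# or convert a raw kvm command line (like the one in run.sh) into domain XML:
virsh -c qemu:///system domxml-from-native qemu-argv /virt/images/run.args > /virt/images/myvm.xml

# then register it so it shows up in "virsh list --all":
virsh -c qemu:///system define /virt/images/myvm.xml
```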
<crazygir> I'm getting an:   qmail: Depends: ucspi-tcp but it is not installable
<MTecknology> I didn't think qmail was available in ubuntu repos
<nightyyy> hi guys... i recently installed the ubuntu server with cloud
<MTecknology> ChmEarl: hurray, they're still not being found
<nightyyy> is it normal to have the system just running, without any instance running in eucalyptus,
<nightyyy> with more than 50% of the ram used?
<nightyyy> and when i use "top" to see who is sharing that ram, there's no process with huge use
<MTecknology> ChmEarl: hrm... maybe it's because I specified kvm as the hypervisor instead of qemu..
<MTecknology> nope..
<MTecknology> VMBuilder.exception.VMBuilderUserError: No such hypervisor. Available hypervisors: vmserver esxi xen kvm vbox vmw6
<nightyyy> anyone with experience with cloud ?
<MTecknology> --libvirt= -  THERE
<ChmEarl> MTecknology, you have a hook into libvirt? that should do it
<MTecknology> ChmEarl: I guess I didn't read the man page close enough
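In other words, adding the flag to the earlier invocation; a sketch based on the option MTecknology found via `vmbuilder kvm ubuntu --help` (names and sizes copied from his command):

```
vmbuilder kvm ubuntu --dest /virt/images/incipio --mem 768 --cpus 2 \
    --swapsize 512 --domain incipio --ip dhcp --bridge br0 \
    --libvirt qemu:///system
```

With `--libvirt`, vmbuilder registers the new domain with libvirt itself, so it appears in `virsh list --all` without a manual define step.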
<ChmEarl> MTecknology, keep your VM's simple until you see the scheme of things... puppy linux (liveOS) is a good one
<MTecknology> ChmEarl: hurray - virsh -c qemu:///system list --all shows it - but it's named ubuntu :S - i guess back to man
<ChmEarl> MTecknology, export the domxml while its running
<lukehasnoname> so I came up with an idea
<MTecknology> ChmEarl: fyi - this was REALLY nice to find - vmbuilder kvm ubuntu --help
<MTecknology> ChmEarl: thanks for the help :)
<MTecknology> lukehasnoname: I did too, and I'm rolling it out right now
<lukehasnoname> and I don't know if it's been thought of before.. so I'll throw it out there and see if it's been 'invented' already
<lukehasnoname> MTecknology, feel free to share
<MTecknology> lukehasnoname: get a decent desktop system for all of my dev systems, and instead of gentoo - constantly building the system - use the system for work: put ubuntu vm's on it and don't break things
<uvirtbot> New bug: #556176 in openldap (main) "slapd homedir (and some enhancements...)" [Undecided,New] https://launchpad.net/bugs/556176
<MTecknology> what's your idea?
<lukehasnoname> There are many well known, standard configuration files on a server: dhcpd, named, db.*, upstart confs, etc. Each tends to have its own syntax-check mechanism of sorts, even if it only runs when you start the service... so there should be a standard header syntax (or filename convention) that lets vi(m) know what file you're editing, and check syntax on the fly. Kinda like what visudo does, but instead of being its own program, it would be a plugin system for vi, like a profile.
<lukehasnoname> This may already exist, but if it doesn't, it's pure genius. I spent hours in the past two days tracking down DNS/DHCP issues that ended up being simple config file errors. Should have been the first place I checked, but having a parser in vi would be awesome.
<MTecknology> you can do that
<MTecknology> check out gentoo, they do it a lot - ubuntu does it too, but not to the extent they do
<crazygir> am I crazy, or has ubuntu really customized the postfix install?
<crazygir> I'm trying to follow information from several different guides and it is proving difficult
<MTecknology> it's not really a header thing when it comes to vim - I think vim just applies specific syntax highlighting to specific files - adding a header to the files would be a bad idea, especially when it comes to other editors
<ScottK> crazygir: It's not radically different.
<crazygir> I'm also not understanding what's up with master.cf, nor why main.cf is missing from /etc/postfix
<crazygir> amd I missing something?
<ScottK> It should be there.
<crazygir> *am I
<crazygir> hrm
<MTecknology> crazygir: crazy likely comes into play as well - I know it does for me
<ScottK> crazygir: sudo dpkg-reconfigure postfix is a good start.
<lukehasnoname> herp-aderp: I knew vim had syntax highlighting... hm. Do you know where the info or syntax profiles are stored?
<MTecknology> lukehasnoname: no, but you could ask in #vim - they're pretty helpful
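For the record, the per-service checkers lukehasnoname is describing mostly already exist as standalone commands (names from the stock Ubuntu packages of that era; the config paths are the usual defaults and may differ):

```
named-checkconf /etc/bind/named.conf                    # BIND config syntax
named-checkzone example.com /etc/bind/db.example.com    # a single zone file
dhcpd3 -t -cf /etc/dhcp3/dhcpd.conf                     # ISC dhcpd dry run
visudo -c                                               # sudoers
apache2ctl configtest                                   # apache config
```

Hooked into vim these give roughly the on-the-fly check he wants, e.g. `:set makeprg=named-checkconf\ %` followed by `:make` while editing named.conf.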
<lukehasnoname> MTecknology, nvmd
<crazygir> w00t! thanks ScottK !!
<lukehasnoname> I occasionally ask questions before I look for the answers myself... man vim
<ScottK> crazygir: https://help.ubuntu.com/9.10/serverguide/C/postfix.html is a maintained set of documentation on postfix setup that's specific to Ubuntu.
<MTecknology> lukehasnoname: sometimes it's just too easy to ask hundreds of people a question at once instead of trying to learn it yourself by looking ;)
<ScottK> (assuming you're on 9.10, use the version for your release)
<MTecknology> ScottK: what's your opinion of setting up productions systems with 10.04 at this point?
<ScottK> Depends on what you mean by production.
<ScottK> I have a 'test server' that runs all the time, but I'm the only user.
<ScottK> I'll upgrade that soon.
<MTecknology> development server that shouldn't go down
<ScottK> If by production you mean "I have customers who rely on my service", then I wouldn't.
<ScottK> crazygir: The biggest configuration customization for Postfix in Debian/Ubuntu is that it's chrooted by default.
<ScottK> Our docs cover this, but most tutorials you find on the web don't, so it is better to stick with Ubuntu specific docs.
<crazygir> I was just banging my head regarding the missing files and needing to run dpkg-reconfigure
<MTecknology> ScottK: does a chroot help security much?
<crazygir> MTecknology: why not?
<crazygir> depends on how complex maintenance is though
<crazygir> when officially supported like this, it is usually pretty straightforward
<ScottK> MTecknology: It's not a substitute for apparmor or selinux, but it does help.
<ScottK> The trickiest part is getting services into the chroot and I think we have those bugs all licked.
<ScottK> Also, unlike rpm based systems, additional run-time capabilities that most people won't need are split into separate binaries (like postfix-mysql), so you don't have more installed/running than you actually need.
<crazygir> what is the difference between mydestinations and virtual domains?
<MTecknology> I hate it when ssh keys don't magically work perfect
<crazygir> hah
<crazygir> always some lining up to do
<MTecknology> crazygir: I installed a new server; mkdir .ssh; vim .ssh/authorized_keys2; (put in contents of .pub); chmod 750 .ssh; chmod 644 .ssh/*; exit; ssh 192.168.1.111
<MTecknology> crazygir: should work perfect just like magic, right?
<MTecknology> I also just tried with ssh-copy-id
<MTecknology> it will copy the key - but still no ssh login....
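For reference, the conventional server-side setup (also covered in the channel's earlier ssh discussion): `~/.ssh` must be 0700 and the file is `authorized_keys` (the `authorized_keys2` name is deprecated), mode 0600. A minimal sketch - the placeholder key string and the `mktemp` fallback are only there so the demo is self-contained:

```shell
# server-side: install a client's public key for one account
PUBKEY="ssh-rsa AAAA...placeholder client@laptop"   # normally the contents of the client's id_rsa.pub
HOME_DIR=${HOME_DIR:-$(mktemp -d)}                  # the target account's home; mktemp is demo-only
mkdir -p "$HOME_DIR/.ssh"
chmod 700 "$HOME_DIR/.ssh"                          # sshd's StrictModes rejects group/world access here
printf '%s\n' "$PUBKEY" >> "$HOME_DIR/.ssh/authorized_keys"
chmod 600 "$HOME_DIR/.ssh/authorized_keys"          # one public key per line
```

If the account's home is ecryptfs-encrypted, sshd can't read authorized_keys until the home is mounted - which matches the failure MTecknology hit; pointing `AuthorizedKeysFile` in sshd_config at a location outside the encrypted home is the usual workaround.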
<MTecknology> oh... problem with ecryptfs
<MTecknology> ScottK: how do I stop using ecryptfs for a home directory?
<crazygir> I do it differently, so I can't comment
<crazygir> check your system's manpages for ssh too, not sure about the auth_keys2
<crazygir> also.. use -vvv for debugging
<MTecknology> definitely an issue with ecryptfs
<MTecknology> i give - time to reinstall the server and do so without ecryptfs
<MTecknology> I don't think it's meant for servers anymore
<crazygir> do I need to specify configuration options such as: virtual_mailbox_base, if all I'm doing is then forwarding the emails to another domain?
<MTecknology> virt-viewer is connected to the vnc server but all I get is a blinking cursor.. GAH!
<MTecknology> closer anyway - sleep time
<fintn> anyone knows why sound only works for root after installing alsa on lucid?
<fintn> (I'm using an onboard Intel HDA card)
<twb> I don't suppose anybody still cares, but Hardy's m-a borks, at least on netfilter-extensions-source, because of an implicit dependency on dpatch.
<twb> http://paste.ubuntu.com/409897/
<cloakable> twb: Probably not. That version isn't supported anymore, iirc.
<twb> April 2008 plus five years makes it supported until April 2013.
<ttx> twb: we do care and yes it's still supported.
<twb> (Except for all the packages in LTS that aren't considered "server" packages, sigh.)
<ttx> what's m-a ?
<twb> ttx: module-assistant.
<twb> ttx: the thing before DKMS
<ttx> ack
<ttx> twb: how is the error in your log related to dpatch ?
<twb> ttx: that was a different error :-)
<ttx> ah :)
<uvirtbot> New bug: #556285 in samba (main) "cannot change password of AD user when using pam_winbind" [Undecided,New] https://launchpad.net/bugs/556285
<_ruben> anyone have any clue as to what could the cause of messages like this: sudo: pam_unix(sudo:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=  user=root
<cloakable> failed sudo commands
<_ruben> but it doesnt mention which user supposedly caused it
<cloakable> have a root through your logfiles :)
<cloakable> auth.log, I think
<cloakable> Hmmm
<cloakable> _ruben: It seems the user trying to sudo is root
<_ruben> why would root fail to sudo as root?
<cloakable> i've not the foggiest.
<_ruben> and i'd expect logname=root in that case
<_ruben> hrm .. the very first entry does show logname=root and its on a tty, now i know who to ask/slap
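For anyone else chasing these, the useful discriminators are the `logname=` and `tty=` fields, as _ruben found; a self-contained sketch (sample log lines inlined so it runs anywhere - real entries live in /var/log/auth.log):

```shell
# extract the session-identifying fields from sudo auth failures
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Apr  5 10:01:02 host sudo: pam_unix(sudo:auth): authentication failure; logname=root uid=0 euid=0 tty=/dev/tty1 ruser= rhost=  user=root
Apr  5 10:05:09 host sudo: pam_unix(sudo:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=  user=root
EOF
# pull out just logname= and tty= from each failure line
grep 'pam_unix(sudo:auth)' "$LOG" | grep -o 'logname=[^ ]*\|tty=[^ ]*'
```

When `logname=` is empty (the second sample), the session has no controlling login name - typically a daemon or cron invoking sudo - which is why the first, fully-populated entry is the one worth following up on.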
<uvirtbot> New bug: #556312 in libvirt (main) "libvirt packages should not Recommend hypervisor packages" [Undecided,New] https://launchpad.net/bugs/556312
<uvirtbot> New bug: #556315 in libvirt (main) "problem with operations on qemu/kvm guest" [Undecided,New] https://launchpad.net/bugs/556315
<uvirtbot> New bug: #556332 in bind9 (main) "leftover /etc/init.d/bind9.dpkg-dist on 9.10 -> 10.04" [Undecided,New] https://launchpad.net/bugs/556332
<uvirtbot> New bug: #556342 in samba (main) "winbind pam profile doesn't get installed or removed when package is installed/removed" [Medium,In progress] https://launchpad.net/bugs/556342
<uvirtbot> New bug: #556343 in bind9 (main) "upgrade error on 8.04 -> 10.04 " [High,New] https://launchpad.net/bugs/556343
<incorrect> newbie question, how on earth do you get into the grub menu now, it goes past so fast and there is no hit esc to get the menu
<jasonmchristos> try searching on how to edit grub config to add a 30 second pause
<incorrect> its a bit late for that,
<jasonmchristos> how so
<jasonmchristos> i think the file wouls be in /boot
<incorrect> i goofed my bonding and well, the system is grindingly slow as it times out trying to get the DNS entry for ldap servers
<incorrect> oh thank god i set a root password
<jasonmchristos> why are you so thankful about that
<incorrect> because i can get into the system
<jasonmchristos> whats the difference
<jasonmchristos> u cant get into the system with other usernames?
<incorrect> set up ldap auth, break the network config
<incorrect> then try logging in using a local user
<zul> ttx: ping i backported mod-reqtimeout for lucid, its suppose to help against slowloris, do I need a FFE for it?
<ttx> zul: is is a new source package ?
<zul> ttx: nope its a patch
<twb> slowloris as in Solaris?
<zul> twb: no slowloris as in apache dos hack
<twb> Oh.
<ttx> zul: does it add new config directives ?
<jasonmchristos> i got all these rootkits - i dont think im ever going to use beta software again
<zul> ttx: yeah i backported it from 2.2.16 and enabled it
<ttx> is it disabled by default ?
<jasonmchristos> im considering going back to ubuntu 8.04
<zul> nope enabled by default
<twb> I don't run fancy-pants httpds, because dynamic content is new-fangled rubbish :-)
<ttx> zul: definitely FFe-worthy, I wonder if it should not be disabled by default to ease its acceptance
<henkjan> incorrect: hold shift while booting to enter the grub2 menu
<zul> ttx: ack
<ttx> zul: having someone from security comment on its desirability would definitely help
<twb> jasonmchristos: if you're really concerned about security, you might want to look at OpenBSD.
<zul> ttx: sure
<mdeslaur> jasonmchristos: rootkits? what rootkits?
<jasonmchristos> thats what they said about linux when i was on microsoft then another room told me solaris is unhackable
<jasonmchristos> rootkits are basically backdoors
 * ttx chuckles
<jasonmchristos> back orifices
<twb> I think mdeslaur's point is that you don't get a rootkit unless you're already compromised some other way.
<ttx> jasonmchristos: we know what rootkits are. What rootkits did you get "running beta software" ?
<mdeslaur> jasonmchristos: I know what a rootkit is. Please tell me what rootkit you have.
<jasonmchristos> one scanner says SUCKIT the other says Xzibit
<twb> jasonmchristos: see also #ubuntu-hardened
<jasonmchristos> thanks
<jasonmchristos> this was on the lucid beta
<jasonmchristos> im still going to scan my server
<mdeslaur> jasonmchristos: those are probably false alerts in the scanning software you used. Could you tell me which scanners you used so I can fix the false alerts?
<jasonmchristos> rkhunter and chkrootkit
<jasonmchristos> by default lucid had some open port
<jasonmchristos> i guess thats where the rootkit got in
<\sh> jasonmchristos, lucid beta has no rootkits by default...if you have at least one, you got it before...or the scanner is wrong
<jasonmchristos> in the 6xxxx range
<twb> jasonmchristos: what open port?
<mdeslaur> jasonmchristos: did you use the rkhunter and chkrootkit packages, or did you download them?
<jasonmchristos> packages
<jasonmchristos> from repo
<jasonmchristos> well as soon as i installed lucid it had a listening port
<jasonmchristos> i installed it on a blank drive
<twb> jasonmchristos: did you install using the *server* install CD?
<jasonmchristos> i dont mean to bother you guys but im actually talking about a desktop - my server is karmic; going to scan it and make sure it didnt creep its way over there
<twb> jasonmchristos: that's an important datum you should've mentioned up-front.
<\sh> jasonmchristos, when you tell us something about a rootkit, an open port, and "it got through this open port" we are very interested...because this would be a serious security issue...which needs to be addressed...
<twb> jasonmchristos: did nmap or netstat/ss report what process was listening on that port?
<jasonmchristos> i know but i figured server is the same except with the desktop package installed
<jasonmchristos> thats what i did not know how to check
<jasonmchristos> i also have a packet sniffer installed on wlan0
<twb> jasonmchristos: is the port open right now?
<jasonmchristos> let me try an check
<jasonmchristos> why u want to login?
<jasonmchristos> lol
 * \sh just remembered the story "The Boy Who Cried Wolf"
<twb> jasonmchristos: no, so that we can (dis)prove your assertions
<jasonmchristos> no im serious im not cring wolf
<jasonmchristos> what u think rkhunter and chkrootkit r lying?
<mdeslaur> jasonmchristos: rkhunter detecting Xzibit is a false alert, I can reproduce it and will fix it
<twb> jasonmchristos: I think it's likely your analysis is faulty.
<\sh> jasonmchristos, yes...
<jasonmchristos> dont matter, im going to do a fresh install - karmic never produced these
<jasonmchristos> someone go ahead and run backtrack at my ip
<twb> jasonmchristos: we can't fix an issue if we can't reproduce it.
<mdeslaur> jasonmchristos: do as you wish
<jasonmchristos> well what do you want me to do
<jasonmchristos> u want my rkhunter and chkrootkit logs?
<mdeslaur> jasonmchristos: I just told you rkhunter is broken, it's detecting a rootkit in lucid for everyone. I'll get it fixed.
<twb> jasonmchristos: what I want you to do is answer my questions.
<jasonmchristos> im looking at my netstat and there are so many ports open now i cant remember which was open after a fresh install
<jasonmchristos> trying to answer you
<jasonmchristos> what about the packet sniffer on wlan0?
<twb> jasonmchristos: what about it?
<jasonmchristos> that also a false alarm?
<twb> !smart questions
<twb> Stupid bot
<jasonmchristos> go go gadget bot
<jasonmchristos> lol
<Pici> !gq
<ubottu> Are you sure your question allows us to help you? Please read http://www.sabi.co.uk/Notes/linuxHelpAsk.html to understand how to ask a 'better' question.
<mdeslaur> jasonmchristos: please post your rkhunter and chkrootkit logs
<mdeslaur> jasonmchristos: or, alternatively, you can send them to security@ubuntu.com and I'll look at them
<jasonmchristos> ok hold on - just for you guys ill let u tinker for a bit, but after this if you can help explain or direct me to a tut on how to inject the crypto disk key for my home directory into a fresh install - those were my plans
<twb> jasonmchristos: while you're at it, the output of "sudo ss -nap"
<jasonmchristos> ok im on it
<jasonmchristos> where is the chkrootkit logs?
<jasonmchristos> i already attached the rkhunter
<jasonmchristos> ill just run it again and cut n paste
<ttx> smoser, hggdh; ping me when around
<jasonmchristos> funny, all of a sudden it isnt detecting the SuckIT rootkit
<twb> jasonmchristos: btw, you might want to filter 8080 on the internet side unless you actually want people banging against it.
<jasonmchristos> it poen?
<jasonmchristos> open?
<twb> jasonmchristos: it's open at the TCP level, meaning that squid has to 401 instead of ICMP rejecting it cheaply in the kernel.
<twb> Normally I'd tell internal services to only bind to internal interfaces, but I don't know offhand how to tell squid that.
<twb> (That and a default-deny netfilter.)
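For the record, squid can be told to listen on an internal address via `http_port` in squid.conf; a fragment (the LAN address and port here are assumptions):

```
# /etc/squid/squid.conf - bind the proxy to the internal interface only
http_port 192.168.1.1:3128
```

With the proxy off the external interface, the kernel's default-deny netfilter rules twb mentions can reject outside connection attempts cheaply instead of squid answering them.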
<jasonmchristos> twb: thats my remote manegment on my router
<jasonmchristos> i figured thats better to leave open than an SSH port
<jasonmchristos> but then i can always use 8080 to open my ssh port when i need it
<jasonmchristos> whatever port that was, it was upnp capable - but its not up anymore, because i dont see any upnp ports open; right after a fresh install it was in the 6xxxx range
<jasonmchristos> suckit must have self destructed but rkhunter still detects xzibit
<jasonmchristos> have the email ready
<twb> Isn't upnp one of those gaping-hole "features"/
<jasonmchristos> lol
<jasonmchristos> i think im going to disable it
<mdeslaur> jasonmchristos: rkhunter is broken
<twb> http://en.wikipedia.org/wiki/Upnp#Lack_of_Default_Authentication
<jasonmchristos> ok who wants this email with all the outputs
<twb> !pastebin
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://tinyurl.com/imagebin | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<jasonmchristos> its hard to get voip like ekiga to work without upnp because it uses random ports
<twb> My job is to make the network safe, not useful
<jasonmchristos> http://paste.ubuntu.com/410006/
<jasonmchristos> lol good point twb
<jasonmchristos> do you work for ubuntu or another company?
<RoyK> ubuntu isn't a company :)
<RoyK> canonical is, though
<\sh> twb, this would make a nice quote ;) "twbs job is to make the network safe, not useful" ;)
<twb> I work for cybersource.com.au
<twb> FSVO work = drink their coffee and use their link.
<twb> jasonmchristos: the only thing listening on a high port there is skype, which is not part of Ubuntu.
<jasonmchristos> yeah it seems to have disappeared
<twb> jasonmchristos: well, get back to us when you can reproduce the issue
<jasonmchristos> the xzibit thing is there
<twb> We've already covered that
<jasonmchristos> if u think its a false alarm u want to fix it
<jasonmchristos> thought u guys want to fix it
<mdeslaur> jasonmchristos: I've opened bug 556455 for the false alarm, and I'll fix it today
<uvirtbot> Launchpad bug 556455 in rkhunter "rkhunter incorrectly detects Xzibit Rootkit in Lucid" [Undecided,New] https://launchpad.net/bugs/556455
<twb> jasonmchristos: bugs that can't be reproduced can't be isolated, and thus can't be fixed.
<psyferre> hey folks, can anyone help with a permissions issue please?  I need to give a group rw access to a mounted network share.  I've tried everything that seems relevant on https://help.ubuntu.com/community/MountWindowsSharesPermanently, but can only manage to write as the root user.  The other user in the group cannot write.
<twb> psyferre: is the network share CIFS, NFSv3, NFSv4, or something else?
<jasonmchristos> looks like samba
<psyferre> twb: I tried cifs and smbfs
<twb> Sorry, I didn't read
<psyferre> The machine on the other end is a netapp storevault
<twb> psyferre: smbfs is obsolete; you should only need to try cifs.
<jasonmchristos> yeah but i think you need to edit the fstab
<psyferre> twb: okay, thanks.  My current fstab has: //nas01/shares/ /network/nas01/shares cifs rw,_netdev,username=DOMAIN/user,password=password,dir_mode=775,gid=1001   0 0... does that seem right?
<twb> -ogid is how I'd do it with NTFS; and that wiki article seems to indicate the process is the same for CIFS.
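One gap worth checking in the entry above: `dir_mode` only governs directories, so files keep the default mode and the group still can't write them; CIFS takes a separate `file_mode`. A hedged fstab sketch (the share, group id, and credentials path are placeholders), using a root-only credentials file rather than a plaintext password in fstab:

```
# /etc/fstab
//nas01/shares  /network/nas01/shares  cifs  rw,_netdev,credentials=/root/.smbcred,gid=1001,dir_mode=0775,file_mode=0664  0 0

# /root/.smbcred (chmod 600)
username=user
domain=DOMAIN
password=secret
```

As noted later in the channel, `mount -a` won't re-apply changed options to an already-mounted share; umount and mount the mountpoint again after editing fstab.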
<twb> psyferre: do you really have a user on the NAS called "DOMAIN\user"?
<psyferre> twb, well, no... I edited out the real domain and user name for security purposes
<twb> If I can do something useful by knowing your domain and username, you have BIG problems.
<psyferre> twb: heh, true enough. :)  I'd prefer to be on the safe side though :)
<twb> Does "getent group 1001" return the correct group?
<psyferre> yes
<jasonmchristos> i think you need to add the ,user option
<twb> Is it a primary or secondary group?
<psyferre> secondary
<twb> Has the test user logged out and back in since you added them to that secondary group?
<psyferre> jasonmchristos: just tried adding ,user and there's no change
<jasonmchristos> psyferre: i wrote a howto for nfs http://blog.jasonmchristos.info/search?q=nfs but i had to make it readable by something other than root so try this
<jasonmchristos> just change it to cifs or whatever
<psyferre> twb: hmm... i've been logged in as root and then used su to switch to my other user
<twb> psyferre: do you have the other user's password?  If so, best to test by switching to a getty and doing a full login.
<twb> psyferre: otherwise, at least try with "su - fred" instead of "su fred"
<MTecknology> Is indentation important in /etc/network/interfaces ?
<jasonmchristos> whatever i did in the howto made my nfs mountable by users without sudo
<twb> MTecknology: no.
<psyferre> thanks, jasonmchristos, I'll check that out
<MTecknology> twb: thanks- I musta screwed up somewhere else - I hate not being able to fix my own server because I screwed up :(
<twb> MTecknology: pastebin it
<psyferre> twb: hmm... thanks.  I didn't know there was a difference between the two
<psyferre> twb: i just opened a new putty session and logged in with the second user directly, same deal
<twb> psyferre: did the mount operation give an error?
<psyferre> twb: no
<twb> psyferre: does dmesg contain anything suspicious?
<MTecknology> twb: to the best of my memory - I can't touch the system until somebody fixes it for me - http://dpaste.com/179992/
<twb> And to confirm: root can still read and write files in the share?
<jasonmchristos> psyferre: after adding the user u have to reboot i think
<jasonmchristos> because the fstab needs to be reloaded
<zul> mdeslaur: hey would you have some time to review a apache backport patch for me later today?
<psyferre> twb: Nothing seems suspicious in dmesg, root can still read and write
<psyferre> jasonmchristos: mount -a doesn't do that?
<twb> psyferre: pastebin the output of "ls -la" on the mountpoint.
<mdeslaur> zul: sure
<zul> mdeslaur: nifty thanks
<psyferre> twb: http://pastebin.com/wwgQbDhJ
<jasonmchristos> psyferre: i think you just need to add the use option to fstab and reboot so it reloads fstab
<twb> That's the mountpoint's parent
<jasonmchristos> *user
<twb> but tom:root looks pretty suspicious.
<twb> I'd expect it to be tom:my_cifs_group or whatever
<jasonmchristos> then users other than root will be able to mount the share
<psyferre> twb: yeah, i agree.
<twb> psyferre: did you unmount and re-mount after editing fstab?
<psyferre> jasonmchristos: unfortunately, a reboot isn't a good option at the moment... one of these machines is a production database
<twb> mount -a won't notice changes to mount options in fstab
<psyferre> twb: AH.  That's it then.
<jasonmchristos> ok well i think fstab just hasnt reloaded with the , user option
 * psyferre puts his head in his hands and cries softly
<smoser> ttx, here
<jasonmchristos> i dont know exactly how to reload fstab without a reboot
<cloakable> jasonmchristos: edit it, then remount the drive
<bogeyd6> twb, mount -a is made to re-read fstab
<cloakable> Ah
<bogeyd6> twb, mount -a causes all file systems mentioned in fstab (of the proper type and/or having or not having the proper options) to be mounted as indicated, except for those whose line contains the noauto keyword
<psyferre> twb: one moment, gotta remember how to unmount.....
<psyferre> bogeyd6: so unmounting is not necessary in this case?
<bogeyd6> umount
<psyferre> that's what I thought... just umount and then the share name?
<bogeyd6> i came in halfway
<bogeyd6> ill let the other finish
<twb> bogeyd6: erm, even if they're ALREADY mounted?  You're suggesting mount -a will result in a mount -oremount on my root filesystem?
<ttx> smoser: looking at http://iso.qa.ubuntu.com/qatracker/test/3880, we only have the "instance run" test. I asked hggdh to add a cloud-config test, ISTR you have something for that... is there a testcase written somewhere in the testcases wiki ?
<mdeslaur> jasonmchristos: I've just uploaded a fixed rkhunter to the archive that doesn't falsely detect the rootkit anymore
<mdeslaur> jasonmchristos: in a couple of hours it should be available
<jasonmchristos> cool, mdeslaur how were you sure that it was false?
<ttx> smoser: could you sync with hggdh so that a test covering that is available for cloud images in general (UEC+EC2) ?
<bogeyd6> twb, im saying if you edit fstab and then mount -a it will mount the filesystem with the new options
<twb> psyferre: is it working now?
<jasonmchristos> mdeslaur: will this update be available to replace the main rkhunter package?
<ttx> smoser: Also there isn't any EC2 candidate right now, so I can't see if the "single instance" test was removed (we should have "multiple instances" and "cloud-config")
<mdeslaur> jasonmchristos: because I looked at the check it was performing, and it wasn't right for lucid
<mdeslaur> jasonmchristos: yes, it will replace the one that is there
<twb> bogeyd6: you're wrong, at least on Sid.
<jasonmchristos> do i get karma points for this ?
<smoser> ttx, http://bazaar.launchpad.net/%7Esmoser/%2Bjunk/ec2-test/files/head%3A/user-data/ is what i have for cloud config.  I put together 3 different cloud config files that exercise a fair amount of the function. (ud-* there)
<smoser> ttx, well, lets make an ec2 candidate then
<twb> jasonmchristos: you get a warm fuzzy feeling of having saved some other schmuck from shaving the same yak
<jasonmchristos> lol
<psyferre> twb, bogeyd6: I get unmount error 16 = Device or resource busy  when trying to umount... do i need to stop cifs or something?
<smoser> i'll start writing a test case for the user data
<twb> psyferre: ask lsof what is using that filesystem
<twb> psyferre: if you're an incurable cd-er, it's probably your shell
<twb> There's mount -f and mount -l, but those are plan B.
<smoser> ttx, should i just edit http://testcases.qa.ubuntu.com/System/EC2CloudImages ?
<ttx> smoser: sounds good
<psyferre> twb: root@prometheus02:/# lsof /network/nas01/shares lsof: WARNING: can't stat() cifs file system /network/nas01/shares       Output information may be incomplete.
<twb> psyferre: ugh
<ttx> if there is some duplication of info between the EC2 and the UEC tests, maybe play some include game to avoid copying
<ttx> SmokeyD: ^
<ttx> smoser: ^
<ttx> arh :)
<twb> psyferre: if this host isn't important, just bounce it.  It's not worth isolating.
<psyferre> twb: What do you mean by incurable cd-er?
<jasonmchristos> im going to start banging a hammer on my kitchen cabinets
<twb> As in, you cd into whatever dir you're going to use instead of using your shell's tab completion
<ttx> smoser: also on this page the "multiple instance" and "single instance" tests should be collapsed into a single test with clear instructions on what should be tested.
<psyferre> twb: hmm... i use tab completion anytime I know exactly where I'm headed.  So, cd-ing through a directory tree is not just inefficient but somehow generates open handles?
<psyferre> twb: i was able to reboot this host and now my secondary user doesn't have read OR write permission
<MTecknology> twb: did you see anything wrong with that pastebin?
<twb> MTecknology: didn't look
<twb> MTecknology: I assume you create the bridge elsewhere?
<smoser> ttx, so you're wanting one page that lists tests of UEC images, and one that lists test of EC2 images ?
<twb> MTecknology: how does it know whether eth0's gateway or br0's gateway should be the default?
<twb> psyferre: what does ls -la say about the mountpoint?
<ttx> smoser: not really. If some tests are applicable to both, then one is sufficient
<ttx> smoser: if only part of the test applied, then include could help
<ivoks> hi
<ivoks> oh, ttx
<ivoks> mail
<ivoks> reply
<uvirtbot> New bug: #556487 in libvirt (main) "virConnectOpen chooses qemu:///session before qemu:///system" [Undecided,New] https://launchpad.net/bugs/556487
<smoser> ttx, yeah. so, for ec2, most of the test running is automated.
<ivoks> now
<ivoks> :)
<smoser> i'm not sure if that works or not for euca
<MTecknology> twb: bridge_ports eth0
<psyferre> twb: http://pastebin.com/4yxs7uPT
<MTecknology> twb: I just noticed that I should have  iface eth0 inet dhcp -> iface eth0 inet manual
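For comparison, the usual shape of a bridged /etc/network/interfaces - a sketch assuming bridge-utils is installed and eth0 is the only enslaved NIC:

```
auto lo
iface lo inet loopback

# the physical NIC gets no address of its own
auto eth0
iface eth0 inet manual

# only the bridge is configured; it enslaves eth0
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

As twb says, the indentation is cosmetic; the common mistake is leaving eth0 as `dhcp`, so both it and the bridge fight over an address and the default route.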
<ttx> smoser: I'll fire up an UEC install now for the current ISo testing, so if you have anything I should play with, let me know
<zul> mdeslaur: when you get a chance http://people.canonical.com/~chucks/apache2-mod-reqtimeout.debdiff
<ttx> ivoks: and you're sure now ?
<ivoks> ttx: yes :)
<ttx> heh
<mdeslaur> zul: wow! you backported half the new version? :)
<ivoks> !!!!
<ivoks> cluster stack has been accepted in debian
<ivoks> now we can just sync.
<zul> mdeslaur: heh....crap...
<mdeslaur> zul: url in 206-fix-potential-memory-leaks.dpatch doesn't work
<abssorb> Hi, looking for suggestions to centralise /home.  Using autofs but hangs are too frequent
<incorrect> whatever happened to the ubuntu directory server?
<cloakable> abssorb: I use NFS for that
<kklimonda|G1> incorrect: there has been something like that?
<cloakable> abssorb: export /home from the server, and import it on your clients.
<abssorb> cloakable: autofs uses NFS
<incorrect> kklimonda|G1, there was
<cloakable> abssorb: use nfs directly, cut out autofs
<psyferre> twb: I chowned root:zrmgroup until the path to the share looked correct http://pastebin.com/AxDMSYqS
<twb> psyferre: er, yeah... good luck with that.
<abssorb> cloakable: Interesting. But how do two users mount their own /home/username with their own permissions?
<twb> psyferre: those modes look pretty dodgy.
<twb> psyferre: if you umount and mount, does it all reset back to the broken state?
<psyferre> twb: I agree... looks like from the mount on everything went off.  I can still only read and write as root, and secondary user can't even read
<cloakable> abssorb: /home is mounted from the server, so all home directories are available, and if you keep UID/GID constant between desktop/server, the permissions will be correct too.
<psyferre> twb: still can't umount... says the device is busy
<cloakable> abssorb: I'm just careful about adding users, myself :)
<abssorb> cloakable: Yes, and UID and GID match. In my experiments with plain NFS, the user mounting the volume sets the UID and the GID. So user-switching is compromised.
<psyferre> twb: I've just have to keep rebooting.
<cloakable> abssorb: if you set it in fstab, /home is mounted during bootup, and stays mounted.
<cloakable> abssorb: so home directory <user> stays belonging to that user, and home directory <user2> stays belonging to <user2>
<cloakable> abssorb: it's as if the partition is local, rather than on the central server.
<RoAkSoAx> ivoks, heya!!
<ivoks> RoAkSoAx: hi
<RoAkSoAx> ivoks, how is it going?? Were you able to review the packages I prepared?
<zul> ivoks: congrats
<abssorb> cloakable: Ah I see, you mean instead of mount /home, I mount /home/user1 and /home/user2.  That would mean re-writing the new users creation tools on the server.  OK I suppose. Before I do that, what advantage does NFS give me over Autofs, in terms of surviving hangs?
<cloakable> abssorb: no, put "server:/home    /home    nfs    auto    0    0" into your fstab. And advantages? If the server goes down, it'll hang until the server comes back up, then it'll resume working again.
<mdeslaur> zul: what about this: http://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/docs/manual/mod/allmodules.xml?r1=917211&r2=917210&pathrev=917211
<zul> mdeslaur: it didnt apply
<cloakable> abssorb: If you mount the nfs export as if it's a partition, it'll act like one. permissions and all. No user will need to mount their /home directory, because /home itself is mounted on bootup.
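The two files cloakable's scheme needs, sketched for a hypothetical 192.168.1.0/24 LAN — server name, subnet, and export options are illustrative, and UIDs/GIDs must match between server and clients:

```
# server: /etc/exports
/home   192.168.1.0/24(rw,sync,no_subtree_check)

# client: /etc/fstab — all of /home mounted at boot, no per-user mounting
server:/home   /home   nfs   auto   0   0
```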
<mdeslaur> zul: how could it not apply...
<zul> mdeslaur: dunno it didnt
<abssorb> cloakable: I've tried that, it only works properly for a single user per client.
<cloakable> abssorb: I've never had problems with it. Are you using autofs+nfs or pure nfs?
<cloakable> abssorb: because when I mount from my server, all the permissions are correct.
<abssorb> Both
<mdeslaur> zul: the rest looks great
<abssorb> cloakable: They would be for the first user to log in.
<zul> mdeslaur: nifty thanks
<cloakable> drwxr-xr-x 63 cloakable cloakable 16384 2010-04-06 14:06 cloakable
<cloakable> drwxr-xr-x  3 rune rune 88 2010-03-27 18:34 rune
<cloakable> that's on NFS
<abssorb> cloakable: OK thanks.  There must be another reason why it didn't work on our setup
<cloakable> abssorb: yeah. Not sure about that.
<abssorb> cloakable:  Re the advantage, about recovering when a server comes back up, autofs relies on nfs to achieve this, but it's not working.  Will removing the use of autofs somehow influence this? Because that would be the only reason to stop using it (it works perfectly otherwise).
<RoAkSoAx> zul, may I ask what are the congrats to ivoks for? :)
<zul> RoAkSoAx: get the cluster stuff in debian, congrats to you too
<RoAkSoAx> zul, hehe I did little this time but thanks :)
<cloakable> abssorb: I'm not sure, I don't use autofs myself, it always seemed inelegant to me :)
<abssorb> cloakable: It does look inelegant :)  But it is actually an elegant way of solving some permission problems. I looked over my notes; when I tried last, a direct export of /home over plain NFS resulted in problems for users whose primary group was other than the default. This gave problems with users getting ".dmrc wrong permissions" errors.
<cloakable> abssorb: Aha. I generally run a standard network :)
<abssorb> cloakable: You've given me lots of useful things to think about. Thanks!
<RoAkSoAx> ivoks, whenever you have time please review the cluster packages at ppa:andreserl/ha, to be able to copy them to the ubuntu-ha ppa and then request the sync to get them into lucid asap. I'll be off for a while, so just let me know. Thanks :)
<ivoks> RoAkSoAx: don't copy those
<ivoks> RoAkSoAx: there are some changes needed
<ivoks> RoAkSoAx: we'll sync from debian
<ttx> zul, smoser, kirkland: I'll need each of you to cover part of the ISO tests, mathiaz won't have much time for it this time around. Expect a new server beta2 candidate in the next hours
<smoser> ttx, should we call the ec2 images candidates ?
<smoser> or do we need to respin?
<ttx> I'd wait for the Server ISO respin, to be sure to catch the latest boot process
<zul> ttx: sure no problem
<hggdh> ttx: I am here, you pinged me?
<ttx> hggdh: yes, about the ISO testcases
<ttx> hggdh: the UEC topologies look alright
<uvirtbot> New bug: #556528 in euca2ools (main) "euca2ools config file overrides environment" [Undecided,New] https://launchpad.net/bugs/556528
<ttx> hggdh: please sync with smoser about the cloud images testcases
<hggdh> ttx: will do
<ttx> hggdh: we need "multiple instance run" and "user-data test" for EC2 cloud images, and "instance run" and "user-data test" for UEC images
<ttx> hggdh: he is working on the contents
<RoAkSoAx> ivoks, those packages are the same as the debian-ha hg repo, including latest commits for cluster-glue (your perl changes and --disable-fatal-warnings) so in essence they are the same packages
<hggdh> ttx: will they be mandatory, or optional?
<ttx> mandatory
<hggdh> roj
<ivoks> RoAkSoAx: ok then, i'll recheck
<ivoks> RoAkSoAx: i've looked at those yesterday
<alvin> I'm experiencing very high loads on ubuntu-server with KVM since the latest kernel update. Any pointers as to where to search for the cause? kern.log  contains "task xxxx blocked for more than 120 seconds" and a trace. Virtual machines crash randomly. The trace contains stuff with 'ext4' in it.
<alvin> I'm seeing this on 3 servers now, after the update.
<alvin> Oh, and libvirt is frozen. I can't give any virsh commands anymore
<alvin> No destroying/rebooting the crashed servers possible. I don't like restarting libvirt, and I certainly don't like restarting the servers. They are headless and I'm certain the root device will not be found anymore.
<RoAkSoAx> ivoks, ok :) anyways...the only change I dont think I have in pacemaker is the commit that was done 2 hours ago, but it's easily mergeable. cluster-glue already include the changes of the commit done an hour ago. and as I said, heartbeat and cluster-agents are 7 days old in the hg repo, which are the same I already have packaged
<ivoks> great
<alvin> Part of the logs (scroll for stack stuff): http://paste.ubuntu.com/410068/
<kirkland> ttx: sure thing, i usually do a few rounds myself
<kirkland> ttx: i'll cover the UEC and raid ones
<alvin> Whoah. I lost connection to the virtual host and all its guests... I did nothing. Server dead.
<alvin> Bye, bye server :-( I'll have to drive there now
<alvin> Meanwhile, two other servers are slowly crashing and I found the closest bug: bug 522014
<uvirtbot> Launchpad bug 522014 in linux "kernel bug 2.6.31-17-server with hung_task_timeout_secs " [Undecided,New] https://launchpad.net/bugs/522014
<alvin> It could also be bug 276476
<uvirtbot> Launchpad bug 276476 in linux "INFO: task blocked for more than 120 seconds causes system freeze" [Medium,Fix released] https://launchpad.net/bugs/276476
<zul> ill take the hardy->lucid upgrade tests
<sherr> alvin: this is server 9.10 64 bit or ?
<sherr> alvin: when you say lost connection to "virtual host and all its guests" - what is the host? Surely not a VM itself?
<alvin> sherr: No, the hosts are Ubuntu 9.10, amd64 (all of them). The guests are ubuntu Jaunty, Karmic and Windows. With lost connection, I mean: I have no longer access to the guests (ssh, or their services) and ssh to the host.
<alvin> I can no longer ping the host either
<alvin> Two other hosts here show the same: if there is heavy I/O (cp a kvm image for example), the kernel logs starts to show errors, the copy goes slow (5MB/sec) and the load goes insane. After a while things calm down and the copy speeds up again.
<alvin> All machines run lvm+ext4. 2 of them have mdadm raid
<alvin> the other one has hardware raid, but all show these hung_task_timeout_secs errors (for kvm, kjournald, pdflush)
<sherr> alvin: hmm. The log you pasted almost looks like a bad disk or cable (ATA) ... not sure.
<alvin> Ah, that was before that machine went down. I checked the RAID (mdadm status). everything was perfectly ok
<sherr> I have left off ext4 until "now" (well, from now), waiting for everything to get shaken out. Especially for things like KVM etc. Who knows?
<alvin> And since two other machines show the same errors, we can safely assume they are related. All those machines were rebooted this weekend, so they got the latest kernel.
<alvin> It has worked well for months. Karmic has proven to be unreliable to boot, but not on the ext4 part.
<sherr> Well, good luck. I hope to retry KVM sometime in the future. So far, it's not worked so well for me (performance).
<alvin> Now, I don't know if ext4 is the cause. Looks more like an io_scheduling problem.
<alvin> Actually, I tested performance here and it was slightly better than the same machine on VMWare (only marginally)
<sherr> This is the thing with Linux/Ubuntu ... always upgrade cycle and shaking out new things (new bugs)!
<alvin> Well, you can't expect users to test your software. That's the problem.
<ttx> kirkland: if you cover the UEC, please run them manually from ISO... I want us to catch any error in the messages as well (like "bad defaults proposed")
<ttx> I'll try to cover them anyway, but two looks can't hurt
<ttx> (just did a "topology 1" test on amd64, works like a charm, fwiw)
<ivoks> has anyone tried using vmbuilder without kvm-enabled hardware?
<ivoks> shouldn't that work and just use qemu?
<jarray52> When trying to network boot, I get the following error message: Gave up waiting for root device. Common problems: ...  This happens after the DHCP request is answered and the tftp sends the kernel image. I'm having trouble with the nfs.
<jarray52> Any suggestions?
<ivoks> nfs before network?
<ivoks> nfs before portmap?
<jarray52> ivok: no
<jarray52> ivoks: I never tried using it.
<ivoks> using what?
<jarray52> ivoks: I never tried using nfs before this.
<ivoks> eh, your first NFS experience is with root on NFS?
<ivoks> try with something easier :)
<ivoks> s/try/start/
<jarray52> ivoks: Do you mean /etc/init.d/nfs-kernel-server start? I ran that command, and nmap <ipaddress> shows that nfs is up and running on port 2049 and rcpbind on port 111.
<ivoks> and nfs-common?
<jarray52> ivoks: I installed it.
<ivoks> is there a guide you are following for setting up root on nfs?
<ivoks> and which version of ubuntu are you using
<ivoks> ?
<twb> jarray52: you should be asking rpcinfo before nmap
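The rpcinfo check twb suggests, plus showmount to confirm the export is actually visible — server_ip is a placeholder for the NFS server's address:

```
rpcinfo -p server_ip      # portmapper's view: portmapper, mountd, nfs should be listed
showmount -e server_ip    # what the server actually exports
```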
<twb> ivoks: it's not really done anymore.
<twb> The nearest real-world cases I can think of are LTSP5 and puppet/cfengine.
<jarray52> ivoks: ubuntu 9.04 and i'm following this guide https://wiki.edubuntu.org/EasyUbuntuClustering/UbuntuKerrighedClusterGuide
<ivoks> so, you appended this to the kernel:
<ivoks> root=/dev/nfs nfsroot=server_ip:/path
<ivoks> ?
<alvin> jarray52: I don't want to confuse you, but I have seen that message a lot of times on non-NFS root servers. I then just keep rebooting until the root drive is found. It's a known bug.
<twb> You'd also need boot=nfs, IIRC
<twb> The relevant code is in the ramdisk, which is built from /usr/share/initramfs-tools/ and /etc/initramfs-tools/
<ivoks> alvin: or run mount -a ; exit
<jarray52> alvin: Must have rebooted 20+ times. Also, try setting a rootdelay... saw a post about it.
<ivoks> in 9.04 there is a bug where network is up *after* nfs services are started
<alvin> ivoks is right. If you wait a bit and then run mount -a, you can resume
<twb> alvin: that's not a fix, that's a workaround for not understanding your symptoms
<ivoks> meaning that you can't mount NFS share
<alvin> That bug is still in 9.10
<ivoks> sorry, not 9.04, but 9.10
<ivoks> it's only in 9.10
<alvin> twb: Nobody said it's a fix, but the real bug is not located yet as far as I know
<twb> You'll get the same symptoms if the root filesystem can't be mounted at all.
<ivoks> alvin: it's located
<ivoks> alvin: and fixed in 10.04
<alvin> Those are different problems. The NFS one is mountall, the not finding a root device is unknown unless I am mistaken
<twb> ivoks: that's the kind of bug that keeps me on LTS
<ivoks> twb: yeah, that bug didn't exist a month before release of 9.10 :)
<alvin> ivoks: but it existed before. I don't remember how long though.
<alvin> karmic does feel very 'beta'
<ivoks> backporting the fix from 10.04 to 9.10 requires knowledge of upstart code :D
<alvin> And with the current system freezes I'm experiencing, my feeling about it doesn't get better.
<ivoks> anyway, i'm sorry, but i have to leave now
<twb> ivoks: doesn't it require knowledge of the upstart service language, not upstart itself?
<twb> And amounting to Required-Start: $network
<ivoks> twb: upstart in 9.10 didn't have some features that it has in 10.04
<twb> Ugh
<ivoks> twb: lack of those features resulted in that bug
<ivoks> that and couple of others
<ivoks> most of them are fixed
<twb> upstart feels like such a NIH failure
<ivoks> but this one doesn't look easy without backporting huge part of upstart
<ivoks> it isn't
<ivoks> it's great tool
<jarray52> ivoks: When you're asking about root=/dev/nfs nfsroot=server_ip:/path, do you mean in /var/lib/tftpboot/pxelinux.cfg/default?
<alvin> Still, that version of upstart should have been left out in Karmic.
<ivoks> but it's such an important part of the system that any small error results in big problems
<twb> Especially when you compare it to the simplicity of cinit or the non-invasiveness of insserv make-style booting.
<ivoks> jarray52: yes
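A sketch of what that pxelinux.cfg/default entry might look like. Filenames, the export path, and the server IP here are examples, not jarray52's actual values; ip=dhcp tells the kernel to configure the NIC itself, since the address PXE acquired is dropped once the kernel takes over:

```
# /var/lib/tftpboot/pxelinux.cfg/default (sketch)
DEFAULT nfsboot
LABEL nfsboot
    KERNEL vmlinuz
    APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.1.100:/srv/nfsroot ip=dhcp rw
```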
<ivoks> don't forget, upstart isn't only 'booting'
<ivoks> anyway, take care... really have to go now
<twb> Meaning that it restarts services when they die (like every post-sysv init), or meaning that it puts grubby fingers into areas that it doesn't belong (like NetworkManager)?
<jarray52> ivoks: Thanks for your help.
<jarray52> twb and alvin: I will need to do a lot of googling to follow that conversation. Do both of you believe that my problem is due to a bug? My experience has been that Ubuntu 9.10 was very buggy. Tons of crashes and random behavior. So, I rolled back to Ubuntu 9.04. For the most part, I was happy with it.
<incorrect> slapd seems very broken in 9.10
<alvin> incorrect: That is because it was split up and undocumented (as far as I know. I wanted to try it, but didn't know where to start)
<twb> jarray52: it wouldn't surprise me if it's a bug, but I haven't seen enough evidence to conclusively demonstrate that it's not a simple fuckup on your part
<alvin> jarray52: I think it's a bug, yes. Let me see if I can point you to the right reports
<incorrect> alvin, apt-get install slapd?
<jarray52> twb: I would bet it's a simple fuckup on my part.
<alvin> (Damn: those are on my virtual mailserver, but the karmic host crashed due to an io_scheduler bug...)
<jarray52> jarray52: Given my lack of experience, that is very likely. However, I did follow the instructions in https://wiki.edubuntu.org/EasyUbuntuClustering/UbuntuKerrighedClusterGuide pretty closely.
<jarray52> twm: Given my lack of experience, that is very likely. However, I did follow the instructions in https://wiki.edubuntu.org/EasyUbuntuClustering/UbuntuKerrighedClusterGuide pretty closely.
<twb> Just because it's on a wiki doesn't mean it's true
<alvin> here: bug 504224 for the NFS mounts, and bug 470776. Also, bug 384347
<jarray52> twb: totally agreed.
<twb> twm wouldn't be caught dead here.
<uvirtbot> Launchpad bug 504224 in mountall "NFS mounts at boot time prevent boot or print spurious errors" [Medium,Fix released] https://launchpad.net/bugs/504224
<uvirtbot> Launchpad bug 470776 in mountall "retry remote devices when parent is ready after SIGUSR1" [Medium,Fix released] https://launchpad.net/bugs/470776
<uvirtbot> Launchpad bug 384347 in util-linux "_netdev not working" [Undecided,Confirmed] https://launchpad.net/bugs/384347
<jarray52> twb: sorry
<jarray52> twb: Given my lack of experience, that is very likely. However, I did follow the instructions in https://wiki.edubuntu.org/EasyUbuntuClustering/UbuntuKerrighedClusterGuide pretty closely.
<jarray52> that's better.
<alvin> It could be totally unrelated too
<jarray52> better question, what trouble shooting steps can I take
<jarray52> ?
<alvin> jarray52: I wouldn't know. If it IS a bug (or multiple ones) I'd suggest trying with an older version or another distribution as client. There are no good workarounds that you can try.
<incorrect> wow slapd is more broken in karmic than i thought, i am just lucky i ported my db over from jaunty
<incorrect> i wonder, is slapd broken in 10.04
<zul> incorrect: please open a bug in launchpad then
<incorrect> zul, i've found that there are others ranting about it on launchpad
<jarray52> twb: Do you have any suggestions?
<incorrect> given that lucid is released at the end of the month, i wonder if i should upgrade my firewall to it, my firewall being a toy
<jarray52> twb: Thanks for the rpcinfo pointer. That is useful. 3 versions of nfs are up and running. However, I think my problem is different. When running ifconfig on the node, i noticed that it lost its network connection. It doesn't seem to have maintained the ipaddress given to it by the dhcp server.
<alvin> incorrect: I haven't tested it yet, but it is reported that Lucid has less critical bugs than karmic, so I'll upgrade everything I can find.
<incorrect> alvin, i've found a posting on how to get slapd working in karmic, but my god its painful
<alvin> incorrect: I have decided to wait learning ldap until someone reports he can do it without the pain.
<incorrect> alvin, kvm + jaunty
<alvin> incorrect: Have you tried upgrading after that?
<incorrect> alvin, i upgraded my ldap server from jaunty to karmic, i didn't notice how broken things were as i had done all the config work
<incorrect> i just needed to do some changes and found i couldn't get phpldapadmin to work
<alvin> Aha, well, I will absolutely wait
<incorrect> i am tempted to upgrade to lucid
<alvin> I'm having enough troubles as it is booting the machines. I try not to reboot.
<incorrect> well i also found a huge bug that grub won't install on a software RAID setup
<incorrect> i cried
<ttx> kirkland: fwiw, I rewrote the ISO tracker testcases for the UEC topologies, with help from hggdh. I didn't touch any UEC doc yet, though
<kirkland> ttx: i'd like to update the UEC docs
<kirkland> ttx: i was going to do that this week-ish
<alvin> incorrect: On RAID0? RAID1 will work
<incorrect> ok this problem was only on the alternative text based installer
<incorrect> i pxe boot install my machines
<kirkland> ttx: also, i'd like to get the ISO tracker and UEC help docs to be in better sync, somehow
<kirkland> ttx: the duplication is painful
<incorrect> pxe install them,
<ttx> kirkland: I think the level of detail is different for an install doc and a testcase
<ttx> kirkland: but I agree with you it can be painful
<kirkland> ttx: obviously
<alvin> incorrect: Oh, I didn't know that. I always use the cd. I should try pxe one day
<ttx> I think most of the installer is self-explaining, the UEC doc shouldn't need to go into lots of detail, but rather expand on available topologies
<incorrect> alvin, i hope lucid has the latest kvm because that now supports booting vm's via the network,
<ttx> while the testcase must include a bullet-point set of steps to not deviate from
<alvin> incorrect: Cool, but it will not have the latest version
<incorrect> alvin, i could
<incorrect> it could
<incorrect> kvm do not release very often, iirc jaunty and karmic had the same version
<alvin> incorrect: It will be 0.7.5
<incorrect> seems to be up to date then
<incorrect> qemu is at 0.12.3
<incorrect> maybe i will run up a vm for lucid
<CVirus> I'm using kickstart to do a hands-off installation on 100 machines .. How can I get rid of all the warnings the installer prompts me for ... like weak passwords and invalid nameservers and so on?
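Nobody answered in-channel; with debian-installer-based installs the usual route is to preseed the answers so the questions never appear. Two illustrative keys — names can vary between releases, so verify against your installer's debconf database before relying on them:

```
# preseed fragment (sketch — verify key names with debconf-get-selections)
user-setup-udeb user-setup/allow-password-weak boolean true
d-i netcfg/get_nameservers string 192.168.1.1
```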
<jarray52> Does anyone know how to prevent the ipaddress from resetting after a kernel is loaded?
<jarray52> using network boot
<incorrect> i just use static ip addresses
<incorrect> jarray52, are you building servers or desktops?
<alvin> incorrect, jarray52: for servers, you can also use DHCP
<jarray52> incorrect: I'm just playing around. Let's say I'm trying to get a server network booted. Right now, there is no disk attached. I'm able to load the OS using DHCP.
<alvin> jarray52: I have no experience with that. Are you saying it works with DHCP, but not with a static address? (Can you even set a static address in that situation?)
<jarray52> alvin: I'm using a DHCP server to issue a static ip address.
<jarray52> alvin: I'm able to issue the static ip address and then use tftp to boot the OS.
<alvin> Well, that's still DHCP, but ok.
<jarray52> Out of curiosity, is it possible to search these irc chat discussions to see if someone had a similar problem?
<alvin> jarray52: There are logs, but I don't know whether you can search them. Maybe use the ubuntu-server mailinglist.
 * alvin is going home. No backlog due to crashed karmic quasselcore :-(
<jarray52> incorrect: By playing around, I meant trying to learn how this stuff works. I'm trying to network boot a machine.
<skrite99> anyone know much about a deadlock with InnoDB ?
<incorrect> jarray52, during the install the installer app configures the network, its outside of the pxe boot getting you enough OS to boot from nic bios
<zul> has the iso been respun yet?
<ttx> smoser: we don't have any EC2 image candidate yet ?
<ttx> or you wait for the respin
<smoser> i thought you suggested wait for respin
<ttx> ack
<ttx> there is a rumour that the latest one isn't running properly
<smoser> so is archive in suitable state ?
<ttx> smoser: not really, we would respin
<smoser> latest one , where "one" is server ISO ?
<ttx> no, the cloud images
<ttx> the last daily cloud images would not run.
<ttx> kirkland: could you test the current state of the lucid cloud images on uEC, to debunk that rumour ?
<ttx> The iso testing is using the karmic image, as a reference standpoint, so the recent lucid images weren't tested
<ttx> (at least by me)
<ttx> smoser: I asked slangasek to ping you when the server ISO is respun, so that you can generate EC2 candidates from that
<smoser> ttx, ok. i'll try a test of latest uec images.
<smoser> note, i've tested successfully on ec2
<henkjan> kirkland: the link to http://webapps.ubuntu.com/employment/canonical_USVD on your latest blogpost is broken
<jdstrand> ttx: hey. seems the server iso is being rebuilt-- may I ask what prompted it?
<sbeattie> jdstrand: possibly bug 548954?
<uvirtbot> Launchpad bug 548954 in upstart "Ubuntu servers should display information during boot by default" [High,Fix released] https://launchpad.net/bugs/548954
<ttx> jdstrand: "splash" still active on server boot, + winbind PAM profile screwing up login on samba-server task
 * jdstrand guesses samba for 546874 and 556342
<jdstrand> k, thanks
<ttx> jdstrand: bug 548954 and bug 546874
<uvirtbot> Launchpad bug 546874 in samba "passwd - can't login, change password (pam_winbind pam-auth-update profile)" [High,Fix released] https://launchpad.net/bugs/546874
<mathiaz> kirkland: hi
<mathiaz> kirkland: trying to run an instance on UEC
<mathiaz> kirkland: got this error on the NC: [009416][EUCAERROR ] libvirt: internal error no supported architecture for os type 'hvm' (code=1)
<brontosaurusrex> if linux boxes can't see my internal linux server, do i need to edit /etc/dhcp3/dhclient.conf to send hostname or something else (windows boxes on the network have no problem seeing the box)
<smoser> kirkland, did you report (ttx said someone did) that latest uec images aren't booting ? or was that mathiaz ?
<brontosaurusrex> if linux boxes can't see my internal linux server by its internal hostname, do i need to edit /etc/dhcp3/dhclient.conf to send hostname or something else (windows boxes on the network have no problem seeing the box)
<mathiaz> smoser: I haven't reported anything yet
<sherr> brontosaurusrex: if they are all on the same network, they should "see" each other.
<sherr> brontosaurusrex: what IP address on linux server?
<sherr> brontosaurusrex: what IP address on other linux box?
<MTecknology> How can I change a username? like michae -> michael
<MTecknology> Is there any easy way to do it system wide?...
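MTecknology's question went unanswered in-channel; one common approach, run as root while the account is logged out (the names michae/michael are from the question):

```
usermod  -l michael michae            # change the login name
groupmod -n michael michae            # rename the primary group to match
usermod  -d /home/michael -m michael  # move and rename the home directory
```

Files outside the home directory owned by the numeric UID keep working, since only the name changes.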
<brontosaurusrex> 192.168.1.100 is the server
<brontosaurusrex> 192.168.1.101 is the client for example
<sherr> brontosaurusrex: and netmasks?
<brontosaurusrex> inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
<sherr> You cannot ping .100 from 101?
<sherr> ping 192.168.1.100
<brontosaurusrex> its not really an issue, since 99,9% of users are windows, but i wonder whats up with that
<brontosaurusrex> sherr: sure i can, only hostname doesnt work
<sherr> Well, how was I supposed to know it was a DNS issue only?
<brontosaurusrex> its accessible via ip
<sherr> Put the hostnames/IP's in /etc/hosts?
<brontosaurusrex> on all linux boxes?
<sherr> Yes - unless you want to run DNS.
<sherr> Maybe look into using dnsmasq as a DHCP server.
<brontosaurusrex> how come windows boxes see the hostname?
<brontosaurusrex> samba?
<sherr> Are you using Samba?
<brontosaurusrex> its installed i think
<sherr> Then WINS perhaps.
<sherr> WINS != DNS
<sherr> Note - dnsmasq includes a DNS server that can read from DHCP leases
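A minimal dnsmasq.conf illustrating sherr's point — the lease range and local domain here are examples:

```
# /etc/dnsmasq.conf (sketch)
dhcp-range=192.168.1.50,192.168.1.150,12h   # hand out leases on the LAN...
domain=lan                                  # ...and answer DNS for lease hostnames
expand-hosts                                # qualify plain names with the local domain
```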
<brontosaurusrex> i see, so i have to run dhcp client right?
<sherr> The DHCP client is already used, no? This gets you an IP address.
<sherr> You need "name" resolution
<sherr> i.e. DNS
<brontosaurusrex> right
<sherr> /etc/hosts is one way
<skrite99> anyone have much experience with what a deadlock is in mysql InnoDB tables?
<alvin> If Windows clients can see each other by hostname and other systems can't, it's probably because they are using NETBIOS
<brontosaurusrex> sherr: ok, ty
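For the record, the /etc/hosts route sherr describes is just an entry per machine on each Linux client; addresses are from the conversation, hostnames are placeholders:

```
# /etc/hosts additions on each client
192.168.1.100   server
192.168.1.101   client1
```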
<mathiaz> smoser: http://people.canonical.com/~mathiaz/console.log
<mathiaz> smoser: ^^ - tried to boot a daily UEC image on EC2
<smoser> you mean on uec?
<mathiaz> smoser: yes *UEC*
<mathiaz> smoser: not ec2
<smoser> ok. i've booted 20100406 several times today in ec2
<smoser> i think your metadata service is broken
<smoser> and i've verified on my local UEC that 20100405 worked fine
<`blackmk4|imac> just wondering if there will be a proper fix for this 5 month old bug: http://ohioloco.ubuntuforums.org/showthread.php?t=1311112
<`blackmk4|imac> it pretty much breaks the ability to run a server
<smoser> mathiaz, ^
<sherr> `blackmk4|imac: is there a bug number?
<`blackmk4|imac> yes
<mathiaz> smoser: it's quite possible that the meta-data is not working
<mathiaz> smoser: as I'm running UEC on multi-network topology
<mathiaz> smoser: how can I test if the meta-data service is working correctly?
<`blackmk4|imac> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/474930
<uvirtbot> Launchpad bug 474930 in linux "Ubuntu 9.10 crashes when run without monitor or when monitor sleeps." [Undecided,Confirmed]
<smoser> mathiaz, boot an instance, somehow get to it and then poke at the metadata service from within it :)
<smoser> but its broken, thats why you're seeing those errors
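From inside an instance, the EC2-style metadata service answers on the standard link-local address, so a quick probe of it looks like this (wget -qO- works too if the image lacks curl):

```
# run inside the instance; a working metadata service returns the instance id
```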
<sherr> `blackmk4|imac: I wonder if Lucid works?
<`blackmk4|imac> i would imagine so
<sherr> `blackmk4|imac: I am not affected by this. But what I would imagine is that someone adds a comment to the bug and asks if anyone can confirm this happening in Lucid still. If not, then great.
<`blackmk4|imac> wait
<`blackmk4|imac> what is lucid, i thought you meant a person
<`blackmk4|imac> :v
<sherr> Lucid Lynx a.k.a release 10.04 - current beta?
<`blackmk4|imac> ah, then I don't know if it works there
<sherr> But if it is important, maybe someone can test?
<`blackmk4|imac> fair enough
<sherr> If it affects you, why not comment on the bug and see if someone can test? or maybe someone already has?
<`blackmk4|imac> i commented in a few forum posts about it, I'll comment on the bug
<sherr> That is, if no one has already. Check first :-)
<alvin> Is that with a monitor hooked up AND X installed? Because I have karmic installations without monitor that run. None of them have X. (As long as you don't let them do something intensive like copying a large file)
<alvin> Oh, and with Intel videocard
<alvin> (Desktops with X and some Intel cards will crash after a little while anyway)
<sherr> A comment on the forum says "no X required" IIRC.
<sherr> alvin: although they should run even copying a large file :-/
<alvin> I didn't read the whole forum of 'me too'. ;-) The bug should be enough
<sherr> Yes, me too ... :-)
<alvin> sherr: I had 3 servers that froze today. One went down. The one that was doing the least work. It's apparently a kernel io_scheduling issue, but it's unclear where the bug should be assigned.
<alvin> I'm researching this a bit. I tried to reproduce on a non-critical system with an atom cpu, but I can't reproduce it there. But take a 2xquad core machine with 32 GB ram, and rsync a 10GB file. Bam, the load goes through the roof and the server goes down.
<alvin> INFO: task kvm:22955 blocked for more than 120 seconds. (etcetera)
<sherr> alvin: what's backing the guests? LVM, file?
<alvin> Production servers are experiencing problems, and test servers don't :-(
<alvin> File, on LVM (ext4 formatted)
<sherr> I hear you. Maybe you need test servers that match production more closely ... i.e. bigger!
<alvin> I do have those tomorrow. 'll test there
<alvin> In fact, I could reach them. There's nobody at work, so I can quietly copy a file now. Let's do it.
<sherr> Can you arrange an ext3 test? Good data-point perhaps
<alvin> Should be possible. There is space left
<alvin> Only one of the bug reports I found talks about ext4: (bug 494476 and bug 276476)
<uvirtbot> Launchpad bug 494476 in linux ""Smbd","kjournald2" and  "rsync"  blocked for more than 120 seconds while using ext4." [Medium,Triaged] https://launchpad.net/bugs/494476
<uvirtbot> Launchpad bug 276476 in linux "INFO: task blocked for more than 120 seconds causes system freeze" [Medium,Fix released] https://launchpad.net/bugs/276476
<sherr> Is it always rsync? Wit compression? maybe try a "cp", or no comp.
<alvin> cp and scp too
<alvin> and on the server that went down: totally unknown. Only the virtual machines were running and they weren't under any load
<alvin> I logged in with ssh on the host because a guest (holding a minuscule db) crashed, saw the high load, checked some things, and a while later everything was down.
<sherr> bug 276476 - seems old. "task blocked" is quite generic perhaps - problem in KVM?
<uvirtbot> Launchpad bug 276476 in linux "INFO: task blocked for more than 120 seconds causes system freeze" [Medium,Fix released] https://launchpad.net/bugs/276476
<sherr> 2008?
<alvin> sherr: Not only kvm. Task blocked is also to be seen with pdflush, kjournald, and others (rsync)
<alvin> fix released is only for the message or something. I see the problem only since I rebooted after installing the latest karmic kernel
<sherr> Yes - I mean it's a generic error message caused by many things maybe
<sherr> Painful - and trying to figure out the cause is painful. Time/effort ...
<alvin> Probably, but it's the only clue I have
<sherr> Try a different kernel? .33 PPA? Or roll your own from kernel.org
<alvin> The most important server is under support (Canonical), but after the first answer, they're a bit slow. I'll call tomorrow, because it halted production 2 times now.
<sherr> Keep us posted anyway - good luck. Be good to hear how it goes.
<alvin> Of course. I'll start a test now and see how it goes
<billybigrigger> does anyone here use snort?
<billybigrigger> http://ubuntuforums.org/showthread.php?t=919472
<billybigrigger> i'm trying to follow this security guide, but it is suggesting i compile snort from source, only because the version in the repos doesn't enable logging to a mysql db
<billybigrigger> does this still hold true?
<ttx> smoser: new server ISO is out, you can trigger the EC2 ones if not already done
<smoser> i will trigger now.
<smoser> ttx, started. i tested 20100405 on uec and it ran fine
<ttx> smoser: thanks !
<sherr> billybigrigger: No idea. I'd look at the available package versions, and the changelogs. See if it is still true.
<sherr> http://packages.ubuntu.com
<ttx> zul, kirkland: the new server ISO is available, please cover the tests you can before the end of your day.
<smoser> ttx, where did you hear that it wasn't functional?
<zul> ttx: my end of the day is in 9 minutes but Ill cover the tests later tonight
<ttx> smoser: some internal talk on a call, probably bad rumour
<ttx> zul: heh
<brontosaurusrex> so its either /etc/hosts or my own dns server?
<sherr> brontosaurusrex: stick a few hosts/IP's in a couple of /etc/hosts files - prove to yourself that works :-)
<brontosaurusrex> sherr: thats how i have it done now (for a while)
<brontosaurusrex> and its good enough for home use
<brontosaurusrex> but what to do with bigger intranets? is there a way to sniff whether any dns servers are already running, or what is the most correct procedure, finding the admins?
<sherr> brontosaurusrex: well, a sysadmin could tell you I assume. DHCP should also give out "nameservers" (go in /etc/resolv.conf)
<brontosaurusrex> well this home box has some ips in there and there is no dns
<sherr> brontosaurusrex: IP's in where? resolv.conf?
<brontosaurusrex> yes
<sherr> From your DHCP server? What IP's?
<brontosaurusrex> http://b.pastebin.com/HrUWUCE5
<brontosaurusrex> these are probably external dns's
<sherr> brontosaurusrex: Ah, NetworkManager ...
<sherr> brontosaurusrex: and Telekom Slovenije - your ISP?
<sherr> brontosaurusrex: I guess that's from your ISP/internet modem DHCP.
<sherr> Options :
<sherr> a) Use /etc/hosts for your local systems
<sherr> b) Setup DNS locally (maybe look at dnsmasq = DHCP + DNS and simple)
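[For context: option (b) for a small LAN can be as simple as a couple of lines. The host names and addresses below are purely illustrative, not taken from the discussion.]

```
# /etc/hosts on the dnsmasq box - dnsmasq serves these names to the LAN
192.168.1.10   fileserver
192.168.1.11   printserver

# /etc/dnsmasq.conf - answer locally, forward everything else upstream
listen-address=192.168.1.1
domain=home.lan
```

dnsmasq reads /etc/hosts by default, so every LAN client pointed at 192.168.1.1 resolves those names without touching its own hosts file.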
<brontosaurusrex> sherr: thanks for your time, i need some reading to do
<brontosaurusrex> as it seems ;)
<MTecknology> I tried copying /var/lib/mysql/ from one server to another because it's the only backup I have for this. I set up the permissions on the new junk, copied debian.cnf from the old system (the internal structure should be exactly the same). When I do 'start mysql' it just hangs.
<MTecknology> Any ideas what I should look for to make this work?
<MTecknology> the only thing I get in top is an 'sh' process that hops up top some but doesn't really look active
<sherr> MTecknology: all the tables copied over?
<sherr> MTecknology: Try starting the server and tail the syslog in another shell i.e.
<MTecknology> sherr: yup - I just sat there beating it a few times and I think magic/pixie_dust may have taken hold :)
<sherr> /etc/init.d/mysql stop && /etc/init.d/mysql start
<sherr> and "tail -f /var/log/syslog" in another shell at the same time
<sherr> Oh, it's working now? magic/pixie_dust?
<sherr> What version pixie_dust? :-)
<MTecknology> 0.9
<MTecknology> I hear 1.0 is supposed to work out of the box
<MTecknology> sherr: I tried what you said and it all looks clean :)
<sherr> No one uses a v1.0 surely?
<sherr> 0.9 .. no problem ... :-)
<sherr> So, it is working?
<MTecknology> ya
<MTecknology> :)
<MTecknology> now to see if the whole server is working....
<sherr> OK, great.
<kirkland> ttx: on it now
<mathiaz> kirkland: hey!
<mathiaz> kirkland: did you test eucalyptus package installations?
<kirkland> mathiaz: howdy
<kirkland> mathiaz: i'm burning to a usbstick now
<xperia> hello to all. i have to execute this line here as root at boot
<xperia> how can i do this ?
<xperia> ./flashpolicyd.rb --xml flashpolicy.xml --logfile flashpolicyd.log
<mathiaz> cjwatson: hi
<mathiaz> cjwatson: when the installer runs the package installation in-target, is the in-target debconf database (already) loaded with the debconf database from the install environment?
<sherr> xperia: maybe add to /etc/rc.local (last rc script that runs each boot)
<xperia> sherr: great will test that
<mathiaz> cjwatson: IOW if I preseed something in the installer is that value available when the packages are installed in the chroot environment?
<xperia> the question is the whole script is located in my home folder. i wonder if i should move it to some /usr/local folder maybe
<KillMeNow> xperia:  couple things you can do, you can copy the script to say /var/lib/initscripts and then add it to rc.local using the entire path
<KillMeNow> you could create your own folder in /var/lib/ called scripts
<xperia> KillMeNow: great thanks for the very helpful answer. i have searched for rc.local in /etc but this file doesn't exist in /etc, it must be some other path
<sherr> xperia: yes, doesn't really matter much. As long as you know where and what it is etc.
<xperia> ahh okay in this case it will work as long as a file called rc.local exists in /etc. great you helped me a lot !
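[For context: the rc.local approach sherr and KillMeNow describe might look like this. The script location is illustrative; /etc/rc.local itself must be executable and end with `exit 0`.]

```
#!/bin/sh -e
# /etc/rc.local - runs as root at the end of each multi-user boot.
# The path below is illustrative; use wherever the script actually lives.
cd /var/lib/scripts/flashpolicyd
./flashpolicyd.rb --xml flashpolicy.xml --logfile flashpolicyd.log &

exit 0
```

Backgrounding the daemon with `&` keeps a long-running process from blocking the rest of the boot.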
<incorrect> ok so slapd is very broken in 10.04
<incorrect> is ldap not very important any more?
<vmlintu> incorrect: it's definitely different from hardy
<vmlintu> incorrect: what's the problem you are having?
<incorrect> vmlintu, sure is, doesn't even configure a system that you can log into any more
<incorrect> dpkg-reconfigure slapd used to do a lot for me
<vmlintu> incorrect: yep, it used to be easier for small setups..
<vmlintu> incorrect: at first I hated the new system, but it turned out to be great as my setups are so weird
<vmlintu> incorrect: do you need help getting it working?
<cjwatson> mathiaz: should be
<cjwatson> mathiaz: provided the owner for the field in question is correct (i.e. the owning package, not d-i)
<incorrect> vmlintu, sure the documentation doesn't offer much help
<vmlintu> incorrect: I've written some entries in blog here: http://www.opinsys.fi/setting-up-openldap-on-ubuntu-10-04-alpha2
<vmlintu> incorrect: the first part does pretty much the same as the old dpkg scripts
<incorrect> i have 6 slapd servers in a multi-master setup, but i would have thought having dpkg do some of the heavy lifting would have been a good thing?
<vmlintu> incorrect: the dpkg scripts didn't do much good for multi-master setups
<incorrect> well it got you a fair bit of the way and you take it from there
<vmlintu> incorrect: do you have the old configs in hand?
<incorrect> sure i have my 8.04 setup running, i even backported 2.4.15
<vmlintu> incorrect: if you do, you can convert them to cn=config backend with slaptest tool
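[For context: the slaptest conversion vmlintu mentions is roughly the following; paths are the Ubuntu defaults and may differ on a customized install. Run it with slapd stopped.]

```
# Convert a legacy slapd.conf into the cn=config (slapd.d) backend
slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d
# the generated files must be readable by the slapd user
chown -R openldap:openldap /etc/ldap/slapd.d
```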
<MTecknology> How can I reinstall apache from scratch?
<MTecknology> I screwed up the configs pretty bad
<incorrect> i just wanted to rebuild my home setup
<incorrect> just found weird bugs since i've been upgrading since jaunty
<qman__> MTecknology, apt-get purge it, delete any remaining configs, then reinstall
<MTecknology> qman__: that didn't work
<MTecknology> aptitude purge apache2; rm -R /etc/apache2; aptitude install apache2
<qman__> how didn't it work
<MTecknology> qman__: /etc/apache2 still doesn't exist
<mathiaz> MTecknology: try purge apache2.2-common instead of apache2
<qman__> yeah, apache2 is a metapackage
<qman__> that might have caused it
<MTecknology> oh..
<MTecknology> thanks :)
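[For context: the full sequence qman__ and mathiaz arrive at is below. apache2 is only a metapackage, so the purge has to target the package that actually owns /etc/apache2 (apache2.2-common on releases of this era).]

```
# Purge the package that owns the config, clear leftovers, reinstall
apt-get purge apache2.2-common
rm -rf /etc/apache2
apt-get install apache2
```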
<dasunsrule32> Does anyone have Vmware Server 3 beta running on their server here yet?
<incorrect> vmlintu, will you be updating the ubuntu server docs?
<MTecknology> dasunsrule32: I think most people in here use other tools like kvm
 * MTecknology screams silently to self -why on earth do people use ruby?-
<kirkland>  mathiaz yo
#ubuntu-server 2010-04-07
<MTecknology> I have qemu+libvirt on one server; if I run ufw enable on that I lose connection to any guests. What rule do I need to add to allow communication through the host but not to the host?
<uvirtbot> New bug: #556996 in samba (main) "winbind pam-config potentially breaks stacking with modules of lower priority in common-passwd" [Low,New] https://launchpad.net/bugs/556996
<uvirtbot> New bug: #556785 in shadow (main) "Passwd in Ubuntu Lucid has started giving errors since last update" [Undecided,New] https://launchpad.net/bugs/556785
<MTecknology> jdstrand: sorry about that
<MTecknology> jdstrand: I have all vm's running over a bridged network - eth0 bridged with br0 -> vnet1 -> virtual_server_1. So do I just allow everything from anywhere to anywhere on vnet1?
<MTecknology> oh....
<MTecknology> in/out
<jdstrand> MTecknology: I've not done bridged networking with libvirt. however it should work how you'd expect. eg if some remote host wants to connect to your vm on port 22 on ip 1.2.3.4, then you can do: sudo ufw allow to 1.2.3.4 port ssh
<jdstrand> MTecknology: keep in mind the in/out is for INPUT and OUTPUT. if you need to manipulate the forward chain, then you are going to need to add stuff to /etc/ufw/before.rules
<MTecknology> jdstrand: I just want to allow any traffic for that server to go through to that server so I can use iptables on there.
<MTecknology> I tried just 'ufw allow to 192.168.1.5' then ufw enable and I couldn't talk to that vm anymore
<MTecknology> If I try something like 'ufw allow from any port any to 192.168.1.6 port any' I get ERROR: 'Could not find protocol'
<jdstrand> MTecknology: don't use 'port any', just 'from any to any'
<jdstrand> MTecknology: if ufw is blocking, you'll need to look in kern.log
<cef> if you use 'port' you need to define udp or tcp (or other protocols that use ports)
<MTecknology> 'ufw allow any to any' - still kills my connection when I enable it..
<MTecknology> Apr  6 21:55:50 pessum kernel: [29088.509108] [UFW BLOCK] IN=br0 OUT=br0 PHYSIN=eth0 PHYSOUT=vnet3 SRC=192.168.3.6 DST=192.168.1.6 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=51033 DF PROTO=TCP SPT=47120 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
<MTecknology> This is what I have now - http://dpaste.com/180294/
<MTecknology> jdstrand: I don't know if it makes a difference - I'm on 10.04
<jdstrand> MTecknology: I think you need to read http://wiki.libvirt.org/page/Networking#Bridged_networking_.28aka_.22shared_physical_device.22.29
<jdstrand> MTecknology: it references a fedora bug in libvirt that is probably what is causing you trouble
<jdstrand> MTecknology: I gotta head out, but I bet that is the issue
<MTecknology> jdstrand: with network manager not supporting bridged interfaces?
<MTecknology> oh
<MTecknology> jdstrand: alrighty, thanks :)
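[For context: the libvirt wiki page jdstrand linked recommends keeping bridged guest traffic out of the host's netfilter entirely, so the host firewall only sees its own traffic. A sketch of that workaround, using the standard bridge-netfilter sysctl keys:]

```
# /etc/sysctl.conf - don't pass bridged (br0) frames through the host's
# iptables/ip6tables/arptables; apply with "sysctl -p"
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
```

With these set, the [UFW BLOCK] entries with IN=br0 OUT=br0 should stop, since forwarded guest traffic never reaches the FORWARD chain.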
<maxagaz> i have installed my system using a usb key
<maxagaz> but it was installed on sdb
<maxagaz> and the system is sda now that the usb isn't plugged anymore
<maxagaz> I guess I need to update the grub
<maxagaz> can someone tell me how ?
<twb> Bleh.
<twb> That's why I hate grub and its stupid device.map
<Psi-Jack> Heh. Seems to be a horde of people wanting to get ubuntu 10.04 beta1. ;)
<AnRkey> how can i reserve a device name for my USB device so that it's always got the same /dev/devicenamehere?
<alvin> AnRkey: label the partition on the USB device (so that you have /dev/disk/by-label/)
<AnRkey> it's a printer
<AnRkey> two printers actually, they keep getting switched around or given ttyUSB3 or whatever
<jeffesquivel> AnRkey, you can use udev for that
<AnRkey> udev?
<jeffesquivel> AnRkey, this article may help you: http://www.linuxjournal.com/article/7316
 * AnRkey googles it
<AnRkey> thanks for the push in the right direction
<jeffesquivel> AnRkey, no problem
<jeffesquivel> AnRkey, here is an example rule for a printer: http://www.reactivated.net/writing_udev_rules.html#example-printer
<jeffesquivel> AnRkey, you can read that document too... but it may be too comprehensive
<bronto2> i'm trying to basically set up a lil intranet wiki, and i see it does support ldap, but can ldap be easily configured to just use posix system users?
<joschi> bronto2: usually no. but there are some scripts which you can use to convert your system users. but then again: why use ldap in the first place if you only want your system users to login?
<swift> hi guys, I installed and configured MRTG to monitor one internet line.... i installed it on my ubuntu server
<swift> in the index url now, I see 8 graphs related to the router being monitored
<swift> each graph is of the form "Traffic Analysis for #num# -- <Router Name>
<swift> any idea what these graphs are?
<swift> how can I choose which graphs to remove?.. or are all of these important?... please advise
<ttx> smoser: ping me when available
<smoser> here now.
<smoser> ttx,
<ttx> Two things, I suppose you got my answer to the ramdisk email...
<smoser> yeah.
<ttx> What's your opinion on it ?
<smoser> i would like to have no ramdisks.
<ttx> How much testing did the current noramdisk things get so far ?
<smoser> on my hardware, i've recreated failure with beta-1 and success on all of 2010040[1256]
<smoser> i cannot seem to create failure.
<smoser> i think that between reasonable test of your hardware (which was 'sometimes fail'), mine (always fail) and dustin's (always fail) and data center (always pass), we have fairly good coverage of that.
<ttx> OK, I'll play a few rounds myself
<ttx> and we'll take the final decision by the meeting time
<ttx> in between... we need to sort out the testcases
<smoser> above, the parentheses state what it was before.
<ttx> http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/all shows they are pretty broken
<smoser> i updated http://testcases.qa.ubuntu.com/System/EC2CloudImages#preview
<ttx> Do you have testcases for the EC2 and the UEC images ?
<smoser> what do you mean?
<ttx> I mean we need:
<ttx> EC2/classic -> 2 testcases (multiple instance run, userdata/config)
<ttx> EC2/EBSroot -> 2 testcases (multiple instance run, userdata/config)
<smoser> i have a "test suite" that runs all those tests - read the link above and let me know if it's not sufficient
<ttx> UEC -> 2 testcases (instance run, userdata/config)
<smoser> i'm writing userdata/config for UEC right now (copying from EC2)
<ttx> ok
<ttx> smoser: do you agree EBSroot should have the same tests ?
<ttx> (currently they have no tests)
<smoser> yeah, it should have same tests, with an additional "shut down instance"  and "start instance"
<smoser> (i commented on that in EC2CloudImages above)
<ttx> smoser: please sync with ara when you have the links set. I updated her on #ubuntu-release a few minutes ago
<cemerick> Looks like the default archive used in sources.list for canonical AMIs on ec2 is out (http://us-east-1.ec2.archive.ubuntu.com karmic/universe)
<cemerick> anyone know if this is a policy change, or just an outage?
<smoser> kirkland, ping
<smoser> cemerick, no policy change
<smoser> what do you mean by out ?
<smoser> do you get errors?
<smoser> i've just verified from a lucid instance that it seems functional
<cemerick> smoser: well, it's unreachable :-) http://isitup.org/us-east-1.ec2.archive.ubuntu.com
<binBASH> Hi is it possible to install the ubuntu enterprise cloud later on a ubuntu server?
<smoser> cemerick, it is never available from outside of that region
<smoser> i'm guessing that 'isitup.org' doesn't run inside us-east-1, so that would be expected
<cemerick> smoser: ok; then I'm a little baffled w.r.t. the timeouts that aptitude update, et al. are yielding.
<smoser> binBASH, it is possible, yes. i'm sorry that i dont have a good link for how though. maybe ttx or kirkland do
<smoser> your instance is in us-east-1 region ?
<dassouki> what's the best way to remove tomcat5.5 and 6 (completely remove theme) if they were installed from apt-get
<cemerick> smoser: us-east-1d, yes
<binBASH> smoser: At least that sounds already good ;) Because I have a root server at hetzner.de and they provide only ubuntu-server images without uec.
<smoser> cemerick, can you 'apt-get update 2>&1 | tee out.log' and pastebin that ?
<cemerick> smoser: sure, 1m
<smoser> cemerick, i just replaced all 'lucid' with 'karmic' in my lucid instance that i have and run apt-get update successfully.
<smoser> so it seems like it would be limited to your instance. maybe some networking things you've done ?
<cemerick> smoser: this is a totally virgin node, started from ami-bb709dd2 FWIW
<smoser> binBASH, single system UEC installation is tricky at best. i do not believe its officially supported.
<binBASH> I got 6 servers atm, planning to have 150 if all works fine ;)
<smoser> cemerick, firing one up, and i'll check from taht.
<cemerick> smoser: FYI http://dpaste.com/180431/
<smoser> binBASH, and they're physical?  the nodes have to be run on physical hardware.... in theory you could do nested virt if they were amd64, but that's not going to be fast :)
<binBASH> smoser: Yup, physical
<smoser> cemerick, well, waiting for a spot instance request to come up and then i'll test also
<binBASH> smoser: Planning to run some KVM Hypervisors there :)
<cemerick> OK.  I'm switching over to another aws acct; I remember having some wonky network issues long, long ago that didn't replicate over to another acct (for some ungodly reason).
<binBASH> smoser: http://www.hetzner.de/en/hosting/produkte_rootserver/eq6/
<cemerick> smoser: Whooo. Different aws acct, all's good there. :-(
<cemerick> yikes
<smoser> cemerick, if you have support, i would try using it. if not, i would try the forums.
<smoser> cemerick, mine just worked (apt-get update)
<cemerick> smoser: yup, heading there now.  When this happened once before, a forum msg magically fixed networking on the affected acct's nodes.  It's odd tho, other network access works just fine.
<cemerick> thanks, sorry for the noise :-(
<Am1ne> hello ! I am using ubuntu-server 8.04 as a platform of mysql server ! the problem that I have is that I can't access the server remotely ! I have commented the bind-address line to allow external connections.. but still got this error :  Host '172.16.50.52' is not allowed to connect to this MySQL server
<Am1ne> any suggestions plz
<binBASH> smoser: I think I found it here > https://help.ubuntu.com/community/UEC/PackageInstall
<ttx> binBASH: beware that's outdated (applies to karmic), so it might not work
<ttx> we still need to fix the docs
<kirkland> smoser: whats up
<ttx> kirkland: see my comments on bug 556932, I think it's invalid -- if you agree please edit your test results so that it doesn't show failure on the tracker, please
<uvirtbot> Launchpad bug 556932 in eucalyptus "Not enough resources available: addresses (try --addressing private)" [High,Invalid] https://launchpad.net/bugs/556932
<binBASH> ttx: I will try it :-)
<zul> yay i ran out of disk space!
<zul> stupid daily ppas
<kirkland> ttx: ok
<a_ok> is dump/restore working with ext4 now?
<ttx> kirkland: thanks !
<a_ok> if not what would be a good replacement?
<kirkland> ttx: doh
<ttx> kirkland: :)
<kirkland> ttx: yep, all my fault, sorry
<kirkland> ttx: i did a lot of installs yesterday
<ttx> kirkland: we need some testing of the UEC cloud image without ramdisk to assess its boot stability, if you have some time before the meeting
<ttx> kirkland: smoser can give you the method to test it
<ttx> (I'm on it right now, but the more the merrier)
<Omahn> Just noticed a mention of the auto-upgrade-tester in the LTS upgrade blueprint and a problem or something or other with moving it to a data center. Our site can provide some (free) hosting if it would be useful for running the auto upgrade tester.
<Omahn> I've been planning on running a copy of it locally anyway.
<uvirtbot> New bug: #557300 in tomcat6 (main) "tomcat6 package changes ownership of directories" [Undecided,New] https://launchpad.net/bugs/557300
<alvin> a_ok: I have never used it, but the dumpe2fs man page mentions 'ext4'
<ttx> Omahn: mvo is running it, please talk to him, he might be interested
<smoser> kirkland, just publish an image without a ramdisk, and see if it boots.
<smoser> uec-publish-image --ramdisk=none image.tar.gz lucid-20100407-noramdisk amd64
<alvin> a_ok: Didn't even know such a program existed for linux
<a_ok> alvin: ok I will just have to test it then. the changelog of the dump project mentions only preliminary ext4 support
<a_ok> alvin: we have been using it for many years. dates back to ext2
<alvin> a_ok: I think most people use tar (preferably in combination with LVM snapshots)
<alvin> a_ok: I have only used it for UFS
<a_ok> alvin: can't use tar for that kind of backups. you will lose certain attributes etc
<alvin> Nice to know you can use it for ext too
<alvin> a_ok: Very true. In that case, there's always dd :-)
<a_ok> alvin: dd will mean our backups will be at least twice as large
<alvin> At the least, yes
<alvin> But it's a good question.
<alvin> I wonder whether LVM has a way of sending a volume to a file
<a_ok> fsarchiver seems a nice project but not good enough for production yet
<alvin> No, on first sight (man lvm) lvm doesn't have that.
<alvin> Actually, things like that are the reason I prefer ZFS for enterprise storage, wherever possible. We'll probably have to wait for BTRFS to get the good stuff in Linux too.
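[For context: the tar-plus-LVM-snapshot approach alvin mentions can be sketched as below. Volume group, LV, and mount point names are made up; note a_ok's caveat still applies - plain GNU tar of this era does not capture every extended attribute.]

```
# Freeze a consistent view of the LV, archive it, then drop the snapshot
lvcreate --size 2G --snapshot --name data-snap /dev/vg0/data
mount -o ro /dev/vg0/data-snap /mnt/snap
tar --numeric-owner -cpzf /backup/data.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/data-snap
```

The snapshot size only needs to hold the writes that land on the origin while the backup runs, not a full copy.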
<ttx> smoser: the no-ramdisk uec image looks good to me
<Omahn> ttx: I'll drop mvo a pm, thanks.
<ttx> smoser: I managed to have one instance stuck !
<smoser> what is stuck ?
<ttx> doesn't boot all the way
<smoser> euca-get-console-output $IID | pastebin
<smoser> ?
<ttx> I'm on it
<LinuxAdmin> I've getting problems with nat configuration in ufw
<LinuxAdmin> can't define nat chain
<ttx> smoser: the end of it @ http://pastebin.ubuntu.com/410557/
<ttx> started two in parallel
<ttx> the other one worked, pasting end of console-output as well
<LinuxAdmin> i put this lines in before.rules
<smoser> ttx, hm... well, that hang is much different than before.
<smoser> and i wouldn't think ramdisk related
<smoser> see the Generating locales output
<smoser> it shows that uec-init was running
<LinuxAdmin> *nat
<ttx> smoser: The one that worked: http://pastebin.ubuntu.com/410558/
<LinuxAdmin> :PREROUTING - [0:0]
<smoser> and landscape-client also running, which runs well after.
<LinuxAdmin> but it gives me error
<LinuxAdmin> why can't I configure nat in before.rules file, using ufw
<LinuxAdmin> I'm trying to avoid iptables, although I understand very well iptables, I'm trying to use ufw
<smoser> ttx, i really have no idea where that bug can be coming from...
<ttx> euca-run-instances -k mykey $EMI -t $TYPE -n 2
<ttx> trying again
<LinuxAdmin> can't I define advanced rules (nat for example) using ufw?
<ttx> worked
<ttx> smoser: I don't think that invalidates noramdisk, just shows that we need to test test test
<alvin> LinuxAdmin: just vote for bug 247455
<uvirtbot> Launchpad bug 247455 in ufw "a Nat option would be helpful for gateway systems" [Wishlist,Confirmed] https://launchpad.net/bugs/247455
<ttx> smoser: cannot really reproduce it
<mathiaz> kirkland: what's your take on bug 556312?
<uvirtbot> Launchpad bug 556312 in libvirt "libvirt packages should not Recommend hypervisor packages" [Wishlist,Won't fix] https://launchpad.net/bugs/556312
<jdstrand> LinuxAdmin: yes you can use nat rules with ufw
<jdstrand> LinuxAdmin: can you paste your before.rules file?
<jdstrand> LinuxAdmin: also, what Ubuntu release are you using?
<LinuxAdmin> I'm using ubuntu server 9.10
<LinuxAdmin> I'll paste the text in a few seconds...
<LinuxAdmin> just these two lines in the beginning of the file give me an error:
<LinuxAdmin> *nat
<LinuxAdmin> :PREROUTING - [0:0]
<jdstrand> LinuxAdmin: please use paste.ubuntu.com
<hggdh> !pastebin| LinuxAdmin
<ubottu> LinuxAdmin: For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://tinyurl.com/imagebin | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<jdstrand> LinuxAdmin: and paste the entire before.rules
<LinuxAdmin> ok, just a minute
<MTecknology> Any suggestions for an easy to use mailing list that will let random users sign up? I'm considering mailman - just not sure if that's the best solution.
<LinuxAdmin> in paste.ubuntu.com do I have to "download as text"?
<LinuxAdmin> sorry it's the first time
<jdstrand> LinuxAdmin: no-- just give me the link
<LinuxAdmin> ok
<LinuxAdmin> http://paste.ubuntu.com/410566/
<jdstrand> LinuxAdmin: you forgot COMMIT for the nat table
<jdstrand> LinuxAdmin: on the line under your -A POSTROUTING rule, add:
<jdstrand> COMMIT
<LinuxAdmin> ok, as I understand I have to commit before starting a new chain, right?
<jdstrand> LinuxAdmin: a new table, yes
<LinuxAdmin> a new table, sorry
<LinuxAdmin> ok, let me try
<LinuxAdmin> can i put PREROUTING and POSTROUTING in before.rules or do I have to put POSTROUTING in after.rules?
<jdstrand> LinuxAdmin: it is fine as is. before.rules and after.rules are named as such for when the files are processed
<jdstrand> LinuxAdmin: before* first, user* (ie, cli added rules) 2nd, and after* 3rd
<LinuxAdmin> ok, thanks
<LinuxAdmin> it works
<jdstrand> cool
<LinuxAdmin> thanks again
<jdstrand> np
<LinuxAdmin> let me ask you just one more question
<jdstrand> shoot
<LinuxAdmin> I'm curious about ufw-before-forward
<LinuxAdmin> indeed, about ufw-before*
<LinuxAdmin> do I have to do anything in this chains to apply port forwarding using NAT?
<jdstrand> LinuxAdmin: if you want to customize the INPUT, FORWARD or OUTPUT chains beyond what the cli command can do (indeed, the cli command doesn't do FORWARD yet), you should add these rules to ufw-before* (or ufw-after* if you'd prefer, but most do in before)
<LinuxAdmin> ok, thanks Jamie, you helped a lot
<jdstrand> LinuxAdmin: specifically, for port forwarding, you will want to add them to the *filter table in ufw-before-forward
<LinuxAdmin> ok
<jdstrand> LinuxAdmin: see the Chains section /usr/share/doc/ufw/README.gz for more info
<LinuxAdmin> ok
<jdstrand> LinuxAdmin: basically, instead of doing -A FORWARD... you would do -A ufw-before-forward
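[For context: putting jdstrand's pointers together, a before.rules fragment that NATs an internal network and forwards a port might look like this. Interface names, addresses, and ports are placeholders.]

```
# /etc/ufw/before.rules - nat table goes above the existing *filter section
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# forward incoming tcp/8080 on eth0 to an internal web server
-A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.10:80
# masquerade the internal network out of eth0
-A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
COMMIT
# (every table, including *filter further down, needs its own COMMIT)

# ...and inside the existing *filter section, allow the forwarded traffic:
-A ufw-before-forward -d 192.168.1.10 -p tcp --dport 80 -j ACCEPT
```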
<Daviey> mathiaz: Could you provide a sanitised preseed file that you were using?
<mathiaz> Daviey: sure
<LinuxAdmin> ok
<ihernandez> good morning
<MTecknology> If my server will only deal with mailman as far as email is concerned - what's the best smtp server to use?
<MTecknology> probably postfix?
<binBASH> ttx: is there a way to reconfigure uec via the console configtool if wrong values were added by accident?
<sommer> mathiaz: did you see my responses in #ubuntu-meeting?
<mathiaz> sommer: yes
<sommer> mathiaz: okay, just making sure heh :)
<sherr> MTecknology: Postfix is good, and supported as the ubuntu mail server
<mathiaz> sommer: if the server guide is up-to-date for lucid then we should not drop it from the archive
<mathiaz> sommer: my proposal was done under the assumption that the server guide wasn't up-to-date
<sommer> yep yep, just got confused when you replied to ttx
<mathiaz> sommer: and I'd rather not ship outdated documentation for an LTS
<mathiaz> sommer: as we did for karmic and ldap
<sommer> mathiaz: totally agree, and the doc team SRU process I believe is better now
<mathiaz> sommer: I think the content is great and you're doing a great job at it
<sommer> mathiaz: thanks man :)
<mathiaz> sommer: but sometimes life gets in the way - which is ok
<mathiaz> sommer: and we just take decisions based on that
<mathiaz> sommer: I think having a discussion about the *form* would be good at the next UDS
<sommer> ya, I think that'd be a great topic... I'll be creating a blueprint this week
<hggdh> hum. The corrected ISO is the 20100406.1, right?
<RoAkSoAx> ttx, so what are your thoughts about the syncing the new packages
<mathiaz> Daviey: bug 556833 updated with a failing preseed file
<uvirtbot> Launchpad bug 556833 in eucalyptus "System fails to reboot after eucalyptus preseeded instlation" [Undecided,New] https://launchpad.net/bugs/556833
<mathiaz> hggdh: http://iso.qa.ubuntu.com/
<mathiaz> hggdh: ^^ this lists the version of the ISO supposed to be tested
<mathiaz> hggdh: otherwise ask in #ubuntu-release
<mathiaz> smoser: do you have access to the ubuntuserver blog?
<hggdh> mathiaz: indeed I could have *read* the page instead of just hitting the link
<smoser> mathiaz, i do not think so.
<smoser> or at least do not know so
<mathiaz> smoser: yeah - confirmed you don't have access to it
<mathiaz> smoser: when writing up the meeting minutes they should be published to the ubuntuserver blog
<mathiaz> smoser: do you have a wordpress.com account?
<mathiaz> smoser: ubuntuserver.wordpress.com is the place where the ubuntuserver blog is located
<smoser> i dont know if i do or not. i will get one if not and let you know.
<mathiaz> smoser: ok - let me know what email address you're using for your wordpress.com account
<mathiaz> smoser: and I'll add to the list of users of the ubuntuserver blog
<Daviey> mathiaz: /dev/cciss/, raises alarm bells with me.. I used to have helluva time with cciss support, but i thought that was all fixed now.
<MTecknology> How far off would you say I am with getting the mailman web interface going?   http://lists.kalliki.com
<smoser> ttx, http://uec-images.ubuntu.com/lucid/20100407.1/ is there now.
<smoser> and if you rsync, those images should get *very* good similarity to 20100407
<smoser> my sync took 3m
<sherr> MTecknology: who knows! All I see is a directory index .. and I guess you want a proper mailman interface?
<MTecknology> sherr: ya
<sherr> MTecknology: mailman docs/setup ... it's been too long for me. But should be straightforward.
<smoser> manifests are identical between 20100407 and 20100407.1 so the only change really *is* the lack of a UEC ramdisk in the .tar.gz file.
<mathiaz> Daviey: right - cciss is working great now
<sherr> I'd check your apache config first.
<mathiaz> Daviey: the thing is: take the preseed and comment the eucalyptus-udeb line and the install will work correctly
<MTecknology> sherr: I was trying to follow - http://doc.ubuntu.com/ubuntu/serverguide/C/mailman.html - I wound up with this config - http://paste.ubuntu.com/410590/
<mathiaz> Daviey: with eucalyptus-udeb, the install fails to reboot correctly
<Daviey> MTecknology: You don't seem to have modpython support.
<Daviey> mathiaz: that is crazy!
<mathiaz> Daviey: yeah - no kidding.... welcome to my world!
<sherr> MTecknology: why two ScriptAlias lines the same?
<smoser> i've a question about https://help.ubuntu.com/community/Installation/NetworkConsole
<smoser> anyone know if you can set it up to start the install in a 'screen' ? and just start it without user input ?
<smoser> i basically want to be able to watch an automated install of a remote machine without a.) network kvm or b.) serial console
<Daviey> hmm the network console throws you into D-I over ssh
<smoser> only want to poke at it if it gets hung
<sherr> MTecknology: I am a little surprised you have "Indexes" on the mailman archives/public dir.
<Daviey> smoser: so ignore the fact you are on a network console
<smoser> Daviey, yeah, so i was hoping it would throw you into D-I over ssh in screen
<smoser> :)
<Daviey> if you preseed, the questions - then you get what you want :)
<MTecknology> sherr: I copied it from /etc/mailman/apache.conf
<smoser> so it won't prompt at all ?
<smoser> i'll have to play with it i guess.
<sherr> MTecknology: sorry, I have to pop out. I'd check the config again - maybe as per /usr/share/doc/mailman (or whatever) - Debian readme? back later.
<smoser> thanks Daviey . the main interest is that i have 2 machines that i do automated UEC install on down in the basement, but occasionally they hang (debconf question change or whatnot) and i'm so terribly lazy that i dont want to walk down there to see. i'd like to be able to ssh in and check on them.
<smoser> the warning about "reliable network" made me think that running the installer inside screen would be good, and then just attaching the incoming user to that
<Daviey> smoser: Yeah, it's a shame network-installer doesn't get more publicity and love.  I hate working over a noisy server, so similar setup here.
<alvin> is there a new policy about /etc/fstab about using UUID for LVM volumes?
 * alvin will ask in ubuntu-bugs. It's probably a bug anyway.
<MTecknology> sherr: thanks, that's helped with postfix setup but not apache
<MTecknology> Anybody know mailman that could help me figure out the rest of this setup?
<hggdh> mathiaz: how did you get past the boot hanging on the uec rig?
<smoser> stupid question: anyone have an easy command to run that takes a package, and exits failure if it is not installed ?
<smoser> dpkg-query --show byobu | awk '-F\t' '$2 != "" { print $2; exit 0 } ; END { exit 1; }'
<smoser> is what i have
<smoser> but figured there is some way without the awk
<hggdh> exit $(dpkg -l $1 | egrep -q ^ii)
<hggdh> er. missed the echo $?
<binBASH> smoser: My cloud is setup, very cool ;)
<hggdh> smoser: exit (dpkg -l $1 | egrep -q ^ii; echo $?)
<smoser> hggdh, yeah. i saw. thanks.
<smoser>    ver=$(dpkg-query --show --showformat '${Version}\n' "$p")
<smoser>    [ -n "${ver}" ] && echo "PASS: ${p} installed (${ver})" ||
<smoser>       echo "FAIL: $p not installed"
<smoser> is what i came up with. mostly: ver=$(dpkg-query --show --showformat '${Version}\n' "$p") && [ -n "${ver}" ]
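smoser's final one-liner can be wrapped into a small self-contained sketch (byobu is just the example package from the conversation):

```shell
#!/bin/sh
# Exit 0 if the package is installed, nonzero otherwise, using dpkg-query
# alone (no awk): an uninstalled or unknown package yields an empty Version.
pkg_installed() {
    ver=$(dpkg-query --show --showformat='${Version}' "$1" 2>/dev/null) &&
        [ -n "$ver" ]
}

if pkg_installed "byobu"; then
    echo "PASS: byobu installed"
else
    echo "FAIL: byobu not installed"
fi
```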
<smoser> binBASH, yeah? its functioning ?
<binBASH> smoser: I didn't test to start a vm yet, but instead of my former CentOS 5.4 setup I can find the nodes ;)
<binBASH> so I think it's working
<ttx> smoser: amd64/UEC image looking good
<ttx> smoser: please fully test those by eod today
<smoser> ttx, i'm trying to automate a few more of the user data tests and then will start the ec2 runs.
<RoyK^> hi all - I have a server setup with some lvm2 volumes - is it possible to attach a new disk to make ubuntu mirror them without recreating things?
<ttx> smoser: and see with kirkland about validation on his setup as well
<ttx> since he was hitting those issues quite steadily
<kirkland> smoser: what's up?
<ttx> kirkland: uec cloud images back to no ramdisk, need as much testing as we can give it by eod
 * ttx pauses for dinner and will be back
<kirkland> ttx: syncing now
<smoser> kirkland, i was going to ask about enabling network ssh in your installer (uec-auto). if you'd thought of that.
<kirkland> smoser: so http://uec-images.ubuntu.com/lucid/current/ are ramdiskless now?
<kirkland> smoser: i'm wgetting
<smoser> yes. 20100417.1
<kirkland> smoser: okay, it's up and running
<kirkland> smoser: well...  i'm not sure, how can I make sure i have no ramdisk?
<kirkland> smoser: ls /boot?
<smoser> you can't tell from inside.
<smoser> euca-describe-images will not show an ari
<smoser> and console output will not have ramdisk like messages
<kirkland> smoser: http://pastebin.ubuntu.com/410632/
<kirkland> smoser: registration looks right
<smoser> right.
<kirkland> smoser: so i'm confident i registered it without a ramdisk
<smoser> euca-describe-images should have 'aki-' for that image, but no 'ari-'. right.
<smoser> and it boots ?
<kirkland> $ euca-describe-images emi-3FCB1298
<kirkland> IMAGE   emi-3FCB1298    foo/lucid-server-uec-amd64.img.manifest.xml     admin  available        public          x86_64  machine eki-66F2179C
<kirkland> smoser: yep, booted
<smoser> yeah, previously on your hardware we saw hang almost all the time.
<smoser> and on mine 100% of the time.
<smoser> mathiaz, hggdh i'd like to run this test on the data center uec if possible
<smoser> as that was the place that never seemed to fail when we had no ramdisk before (everyone else generally saw failure, so *something* was different -- timing -- and i want to test there)
<hggdh> smoser give me 15 minutes
<alloosh> hi guys, I am hosting a web application using ubuntu server. I have the application in English and german, the german version is not displayed right in the browser, is that a server issue?
<alloosh> * I mean german characters are not displayed
<kirkland> smoser: sweet, so this is resolved?
<kirkland> smoser: what was the fix?
<smoser> well, we hope so.
<smoser> there are several changes since beta-1 in upstart, mountall, and plymouth ( i don't think plymouth was involved).
<smoser> kirkland, one nice thing for you to do would be to verify that this fails with beta-1
<kirkland> jdstrand: are there any libvirt uploads pending?
<smoser> ie, download beta1 tarball, uec-publish-tarball --ramdisk=none
<smoser> that should hang like we used to see it.
<smoser> (it does for me)
<kirkland> jdstrand: i needed to fix a couple of minor issues in the upstart init script and the debian/control
<kirkland> smoser: can you url me the beta1 download?
<jdstrand> kirkland: yes, ubuntu19 is waiting to be accepted
<smoser> kirkland, you are lazy
<jdstrand> kirkland: (already uploaded)
<kirkland> smoser: i'm doing several things right now
<smoser> http://uec-images.ubuntu.com/releases/lucid/beta1/ubuntu-10.04-beta1-server-uec-amd64.tar.gz
<kirkland> smoser: beautiful, thanks
<smoser> no problem , i just like complaining.
<kirkland> smoser: wget happening
<smoser> kirkland, you *were* mirroring i think
<smoser> did you stop ?
<smoser> ie, you might have that local
<jdstrand> kirkland: https://launchpad.net/ubuntu/lucid/+queue?queue_state=1&queue_text=libvirt
<kirkland> smoser: hrmm, i think you're right, actually
<kirkland> jdstrand: i'm going to run the changes by you before uploading
<jdstrand> kirkland: please make an ubuntu20 based off what is in the queue
<kirkland> jdstrand: pretty small, straightforward
<kirkland> jdstrand: yup, just grabbed it
<kirkland> jdstrand: i think 2 sets of eyes is essential now
<jdstrand> kirkland: k
<jdstrand> kirkland: this is for post-freeze?
<kirkland> jdstrand: yes
<jdstrand> ok cool
<kirkland> jdstrand: post-freeze, yes
<kirkland> jdstrand: just wanted to get it queued
 * jdstrand nods
<kirkland> jdstrand: https://bugs.edge.launchpad.net/ubuntu/+source/libvirt/+bug/556312
<uvirtbot> Launchpad bug 556312 in libvirt "libvirt packages should not Recommend hypervisor packages" [Wishlist,Confirmed]
<kirkland> jdstrand: i'm inclined to agree with the reporter, and make the hypervisor a suggests of libvirt
<jdstrand> kirkland: I agree with both you and mathiaz
<jdstrand> kirkland: libvirt+qemu-kvm is the recommended/supported virtualization solution on ubuntu
<kirkland> jdstrand: agreed
<jdstrand> kirkland: if you change this to Suggests, you probably will need to change documentation
<jdstrand> (I'm not sure, but worth checking)
<kirkland> jdstrand: what documentation?
<jdstrand> splitting out virsh from libvirtd is not a bad idea
<jdstrand> kirkland: anything people will read that says 'apt-get install libvirt-bin' or whatever
<jdstrand> re splitting> imo not for lucid and not without debian
<jdstrand> kirkland: otherwise they'll have a shiny, but useless libvirt, which will lead to confusion
<jdstrand> there may be a debian bug on the libvirtd/virsh split...
<kirkland> jdstrand: agreed, split virsh for maverick is a good idea (not for lucid)
<kirkland> jdstrand: that documentation should read "apt-get install ubuntu-virt-server"
<kirkland> jdstrand: apt-cache show ubuntu-virt-server
<jdstrand> kirkland: my feeling is don't drop to Suggests, and maybe fix for maverick
<kirkland> jdstrand: that's our meta-package for libvirt + kvm + ssh
<kirkland> jdstrand: you say "don't drop" to suggests?
<jdstrand> kirkland: sure, but I don't know what else is floating out there
<kirkland> jdstrand: sorry, i thought you were agreeing with drop to suggests
<jdstrand> kirkland: yeah-- keep as is, say in the bug that we are considering splitting out libvirtd, etc
<jdstrand> kirkland: that's my opinion, but I don't have a strong preference
<jdstrand> I understand his point, but don't agree with dropping to Suggests (mathiaz' 80/20 analogy)
<kirkland> jdstrand: i'm not seeing any apt-get install libvirt-bin in the documentation (at least google isn't finding it)
<kirkland> jdstrand: okay
<jdstrand> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=508606
<uvirtbot> Debian bug 508606 in libvirt-bin "Split virsh to separate package" [Wishlist,Open]
<kirkland> jdstrand: https://help.ubuntu.com/8.04/serverguide/C/libvirt.html
<kirkland> sudo apt-get install kvm libvirt-bin
<kirkland> https://help.ubuntu.com/search.html?cof=FORID%3A9&cx=004599128559784038176%3Avj_p0xo-nng&ie=UTF-8&q=libvirt&sa=Search
<jdstrand> kirkland: if you are comfortable that it won't confuse users, then I don't see a huge problem with dropping to Suggests, even though I don't personally agree
<kirkland> jdstrand: i think we should be unanimous at this point in Lucid :-)
<kirkland> jdstrand: i'm willing to capitulate
<kirkland> jdstrand: i don't *think* it will confuse users, as I don't see any documentation that says 'install libvirt' and expects kvm to be there too
<kirkland> jdstrand: i think that 'Suggests' is appropriate, though definitely different than the behavior we've had for a long time
<soren> I'm still wondering about this... Perhaps we should reverse the recommendation?
<soren> So that qemu-kvm recommends libvirt-bin.
<jdstrand> kirkland: it depends on the POV: someone who just wants virsh doesn't need it. someone who wants to do virtualization on ubuntu does
<soren> Using libvirt is after all our recommended way to use kvm.
<jdstrand> which is why I think it is wishlist on the debian bug
<jdstrand> *shrug*
<soren> It would certainly fix the "I wanted libvirt, but I didn't want kvm" problem.
<kirkland> jdstrand: fair enough, i'm good with deferring this for lucid, and just telling user to use --no-install-recommends
<jdstrand> I've made my point. I won't complain about Suggest any more
<jdstrand> soren: I think there may be a lot of kvm users who don't want libvirt
<soren> jdstrand: Well, the usual phrasing in our docs is that we recommend using libvirt to manage kvm.
<soren> I wonder why reversing the relationship hasn't occurred to me before now.
<jdstrand> soren: absolutely. I just think that practically, there are more users of kvm with libvirt than libvirt-bin users with kvm
<jdstrand> err
<soren> er... :)
<soren> ?
<jdstrand> there are more users of kvm _without_ libvirt than libvirt-bin users _without_ kvm
<kirkland> jdstrand: i agree with you
<hggdh> smoser: ping on the uec rig
<soren> jdstrand: Probably. I'm just suggesting putting our debian/control file where our mouths are.
<soren> Or something.
<jdstrand> heh
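soren's reversal can be written down as debian/control fragments (hypothetical sketch, not the actual lucid packaging):

```
# Today (sketch): installing libvirt pulls in the hypervisor.
Package: libvirt-bin
Recommends: qemu-kvm

# soren's proposal (sketch): the hypervisor pulls in libvirt instead,
# matching the docs' "use libvirt to manage kvm" recommendation.
Package: qemu-kvm
Recommends: libvirt-bin
```

With the relationship reversed, "I wanted libvirt but not kvm" installs cleanly, while a plain qemu-kvm install still nudges users toward the supported management stack.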
<smoser> hggdh, is it up?
<hggdh> smoser: I cannot preseed them, they are all down
<hggdh> smoser: mathiaz opened a bug on it, bug 556833
<uvirtbot> Launchpad bug 556833 in eucalyptus "System fails to reboot after eucalyptus preseeded instlation" [Undecided,New] https://launchpad.net/bugs/556833
<mathiaz> hggdh: right - I don't know how to work around that one :/
<smoser> :-(
<hggdh> smoser: yes, a real killer :-(
<hggdh> smoser: I was trying to find *where* we are being hit, but it is a very long process
<hggdh> smoser: so I hoped you would know more ;-)
<hggdh> ttx: the euc rig is -- right now -- down hard
<smoser> oh, i have no idea on that. sorry.
<hggdh> uec
<mathiaz> hggdh: so the installation fails even with topo1?
<hggdh> mathiaz: I went to multi
<hggdh> mathiaz: hum. I will try topo1 now
<mathiaz> hggdh: yeah - try topo1
<mathiaz> it may well be that only multi is broken
<hggdh> mathiaz: I saved the syslog for multi, uploaded it to the bug
<hggdh> smoser: so can I keep the rig for now?
<smoser> hggdh, sure.
<smoser> if you do bring it up i'd like to just run some instances on it.
<ttx> back
<hggdh> smoser: cross your fingers. And toes, just in case
<smoser> done
<mathiaz> smoser: please start running
<hggdh> ttx: I am having problems with the uec rig, cannot test multi
<ttx> hggdh: the other topologies are alright ?
<hggdh> ttx: trying now topo1, the simplest
<ttx> smoser: any reason why the userdata test for UEC cloud images is truncated ?
<smoser> truncated as compared to EC2 ?
<ttx> yes
<smoser> i'm fine if you want to put all of the tests there.
<ttx> was just wondering if they were not relevant or
<smoser> i only shortened it to reduce the requirement.
<smoser> they are relevant
<ttx> I'm ok with this test right now
<smoser> just time consuming
<ttx> direct download of the link gives you an HTML page btw
<ttx> maybe point to the "download file" link instead ?
<hggdh> ok, cempedak booted with topo1. Will now load the others
<smoser> my thought process is that we test it more completely on ec2, and then test to make sure that user data is generally functional on euca; the user space code should function similarly.
<ttx> smoser: agreed
<smoser> (mostly we're testing the metadata service :)
<smoser> regarding the link, yeah, i knew it wasn't to the 'download'
<smoser> the reason for not linking directly to the download is that i wanted to give some context of where it came from
<smoser> i'll add a 'direct download' link
<ttx> ack
<ttx> smoser: are the EC2 instance tests in progress ?
<smoser> yeah
<ttx> kirkland: did you try the UEC cloud images yet ? Looking good on my side
<kirkland> ttx: yes, look good here too
<kirkland> ttx: i did test them
<mathiaz> hggdh: so it's only the multi-network topo that fails to install?
<mathiaz> smoser: will the current ami number change when beta2 is released?
<ttx> kirkland: cool, please register your results on the ISO tracker if appropriate
<smoser> yes
<smoser> mathiaz,
<smoser> :-(
<smoser> that is, i think, not likely to change. we publish images with names like "testing" or "daily". re-publishing as "beta-1" generates new ids
<mathiaz> smoser: right - I wanted to mention the AMI number in a blog post where I use the Lucid Beta2 image
<mathiaz> smoser: but that will change
<mathiaz> smoser: I will point to a URL instead
<smoser> what url ?
<ttx> kirkland, smoser, mathiaz, zul: I'll stop my tests for today, please try to cover the gaps in http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/all as well as you can, I'll fill the missing ones tomorrow morning
<mathiaz> smoser: that's my next question
<hggdh> adding user1/insecure as a user to the cloud at the rig, getting a message "password may not contain parts of user name"
<mathiaz> smoser: where will the list of Lucid Beta2 image be published?
<hggdh> isn't this a bit excessive?
<mathiaz> hggdh: the username is user1 and the password is insecure ?
<hggdh> mathiaz: correct
<mathiaz> hggdh: hm - you may have run into a bug then
<smoser> well, they will appear at http://uec-images.ubuntu.com/releases/lucid/beta-2 mathiaz
<mathiaz> hggdh: or the error message is wrong
<mathiaz> smoser: thanks
<hggdh> mathiaz: k, just wanted to be sure, will open a bug on it
<mathiaz> hggdh: the letters of user1 are (almost) all in the word insecure though
<hggdh> mathiaz: yes, they are. Still, it sounds excessive
<smoser> well, password 'insecure' does contain parts of 'user1', the 'e', 'u', 's', 'r'. actually all but the 1
<smoser> yeah
<zul> ttx: acked
<smoser> such a policy would actually significantly decrease the number of pass phrases possible for some users.
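To make smoser's point concrete, here is a hypothetical sketch of the strictest reading of such a rule (not eucalyptus' actual check): rejecting any password that shares a character with the username would indeed reject 'insecure' for 'user1'.

```shell
#!/bin/sh
# Hypothetical check: flag the password if ANY character of the username
# also appears in it.
shares_char_with_user() {
    user=$1; pass=$2
    while [ -n "$user" ]; do
        c=${user%"${user#?}"}   # first remaining character of the username
        user=${user#?}          # drop that character and continue
        case $pass in *"$c"*) return 0 ;; esac
    done
    return 1
}

shares_char_with_user user1 insecure && echo rejected || echo allowed
# prints: rejected ('u' from 'user1' appears in 'insecure')
```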
<mathiaz> the security team probably knows more about that - kees jdstrand mdeslaur ^^
<hggdh> will try another variation of the theme
<hggdh> no, my bad -- overshot a field :-(
<mdeslaur> huh?
<mdeslaur> mathiaz: what's the issue?
<mathiaz> mdeslaur: 14:33 < hggdh> adding user1/insecure as a user to the cloud at the rig, getting a message "password may not contain parts of user name"
<kklimonda> hmm, is it still possible to get django 1.2 into lucid if the final release is planned for 26th? on the one hand it's way too late and upstream has pushed back the schedule 3 or 4 times already + we would have to check all django rdepends for compatibility issues. on the other hand supporting 1.1 for 5 years may not be easy as the upstream have done quite a few big changes in 1.2..
<mathiaz> mdeslaur: 14:33 < hggdh> isn't this a bit excessive?
<mathiaz> kklimonda: as the upstream have done quite a few big changes in 1.2... <- that's another argument to *not* ship 1.2 in an LTS release
<hggdh> mdeslaur, mathiaz overshot a field in the page, my error
<mathiaz> kklimonda: things will get outdated over the life time of an LTS
<mdeslaur> hggdh: ok, cool.
<mathiaz> kklimonda: if upstream commits to a longer maintenance window of 1.2 it may change the game
<kklimonda> mathiaz: right - 1.2 is backward compatible with 1.1 which is in karmic (and both are not compatible with 0.96 from hardy anyway). I don't think developers are going to extend support over 6 months for fixes and another 6 months for security fixes.
<kirkland> smoser: around?
<kirkland> smoser: would you mind proofreading something for me?
<smoser> of course i would
<smoser> :)
<smoser> sure, what's up?
<kirkland> smoser: http://pastebin.ubuntu.com/410683/
<kirkland> smoser: just give that a once-over
<RoAkSoAx> kirkland, Howdy!! You might be able to help me. Is it possible to put each KVM instance in a different vlan? How/where to find info?
<alvin> kirkland: I just read that. About the last sentence: (fully supported) Canonical told me today (in a case) they don't support graphical operating systems in virtual machines. I was pointed to this page: https://help.ubuntu.com/community/KVM where VirtualBox is listed.
<alvin> It wasn't relevant for the case, but left me wondering. Virtualbox isn't in main.
<MTecknology> funkyHat: hi
<MTecknology> funkyHat: your choice
<funkyHat> MTecknology: let's go with here, looks quite quiet
<MTecknology> ok
<MTecknology> funkyHat: so, there's a bunch of directories now
<kirkland> Alblasco1702: thanks
<kirkland> alvin: thanks
<kirkland> alvin: i'll update
<funkyHat> ok, so the exim conf.d dir has a bunch of other dirs in it, one for each main section of the exim config. So each folder is read in a pre-set order (main first, router last I believe, but anyway). Inside those dirs the files are named so that they go in the right order
<funkyHat> That means you can put a new file in there with a number between 2 others if you want your file to be read after one but before the next
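The numbering funkyHat describes is plain lexical filename order, which a quick sketch can demonstrate (filenames taken from the conversation; a temp directory stands in for /etc/exim4/conf.d/main):

```shell
#!/bin/sh
# Debian's update-exim4.conf assembles each conf.d subdirectory's files in
# lexical filename order, so the numeric prefix controls a file's position.
dir=$(mktemp -d)
touch "$dir/01_exim4-config_listmacrosdefs" \
      "$dir/04_mailman_options" \
      "$dir/450_local_mailman"
order=$(ls "$dir")   # ls sorts lexically: 01_... then 04_... then 450_...
echo "$order"
rm -rf "$dir"
```

Dropping in a file named, say, 02_something would land it between the other two, which is the "between 2 others" trick mentioned above.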
<kirkland> RoAkSoAx: hrm, mathiaz knows more about vlans than I do
<RoAkSoAx> kirkland, ok :)
<RoAkSoAx> mathiaz, ^^
<mathiaz> RoAkSoAx: what do you wanna do exactly?
<MTecknology> funkyHat: alrighty
<mathiaz> RoAkSoAx: vlan are configured in the guest
<MTecknology> funkyHat: btw - mailman is the only reason exim is on the server
<funkyHat> MTecknology: that makes it a little simpler, but not much :)
<alvin> kirkland: Keep in mind that X is client/server protocol ;-)
<RoAkSoAx> mathiaz, so for example, i configure in VM1 vlan1, and VM2 vlan2, the Host is connected to a switchport which is configured as a trunking port?
<funkyHat> MTecknology: http://www.exim.org/howto/mailman21.html#exconf explains the bits you need to add to each section
<funkyHat> MTecknology: so the bit about macro defs should go in /etc/exim4/conf.d/main/01_exim4-config_listmacrosdefs
<RoAkSoAx> mathiaz, and every host I have to configure it to be able to access the trunk?
<mathiaz> RoAkSoAx: IIRC a trunking port means that any vlan will go through it
<mathiaz> RoAkSoAx: that being said kvm may not support the vlan tags generated by the guest
<MTecknology> funkyHat: so the first part is 'Main configuration settings'
<mathiaz> RoAkSoAx: and drop them when sending the packet to the switch
<funkyHat> MTecknology: make a new file for the exim router. I named mine 450_local_mailman
<funkyHat> MTecknology: right
<funkyHat> MTecknology: actually I made a new file in the main dir for mine it seems
<MTecknology> funkyHat: 'Exim Router' config goes into  /etc/exim4/conf.d/router/450_local_mailman ?
<mathiaz> RoAkSoAx: you also need to make sure that kvm is using a bridge as the network interface between the guests and the switch port
<funkyHat> MTecknology: 04_mailman_options I called it
<RoAkSoAx> mathiaz, right that's the thing. usually what I would do (in a cisco switch) "switchport access vlan 2"
<RoAkSoAx> and connected to that switchport the machine
<MTecknology> funkyHat: so just the router config in '/etc/exim4/conf.d/router/04_mailman_options' ?
<funkyHat> MTecknology: ah, sorry I was still talking about the main config
<funkyHat> Got a bit behind
<RoAkSoAx> mathiaz,  now, given that the machine hosts many KVM's, that would mean that I should not configure the switchport to only listen to vlan 2, but would have to be configured as trunk, restricting which vlans will go through, correct?
<MTecknology> funkyHat: let's start over here..
<funkyHat> MTecknology: so I have conf.d/main/04_mailman_options
<funkyHat> I also have conf.d/router/450_mailman_router
<MTecknology> alrighty, "Main configuration settings" goes into the first?
<funkyHat> Yes
<hggdh> smoser: I should take another 30 minutes on the rig
<smoser> ok.
<funkyHat> MTecknology: So you can just copy and paste from that howto page, but you might need to adjust bits like the username, group and paths
<MTecknology> yup
<funkyHat> MTecknology: the router and the transport you can just take as they are and put them in their own files in router/ and transport/
<funkyHat> My router is 450_mailman_router and the transport is 40_local_mailman_pipe (don't know why it's called pipe!).
<funkyHat> The routers are where the order is important
<mathiaz> RoAkSoAx: yes - IIRC setting a switch port to trunking means that it will not control the vlan bits in the packet
<mathiaz> RoAkSoAx: note that you may lose some security here as it would be the guests that are responsible for setting the proper vlan
<mathiaz> RoAkSoAx: if you compromise a guest you could switch its configuration to use another vlan
<MTecknology> funkyHat: then restart exim?
<funkyHat> MTecknology: yep
<mathiaz> RoAkSoAx: so the proper way to do it would be in the bridge on the kvm *host*
<MTecknology> hrm.. user mailman was not found
<funkyHat> make sure you use the init script, not sending it SIGHUP
<MTecknology> I wonder what user it installs as
<funkyHat> from the repos it's list, I believe
<funkyHat> you could ps aux | grep mailman
<mathiaz> RoAkSoAx: I'm not familiar enough with the bridge in linux to see if that's possible (I'd guess so)
<MTecknology> list
<MTecknology> funkyHat: yay - no errors now - so should things work like magic?
<funkyHat> Theoretically!
<MTecknology> well - sent an email reminder - we'll see if it shows up..
<RoAkSoAx> mathiaz, right, so I would have to do something like this? http://paste.ubuntu.com/410694/
<RoAkSoAx> (for the guests)
<funkyHat> MTecknology: you can check the exim logs to see what happened to it
<MTecknology> funkyHat: doesn't look too bad
<MTecknology> http://paste.ubuntu.com/410697/
<mathiaz> RoAkSoAx: http://bazaar.launchpad.net/~mathiaz/%2Bjunk/uec-testing-preseeds/annotate/head%3A/templates/preseed/lucid/uec_multi_router#L3
<mathiaz> RoAkSoAx: ^^ in the late_command I generate a complete /etc/network/interface that sets up 4 interfaces with vlans
<RoAkSoAx> mathiaz, that's the KVM *host*, correct? so in the KVM guests we only assign an IP address on the same subnet as the one in the vlan?
<mathiaz> RoAkSoAx: nope - that would be the KVM guest
<mathiaz> RoAkSoAx: in the kvm guest you create an eth0.2 interface
<mathiaz> RoAkSoAx: where eth0 is the raw interface and 2 is the vlan
<mathiaz> RoAkSoAx: and install the vlan package
<mathiaz> RoAkSoAx: that's all that is required
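mathiaz's recipe boils down to a fragment like this in the guest's /etc/network/interfaces (the address and vlan id 2 are illustrative; the vlan package supplies the ifupdown hooks):

```
# eth0.2 = vlan tag 2 on raw interface eth0; ifupdown creates the virtual
# interface automatically once the vlan package is installed.
auto eth0.2
iface eth0.2 inet static
    address 192.168.2.10
    netmask 255.255.255.0
    vlan-raw-device eth0
```

As mathiaz notes later, nothing here stops a compromised guest from swapping the tag, so enforcing the vlan on the kvm host's bridge is the safer design.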
<MTecknology> How do I set the time on a system?
<RoAkSoAx> mathiaz, awesome then. I'll give it a try :)
<mathiaz> RoAkSoAx: the ifupdown scripts takes care of setting a vlan in the guest
<mathiaz> RoAkSoAx: but you have to trust *your* guests
<MTecknology> oh..
<RoAkSoAx> mathiaz, right. Ok then I'll give it a try to see how my config goes then :). Thanks for the help
<BlaDe^> Hi I've recently moved over to linux, and I've installed apache/php and PHP doesn't have the permissions to include
<mathiaz> RoAkSoAx: np
<BlaDe^> should I run apache as a different user or chmod differently? what's the recommended solution?
<guntbert> MTecknology: did you see https://help.ubuntu.com/8.04/serverguide/C/NTP.html ?
<funkyHat> BlaDe^: fix the permissions on your files so that www-data can read them
<MTecknology> guntbert: I just remembered dpkg-reconfigure tzdata
<BlaDe^> funkyHat,  should I chmod the entire /var/www dir then ? (to 777 iirc) ?
<funkyHat> BlaDe^: never ever to 777
<guntbert> MTecknology: good :)
<BlaDe^> funkyHat to what then?
<MTecknology> funkyHat: this line looks interesting..   2010-04-07 19:30:05 1Nzawv-0005yE-15 == michael@lists.kalliki.com R=dnslookup_relay_to_domains T=remote_smtp defer (111): Connection refused
<sherr> BlaDe^: This is very basic unix. You need to look at file/dir permissions.
<sherr> BlaDe^: man chmod
<funkyHat> MTecknology: that was before you restarted exim
<sherr> BlaDe^: consider read (r) perm for instance - and user/group/other perms
<MTecknology> funkyHat: I cleared the log and restarted - I'll send the mail again
<MTecknology> funkyHat: that pops up again right after restarting
<funkyHat> MTecknology: hrm
<MTecknology> funkyHat: clear log, restart - http://paste.ubuntu.com/410699/
<MTecknology> restart exim*
<funkyHat> MTecknology: what about /var/log/exim4/rejectlog?
<funkyHat> MTecknology: oh, we might have forgotten the bit about configuring mailman...
<funkyHat> hah
<funkyHat> http://www.exim.org/howto/mailman21.html#mmconf
<MTecknology> I set that part
<MTecknology> I think I did - h on
<funkyHat> oh ok
<BlaDe^> sherr,  I've read what the permissions do... the bitwise system and such, but I don't know what I should be allowing
<funkyHat> BlaDe^: basically you should only allow www-data to read. usually making the files world-readable is acceptable. so chmod -R go+rX /var/www should do nicely
<guntbert> BlaDe^: never allow write access for "others" on a server!
<BlaDe^> right ok
<funkyHat> That is (for group and others) add (read and "execute if it already had execute permissions")
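funkyHat's go+rX suggestion can be demonstrated in a scratch directory (paths are made up; stat -c assumes GNU coreutils):

```shell
#!/bin/sh
# go+rX gives group/other read everywhere, but execute only where it makes
# sense: directories (search permission) and files already executable.
dir=$(mktemp -d)
mkdir "$dir/site"
touch "$dir/site/page.html"
chmod 700 "$dir/site"            # directory: only owner can enter
chmod 600 "$dir/site/page.html"  # file: only owner can read
chmod -R go+rX "$dir"
dperm=$(stat -c '%a' "$dir/site")
fperm=$(stat -c '%a' "$dir/site/page.html")
echo "$dperm $fperm"             # directory becomes 755, plain file 644
rm -rf "$dir"
```

This is why go+rX is safer than a blanket 777: the file never gains an execute bit it did not already have, and nothing becomes world-writable.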
<ttx> smoser: about to call it a day, I see only 4 EC2 tests completed, is that the current situation ?
<smoser> yes.
<smoser> ttx, dont worry too much about it.
<smoser> i ran into a snafu with the ebs images
<smoser> which i'm fixing.
<MTecknology> funkyHat: ok - screwed up a little
<smoser> it will require new AMIs in the iso tracker
<ttx> ah
<ttx> smoser: ok
<MTecknology> funkyHat: I restart and now all I'm getting is - 2010-04-07 14:53:56 1Nzawv-0005yE-15 == michael@lists.kalliki.com R=dnslookup_relay_to_domains T=remote_smtp defer (-53): retry time not reached for any host
<MTecknology> funkyHat: any idea what that last piece to this is?
<MTecknology> I'm assuming the last piece before things get simple
<MTecknology> funkyHat: did you run off?
<funkyHat> MTecknology: yes but I am back!
<MTecknology> funkyHat: :P
<MTecknology> funkyHat: could we maybe go dpkg-reconfigure exim4 step by step? I need to run up to my gf's room and I'll be right back on
<funkyHat> MTecknology: sure
<BlaDe^> I've setup the mod_rewrite and it's present in the phpinfo(); however my url's aren't being re-written. Is there anything additional I need to do for .htaccess files to be read?
<funkyHat> BlaDe^: in your server config you need to add AllowOverride +FileInfo
<BlaDe^> ah right ok, I'll try that
<BlaDe^> should I apply that to the root dir?
<funkyHat> There should be a section for <Directory /var/www>
<MTecknology> funkyHat: alrighty - internet site
<BlaDe^> Yeah, I've just added it. However it still isn't working
<MTecknology> kirkland: system mail name: lists.kalliki.com
<funkyHat> MTecknology: yep
<BlaDe^> <Directory /var/www/> AllowOverride Options FileInfo </Directory>
<kirkland> MTecknology: huh?
<BlaDe^> then restarted apache
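For reference, a hedged sketch of such a <Directory> block in Apache 2.2 syntax as shipped in lucid (the Options line is illustrative); FileInfo is the AllowOverride category that lets .htaccess files carry mod_rewrite directives, and the .htaccess itself still needs RewriteEngine On:

```
<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride FileInfo
    Order allow,deny
    Allow from all
</Directory>
```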
<MTecknology> funkyHat: IP to listen on: blank ?
<funkyHat> MTecknology: yep
<MTecknology> kirkland: he's helping me set up a mailman mailing list
<MTecknology> kirkland: oh- sorry!
<MTecknology> kirkland: didn't mean to highlight you, k is too close to f :P
<funkyHat> MTecknology: I'm wondering if setting the system mail name to lists.kalliki is getting in the way
<MTecknology> funkyHat: should I set it to just the actual server name?
<Hypnoz> whats the deal with #sysadmin? How do you get an invite to that chan?
<funkyHat> MTecknology: yeah
<MTecknology> funkyHat: so texo.kalliki.com or just texo?
<funkyHat> MTecknology: probably doesn't matter
<MTecknology> funkyHat: ? - the help part says
<MTecknology> Thus, if a mail address on the local host is foo@example.org, the correct value for this option would be example.org.'
<funkyHat> MTecknology: put it in as texo.kalliki.com then
<MTecknology> funkyHat: ok - IP to listen on blank
<funkyHat> Yep
<MTecknology> funkyHat: and then in 'Other destinations for which mail is accepted:' I should add lists.kalliki.com ?
<funkyHat> texo.kalliki.com doesn't resolve...
<MTecknology> no
<funkyHat> MTecknology: no don't put that there
<MTecknology> funkyHat: it resolves internally but that's it
<MTecknology> funkyHat: lists.kalliki.com goes to that server
<funkyHat> Don't put lists.kalliki.com in other destinations
<funkyHat> I don't think it's needed
<MTecknology> ok
<MTecknology> the default in there is texo.texo
<funkyHat> Ok just leave it like that then
<MTecknology> Domains to relay mail for: ?
<funkyHat> None
<funkyHat> or default
<MTecknology> machines to relay I'm guessing should be blank too
<MTecknology> 'Keep number of DNS-queries minimal (Dial-on-Demand)?' default No - probably doesn't matter?
<funkyHat> Shouldn't matter, no is better
<MTecknology> yay - more intelligible errors in the logs
<MTecknology> 2010-04-07 15:22:58 1Nzbm6-0006vx-OR ** root@lists.kalliki.com: Unrouteable address
<funkyHat> aha!
<funkyHat> What does the rejectlog say?
<MTecknology> there isn't one
<MTecknology> so _ I have two frozen messages _ must be getting closer now :)
<MTecknology> funkyHat: nice - "Drupal Multisite in lighttpd" - I went to nginx
<funkyHat> MTecknology: I'm actually still running apache, working on migrating my setup to lighttpd so I can do a proper comparison
<MTecknology> funkyHat: Is there something I need to enable for rejectlog?
<funkyHat> MTecknology: no, maybe that's a spamd thing
<MTecknology> funkyHat: I have spamassassin installed but I commented out the line that tells mailman to use it
<funkyHat> I'm trying to remember if I had any other issues...
<funkyHat> MTecknology: can you pastebin /var/lib/exim4/config.autogenerated
<MTecknology> funkyHat: http://paste.ubuntu.com/410718/
<hggdh> smoser: you can use the rig now
<hggdh> smoser: tell me when you are done, please
<smoser> hggdh, thanks... is it up and running ?
<MTecknology> hggdh: sounds like fun - can I play?
<hggdh> smoser: yes, it is up & running, topo1
<hggdh> MTecknology: heh
<MTecknology> funkyHat: so is that just slapping together all the configs?
<funkyHat> MTecknology: yeah, the split config files are a Debian thing; when the init script starts exim up it jams all of the files together and puts them there, and that's the actual config file that exim reads
<smoser> hggdh, you registered the beta 1 images ?
<MTecknology> funkyHat: does it look like i screwed up?
<funkyHat> MTecknology: I don't think so. Still figuring it out
<funkyHat> We might need to modify an acl
<MTecknology> sounds exciting
<MTecknology> The 'S' in 'SMTP' is supposed to stand for 'Simple' right? ... I'm not seeing it.
<funkyHat> MTecknology: :D the protocol itself is pretty simple
<funkyHat> EHLO lists.kalliki.com
<funkyHat> MAIL FROM: <m@funkyhat.org>
 * MTecknology votes for CTMP 'Complex' so then the servers can be Simple instead :P
<funkyHat> haha :D
<MTecknology> sending mail from telnet is fun though
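The exchange funkyHat started typing above can be sketched end to end. A minimal raw SMTP session, using the addresses from this thread; the `nc` pipe is commented out so the sketch only prints the dialogue instead of actually sending anything:

```shell
# raw SMTP dialogue, one command per line, CRLF-terminated as the protocol expects;
# sender/recipient addresses are the hypothetical ones from this conversation
printf '%s\r\n' \
  "EHLO client.example.com" \
  "MAIL FROM:<m@funkyhat.org>" \
  "RCPT TO:<test@lists.kalliki.com>" \
  "DATA" \
  "Subject: test via telnet" \
  "" \
  "hello from a raw SMTP session" \
  "." \
  "QUIT"
# to actually send, pipe the output into the server instead of stdout:
#   ... | nc lists.kalliki.com 25
```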
<jimbobco> anybody have experience using ubuntu as an iscsi target?
<hggdh> smoser: yes, I did
<hggdh> smoser: amd64
<hggdh> kirkland: up to 300 instances run on topo1
<MTecknology> funkyHat: :(
<kirkland> hggdh: rocktastic!
<funkyHat> MTecknology: I'm comparing our configs in meld
<kees> mathiaz: I'm not sure I follow; what were you curious about?
<MTecknology> funkyHat: I meant that I'm sad this isn't working as easily as I hoped - I'll setup smtp on the router in the mean time
<mathiaz> kees: hi - hggdh ran into an issue with the username and the password
<kees> "the" username?
<mathiaz> kees: it turned out to be a user error
<kees> ah, okay
<webmaven> One of my Ubuntu vmware images seems to have got itself screwed up, and now says that the file system is read only. Any ideas what went wrong, and how to fix it?
<hggdh> kees, it was a real problem, between the chair and the keyboard
<MTecknology> funkyHat: ok - any smtp coming in will wind up on that server
<kees> hggdh: heh :)
<funkyHat> MTecknology: mm, add lists.kalliki.com to the list of domains to accept mail for in dpkg-reconfigure
<webmaven> No ideas, huh?
<funkyHat> webmaven: fsck
<funkyHat> MTecknology: I didn't notice any major differences between our configurations
<webmaven> gives me a warning.
<MTecknology> funkyHat: all done
<MTecknology> how can I purge frozen messages?
<funkyHat> I'm getting connection refused
<funkyHat> MTecknology: exim4 -v -M <message ID> will try to push them through again
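MTecknology asked how to purge frozen messages and only got the retry half of the answer. A sketch of the usual exim queue commands; the destructive ones stay commented because they delete mail:

```shell
# show the queue; frozen messages are tagged "*** frozen ***"
exim4 -bp 2>/dev/null || echo "(exim4 not available on this machine)"
# retry a single message, verbosely:
#   exim4 -v -M <message-id>
# remove a single message:
#   exim4 -Mrm <message-id>
# purge everything frozen: exiqgrep -z selects frozen messages, -i prints bare IDs
#   exiqgrep -z -i | xargs --no-run-if-empty exim4 -Mrm
```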
<webmaven> WARNING!!!  Running e2fsck on a mounted filesystem may cause
<webmaven> SEVERE filesystem damage.
<webmaven> Do you really want to continue (y/n)?
<alvin> I'd say 'no'
<funkyHat> webmaven: can you run from a live CD and do it without being mounted?
<funkyHat> I've never had to run it manually, so there might be better advice, but that can't be a bad way to go
<webmaven> funkyHat: no, it's a vmware image.
<funkyHat> webmaven: live cd image?
<MTecknology> funkyHat: I can telnet to localhost on it
<webmaven> it doesn't have a physical CD-ROM drive to boot from.
<funkyHat> Hm, there must be a boot option to force a fsck
<MTecknology> funkyHat: 451 4.3.0 Temporary system failure. Please try again later.
<webmaven> Hmm. That sounds like a promising idea.
<MTecknology> touch /forcefsck
<MTecknology> iirc
<funkyHat> Bit of a problem if the FS is read only
<MTecknology> funkyHat: that's right after MAIL FROM: test@lists.ubuntu.com
<MTecknology> good point
<funkyHat> You're trying to send mail from the list to itself
<webmaven> MTecknology: that won't work, since the fs is read-only.
<MTecknology> hrm - How do I cancel a telnet connection
<MTecknology> Ctrl+C isn't working
<webmaven> Can I unmount the fs?
<funkyHat> Usually ctrl+]
<MTecknology> ah..
<funkyHat> webmaven: might be able to. you're likely to have less problems if you drop to single user mode first
<funkyHat> But I guess if it's already read only it won't make that much difference
<MTecknology> MAIL FROM: michael@kalliki.com   451 4.3.0 Temporary system failure. Please try again later.
 * webmaven googles 'drop to single user mode'...
<funkyHat> runlevel 1
<funkyHat> I assume you have virtual console access
<MTecknology> funkyHat: btw - this system is behind a router - I have 7 systems behind it - one public ip
<funkyHat> MTecknology: that should be fine
<MTecknology> it's broken and will never live again :'(
<funkyHat> I still can't connect from here
<MTecknology> funkyHat: I'm not sure how to tell exim to listen
<MTecknology> it's not the router blocking it
<webmaven> funkyHat: not convenient access. I've been accessing this vm via ssh.
<MTecknology> it shouldn't be..
<funkyHat> Does it tell you it's Exim, when you connect using telnet?
<funkyHat> webmaven: well if you've got filesystem problems you might have to get access to it anyway
<webmaven> Hmm.
<MTecknology> funkyHat: .......no
<MTecknology> funkyHat: http://dpaste.com/180622/
<funkyHat> MTecknology: you're not talking to exim then
<MTecknology> funkyHat: tcp6       0      0 [::]:smtp               [::]:*                  LISTEN      29615/exim4
<MTecknology> there's also tcp        0      0 localhost:smtp          *:*                     LISTEN      1030/sendmail: MTA:
<MTecknology> hrm.. pastebin again
<MTecknology> funkyHat: http://dpaste.com/180625/
<funkyHat> right, so exim isn't listening on ipv6 because sendmail is :/
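The netstat output above can be reproduced on a modern system, where `ss` replaces `netstat`. A sketch; run it as root to see PIDs for other users' processes:

```shell
# list TCP listeners with owning processes, keeping the header plus anything
# bound to port 25; two MTAs fighting over the port show up as separate lines
ss -ltnp 2>/dev/null | awk 'NR==1 || $4 ~ /:25$/'
# if a stale MTA still holds the port after removal, stop it before restarting exim:
#   sudo kill <pid from the output above>   # or stop it via its init script/service
```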
<webmaven> Well, I ignored the warning, and ran fsck. Didn't find any problems.
<funkyHat> huh
<funkyHat> webmaven: ok, well try remounting it rw then
<webmaven> fs is still read-only though
<webmaven> How do I do that?
<MTecknology> funkyHat: for the heck of it... let's try a reboot....
<funkyHat> mount -o rw,remount /device/name
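Before retrying the remount, it helps to confirm what the kernel actually thinks the mount state is. A sketch that only reads `/proc/mounts`; the remount itself needs root, so it stays commented:

```shell
# field 4 of /proc/mounts is the comma-separated option list for each mount,
# so match "ro" as a whole option rather than a substring of e.g. "errors=..."
opts=$(awk '$2 == "/" {print $4; exit}' /proc/mounts)
case ",$opts," in
  *,ro,*) echo "/ is mounted read-only" ;;
  *)      echo "/ is mounted read-write" ;;
esac
# to retry read-write (fails if the underlying block device is write-protected):
#   sudo mount -o remount,rw /
```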
<funkyHat> MTecknology: if you want
<MTecknology> funkyHat: there we go
<funkyHat> I got them mixed up, exim is *only* listening on ipv6
<MTecknology> funkyHat: 220 texo.texo ESMTP Exim 4.71 Wed, 07 Apr 2010 16:21:57 -0500 :D
<funkyHat> aha
<MTecknology> now I try to send an email....
<MTecknology> funkyHat: probably from switching around mail servers - had a process not killed
<MTecknology> :D
<MTecknology> funkyHat: it showed up :D
<MTecknology> funkyHat: and the two messages queued up just came through :D
<funkyHat> Ooh
<funkyHat> So it's working?
<funkyHat> MTecknology: check that the package sendmail is not installed
<MTecknology> funkyHat: it's not - but it was a few hours ago
<funkyHat> Ok
<MTecknology> uninstalling it didn't terminate the process because mailman was using it (my best guess - not sure)
<funkyHat> yeah, that's a little weird
<webmaven> mount / -o rw,remount
<webmaven> mount: cannot remount block device /dev/mapper/webdev04-root read-write, is write-protected
<MTecknology> it just replied with the subscription confirmation :)
<funkyHat> webmaven: odd!
<MTecknology> webmaven: umount /dev/mapper/webdev04-root; fsck -y /dev/mapper/webdev04-root  ?
<funkyHat> MTecknology: and I can connect via smtp now
<MTecknology> try it
<funkyHat> Your mail server is calling itself texo.texo though
<MTecknology> ya - I'm sure that's an easy little fix
<MTecknology> funkyHat: would this look like the right thing to have in /etc/hosts?   127.0.1.1 lists.kalliki.com texo
<webmaven> e2fsck 1.41.9 (22-Aug-2009)
<webmaven> /dev/mapper/webdev04-root: clean, 108452/1237888 files, 2873860/4948992 blocks
<funkyHat> MTecknology: yep looks ok
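After editing `/etc/hosts`, the name the mail server announces can be checked directly. A sketch; the `texo`/`lists.kalliki.com` names are the ones from this thread, and the banner name (`texo.texo` above) comes from the machine's idea of its own hostname:

```shell
# verify what hostname resolution now yields; exim's greeting follows from this
hostname          # short name, e.g. texo (assumed from the banner above)
hostname --fqdn   # should print lists.kalliki.com once the hosts entry is in place
```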
<MTecknology> funkyHat: doesn't look like the mail was received that I sent from my client though
<MTecknology> funkyHat: I do see the one from you in the queue though
<MTecknology> funkyHat: you did that by telnet lists.kalliki.com 25 ?
<funkyHat> MTecknology: yep
<MTecknology> funkyHat: so now the last piece of the puzzle...
<MTecknology> mail from client to list
<funkyHat> here you go
<smoser> hggdh, are you going to wipe the system ?
<MTecknology> funkyHat: hm?
<MTecknology> funkyHat: logs make it look like the message is bouncing
<MTecknology> funkyHat: http://dpaste.com/180634/
<funkyHat> Yes test-bounces is ok
<funkyHat> I got both of them back, so they should both be in the queue
<MTecknology> funkyHat: hm?
<funkyHat> MTecknology: <listname>-bounces is the "local part" used by mailman for a lot of emails it sends
<MTecknology> funkyHat: hrm.. http://lists.kalliki.com/pipermail/test/ - the email I accepted from you (in admin interface) isn't showing up
<smoser> hggdh, i've got to run. i have a screen session running a loop of start instances and kill instances. i'd appreciate it if it was left to run to completion and the logs saved off somewhere.
<smoser> but i have to run. if you need the machine, just take it, though.
<MTecknology> funkyHat: I see both of yours in my inbox
<funkyHat> MTecknology: maybe the archive takes a while to catch up
<MTecknology> alrighty
<MTecknology> funkyHat: so- why can't I post from my mail client to send email to it?
<funkyHat> MTecknology: you can't?
<MTecknology> I'll try again
<MTecknology> or... maybe it just got in - looking at the logs
<hggdh> smoser: I will wait, and I will not wipe the system clean
<hggdh> smoser: running under 'ubuntu'? I can save the directory if you want
<MTecknology> funkyHat: No reason for an MX record if the address being sent to is the same ip as the smtp server, right?
<funkyHat> MTecknology: I'm wondering about that myself. It doesn't seem to be a problem from here
<MTecknology> funkyHat: you can send an email from your client?
<funkyHat> MTecknology: my second mail was sent from gmail, which means via my own SMTP server
<smoser> hggdh, yes, under ubuntu
<MTecknology> funkyHat: my mail client sends through gmail
<MTecknology> I'll try again
<MTecknology> funkyHat: did you sign up for the list?
<funkyHat> no
<funkyHat> my gmail is set up to send all email through my own mail server
<hggdh> smoser: I will tar the whole thing. I believe you are under ./smoser-test
<MTecknology> oh
 * MTecknology sends mail to test@lists.kalliki.com
<MTecknology> funkyHat: should be there - right? http://dpaste.com/180639/
<funkyHat> That's the email from mailman back to you
<MTecknology> funkyHat: http://dpaste.com/180641/
<funkyHat> Yep
<MTecknology> funkyHat: that looks like it should be going through correctly?
<funkyHat> MTecknology: yeah it looks fine
<MTecknology> funkyHat: so why am I not getting the message back in my inbox? :(
<funkyHat> Could be because gmail decides not to show you it
<funkyHat> If it's identical to the one you sent
<funkyHat> I've just subscribed
<MTecknology> good point..
<funkyHat> Oh I think I did it wrong though
<MTecknology> ya, I don't see you signed up
<MTecknology> funkyHat: and I'm assuming this line is you getting an email back from mailman
<MTecknology> and this looks like you sent a message to the list
<funkyHat> Yes
<MTecknology> and I see it in my inbox
<MTecknology> funkyHat: I see what you did - subject: subscribe
<funkyHat> That's the one which didn't work
<MTecknology> yup
<MTecknology> I just looked at the mod queue
<funkyHat> I should have emailed test-subscribe
<MTecknology> funkyHat: so now there's 1) the archives list is still empty and 2) making sure I'm not a spammer's friend
<funkyHat> The default exim config is pretty sane
<MTecknology> cool :)
<MTecknology> 550 relay not permitted
<MTecknology> :D
<funkyHat> tada!
<webmaven> funkyHat: looks like the problem is network-related, these VMs are using a SAN, and one of our switches is bouncing.
<funkyHat> webmaven: aha!
<webmaven> funkyHat: I didn't even know they had set them up that way.
<MTecknology> funkyHat: just had to make sure - I've seen about the worst a random user can experience  from it - >1k spam per hour
<uvirtbot> New bug: #557453 in postfix (main) "package postfix 2.6.5-3 failed to install/upgrade: subprocess installed post-installation script returned error exit status 75" [Undecided,Invalid] https://launchpad.net/bugs/557453
<funkyHat> MTecknology: fair enough. You might want to enable spamassassin too :)
<funkyHat> Though if you're only going to run member-only lists it's probably not a concern
<MTecknology> funkyHat: I have it installed, just not enabled
<MTecknology> funkyHat: I'll just add GLOBAL_PIPELINE.insert(1, 'SpamAssassin') to the bottom of  /etc/mailman/mm_cfg.py
<MTecknology> hrm - there's a comment for it
<MTecknology> funkyHat: I'll just assume that's fully magical
<funkyHat> MTecknology: it probably makes more sense to enable it in exim
<MTecknology> funkyHat: oh, I thought spamassassin + mailman would put the message into the moderation queue and spamassassin + exim4 would just drop the message
<funkyHat> MTecknology: oh, hrm
<funkyHat> spamassassin drops some mail but only if the spam score is stupidly high
<funkyHat> Mostly it just adds a spam: yes header
<MTecknology> oh
<MTecknology> funkyHat: so - I think once the archives are showing I'll have to hug you
<MTecknology> I'm holding off for now ;)
<funkyHat> Maybe they aren't because it's a members only list?
<MTecknology> I'm logged in to manage the list though
<MTecknology> the archive is set to public
<MTecknology> Archive messages?
<funkyHat> That would be it
<MTecknology> Archive messages? yes;  Is archive file source for public or private archival? public;  How often should a new archive volume be started? Monthly
<smoser> hggdh, job is done. if you could copy off smoser-test dir that'd be great. then i'm done
<MTecknology> funkyHat: looks like a permissions issue
<MTecknology> funkyHat: I ran a script to fix the permissions - I think it broke more now :P
<funkyHat> uhoh :P
<MTecknology> [Wed Apr 07 17:27:47 2010] [error] [client 192.168.1.5] Symbolic link not allowed or link target not accessible: /var/lib/mailman/archives/public/test, referer: http://lists.kalliki.com/mailman/listinfo/test
<MTecknology> funkyHat: http://dpaste.com/180656/
<funkyHat> MTecknology: do you have a <directory> section for /var/lib/mailman/archives/public in your apache config?
<funkyHat> Or a directory above that
<funkyHat> Oh, symbolic link
<funkyHat> Options +FollowSymLinks
<MTecknology> funkyHat: I have <Location /> Options +FollowSymLinks </L>
<MTecknology> funkyHat: since when this is installed it goes all over the system and since I'm the only admin that will muck with this system - would <Dir /> Opt +SymLink </Dir> be horrible?
<funkyHat> MTecknology: well I guess if you're the only person that can log in it's not too bad, it seems awkward though :P
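A narrower alternative to a root-wide `<Dir />` stanza: scope the option to just the mailman public archive tree. A sketch in Apache 2.2-era syntax to match an Ubuntu 10.04 install (an assumption), printed rather than installed:

```shell
# print a Directory stanza scoped to the mailman public archives instead of "/";
# drop this into the relevant vhost or conf.d file and reload apache
cat <<'EOF'
<Directory /var/lib/mailman/archives/public/>
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
EOF
```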
<MTecknology> funkyHat: Still doesn't work :P
<funkyHat> MTecknology: errors?
<MTecknology> funkyHat: su - list; ls /var/lib/mailman/archives/public/test; shows the files in it
<funkyHat> Oh so it's a permissions issue
<funkyHat> What are the permissions on the dir?
<MTecknology> that's what that last pastebin was for
<MTecknology> hrm.....    Alias /pipermail/ /var/lib/mailman/archives/public/
<MTecknology> funkyHat: http://dpaste.com/180665/
<funkyHat> MTecknology: what are the permissions on /var/lib/mailman/archives/public?
<MTecknology> funkyHat: drwxrwsr-x 2 root list 4096 2010-04-07 11:30 public
<funkyHat> Ok
<MTecknology> funkyHat: I'm thinking it's probably an apache config issue - I don't see any permission issues...
<funkyHat> MTecknology: yep looks like it
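The "Symbolic link not allowed or link target not accessible" error is often a path-traversal problem rather than the final directory's mode: every component above the target must be executable by the apache user. `namei` walks the whole path; a sketch (the mailman path is the one from this thread, the demo runs on `/etc` so it works anywhere):

```shell
# namei -l prints owner, group and mode for each component of a path; any
# component the web server's user cannot traverse breaks symlink resolution
#   namei -l /var/lib/mailman/archives/public/test
namei -l /etc    # demo on a path that exists everywhere
```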
<Rafael__> I have the following question. I am very pleased with the help I have received from this IRC channel; my Windows client copies a folder onto the Ubuntu server using rsync and cron.
<Rafael__> What I would like to know is whether there is a way to avoid sharing my Windows folder with everybody.
<MTecknology> funkyHat: alrighty - I'll deal with it more later - thanks VERY VERY VERY much :D
<MTecknology> funkyHat: I think it's eating time..
<funkyHat> om nom nom
<hggdh> bug 557110
<uvirtbot> Launchpad bug 557110 in mysql-cluster-7.0 "Dependency mismatch for mysql-cluster-*" [Undecided,Confirmed] https://launchpad.net/bugs/557110
#ubuntu-server 2010-04-08
<RoAk> kirkland, I'm thinking of removing the release codename from the config and only obtaining it from the cache file where I'm storing it. What do you think about this?
<RoAk> and loading all the defaults from the config file, instead of setting defaults in the code where they're not set
<MTecknology> funkyHat: you still around?
<funkyHat> MTecknology: just about
<MTecknology> funkyHat: I'm getting this now - http://dpaste.com/180694/
<funkyHat> MTecknology: ok
<funkyHat> That looks fine to me
<MTecknology> funkyHat: oh..
<MTecknology> OH!
<funkyHat> MTecknology: is it working? :)
<MTecknology> I sent the message from the wrong addy - that's why the message isn't going through
<funkyHat> hehe
<MTecknology> funkyHat: DONE!
<MTecknology> funkyHat: How can I change the mailman admin email addy?
<funkyHat> MTecknology: can't remember, one of the helper scripts can do it
<funkyHat> MTecknology: oh, I think it's an option in the web admin interface
<funkyHat> in general options
<MTecknology> funkyHat: where's the actual admin interface?
<funkyHat> http://lists.kalliki.com/mailman/admin/test
<MTecknology> funkyHat: oh, I thought there was one interface for more admin
<MTecknology> funkyHat: so how can I delete/add lists?
<MTecknology> funkyHat: I need to recreate the mailman mailing list - i muffed it up some and that's probably easiest - no need for test either
<RoAkSoAx> kirkland, were you able to seen the messages i left you?
<kirkland> RoAkSoAx: yeah ... so I was thinking about this more
<kirkland> RoAkSoAx: we should talk about your graphical front end
<kirkland> RoAkSoAx: because i think that's going to make it a *lot* easier
<kirkland> RoAkSoAx: basically, I suggest that we change s/lucid/maverick/ in the Lucid code just before Lucid releases
<kirkland> RoAkSoAx: i think the gtk frontend should make the selection of the ubuntu codename *really* easy
<lifeless> kirkland: so powernap
<RoAkSoAx> kirkland, i was planning to merge my code to obtain the codename into the PyGTK but, still, i would have to hardcode the CACHE if we keep how the config file is loaded now
<kirkland> lifeless: yessir
<lifeless> kirkland: is there any reason why it isn't on by default all the time ?
<lifeless> does it have downsides on laptops?
<kirkland> lifeless: i run it on all 5 of my mythfrontends
<kirkland> lifeless: arguably, it duplicates some of the behavior you get (or can control) with gnome-screensaver and the gnome power utilities
<kirkland> lifeless: obviously we don't have those in the Ubuntu Server, so we wrote our own lightweight screensaver/powersaver daemon
<lifeless> kirkland: hmm
<lifeless> I think where i am coming from is this:
<kirkland> lifeless: as it's written right now, powernap simply watches the process table for the presence or absence of a specified list of processes you want to monitor
<lifeless>  - only using power when you have something to do
<lifeless>  - we do things we don't want to do
<lifeless> (such as npviewer.bin never freaking stopping)
<kirkland> lifeless: ie, the processes that are running if your system is "doing work", and if they're not running (and haven't been running for a configurable while), then your server is deemed inactive and can powernap
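kirkland's description of powernap's watch loop can be sketched as a single polling pass. The names below are hypothetical stand-ins, not the real daemon's configuration:

```shell
# one pass of the idea: the machine counts as "busy" if any watched process
# exists; the real daemon loops, and acts only after a configurable grace
# period with no watched process seen
WATCHED="mythbackend mythfrontend"   # hypothetical "doing work" process list
state=idle
for p in $WATCHED; do
    if pgrep -x "$p" >/dev/null 2>&1; then
        state=busy
    fi
done
echo "system is $state"
```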
<lifeless> and pulseaudio not closing the sound card when there isn't sound to play
<kirkland> lifeless: heh
<kirkland> lifeless: right, right ...
 * kirkland thought lifeless was going to come at it from a tcp port usage perspective :-)
<kirkland> (because that's something else that's been on my mind)
<lifeless> kirkland: sadly, my keyboard and mouse don't connect over the network
<lifeless> be nice to get them all PAN'd up.
<kirkland> lifeless: you should enable ssh :-)
<lifeless> oh, it is, it is.
<RoAkSoAx> kirkland, Anyways, I would love to resolve this so that I don't have to worry about it later
<kirkland> lifeless: okay, go on ....
<RoAkSoAx> kirkland, would you prefer me to do it both ways so you can take a look at it and give your thoughts?
<lifeless> no thats about it
<kirkland> lifeless: stand by for one moment
<lifeless> just mulling on how to get less idiotic power wastage in our stack
<kirkland> RoAkSoAx: okay, let's not dwell on calculating the latest release any longer ...
<lifeless> like, epoll timeouts are daft.
<kirkland> RoAkSoAx: establish your cache
<kirkland> RoAkSoAx: and just make sure that the user can update that cache by selecting the release they want to testdrive through your GTK gui
<kirkland> RoAkSoAx: we'll handle the distro-default in the config file just as a parameter
<kirkland> RoAkSoAx: we'll SRU twice a year when sabdfl picks his funny adjective/animal combo
<kirkland> RoAkSoAx: but each user will pointy/clicky the release they want to test
<kirkland> RoAkSoAx: so the flow should look something like:
<kirkland>  a) pick distro (Ubuntu, Debian, etc...) -- you work on Ubuntu, but make a framework so that others can submit patches to let this work with Debian, Fedora, and friends
<kirkland>  b) pick a version (here's your codename list -- lucid, maverick, etc)
<kirkland>  c) pick a flavor (desktop, server, netbook, mythbuntu, etc)
<kirkland>  d) pick an arch (amd64, i386)
<kirkland> RoAkSoAx: kind of wizard like
<RoAkSoAx> kirkland, for each time the user logs in?
<kirkland> RoAkSoAx: and also an Edit -> Preferences menu with the rest of the configurable things in /etc/testdriverc
<RoAkSoAx> I mean launches the app?
<kirkland> RoAkSoAx: only if they choose "wizard"
<kirkland> RoAkSoAx: in the Edit -> Preferences tab the user can set their default $distro, $version, $flavor, $arch
<RoAkSoAx> kirkland, the preferences menu is already in my mind. The wizard is a cool idea, though I was kinda thinking of the interface differently
<kirkland> RoAkSoAx: okay ... what's your idea?
<RoAkSoAx> kirkland, will draw it and send it to you :)
<kirkland> RoAkSoAx: sounds good
<kirkland> RoAkSoAx: let's track it in a wiki or the blueprint
<kirkland> RoAkSoAx: keep as much of this public as possible
<kirkland> RoAkSoAx: i suggest a TestdriveGTK wiki page
<RoAkSoAx> kirkland, but is basically having different tabs like "Tab Ubuntu" "Tab Kubuntu" "Tab Edubuntu" "Tab Mythbuntu", and each tab separate by: "amd64" "i386"
<kirkland> RoAkSoAx: i'd also like to look into getting Testdrive offered as a handler for ISO files, if they're liveCD ISO files
<kirkland> RoAkSoAx: such that if you right click on an ISO in nautilus, one of the options is "Run this ISO in Testdrive"
<RoAkSoAx> kirkland, yeah I'm planning to do that as soon as it's officially announced by GSoC, and I wanna come up with the drawings of the possible interfaces before making it public
<kirkland> RoAkSoAx: alrighty ... think about both
<kirkland> RoAkSoAx: i think the wizard approach should be relatively straight forward
<RoAkSoAx> kirkland, yeah wizard should be easier, but personally I would probably go directly to the main interface because I personally would hate to be using the wizard all the time
<RoAkSoAx> would be simpler to have the interface right there after launching the APP
<kirkland> RoAkSoAx: alrighty
<kirkland> RoAkSoAx: for the images you already have cached, yes, i agree
<kirkland> RoAkSoAx: perhaps the wizard is how you add a new iso to your cache the first time
<kirkland> RoAkSoAx: once you've run it once, it's cached, then it shows up on the default canvas as a button
<kirkland> RoAkSoAx: like a "recently used" document
<RoAkSoAx> kirkland, or maybe the wizard can be used whenever we select the tab "Others"
<kirkland> RoAkSoAx: okay, how about this ... let's run with the "automotive" theme of Testdrive
<kirkland> RoAkSoAx: your default canvas is your "garage"
<kirkland> RoAkSoAx: which shows the "vehicles" you have in your garage
<kirkland> RoAkSoAx: it's empty, initially
<kirkland> RoAkSoAx: when you want to "add a vehicle to your garage for testdriving", you go through the wizard
<RoAkSoAx> kirkland, such as add: Ubuntu, other "vehicle" Kubuntu
<kirkland> RoAkSoAx: once you've gone through the wizard once, and rsync'd an iso, it shows up as a "vehicle" in your "garage"
<RoAkSoAx> and so on
<kirkland> RoAkSoAx: a Vehicle is Ubuntu-Lucid-Desktop-amd64
<kirkland> RoAkSoAx: or Ubuntu-Hardy-Server-i386
<kirkland> RoAkSoAx: or Debian-Sarge-i386
<kirkland> RoAkSoAx: or Kubuntu-amd64
<kirkland> RoAkSoAx: whatever it takes to describe *one unique ISO*
<RoAkSoAx> kirkland, ok, I'll have to think that through, however, my plan is this: Step 1: get testdrive modularized (including those changes I was thinking of). Step 2: Version 1.x, which includes the "simple" testdrive version. Step 3: Version series 2.x: add tabs for other *buntu versions (kubuntu, mythbuntu). Step 4: Version series 3.x could add support for different distributions, and etc etc
<RoAkSoAx> and adding wizard and stuff like that
<RoAkSoAx> cause I don't want to jump into everything from scratch because it would be painful
<RoAkSoAx> if we handle that with versions as I'm proposing, I do think it would be a better idea
<kirkland> RoAkSoAx: okay
<RoAkSoAx> is like "Adding Features"
<RoAkSoAx> per release
<kirkland> RoAkSoAx: okay, we'll need to set some primary/secondary/tertiary goals for the overall project, but i think something like this should work
<kirkland> RoAkSoAx: i hope that the modularization is about 10-15% of the overall work
<RoAkSoAx> kirkland, yeah I mean, we don't have to rush features right now.
<RoAkSoAx> kirkland, in my opinion modularization is around 60%
<kirkland> RoAkSoAx: hmm, then we need to discuss what you mean by modularization
<kirkland> RoAkSoAx: i can split it into a couple of functions in a matter of a few hours
<kirkland> RoAkSoAx: and then we can make it more object oriented
<RoAkSoAx> kirkland, im doing it object oriented right now
<RoAkSoAx> kirkland, lp:~andreserl/testdrive/module
<kirkland> RoAkSoAx: but i don't see that as taking 2 of the 3 months of this project
<RoAkSoAx> kirkland, files "testdrive", "testdrive.py"
<RoAkSoAx> kirkland, I was thinking modularization to be done by the end of may, mid june
<kirkland> RoAkSoAx: how long are you working on the project, overall?
<kirkland> RoAkSoAx: first glance, testdrive.py looks about like I expect it would
<RoAkSoAx> kirkland, well this week I haven't worked more than a couple hours due to schoolwork, but i've worked like 10 hours a week for 2-3 weeks
<RoAkSoAx> kirkland, I'm planning to work on it 20+ hours a week when the program starts
<kirkland> RoAkSoAx: cool
<RoAkSoAx> since right now i'm fed up with schoolwork because I'm in the last weeks of class, so I have papers, presentations, projects due
<RoAkSoAx> finals
<RoAkSoAx> too
<kirkland> RoAkSoAx: okay, so by the time Maverick opens, I'd like to merge your modularized testdrive pretty much en masse
<kirkland> RoAkSoAx: sure thing, i understand ;-)  wasn't too, too long ago I was in college :-P
<kirkland> RoAkSoAx: basically by UDS
<kirkland> RoAkSoAx: speaking of, are you coming to UDS?
<RoAkSoAx> kirkland, hahah yeah I'm sick of college, and yes I would love to have the modularization done by UDS so that we can start working in parallel without affecting each other's work.
<kirkland> RoAkSoAx: great, yeah, so let's set a tentative goal to get the OO code merged by UDS
<RoAkSoAx> kirkland, and as of today I don't know yet since I have to apply for a visa, will send my docs by friday
<kirkland> RoAkSoAx: gotcha
<kirkland> RoAkSoAx: we'll plan on holding a testdrive-gtk session at UDS (which you'll just attend remotely, in the worst case)
<RoAkSoAx> kirkland, yes I was actually thinking about that today, so I would at least want to have a simple interface for testdrive-gtk by the UDS to show
<RoAkSoAx> one that just creates the buttons and is able to launch
<kirkland> RoAkSoAx: hopefully you and I will have hashed out the basic design before UDS, but we'll put that out for discussion at UDS
<kirkland> RoAkSoAx: oh, that's certainly not necessary
<kirkland> RoAkSoAx: screen shots or napkin sketches would be sufficient for UDS
<kirkland> RoAkSoAx: focus on the modularization/OO code before UDS;  draw up your ideas on a napkin and scan it ;-)
<RoAkSoAx> kirkland, i already have an interface that creates the buttons, but haven't added the launch functionality yet...  :)
<kirkland> RoAkSoAx: alrighty
<kirkland> RoAkSoAx: i think we're on the same page
<kirkland> RoAkSoAx: did you already create the blueprint?
<RoAkSoAx> kirkland, yeah, it is really simple though
<kirkland> RoAkSoAx: url?
<kirkland> RoAkSoAx: i'd like to capture a few of these points we just discussed in the whiteboard
<RoAkSoAx> kirkland, https://blueprints.launchpad.net/testdrive/+spec/gsoc-testdrive-modularization
<RoAkSoAx> kirkland, https://blueprints.launchpad.net/testdrive/+spec/gsoc-testdrive-frontend
<kirkland> RoAkSoAx: could you rename those to server-maverick-testdrive-modularization and server-maverick-testdrive-frontend ?
<kirkland> RoAkSoAx: actually, just the last one, server-maverick-testdrive-frontend
<RoAkSoAx> kirkland, what would help me now is if you could take a look at my modularization code so far and give me your thoughts, to see if I'm going down the right path since, to be honest, it's been a while since my last OO programming project
<kirkland> RoAkSoAx: we shouldn't need a UDS session on the modularization
<RoAkSoAx> kirkland, server-maverick or desktop-maverick?
<kirkland> RoAkSoAx: make it server-maverick for now ... it might get switched to the desktop team, but i'm on the server team
<kirkland> RoAkSoAx: and i'm reporting to my manager our GSoC effort
<kirkland> RoAkSoAx: as for your OO, like i said, first glance looks okay;  i haven't run it yet
<kirkland> RoAkSoAx: i'm not a python OO expert either, so it might help to put it out for some peer review at some point
<kirkland> RoAkSoAx: heck, lifeless might even give it a look over at some point :-)
<RoAkSoAx> kirkland, haha ok cool. Anyways, just sent you an image to your @ubuntu email with my idea of the interface
<kirkland> RoAkSoAx: thanks
<kirkland> lifeless: alrighty ... so where were we ....  yeah, so powernap ....
<kirkland> lifeless: i'm hoping for a UDS session with AmitK
<kirkland> lifeless: i'd like to teach powernap how to "do" and "undo" a configurable set of power optimizations when something "happens" or "doesn't happen" as the case may be
<funkyHat> MTecknology: rmlist
<RoAkSoAx> kirkland, btw... registered the blueprint so i guess i'll have to wait for it to be approved. I'll take notes of todays conversation and put them in the whiteboard of the blueprints I already created
<MTecknology> funkyHat: thanks for all of your help :D - This thing is pretty awesome :)
<kirkland> RoAkSoAx: i just updated https://blueprints.edge.launchpad.net/testdrive/+spec/gsoc-testdrive-modularization
<funkyHat> MTecknology: and newlist or http://lists.kalliki.com/mailman/create
<kirkland> RoAkSoAx: added a couple of workitems, and took a note
<funkyHat> MTecknology: no problem :)
<RoAkSoAx> kirkland, awesome. will do
<kirkland> RoAkSoAx: https://blueprints.edge.launchpad.net/testdrive/+spec/server-maverick-testdrive-frontend-gsoc
<kirkland> RoAkSoAx: i renamed it for you
<RoAkSoAx> kirkland, shouldn't that be registered in Ubuntu for the UDS?
<kirkland> RoAkSoAx: i just proposed for the sprint
<kirkland> RoAkSoAx: we should try to get one or two of the Design/User-Experience team in that session!
<RoAkSoAx> kirkland, Ok will do. I'll try to draw a better sketch and try to ping someone there to see what they think
<uvirtbot> New bug: #557773 in php5 (main) "php5-cli fails apt install due to mislabeled libkrb53 dependency" [Undecided,New] https://launchpad.net/bugs/557773
<kirkland> RoAkSoAx: thanks
<RoAkSoAx> kirkland, btw.. if you plan to run the modularized code you need to also copy testdriverc into /etc since I've changed it to be compatible with config parser
<kirkland> RoAkSoAx: still around?
<RoAkSoAx> kirkland, yep
<kirkland> http://people.canonical.com/~kirkland/testdrive-gtk.html
<kirkland> http://people.canonical.com/~kirkland/testdrive-wizard.html
<kirkland> RoAkSoAx: ^
<RoAkSoAx> kirkland, awesome!!
<kirkland> RoAkSoAx: http://people.canonical.com/~kirkland/testdrive-gtk-2.html
<kirkland> RoAkSoAx: one more ^
<kirkland> RoAkSoAx: that table should be sortable by clicking on any of the tabs across the top
<kirkland> RoAkSoAx: might have more fields, like size
<kirkland> RoAkSoAx: whatever
<kirkland> RoAkSoAx: anyway, that's my 10-minute sketch ;-)
<RoAkSoAx> kirkland, that's a good idea. I'll do improvements such as classify them into tabs (as my sketch). However, Im not sure if all that is achievable in 3 months. So, I would guess that we'll first have to define which features we would like to see and then, start implementing from that point
<kirkland> RoAkSoAx: agreed; let's use UDS as the pivot point by which we define and prioritize features
<kirkland> RoAkSoAx: up until UDS, focus on the foundation
<RoAkSoAx> kirkland, I am, though I'm implementing simple pyGTK stuff to give me some ideas of how the modularization should go so it can be used on both sides (command line/PyGTK)
<kirkland> RoAkSoAx: i see you dropped some of my comments from the whiteboard ... would you please establish a wiki page and move them there, if you don't want them in the whiteboard?
<RoAkSoAx> kirkland, did I? probably we edited the whiteboard pretty much at the same time and they got lost since I didn't delete anything :S
<kirkland> RoAkSoAx: http://pastebin.ubuntu.com/410843/
<RoAkSoAx> kirkland, yep I did those changes when your comments weren't there so I guess we just found a bug in LP :)
<kirkland> RoAkSoAx: fair enough ... would you mind grabbing them from that pastebin and chunking them back in, pretty please ;-)
<RoAkSoAx> kirkland, done already :)
<kirkland> RoAkSoAx: rock on
<kirkland> RoAkSoAx: okay, i'm calling it a night
<kirkland> RoAkSoAx: later dude
<RoAkSoAx> kirkland, have a good one
<aaditya> This karmic-server did not come up after reboot. After going through a cumbersome process of bringing a monitor in its range, I see that grub-2 menu has no timer, and hence one must hit the enter key to proceed.
<aaditya> Server was rebooted after 120 days and there were apparently 4 kernel updates in the mean time.
<aaditya> Is there a way to ensure that such problems don't occur on servers?
<aaditya> Searching the forums, it appears to be a common issue.
<lukehasnoname> aaditya,  grub by default not having a timer? That's odd
<aaditya> lukehasnoname: Based on certain forum posts, I realized that it was due to some failure during previous bootup. Still no clue what that problem was.
<aaditya> syslog did not help much and at this time I worked around somehow
<aaditya> I also verified that /etc/default/grub was in perfect shape
<aaditya> This is precisely the issue: http://ubuntuforums.org/showthread.php?t=1283800
<aaditya> grub getting stuck on a server based on one issue is not reasonable in my understanding. It's potentially a huge headache for the sysadmins.
<_ruben> hmm .. i have a (long-running) script with output redirection to a logfile which filled up the disk, deleting the logfile wont free the space since the file is still in use, any ideas?
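One common way out of _ruben's bind (not suggested in the channel, so treat it as an editor's sketch): truncate the open logfile in place instead of unlinking it. The writing process keeps its file descriptor, but the disk blocks are freed. The path below is a stand-in:

```shell
# Demo of in-place truncation: the file stays open and valid for the
# writing process, but its size (and disk usage) drops to zero.
logfile=/tmp/longrunner.log   # stand-in for the real logfile
echo "weeks of output" > "$logfile"
: > "$logfile"                # truncate without unlinking
ls -l "$logfile"              # now 0 bytes
```

If the file was already deleted, `lsof +L1` lists processes still holding unlinked files open; only closing those descriptors (e.g. restarting the script) reclaims that space.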
<Skaag> how do I install a package from lucid, on karmic?
<Skaag> specifically, python-pyinotify
<_ruben> the complex method would be adding lucid repos + apt pinning; the simple one is to just download the debs+deps from your favorite repo
<Skaag> thanks
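A sketch of the pinning route _ruben calls complex (package and release names come from the conversation; the priority value is illustrative). With priority 100 the lucid repo is never used unless asked for explicitly:

```
# /etc/apt/sources.list.d/lucid.list
deb http://archive.ubuntu.com/ubuntu lucid main universe

# /etc/apt/preferences  (keep everything tracking karmic by default)
Package: *
Pin: release n=lucid
Pin-Priority: 100
```

Then `sudo apt-get update && sudo apt-get install -t lucid python-pyinotify` pulls just that package (and its dependencies) from lucid.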
<uvirtbot> New bug: #557890 in samba (main) "transfer lockup connecting to a NetApp/CIFS share" [Undecided,New] https://launchpad.net/bugs/557890
<alvin> aaditya: look here: bug 420077
<uvirtbot> Launchpad bug 420077 in grub2 "grub2 has no timer" [Undecided,Confirmed] https://launchpad.net/bugs/420077
<uvirtbot> New bug: #557924 in php5 (main) "php5-cgi crashed with SIGSEGV in zend_get_constant_ex()" [Undecided,New] https://launchpad.net/bugs/557924
<binBASH> Hi there. I have a question regarding Ubuntu Enterprise Cloud. Which networking mode I need to configure when I have a dedicated subnet of 4 ips for each node in random ip ranges?
<merlijn-> Hi,
<merlijn-> I'm trying to upgrade an Ubuntu 8.04 server to latest 10.04 beta, but do-release-update -d keeps suggesting intrepid for upgrade
<_ruben> merlijn-: is your 8.04 server fully up to date?
<_ruben> merlijn-: and it seems you have it configured to not want to upgrade to lts versions only
<merlijn-> yea I think I fixed it now
<merlijn-> silly typo in /etc/update-manager/release-upgrades
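For reference, the file merlijn- mentions is a small ini-style config; on an LTS system that should follow LTS-to-LTS upgrades, the prompt line needs to read exactly:

```
# /etc/update-manager/release-upgrades
[DEFAULT]
Prompt=lts
```

A typo in that value is consistent with do-release-upgrade offering the next non-LTS release (intrepid, in merlijn-'s case) instead.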
<aaditya> alvin: looks like that's the issue I was dealing with. I've added a comment to the bug.
<aaditya> Thanks!
<uvirtbot> New bug: #558335 in clamav (main) "freshclam log reports warning about libclamav version" [Undecided,New] https://launchpad.net/bugs/558335
<alonswartz> hey folks, is there a resource which lists the latest kernel and ramdisk ids (aki/ari) for the official ubuntu/canonical ec2 images? preferably with their associated versions.
<alonswartz> I compiled a list of what looks like the latest versions for the current LTS (Hardy) http://alonswartz.pastebin.com/H9CSdyUv but it would be useful to know their versions
<alonswartz> updated the list to mark which are "official", going by the ami id's listed on the ec2 starter page
<alonswartz> are these kernels/initrds built against the latest xen modules?
<linuxboy_> I've got a techie installing ubuntu ina remote location. He is failing at the raid setup screen. Can I ssh in and continue it for him?
<alonswartz> the official ec2 hardy amis are 2010-01-28, but the latest xen modules 2.6.24-27 were released on 2010-02-04, does that mean the aki/ari's were compiled against 2.6.24-26 (released on 2009-12-04)?
<kirkland> smoser: howdy
<binBASH> Hi there. I have a question regarding Ubuntu Enterprise Cloud. Which networking mode I need to configure when I have a dedicated subnet of 4 ips for each node in random ip ranges?
<ttx> kirkland: how did your all-night-cloud-image-testing go ?
<kirkland> ttx: not so well
<binBASH> I wish I could start instances in my cloud :p
<kirkland> ttx: only did 9 rounds
<kirkland> ttx: i think i overloaded it
<binBASH> it says I don't have ips
<binBASH> hmm
<ttx> kirkland: right... a better test would be to run the with-ramdisk images in parallel with the no-ramdisk
<kirkland> ttx: i just changed that and started over
<ttx> i don't think the with-ramdisk would succeed any better in such a test tbh
<ttx> binBASH: probably an issue with your range of public IPs
<ttx> binBASH: for the first install of UEC I'd suggest following http://testcases.qa.ubuntu.com/Install/ServerUECTopology1
<binBASH> ttx: Yeah, could be. I wonder how to configure those. My Provider gives me 4 ips / node. All not on the same ip range.
<zul> ttx: ill take the memcached one
<binBASH> if I run euca-describe-availability-zones verbose all looks fine though
<ttx> binBASH: you can list them rather than using a range
<ttx> A,B,C instead of A-D
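In a UEC install this is the VNET_PUBLICIPS setting in /etc/eucalyptus/eucalyptus.conf, which accepts either form; a sketch with illustrative addresses:

```
# explicit list:
VNET_PUBLICIPS="192.0.2.1 192.0.2.2 192.0.2.3"
# instead of a range:
# VNET_PUBLICIPS="192.0.2.1-192.0.2.4"
```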
<ttx> zul: ok
<binBASH> ttx: Yeah, but how will it detect on which node the vm needs to be started?
<ttx> zul: please make sure you can reproduce the issue first, so that you can confirm it's fixed by the patch :)
<ttx> zul: also could you cover http://iso.qa.ubuntu.com/qatracker/test/3891 ?
<alonswartz> update: nevermind, i found what i was looking for in the ubuntu-on-ec2 ppa
<binBASH> ttx: You know the ips are bound to a node, and cannot be moved to another one.
<ttx> binBASH: hm? The nodes from a given cluster should live on the same network
<zul> ttx: sure
<binBASH> ttx: So as workaround I would probably need many clusters? ;)
<ttx> binBASH: you should run on a private network and properly NAT
<ttx> so that you don't have this "4 IP per node" limit
<binBASH> ttx: NAT would not be fine in this case, there is a traffic limit of 2 tb / node.
<binBASH> ttx: What I need would be, instances with ip 1.1.1.1-1.1.1.3 can be run on node A, 2.2.2.2-2.2.2.3 can run on node B etc.
<binBASH> :)
<binBASH> dunno if this is possible however
<smoser> kirkland, here. whats up?
<kirkland> smoser: wanted you to check the log i sent you
<kirkland> smoser: see if you see any aberrations
<kirkland> glitches in the matrix ;-)
<smoser> kirkland, from last night ?
<smoser> the one from last night looks reasonable.
<kirkland> smoser: yeah
<kirkland> smoser: my overnight run aborted
<kirkland> smoser: i tried to run 8 instances every time
<kirkland> smoser: and i don't think my cloud could handle that
<kirkland> smoser: within your logic anyway
<smoser> ttx, i didn't bother trying beta 1 without ramdisk as i know it would fail.  i didn't bother trying with ramdisk for current because i have no failures related to content of the image (only to the hypervisor)
<smoser> kirkland, can you put that log out somewhere ?
<ttx> smoser: we believe that those errors that Dustin and I hit are not linked to the with/noramdisk, but rather to the bad performance of our laptop-based cloud... but having some hard evidence that noramdisk doesn't perform worse than withramdisk would help :)
<smoser> i'm just interested in seeing it.
<smoser> it seems to me that eucalyptus will return success from euca-run-instances before it really has space, and when it does, gets into bad limbo
<smoser> ttx, i believe we've seen zero failures that we can at all relate to the ramdisk-less image
<smoser> so, there is no way that with-ramdisk image could perform better
<ttx> smoser: ok
<smoser> the only thing i've seen as a failure in the image is the metadata service was busted once for me
<kirkland> smoser: i've overwritten that log, but it's in my byobu scrollback
<kirkland> smoser: i'll copy and paste
<smoser> kirkland, no worries.
<smoser> i do think that the test case is legit though, and shows general failure of euca under load
<kirkland> smoser: http://pastebin.ubuntu.com/411023/
<kirkland> smoser: but i agree with ttx, i don't think this is ramdisk related
<ttx> smoser: well, general failure of eucalyptus on underpowered hardware, at least
<ttx> kirkland: do you like my blog meme idea ?
<kirkland> ttx: sure, go for it
<kirkland> jiboumans: roaksoax has put together a couple of blueprints for the GSoC testdrive project
<kirkland> jiboumans: should we label those server-maverick-testdrive* and set you to the approver?
<jiboumans> kirkland: sounds like a great idea
<jiboumans> feel free to add -gsoc- to that title if you want to make it a bit more prolific
<kirkland> jiboumans: alrighty ... just be warned, they're going to look rather desktopy (putting a GTK frontend on a command line tool)
<kirkland> jiboumans: right, already includes gsoc too
<kirkland> jiboumans: https://blueprints.edge.launchpad.net/testdrive/+spec/server-maverick-testdrive-frontend-gsoc
<smoser> kirkland, see line 194 of that pastebin.  that to me is an error in euca
<smoser> as euca-describe-instances showed 'running' for an instance that had 0.0.0.0 as its IP
<kirkland> smoser: no, it didn't ... i manually terminated that one
<kirkland> smoser: i looked at your code and didn't see any way to exit out of wait_for_running()
<smoser> i think it times out at 600 seconds
<ttx> maybe "terminated" isn't terminated enough
<smoser> i'm wrong. it never does
<smoser> kirkland, you were right.
<kirkland> smoser: i only see one break in your while loop in wait_for_running()
<kirkland> ttx: i like what smoser has here, though ...
<kirkland> ttx: i think you/me/mathiaz/smoser should run some over night stress tests for the next few weeks
<kirkland> ttx: firing up random numbers of images, different machine sizes, and different images in combination
<ttx> smoser: your script doesn't wait for termination ?
<kirkland> ttx: my cloud goes into powersave mode most nights
<kirkland> ttx: as i'd expect most of ours should
<kirkland> ttx: for the next few weeks, i suggest that we run some random stuff overnight
<smoser> ttx, it does not check state of instances to make sure they're terminated after terminate-instances
<ttx> smoser: it can take some time to go from shutting-down to terminated, in my experience
<smoser> but it does/did try 4 times with 10 second sleeps in between on the 'start'
<smoser> so if a start returns failure it will just sleep and try again
<kirkland> ttx: btw, i'm doing a UEC demo at the Texas Linux Fest this saturday
<kirkland> ttx: do you have slides or the video from your similar presentation?
<ttx> kirkland: right, you wanted to get my slides
<smoser> (obviously that 40 second total should probably be increased)
 * ttx uploads
<kirkland> smoser: i just set rounds to 1,000,000, and I'm just going to let it run until i need my cloud and ctrl-c it
<kirkland> smoser: it's tee'ing to a logfile
<ttx> kirkland: http://one.ubuntu.com/p/Vy/
<smoser> you should probably redirect stderr to log also
 * ttx tries ubuntuone magic
<kirkland> smoser: what are the indications of problems?  error?  failure?  what should i grep for?
<smoser> oooh, look at ttx with his ubuntu one url. fancy
<smoser> "failed reach" is the real failure
<kirkland> ttx: thanks
<kirkland> ttx: i'm going to do a dry run practice today/tomorrow
<ttx> it was already on ubuntuone, so I figured I'd try the "publish with ubuntuone" feature
<macno> there is a problem with phpldapadmin on lucid, is this the right place to report it?
<ttx> working as designed :)
<smoser> kirkland, ttx we should probably dump a bit more into that script to make it more reliable while not necessarily more forgiving.
<smoser> a.) wait for terminate
<zul> macno: no please report it on launchpad
<smoser> b.) wait longer for try-to-start
<ttx> kirkland: are you going to have internet access ?
<smoser> c.) maybe put warnings/errors somewhere other than stdout
<smoser> d.) put a limit on wait-for-run
<ttx> kirkland: the ukuug presentation did not have that access, and the installer went into strange network timeout quirks
 * ttx stares at the oracle logo in the openoffice splash screen...
<kirkland> ttx: i'm not planning on having internet access
<kirkland> ttx: i'm planning on bring 2 of the dell 1200's, plus the linksys router
<kirkland> ttx: the router would connect and network the 2 laptops
<kirkland> ttx: if there is internet access, i can use the router to bridge to the wireless
<kirkland> ttx: if not, no big deal, i'm not going to fight with it
<kirkland> ttx: i'm also bringing my laptop, and i was planning on bringing a webcam too, that I could point at the other two machines while they install
<kirkland> ttx: and 2 usb keys
<kirkland> ttx: i'll do the install from usb key
<kirkland> ttx: rather than show my preseed magic
<ttx> kirkland: just test beforehand how the installer reacts to network-but-no-internet situation... I had to press Cancel at a few file downloads at critical times during the install demo
<kirkland> ttx: right, the apt one?
<ttx> kirkland: yes, doing it from the USB key tells better how easy it is for everyone to try out
<ttx> kirkland: yes. Also the timeserver takes a long time to timeout
<kirkland> ttx: definitely
<binBASH> someone knows if it's possible to configure dhcpd so it provides an ip range based on mac address?
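binBASH's question goes unanswered in the channel; isc dhcpd can do this with a class that matches on a hardware-address prefix, plus per-class pools. An untested editor's sketch for dhcpd.conf, all addresses illustrative:

```
# clients whose MAC starts with 00:1a:2b
class "loggers" {
    # "hardware" is type byte + MAC, so bytes 1-3 are the first 3 octets
    match if substring(hardware, 1, 3) = 00:1a:2b;
}
subnet 10.0.0.0 netmask 255.255.255.0 {
    pool {
        allow members of "loggers";
        range 10.0.0.100 10.0.0.120;
    }
    pool {
        deny members of "loggers";
        range 10.0.0.50 10.0.0.90;
    }
}
```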
<masu3701>  i have an old pc that i want to use as a server.... it's an AMD Athlon running at 1.73 GHz and 512 MB ram
<masu3701> the hd is 30 gb tho
<incorrect> oh no, mercurial 1.5 didn't make it into lucid
<incorrect> sigh
<pmatulis> masu3701: thank you for that info
<masu3701> pmatulis: ?
<ScottK> ttx: I'm doing the i386 server upgrade right now.
<ttx> ScottK: ok. zul is running it as well
<ScottK> It's the server my IRC runs through, so if I disappear and don't come back, you'll know it didn't go well.
<zul> ttx: upgrade passed
<uvirtbot> New bug: #558427 in eucalyptus (main) "UEC should be a superset of the Ubuntu Server" [High,In progress] https://launchpad.net/bugs/558427
<ScottK> ttx: Mine passed too.
<ttx> one test to go before 100% coverage !
<binBASH> ttx: I tried like described in the link posted by you to testcase UEC. The instance keeps pending and won't change to run state :/
<hggdh> smoser is the rig available?
<zul> ttx: can you add this one to your list https://bugs.edge.launchpad.net/ubuntu/+source/apache2/+bug/392759
<uvirtbot> Launchpad bug 392759 in apache2 "[FFE] apache2 DoS attack using slowloris" [Unknown,Fix released]
<ttx> zul: ack
<zul> ttx: thanks
<ScottK> Only 5 bugs filed.  Not a bad upgrade.
<smoser> hggdh, have at it.
<RoAkSoAx> ttx,  heya
<hggdh> smoser: thanks
<hggdh> ttx, what about bug 557110 ?
<uvirtbot> Launchpad bug 557110 in mysql-cluster-7.0 "Dependency mismatch for mysql-cluster-*" [Undecided,Confirmed] https://launchpad.net/bugs/557110
<ttx> hggdh: yes, that's a good one
<dassouki> i want to create a rule to delete all files in a folder if they've been created moer than an hour ago
<Pici> dassouki: A rule?
<dassouki> Pici: or a shell script
<Pici> dassouki: find can do this, look the -cmin/-mmin tests in the manpage
<dassouki> Pici: i was hoping for an automagic solution
<zul> ttx: i was able to reproduce the memcached thingy
<Pici> dassouki: run the script in a cron job then.  I have similar scripts running here for cleaning up old logfiles (I should use logrotate, but whatever).
<dassouki> Pici: that's what i want to use it for, a data logger downloads a file, however the file is cumulative
<dassouki> i just want to delete all the older files
<Pici> dassouki: find has arguments that can delete files, or you can use -exec if you want to do something else with them.
<dassouki> no just delete them
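Pici's find suggestion as a concrete command plus cron line (the directory is hypothetical): -mmin +60 matches files last modified more than 60 minutes ago, and -delete removes them.

```shell
dir=/tmp/logger-data        # hypothetical download directory
mkdir -p "$dir"
# delete regular files last modified more than an hour ago,
# without descending into subdirectories:
find "$dir" -maxdepth 1 -type f -mmin +60 -delete
```

For the "automagic" part, a crontab entry runs it every 10 minutes: `*/10 * * * * find /path/to/dir -maxdepth 1 -type f -mmin +60 -delete`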
<venmx> hi, im trying to setup ubuntu to deploy over the network with kickstart, but im getting "bad archive error", can't find anything useful on google. can anyone help? my problem is i don't know what part of the ftp to mirror and which part to point the "url" directive in the kickstart.cfg file to? do i need to mirror the /ubuntu/pool directory and contents too to get the packages?
<ttx> kirkland: did you exercise recently the web UI / create users / email notifications / imagestore image registration in UEC ?
<uvirtbot> New bug: #558476 in mysql-dfsg-5.1 (main) "mysql-server-5.1 failed to install/upgrade: installed post-installation script subprocess returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/558476
<ttx> hggdh: how are the beta2 candidate functional tests on the testrig going ?
<mdz> smoser, around?
<smoser> here
<mdz> smoser, hi. quick question on EC2 AMIS
<mdz> how many daily builds do we keep around, or have we not started expiring them yet?
<smoser> i'm fairly certain the number is 5
<smoser> but for you, you may be seeing 90 days worth
<smoser> is that your question ?
<mdz> smoser, I'm trying to explain the data I see on http://thecloudmarket.com/stats#/by_image_owner
<mdz> where it looks like something significant changed on 1 March :-)
<smoser> ugh. it appears flash isn't working properly for me...
<smoser> ok. so 2 things explain this
<smoser> 1.) right around there we started publishing ebs amis
<smoser> 2.) until sometime last week I had no code to privatize/delete ebs amis
<kirkland> ttx: very lightly, only
<kirkland> ttx: ie, not comprehensively
<ttx> kirkland: yes, same for me... I'd suggest to spend a few test cycles on that to make sure we are ok there
<ttx> (or find someone that did)
<ttx> kirkland: generally I think we are in good shape as far as "installation / running an instance" goes... and we migth have not exercised peripheral features so much
<ttx> kirkland: your imagestore question made me think about the rest of the UI things
<kirkland> ttx: yeah, it really hasn't been touched
<kirkland> ttx: which is perhaps a good thing
<ttx> hggdh: around ?
<smoser> mdz, ^^ i just verified the ebs removal code is functional.  I would expect that number of images to settle down to about twice the march 1 level
<binBASH> ttx: The instances are running now. It just turned out it takes some time until they are online :p
<mdz> smoser, thanks for the explanation
<ttx> binBASH: oh yes :)
<binBASH> I didn't expect it can take 5 mins
<ttx> binBASH: the subsequent ones go faster
<ttx> binBASH: but that's about what it takes on relatively slow disks
 * ttx remembers the time when it took 20 min
<binBASH> I think it's the lan which causes bottleneck
<binBASH> only 100 Mbit
<binBASH> ;)
<venmx> hi... what is the "pool" directory for in the ubuntu rsync/ftp/http archives?
<ttx> binBASH: arh, upgrade ;)
<binBASH> not possible
<binBASH> :/
<roy_> How can i remove pear package from server, anyone know this ?
<ttx> kirkland: remember when a UEC full test run would take half a day ? funny times :)
<binBASH> ttx: I can't select instance type on rightscale though
<kirkland> ttx: heh
<ttx> binBASH: could you elaborate on that ?
<binBASH> ttx: It always launches the smallest instance, can't choose the others.
<ttx> kirkland: please see with hggdh on the beta2 candidate functional tests results -- and signoff on them if you're satisfied by the results (by marking the work item done)
<kirkland> hggdh: howdy, your b2 tests are done?
<ttx> binBASH: not sure that's a bug on our side...
<binBASH> on console I can launch others ;)
<hggdh> kirkland: going thru topo2 now
<binBASH> think it's a rightscale bug
<venmx> does anyone have experience with ubuntu netboot installation using a local archive?
<hggdh> kirkland: after I am done with it, we can close the tests
<kirkland> hggdh: cool, poke me as soon as you're done
<hggdh> kirkland: ack
<ttx> hggdh: glad to see you're alive :)
<ttx> kirkland: your pings work better than mine.
<binBASH> ttx: I wonder as well how to get vnc viewer enabled. :)
<hggdh> ttx: heh
<kirkland> ttx: i ping harder
<binBASH> so I can connect to vms when there is no ip
<hggdh> weechat is not ringing when I am pinged, so -- if I am busy on another workspace -- I will not know about it
 * ttx hugs the new xchat green indicator
<hggdh> :-)
 * hggdh kicks (lovingly) the bloody weechat GIT
 * ttx drinks the traditional release day cognac
<AntORG> plex
<AntORG> wrong tab
<hggdh> kirkland: on topo2, at the rig -- no instances succeed
<kirkland> hggdh: hrm?
<kirkland> hggdh: bug?
<kirkland> hggdh: something else?
<hggdh> kirkland: not sure, actually success rate is at about .05 right now, stressing
<hggdh> kirkland: 200 instances
<hggdh> I see a lot of ssh timeouts
<kirkland> hggdh: hrm, okay, let's get on top of this ...
<kirkland> hggdh: anything different between this and topo1 (besides the topology)?
<hggdh> kirkland: and apart from the users, not, all the same
<zul> ttx: ping one more to add to your list https://bugs.edge.launchpad.net/ubuntu/+source/autofs5/+bug/533029
<uvirtbot> Launchpad bug 533029 in autofs5 "autofs5-ldap doesn't work immediately after bootup" [High,Triaged]
<kirkland> mathiaz: around?
<mathiaz> kirkland: not for a long time
<mathiaz> kirkland: I'll be back later today though
<kirkland> mathiaz: looks like hggdh is having no success with topo2 right now
<kirkland> hggdh: can you pastebin your results log?
<mathiaz> kirkland: is it the installation that fails or the test run?
<kirkland> mathiaz: test runs
<kirkland> mathiaz: i think ... hggdh ?
<kirkland> hggdh: btw, how do you pronounce hggdh ?
<kirkland> "hugduh?
<hggdh> kirkland: http://pastebin.ubuntu.com/411159/
<hggdh> kirkland: 'haggadah'
<hggdh> hugduh is cool, though
<kirkland> hggdh: :-D
<kirkland> hggdh: okay, so need a bit more than that to determine the cause ....
<hggdh> I 'spose, yes ;-)
<hggdh> kirkland: want me to upload the log to tamarind?
<kirkland> hggdh: sure, put it somewhere i can see
<hggdh> kirkland: /home/cerdea/multi_test.log.2010-04-08_125614
<uvirtbot> New bug: #558598 in clamav (main) "[dapper] clamav-milter template parse error" [Undecided,New] https://launchpad.net/bugs/558598
<smoser> hggdh, what is that pastebin from ?
<smoser> ie, what produced it
<kirkland> smoser: mathiaz's test framework
<smoser> that is fancy schmancy. i'd not seen it. where is it ?
<smoser> source ?
<kirkland> hggdh: what kind of image do you have registered?
<kirkland> smoser: really?  i was kinda wondering why you wrote your own script from scratch :-)
<kirkland> smoser: thought you wanted to narrow your testing focus
<smoser> i try to pay as little attention as possible
<kirkland> smoser: bzr+ssh://bazaar.launchpad.net/~mathiaz/%2Bjunk/uec-testing-scripts/
<hggdh> kirkland: uec 20100406.1
<smoser> and fully subscribe to the teachings of NIH
<smoser> :)
<kirkland> smoser: is 20100406.1 the known, good image?
<hggdh> smoser this is the output from mathiaz scripts
<smoser> 20100407.1
<smoser> is what will be beta
<smoser> 2
<hggdh> smoser: ooooh...
<hggdh> I will delete the image, and add this one
<kirkland> hggdh: yeah, smoser made it ramdiskless ... we should rerun with that
<kirkland> hggdh: cool
<smoser> it should make no difference
<kirkland> hggdh: let me know when you start getting results
<smoser> although there might be some package differences between those 2
<hggdh> kirkland: roger
<kirkland> hggdh: but i'm not sure that's the problem
<hggdh> it does not sound like it...
<smoser> fwiw, 20100407.1 is what is listed at http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/all
<hggdh> well, when I started, it was not there
<smoser> which should generally be "the ultimate source"
<smoser> fair
<kirkland> smoser: can you grab that log from tamarind?
<kirkland> smoser: looks like it's ssh timing out
<kirkland> DEBUG:INSTANCE i-43B4084B:Test output: ssh: connect to host 10.55.55.114 port 22: Connection timed out^M
<kirkland> smoser: instance marked running
<smoser> what does console show ?
<kirkland> hggdh: do you have any of these failed instances still running?
<kirkland> hggdh: when you get one into the failed state, can you pastebin the output of euca-get-console $INSTANCE_ID ?
<hggdh> kirkland: no, all instances were terminated
<kirkland> hggdh: okay, thanks
<hggdh> smoser: er, http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/all still lists 20100406.1
<smoser> Ubuntu Server UEC i386 (20100407.1)
<smoser> Ubuntu Server UEC amd64 (20100407.1)
<hggdh> smoser: I stand corrected
<hggdh> k downloading the new image now
<ttx> kirkland: do we have any known good point with that topology on the testrig ? I.e. post-beta1 tests that would show success ? Or is it the first time it's run ?
<hggdh> ok. first run (single), 10 instances, 100% success
<hggdh> ttx: all my tests there on beta1 were successfull
<smoser> hggdh, fwiw, it's an identical package list between 20100406.1 and 20100407.1.  The only difference should be the lack of ramdisk.
<smoser> (I just compared the manifest.txt in those)
<kirkland> ttx: i've run this before
<kirkland> ttx: i suspect that something's wrong with hggdh's user setup
<kirkland> ttx: he's testing multiple users
<hggdh> must have been, somewhere
<ttx> hggdh: ok, if you reproduce the issue, try to minimize the changes between the two (for example by running the same cloud image as in beta1). After all you're validating UEC more than the cloud image here
<kirkland> hggdh: what's the permissions on their private keys?
<hggdh> kirkland: 400, all of them
<kirkland> hggdh: hmm, okay ...
<ttx> kirkland: re: "something wrong in hggdh's user setup": something that would have changed since beta1 ?
<kirkland> hggdh: okay, try with the new image, and lets see what happens
<kirkland> ttx: something that's different between how hggdh set up his multiple users, and how I set up my multiple users
<kirkland> ttx: and how mathiaz set up his multiple users
<ttx> kirkland: if we are to compare results, better use the same technique
<ttx> I thought that would be consistent across runs
<smoser> we should modify the test suite to grab console output on failure at least, and likely in all cases. it's just useful info to have.
<hggdh> interesting. I have a running instance i-404B07EF, and euca-get-console-output hangs
<kirkland> ttx: right, this is back to the issue mathiaz and I have brought up many times ... there is no Eucalyptus API for automatically creating users
<hggdh> smoser: I agree
<kirkland> ttx: so the only options right now is for each person to do it manually, each test run
<kirkland> ttx: or save and replace the database
<ttx> kirkland: understood, thanks for the precision
<smoser> hggdh, hangs ?
<kirkland> ttx: we can do the latter (mathias does from time to time)
<smoser> it does sometime take quite a while
<smoser> my experience is at least 6 seconds in most cases.
<kirkland> ttx: but that doesn't exercise the bits that create/install/update the database
<hggdh> indeed. I actually did it wrong this morning -- did not remove the old user data, and failed on the first test
<hggdh> smoser: 408 timeout
<smoser> oh. wow. on euca-get-console-output ?
<ttx> hggdh: you nuked it pretty badly :)
<Petfrogg> hello
<hggdh> ttx: I am good at it :-)
<hggdh> this is it. I am trashing the current setup and starting from scratch
<Petfrogg> i am using an ubuntu machine as the gateway and firewall. Inside i got a couple machines i would like to get directed to if i type in my web browser on any machine inside the network. "http://netpet" -> 10.0.0.60. How do i fix that? Should i do it in the firewall?
<ttx> hggdh: you have The Gift of QA !
<Petfrogg> or should i somehow fix it in the hdcp?
<Petfrogg> dhcpd
<Petfrogg> or is it a DNS issue?
<xperia> hello to all. i have on my ubuntu server the mailserver postfix, which is able to receive and send mails. on the ubuntu server this works great, but now i need some way to fetch these mails from the mailserver with thunderbird on my laptop. do i need some special configuration for fetching mails with thunderbird ?
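xperia's question also gets no answer here; postfix only speaks SMTP, so fetching mail into Thunderbird needs a POP3/IMAP daemon alongside it. A common pairing on Ubuntu is Dovecot; an untested editor's sketch, assuming Maildir delivery:

```
# on the server: sudo apt-get install dovecot-imapd
# /etc/postfix/main.cf: deliver local mail to Maildir
home_mailbox = Maildir/
# /etc/dovecot/dovecot.conf: serve the same Maildir over IMAP
mail_location = maildir:~/Maildir
```

Thunderbird then connects as an IMAP account against the server, authenticating with the system user's credentials.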
<hggdh> kirkland: just to be sure, to get up multiple users: (1) get to the web, and create them;(2) for each user: login, download creds; (3) move them all to the CLC
<kirkland> hggdh: right
<hggdh> kirkland: (4) unzip each bloody cred into the correct place
<kirkland> hggdh: right
<hggdh> dammit
<kirkland> hggdh: ensure perms are right
<hggdh> kirkland: it seems they are already unzipped correctly
<hggdh> but if they were wrong (perms) I would be unable to use them
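hggdh's step (4) plus kirkland's perms check can be scripted; an editor's sketch (the directory layout is hypothetical, not the rig's): 700 on each credential directory and 600 on every file in it, the same strictness sshd applies to ~/.ssh.

```shell
# Tighten one user's credential directory after unzipping
# euca2-<user>-x509.zip into it.
fix_cred_perms() {
    dir=$1
    chmod 700 "$dir"
    find "$dir" -type f -exec chmod 600 {} +
}
# usage: fix_cred_perms ~/creds/user1
```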
<hggdh> and, just to be sure: current beta is 20100406.1 (server) and 20100407.1 (uec), correct?
<ttx> jdstrand: the change you propose is actually committed as http://github.com/memcached/memcached/commit/d9cd01ede97f4145af9781d448c62a3318952719
 * ttx disappears again
<jdstrand> ttx: holy cow, it was completely untested
<jdstrand> ttx: I hope they tested it
<jdstrand> :)
<ttx> jdstrand: theirs is slightly different :)
<jdstrand> yeah, but still, I wasn't sure of the implications of '4' vs '5', etc, etc,
<jdstrand> ultimately, it is on them :)
<fine_line> join ubuntuforums
<kirkland> hggdh: yes, that looks correct
<kirkland> hggdh: any new results from this round?
<alienseer23> I had to copy over /var/lib/mysql from another server due to a hdd failure, can someone help me with the permissions/ownership of these files and folders?
<alienseer23> is 755 mysql:mysql good for everything in that directory?
<hggdh> kirkland: installation takes quite some time...
<kirkland> hggdh: oh?  i thought you were just going to re-register the new image?
<hggdh> kirkland: no, I had already done that, and it was still hosed
<Hypnoz> this script is in /etc/init.d/networking, but it doesn't seem to actually detect these things (like nfs)
<Hypnoz> http://paste.ubuntu.com/411198/
<Hypnoz> anyone else want to test out that sed | grep line to see if it finds their nfs mount?
<hggdh> kirkland: I deceided to start from scratch
<hggdh> kirkland: right now installing the SC, and then I can install both NCs at the same time
<kirkland> hggdh: cool
<sherr> Hypnoz: works for me (karmic)
<sherr> NFS mount
<Hypnoz> ah you know what, i wasn't removing the -q from grep
 * Hypnoz is dump
 * Hypnoz is also dumb
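The gotcha Hypnoz hit: grep -q never prints matches, it only sets the exit status, so a pipeline under test looks silent either way. A minimal illustration with a fake mounts file (not the real init-script sed line):

```shell
printf 'srv:/export /mnt/data nfs rw 0 0\n' > /tmp/mounts.sample
# -q: exit status only, no output -- fine for scripts:
if grep -q ' nfs ' /tmp/mounts.sample; then
    echo "nfs mount detected"
fi
# drop -q while debugging to actually see what matched:
grep ' nfs ' /tmp/mounts.sample
```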
<RoAkSoAx> kirkland, howdy!! I was thinking... why would we need to select a codename that is not ubuntu+1?? Since I do not think that someone would like to TestDrive a stable release
<Hypnoz> man ubuntu networking is flaky
<Hypnoz> this is the command /etc/init.d/networking uses to bring down all interfaces...  ifdown -a --exclude=lo
<Hypnoz> so you'd think that would work right...
<Hypnoz> hmm seems to be an issue with ifdown being able to see interfaces used in a bonded interface
<kirkland> RoAkSoAx: i think they might
<kirkland> RoAkSoAx: i think i might
<kirkland> RoAkSoAx: i use that to check for regressions
<kirkland> RoAkSoAx: i think we should default to the development version
<RoAkSoAx> kirkland, and "add" any other available release!
<kirkland> RoAkSoAx: but maybe someone is running Lucid Ubuntu and wants to try out Lucid Kubuntu
<RoAkSoAx> kirkland, right, I didn't think about that but I see your point
<jaypur> hi i'm using scp but it's not working, it says there is no such directory, but it exists, can someone help me?
<jaypur> i've already moved a file once but now it's not going
<jaypur> no one?
<jaypur> =/
<jaypur> i got it now
<jaypur> bye
<hggdh> kirkland: does this ring any bells? http://pastebin.ubuntu.com/411223/
<kirkland> hggdh: nope ... smoser ^ ?
<smoser> how did you do that color . yuck.
<smoser> doesn't ring a bell, but that %s looks wrong :)
<hggdh> heh. /me still going full throttle
<smoser> hggdh, can you reproduce that ?
<hggdh> smoser: anytime :-)
<hggdh> smoser: on the current rig install, I mean
<smoser> and can you add some --i dont know why you wouldn't get output from the failed uec-publish-image
<smoser> tell me where this is, and i'll take a look.
<hggdh> smoser: at cempedak
<smoser> stop making it so easy for me to help!
<smoser> i'm trying to avoid you :)
<hggdh> I am already creating quite a reputation...
 * hggdh right now is in need of a bit of strong beers
<hggdh> s/bit/lot/
<RoAkSoAx> hggdh, i wish i could say that :(
<hggdh> RoAkSoAx: I wish I could *do* it
<RoAkSoAx> hggdh, me too but i can't drink alcohol for the next 3 months at least
<hggdh> RoAkSoAx: oh, this is different. I *can*, just will not. Liver is still good, methinks
<RoAkSoAx> hggdh, that's why I said I wish i could... cause I cannot even think about it (because I cant even drink it :()
<hggdh> RoAkSoAx: think positively. One 3 months (at least)
<hggdh> but it still sucks
<RoAkSoAx> hggdh, it does!! been 3 months already without being able to drink
<smoser> hggdh, ok. so heres the problem/solution
<smoser> Image: Failed to find service dispatcher for component=walrusfailed to upload kernel
<smoser> is the error you get if you allow the euca-upload-bundle error to get through
<smoser> but uec-publish-tarball/uec-publish-image were capturing it and not letting it out on error.
<smoser> the reason they weren't letting it out is because the error is going to stdout
<hggdh> smoser: walrus is offline?
<smoser> if it were going to stderr it would get through
<smoser> i dont know about that.
<hggdh> weird. It is, now
<hggdh> smoser: want a bug on stdout/stderr?
<smoser> well the bug on stderr is in euca2ools.. you can open it if you'd like. i'd say its low priority.
<hggdh> will do, later. Right now I will dig in and find out why the walrus did not get the credentials
<hggdh> ah, got it
<smoser> hggdh, well, i just pushed a fix/workaround in euc-publish-tarball for the stdout issue.
<smoser> it *somewhat* makes sense to send stdout from underlying tools through to the user on failure
<smoser> but i would have preferred (and expected) the error messages to stderr
<hggdh> I agree, but at least we will get the error messages
#ubuntu-server 2010-04-09
<kirkland> hggdh: hi, still around?
<hggdh> kirkland: still. Running tests now. I added a euca-get-console-output to the script
<kirkland> hggdh: cool, i'm still around
<hggdh> kirkland: OK. on a distributed env it does not seem that I can ssh into an instance from the CLC
<hggdh> kirkland: I have to get my wife, will be back in 40
<hggdh> kirkland: I think there is a bad route somewhere. I hope... the instance is not accessible from the CLC
<hggdh> brb
<RoAkSoAx> kirkland, i jsut had an idea... Desktop app for manpages. I don't think that exists, does it?
<storrgie> hello anyone avail?
<storrgie> the version of qemu-kvm and libvirt and virt-manager is REALLY far behind... is there a good ppa?
<hggdh> kirkland: ping
<kirkland> hggdh: just passing through
<kirkland> hggdh: what's up?
<PC_Nerd101> Hi, when I'm using apt-cacher do I have to change the lists on all machines or just the caching server when I'm changing the mirror I update from ?
<twb> What's the format of an apt-cacher client's sources.list?
<twb> Just one line will do
<RoAkSoAx> kirkland, still around?
<PC_Nerd101> um - jsut let me check
<PC_Nerd101> http://paste.ubuntu.com/411430/ - this is my current source.list for a client to the apt-cacher
<twb> I see no reference to apt-cacher there
<PC_Nerd101> oh hang on....
<PC_Nerd101> yes - but I have a line in /etc/apt/apt.conf.d/01proxy that redirects traffic to my cache....:  Acquire::http::Proxy "http://192.168.1.2:3142";     - sry - I havent worked with these machines in a while ...  just remembering how I originally configured them
<twb> OK, if that's how apt-cacher works, then you shouldn't need to change the client.
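The client-side piece PC_Nerd101 quotes is a one-line apt.conf snippet. A runnable sketch (written to /tmp here so no root is needed; on a real client it lives at /etc/apt/apt.conf.d/01proxy, and in this proxy mode changing the upstream mirror means editing each client's sources.list, since the cacher just caches whatever the clients request):

```shell
# Recreate the proxy snippet from the discussion (cacher address as quoted).
printf 'Acquire::http::Proxy "http://192.168.1.2:3142";\n' > /tmp/01proxy
cat /tmp/01proxy
```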
<PC_Nerd101> so then how would I specify the new mirror in apt-cacher?
<twb> One of them works by prepending the cacher URL to the sources.list entries; and THAT needs the clients to be updated to use a different upstream mirror.
<twb> I thought that was apt-cacher.
<PC_Nerd101> I dont know - but there's a lot of results on google about setting up with apt-cacher and apt-mirror, so that might be related to what your thinking.
<PC_Nerd101> I need to change these mirrors to my ISP who hosts an unmetered mirror... and seeing as I have abotu 6 ubuntu machines in teh house its nice to cut down 5/6'ths of my updates.
<twb> There are like twenty different partial mirroring systems
<PC_Nerd101> I just run on apt-cacher because it seemed hte most common across installations similar to my one.
<twb> If you have unmetered access, I suggest you just run debmirror and create a local copy of your (release, arch) tuple.
<twb> debmirror has Just Worked for me, whereas apt-cacher and apt-proxy were nothing but flaky
<PC_Nerd101> I dont particularly want to host my own mirror, it seems a bit of overkill... and I dont particularly want to download the entire mirror for my 2x distro's..  I just want to cahce the 20 or so regular packages I use on the servers.
<twb> It's not as big a deal as you seem to think.
<PC_Nerd101> hmm - well for now I jsut want to make a config change rather than an installation change regarding how I run the server's
<twb> IIRC (hardy, i386, main, no-source) is only like 4GiB
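A hedged example of the debmirror invocation twb is suggesting, for the (hardy, i386, main, no-source) tuple he mentions; the mirror host and target path are assumptions, and the first run needs network access and substantial time:

```shell
# Mirror one (release, arch) tuple to a local directory.
debmirror /srv/mirror/ubuntu \
  --host=archive.ubuntu.com --root=ubuntu --method=http \
  --dist=hardy --section=main --arch=i386 \
  --nosource --progress
```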
<twb> PC_Nerd101: I've told you what I know about apt-cacher.  I can't help you with it any more than  that.
<PC_Nerd101> thats kewl - but I might stick with searching around for a solution rather than installing a mirror...  but thanks for your advice (I will probably follow you up when I start using netboot and an image).
<uvirtbot> New bug: #528816 in puppet "Unit tests error: test_modifyingfile: undefined method `alias='" [Medium,In progress] https://launchpad.net/bugs/528816
<darkk^> Can anyone comment on KVM stability in 8.04 release? I have one hardy-based server that I'm going to use as host and I'm choosing between up-to-date virtualbox backported to hardy and kvm/hardy.
<lifeless> grab lucid
<lifeless> nearly released, and like hardy it is an LTS.
<darkk^> I don't think it's best idea to deploy beta at production. I'm going to migrate VMs to lucid host in a couple of months.
<ScottK> Depends on how soon you need to be in production.
<ScottK> Sensible, IMO.
<lifeless> darkk^: it won't be beta ina couple of months ;)
<darkk^> moreover, I see no reason to upgrade hardy to lucid as that box is quite trashed and will be temporary host anyway :-)
<twb> Pfft.
<twb> I started targeting Lucid for a one-month project, back in November.
<twb> Big surprise: it's unlikely to ship before Lucid does
<darkk^> so in fact I consider two ways 1) use kvm/hardy and migrate to kvm/lucid later (easier migration) 2) use virtualbox and migrate to kvm/lucid (MAYBE, better stability, I don't know if hardy/kvm is stable enough - it was long long ago)
<twb> I tried (1) a couple of months ago, using the sanctioned upgrade mechanism.  It was laughably failuriffic.  So I went back to using aptitude for dist upgrades, like the Goddess intended.
<darkk^> (1) saying "migrate to kvm/lucid" I mean "migrate to another physical box, running lucid"
<twb> Oh, you mean the *host* node
<PC_Nerd101> regarding apt-cacher (used as a caching proxy), does the client download its package lists via the proxy or direct from the mirror, in which case how can I make sure that the mirror my proxy path_map's to is synced with the mirror the package lists are from ?
<twb> Proxies are intended to work without direct access to the target, so I'd be VERY suprised if anything using apt-proxy/apt-cacher/whatever went directly to the upstream host.
<uvirtbot> New bug: #528812 in puppet "Unit tests error: test_data(TestCronParsedProvider): No fakedata matching /usr/share/test/data/providers/cron/examples/*" [Medium,Triaged] https://launchpad.net/bugs/528812
<PC_Nerd101> ok - so when the client is configured using the /etc/apt/apt.conf.d/01proxy  (Acquire::http::Proxy "server";) config - the package lists will be redirected to whatever teh proxy does?  fantastic :) ty :)
<PC_Nerd101> if it does do that, why does the output from sudo aptitude update still list the repo's in my sources.list file?  is it just passively changing the location without "notifying" the current machine that the proxy changed its request?
<uvirtbot> New bug: #558944 in puppet (main) "Unit test error: test_data_parsing_and_generating(TestMailaliasAliasesProvider): Puppet::DevError: No fakedata matching /usr/share/test/data/types/mailalias/*" [Medium,Triaged] https://launchpad.net/bugs/558944
<twb> PC_Nerd101: just test it by firewalling off direct connections
<PC_Nerd101> where would I firewall it from - the proxy machine or the client machine?
<twb> That would depend on your privileges and on the network layout
<twb> It doesn't really matter where
<PC_Nerd101> internet connected router is at 192.168.0.2 - servers are all on 192.168.1.x, cacher on 192.168.1.2, all machines proxy through this machine ( whether in or out of the subnet via port forwarding)... I want to ensure that connections are indeed to the mirror specified in the cacher and not their own sources.list definitionsl
<PC_Nerd101> so I would firewall on the caching machine?
<twb> Whatever.
<PC_Nerd101> ok.
<PC_Nerd101> thanks
<darkk^> you can also use wireshark/tshark or get netflow from the router to check your hypothesis without firewall rules modification.
<twb> Nod
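A sketch of the firewalling test twb suggests, run on a client (cacher address taken from the discussion; it needs root, so it is shown rather than executed here):

```shell
# Temporarily reject direct HTTP to anything but the cacher, then update;
# if the update still succeeds, all traffic is going via the proxy.
sudo iptables -I OUTPUT -p tcp --dport 80 ! -d 192.168.1.2 -j REJECT
sudo apt-get update
sudo iptables -D OUTPUT -p tcp --dport 80 ! -d 192.168.1.2 -j REJECT
```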
<rcsheets> is there a more up-to-date version of https://help.ubuntu.com/community/UEC that's written for Lucid?
<KurtKraut> rcsheets, AFAIK, help.ubuntu.com content is updated when the final version is released.
<rcsheets> hmm. it seems like someone somewhere would be writing the new documentation before it goes to help.ubuntu.com
<twb> Surely it's backed by a svn docbook repo or something
<rcsheets> well i dunno, it's a wiki
<rcsheets> are you suggesting the svn docbook repo is what the wiki stores the edits in?
<twb> If it was a *good* wiki, it'd be backed by a VCS :-/
<lifeless> it is
<lifeless> a horrible one, but one.
<lifeless> by horrible I mean, CVS-like.
<twb> lifeless: is help.u.c running some kind of sucky Canonical-internal wiki?
<lifeless> no
<lifeless> just moin
<twb> I didn't think moin's VCS backends were production-ready
<lifeless> its flat file store on disk is effectively a VCS
<lifeless> it would eat your brain to use it
<lifeless> but nevertheless, its a vcs
<twb> I've migrated from moin's plain text backend
<twb> Calling it a VCS is a bit of a stretch
<ttx> kirkland, hggdh: I have no status on the B2 validation work items from https://blueprints.launchpad.net/ubuntu/+spec/server-lucid-uec-testing -- please update
<twb> It'd be like calling LVM snapshotting a VCS
<twb> lifeless: but yeah, FYI the moin devs are/were working on VCS backends, so maybe you can migrate to git or (bleh) bzr in a couple of years
<rcsheets> right, in a "you could use it for that" sense
<rcsheets> well i'll have to continue my quest for lucid docs later. thanks for the info :)
<twb> When I migrated wiki.darcs.net from moin to gitit, I even imported the old commit history (except the spam).  That was fun!
<ttx> soren: around ?
<indigoparrot> hi there, anyone a Bacula user here?
<twb> !anyone
<ubottu> A large amount of the first questions asked in this channel start with "Does anyone/anybody..."  Why not ask your next question (the real one) and find out?
<indigoparrot> I'm running a bacula server on Ubuntu and I can't get it to connect to any of my window's clients. I've checked the IPs, passwords and users, all of which are correct. Any ideas?
<bigon> is'nt that bug https://bugs.edge.launchpad.net/ubuntu/+source/rng-tools/+bug/544545 a bug for ubuntu-server team?
<uvirtbot> Launchpad bug 544545 in rng-tools "rngd doesn't start automatically" [Undecided,New]
<uvirtbot> New bug: #559044 in ntp (main) "package ntp 1:4.2.4p6 dfsg-1ubuntu5.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/559044
<indigoparrot> bumping my question from an hour ago - I'm running a bacula server on Ubuntu and I can't get it to connect to any of my window's clients. I've checked the IPs, passwords and users, all of which are correct. Any ideas?
<darkk^> strace and/or wireshark it to check if you have proper connectivity (e.g. maybe firewall is blocking connects)
<indigoparrot> I've telnet'd from the ubuntu box (director) to the bacula-fd servers (windows box) with no problem, it's accepting incoming connections on the right port
<Schmidt> Am I right in assuming that it is futile to NAT traffic between two different private IP-networks? (range1 is 10.10.1.0/24 and range2 192.168.30.0/24)
<darkk^> Schmidt, what do you mean saying "futile" ? It's possible. Private IPs do not differ from public one from NAT point of view. :-)
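To darkk^'s point, NAT between the two private ranges Schmidt names is plain iptables NAT; a minimal sketch, assuming eth1 faces the 192.168.30.0/24 side (the interface name is an assumption):

```shell
# Enable forwarding, then masquerade the 10.10.1.0/24 side on the way out.
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.10.1.0/24 -o eth1 -j MASQUERADE
```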
<PC_Nerd101> does apt-cacher use the local sources.list to determine which mirror to request from?  or does it simply make the exact request ( to the requested mirror) on behalf of the client?
<uvirtbot> New bug: #559070 in openldap (main) "Lucid (or karmic) slapd upgrade does not really allow localroot cn=config manage rights" [Medium,Triaged] https://launchpad.net/bugs/559070
<sherr> PC_Nerd101: it uses the proxy defined in the apt/preferences file.
<sherr> PC_Nerd101: AFAIK, clients use the proxy, not connction to repos directly (tail the apt_cacher logs?).
<sherr> PC_Nerd101: Although I use apt-cacher-ng not apt-cacher.
<Schmidt> darkk^: I meant "not possible", I thought computers dropped traffic going between two private ip-networks because of the spoof problem...
<Schmidt> My actual problem was solved though :)
<darkk^> Schmidt, openvpn is your friend if you're going to pass traffic between two private netwoks via public internet :-)
<Schmidt> darkk^: We will implement a VPN solution, it's in the pipeline, but this was a top prio thing
<Schmidt> We could actually solve it with ssh-tunnels
<Schmidt> (something I am quite new to)
<darkk^> right, openssh support both dumb port forwarding and something VPN-like via TunnelDevice.
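Sketches of both modes darkk^ mentions; host names and ports are invented, and the tun mode requires root on both ends plus `PermitTunnel yes` in the remote sshd_config:

```shell
# Dumb port forwarding: local port 13306 reaches 10.10.1.5:3306 via the gateway.
ssh -L 13306:10.10.1.5:3306 user@gateway.example.com

# VPN-like forwarding over a tun device (the TunnelDevice / -w mechanism).
sudo ssh -w 0:0 root@gateway.example.com
```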
<binBASH> moin
<johe|work> even moin
<binBASH> anyone knows how to enable vnc for vms running in cloud?
<ttx> smoser, kirkland: ping me when you get up
<smoser> ttx, here.
<ttx> smoser: o/
<ttx> smoser: see pm
<kirkland> ttx: here
<ttx> kirkland: I finish with smoser and I'm all yours ;)
<ttx> kirkland: see my message a few hours ago about B2 tests signoff
<zul> ttx: upgrading from intrepid to lucid is supported?
<ttx> zul: no
<kirkland> ttx: i have no pm from you
<ttx> kirkland: it was a public message:
<ttx> <ttx> kirkland, hggdh: I have no status on the B2 validation work items from https://blueprints.launchpad.net/ubuntu/+spec/server-lucid-uec-testing -- please update
<kirkland> ttx: right, there were issues, hggdh was still running them when i went to bed last night
<kirkland> ttx: i'm looking to hear back from him this morning
<ttx> kirkland: ok
<ttx> kirkland: makes sense, let's wait a little
<hggdh> kirkland: ttx no luck last night, same ssh timeouts
<hggdh> kirkland: ttx I will mark it as postponed
<ttx> hggdh: it's not really postponed, it's not done... since now you'll move to test the B2 release rather than the B2 candidate (and that's another work item)
<kirkland> ttx: hggdh: okay, i'll sign off on your tests, but we need to get a bug opened about the ssh timeout issue
<kirkland> hggdh: open a bug, and attach all the logs you can
<kirkland> hggdh: i'll look into the eucalyptus side
<ttx> hggdh: well, it's "done with some failures to investigate"
<kirkland> ttx: and I'll need mathiaz to look into the test suite side (in case it's the test that's broken)
<ttx> hggdh: ideally we'd have a report saying what worked and what didn't, to use as a data point when we'll compare with future tests
<PC_Nerd101> I'm attempting to upgrade ( server) from 9.10 to 10.04 beta2 however sudo do-release-upgrade --devel-release says there was no new release found.  I took the command from the LucidUpgrades page on comunity help.  Any suggestions on what I shoudl check?
<kirkland> hggdh: poke me when you have that bug filed, and i'll sign off on the b2-candidate tests
<kirkland> hggdh: also, create a junk bzr branch, and check in logs of your results
<kirkland> hggdh: we need to come up with a better way of tracking "proof" that stuff worked at the milestones, but bzr will work for now
<smoser> soren, ping
<hggdh> ack
<PC_Nerd101> bump* re. do-release-upgrade --devel-release reporting no new release.   Is this correct or is documentation incorrec ?
<smoser> soren, ping regarding bug 524020. i attached a patch to trunk, if you'd like me to put a branch for sponsor for lucid i can.
<uvirtbot> Launchpad bug 524020 in vm-builder "karmic uec builds fail to publish due to 2 installed -ec2 kernels" [High,Fix released] https://launchpad.net/bugs/524020
<sherr> PC_Nerd101: On 9.10, I just did "sudo do-release-upgrade -d" and it works.
<sherr> PC_Nerd101: I aborted it at the (very) end of course :-)
<sherr> This is 9.10 32bit desktop (laptop) though
<PC_Nerd101> ok - well I just disabled the /etc/apt/apt.conf.d/01proxy file to disable its proxy connection to apt-cacher... and got it started and (seemed to be ) working.. so I suspect that somewhere along the line apt-cacher might not be getting the updates....  I'm looking at bug reports now to see if I can spot anything...
<PC_Nerd101> * I'm on 32 laptop as well, partitioned with ntfs on sda1 and 9.10 on sda3
<aurigus> Does hdparm -t test write speeds? Or read speeds only?
<sherr> aurigus: what does the manual page say?
<aurigus> read
<aurigus> does anyone have a handy command to test drive write speeds
<aurigus> just discovered zcav
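Since hdparm -t is read-only, one crude write test aurigus could use is dd with fdatasync, so the reported rate includes flushing to disk rather than just filling the page cache (the target path is an assumption; point it at the filesystem under test):

```shell
# Write 64 MiB and force it to disk before dd reports its rate.
dd if=/dev/zero of=/tmp/ddtest.img bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest.img
```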
<PC_Nerd101> sherr: I've checked bug reports and upgraded both client and cacher... I cannot find the cause of this issue.  I've checked and I can upgrade the client when it is not set through the cache proxy..  but it wont work when I have it through the proxy...
<PC_Nerd101> sherr: further - When I update the "signature" without the proxy, cancel, reenable the proxy and attempt to upgrade again, I get "Failed Upgrade tool signature. ....  There may be a network problem." ..  any ideas?
<sr1n1> Hi, how do I output something to the console-output from an init script on an EC2 machine?
<sherr> PC_Nerd101: Sorry, no. Some of this might be the design of the program (do-release-upgrade). I don't know.
<PC_Nerd101> sherr: no problem - so would you say the only current alternative ( that you know of) would be to simply download the updates to all machines I want to update to? - ie, not cache it?
<sherr> PC_Nerd101: sorry, no idea. I'd probably do som research on the do-release... program and operation. Or wait for release.
<hggdh> kirkland: bug 559230 opened, I am saving the logs
<uvirtbot> Launchpad bug 559230 in eucalyptus "multi-machine topology, cannot reach an instance from the CLC" [Undecided,New] https://launchpad.net/bugs/559230
<smoser> ttx, why did you assign bug 523148 to kirkland
<uvirtbot> Launchpad bug 523148 in libvirt "virsh console does not work (/dev/pts/1: Permission denied)" [Undecided,Invalid] https://launchpad.net/bugs/523148
<smoser> it can't be fixed.
<uvirtbot> New bug: #559243 in mysql-dfsg-5.0 (universe) "libmysqlclient15off segfault when using libnss-mysql-bg" [Undecided,New] https://launchpad.net/bugs/559243
<smoser> at least not without new code in libvirt
 * ttx looks
<kirkland> smoser: thanks, i'm going to mark wont-fix for lucid
<ttx> someone targeted to lucid, probably before the investigation
<ttx> smoser: in fact, kirkland did nominate it to Lucid :)
<kirkland> ttx: that was when I thought it was fixed in 0.7.7
<kirkland> ttx: and jdstrand and I were trying to build a case for or against 0.7.7
<ttx> kirkland: ack, makes sense to wontfix it then
<kirkland> done
<smoser> and this is not fixed in 0.7.7 or libvirt trunk.
<binBASH> Anyone knows how to enable vnc for the vms in uec please?
<smoser> binBASH, you'll need to install a vnc server inside them.
<binBASH> smoser: it's not possible to use the kvm inbuilt vnc?
<smoser> graphical console access is not something that uec/ec2 offer . and the serial console offered is read-only.
<binBASH> smoser: I think I started kvm manually and was able to use vnc.
<smoser> you could fairly easily hack it, and enable 'vnc' as console.. but without some trickery, you'd then have to connect to the node controller to get at it.
<sommer> \
<smoser> binBASH, yes, libvirt/kvm do offer this. eucalyptus/ec2 do not expose it (and the libvirt xml that they write do not contain 'console: vnc' or whatever the syntax is
<binBASH> smoser: well I have to find a way how to get networking working, with this provider dilemma ;)
<smoser> i haven't been following, so i dont know exactly the dilemma
<binBASH> smoser: The provider gives 4 ips per server. they cannot be moved to another one.
<binBASH> and I don't wanna do NAT because there's a 2 tb traffic limit per server
<binBASH> I configured a br0 bridge, and when I launch kvm manually I can configure the networking inside the vm.
<binBASH> via the vnc
<smoser> binBASH, well, whatever you do manually inside there, you can do via script in --user-data-file=
<smoser> when launched
<smoser> ie:
<binBASH> I think via dhcp it's not possible to configure an ip range per mac address as well
<smoser> euca-run-instances --user-data-file=my-setup-networking-script.txt emi-xxxxxxx
<smoser> thats probably going to fail though for the lucid images....
<smoser> as nothing will happen until eth0 comes up
<smoser> hmm..
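The shape of the --user-data-file script smoser describes, for static network configuration inside the guest; all addresses are invented documentation-range values, and per his caveat this won't help on images where nothing runs until eth0 is up:

```shell
#!/bin/sh
# Hypothetical my-setup-networking-script.txt: statically configure eth0.
ifconfig eth0 203.0.113.10 netmask 255.255.255.0 up
route add default gw 203.0.113.1
```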
<binBASH> smoser: But see this is a real dilemma :P
<smoser> binBASH, you might be able to set up each node as an "availability zone"
<smoser> if IP addresses could be limited to an availabilitty szone then you'd be set
<smoser> but i dont know if they can
<smoser> in ec2 they are not
<binBASH> smoser: It would be nice if it's possible ;)
<smoser> i ca'nt think of a clean way that isn't going to require you modifying eucalyptus
<amine_> hello, looking for a good doc in Ubuntu bridging.. any suggestions !
<binBASH> smoser: I don't think you can modify eucalyptus
<binBASH> class files.....
<smoser> well you can rebuild
<smoser> ppa and such
<binBASH> java files available?
<smoser> you *can* modify class files too... you just have to be uber smart :)
<smoser> its all built from source
<binBASH> :p
<smoser> bzr branch lp:ubuntu/eucalyptus
<binBASH> smoser: The eucalyptus web iface is kinda limited.
<binBASH> smoser: I'll write to eucalyptus forum first, before doing such big changes ;)
<smoser> probably a good idea
<binBASH> maybe it's better to write a custom cloud iface anyways for what I really need ;)
<binBASH> because I want to define data centers virtually and have people routed via geoip to vms
<binBASH> dunno if this is all possible with eucalyptus
<binBASH> I dunno as well if it's possible to move running vms.
<binBASH> like I have a node in netherlands and one in usa and want to move nl to usa
<smoser> it is not possible to move running vms on eucalyptus
<binBASH> do you know if it's possible with kvm at all?
<ttx> mathiaz: ping
<mathiaz> ttx: o^1098
<ttx> mathiaz: see pm
<smoser> binBASH, it is possible with kvm, yes.
<smoser> but it requires shared storage between the nodes
<smoser> as, i believe, does even vmware
<smoser> which means the migrate-across-an-ocean thing is not too terribly reasonable.
<binBASH> smoser: I have shared storage
<binBASH> glusterfs......
<smoser> well, it can be done. kvm does support it, and libvirt exposes it. eucalyptus does neither.
<binBASH> ok
<binBASH> smoser: I wonder what happens if I start a dhcpd on each node.
<binBASH> if the dhclient in the vm can use it. :)
<smoser> it would probably dpend on the type of setup you have.
<smoser> i wondered what would happen there, though.
<binBASH> because then I could configure the ranges there.
<smoser> will a dhcp request from a node controller even get to your cloud controller ?
<binBASH> Dunno
<smoser> the fallout will be that euca-describe-images won't know the IP.
<smoser> there is a mode in eucalyptus that allows for this.
<smoser> it hackily tries to get the IP of the node via arp.
<binBASH> I think it can't get the dhcp from cloud controller, because nodes are not on same switch
<uvirtbot> New bug: #559326 in mysql-dfsg-5.1 (main) "symbolic link missing for libmysqlclient_r.so" [Undecided,New] https://launchpad.net/bugs/559326
<uvirtbot> New bug: #559378 in dhcp3 (main) "dhclient3 crashed with SIGSEGV in do_packet()" [Undecided,New] https://launchpad.net/bugs/559378
<Rafael_> I posted this question a few days ago, but have not solve it yet: I use rsync to copy a  windows client folder into the ubuntu server every using rsync and Cron. So for example Folder "test" on windows is mounted on ubuntu and from there thu rsync it copies into another folder,  at the present moment to do this I have to share on the windows computer the folder "test" with every body for this to happen…. i would
<Rafael_>  like to know if there is a way to avoid sharing my windows folder with everybody?
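The question gets no answer in channel; one hedged approach is to restrict the Windows share to a single account and mount it with a credentials file instead of sharing with Everyone (the share name, account, and paths are all invented for illustration):

```shell
# /root/.cifs-credentials (chmod 600) would contain two lines:
#   username=backupuser
#   password=secret
sudo mount -t cifs //winbox/test /mnt/test \
  -o credentials=/root/.cifs-credentials,ro
```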
<rcsheets`osu> while setting up grub-pc 1.98-1ubuntu4, i got this message:
<rcsheets`osu> File descriptor 3 (pipe:[8183]) leaked on lvs invocation. Parent PID 2877: /bin/sh
<rcsheets`osu> should i be concerned?
<uvirtbot> New bug: #559447 in samba (main) "[lucid] samba is to be removed during update" [Undecided,New] https://launchpad.net/bugs/559447
<soren> ScottK: Care to take a look at bug 559462? I'm about to disappear for a week so it would be nice to get this handled before then.
<uvirtbot> Launchpad bug 559462 in ubuntu "[FFe] New package: python-cloudservers" [Undecided,New] https://launchpad.net/bugs/559462
<ScottK> Looking
<soren> ta
<ScottK> soren: As long as you can find and archive admin with time for the New review, approved.
<soren> ScottK: Awesome, thanks.
<trappist> I've created a new user, and for some reason his processes are listed in ps by his uid, not his username.  this is giving me some permissions issues.  /etc/passwd looks right, where else should I look to resolve this?
<guntbert> trappist: 1) getent passwd <user>   -- compare with 2) getent passwd <hisuid>
<trappist> ah getent, that's what I was trying to conjure up
<trappist> they match
<guntbert> trappist: and what does ls -ld /home/user show, the uid or the name?
<trappist> the name... I think ps is doing this because the username is a tad long, 'telluride'
<trappist> just saw the same thing on another machine (that's behaving) with the same user
<birmaan> hoi
<guntbert> trappist: you are right I just tested it by varying a username
<trappist> so I guess I've narrowed it down to a bug in the god rubygem
<uvirtbot> New bug: #556819 in eucalyptus (main) "Can i run GNOME ( graphical user interface) on cloud machine ?" [Undecided,Invalid] https://launchpad.net/bugs/556819
<trappist> guntbert: thanks
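The diagnosis above can be confirmed directly: procps ps falls back to the numeric UID when the username doesn't fit the default 8-character column, and widening the column brings the name back:

```shell
# Default width may show a UID for long names like 'telluride' (9 chars);
# an explicit width (user:12) shows the full name again.
ps -o user= -p $$
ps -o user:12= -p $$
```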
<KristianDK> Hello! I just installed an Ubuntu Enterprise Cloud Cluster/Controller - where do i find the username/password for the web interface?
<KristianDK> i tried admin/admin as described on the wiki, but it claims there is no such username
<kirkland> mathiaz: ping
<mathiaz> kirkland: o/
<kirkland> mathiaz: could you have a look at hggdh's results for config_multi?
<kirkland> mathiaz: those failed pretty badly
<mathiaz> kirkland: where?
<kirkland> mathiaz: i'm hoping for a bug in the testing script, or perhaps the configuration
<kirkland> hggdh: hey, where's your results posted?
<kirkland> mathiaz: i asked hggdh to commit his test results to a bzr branch
<kirkland> mathiaz: for tracking across milestones
<kirkland> hggdh: and what's the bug # you opened?
<hggdh> kirkland, mathiaz -- /home/cerdea/uec-testing.tar
<hggdh> kirkland: I have not yet commited to a public bzr
<gzmask> guys, after I install UEC I can't use my account to login in ecalyptus web portal
<hggdh> kirkland, mathiaz no, this is not the last runs, I will upload them there
<gzmask> do I need to adduser first?
<kirkland> hggdh: okay
<kirkland> gzmask: admin/admin
<kirkland> gzmask: is the default username/password
<gzmask> kirkland: Error: Username 'admin' not found
<uvirtbot> New bug: #559533 in mysql-dfsg-5.1 (main) "[ubuntu 10.4] when _boot_ in single mode, mysql is running" [Undecided,New] https://launchpad.net/bugs/559533
<hggdh> kirkland, mathiaz /home/cerdea/uec-test.tar.gz on tamarinf
<hggdh> kirkland, mathiaz bug 559230
<uvirtbot> Launchpad bug 559230 in eucalyptus "multi-machine topology, cannot reach an instance from the CLC" [Undecided,New] https://launchpad.net/bugs/559230
<ScottK> lamont: Now that you're EOW, would you mind putting your postfix maintainer's hat on for a moment?
<kirkland> gzmask: in the web frontend?
<kirkland> gzmask: https://wherever:8443/ ?
<gzmask> ya, at port 8443
<kirkland> gzmask: hmm, sounds like your install is incorrect?
<gzmask> can be, first time trying to
<gzmask> but I can login in the bash shell using the account I created
<RoAkSoAx> kirkland, howdy!! I meant to ask you if for the modularization we should drop the default setting of variables in the code, and just use the config file for defaults. Or, should we keep it?
<uvirtbot> New bug: #558793 in samba (main) "net ads dns register fails in 2008 R2 domain" [Medium,New] https://launchpad.net/bugs/558793
<KristianDK> gzmask, i have the same problem - i just installed like an hour ago
<gzmask> KristianDK: have you figured something out yet? I am googling but nothing catches my eye yet
<KristianDK> gzmask, nothing at all - everywhere it says "just type admin/admin" and it should work
<KristianDK> ive been googling like everything
<gzmask> hmmm.... maybe I should switch to Xen on ubuntu then
<KristianDK> kirkland, you never heard about this issue before?
<kirkland> KristianDK: gzmask: gimme a minute ... i just installed fresh
<kirkland> KristianDK: gzmask: can you confirm that this is 10.04 Beta2 ?
<KristianDK> kirkland, Im using 9.10
<KristianDK> but i could try with the 10.04 beta too
<gzmask> 9.10 x64 versoin ubuntu server iso
<kirkland> KristianDK: gzmask: okay, i just tested 10.04 Beta2, and admin/admin works perfectly on first login
<kirkland> KristianDK: gzmask: i don't have a 9.10 setup right now
<kirkland> https://help.ubuntu.com/community/UEC/CDInstall
<KristianDK> kirkland, i can give you SSH to my brand new setup if you want :P
<kirkland> that should be instructions
<kirkland> KristianDK: sorry, i'm slammed trying to fix 10.04 issues
<kirkland> KristianDK: no time for 9.10, dr. jones
<KristianDK> hehe, ok - np :D
<kirkland> :-)
<KristianDK> i guess i have to try installing the 10.04 then
<KristianDK> :D
<gzmask> gonna check my installation steps. thanks kirkland
<kirkland> KristianDK: it's way better :-D
<kirkland> gzmask: k
<kirkland> gzmask: open a bug, if you can reproduce this again
<kirkland> oh, also ...
<kirkland> gzmask: KristianDK: are you sudo apt-get dist-upgraded to the latest 9.10 ?
<kirkland> gzmask: KristianDK: there are a few really important euca bug fixes in there
<KristianDK> no, its just a fresh install, no commands used at all
<kirkland> gzmask: KristianDK: one that could solve your issue (database problems)
<gzmask> not yet, gonna do it now
<kirkland> KristianDK: oh, i'm sure that's it
<kirkland> gzmask: ^
<kirkland> sudo apt-get update && sudo apt-get dist-upgrade
<kirkland> give it a few minutes to restart all of your services, etc.
<KristianDK> I'll just give it a go, it wont take more than a few minutes it seems :)
<KristianDK> kirkland, do you btw know which date they will launch 10.04?
<genii> April 29
<KristianDK> ty :D
<KristianDK> gzmask, kirkland, after the update it seems to work :)
<KristianDK> just for the record
<gzmask> cool, my internet sucks, still updating
<gzmask> worked, apt-get won again
<xgpt> hey everyone, what SMTP server should I use for my home server? I don't need anything too fancy...simple is better.
<guntbert> xgpt: why do you need an smtp server at all?
<xgpt> because I want to start spamming viagra ads...kidding i just want to play around with one
<cloakable> xgpt: dovecot-postfix :)
<guntbert> xgpt: if you keep it strictly private it doesn't really matter - but I like dovecot
<funkyHat> dovecot isn't an smtp server
<guntbert> funkyHat: thx - I don't know what happened to my brain  :-/
 * guntbert blushes
<funkyHat> guntbert: hehe
<kirkland> KristianDK: ;-)
<lullabud> is there a command that will indicate if your hardware does hardware virtualization?  i have a collection of misc boxes, trying to find some spares to use for a test environment for UEC
<nekro_> lullabud: try kvm-ok
<nekro_> lullabud: also "modprobe kvm ; lsmod | grep kvm" will show you kvm_intel or kvm_amd if you have hardware support
<lullabud> thanks
<lullabud> w00t, that is exactly what i need, thanks!
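The flag check behind nekro_'s suggestion reads `/proc/cpuinfo`: `vmx` marks Intel VT-x and `svm` marks AMD-V. A small sketch of the same test (the helper name is made up; it reads stdin so it can be tried offline):

```shell
# count_virt_cpus: count logical CPUs whose flag lines include vmx (Intel
# VT-x) or svm (AMD-V). Reads cpuinfo-formatted text on stdin so it can be
# exercised with canned input; on a live box, feed it /proc/cpuinfo.
count_virt_cpus() {
    grep -E -c '\b(vmx|svm)\b'
}

# Typical use:  count_virt_cpus < /proc/cpuinfo
# A result of 0 means no hardware virtualization (or it's disabled in the BIOS).
```

Note that a CPU can support VT and still report nothing here if the feature is switched off in firmware, which is why `kvm-ok` remains the friendlier check.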
<eaglecoth> Hey, I followed the InternetConnectionSharing Guide on ubuntu.org, it works flawlessly
<eaglecoth> however, the setup is not kept after reboot, where is the proper place to configure internet sharing at bootup?
<deliverance> hey guys
<pwnguin> am i crazy or does NTP not work correctly out of the box?
<pwnguin> it looks like ntpdate-debian uses /etc/ntp.conf by default, which requires ntp to be installed
<GhostFreeman> Has anyone managed to get Tokyo Cabinet and Tokyo Tyrant running on 9.10
<soren> What on Earth is that?
<GhostFreeman> its a clone of dbm
<GhostFreeman> its pretty popular with all the nosql kids
<uvirtbot> New bug: #559628 in ntp (main) "yust another apparmor-message" [Undecided,New] https://launchpad.net/bugs/559628
#ubuntu-server 2010-04-10
<hggdh> kirkland: still there?
<steven_t> heh!
<steven_t> i installed nginx with aptitude install nginx... and removed it with aptitude --purge remove nginx... but guess what? /etc/init.d/nginx is still there, as is /etc/nginx and all the manpages etc
<zul> steven_t: file a bug then
<steven_t> lol
<lamont> ScottK: sup?
<ScottK> lamont: I was thinking about the new postscreen tool in 2.7.
<ScottK> Upstream has clearly labeled it "Experimental" in 2.7.
<ScottK> Should it be split out into a separate binary that's not installed by default?
<lamont> hrm... possibly
<ScottK> I was thinking that experiments shouldn't be part of the default mail server task for an LTS.
<lamont> I would not be averse to such a thing
<lamont> ah, come on.... where's your sense of ADVENTURE??
<lamont> er, I mean, I agree
<ScottK> Heh.
<lamont> you wanna work up a diff?
<ScottK> I know that 'experiment' in the default install is now an Ubuntu Desktop tradition, but for Server, I think not so much a great idea.
<lamont> harsh dude
<ScottK> Accurate.
<ScottK> OK.  Let me see what I can do.
<cloakable> hmmm
<cloakable> Oh, how is spam filtering in the mail task going?
<ScottK> I lost track of how far we got on that.
<ScottK> ivoks would know, but he's not around.
<ScottK> In any case, amavisd-new with spamassassin and clamav is the standard, documented approach.
<cloakable> awesome
<cloakable> I've been trying with dovecot-antispam, because it seems it would give the best result, if I could get it working (:
<cloakable> (I.e monitors a spamfolder, and on movement out of the folder, calls the spamfilter automatically to mark as 'ham')
<cloakable> However, documentation on it is nonexistent :(
<ScottK> cloakable: Start with the Ubuntu Server Guide documentation on spam filtering.
<ScottK> dovecot-antispam would be an advanced part you might bolt onto it later.
<cloakable> ScottK: That's a little clunky to train for nonspam, though.
<cloakable> Needs ssh-ing into the server to call manually.
<cloakable> And while I can do that, I'd rather not have to, and there's users on my server that cannot :)
<ScottK> With a good set of RBLs + amavisd-new/spamassassin/clamav you get rid of an awful lot of it without having to mess with bayesian filter training.
<ScottK> I'd get that set up first and then see if you want to bother.
<cloakable> Mmmmm.
<cloakable> And when I get spam in my ham and ham in my spam? :P
<ScottK> First see how much of it there is before you solve the problem.
 * cloakable finds out what was wrong with dovecot-antispam >.>
<cloakable> It's been compiled with the wrong backend :)
 * cloakable gets the source, comments out 'mailtrain' and puts in 'dspam'
<cloakable> There, added 'dovecot' to the dspam trusted list
<uvirtbot> New bug: #559745 in eucalyptus (main) "NC failed to start a session with a libvirt internal error" [Undecided,New] https://launchpad.net/bugs/559745
<uvirtbot> New bug: #559752 in samba (main) "package samba-common 2:3.4.0-3ubuntu5.6 failed to install/upgrade: el subproceso script post-installation instalado devolvió el código de salida de error 1" [Undecided,New] https://launchpad.net/bugs/559752
<ScottK> lamont: Never mind.  Apparently it's so experimental Wietse didn't include it in the tarball.  No wonder I couldn't find it.
<uvirtbot> New bug: #553853 in samba (main) "(Kubuntu) Samba shares (fstab) slow down system shutdown/reboot" [Undecided,New] https://launchpad.net/bugs/553853
<lamont> ScottK: he tends to be more pedantic than me about experimental vs official - which lets me ignore that aspect without thinking much about it
<lamont> to the point that you made me go "wut, yeah kill that" thinking you'd actually seen it there already
<ScottK> Which is a good thing to have in an MTA author.
<lamont> very much so
<lamont> on that note, sleep time.
<ScottK> I thought I'd seen it referred to as being in the release on the mailing list.
<lamont> head->pillow
<ScottK> Good night.
<GhostFreeman> I just killed two birds
<GhostFreeman> good hello
<GhostFreeman> I wish I would use the right channel
<ScottK> It's more fun for us when you don't.
<aetaric> hey, i added a scsi drive to my live server and it isn't showing up, do i have to reboot it?
<histo> ZenMasta_: are you there?
<ZenMasta_> need some help installing pdo and pdo_mysql i get a message sh: phpize: not found
<histo> looks like pdo is part of the php code now and you should just need to skip to pdo_mysql
<ZenMasta_> i see, let me try and see what happens
<ZenMasta_> histo same error
<histo> ZenMasta_: yeah what version of ubuntu are you using?
<ZenMasta_> 9.10
<ZenMasta_> on a side note, when I try to install pdo_mysql it downloads pdo_mysql and then after it downloads pdo still
<histo> and you're using sudo pecl install pdo
<ZenMasta_> yep
<histo> do you have php5-dev installed?
<ZenMasta_> I think so how can i find out without trying to install it again
<histo> dpkg -l | grep php5
<ZenMasta_> just decided to install before you typed that
<histo> should show a php5-dev package but like I said i think pdo is obsolete
<ZenMasta_> so we'll see what happens
<histo> yeah pdo has been moved into the php source
<ZenMasta_> histo that did it
<ZenMasta_> installing now so i'll try the web app when its done and hopefully it will progress
<histo> did you try the webapp prior to running pecl
<histo> !info php
<ubottu> Package php does not exist in karmic
<histo> !info php5
<ubottu> php5 (source: php5): server-side, HTML-embedded scripting language (metapackage). In component main, is optional. Version 5.2.10.dfsg.1-2ubuntu6.4 (karmic), package size 1 kB, installed size 20 kB
<histo> ZenMasta_: http://pecl.php.net/package/PDO/php-src/pdo
<histo> see
<ZenMasta_> thanks
<ZenMasta_> how do I edit php.ini? when I try to open it with vi it's as if it doesn't exist so it pretends to make a new file
<binBASH> Hi
<KristianDK> Hello - i would like to test out the ubuntu enterprise cloud with more than 1 or 2 nodes, so i was considering if you can run UEC on Amazon Ec2 for testing? It seems this is the only way to "rent" a lot of computers for a short period of time
<RoyK^> hi all. seems I'm doing something strange here. I try to mount an nfs filesystem on the host from a virtualbox VM, but I get 'mount.nfs: mount to NFS server 'rpcbind' failed: RPC Error: Program not registered' - 'mount.nfs: internal error' - /etc/exports looks right, services are started and ufw has 'allow from x.x.x.x' (the VMs address). The VM runs in bridge mode. Any ideas?
<pmatulis> RoyK^: is mountd running on the server?
<RoyK^> hm. no. what starts that? thought that should be in nfs-kernel-server or something
<twb> Start by checking "rpcinfo -p"
<RoyK^> http://pastebin.com/98hsiw0x
<binBASH> KristianDK: You could use public cloud from Eucalyptus
<KristianDK> binBASH, but i want to check out the configuration :-) Not how the instances work
<twb> Isn't the whole point of EUC that it's backwards-compatible with Amazon?
<twb> Er, UEC
<binBASH> KristianDK: It's possible to run everything on one node.
<binBASH> Not really a need for multiple nodes.......
<uvirtbot> New bug: #560011 in ntp (main) "Time cannot be fixed with ntpdate" [Undecided,New] https://launchpad.net/bugs/560011
<KristianDK> binBASH, both controller and node in one box?
<binBASH> yeah
<RoyK^> twb, pmatulis any ideas?
<binBASH> have the same here
<twb> RoyK^: pastebin "exportfs -vra"
<binBASH> KristianDK: I have 7 nodes, including the one with cluster and cloud controller
<RoyK^> twb: exporting 213.236.233.67:/var/www
<binBASH> planning to have 150 nodes ;)
<RoyK^> tried with * as well
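The mountd check pmatulis and twb are walking through can be scripted; a sketch that inspects saved `rpcinfo -p` output (the helper name and file path are made up):

```shell
# nfs_services_ok FILE: succeed only if saved `rpcinfo -p` output lists every
# service an NFSv3 client needs. A missing mountd is exactly the symptom
# behind "RPC Error: Program not registered".
nfs_services_ok() {
    for svc in portmapper mountd nfs; do
        grep -qw "$svc" "$1" || { echo "missing: $svc"; return 1; }
    done
    echo "all NFS services registered"
}

# Typical use:  rpcinfo -p > /tmp/rpc.out && nfs_services_ok /tmp/rpc.out
```

If mountd is missing, restarting `nfs-kernel-server` (which starts it) and re-running `exportfs -ra` is the usual next step.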
<KristianDK> binBASH, cool :-) Well, i want to test things out before deploying it in a big scale
<binBASH> Like me then ;)
<KristianDK> binBASH, do all your nodes have the VT extension as its recommended?
<binBASH> yup
<binBASH> KristianDK: http://www.hetzner.de/en/hosting/produkte_rootserver/eq6/
<binBASH> have those as nodes
<RoyK^> 150 nodes??? how many racks?
<KristianDK> binBASH, i was actually considering http://www.hetzner.de/en/hosting/produkte_rootserver/eq8/ :-D
<KristianDK> im already a customer there for some other servers
<binBASH> KristianDK: It will be a problem with network configuration ;)
<binBASH> still stuck on this......
<KristianDK> yeah, i guess because of the IP addresses being bound to the MAC and the limitation of the 100mbit router, right?
<binBASH> Yeah, the ips are bound to server.
<binBASH> RoyK^: Dunno, this provider does tower hosting......
<binBASH> KristianDK: With 150 nodes, there will be a lot of ram anyways, don't need eq8 really ;)
<RoyK^> binBASH: if you need 150 nodes, I'd guess hosting it locally may be a lot cheaper
<binBASH> RoyK^: Don't think so.
<binBASH> don't have money to buy all those servers ;)
<KristianDK> binBASH, true :) I think i'll end up with one more ESXi box on the EQ8 anyway, since everything else seems complicated with hetzner :(
<RoyK^> binBASH: but .... 150 nodes? you can run like 10 VMs on a single node - perhaps more - what do you need this for?
<binBASH> KristianDK: Well, I'll try to start vms now with vnc option and configure networking inside there manually.
<binBASH> RoyK^: Need them not for the vms, but for storage
<RoyK^> binBASH: how much storage do you need?
<KristianDK> binBASH, i've configured a router VM which the IPs are bound to, it forwards the IPs
<binBASH> RoyK^: 200 TB
<RoyK^> binBASH: I just got an offer for such a box - NOK 250k
<binBASH> KristianDK: I don't want to NAT, because 2 TB Traffic Limit per node
<RoyK^> binBASH: and storage should be done on zfs imho
<RoyK^> NOT on a VM
<RoyK^> but on hardware
<binBASH> RoyK^: I'll use GlusterFS
<RoyK^> why not just a big supermicro box stuffed with 2TB drives and a SAS expander and some extra chassises for disks?
<binBASH> RoyK^: Like I said, I'm limited in Finances ;)
<RoyK^> it'll be cheaper
<KristianDK> binBASH, i dont use NAT, its Ip forwarding, i use the router as gateway in the network config - but i don't think you can get around the 2tb limit anyway? Its bound to the IPs
<RoyK^> you need 200TB and can't afford it?
<binBASH> RoyK^: It's a single point of failure.
<RoyK^> binBASH: then get two of them and use zfs send/receive to keep them in sync
<binBASH> KristianDK: if you forward to one server it will count there
<binBASH> RoyK^: Atm I'm using 20 TB Raid 6 NFS Server.
<RoyK^> binBASH: I REALLY doubt you can get something cheaper from somewhere else
<RoyK^> binBASH: we bought this box some time back with 30TB net storage - it just cost like USD 10k
<binBASH> RoyK^: Everything I was looking for was more expensive. like NetApp or Isilon
<KristianDK> binBASH, as i understood from hetzner you need one router VM per physical server
<RoyK^> binBASH: hah - use supermicro hardware, cheap drives, and opensolaris with zfs (with compression and dedup)
<RoyK^> binBASH: where I work, we have rather high storage demands - wind field and other satellite data takes up space, and we're getting more and more all the time
<binBASH> KristianDK: If it would be like this, I wouldn't use the vm for routing.
<binBASH> Would just use the main box itself, because the ip is unusable anyways from within vms.
<KristianDK> binBASH, i was told this was the only option - what else would you do?
<KristianDK> true
<KristianDK> i was thinking ESXi again
<KristianDK> sorry :P
<binBASH> KristianDK: I already started VMs manually and they had a usable ip address.
<binBASH> but I dunno how to automate it within Eucalyptus, that's the only problem
<KristianDK> yeah
<RoyK^> binBASH: really, using VMs for storage is a BAD idea
<binBASH> RoyK^: I don't wanna use vms for storage ;)
<binBASH> the storage will be on the real server itself, though I will embed it from inside the vms
<RoyK^> binBASH: but - please - give opensolaris+zfs a try - it's well worth it. no raid controller, just zfs doing it all
<RoyK^> zfs rocks rather loudly
<binBASH> RoyK^: If I would use opensolaris on some boxes I wouldn't be able to use their processors.
<binBASH> I have rather high demand on cpu
<binBASH> storage speed is not that important
<RoyK^> for your needs, I would say separate storage and computation
<RoyK^> use storage computer cpu for compression and dedup
<binBASH> Don't need compression
<RoyK^> depending on the data, both can give you quite a bit of gain without much cpu use
<binBASH> for jpgs it's useless
<RoyK^> what sort of data is this?
<RoyK^> indeed
<binBASH> RoyK^: We're hosting image agencies.
<binBASH> Things like www.gettyimageslatam.com
<RoyK^> but stuff like zfs snapshotting is quite priceless
<twb> btrfs and LVM do snapshotting
<RoyK^> btrfs is NOT stable
<twb> Granted
<RoyK^> LVM snapshotting is crap
<twb> LVM snapshotting is adequate for my purposes
<RoyK^> LVM snapshotting moves data out of the original place for each write instead of writing new data and moving pointers
<RoyK^> meaning if you have lots of snapshots, everything will be very, very slow
<twb> Um, both LVM and ZFS snapshotting are block COW.
<twb> I grant you that LVM is probably a lot slower.
<RoyK^> lvm moves data out before overwriting them - not like zfs, which writes new data
<twb> Shrug.
<RoyK^> CoW is two different things - either write new data and move pointers, which is what ZFS and NetApp does, or move the old data prior to overwriting the old ones, which is what LVM does
<binBASH> maybe I'll take Strato HiDrive Pro for Storage ;P
<twb> At the end of the day, ZFS is not enough to make me adopt osol.
<binBASH> 5 TB mirrored = 149 Eur / Month
<RoyK^> twb: heh - then you really haven't looked into it
<twb> I'm running a 2TB osol server for ZFS right now.
<binBASH> RoyK^: If you really can afford a big storage you should go to Isilon ;)
<twb> But as soon as btrfs is ready, it will die
<binBASH> It's a much better technology
<RoyK^> btrfs is decent, but lacks a lot of what's in zfs atm. give it a year or two and it might catch up
<RoyK^> but yes, I will also switch to btrfs once it's there
<twb> Exactly
<twb> Which will unfortunately not be until 2012 (for LTS) :-(
<binBASH> KristianDK: I really don't know how to master that network problem ;)
<RoyK^> but then, I can't wait two years for a storage solution, and then opensolaris is the way to go
<KristianDK> binBASH, i think we need to talk to hetzner
<KristianDK> they are kind of blocking for allowing this
<binBASH> :p
<binBASH> KristianDK: Like they're blocking gigabit as well
<KristianDK> exactly, i asked them for gbit
<KristianDK> and you can actually get that
<binBASH> KristianDK: You can have it, just costs........
<KristianDK> yep
<binBASH> you need to have flexipack and another nic
<binBASH> and additionally a switch
<binBASH> and additionally 69 Eur for moving all your servers so they are beside each other.
<RoyK^> binBASH: what makes me wonder is why you (or your company) are hosting terabytes of data and can't afford a decent (and quite cheap) storage solution, like the osol-based one we have. It can be expanded quite easily with a SAS expander and won't cost a lot using WD Green drives or so
<binBASH> RoyK^: Because agencies don't pay that much ;)
<binBASH> RoyK^: And they pay monthly. Not a year in advance :p
<RoyK^> well, loan some money and it'll pay back quite quickly
<RoyK^> for EUR 10k, you get 30-35TB, which I guess will be sufficient for some time
<binBASH> RoyK^: Like I said we have already 20 TB.
<RoyK^> no, wait, more - wait...
<KristianDK> lol
<binBASH> and we bought it for 9K 2 years ago
<binBASH> though it's a single point of failure and not mirrored
<binBASH> hi leonel
<leonel> ea binBASH
<RoyK^> binBASH: just got this offer - supermicro box with 36x2TB disk and an ok motherboard, a bunch of memory, some cpus etc, meaning if you use three RAIDz2 groups of 12 drives each, it gives you 20x3=60TB -> price NOK 86k, around EUR 10k, and possibly cheaper outside Norway
<RoyK^> 34 2TB drives, that was, but still (forgot about the root SSDs)
<binBASH> 10K for one box I assume ;)
<RoyK^> yes, but it's still cheap
<binBASH> Here we pay 8500 Eur for a box with 24 x 2 TB
<RoyK^> with 3xraidz2, you can lose six drives in total
<RoyK^> weird - that's _more_ expensive :)
 * RoyK^ thought Norway was meant to be the expensive place
<binBASH> though if a box fails raid is useless ;)
<RoyK^> binBASH: then get two and use zfs send/receive to mirror the two
<RoyK^> and when storage fills up, get a SAS expander and an extra chassis and some drives
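RoyK^'s send/receive mirroring, spelled out as commands. Pool, dataset, snapshot and host names here are invented, and the sequence is printed rather than executed, since running it needs real pools and root:

```shell
# sync_steps: print the zfs send/receive sequence for keeping a second box in
# sync with the first. One full send to seed it, then cheap incremental sends
# of only the delta between snapshots.
sync_steps() {
    cat <<'EOF'
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backup-box zfs receive -F tank/data
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | ssh backup-box zfs receive tank/data
EOF
}
sync_steps
```

Run from cron, the incremental form gives a near-mirror without a shared-storage single point of failure.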
<twb> osol not supporting ext2 is a real pain in the arse.
<RoyK^> over 3-5 years, I would guess you would save LOTS of doing this yourself instead of paying others to do the same
<RoyK^> twb: why should it???
<twb> So that I can seed the osol box by sneakernet instead of our shitty 100baseT and ADSL lines
<binBASH> RoyK^: Well I would then lack cpu power for video processing
<RoyK^> binBASH: don't!
<RoyK^> binBASH: use NFS
<RoyK^> or iSCSI
<RoyK^> or CIFS
<binBASH> RoyK^: We're having NFS already ;)
<RoyK^> nfs performs well enough for that - for those storage needs, it would be silly to put all services in one place
<binBASH> huh?
<RoyK^> get a storage server with sufficient memory and cpu for the storage alone and get compute nodes to do the ugly stuff
<twb> RoyK^: what's his use case?  Just normal office documents and such?
<RoyK^> twb: images and video
<binBASH> I think getting 150 Nodes which offer 1200 cpu cores + 200 TB Storage is better ;)
<RoyK^> if planning for 3-6 months, sure, but if you are planning to be in business for a long time, buying hardware will save you a lot of money
<twb> Yeah, NFS over 1000baseT to a single NAS or SAN is probably the Right Thing.
<binBASH> RoyK^: You forget the fact you normally throw out servers every 2 years
<RoyK^> binBASH: not really - storage servers can last a LONG time
<RoyK^> especially with zfs - autogrow is nice
<RoyK^> take a zfs mirror, replace one part with a larger drive, resilver, replace the other, resilver, and zfs says 'oops - I'm bigger'
<RoyK^> same with raidz volumes
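The replace-and-resilver growth trick RoyK^ describes, as a command sequence. Pool and device names are invented, and the commands are printed for reading rather than run (they need a real pool and root):

```shell
# grow_steps: print the mirror-growing recipe. With the pool's autoexpand
# property on, capacity grows automatically once every device in a vdev has
# been replaced with a larger one and resilvered.
grow_steps() {
    cat <<'EOF'
zpool set autoexpand=on tank
zpool replace tank c0t1d0 c0t3d0   # swap in the first larger disk
zpool status tank                  # wait for the resilver to finish
zpool replace tank c0t2d0 c0t4d0   # then the second one
EOF
}
grow_steps
```

The same drive-by-drive replacement works on raidz vdevs, just with more resilver rounds.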
<binBASH> Raid rebuild will take ages
<RoyK^> not really - for 30TB a scrub takes a couple of days
<binBASH> It already takes 4 hours to rebuild the current raid ;p
<RoyK^> resilver about the same
<RoyK^> and replacing drives isn't what's done daily
<RoyK^> but hey, I've just been working with storage for 10+ years, do as you please
<binBASH> I think a distributed storage architecture is much better.
<binBASH> Companies like NetApp or Isilon doing it as well.
<RoyK^> how much do they charge you per month for 200TB?
<KristianDK> binBASH, have you, btw, checked for hetzner alternatives?
<binBASH> KristianDK: Yup
<binBASH> KristianDK: But only in Germany
<KristianDK> binBASH, and they have the same sucky setup? :P
<KristianDK> binBASH, are you german?
<binBASH> They are even worse.
<RoyK^> NetApp is doing quite well, yes, but they charge you EUR 100k for a few terabytes
<binBASH> KristianDK: I live in Switzerland :)
<KristianDK> binBASH, ok - cool :-) I'm from Denmark, so i speak a bit German, but sometimes i really don't get what they are trying to tell me at hetzner :P
<binBASH> RoyK^: Things like GlusterFS are working like this.
<binBASH> KristianDK: I moved from Germany to Switzerland in 2007
<RoyK^> does that support stuff like versioning or snapshotting?
<KristianDK> binBASH, well, the problem is i've been searching for alternatives to hetzner, but they seem remarkably cheap compared to everything else
<KristianDK> and im actually satisfied with everything but their network setup :P
<KristianDK> binBASH, however - they recently introduced the failover IP thing
<KristianDK> which redirects an IP to another server
<KristianDK> maybe we can work something out with this thing?
<RoyK^> binBASH: but how much for 100T?
<RoyK^> or 200
<uvirtbot> New bug: #560047 in dovecot (main) "new upstream version available" [Undecided,New] https://launchpad.net/bugs/560047
<binBASH> RoyK^: Like I said each node gives 2,7 TB
<binBASH> For mirroring you need 2 nodes.
<binBASH> A node costs 69 Eur / month
<binBASH> and provides 8 cpu cores which I can use for video rendering and image processing
<binBASH> because we're a swiss company we don't have to pay German VAT
<binBASH> so it's cheaper.
<binBASH> so it costs like 9500 Eur / Month.
<binBASH> KristianDK: For the failover ip you need flexipack, which is 15 Eur / Month
<binBASH> RoyK^: I would agree as pure storage it's too expensive.
<jondowd> good morning - I have a Dell Precision 650. I want to run SATA drives on it as boot devices. Can I install a 3rd party SATA PCI card and boot from a drive connected to it? thanks
<ScottK> You should be able to.
<ScottK> Absolute worst case scenario you unplug the installed drives from the built in controller, install, and then reconnect them.
<jondowd> ScottK: how do I get the BIOS to see the PCI card?
<ScottK> The one time I've had to worry about it, it just did.
<jondowd> (never booted from a add-in card) Cool - I'll give it a try - Thanks !
<binBASH> RoyK^: http://gluster.com/community/documentation/index.php/Main_Page#Gluster_Filesystem
<koolhead17> binBASH: :P
<binBASH> koolhead17: ?
<koolhead17> gluster
<binBASH> koolhead17: It works here without problems so far.
<koolhead17> binBASH: it rocks
<binBASH> koolhead17: are you using it?
<koolhead17> binBASH: my friend owns the company behind this project :D
<binBASH> ohh :p
<koolhead17> binBASH: he is the lead developer too :D
<binBASH> very cool
<binBASH> glusterfs is very good design I think.
<binBASH> Too bad I can use it with 100 Mbit only koolhead17 :p
<koolhead17> binBASH: heh. poke them
<koolhead17> i think #gluster
<binBASH> koolhead17: It's not a gluster issue, servers only have 100 mbit ;)
<twb> ScottK: absolute worst case is a wincontroller :-P
<ScottK> twb: True.
<twb> Or my boss's favourite trick -- buy a server with hotswap bays, but forget to buy the RAID5 chip for the hardware RAID controller
<binBASH> lol
<twb> In which case I could create up to two RAID0 arrays of one drive each, so I couldn't even make an md RAID5
<binBASH> twb: Sounds more like epic fail than a trick ;)
<twb> binBASH: he has done it TWICE
<binBASH> twb: So he didn't learn?
<twb> And we're still running Pentium IIIs, so you can imagine how rarely we buy new gear
<binBASH> Pentium 3 omg
<binBASH> twb: How many people are in your company?
<twb> Probably about ten
<binBASH> ok, more than us then ;)
<twb> It's hard to tell because some spend months pimped out, and some ex-employees continue to lurk on the lists
<twb> We replaced the LaserJet 4 last month, and the sysadmin deploying it went "oh, cool, the NEW unit has only been EOLd by HP since 2008"
<binBASH> lol
<twb> Having said that, I loved that little LJ4
<binBASH> sounds like a lack of money
<twb> There's a policy of handing most of the profits to the engineers instead of the company
<twb> But it's also a mindset thing.
<twb> We have a pair of Q9550 with 2TB of storage, but one got stolen to run rpppoe.
<binBASH> twb: We had such a policy as well twb
<binBASH> All money to personnel
<binBASH> ;)
<binBASH> Get a project manager for 100K Eur / Year
<binBASH> kick him out 9 months later because he sucked
<binBASH> and have a second boss, which was also not very useful
 * cloakable eyes gluster storage platform >.>
<pjp3rd> hi id like to set up monthly bandwidth quotas for my home network something like http://www.digirain.com/en/trafficquota-overview.html but ive been googling like crazy and i cant find anything like that for ubuntu, any suggestions?
<brianherman> pjp3rd: You can use iptables to set a quota
<brianherman> http://linuxgazette.net/108/odonovan.html
<brianherman> pjp3rd: http://linuxgazette.net/108/odonovan.html
<brianherman> pjp3rd: https://help.ubuntu.com/community/UFW
<pjp3rd> brianherman, thanks that looks like a good place to start
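One concrete shape for brianherman's suggestion, assuming iptables' `quota` match (the host address and the 10 GiB figure are placeholders). The rules are printed for review rather than applied, since applying them needs root; quota counters reset when the rules are reloaded, so re-adding them from a monthly cron job approximates a per-month cap:

```shell
# quota_rules HOST GIB: print iptables rules that accept forwarded traffic
# from HOST until GIB gibibytes have passed the quota match, then drop it.
quota_rules() {
    bytes=$(( $2 * 1024 * 1024 * 1024 ))
    echo "iptables -A FORWARD -s $1 -m quota --quota $bytes -j ACCEPT"
    echo "iptables -A FORWARD -s $1 -j DROP"
}

# Review the output, then apply:  quota_rules 192.168.1.10 10 | sudo sh
quota_rules 192.168.1.10 10
```

One pair of rules per user gives each housemate an independent allowance; the ACCEPT rule's byte counter doubles as a usage readout via `iptables -L -v`.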
<binBASH> pjp3rd: why you want quote in home network?
<brianherman> pjp3rd: Use the ubuntu one it seems the simplest
<brianherman> Quota not quote
<binBASH> quota yeah ;)
<pjp3rd> brianherman, but id need to set up a seperate quota for each user, a way for user to check his quota and to automate it everymonth. so it would be nice if someone has already done that work..
<pjp3rd> binBASH, cus the ISP gives me a quota and im sharing it with 16 people, seems like the most sensible way to avoid fights when we are running out after 2 weeks each month
<binBASH> pjp3rd: If you share the line with 16 people why not limit bandwidth?
<binBASH> pjp3rd: http://manpages.ubuntu.com/manpages/karmic/man8/tc.8.html
<pjp3rd> binBASH, im not sure what you mean?
<binBASH> pjp3rd: http://manpages.ubuntu.com/manpages/hardy/man8/tc-cbq-details.8.html
<pjp3rd> binBASH, im not looking to shape the bandwidth for speed im looking to make sure we dont exceed the monthly limit
<binBASH> ok
<sherr> pjp3rd: there's something called "bandwidthd" that might help in some way.
<brianherman> pjp3rd:http://bandwidthd.sourceforge.net/
<pjp3rd> sherr, bandwidthd solves half the problem if it can monitor the bandwidth. id like to enforce the limit as well
<binBASH> pjp3rd: At least then all would know who steals the traffic :p
<pjp3rd> binBASH, yip it would tell me but too late rather than fighting with people every month id just like to divide it equally and no more worries
<binBASH> pjp3rd: if you limit bandwidth for everyone it will be equal
<binBASH> pjp3rd: Think limiting bandwidth is better than having no internet for half of the month ;)
<pjp3rd> k, based on brianherman's tip ive found www.linuxquestions.org/questions/linux-networking-3/iptables-to-stop-bandwidth-completely-592827 which is what im basically looking for
<binBASH> but your decision ;)
<pjp3rd> problem is it doesnt seem like such a polished solution
<pjp3rd> binBASH, im not sure what you mean
<binBASH> pjp3rd: You said you have a quota, what if you exceed it?
<pjp3rd> my isp enforces a quota when we exceed it the connection is throttled to basically unusable speeds
<binBASH> pjp3rd: So why don't you distribute a max. bandwidth equally?
<pjp3rd> binBASH, what do you mean?
<binBASH> I mean, I wouldn't accept the fact if I'm amongst the 16 people, and one causing so much traffic, so I would have no internet then for half a month
<binBASH> pjp3rd: With the tc links I posted, you can assign each user equal bandwidth.
<binBASH> so it's not possible traffic limit will be exceeded
<pjp3rd> binBASH, correct me if im wrong but from what i can understand from the link you posted, tc/cbq can shape my connection, meaning how much is being used by any given user/protocol at a given time - thats not going to help me stop total monthly usage from exceeding the isp quota is it?
<binBASH> pjp3rd: With that you can limit the bandwidth for each user. So every user has equal line.
<binBASH> and you can setup a max. bandwidth rule as well, so with that you can't exceed your providers traffic limit.
<pjp3rd> binBASH, oh i didnt see details about that? how can i set up a monthly limit?
<binBASH> pjp3rd: You don't set a traffic limit. You set a bandwidth limit.
<binBASH> The line speed will be slower though
<pjp3rd> binBASH, can you give me more details?
<binBASH> pjp3rd: http://www.oamk.fi/~jukkao/lartc.pdf
<binBASH> read this ;)
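The per-user split binBASH is pointing at (LARTC chapter 9) looks roughly like this with htb. The device name and IPs are placeholders, and the commands are printed rather than executed, since running tc needs root and a real interface:

```shell
# shape_cmds DEV RATE_KBIT IP...: print tc commands giving each listed IP its
# own htb class capped at RATE_KBIT, so no single user can starve the others.
shape_cmds() {
    dev=$1; rate=$2; shift 2
    echo "tc qdisc add dev $dev root handle 1: htb"
    i=1
    for ip in "$@"; do
        echo "tc class add dev $dev parent 1: classid 1:$i htb rate ${rate}kbit"
        echo "tc filter add dev $dev parent 1: protocol ip prio 1 u32 match ip dst $ip flowid 1:$i"
        i=$((i + 1))
    done
}

# An 8 Mbit/s line split across 16 users is 512 kbit each; two users shown:
shape_cmds eth0 $((8192 / 16)) 192.168.1.10 192.168.1.11
```

This enforces a rate, not a monthly byte total - which is binBASH's point: slow the line enough and the quota can't be burned through early.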
<RoyK^> binBASH: do you really need like 5k cores for this? I thought you were doing storage
<RoyK^> or 800 cores, that is
<binBASH> RoyK^: Storage and Processing ;)
<binBASH> 1200 cpus actually
<RoyK^> yeah, but you were talking about hosting images
<RoyK^> yeah
<RoyK^> 1200 cores
<RoyK^> with 1200 cores you can do some rather fancy stuff, but then, what is it you're going to do with them?
<binBASH> RoyK^: Hosting images, resize them, watermark them, recalculate videos, etc......
<RoyK^> you might need 8 cores for that
<RoyK^> not 1200
<binBASH> RoyK^: If you don't wanna wait ages, you need more  ;)
<RoyK^> a resize normally can't be shared amongst cores
<RoyK^> and I somehow doubt you have 1200 concurrent resizes
<pjp3rd> binBASH, i just skimmed through the whole book can you point me to which chapter should help me?
<binBASH> pjp3rd: Chapter 9
<RoyK^> binBASH: what are you using for this - imagemagick? can you upload some files for me to test?
<binBASH> RoyK^: ImageMagick for the images, yup
<pjp3rd> binBASH, im sorry i must be misunderstanding you but how is shaping traffic going to limit total usage per month per user?
<binBASH> RoyK^: One jpg is like 128 MB in worst case;)
<RoyK^> binBASH: and how often are these uploaded/resized?
<binBASH> RoyK^: Very often ;)
<RoyK^> seems to me 1200 cores for this job is like shooting a sparrow with heavy artillery
<binBASH> RoyK^: Getty Images uses it for their editorial press content.
<RoyK^> binBASH: define 'very often'
<binBASH> RoyK^: The problem is, the images will be transfered to news agencies.
<RoyK^> well, it's only resized on upload, so how many images do need to convert per second?
<binBASH> RoyK^: The faster the better.
<RoyK^> well, of course, but wasting a ton of money is useless
<binBASH> like I said it's editorial press content. Means, someone makes a photo of a football match.
<binBASH> it should be transfered immediately to news agencies when uploaded.
<binBASH> time counts......
<RoyK^> if you have 1200 concurrent file uploads in general, 1200 cores might be worth it, but 600 will probably do well, even 300
<RoyK^> then again, I somehow doubt you have 1200 _concurrent_ jobs in such a system
<RoyK^> most likely 10+
<binBASH> RoyK^: Yup, though we have some more customers, ;)
<RoyK^> have you monitored your current system to see its load?
<binBASH> no
<RoyK^> it should tell quite easily how much is needed
<RoyK^> if load average at peak times is 4, give it 4 cores, etc
<RoyK^> we have a 40 core compute farm at work for doing models, and that's eating some data. 1200 cores must be overkill for your use
<binBASH> RoyK^: There is another problem as well. The company develops some visual search engine atm. And noone knows what it will consume ;)
<RoyK^> binBASH: then get a separate box for that as well
<RoyK^> binBASH: it will probably need a truckload of RAM and fast disk access to its index, but not a lot of cpu
<binBASH> RoyK^: And how to do it without money? :P
<RoyK^> binBASH: hey, kid, if you try to make a business work, you need to invest. I'm just trying to give you simple advice, but it seems to me you know it all better than the rest of the world. keep on, kid, and you might be wanting to find a new job in a few months
<binBASH> RoyK^: Boss refuses to take new investors
<RoyK^> tell your boss you can't do this without EUR 20k
<RoyK^> that's not really a lot of money
<RoyK^> tell him it'll cost several times as much even during the first year
<RoyK^> or perhaps the first year will make it break even
<RoyK^> binBASH: also, please understand that making large systems work well usually means dividing services amongst servers, some for storage, some for computing
<RoyK^> get a supermicro system for the first 60TB or so and add more disks later with SAS expanders - use it on opensolaris - share it with NFS - it can grow easily
<RoyK^> then get small 1U boxes for doing the computing - start off with a quad intel or opteron with a bunch of cores, perhaps less, and you might see it's not really very heavily loaded
<binBASH> RoyK^: There is another problem as well. :-) We need entry points through geoip
<RoyK^> if it is, add more
<binBASH> Means, getty has offices in asia, russia etc.
<binBASH> so we don't want to send them to german servers
<binBASH> but also to servers in usa
<RoyK^> binBASH: fuck this - you really don't listen - you've decided to use this or that already - I'm done trying to advise now
<binBASH> and we really don't want to fly to usa to build something up there in a datacenter
<binBASH> good ;)
<RoyK^> seems to me what you want is to brag about a truckload of terabytes, and you're doing it the wrong way, wasting money and making things worse
<RoyK^> keep on, kid, but don't blame the ones of us that tried to help
<binBASH> RoyK^: Really didn't want advice
<RoyK^> I can see that
<RoyK^> binBASH: out of interest - what is your current system's load average?
<binBASH> RoyK^: There should be a reason why companies like google have shared storage ;)
<RoyK^> google uses its own storage
<RoyK^> for good reasons
<RoyK^> I'm still curious about this load average of yours
<RoyK^> also - how do you plan to parallelize that across 300 machines?
 * RoyK^ guesses binBASH was in a hurry and perhaps a little drunk when he made those plans
<binBASH> RoyK^: That is what gearman is for
<RoyK^> binBASH: what is the load average on your current box?
<bogeyd6> binBASH, for very large deployments that require lots of storage that is similar you might consider a de-dupe filesystem such as lessfs
<bogeyd6> sdfs also comes to mind
<RoyK^> erm
<RoyK^> does lessfs do dedup?
<bogeyd6> RoyK, you been around long enough to know what google is for
<RoyK^> bogeyd6: I've tried hinting on using zfs, but it seems binBASH has already decided and is just here to brag
<bogeyd6> good im glad he is bragging about using Ubuntu Server in large environments and i hope he proudly announces it to his customers
<RoyK^> bogeyd6: you've been around for long enough to know that to answer a yes or no might perhaps be a little more sophisticated and nice than just barking fgfi
<bogeyd6> !google | RoyK
<ubottu> RoyK: While Google is useful for helpers, many newer users don't have the google-fu yet. Please don't tell people to "google it" when they ask a question.
<bogeyd6> also, condescension is highly frowned upon, please refrain
<RoyK^> bogeyd6: I know, SIR, but you spent more time on telling me to google it than a yes/no answer would take
<bogeyd6> * RoyK^ guesses binBASH was in a hurry and perhaps a little drunk when he made those plans << belong in another linux support channel
<RoyK^> bogeyd6: not really
<bogeyd6> well i said my piece, i hope you consider signing the ubuntu code of conduct
<sherr> RoyK^: I thought your discussion with binBASH was quite interesting and useful until you ruined things by being rude and a little obnoxious.
<sherr> Let's all be civil.
<bogeyd6> !conduct | RoyK
<ubottu> RoyK: The Ubuntu Code of Conduct is a community etiquette document to which we ask all Ubuntu users to adhere, and can be found at http://www.ubuntu.com/community/conduct/ .  For information on how to electronically sign the CoC, see https://help.ubuntu.com/community/SigningCodeofConduct .
<RoyK^> well, people, listen
<bogeyd6> sherr, agreed and royk should also be congratulated on his level of participation in previous instances
<bogeyd6> would make a very valuable member of the server community
<RoyK^> mr binBASH first tried to ask about how to do his storage, and talked about 1200 cores doing image resizing for uploads, at which I asked why, and why not central storage, to which he merely barked that he didn't need my input
<RoyK^> this is something that can annoy the one (me) trying to help one (him) out
<bogeyd6> RoyK, zfs is in fact available in opensolaris
<RoyK^> bogeyd6: yes, and did you know ext3 is available in linux?
<bogeyd6> which is pretty awesome
<RoyK^> scroll up :)
<RoyK^> I was trying to tell him that
<bogeyd6> wasnt available until 27a
<RoyK^> but it seems like he wants a truckload of cpu nodes with 2TB each for some reason
<RoyK^> 27a? what?
<bogeyd6> i thought if you could recommend ZFS you would know a bit of its history and usage
<binBASH> RoyK^: Looks like you're totally mistaken, I never asked for storage.
<bogeyd6> binBASH, did you have something you did need help with?
<binBASH> bogeyd6: originally I asked how I can start vms in ubuntu enterprise cloud with the -vnc parameter.
<bogeyd6> i personally think that zfs needs to much horsepower and that makes it a disadvantage
<RoyK^> bogeyd6: I just started using osol at 2009.06 - the old solaris platforms were just something I played with
<bogeyd6> binBASH, that is a good question, i know it can be done on a VPS but in a desktop in a cloud?
<RoyK^> bogeyd6: I know it needs a lot, but for dedicated storage, it's nice
<binBASH> RoyK^: If you want details what is the problem with our current setup you can come in query :)
<bogeyd6> binBASH, https://wiki.edubuntu.org/UEC/Images/Testing
<bogeyd6> my google-fu is 10th degree master
<RoyK^> binBASH: I tried asking about the current load, since you insist on needing 1200 cores
<bogeyd6> i got a load that would blow your mind
<RoyK^> how nice
<bogeyd6> 13:57:15 up 28 days,  5:48,  1 user,  load average: 0.84, 1.38, 2.39
<RoyK^> well, seems like the system uses a core or two quite well
<bogeyd6> hah!
<bogeyd6> single processor
 * RoyK^ had a server peaking at load avg 32 the other day
<RoyK^> something went wrong in freeradius
<_ruben> only 32? ... i've reached 100+ on mailservers that were "spammed" :)
<RoyK^> yeah, but this box was running a single radiusd that shouldn't really have been busy
<RoyK^> guess it started a bunch of threads that went mad
<binBASH> RoyK^: You know not only cpu usage causes load
<RoyK^> binBASH: yeah, but there weren't any funny processes in D or Z state or similar
<RoyK^> just a truckload of threads that went spinning
<binBASH> back to your question about our sys. Like I told you we're using imagick. Libjpeg doesn't use smp so we can calculate 8 images at once with one server.
<binBASH> one image takes around 3-6 seconds
<binBASH> the images with bigger filesizes take much longer
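Since each libjpeg/ImageMagick convert is single-threaded, per-server throughput comes from running several conversions concurrently. A sketch of the fan-out with GNU xargs -P, where the echo stands in for a real call like `convert "$f" -resize 2048x2048 "out/$f"` (file names here are illustrative):

```shell
# fan four jobs across up to 8 parallel workers; replace the echo with
# the actual ImageMagick convert invocation in production
printf '%s\n' a.jpg b.jpg c.jpg d.jpg \
  | xargs -P 8 -I{} sh -c 'echo "resized {}"' \
  | sort
```

With 8 workers per 8-core box, this matches the "8 images at once per server" figure above without any extra middleware.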
<RoyK^> have you tried graphicsmagick?
<RoyK^> it's said to be faster by far than imagemagick
<binBASH> yup, it lacks some features we need.
<RoyK^> ok
<RoyK^> but still - the time for one image to be resized is ok, but how about the system load over time?
<RoyK^> that's what you should worry about when designing something new
<binBASH> if there is high processing the load is like 25
<RoyK^> can you distribute this load somehow?
<binBASH> it's already distributed :)
<RoyK^> I mean, I guess there's a common web front
<binBASH> we have dedicated servers for web, for image processing, for sphinxsearch and for exports to partners/ftp ...
<binBASH> also database
<RoyK^> ok
<RoyK^> with glusterfs, what happens if you remove a node? are nodes mirrored as well as the drives on those nodes?
<binBASH> RoyK^: every node is backed up by another one
<RoyK^> ok, so mirroring, somehow?
<binBASH> yup
<RoyK^> I guess it still lacks the stuff zfs/btrfs has, though :P
<binBASH> Well, it's scalable and a completely different technology.
<binBASH> I dunno if you know NetApp or Isilon.
<RoyK^> I do
<RoyK^> Isilon, no, but netapp, yes
<binBASH> I personally would prefer Isilon over NetApp from what I've heard
<binBASH> bogeyd6: What does this link have to do with eucalyptus? It's just for testing kvm setup
 * cloakable eyes eucalyptus
<binBASH> kvm works perfectly for me already ;)
<cloakable> Anyone here use eucalyptus?
<binBASH> cloakable: Yes ;)
<binBASH> cloakable: Ubuntu Enterprise Cloud is built on it.
<cloakable> binBASH: If I have two four-core nodes in the cloud, can I give, say, six to an instance?
<binBASH> cloakable: No
<RoyK^> cloakable: you can't run a vm across multiple machines
<cloakable> Damn D:
<RoyK^> get an amd 12-core :D
<cloakable> Which would suck up how many hundred watts? :P
<binBASH> cloakable: http://www.linuxvirtualserver.org/
<RoyK^> cloakable: not really a lot
<cloakable> RoyK^: More than 45W?
<cloakable> :P
<binBASH> cloakable: I think lvs can do it.
<cloakable> binBASH: awesome, will look at
<RoyK^> erm - iirc lvs is a network thing, not a processing thing
<binBASH> RoyK^: lvs will let multiple nodes appear as one supernode afaik
<cloakable> What would be really awesome would be a network-aware hypervisor >.>
<binBASH> cloakable: Too slow
<cloakable> Possibly
<cloakable> has it been tried? ;)
<binBASH> don't think so.
<RoyK^> binBASH: yes, on IP, but not sharing computing tasks
<binBASH> RoyK^: yeah, could be.
<RoyK^> lvs is nice for web servers and so on, but not for VMs
<cloakable> Would like to deploy an LTSP image onto a group of say 3-4 4-core machines :)
<cloakable> Or just use it as a desktop :D
<binBASH> impossible afaik ;)
<cloakable> Which is a shame, because it would be awesome :)
<binBASH> hehe
<cloakable> An area Atom would shine in ;)
<binBASH> RoyK^: The worst thing about that many nodes. Administration overhead. So puppet to the rescue ;)
<cloakable> heh
<cloakable> or cluster-ssh ;)
<binBASH> cloakable: I have it already.
<cloakable> :)
<Guest79698> Hi, I have 2 ubuntu desktop machines and 1 ubuntu server machine. Using tcpdump, both desktop machines receive a multicast audio stream on my network, while the server machine does not
<Guest79698> is there something on the server edition which might block multicast traffic?
<histo> the only thing that is different should be the kernel as far as I can tell
<animeloe[net]> Apr 10 15:29:24 server deliver(root): msgid=<20100407105335.E878661D8@$DOMAINt>: save failed to INBOX: Internal error occurred. Refer to server log for more information. [2010-04-10 15:29:24]
<animeloe[net]> Apr 10 15:29:24 server deliver(root): stat(/root/Maildir/tmp) failed: Permission denied (euid=65534(nobody) egid=65534(nogroup) missing +x perm: /root)
<animeloe[net]> I still haven't figured out how to fix that issue
<animeloe[net]> (not that I've been looking very hard)
<jared_1> Had a quick question for you guys.  I'd like to build a simple home server since it's something I've been without for way too long (I've been so lucky to never lose data ... yet).  Anyways I'll probably be doing basic stuff... File server, FTP, simple webserver....But I would like a graphical environment (kde, gnome, etc) for VNC.  What hardware would you recommend to run a raid 1 or raid 0+1 (4 drives).  Trying to keep it affordable.
<jared_1> Looking for mobo / raid controller / processor recommended.
<animeloe[net]> for your data I'd definitely say a good raid 5
<jared_1> I'm fine with a raid 5 setup too
<jared_1> but personally never done really any raid setups
 * animeloe[net] only uses hardware raid, so can't help with software raid
<jared_1> replaced drives in raid arrays and whatnot, but never purchased hardware
<animeloe[net]> got lots of money to spare?
<animeloe[net]> get a nice areca or equivalent
<jared_1> Hah I do, but cheaper the better obviously :)
<animeloe[net]> well
<jared_1> more a hobby and for work experience than a necessity
<animeloe[net]> you want raid, you'll be spending at least a thousand just on a card
<histo> jared_1: you may want to check this out https://wiki.ubuntu.com/ReliableRaid
<histo> jared_1: explains some of the current issues
<blue-frog> jared_1, if you don't want to lose data, make backups. raid has nothing to do with keeping data safe
<RoyK^> Guest79698: servers should receive multicast just as clients do - the kernel isn't that different
<RoyK^> Guest79698: is the server running on hardware or is it a vm?
<Guest79698> on hardware
<Guest79698> i vaguely remember an incident with a port mapping to a server which didnt work because there was some form of hardening on the server
<RoyK^> Guest79698: ufw status
<RoyK^> if ufw is enabled, it might block multicast
<jdstrand> there are rules for multicast in /etc/ufw/before.rules
<jdstrand> they are allowed in the default install
<Guest79698> yeah, it wasn't that.. this is pretty weird, it seems the RTP control traffic(length 220) comes through but the audio traffic itself(length 1292) does not
<RoyK^> erm - RTCP comes through but not RTP?
<Guest79698> seems like it
<Guest79698> it might be a router problem, i'm not sure.. the rtp traffic pops up on both the other machines regardless of having pulseaudio rtp receive on
<RoyK^> RTCP is usually embedded in RTP, though - perhaps RTSP?
<Guest79698> i'll fix up a paste for you with some info, hold on
<Guest79698> http://pastebin.com/kzVynkCB
<Guest79698> note the different port in the traffic on the server machine, which got me thinking it might be RTCP traffic which because of a lot of actual traffic doesn't necessarily show up on the others
<Guest79698> both ports are owned by the pulseaudio process
<Guest62434> i'm Guest79698 btw :)
<Guest62434> with the multicast issue
<RoyK^> Guest62434: why not get a proper nick?
<Guest62434> good question
<RoyK^> vegar, without the d and all that
<guntbert> !no | RoyK^
<ubottu> RoyK^: If you want to discuss in Norwegian, please go to #ubuntu-no. Thanks!
<RoyK^> yeah yeah :)
<vegar_> wrong response :p
<RoyK^> I don't discuss stuff in Norwegian in here, but a short comment or two should be accepted
<guntbert> RoyK^: it was no reprimand - I wanted to be helpful
<vegar_> we have feelings too you know guntbert :)
<RoyK^> I know the rules, thanks :)
<guntbert> vegar_: why wouldn't you? :)
 * RoyK^ hands guntbert a bunch of dried fish to snack on
 * guntbert nibbles
<RoyK^> (in Icelandic) it's good to eat dried fish in the evening
<vegar_> now it's starting to get out of hand
<RoyK^> :D
<RoyK^> I'll quit it
<RoyK^> it'd be nice if someone could fast-forward the btrfs progress so that I could use linux for storage and not having to use friggin' opensolaris
<MTecknology> Any of you know how a partition UUID is calculated?
<MTecknology> RoyK^: put your own effort into it :D
<guntbert> MTecknology: only how you can find it: blkid :)
<RoyK^> MTecknology: heh - I'm not a coder - it's easier to just use zfs
<MTecknology> guntbert: :P - I use ls -l /dev/disk/by-uuid/
<guntbert> MTecknology: right - but if I remember correctly that is not always correctly populated
<MTecknology> guntbert: no, not after you change things before a reboot - I never knew blkid before today :P
<guntbert> MTecknology: ok
<uvirtbot> New bug: #560299 in samba (main) "package samba-common-bin 2:3.4.0-3ubuntu5.6 failed to install/upgrade: Unterprozess installiertes post-installation-Skript gab den Fehlerwert 2 zurück (dup-of: 514963)" [Undecided,Confirmed] https://launchpad.net/bugs/560299
<MTecknology> guntbert: how are ya?
<GhostFreeman_> How do i reset the password on a server using an install CD
<animeloe[net]> you can use a liveCD
<animeloe[net]> with the server CD use the rescue mode
<vegar_> Anyone heard Radiohead - Reckoner and thought.. is this RHCP?
#ubuntu-server 2010-04-11
<uvirtbot> New bug: #560377 in qemu-kvm (main) "[lucid] Stuttering mouse" [Undecided,New] https://launchpad.net/bugs/560377
<GhostFreeman> Has anyone here managed to get byobu to run on Hardy?
<Nafallo> o/
<GhostFreeman> do I just need to pull down the deb and unpack
<Nafallo> I use the PPA.
<GhostFreeman> what's the PPA? Apt?
<Nafallo> !ppa
<ubottu> With Launchpad's Personal Package Archives (PPA), you can build and publish binary Ubuntu packages for multiple architectures simply by uploading an Ubuntu source package to Launchpad. See https://help.launchpad.net/PPAQuickStart.
<GhostFreeman> Thanks
<GhostFreeman> To use ppa on hardy, do I need a launchpad account?
<MTecknology> So it's simple and easy to upgrade from one LTS to the other?
<KurtKraut> MTecknology, yes: just a single command and about 1gb of download.
<MTecknology> KurtKraut: that's awesome - i didn't know it was possible :)
<KurtKraut> MTecknology, there might be some questions asked during the process if you have installed a package that has been through some deep changes.
<JanC> well, the amount of download depends on what you have installed   :P
<KurtKraut> MTecknology, bookmark this page: https://help.ubuntu.com/9.10/serverguide/C/installing-upgrading.html
<limpc> hi
<limpc> im having problems with RAID 10 and ubuntu 9.10
<limpc> its not detecting my drives at installation
<limpc> they're already configured as raid 10 via motherboard controller
<pmatulis> limpc: controller not supported?  what is it?
<limpc> no, there's no actual raid controller, it's more of a softRAID than a real HW raid controller. however it's built into the motherboard, and is handled by an AMD SB850 chipset
<limpc> it has a raid config utility built in, i set up 4 2TB drives as RAID 10, it shows a LD of 3999.99 afterwards
<limpc> but none of the drives show up in ubuntu's partition manager during installation
<seurgey> Hey, im trying to set up a VPN server for my iphone using this guide http://en.dogeno.us/2009/05/setup-a-vpn-server-in-ubuntu-810-for-iphone/ and my iphone wont seem to connect
<seurgey> ........
<seurgey> hello/
<the_hoser> Hi there.  I'm trying to setup a quick server with apache/fcgid and I'm getting a 403 error when attempting to run my fastcgi program.  no errors are landing in the error logs.  Any idea what I'm missing?
<the_hoser> figured it out
<jetole> hey guys. Does anyone know how to setup dhcpd so that it ignores certain mac addresses. i.e. it doesn't offer them any dhcp service what so ever?
<lifeless> jetole: I would try creating an empty pool and putting those macs in the pool; or you could use ipfilter
<lifeless> sorry, iptables
<lifeless> theres also a ethernet layer table/filter
<jetole> I think you mean arptables on that comment but this isn't really something in my case that I think I want handled by the firewall but I will look into the empty pool concept
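For jetole's goal, ISC dhcpd can also be told per host to stay silent; a hedged sketch using a host declaration with the booting permit rather than lifeless's empty-pool approach (the MAC address and file path are illustrative, and dhcpd.conf syntax should be checked against your version's man page):

```
# /etc/dhcp3/dhcpd.conf -- MAC address is illustrative
host blocked-client {
    hardware ethernet 00:11:22:33:44:55;
    ignore booting;    # dhcpd never responds to this client
}
```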
<MTecknology> gah.. I'm trying to make one server send email to another server - both on the same lan- they don't want to listen...
<MTecknology> I can do telnet lists.kalliki.com 25 - so the hosts file is making it resolve correctly... I'm starting to think exim is at fault but the logs may or may not agree - I'm not sure...
<MTecknology> ok - exim logs say that the request isn't even making it to exim..
<uvirtbot> New bug: #560603 in php5 (main) "crashes with SIGSEGV" [Undecided,New] https://launchpad.net/bugs/560603
<mrcoodles> hello everyone im having a problem with a NIC under server 9.10 ; lspci outputs "Ethernet controller: Device 00ec:8139 (rev 10)" . any idea what to do ?
<persia__> Hello.  I just upgraded my server to lucid, and ran into an issue with LVM devices (including root) not being automounted anymore (due to missing dm_mod).  Does anyone have a pointer to nice docs that explain how I should modify my system to set up initramfs properly?
<padhu> I want to configure mail server over intranet in ubuntu. any guidelines to configure it?
<ScottK> padhu: There is a lot of mail server setup documentation in the Ubuntu Server Guide.
<RoyK^> padhu: apt-get install postfix?
<padhu> RoyK^: Is it enough?
<padhu> is it required to edit the .conf file before starting the service?
<guntbert> padhu: make sure it works only on the intranet and doesn't send mail to the outside
<RoyK^> padhu: obviously that depends on what you want to do
<guntbert> padhu: especially not mail *from* outside *to* outside
<RoyK^> guntbert: i don't think you can configure postfix to route only to certain subnets
<RoyK^> guntbert: heh - that's relaying
<RoyK^> default postfix config doesn't allow for much relaying
<padhu> My requirement is Local network only. not for outside. Users count is nearly 100
<RoyK^> iirc default is to relay from my_networks
<ScottK> padhu: You want more than an MTA.  I'd look at the server guide.
<RoyK^> padhu: postfix is an mta, doing transport, you might want to have imap access as well?
<guntbert> RoyK^: yes - my concern was relaying
<RoyK^> that's not a problem unless you do something weird in main.cf
<padhu> Rpyk^: any setup guide or Howto?
<padhu> imap also wanted
<ScottK> padhu: It's in the server guide.
<ScottK> See help.ubuntu.com
<guntbert> padhu: https://help.ubuntu.com/8.04/serverguide/C/email-services.html
<padhu> ScottK: Link please
<guntbert> padhu: you can replace the version number ... :)
<ScottK> padhu: guntbert just gave you  the link for 8.04.  Make sure you get the version for what you are running.
<padhu> Thank you
<padhu> Any version, but only ubuntu
<padhu> most of people point me other flavours. But i want ubuntu
<RoyK^> padhu: if you want a full-grown email system, take a look at zimbra
<RoyK^> easy to setup and manage, but it works best on a dedicated system (or VM)
<padhu> oh, is it possible to setup for local network?
<_Dok_> I'm thinking about ubuntu server for my new server. I read that 9.10 is maintained until 2011. Is it possible to upgrade the server remotely in 2011?
<padhu> RoyK^: any guide or link?
<ScottK> _Dok_: Yes.
<guntbert> _Dok_: if you wait a bit and install 10.04 - that is LTS too - so you get a very long support
<_Dok_> 10.4?
<ScottK> It will be released on April 29.
<_Dok_> when will 10.04 be released?
<RoyK^> padhu: zimbra.com
<RoyK^> padhu: otherwise google.com will help you quite efficiently
<padhu> Thank you :-)
<padhu> google is good partner for search ;-)
<ScottK> RoyK^: Generally we try to help use Ubuntu Server here.  I'm not sure why you're directing people elsewhere when it could serve their needs.
<RoyK^> ScottK: I use zimbra on ubuntu 8.04
<ScottK> RoyK^: Yes, but it's not part of Ubuntu.
<RoyK^> well, still, it rocks
<ScottK> It also does a lot of things that he didn't express an interest in.
<RoyK^> he mostly expressed an interest in a mail system, and zimbra is easy to setup, so I mentioned it - is that so bad?
<ScottK> It seems rather odd to jump to non-FOSS solutions when not needed on a FOSS distro support/development channel.
<RoyK^> zimbra is gpl
<ScottK> Parts.
<RoyK^> the OSS zimbra is pure gpl
<RoyK^> the closed zimbra contains more parts not under gpl
<ScottK> In any case, Ubuntu Server pretty easily meets the needs expressed.
<RoyK^> it does, but it seemed to me he didn't know much about linux in the first place, so setting up one of the email servers in ubuntu might be a bit of a hassle
<_Dok_> will it possible to update from 9.10 to 10.04 LTS?
<RoyK^> do-release-upgrade
<RoyK^> or do-release-upgrade -d to upgrade to the beta
<_Dok_> ic
<jMyles> Looking for opinions:  What's a good way to watch the activities of a server?  cat /var/log/*.log is too intense for me, especially the firewall messages.  What are some of your favorite setups?
<Zider> logsentry
<jMyles> Zider: never heard of it - it is a seperate package?
<Zider> no idea if there's a package for it
<RoyK^> no logsentry package afaics
<RoyK^> logwatch and fwlogwatch are there, though
<RoyK^> jMyles: logwatch looks ok
<darksider> would it be helpful to ubuntu if i hosted a mirror of all the repositories? i live in scotland, so it would maybe be nice to have a SCOTLAND mirror for those north of the UK border??
<RoyK^> darksider: I don't see why not, but I guess a 100Mbps link would be a minimum
<darksider> RoyK, aha..i think i could maybe get that...i will obviously be testing it out to see how well it handles MY stuff first ^_^
<RoyK^> aren't there quite a few mirrors in the uk already?
<RoyK^> and - does Scotland have a separate network, or is all routed through southern UK?
<RoyK^> which reminds me I haven't been in Scotland for years
 * RoyK^ misses good Haggis with a Guinness or two
<blacksunseven> looking for some help with a network install, any takers?
<lenios> blacksunseven, any problem?
<blacksunseven> yeah, i've followed https://help.ubuntu.com/community/Installation/LocalNet to the tee, making the appropriate changes
<blacksunseven> but when i execute the wrapper i get the following Starting bootpd: default current directory is at /var/lib/tftpboot ... :bootpd not running
<blacksunseven> the guide does reference inetd which was not installed, so i installed xinetd and have added bootp and tftp lines to the conf for that
<lenios> there's an alternative with dhcp at https://help.ubuntu.com/8.10/installation-guide/i386/install-tftp.html
<blacksunseven> giving it a shot now
<blacksunseven> Setting up dhcp3-server (3.1.2-1ubuntu7.1) ...
<blacksunseven> Generating /etc/default/dhcp3-server...
<blacksunseven>  * Starting DHCP server dhcpd3                                                    * check syslog for diagnostics.
<blacksunseven>                                                                           [fail]
<blacksunseven> invoke-rc.d: initscript dhcp3-server, action "start" failed.
<lenios> check syslog then
<blacksunseven> yeah i needed to change the config file, where i am now
<blacksunseven> http://pastebin.org/147201
<sherr> The error is quite clear "Not configured to listen on any interfaces!"
<sherr> You need to configure the DHCP file to listen on one of your network interfaces.
<blacksunseven> i understand, but i'm not sure how to do that
<RoyK^> blacksunseven: it's in the manual, and in the config file
<RoyK^> are tee eff em
<blacksunseven> i had tried that before saying anything
<blacksunseven> the manual is long
<blacksunseven> and i'm not familiar with everything in it
<blacksunseven> cant find it
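The "Not configured to listen on any interfaces!" error from dhcpd3 is usually fixed by naming an interface in the daemon's defaults file; a sketch, assuming the interface is eth0 (check yours with ifconfig):

```
# /etc/default/dhcp3-server -- "eth0" is an assumption; use your LAN interface
INTERFACES="eth0"
```

Then restart the service, e.g. `sudo /etc/init.d/dhcp3-server restart`.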
<uvirtbot> New bug: #363877 in python-boto (main) "Deprecation Warnings when running under Python 2.6" [Low,Fix released] https://launchpad.net/bugs/363877
<jMyles> Looking for a good way to watch logs in real time on my ubuntu server, preferably color-coded.  What's the standard here?  logwatch gives me a crazy bizarre output.
<sherr> blacksunseven: http://www.ubuntugeek.com/how-to-install-and-configure-dhcp-server-in-ubuntu-server.html
<sherr> jMyles: Round my way, the standard realtime way is "tail -f /var/log/syslog" :-)
<sherr> Alternatives (web page style) are MRTG, munin, cacti etc.
<jMyles> sherr: Yeah, I'm using tail now, but it's a little dizzying.  I'd like to have something that's color coded, perhaps that I can switch different logs on and off during operation.
<jMyles> sherr:  I'd love a beep when someone successfully SSH's, along with a bold alert of who it was and how they were authenticated
<jMyles> sherr: Perhaps also a similar but less dramatic performance whenever a DHCP lease is given out...?
<sherr> jMyles: Maybe that sort of stuff gets wrapped up in the "management" overlays you can get e.g. ebox. But I don't know.
<sherr> If you want monitoring, solutions like "nagios" exist.
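jMyles's "beep on successful SSH login" idea can be approximated with a tail | grep | while pipeline; in this sketch a canned log line stands in for `tail -f /var/log/auth.log`, and the bell is printf '\a' (the log line and username are illustrative):

```shell
# ring the terminal bell and print each successful-login line; in real use
# the first command would be: tail -f /var/log/auth.log
echo 'Apr 11 10:00:01 host sshd[123]: Accepted publickey for alice from 10.0.0.5 port 51122' \
  | grep --line-buffered 'Accepted' \
  | while read -r line; do printf '\a%s\n' "$line"; done
```

The sshd line already names the user and the authentication method, so no extra parsing is needed for the "who and how" part.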
<edakiri> What software can capture just the unix permissions information, including user & group and later reapply them to files?  That would be useful for backup and restoration with some archive formats.
<sherr> edakiri: "ls", "chmod" and "chown" in a script I guess. What archive formats?
<edakiri> sherr: PAQ, 7z. Although I know it can be done through tar, I want a non-solid archive & the ability to selectively extract files
<edakiri> remembered something related by SUSE.  It is not a complete solution in itself.  Maybe with the right 'find' parameters?  http://ftp5.gwdg.de/pub/opensuse/source/distribution/11.2/repo/oss/suse/src/permissions-2009.10.07.1653-2.1.src.rpm
<edakiri> Yes, looks possible.
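edakiri's idea does work with GNU find + stat: emit a shell script of chmod commands alongside the archive, then replay it after extraction. A self-contained demo in a temp directory (restoring owner/group would add %U:%G and chown, and needs matching accounts on the target):

```shell
# capture file modes as a replayable script, then restore after "drift" (GNU stat)
tmp=$(mktemp -d)
script=$(mktemp)
touch "$tmp/demo"
chmod 640 "$tmp/demo"
# one chmod line per file; add a '%U:%G' chown line to also record ownership
find "$tmp" -type f -exec stat --format 'chmod %a "%n"' {} + > "$script"
chmod 600 "$tmp/demo"      # permissions drift...
sh "$script"               # ...and are reapplied
stat --format '%a' "$tmp/demo"   # prints 640
```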
<zomGreg> hello, I am running Karmic after upgrading from Jaunty. I'm trying to upgrade my eucalyptus install from 1.6.1 to 1.6.2. Can anyone help direct me on this?
<Rafael_> Does anybody have experience with cwrsync?
<KurtKraut> The Intel vPro technology has a subset called Intel AMT that allows VNC/KVM access to BIOS, allowing remote BIOS configuration, remote shutdown and even remote power up through the internet. Can this be done with Linux? All demos I see are using Windows. This is quite a good feature for servers. Is anyone here aware of that?
<stgraber> KurtKraut: servers tend to have something better than the AMT (HP iLO or DELL drac) though AMT can be useful to manage a set of laptop and desktop in a corporate environment
<stgraber> KurtKraut: to make the actual chipset to work, it's relatively easy, it's just a few options to set in the BIOS, you'll then be able to connect over http to the chip and shutdown/reboot/tweak some settings of your laptop/desktop
<stgraber> KurtKraut: AFAIK the actual KVM part is done through an agent in the OS, that part doesn't work on Linux but a similar functionality could be provided by having a VNC and/or SSH server started at boot time
<KurtKraut> stgraber, so I can purchase computers/laptops with AMT, use Ubuntu in these computers and Ubuntu at my office and I'll be able to control them remotely. AMT doesn't require Windows, right?
<stgraber> KurtKraut: as for the actual client, you can either use your web browser or use some package in the archive to access to the serial redirect option of the AMT
<stgraber> KurtKraut: for power on/off, reboot and serial redirection it doesn't
<KurtKraut> stgraber, oh, that's sad :(
<stgraber> KurtKraut: for a well integrated remote desktop (similar to having a KVM in your laptop), it'll require something to be installed in the OS
<stgraber> so if your goal is to be able to start laptop/desktop remotely and then connect to them over SSH or VNC, amt will work well
<KurtKraut> stgraber, what I miss the most is the ability to shutdown or power up remotely through the internet and change BIOS settings remotely.
<KurtKraut> stgraber, for the rest, I already use SSH.
<stgraber> though I guess you'll very likely need some management tool if you want to control more than a desktop at once (and I don't think that exists on Linux yet)
<stgraber> KurtKraut: ok, so AMT should actually do what you're looking for
<KurtKraut> stgraber, so AMT allows me to access a BIOS of remote machine remotely without using Microsoft Windows in my company?
#ubuntu-server 2011-04-04
<jeeves_moss> how do I set up an internal DNS?
<io_error> Good evening, do the EC2 AMI images work with pvgrub?
<kaushal> Hi
<kaushal> I have planned to use 10.04 LTS for setting up Gateway in my office
<kaushal> what should be the hardware configuration and what all recommended applications are needed ?
<io_error> Do the  official EC2 AMI images work with pvgrub?
<axisys> how do I setup a NAS on ubuntu server? I have a 1TB WD usb storage that I attached to my ubuntu server .. I like to make it accessible from all of the other computers (mac and linux) .. kind a like a private dropbox
<rnigam> hello everyone, I have a netperf question. I am trying to set the socket buffer size on sender and receiver side using -m and -M and the buffer size actually doubles when i run the netperf command. I am  running netperf on Maverick. Please direct me to the right channel if this should not be here. Thanks.
<axisys> ok i mounted the usb storage like this
<axisys> /dev/sdb1 on /mnt type vfat (rw)
<axisys> how do I make sure it sticks after a reboot?
<io_error> axisys: Add an entry into /etc/fstab
<axisys> in other words what should the /etc/fstab look like?
<axisys> io_error: :-)
<io_error> axisys: Something like this: /dev/sdb1 /mnt vfat defaults 0 0
<axisys> io_error: thanks
<axisys> ok so this worked ..
<axisys> /dev/sdb1       /storage        vfat    rw      0       0
<axisys> io_error: thanks a lot
<io_error> axisys: as long as it works :)
<axisys> i guess now I have to find out how to share it over the network so my mac mini to rw to it
<io_error> axisys:  have you the GUI installed?
<axisys> io_error: on the ubuntu server?
<io_error> axisys: right
<axisys> io_error: no .. just cli
<axisys> but i can x11 over ssh if necessary .. after all they all are hanging off of my linksys router
<io_error> axisys: hm, first need to install samba... like apt-get install samba4
<axisys> io_error: hmm.. mac does not read nfs?
<io_error> axisys: sure you can do NFS to the Mac, but Windoze will not like it
<axisys> io_error: i have no windows.. just mac ppc and ubuntu
<io_error> oh, well just set up NFS and forget about that samba junk :)
<axisys> io_error: yep.
<axisys> how do I share the storage folder ? in solaris i could run share
<axisys>  /storage is where the usb device mounted
<io_error> axisys: Add a line in /etc/exports ... example: /storage *(ro,insecure,all_squash)
<axisys> io_error: oh ok.. thanks
<io_error> axisys: it works pretty much the same as solaris /etc/exports
<io_error> Does the official EC2 AMI work with pvgrub?
<uvirtbot> New bug: #749895 in amavisd-new (main) "package amavisd-new-postfix 1:2.6.4-1ubuntu6 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/749895
<axisys> io_error: what does insecure and all_squash do?
<shadow42085> hi
<io_error> axisys: They're in the man page :) mainly just makes the export REALLY read-only and locks it down a bit more
<io_error> axisys: if you want it writeable you'll have to put in different options anyway
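[Editor's note: for reference, a sketch of what the read-only export above expands to, with the options io_error mentions spelled out; the network range is taken from the exportfs output later in the log, and is an assumption for any other setup.]

```shell
# /etc/exports -- sketch, not a verbatim config from the conversation
#   ro          read-only
#   insecure    accept requests from client ports above 1023
#   all_squash  map every remote uid/gid to the anonymous user
/storage 192.168.1.0/24(ro,insecure,all_squash)

# After editing, re-export without a full server restart:
sudo exportfs -ra
```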
<shadow42085> I need to know which free control panels for websites are
<shadow42085> the easiest to use
<shadow42085> sorry bout the double
<io_error> shadow42085: there really aren't many that are good AND free
<shadow42085> well I was considering webmin but it's obsolete
<io_error> shadow42085: The only free one I can think of offhand is ispconfig, but it's been a long time since I looked at that
<shadow42085> I think i have seen it but never used it
<io_error> shadow42085: In any case if you want to set up web hosting software, the absolute best place to go is webhostingtalk.com forum
<io_error> cPanel is king in web hosting, because it's very good, but you also have to pay for it
<shadow42085> i know cPanel I have used it before
<shadow42085> when I used free hosting
<io_error> Well I finally found the answer to my own question. The official EC2 AMI images are already using pvgrub.
<shadow42085> but I am using an old server that I am tinkering with
<io_error> shadow42085: I think cPanel has a free trial, but if you insist on free stuff then I suggest you check out the WHT forum for more ideas
<shadow42085> ok
<shadow42085> i will just go back to webmin it was the easiest
<axisys> io_error: failing to mount it on nfs client
<axisys> sudo mount -t nfs4 192.168.1.106:/storage /mnt
<axisys> mount.nfs4: mounting 192.168.1.106:/storage failed, reason given by server: No such file or directory
<io_error> axisys: Did you kick the NFS server?
<axisys> sudo exportfs
<axisys>  /storage        192.168.1.0/24
<io_error> axisys: Restart the nfs server, and if that doesn't work, kick it for real :)
<io_error> Oh, and make sure /storage exists and it's mounted :)
<axisys> sudo /etc/init.d/nfs-kernel-server restart <-- run that
<io_error> anything in the log?
<axisys>  /storage$ ls
<axisys>  My Stuff
<axisys> io_error: nfs server log
<axisys> io_error: http://pastebin.com/iV9ZmN55
<axisys> some people suggested to disable ipv6 during boot to fix it. from 2010
<axisys> hmm
<axisys> /dev/sdb1 on /storage type vfat (rw)  <-- could the vfat be a problem?
<axisys> i am trying to nfs share the usb drive..
<io_error> axisys: no, errno 97 is address not supported by protocol. You can try blacklisting ipv6 if you aren't using ipv6 on your home network.
<io_error> axisys: add "blacklist ipv6" to /etc/modprobe.d/blacklist.conf and reboot
<axisys> ok.. my irc is running on the nfs server.. any way to avoid reboot ?
<axisys> ok i am taking the path of samba
<axisys> i see the folder .. but cannot write to it
<axisys> drwxr-xr-x 4 root root 16384 1969-12-31 19:00 /storage .. i think i need to change it to nobody.nogroup .. but it is failing
<axisys> sudo chown nobody.nogroup /storage
<axisys> chown: changing ownership of `/storage': Operation not permitted
<axisys> this is how storage is mounted
<axisys> /dev/sdb1 /storage vfat rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0
<axisys> i guess i need to add switches in the mount option to make it nobody, nogroup
<io_error> axisys: You'll need the uid= and gid= options in /etc/fstab, and you also need to set a user mapping in /etc/exports
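[Editor's note: a sketch of the two files io_error is describing. vfat cannot store Unix ownership, so the owner is fixed at mount time; the uid/gid of 1000 and the network range are assumptions, substitute your own `id -u` / `id -g` values.]

```shell
# /etc/fstab -- vfat has no Unix owners, so set them via mount options
/dev/sdb1  /storage  vfat  rw,uid=1000,gid=1000,fmask=0022,dmask=0022  0  0

# /etc/exports -- map all NFS clients onto that same local user
/storage 192.168.1.0/24(rw,all_squash,anonuid=1000,anongid=1000)
```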
<shadow42085> I am having CA problems now
<shadow42085> !pastebin
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<shadow42085> is there anyone still here?
<uvirtbot> New bug: #749983 in openssh (main) "GSSAPI auth fails when incorrect reverse" [Undecided,New] https://launchpad.net/bugs/749983
<m_tadeu> hi everyone....does anyone know how I can verify if a udp packet is being forwarded from a router to a server?
<jjohansen1> m_tadeu: any other machine on the local network can watch all the packets using packet sniffing, look at wireshark or similar tools
<m_tadeu> jjohansen1: thanx...I'll take a look at it
<alex88> hi guys..someone ever used portknocking?
<SpamapS> alex88: long ago I did.. had a script that would tail my deny logs.
<alex88> SpamapS: sorry for late answer..but i'm thinking..you need to knock the right ports in the right sequence?
<alex88> will any error need you to restart the attempt?
<SpamapS> alex88: yeah the idea is you have a script on your laptop/phone/whatever that just hits the sequence of ports and then the FW allows traffic from your IP
<alex88> SpamapS: yeah i know the idea..my thought was that if someone will syn scan the full port range it will hit the ports
<SpamapS> alex88: the sequence is exact
<SpamapS> alex88: if one port arrives that isn't in the seq, you assume that is not the right knock
<alex88> SpamapS: so you have to restart from the beginning?
<SpamapS> alex88: of course, otherwise as you say portscans would have a good chance of hitting your knock
<alex88> SpamapS: that was my doubt.. thank you very much :)
<alex88> oh, last one..how can portknocking be encrypted?
<SpamapS> alex88: its a random sequence of numbers.. its already a key
<alex88> i mean, if you sniff you see serveral tcp syn.. but those can be replayed..
<SpamapS> alex88: you could use a OTP system, meaning you can only use one knock one time
<SpamapS> and just pre-share a list of knocks
<alex88> yeah read about that..but in this http://www.portknocking.org/view/knocklab/knock_lab it seems it just encrypt the config
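[Editor's note: the knock client SpamapS describes can be sketched in a few lines of shell. The host argument and the three-port sequence 7000/8000/9000 are made-up placeholders, not anything from the conversation; a real knock sequence would be your secret.]

```shell
# Minimal port-knock client sketch (bash). Each iteration fires one
# connection attempt; filtered ports never answer, so each attempt is
# capped at one second, and refusals/timeouts are expected and ignored.
knock() {
    local host=$1 port
    for port in 7000 8000 9000; do
        timeout 1 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null || true
    done
    echo "knock sequence sent to $host"
}

knock 127.0.0.1   # then e.g.: ssh user@host while the firewall window is open
```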
<SpamapS> alex88: I stopped using port knocking because it was a PITA to use on public terminals..
<alex88> pita?
<SpamapS> alex88: I found it was easier to simply carry public keys (1 privileged, one non-privileged) and disable password auth for SSH.
<SpamapS> PITA = Pain In The Arse
<alex88> lool ok :)
<alex88> well sure for the ssh security :) for now i've that enabled when  you connect to vpn
<SpamapS> same story for VPN really..
<SpamapS> certs.. ssh.. whatever it is
 * lool pops up
<alex88> yup..
<soren> ScottK: Re bug 741616.. It's already in the queue, as it turns out.
<uvirtbot> Launchpad bug 741616 in nova "[FFe] Add a nova-ajax-console-proxy package" [Wishlist,Confirmed] https://launchpad.net/bugs/741616
<ScottK> soren: OK.  I'll try and have a look a bit later then.
<soren> ScottK: Ta very much.
<shadow42085> i can't seem to get auth login and auth=login in a dovecot-postfix setup
<shadow42085> any ideas
<uvirtbot> New bug: #739364 in irqbalance (main) "irqbalance crashed with SIGSEGV in readdir64()" [Medium,Triaged] https://launchpad.net/bugs/739364
<al-maisan> hello there! I am installing ubuntu server on a system that uses LVM, what device should I specify to grub-install?
<al-maisan> the installer suggests "/dev/mapper" and the LV group is called "VolGroup00"
<shadow42085> i can't seem to get auth login and auth=login in a dovecot-postfix setup any ideas?
<uvirtbot> New bug: #738489 in squid (main) "squid crashed with SIGABRT in raise()" [Medium,Incomplete] https://launchpad.net/bugs/738489
<uvirtbot> New bug: #744173 in php5 (main) "php5 assert failure: *** glibc detected *** /usr/bin/php: double free or corruption (!prev): 0x08c672e8 ***" [Medium,Incomplete] https://launchpad.net/bugs/744173
<uvirtbot> New bug: #471980 in dhcp3 (universe) "no pude anexar un archivo a la carta" [Low,Confirmed] https://launchpad.net/bugs/471980
<shadow420> I am trying to setup a mail server using postfix/dovecot but when I telnet into it and test it and don't see auth login and auth=login any ideas?
<shadow420> !mailserver
<ubottu> Ubuntu supports the Simple Mail Transfer Protocol (SMTP) and provides mail server software of many kinds. You can install a basic email handling configuration with the "Mail server" task during installation, or with the "tasksel" command. See also https://help.ubuntu.com/community/MailServer and https://help.ubuntu.com/10.04/serverguide/C/email-services.html
<zul> hallyn: have you seen this before? with lxc and libvirt? https://bugs.launchpad.net/nova/+bug/749973
<uvirtbot> Launchpad bug 749973 in nova "libvirtError: internal error cannot determine default video type" [High,Confirmed]
<uvirtbot> New bug: #742995 in irqbalance (main) "irqbalance crashed with SIGSEGV in g_slice_alloc()" [Medium,Incomplete] https://launchpad.net/bugs/742995
<hallyn> zul: no.  how does it determine video type?
<hallyn> does it try any ioctls?  I'm wondering whether the devices namespace is to blame
<shadow420> I am trying to setup a mail server using postfix/dovecot but when I telnet into it and test it and don't see auth login and auth=login any ideas?
<Webbb> #ubuntu-fi
<al-maisan> when installing ubuntu server on a LVM system: can the /boot partition be inside the LVG as well or do I need to keep it on a normal (i.e. non-lvm) partition?
<hallyn> zul: you know, now that i've got lxc-clone with lvm, i just can't stand the delay any more in starting cloud instances to test a bug :)
<MTeck> I'm trying to copy only a specific set of files that could be buried pretty much anywhere. I'm trying to do it with something like this...    rsync -auz --delete --include "*/" --include "*.[Pp][Nn][Gg]" --include "*.[Dd][Oo][Cc]" --exclude "*" /source/ /dest   but that seems to grab everything
<RoAkSoAx> morning all
<MTeck> Any thoughts about what I'm doing wrong?
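[Editor's note: MTeck's question goes unanswered in the log. The filter chain looks right (first match wins, so the catch-all exclude must come last); the usual missing piece is `-m`/`--prune-empty-dirs`, since `--include='*/'` recreates every directory and the resulting empty tree can look like "everything" got copied. A sketch, with the paths kept as placeholders from the question:]

```shell
# Copy only .png/.doc files (any case), pruning directories left empty
rsync -auzm --delete \
  --include='*/' \
  --include='*.[Pp][Nn][Gg]' \
  --include='*.[Dd][Oo][Cc]' \
  --exclude='*' \
  /source/ /dest/
```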
<kirkland> RoAkSoAx: hiya
<kirkland> RoAkSoAx: made it back okay?
<kirkland> RoAkSoAx: how did the talk go?
<shadow420> I am trying to setup a mail server using postfix/dovecot but when I telnet into it and test it and don't see auth login and auth=login any ideas?
<RoAkSoAx> kirkland: it went well
<RoAkSoAx> kirkland: yeah made it back alive... left hotel 9.30am arrived miami 10.30pm
<RoAkSoAx> got delayed in dallas
<hallyn> zul: (reminder) can you push the new lxc package?
<kirkland> RoAkSoAx: bummer
<kirkland> RoAkSoAx: we got a little feedback on cobbler ppa packages, https://bugs.launchpad.net/bugs/741661
<uvirtbot> Launchpad bug 741661 in cobbler "Web UI does not work from default install (2.1.0~bzr-2009-0ubuntu1)l" [Medium,In progress]
<shadow420> I am trying to setup a mail server using postfix/dovecot but when I telnet into it and test it and don't see auth login and auth=login any ideas?
<kirkland> RoAkSoAx: looks like those packages are better, but there's a new exception
<RoAkSoAx> kirkland: i'll look at it in a bit. I also have the patch for the hardlink thing, but have to test it first
<kirkland> RoAkSoAx: i want to get something uploaded today
<kirkland> RoAkSoAx: let's get it to a point where it's definitely better than what was there
<kirkland> RoAkSoAx: and upload
<kirkland> RoAkSoAx: and continue working on next issues
<RoAkSoAx> kirkland: ok cool, I'm about to test the patch and will upload to PPA
<kirkland> RoAkSoAx: at that point, I suggest we get that into the archive
<kirkland> RoAkSoAx: and then keep burning down other issues incrementally
<shadow420> I am trying to setup a mailserver using postfix/dovecot but when I telnet into it and test it and don't see auth login and auth=login any ideas?
<RoAkSoAx> kirkland: so better, yet, upload what's in PPA now, and from there I'll apply the hardlink patch
<RoAkSoAx> so we have something functional right now
<th0mz_> is there a way to reload networking on only 1 interface and not all, please?
<th0mz_> i changed a few things in the interfaces file, but ifdown & ifup don't seem to apply the changes.
<semiosis> th0mz_: i think 'service network-interface restart INTERFACE=???' will do it
<zul> hallyn: yep
<th0mz_> thanks semiosis
<kirkland> RoAkSoAx: good, i agree
<kirkland> RoAkSoAx: i might wait for SpamapS to come online this morning
<shadow420> I am trying to setup a mailserver using postfix/dovecot but when I telnet into it and test it and don't see auth login and auth=login any ideas?
<kirkland> RoAkSoAx: he offered on Friday to take a look and do a quick round of testing
<RoAkSoAx> kirkland: yeah, cause the new issue that was reported on bug #741661 might also be something related to upstream?
<uvirtbot> Launchpad bug 741661 in cobbler "Web UI does not work from default install (2.1.0~bzr-2009-0ubuntu1)l" [Medium,In progress] https://launchpad.net/bugs/741661
<shadow420> um excuse me?
<kirkland> RoAkSoAx: right
<kirkland> RoAkSoAx: i don't know what that error means
<kirkland> RoAkSoAx: we might need to jump in #cobbler and ask
<RoAkSoAx> kirkland: i think it is an issue when trying to edit kickstarts
<kirkland> RoAkSoAx: perms/owners on a dir in /var/lib/cobbler, i bet
<zul> hallyn: lxc-fix-3bugs lxc-clone and fix-template-syntax?
<hallyn> zul: lxc-clone should not be in there
<hallyn> lxc-fix-3bugs does have 3 fixes though
<hallyn> and that's the branch, yes
<hallyn> much as I'd like to get lxc-clone in there, I think skaet would have my head :)
<zul> ack
<zul> hallyn: done
<hallyn> zul: thanks!
<zul> hallyn: have you seen this error before with lxc and libvirt before: https://bugs.launchpad.net/nova/+bug/749973
<uvirtbot> Launchpad bug 749973 in nova "libvirtError: internal error cannot determine default video type" [High,Confirmed]
<uvirtbot> New bug: #750371 in squid (main) "squid causing /var to stay busy during shutdown" [Undecided,New] https://launchpad.net/bugs/750371
<hallyn> zul: no.  do you know what nova does to check display?
<hallyn> zul: my guess is it's because of the devices namespace
<hallyn> uh, cgroup
<hallyn> zul: can you reproduce it?
<hallyn> if you can, try doing so with a container where all of the 'cgroup.devices.*$' entries in the config are commented out
<hallyn> ah, no
<Kartagis> hello
<zul> hallyn: i havent been able to but ttx can
<Kartagis> 2011-04-04 15:14:36 IMAP(bilgi): Error: mail_location not set and autodetection failed: Mail storage autodetection failed with home=/home/bilgi
<Kartagis> 2011-04-04 15:14:36 IMAP(bilgi): Fatal: Namespace initialization failed
<Kartagis> 2011-04-04 15:16:21 imap-login: Info: Aborted login (auth failed, 3 attempts): user=<bilgi@bilgisayarciniz.org>, method=PLAIN, rip=184.82.40.118, lip=184.82.40.118, secured <--- could this be why I am unable to login?
<hallyn> zul: what is major:minor for /dev/nbd12 ?
<hallyn> zul: I suspect you need to add those to the devices cgroup
<hallyn> (to the whitelist that is)
<zul> ttx: ^^^
<ttx> yep?
<hallyn> so add something like:
<zul> hallyn: hmm how do you do that?
<ttx> hmm, I need to reinstall to further test. Maybe comment on the bug, the original poster might get the info to you faster than I do
<hallyn> lxc.cgroup.devices.allow = b 43:* rwm
<hallyn> commented
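[Editor's note: pulling hallyn's suggestion together. The `43:*` comes from nbd's conventional block major number; verify it on your own system first.]

```shell
# Check the device's major:minor -- nbd is normally block major 43:
#   $ ls -l /dev/nbd12
#   brw-rw---- 1 root disk 43, 12 ...   <- "43, 12" is major, minor
#
# Then whitelist it in the container config (all nbd minors, block device):
#   lxc.cgroup.devices.allow = b 43:* rwm
```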
<zul> ttx: we were just disccusing that lxc libvirt bug
<zul> you were able to reproduce it at one point right?
<ttx> yes.
<ttx> before natty blew up my test laptop.
<ttx> zul: I followed your wikipage.
<ttx> i suspect the poster of the bug did, too.
<ttx> zul: maybe the instructions are missing a critical step.
<zul> hallyn: this is using libvirt exclusively
<parkdriver> I currently have a clean install of ubuntu server 10.04.2 LTS but I read about ubuntu 11.x being released this month
<parkdriver> worth the upgrade or should i keep the 10.04.2 LTS?
<zul> ttx: it might be...ill try to reproduce it locally
<ScottK> soren: Done.
<hallyn> zul: but libvirt still uses the devices cgroup
<hallyn> zul: where is your wiki page?
<zul> hallyn: http://wiki.openstack.org/LXC
<uvirtbot> New bug: #750402 in cobbler (universe) "Editing Kickstarts/Snippets errors with "tainted file location"" [Undecided,New] https://launchpad.net/bugs/750402
<zul> kirkland: are you going to patch cobbler for the bug just opened?
<kirkland> zul: yes, RoAkSoAx and I are working on it
<kirkland> zul: we have a package in a PPA for testing
<zul> k
<RoAkSoAx> kirkland: Ok, so had to change the patch for hardlink as the hardlink we have in Ubuntu is different than the one in Fedora (now testing)
<kirkland> RoAkSoAx: k
<Kartagis> hello
<RoAkSoAx> kirkland: ok I'm ready to upload to ppa, do you want me to add a ~ppa2 changelog entry, or just modify the ~ppa1 and bump it to ~ppa2?
<Kartagis> can anybody be so kind to tell me why I can login to horde but not to imp?
<kirkland> RoAkSoAx: do the latter
<RoAkSoAx> kirkland: done
<jjohansen> hggdh: any results on the test yet?
<hggdh> jjohansen: they failed, seemingly the same error
<rnigam> hello everyone, I have a netperf question. I am trying to set the socket buffer size on the sender and receiver side using -m and -M, and the buffer size actually doubles when I run the netperf command. I am running netperf on Ubuntu Maverick Server. Please direct me to the right channel if this should not be here. Thanks.
<jjohansen> hggdh: hrmm interesting
<jjohansen> hggdh: so kvm can't be launched at all or only from eucalyptus?
<hggdh> jjohansen: I do use kvm on natty, on my laptop; on this machine it is only via euca
<patdk-wk> Kartagis, imp uses imap auth, horde uses any auth you want
<jjohansen> hggdh: can you try launching a plain kvm instance on the machine in question?
<Kartagis> patdk-wk: I set horde to use IMAP auth
<patdk-wk> are you sure the imap auth settings for horde and imp are the same?
<patdk-wk> I would just tell horde to use imp auth
<hggdh> jjohansen: will try; right now, though, I am in the middle of a lucid proposed kernel test (that is also failing)
<jjohansen> hggdh: well thats not good :(
<Kartagis> patdk-wk: yes
<hggdh> jjohansen: heh. Tell me about it...
<shaggy2> I need help, I am trying to set a static ip on my ubuntu server 10.10 and it came out with this error
<shaggy2> sudo /etc/init.d/networking restart
<shaggy2>  * Reconfiguring network interfaces...                                                                       SIOCDELRT: No such process
<shaggy2> SIOCADDRT: No such process
<shaggy2> Failed to bring up eth1.
<shaggy2> when I do ifconfig I get  eth1 and eth1:2
<pmatulis> shaggy2: maybe pastebin your interfaces file
<shaggy2> ok whats the link for pastebin? never used it
<webb> Hi
<webb> Anyone here is an expert with installing WEBMIN?
<shaggy2> http://pastebin.com/BSWTLHQN
<SpamapS> !webmin
<ubottu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<SpamapS> webb: ^^
<SpamapS> webb: try ebox
<SpamapS> or whatever they call it now
<webb> Oh... ok
<shaggy2> I have used webmin once before. I searched google for help, that's how I did it, but yes it does fault out
<webb> eBox is now known as Zentyal
<shaggy2> pmatulis: http://pastebin.com/BSWTLHQN
<SpamapS> !ebox
<ubottu> ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/eBox
<pmatulis> shaggy2: no loopback interface huh?
<shaggy2> there is
<pmatulis> shaggy2: i don't see it in the file
<webb> Thanks guys
<webb> let me give it a try
<shaggy2> # The loopback network interface
<shaggy2> auto lo
<shaggy2> iface lo inet loopback
<shaggy2> sorry missed it in the selection
<webb> :D  I think I will be back shortly asking for help...
<shaggy2> hang on I will pastebin the whole file
<shaggy2> pmatulis: http://pastebin.com/czJgRvFJ
<shaggy2> I returned it to auto to see what happened on restart with that one
<pmatulis> shaggy2: and the result?
<shaggy2> http://pastebin.com/VQ3VzCnJ
<webb> anyone knows is zentyal compatible with ubuntu server 10.10?
<pmatulis> shaggy2: looks good
<shaggy2> thats on auto, I want to change it to static cause I am changing the network addresses on my local systems so they are not public
<SpamapS> webb: it should be
<SpamapS> webb: looks like they're still calling it ebox even now in natty
<SpamapS>       ebox | 2.0.16-0ubuntu1 | natty/universe | source, all
<shaggy2> ah I have a couple of items on my network that I don't want on the public ip's and I can not manually set the ip for them so I have to do them on dhcp on the router
<webb> SpamapS: Have a look at http://forum.zentyal.org/index.php?topic=5443.0
<pmatulis> shaggy2: i would configure it manually to test
<webb> It is important to notice that all Zentyal releases are based on the Ubuntu LTS versions. Each Zentyal release is based on the Ubuntu LTS version that is available at the moment the release is launched.
<SpamapS> webb: ahh.. so in the regard.. you're not going to get much help from upstream. :-/
<webb> So.. it's not compatible?
<shaggy2> pmatulis: I got it to work, I reentered all the details that I changed, and then removed the dhcp3-client and it all works fine
<pmatulis> shaggy2: nice
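[Editor's note: shaggy2's working config never gets pasted. For reference, a typical static stanza for /etc/network/interfaces on 10.10; all addresses below are made up.]

```shell
# /etc/network/interfaces -- static stanza sketch (addresses assumed)
auto eth1
iface eth1 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
```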
<RoAkSoAx> kirkland: ok so installing a Fedora kvm instance with koan works. The ubuntu one not quite though! Looking into that now
<zertyui> hello
<zertyui> is there any incoherence when you run mysql and PostgreSQL on ubuntu lucid ?
<zertyui> i mean on a same machine
<jcole> i have a problem with automatic nsswitch and pam management.. i currently create a config for auth-client-config, but now i have pam-auth-update trying to also manage my pam configs.. this is causing my users lots of problems
<jcole> so, what i would like to know, which method should i use to manage nsswitch and pam? auth-client-config? pam-auth-update? auth-client-config+pam-auth-update??
<kirkland> RoAkSoAx: sweet
<kirkland> RoAkSoAx: i reviewed the ~ubuntu-virt ppa cobbler, looks like a vast improvement over whats in Natty right now
<kirkland> RoAkSoAx: i'm going to upload that now
<jcole> i am supporting hardy on up
<kirkland> RoAkSoAx: and then sponsor the cherry pick fix for https://bugs.launchpad.net/bugs/750402
<uvirtbot> Launchpad bug 750402 in cobbler "Editing Kickstarts/Snippets errors with "tainted file location"" [High,Confirmed]
<kirkland> did SpamapS ever come online today?
<RoAkSoAx> kirkland: yeah that sounds like a good plan
<RoAkSoAx> kirkland: and yeah he was online
<RoAkSoAx> SpamapS: ping
<jcole> cjwatson: btw, you were right, the automated installer issue i had for sources.list was due to a buggy app borking sources.list after install
<kirkland> SpamapS: yo
<kirkland> RoAkSoAx: oh, hmm
<kirkland> RoAkSoAx: looks like some cruft leaked into debian/patches
<kirkland> dpkg-source: info: applying debian-changes-2.1.0-0ubuntu3
<kirkland> dpkg-source: info: applying debian-changes-2.1.0-0ubuntu3~ppa1
<RoAkSoAx> kirkland: yeah I also thought the same but the diff's didn't show anything on them
<RoAkSoAx> so I just assumed it came from before
<kirkland> RoAkSoAx: okay, i'll prune them
<RoAkSoAx> alrighty
<kirkland> RoAkSoAx: please forward 35_fix_hardlink_bin_path.patch to upstream cobbler
<RoAkSoAx> kirkland: yes will do, will also fw 31_add_ubuntu_koan_utils_support.patch and 32_fix_koan_import_yum.patch
<kirkland> RoAkSoAx: okay, uploaded cobbler_2.1.0-0ubuntu3_source.changes
<kirkland> RoAkSoAx: yes, please do
<RoAkSoAx> kirkland: awesome!
<kirkland> RoAkSoAx: i don't think i can forward 33_authn_configfile.patch upstream
<kirkland> RoAkSoAx: that'll need to be a minor config difference between us and them
<RoAkSoAx> yeah that makes sense
<kirkland> RoAkSoAx: we have debconf, so we can make auth config by default
<RoAkSoAx> kirkland: but I think that's not needed if we use the cobbler user instead of creating new users
<RoAkSoAx> let me check
<uvirtbot> New bug: #564550 in apache2 (main) "apache2 crashed with SIGSEGV in zend_std_get_method()" [Low,Incomplete] https://launchpad.net/bugs/564550
<jcole> on the wiki, it says to use auth client here (ldap) -> https://help.ubuntu.com/community/LDAPClientAuthentication#Notes%20for%207.10%20and%20later
<jcole> but then, it says to use pam update here (active directory) https://help.ubuntu.com/community/ActiveDirectoryWinbindHowto#PAM
<RoAkSoAx> kirkland: yeah maybe they are enable that in their packaging too, but maybe not, so yeah that's not worth forwarding upstream
<jcole> seems like debian doesnt have auth client so it must be an ubuntu only thing... debian uses pam update instead... but the problem with pam update is it doesnt manage nsswitch
<jcole> so, should i use a combination of both?
<jcole> fyi, these are my auth client configs for ldap only and ldap+kerberos -> http://pastebin.com/X0D90cFr
<jcole> those configs work for hardy on up
<jcole> im also using pam_mkhomedir and pam_ccreds (for offline ldap logins)
<zertyui> hello there
<cjwatson> jcole: cool, thanks for following up
<zertyui> how to grep for two strings at the same time ?
<jcole> cjwatson: thanks for pointing me to the d-i logfile, i had no idea d-i saved that
<jcole> zertyui: grep -e string1 -e string2 file.txt ?
<zertyui> how to grep for two strings in apt-cache search ?
<jcole> cjwatson: now im having an issue with my users logging into their boxen :/ pam-auth-update is clobbering auth-client-updates configs
<jcole> zertyui: you can use regex with apt-cache search
<zertyui> how ?
<jcole> apt-cache search string1\|string2
<cjwatson> jcole: nothing I know about, I'm afraid
<jcole> zertyui: or, apt-cache search "string1|string2"
<jcole> cjwatson: what is the preferred method to manage nsswitch and pam on an ubuntu-server? seems like managing logins methods would be a trivial thing
<pmatulis> jcole: auth-client-config
<zertyui> you don't get my point
<zertyui> what i mean is :
<jcole> pmatulis: and what about pam-auth-update clobbering my auth-client-configs
<pmatulis> jcole: well, don't do that then
<jcole> pmatulis: this is my auth client configs -> http://pastebin.com/X0D90cFr
<zertyui> i simply need to grep for two strings like dev and postgresql     when i do apt-cache search postgresql |grep postgresql & dev
<pmatulis> jcole: and?
<jcole> pmatulis: those work fine, but pam-auth-update (which debian uses) clobbers my config
<zertyui> how to do this ?
<pmatulis> jcole: so don't use it
<cjwatson> jcole: not my field, sorry
<semiosis> zertyui: if you want to grep for 'a AND b', you can pipe from one grep to another... | grep a | grep b... will show lines containing both a AND b
<jcole> pmatulis: how do i remove it?
<semiosis> zertyui: if you want to grep for 'a OR b', you need to use the grep regexp for OR, which is vertical-bar |, so it needs to be escaped so the shell doesnt interpret it as a pipe... grep a\\\|b
<pmatulis> jcole: the package?
<zertyui> ok working
<jcole> pmatulis: "apt-get remove --purge libpam-runtime" tries to remove "at* cron* gdm-guest-session* libpam-ck-connector* login* lsb-core* network-manager-pptp* network-manager-pptp-gnome* pppconfig* pppoeconf*  pptp-linux* ubuntu-desktop* ubuntu-standard*"
<zertyui> thanks semiosis
<semiosis> yw
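[Editor's note: semiosis's AND/OR distinction, made concrete. The package names below are made-up stand-ins for `apt-cache search` output.]

```shell
# Sample data standing in for `apt-cache search` output (hypothetical names)
printf 'postgresql-server-dev-13\npostgresql-client\nlibpq-dev\n' > /tmp/pkgs.txt

# AND: chain greps -- keeps only lines matching every pattern
grep postgresql /tmp/pkgs.txt | grep dev    # -> postgresql-server-dev-13

# OR: alternation; in basic regexps the bar must be escaped
# (grep -E 'client|libpq' is the unescaped equivalent)
grep 'client\|libpq' /tmp/pkgs.txt          # -> postgresql-client, libpq-dev
```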
<jcole> pmatulis: i have it disabled in debconf also
<zul> SpamapS: can you put openvpn on your list to upstartify for natty+1
<pmatulis> jcole: boy, how did you come up with that command?
<jcole> pmatulis: dpkg -S /usr/sbin/pam-auth-update
<pmatulis> jcole: did you install it manually?
<SpamapS> zul: That one seems like it could be very tricky..
<SpamapS> zul: not that its simple w/ sysvinit.. but there are a number of ways openvpn is used.
<zul> SpamapS: yeah i looked at it before and shudder
<pmatulis> jcole: why not just leave it alone?
<zul> SpamapS: i was just looking at bugs and there are bugs like openvpn is started after x
<zertyui> is it possible to pickup a command from history ?
<jcole> pmatulis: my users are getting libpam-runtime installed by default and pam-auth-update is borking their pam configs which are suppoe
<jcole> bleh
<SpamapS> zul: right.. there's really no reason to delay openvpn after its networking is available. The issue is that it sometimes needs a particular interface.. so we may need to be very smart and try starting it multiple times.
<pmatulis> jcole: so stop using the command.  i don't get your problem really
<zul> SpamapS: totally agreed
<pmatulis> SpamapS: yes, like a bride, notably
<pmatulis> heh
<pmatulis> bridge
<SpamapS> hahahaha
<SpamapS> zul: do we have a server team idea pool yet?
<jcole> pmatulis: im not using pam auth update it gets automatically ran when some libs are installed (like ldap, krb, etc.)
<zul> SpamapS: nope afaik
<jcole> libpam-runtime	libpam-runtime/override	boolean	false
<jcole> pmatulis: that is the debconf value for disabling it ^^
<pmatulis> jcole: that's weird - the interference, i've never seen it
<SpamapS> pmatulis: meaning it creates a bridge, or needs a bridge before it starts? Therein lies the rub.. because its hard to know which.
<jcole> pmatulis: tell you what, try this on your box: apt-get install krb5-config krb5-user ldap-utils libnss-db libnss-ldap libpam-ccreds libpam-krb5 libpam-ldap nss-updatedb
<pmatulis> SpamapS: AFAIK, 'needs a bridge', but there is also the tap stuff that can screw things up
<jcole> pmatulis: i support hardy on up, and some ubuntus dont have pam update, so you need a newer ubuntu
<pmatulis> jcole: ah ok, "some ubuntus don't have pam update"
<pmatulis> jcole: which release is borked?
<jcole> pmatulis: now the problem is many of my users cant log into their boxen now
<jcole> pmatulis: their pam configs are all screwed up
<SpamapS> pmatulis: right, so I'm thinking we may need to really tightly integrate openvpn w/ upstart and run one upstart job per physical interface that comes up.
<SpamapS> Which.. at that point, sounds like ifup-post.d
<jcole> pmatulis: it looks like ldap/krb libs have config scripts for pam-auth-update so that must be why pam-auth-update is invoked
<pmatulis> jcole: which release is borked?
<jcole> pmatulis: i know at least lucid and maverick
<pmatulis> jcole: er, these releases have both pam-auth-update and auth-client-config ?
<SpamapS> hallyn_afk: ping re bug #574665
<uvirtbot> Launchpad bug 574665 in qemu-kvm "kvm + virtio disk corrupts large volumes (>1TB)." [High,In progress] https://launchpad.net/bugs/574665
<jcole> pmatulis: its not all my users... i think a pam-auth-update debconf box popped up for many of my users and they just hit enter or something
<jcole> pmatulis: yes
<pmatulis> jcole: best do a test yourself to make sure what the problem is
<jcole> pmatulis: just install those packages above that i told you about and you will see pam-auth-update prompt to run
<pmatulis> jcole: b/c such a thing would have caused an outrage.  i've been using ldap and kerberos lately and it 'just works'
<hallyn_afk> SpamapS: yes?
<jcole> pmatulis: try to revert/remove it (-r) and see what happens
<pmatulis> jcole: why do you say the prompt is due to pam-auth-update?
<jcole> pmatulis: im thinking maybe because im disabling pam-auth-update in debconf ("libpam-runtime libpam-runtime/override boolean false") there is no "bare" pam config being generated
<pmatulis> jcole: you did that before experiencing any grief?
<jcole> pmatulis: so, running auth-client-config does create a "bare" config for my users.. now, if they reverted auth-client-config, there is no "bare" config to go back to because pam-auth-update never created one in the first place
<jcole> pmatulis: try to revert your auth-client-config (-r) and then try to login locally
<jcole> pmatulis: chances are, you dont have a bare pam config that will work
<pmatulis> 14:45 <     jcole> pmatulis: im thinking maybe because im disabling pam-auth-update in debconf ("libpam-runtime libpam-runtime/override boolean false") there is no "bare" pam config being generated
<jcole> pmatulis: this is what i think the problem is
<pmatulis> jcole: did you make the debconf change before things went pear-shaped?
<jcole> pmatulis: there is a debconf prompt that asks you if you want pam-auth-update to manage your pam configs, setting that debconf value disables it
<jcole> pmatulis: i am having auth-client-config manage my pam configs
<pmatulis> jcole: well, like you hypothesize, it looks like these tools are inter-dependent
<jcole> pmatulis: did you try to -r your auth-client-config and check if you can still login?
<pmatulis> jcole: i'm not doing any tests right now
<jcole> pmatulis: well, it seems i should use pam-auth-update to manage pam since all ubuntu/debian auth packages (ldap/krb/etc) now include configs for pam-auth-update
<pmatulis> jcole: probably if you let the system do what it wants you should be good but that doesn't help you now does it?
<jcole> pmatulis: i was using auth-client-update at first because the ubuntu wiki talks about it here -> https://wiki.ubuntu.com/AuthClientConfig
<jcole> pmatulis: but, if its not the standard for ubuntu/debian packages then it doesnt make sense to use it anymore, especially after the issues im having
<wwwd> Hey all! I used $useradd to create a user. When I try and log in I am getting a blank background with no control or desktop. The messages are: Could not update ICEauthority file /home/user/.ICEauthority, Ther is a problem with the configuration server (/usr/lib/libconf-2-4/config-sanity-check-2 exited with status 256). I have tried adding user to group and asigning privlidges. Any idea why this is happening?
<wwwd> By the  way I also tried using the GUI >admin>users and groups...same
<pmatulis> jcole: btw, you should have confirmed the proper way and then force that on your clients.  never let users configure that kind of stuff
<jcole> pmatulis: i dont let my users configure their nss/pam
<pmatulis> jcole: didn't you say that?
<jcole> pmatulis: i have these configs that do it for them -> http://pastebin.com/X0D90cFr
<david5345> My Linux server clock is drifting too much. Both on 10.04 and 8.04 LTS I lose a lot of time. I found one server last week that lost 500 seconds in the space of 30 days. Why are my Ubuntu boxes having such a hard time keeping the time ?
<pmatulis> 14:39 <     jcole> pmatulis: its not all my users... i think a pam-auth-update debconf box popped up for many of my users and they just hit enter or something
<jcole> pmatulis: right
<pmatulis> jcole: well, that's what should be avoided
<jcole> pmatulis: i cant remove the package that has pam-auth-update
<jcole> pmatulis: if i could, i would add a "conflicts" for it to my control file
<pmatulis> jcole: how/why did such a thing run for them?
<jcole> pmatulis: like i told you above, its after installing the krb/ldap libs
<jcole> pmatulis: ubuntu/debian krb/ldap libs have configs included in them by default for pam-auth-update, so pam-auth-update prompts to run
<pmatulis> jcole: right, so they should never install such packages
<jcole> pmatulis: what?
<SpamapS> hallyn: so, that package hasn't been uploaded to lucid-proposed yet, has it?
<jcole> pmatulis: i want to enable ldap logins, so i need the ldap libs
<pmatulis> jcole: it sounds like you're migrating existing systems so you should get into a management tool (puppet) or create a custom package that automates things
<jcole> pmatulis: what is the alternate package for libpam-ldap that does not include pam-auth-update configs?
<jcole> pmatulis: or libpam-krb?
<jcole> pmatulis: is that on the ubuntu wiki/docs somewhere for managing logins?
<kirkland> Daviey: zul: not much activity in #cobbler-devel, huh?
<pmatulis> jcole: there are no alternate packages like that
<zul> kirkland: more activity on the cobbler ml
<kirkland> zul: i see
<jcole> pmatulis: you suggested i not install those libs so my users wouldnt get that prompt
<jcole> pmatulis: those are the libs that enable ldap/krb in pam
<pmatulis> jcole: you deliver them in another way i meant
<jcole> pmatulis: apt-get install ?
<pmatulis> jcole: no
<hallyn> SpamapS: should'nt have been
<hallyn> SpamapS: i don't know if it has been today, but don't think so
<pmatulis> jcole: i gave you 2 ideas above
<SpamapS> hallyn: I'm asking because verification-* usually has special meaning regarding testing the packages in -proposed
<jcole> pmatulis: what im doing is very simple, manage nss/pam with a config file for auth-client-config, that's it
<hallyn> SpamapS: then I goofed
<pmatulis> jcole: how did you modify debconf for these packages?
<hallyn> SpamapS: i thought verification-needed/done were with respect to SRU process before going into -proposed
<hallyn> SpamapS: pls to remove that tag :)
<SpamapS> hallyn: ok, well it sounds like its ready for upload to -proposed. You have per-package upload on it right?
<pmatulis> jcole: pam, ldap, kerberos is not simple i'm afraid.  especially when end users are doing the configuring
 * hallyn tilts his head
<jcole> pmatulis: in my package, i have depends on those krb/ldap libs above, an auth-client-config file and a debconf value that tells pam-auth-update "No" for managing pam
<pmatulis> jcole: ah, so you have a custom package then
<SpamapS> hallyn: ok so you should upload your package to lucid-proposed then and ask Richard to test again if he can from -proposed. ;)
<hallyn> sigh, what's the bug# again.  this thing doesn't color usermsgs on playback
<jcole> pmatulis: the problem is not with the krb or ldap config files themselves
<SpamapS> bug #574665
<uvirtbot> Launchpad bug 574665 in qemu-kvm "kvm + virtio disk corrupts large volumes (>1TB)." [High,In progress] https://launchpad.net/bugs/574665
<hallyn> ah there it is
<hallyn> thanks :)
<SpamapS> hallyn: np. :)
<hallyn> SpamapS: will do
<hallyn> takes my mind off of the painful multiple-patch backport i was trying to do
<jcole> pmatulis: the problem is with the tools that are managing pam configurations
<hallyn> also for lucid libvirt
<pmatulis> jcole: did you roll out any clients with your package before users got involved?
<jcole> pmatulis: you are telling me now to write a puppet system for managing pam configs instead of pam-auth-update or auth-client-config
<pmatulis> jcole: no, it's just an idea that's related
<pmatulis> jcole: i believe custom packages is the best solution for existing systems
<hallyn> zul: is https://launchpadlibrarian.net/68220165/buildlog_ubuntu-natty-armel.lxc_0.7.4-0ubuntu4_FAILEDTOBUILD.txt.gz something you've seen before?
<jcole> pmatulis: are you saying to create my own pam management system?
<zul> hallyn: yep
<pmatulis> jcole: no, AFAICT, you have modified packages that you're having users run.  that seems the best way
<hallyn> zul: is it a transient error?  or a bug in the packaging?
<jcole> pmatulis: im not modifying any packages
<zul> hallyn: no i think its autoconf not recognizing arm ill look at it
<jcole> pmatulis: i have a simple package that depends on those krb/ldap libs above, an auth-client-config file and a debconf value that tells pam-auth-update "No" for managing pam
<hallyn> zul: thanks!
<jcole> pmatulis: i could even put that in a shell script in 3 lines
<jcole> pmatulis: its not complicated
<pmatulis> jcole: fine, fine.  did you test it?
<hallyn> SpamapS: oh master, what do you recommend?  Merging the bzr tree, or dputing a source package, for lucid-proposed?
<jcole> pmatulis: yes, applying the auth-client-config works
<jcole> pmatulis: it works perfectly
<pmatulis> jcole: so, how does it go pear-shaped?
<jcole> pmatulis: reverting the auth-client-config
<pmatulis> jcole: why revert then?
<Daviey> kirkland, seems not
<SpamapS> hallyn: whatever results in the exact same package as your PPA had being uploaded. :)
<SpamapS> hallyn: IMO, dput is probably simpler.. but merging *should* result in the same thing.
<Daviey> kirkland, seems you and zul are most active :)
<hallyn> SpamapS: all right i'll give UDD a sporting chance
<jcole> pmatulis: many reasons, because some users want to remove the package or they dont want ldap logins, etc.
<pmatulis> jcole: ah!
<jcole> pmatulis: i think the reverted configuration is not bare enough to even allow local logins
<jcole> pmatulis: since pam-auth-update never runs
<jcole> pmatulis: so, im wondering if the recommended way on ubuntu is to use *both* pam-auth-update and auth-client-config
<soren> ScottK: Wicked, thanks.
<pmatulis> jcole: in my travels i have never seen the need to disable anything using debconf
<pmatulis> jcole: so i guess the answer to your wondering is 'yes'
<jcole> pmatulis: youve never seen seeded debconf to configure/disable applications?
<jcole> pmatulis: you can either do it manually with dpkg-reconfigure or with debconf-set-selections or a preseed file in a package
<pmatulis> jcole: i meant in ldap/krb situation
<jcole> pmatulis: dpkg-reconfigure krb5-config
<pmatulis> jcole: "in my travels i have never seen the need to disable anything using debconf when using ldap/krb"
<jcole> pmatulis: if pam-auth-update uses that debconf value to determine if it should manage pam configs or not, then how else should i tell pam-auth-update to not manage pam configs besides updating that debconf value?
<jcole> pmatulis: thanks for the food for thought... im going to use a hybrid method
<pmatulis> jcole: that's the thing, you *don't* tell it not to manage pam
<pmatulis> jcole: basically i see your issue as an 'overengineered problem'
<jcole> pmatulis: how do you suggest to manage pam configs?
<kirkland> Daviey: heh
<jcole> pmatulis: i hardly see an auth-client-update config file or a pam-auth-update config file as "over-engineering"
<RoAkSoAx> kirkland: so It seems that once I finish patching koan to install Ubuntu KVM's, using the NQA preseed is going to be trivial
<kirkland> Daviey: I'd rather just talk to zul in #ubuntu-server then :-)
<kirkland> RoAkSoAx: neat
<adam_g_> hi--does anyone know if there has been any progress or news regarding this issue, other than what is on the ticket? https://bugs.launchpad.net/ubuntu/+source/linux/+bug/666211 -- ive been running into the same issue between different filesystems and block device flavors repeatedly over the last 1.5 weeks on ec2
<uvirtbot> Launchpad bug 666211 in linux "maverick on ec2 64bit ext4 deadlock" [High,Confirmed]
<pmatulis> jcole: over-engineering by disabling stuff.  you don't need to do that
<pmatulis> jcole: and as you see, it messes things up
<smoser> adam_g_, i think, unfortunately, the bug has the right status
<smoser> i do not htink that smb has been able to make any progress on it.
<smoser> but having an easy recreate would be helpful
<jcole> pmatulis: disabling pam-auth-update and using auth-client-config, is like disabling exim so you can use postfix
<adam_g_> smoser: i wouldn't say i can reliably reproduce on-demand, but i come across it frequently enough.
<pmatulis> jcole: that's your assumption.  it may not be correct.  and like i said, i never needed to do such a thing and i never had such a problem
<io_error> Hello! I am about to install 10.04 LTS on a private KVM virtual machine. Should I: "Install Ubuntu Server" or "Install Ubuntu Enterprise Cloud"? What's the difference?
<lenios> io_error, you should install server
<io_error> lenios: Thanks. But what's the difference?
<io_error> The website is so full of marketing buzzspeak that I can't tell what's really going on
<RoAkSoAx> io_error: it says it there "Ubuntu Enterprise Cloud"
<RoAkSoAx> io_error: Install Ubuntu Server installs only the server components
<lenios> if you want to create your cloud using ubuntu, you'll use "install ubuntu enterprise cloud"
<RoAkSoAx> to run whatever you want
<io_error> lenios: OK, so it installs the tools you would build a private cloud with?
<RoAkSoAx> while the other one installs a server, but with the software package for Eucalyptus based Cloud
<lenios> yes
<io_error> RoAkSoAx, lenios: Ah, now I get it. Thanks :) Not building any private clouds today...
<io_error> Just want a local build environment so I don't have to pay for a bunch of extra EC2 instances :)
<kirkland> RoAkSoAx: i just uploaded another cobbler fix
<kirkland> RoAkSoAx: you want to put together an upload with an nqa preseed?
<ghostrocket> hi all - when i run a full-upgrade on my ubuntu ami box, is that the equivalent of using the latest daily build?
<RoAkSoAx> hallyn: ping?
<hallyn> RoAkSoAx: yeah?
<RoAkSoAx> hallyn: howdy! I was wondering if you know how does libvirt treat ubuntu distros?
<RoAkSoAx> hallyn: cause I'm working on cobbler, and it throws this:  virtinst library does not understand variant natty, treating as generic
<hallyn> virtinst != libvirt
<hallyn> isn't it part of virt-tools?
<hallyn> mdeslaur does more with that than I do (and much appreciated by me, too)
<RoAkSoAx> argh right, just noticed :)
<RoAkSoAx> hallyn: alright, I'll nag him
<RoAkSoAx> thanks :)
<uvirtbot> New bug: #750786 in samba (main) "nmbd job fails to start on boot" [Undecided,New] https://launchpad.net/bugs/750786
<shaiguit1r> Hey, I read https://help.ubuntu.com/10.04/serverguide/C/postfix.html but I'm a bit stuck when I telnet to port 25 (postfix master running there) it hangs on CLOSED tcp
<raphink_> shaiguit1r: is postfix running?
<shaiguit1r> CLOSE_WAIT that is
<shaiguit1r> raphink_: yeah
<raphink_> ps axuww | grep postfix
<shaiguit1r> shai@Ubuntu-1004-lucid-32-minimal ~ $ sudo lsof -i:25
<shaiguit1r> COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
<shaiguit1r> master  14056 root   12u  IPv4 331903      0t0  TCP *:smtp (LISTEN)
<shaiguit1r> master  14056 root   13u  IPv6 331905      0t0  TCP *:smtp (LISTEN)
<shaiguit1r> ai@Ubuntu-1004-lucid-32-minimal ~ $  ps axuww | grep postfix
<shaiguit1r> root     14056  0.0  0.1   5812  1792 ?        Ss   Apr04   0:00 /usr/lib/postfix/master
<shaiguit1r> postfix  14360  0.0  0.1   5828  1692 ?        S    00:07   0:00 pickup -l -t fifo -u -c
<shaiguit1r> postfix  14361  0.0  0.1   5872  1720 ?        S    00:07   0:00 qmgr -l -t fifo -u
<shaiguit1r> postfix  14376  0.0  0.1   5824  1708 ?        S    00:08   0:00 proxymap -t unix -u
<raphink_> do you have a local firewall?
<shaiguit1r> hmm yes
<raphink_> sudo iptables -L
<shaiguit1r> That might be crapping things, but TBH I'm pretty n00b with iptables, so not sure
<shaiguit1r> sec
<shaiguit1r> ta!
<shaiguit1r> raphink_: http://pastie.org/pastes/1756312/text?key=mpwx9lk5fcuxpqmrmtoya
<shaiguit1r> line 8 has smtp
<shaiguit1r> open
<shaiguit1r> If I got it right :P
<raphink_> looks good to me
<shaiguit1r> Hmm, OK
<raphink_> what do you see in /var/log/mail.log when you try to telnet localhost 25 ?
<raphink_> you're supposed to see something like
<raphink_> Apr  5 00:14:48 jonah postfix/smtpd[18408]: connect from localhost.localdomain[127.0.0.1]
<raphink_> Apr  5 00:14:53 jonah postfix/smtpd[18408]: disconnect from localhost.localdomain[127.0.0.1]
<shaiguit1r> oh wow that's lame
<shaiguit1r> http://pastie.org/private/mpwx9lk5fcuxpqmrmtoya
<shaiguit1r> raphink_: ^
<raphink_> hehe
<shaiguit1r> Sorry I'm pretty new at this
<patdk-lap> yep, if there is any config issue, smtpd will bomb
<raphink_> the fatal lines don't look too good ;-)
<shaiguit1r> that's weird though, I followed:
<shaiguit1r> https://help.ubuntu.com/10.04/serverguide/C/postfix.html
<shaiguit1r> on 10.04 ubuntu
<raphink_> let's see, you're missing aliases.db
<shaiguit1r> there's no mention of /etc/aliases.db
<raphink_> try
<raphink_> sudo touch /etc/aliases
<raphink_> sudo newaliases
<patdk-lap> aliases comes in by default
<patdk-lap> normally setup by the installer
<shaiguit1r> root@Ubuntu-1004-lucid-32-minimal ~ # ls /etc/aliases.db
<shaiguit1r> ls: cannot access /etc/aliases.db: No such file or directory
<shaiguit1r> root@Ubuntu-1004-lucid-32-minimal ~ # ls /etc/aliases
<shaiguit1r> /etc/aliases
<shaiguit1r> root@Ubuntu-1004-lucid-32-minimal ~ # cat /etc/aliases
<shaiguit1r> # See man 5 aliases for format
<shaiguit1r> postmaster:    root
<raphink_> sudo service postfix restart
<raphink_> then you're just missing "sudo newaliases" shaiguit1r
<shaiguit1r>  sudo touch /etc/aliases &&  sudo newaliases &&  sudo service postfix restart
<shaiguit1r> ?
<raphink_> yes
<raphink_> the touch is not necessary since you already have the file
<shaiguit1r> root@Ubuntu-1004-lucid-32-minimal ~ # sudo newaliases
<shaiguit1r> postalias: fatal: open database /etc/aliases.db: Permission denied
<shaiguit1r> Need to touch the db file first?
<raphink_> huhu
<raphink_> is your filesystem OK ? ;-)
<shaiguit1r> oh dont' get me worried :)
<patdk-lap> newaliases doesn't care about timestamps
<patdk-lap> it updates it, no matter what
 * shaiguit1r straces it
<red2kic> I have a question about whois.net -- Am I allowed to contact the owner? I hate lawyer jargons.
<raphink_> patdk-lap: I had recommended the touch in case the file didn't exist, not because of the timestamp
<patdk-lap> I know
<patdk-lap> but he seems to be stuck on timestamps
<shaiguit1r> hmm, weird!
<shaiguit1r> even after touching the file, it doesn't open it, and I'm root.
<raphink_> red2kic: the owner of whois.net ? or the owner of a domain?
<red2kic> raphink_: The owner of a domain name.
<raphink_> red2kic: if you've got the address, what prevents you from writing to someone?
<raphink_> shaiguit1r: did you check that your partition is not mounted read-only?
<red2kic> raphink_: I have next to zero experience with websites.
<patdk-lap> people call me from my whois info all the time
<patdk-lap> the usa spammer that did it, has moved to china though
<shaiguit1r> raphink_: I can touch and rm the file, so I doubt it.
<raphink_> by the way red2kic, there's a `whois` command that does the same as whois.net
<raphink_> shaiguit1r: do you have selinux set up on this box?
<shaiguit1r> root@Ubuntu-1004-lucid-32-minimal ~ # touch /etc/aliases.db && rm /etc/aliases.db && echo $?
<shaiguit1r> 0
<shaiguit1r> nope, don't think so
<red2kic> raphink_: Ah. That's a cool command!
<raphink_> shaiguit1r: you could still check
<raphink_> ps axZ | grep postfix
<raphink_> to see if it's confined
<shaiguit1r> bah!
<shaiguit1r> raphink_: my bad, /etc/aliases was owned by www-data!
<shaiguit1r> for some reason
<shaiguit1r> say, all of /etc/ should be owned by root, is that correct?
<raphink_> that shouldn't prevent root from writing to /etc/aliases.db
<shaiguit1r> newaliases just worked
<shaiguit1r> it did
<raphink_> really
<shaiguit1r> -rw-r--r-- 1 www-data www-data 51 2011-04-05 00:19 /etc/aliases
<shaiguit1r> other doesn't have w
<shaiguit1r> but yeah,that's weird.
<raphink_> that said, it's a better idea to give /etc/aliases to root than www-data ;-)
<shaiguit1r> :)
<shaiguit1r> right
<raphink_> giving your system conffiles to apache's user is usually a bad idea for other reasons ;-)
<shaiguit1r> thanks. So all of /etc/ can safely be moved to root right?
<raphink_> let's see
<raphink_> in general, yes, but not always
<raphink_> sudo find /etc/ -not -user root -exec ls -l {} \;
<raphink_> I've got a few files that don't belong to root
<shaiguit1r> which packages?
<raphink_> openfire configurations for example
<raphink_> but that's not standard confs
<raphink_> in general, everything belongs to root there
<shauno> I've only got one, bind/rndc.key is bind:bind.  a fair few which aren't root's group tho
<shaiguit1r> oh which?
<shaiguit1r> ah for DNS
<shaiguit1r> nothing else?
<raphink_> sudo find /etc/ -not -user root -exec ls -l {} \;
<raphink_> will list the files that don't belong to root
<shauno> http://paste.ubuntu.com/589454/   that's a fairly boring box, mail & dns.  group ownerships seem to be used in a fair few places tho
<shauno> root:root is a sane plan if your ownerships are seriously messed up, but there will be cleaning up to do
<shauno> (enough cleaning up that it wouldn't be my Plan A)
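raphink_'s audit command can be tightened a little; a sketch using GNU find's `-printf` (the output format here is the editor's choice, not anything from the log):

```shell
# List every file under /etc not owned by root, one line per match, without
# forking an `ls` per file (GNU find). Unreadable subdirectories are skipped.
find /etc -not -user root -printf '%M %u:%g %p\n' 2>/dev/null
```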
<patdk-lap> heh, my list of not root is much much larger
<patdk-lap> and my system isn't screwed up :)
<shaiguit1r> heh
<shaiguit1r> I did that chown to www-data in the past, it was my bad.
<patdk-lap> couchdb, quagga, ssl, shadow, cups, backuppc, munin
<patdk-lap> seems to be the big offenders
<shaiguit1r> chowned it back to root, we'll work our way through problems next up.
<shaiguit1r> OK, so the postfix works great, thanks for the help!
<shaiguit1r> Great community.
<shaiguit1r> ta.
#ubuntu-server 2011-04-05
<axisys> I have a samba share..i can access it in r/w mode .. is there a way to create a locked folder ?
<axisys> i created the samba share in 10.04.02
<axisys> and I access it from 10.10 or mac ppc .. no problem
<SpamapS> axisys: what do you mean by "locked" ?
<peedee> hi there..just wondering if anyone knows how to set the default region in an environment variable for ec2-api-tools? I'm in australia so I want to have all my instances in ap-southeast-1 but with all the command line tools like ec2-describe-instances for example, if you don't specify a region explicitly it assumes us-east-1
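peedee's question goes unanswered in the log; from memory of the EC2 API tools (worth verifying against their own documentation), they honour an `EC2_URL` environment variable, so a sketch would be:

```shell
# Point every ec2-* command at the ap-southeast-1 endpoint instead of the
# us-east-1 default; put this in ~/.profile to make it persistent.
export EC2_URL=https://ec2.ap-southeast-1.amazonaws.com
```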
<shadow42085> I am trying to setup a private mailserver. after installation and configuration I get to testing, and when I telnet into it and run ehlo mail.sitename.domain I dont see auth login and auth=login but I see everything else. any ideas?
<jjohansen1> hggdh: did you get to testing a plain kvm instance?  Also can we test natty kernel on maverick
<hggdh> jjohansen1: did not have time, with all the issues on the lucid kernel sru test
<hggdh> jjohansen1: but will get back to it asap, this is also critical...
<jjohansen1> hggdh: okay, just checking
<shadow42085> anyone have a solution to my issue or do I need to clarify more
<axisys> SpamapS: password protected
<SpamapS> axisys: there are a few ways to do it. The simplest one is, on the server side, create an smbpasswd record for the users you want to have access with 'smbpasswd -a user' .. then add 'valid users = user' to the entry in smb.conf. If you're sharing via the GUI, I think you can just list the valid users there.
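A sketch of the smb.conf entry SpamapS describes (the share name, path, and user "alice" are all placeholders, not anything from the log):

```
[locked]
   comment = password-protected share
   path = /srv/samba/locked
   read only = no
   valid users = alice
```

The Samba credential is created first with `sudo smbpasswd -a alice`, then smbd reloaded so the share entry takes effect.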
<uvirtbot> New bug: #750922 in rabbitmq-server (main) "package rabbitmq-server 1.7.2-1ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/750922
<RoAkSoAx> kirkland: apparently there's some issue with the virtinst python module as I can't get an Ubuntu instance to run
<kirkland> RoAkSoAx: ?
<kirkland> RoAkSoAx: with cobbler/koan?
<kirkland> RoAkSoAx: virt-inst is kind of always broken, in my experience
<RoAkSoAx> kirkland: koan, uses virtinst python module to install VM's in the host
<RoAkSoAx> kirkland: this uses the HTTP repo created by cobbler
<RoAkSoAx> kirkland: the Fedora repo is recognized correctly as a repo
<kirkland> RoAkSoAx: you might poke soren about virtinst tomorrow; as I recall, I think he is/was a fan of virtinst (?) but I could be completely wrong about that one
<RoAkSoAx> kirkland: yeah the Ubuntu repo is not recognized as one
<RoAkSoAx> kirkland: issue similar to: http://askubuntu.com/questions/29476/how-to-install-ubuntu-over-http-in-virtual-manager
<RoAkSoAx> kirkland: and yes will poke him tomorrow to try to get this fixed by the end of the week
<kirkland> RoAkSoAx: cool
<RoAkSoAx> kirkland: powernap is getting more and more attention, so cool :)
<twb> Grr, timeout(1) isn't part of ubuntu-minimal as at lucid
<twb> Breaks my .profile
<tonyyarusso> I'm trying to set up SpamAssassin with Postfix via Amavis-New per https://help.ubuntu.com/10.04/serverguide/C/mail-filtering.html .  I believe I have everything installed, but I do not have the X-Spam-* headers on my messages yet.
<soren> kirkland, RoAkSoAx: I'm not particularly fond of virtinst. It was (and is, I think) a necessary evil. Wazzup?
<nealmcb_> howdy soren :)
<soren> nealmcb_: Hi! :)
<soren> nealmcb_: Up late/early or in an unusual $TZ?
<nealmcb_> up late, headed to DC tomorrow^H^H later today for IDtrust.  I'm doing the opposite of a gradual adjustment to the 2 hr shift
<nealmcb_> but now time for bed....
<soren> nealmcb_: Have fun :)
<nealmcb_> having too much fun getting my system76 lemur fully installed and running better
<nealmcb_> thanks - and thanks again for your wise words on kvm, xen and who-knows-what....
<soren> Oh, thank *you* for the kind words ;)
<slime_> hello all, i need to choose opensource groupware solution , any recommendation  for  good one that support calendar ?
<soren> slime_: I've never used it myself, but I think citadel is what people tend to use.
<slime_> thanks, soren
<Zero_Dogg> EOL for 6.06 is marked as June this year, but has no date. That means June 1, right?
<twb> That would be the safest assumption, I think
<Zero_Dogg> yep, I figured as much, just wanted to doublecheck. Thanks :)
<soren> Wow, Dapper EOL's? Time sure does fly.
<Daviey> zul, Morning, can you check into https://code.launchpad.net/~ubuntu-branches/ubuntu/natty/lxc/natty-201104041611/+merge/56200 please?
<Kartagis> hi
<Kartagis> may I ask how to create a virtual domain/user please?
<zul> Daviey: if i have to :)
<Daviey> zul, Just thought you might /want/ to :)
<andygraybeal> mornning
<Daviey> morning andygraybeal !
<andygraybeal> :)
<Kartagis> may I ask how to create a virtual domain/user please?
<airtonix> Kartagis: your question doesn't make sense
<Kartagis> airtonix: I'm trying to configure postfix to use virtual domain
<Kartagis> I've got the related lines
<Kartagis> I'm adding them
<airtonix> Kartagis: what do you mean by "Virtual Domain" ?
<Kartagis> but how to create virtual domain/user?
<ikonia> Kartagis: I explained this to you
<Kartagis> airtonix: non-unix accounts
<ikonia> Kartagis: it depends how your mail server and imap server is setup
<ikonia> Kartagis: eg: things like backend account storage
<ikonia> what method you've setup for storing the usernames/passwords
<ikonia> Kartagis: there are many guides on the net, some even include web front end guis
<airtonix> kerberos/ldap etc etc
<ikonia> exactly
<ikonia> mysql is a popular choice, more so for web guis to manipulate
<Kartagis> ikonia: I am trying to configure the mail server now
<ikonia> ok - I understand that, but how you add the users depends on how you are setting it up
<ikonia> there are many options, (lots of guides too)
<Kartagis> ikonia: how I add the users? I don't know how to add non-unix accounts
<ikonia> Kartagis: it depends on the method you are using for holding the accounts and passwords
<ikonia> Kartagis: as I've said there are many options and many guides on how to setup these options, once you chose one, that is how you will add the users
<Kartagis> ikonia: I think I'll go for the non-unix account approach
<ikonia> you've said that
<ikonia> there are MANY ways to store non-unix accounts,
<ikonia> the method you chose dictates how you will add them
<airtonix> i like to store my non-unix accounts in outlook express reply mails
<airtonix> individually
<ikonia> ha ha
<ikonia> naughty response
<airtonix> :<
<awanti> i have edited some group policies on windows 7. now i want to deploy it to every windows 7 pc during logon from my samba pdc (i want to execute this "login script.cmd or .bat" file). how do i do that, plz?
<zul> morning
<jamespage> hey zul o/
<zul> hey jamespage how are you?
<jamespage> fine thanks - and you?
<zul> jamespage: good just trying to wake up
<RoAkSoAx> morning all
<raphink> hi RoAkSoAx
<RoAkSoAx> hi raphink
<raphink> what's up RoAkSoAx ?
<jdonnaruma> anyone have experience with installing imapproxy? attempting to start it at the end of the 'apt-get install imapproxy' fails every time
<axisys> 192.168.0.30:/export/arl_splunk /splunk nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.30,mountvers=3,mountproto=tcp,addr=192.168.0.30 0 0
<axisys> ^ cat /proc/mounts
<uvirtbot> axisys: Error: "cat" is not a valid command.
<axisys> how do I mount it ?
<axisys> so it sticks a reboot?
<pmatulis_> axisys: sticks a reboot?  what does that mean?
<axisys> 192.168.0.30:/export/arl_splunk /splunk nfs rw,relatime 0 0  <-- this should be enough for fstab based on the /proc/mounts ?
<axisys> pmatulis_: survives a reboot in other words
<pmatulis_> axisys: with the mount command i guess (mount /splunk)
<axisys> pmatulis_: the fstab entry looks kosher ?
<axisys> 192.168.0.30:/export/arl_splunk /splunk nfs rw,relatime 0 0
<pmatulis_> axisys: looks overly complex, not sure.  try it
<axisys> pmatulis_: stole it from /proc/mounts
<pmatulis_> axisys: not supposed to
<axisys> also rw,relatime,vers=4 will mount it as nfsv4 ?
<axisys> the nfs server supports nfsv4 .. how do I mount it as nfsv4 share instead of default nfsv3 ?
<Kartagis> how do I know if dovecot supports mysql?
<pmatulis_> axisys: read the ubuntu server guide
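For reference, the fstab entry axisys is after can be trimmed to just the non-default options (the long option list in /proc/mounts is mostly kernel defaults); the v4 line below is the usual approach, though the exact export path can differ if the server uses a v4 pseudo-root:

```
# /etc/fstab -- NFSv3 (defaults cover the rest of what /proc/mounts shows)
192.168.0.30:/export/arl_splunk  /splunk  nfs  rw,relatime  0  0
# NFSv4 alternative: use the nfs4 type (or add vers=4 to the options)
# 192.168.0.30:/export/arl_splunk  /splunk  nfs4  rw,relatime  0  0
```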
<zul> hggdh: can i use one of the test machines
<robbiew> RoAkSoAx: hey...we have gfs and ocfs packages in main, right?
<RoAkSoAx> robbiew: yes we do
<Kartagis> can I ask a question about postfixadmin?
<RoAkSoAx> robbiew: they have been there for ages now :)
<robbiew> RoAkSoAx: sweet
<robbiew> thnx
<zul> hallyn: i think i fixed the lxc/libvirt/openstack issue
<hallyn> what was it?
<axisys> we are planning to setup a secure ubuntu server as proof of concept.. it might replace some of the old servers that we are using to access our network.. any suggestion of good security audit tool that reports what is not secure in the box.. i saw bastille does it. but it does not look updated since 2008
<gagarine> hello I have a folder with permission like that:
<gagarine> drwxrwsr-x 10 root www-pub  4096 Apr  5 15:02 htdocs
<gagarine> but I can't write in the folder although I'm in the group www-pub
<gagarine> any idea why?
<gagarine> hooo nevermind...
<gagarine> I need to logout, login
<gagarine> ... i didn't know that groups were loaded only at login
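What gagarine ran into, sketched (the usermod line is shown as a comment because it needs root; www-pub is his group name from the log):

```shell
# Supplementary groups are read from /etc/group when a session starts, so after
#   sudo usermod -aG www-pub "$USER"
# an already-running shell still shows the old list:
id -nG        # current session's groups; stale until re-login
# `newgrp www-pub` opens a subshell with the new group active immediately,
# which avoids the logout/login round-trip.
```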
<zul> who is running the meeting today btw?
<jamespage> zul: me
<zul> ok good :)
<zul> jamespage: i can never remember, my attention span sucks
<jamespage> zul: :-)
<hggdh> hallyn: re. bug 746751 -- manually running KVM on the natty machine (under a maverick kernel) works; running under euca fails
<uvirtbot> Launchpad bug 746751 in linux "kernel: [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 30)" [Critical,In progress] https://launchpad.net/bugs/746751
<hallyn> hggdh: ok, i think i'm going to focus on that one this afternoon.
<hallyn> hggdh: you use a lot of bare metal - have you seen any libvirt crashes on startup?
<hallyn> (there is a libpciaccess thread safety bug, with lots of dups, but hard to reproduce, not seemingly possible on my laptops)
<hggdh> hallyn: I have not seen any crashes so far
<hallyn> ok, thanks
<hggdh> but I will keep an eye for it...
<jjohansen> hggdh: any news?  /me has been off line
<hggdh> jjohansen: yes. manually running a KVM image on the machine (natty-based, but running a maverick kernel) is working
<hggdh> the KVM image is for lucid, BTW. I do not think I could create a more convoluted environment
<jjohansen> hggdh: have you tried natty + natty kernel
<jjohansen> :)
<bdamos> hi, here: http://www.ubuntu.com/business/server/virtualisation, it says ubuntu can be configured with a low footprint, but i can't find much documentation on how to do this. does it just mean services can be disabled to reduce the footprint? or is it more to it then than?
<bdamos> that*
<hggdh> jjohansen: not yet... I am waiting for this test to end (I am running a kernel SRU test there, and it is working better than on my laptop)
<hggdh> but this will be the next test, already in view ;-)
<orudie> greetings. how can I get a list of every user set with 'sudo usermod -s /bin/false user' or 'sudo chsh -s /usr/sbin/nologin username', and what is the difference between the two ?
<gagarine> how do I inherit the owner from a parent directory? it's for /var/www because more than one user writes files in this directory
<hggdh> orudie: nologin gives you an error message
<gagarine> i made a group www-pub with write access and added users to it
<gagarine> www and content belong to root for the moment
<orudie> hggdh, it doesn't
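orudie's two questions can be sketched as follows. The awk line lists accounts whose login shell is one of the two; the practical difference is that `/usr/sbin/nologin` prints "This account is currently not available." before exiting, while `/bin/false` exits silently — both return nonzero, so neither allows an interactive login:

```shell
# List accounts whose shell is /bin/false or /usr/sbin/nologin:
awk -F: '$7 ~ /(false|nologin)$/ {print $1, $7}' /etc/passwd
# /bin/false exits silently with a nonzero status:
/bin/false; echo "false exit status: $?"
```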
<h4x0r_x4x0r> morning
<orudie> sup
<gagarine> drwxrwsr-x 10 root www-pub  4096 Apr  5 15:23 htdocs
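The `s` in gagarine's `drwxrwsr-x` is the setgid bit, which is also the usual answer to the group-inheritance question: new files created inside a setgid directory inherit the directory's *group* (owner inheritance is not possible on Linux). A minimal demo under /tmp, with the caller's own group standing in for www-pub:

```shell
DEMO=/tmp/setgid-demo
mkdir -p "$DEMO"
chmod 2775 "$DEMO"      # leading 2 = setgid bit -> drwxrwsr-x
touch "$DEMO/newfile"   # newfile's group is inherited from DEMO
ls -ld "$DEMO" "$DEMO/newfile"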
<h4x0r_x4x0r> I'm looking for C++ IDE working in cloud
<h4x0r_x4x0r> anyone of you know this?
<RoAkSoAx> kirkland: zul SpamapS settle a doubt for me: in Debian/Ubuntu, if software X installs a folder in /var/run/<folder>, then on boot, is /var/run/<folder> removed, or should files under it be cleaned up?
<kirkland> RoAkSoAx: /var/run is cleared on every reboot
<kirkland> RoAkSoAx: it's a tmpfs in memory
<kirkland> RoAkSoAx: /var/tmp is static across boots
<kirkland> RoAkSoAx: "preserved" rather
<RoAkSoAx> kirkland: right, is this something in what we differ from other distros? (Cause talking to a suse developer and they insist that directories should not be removed, but just clear files under the directories)
<RoAkSoAx> zul: yeah I'm aware of that. but the discussion came I had was with upstream because of bug #751344
<uvirtbot> Launchpad bug 751344 in cluster-agents "Cluster resource agents fail to run because of missing /var/run/resource-agents directory" [Low,Confirmed] https://launchpad.net/bugs/751344
<RoAkSoAx> zul: on which, upstream makefile installs /var/run/resource-agents, and one of the upstream developers was saying that folders under /var/run/<folder>/* are not removed, but just cleared on boot
<RoAkSoAx> zul: and I was saying that from my understanding, everything is cleared under /var/run on boot and that init scripts create folders (if necessary)
<zul> RoakSoAx: thats right from my understanding as well
<hallyn> hggdh: kvm by hand works, kvm through ecau fails - what about a hand-created libvirt kvm instance?
<hallyn> (I'll try it out this afternoon if you haven't)
<hggdh> hallyn: what would you need on this hand-crafted instance?
<hallyn> nothing, just define a fresh backing file and hook it up to an iso to boot from, see if it works...
<RoAkSoAx> zul: indeed! I guess this is an upstream error then
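The convention RoAkSoAx and zul settle on can be sketched as the usual init-script preamble: since /var/run is a tmpfs and empty after every boot, each service recreates its own runtime directory at startup. In this sketch /tmp/demo-var-run stands in for /var/run so it runs unprivileged, and "myservice" is a hypothetical service name:

```shell
# /var/run is empty after boot, so an init script recreates its dir.
# /tmp/demo-var-run stands in for /var/run; "myservice" is hypothetical.
RUNDIR=/tmp/demo-var-run/myservice
mkdir -p "$RUNDIR"
chmod 0755 "$RUNDIR"
echo $$ > "$RUNDIR/myservice.pid"
ls "$RUNDIR"
```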
<parkdriver> Is there anyone here with Webmin experience? I was wondering if there are any security/performance/etc issues that i should know of.
<parkdriver> I'm planning to use it on a production server.
<parkdriver> Of course doing everything with the command line is always the purest form of management but I guess I want to simplify things a bit without causing a lot of problems.
<parkdriver> Alternatives to Webmin are of course welcome as well.
<pmatulis_> !webmin | parkdriver
<ubottu> parkdriver: webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<pmatulis_> !ebox | parkdriver
<ubottu> parkdriver: ebox is a web-based GUI interface for administering a server. It is designed to work with Ubuntu/Debian style configuration management. See https://help.ubuntu.com/community/eBox
<parkdriver> Good to hear in advance!
<parkdriver> I tried eBox / Zentyal but the way it's designed just makes no sense
<parkdriver> Configuring a single service is a pain because the configuration pages are spread all over the system. Not quite the minimalistic approach I was looking for.
<parkdriver> Oh I see that Zentyal is essentially version 2 of eBox.
<mok0_> parkdriver: on an Ubuntu server, all configuration files are in /etc/*
<mok0_> not "spread all over the system"
<parkdriver> with etc i meant 'et cetera'
<parkdriver> and i was referring to the way that \Zentyal'
<parkdriver> handles the configuration of services
<parkdriver> which is quite counter intuitive, if you ask me
<kuramanga> Hey guys, just wondering if any of you have experience getting current total disk space using Python? I've been poking around with using os.statvfs but can't get the numbers to match up and I'd rather not use subprocess to trigger df or fdisk calls.
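kuramanga's question goes unanswered in the log; the usual cause of mismatched statvfs numbers is multiplying block counts by the wrong block size. The same fields that Python's os.statvfs exposes as f_frsize/f_blocks/f_bavail can be checked from the shell:

```shell
# %S = fundamental block size (f_frsize), %b = total blocks (f_blocks),
# %a = blocks free to unprivileged users (f_bavail).
total=$(( $(stat -f -c '%S * %b' /) ))
avail=$(( $(stat -f -c '%S * %a' /) ))
echo "total bytes: $total, available bytes: $avail"
```

Multiplying by f_bsize instead of f_frsize is the classic way to get numbers that don't match `df`.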
<mok0_> parkdriver: Sorry I misunderstood you
<axisys> pmatulis_: ubuntu server guide https://help.ubuntu.com/10.04/serverguide/C/network-file-system.html nfs client mount syntax for fstab does not look right
<axisys> pmatulis_: example.hostname.com:/ubuntu /local/ubuntu nfs rsize=8192,wsize=8192,timeo=14,intr
<axisys> not all the columns are there
<pmatulis_> axisys: what columns?
<axisys> pmatulis_: these two columns are missing
<axisys>  <dump>  <pass>
<pmatulis_> axisys: you don't need them
<axisys> <file system> <mount point>   <type>  <options>       <dump>  <pass>  not all these columns need to be populated?
<axisys> pmatulis_: did not know that
<axisys> pmatulis_: thanks
<pmatulis_> axisys: yw
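For completeness, the same guide line with all six columns spelled out; dump and pass default to 0 when omitted, which is why pmatulis_ says they are not needed:

```
# <file system>              <mount point>  <type> <options>                           <dump> <pass>
example.hostname.com:/ubuntu /local/ubuntu  nfs    rsize=8192,wsize=8192,timeo=14,intr 0      0
```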
<twopoint718> Under 10.10 server, I have noticed that until I make an outgoing network connection, I cannot (for example) ssh into it.  The same holds true for pinging, no response from the server until I first make some outgoing connection.  This is very weird and as far as I can tell I have checked obvious things (no firewall is present, ipconfig shows the correct public IP, correct routes, sshd is up and running at boot)
<pmatulis_> twopoint718: switch issue?
<twopoint718> pmatulis_: this is connected directly via ethernet to a wall jack.
<twopoint718> I just did a test here, I pinged google from the server and then I was able to ssh into it from my desktop
<twopoint718> Is there some kind of keepalive or "wakeup" that I need to configure for ssh?
<patdk-wk> twopoint718, what is this *wall jack*
<patdk-wk> wall jacks don't just work, they have to go somewhere
<semiosis> twopoint718: ipconfig?
<patdk-wk> semiosis, using windows?
<twopoint718> semiosis: ifconfig shows a proper config.  And like I said, I *can* connect, but only if the server is the one initiating it.
<twopoint718> patdk-lap: it's a big cisco switch, it has about 50 other things plugged in and so there'd be lots of people having trouble if it were the switch.
<semiosis> twopoint718: does that switch do any NAT at all?
<twopoint718> semiosis: no, this is a public-facing ip
<patdk-wk> big cisco switchs tend to have arp issues
<patdk-wk> do you keep changing arp or ports?
<semiosis> twopoint718: it could still do NAT... but if it's not, then as patdk-wk suggests, sounds like ARP
<patdk-wk> mac
<patdk-wk> normally cisco switchs are set to ignore arp updates and to cache mac for like 4-8hours
<patdk-wk> so most users never have an issue
<semiosis> if you got back to the beginning state, where you could not ssh in, then you might tcpdump on the server and see if you're getting ARP queries when you try the failed SSH attempt
<twopoint718> lemme tcpdump... thanks.
<kirkland> RoAkSoAx: are you still working on cobbler?
<RoAkSoAx> kirkland: not at the moment
<RoAkSoAx> kirkland: what can I help you with though?
<kirkland> RoAkSoAx: I was hoping you'd integrate an nqa-like preseed
<kirkland> RoAkSoAx: before GA
 * RoyK just read on the zfs list about this guy that has stuffed 45 drives into the same raidz1 VDEV and now seems surprised it fails
<RoAkSoAx> kirkland: yeah I just need to figure out what's wrong with virtinst which I'm gonna look again at later today
<kirkland> RoAkSoAx: k
<RoAkSoAx> soren: ping
<Lanta|N900> hey if i wanted to install ubuntu but select nothing on tasksel (i.e. base minimal system, not even "standard system utilities") then can i do that from a normal ubuntu disc or do i need the server disc?
<twopoint718> semiosis, patdk-lap: thanks for all the help.  I've tracked it down (duh) to an ip conflict. "arping" showed two different MACs
<RoAkSoAx> kirkland: so this is what I get when trying to install the isntance with koan http://pastebin.ubuntu.com/589824/
<semiosis> twopoint718: it's always the simple stuff... glad you figured it out :)
<RoAkSoAx> kirkland: it is definitely something wrong with virtinst that can't correctly determine if the ISO is an ubuntu ISO from an HTTP location
<suigeneris> hello
<suigeneris> I am reading this page: http://bliki.rimuhosting.com/space/knowledgebase/linux/mail/postfixadmin+on+debian+sarge and it says an auth_userdb line should be added to dovecot.conf. but when I restart, I get Unknown setting: auth_userdb. any thought would be appreciated
<Roasted> how do I make dhcp3 auto start on an ubuntu server?
<Roasted> I have it installed but it doesnt auto start when I fire up the box
<semiosis> Roasted: update-rc.d?
<patdk-wk> edit /etc/defaults/dhcp*
<patdk-wk> you have to define the interfaces it should use, before it will start
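What patdk-wk is pointing at (the exact path is an assumption from the dhcp3-era package name in the question): the daemon exits at boot if no interface is configured, which looks exactly like "doesn't auto start".

```
# /etc/default/dhcp3-server
# On what interfaces should the DHCP server (dhcpd) serve requests?
INTERFACES="eth0"
```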
<zul> SpamapS: so what do you think of https://bugs.launchpad.net/ubuntu/+source/samba/+bug/740777?
<uvirtbot> Launchpad bug 740777 in samba "smbd.conf needs to wait for network up event" [Medium,New]
<SpamapS> uvirtbot: hm
<uvirtbot> SpamapS: Error: "hm" is not a valid command.
<suigeneris> postfix/trivial-rewrite[30125]: warning: connect to mysql server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) <--- what's this error? the mysql server is running and the sock file is there
<RoAkSoAx> kirkland: alrighty!! Finally I got the ISO to launch with virtinst, which I had to patch... now will look into the nqa stuff
<SharkOn> hey im trying to conf one of my pages to run apache on port 88, i have added NameVirtualhost *.88 and listen 88 to ports.conf, and my page is using virtualhost *:88 in sites-enabled, i have also opened port 88, but still not working, does anyone know if i have forgot something?
<osmosis> what would cause a sudden drop in # of inode table size?
<ivoks> zul: i doubt that's required
<RoAkSoAx> kirkland: NQA
<kirkland> RoAkSoAx: ?
<kirkland> RoAkSoAx: working?
<RoAkSoAx> kirkland: not quite yet :( just failed when trying to retrieve some packages I think, but at least I got it to launch :)
<RoAkSoAx> kirkland: command: http://pastebin.ubuntu.com/589873/ had to manually change the preseed in cobbler_web and add the extra options
<patdk-wk> osmosis, that is easy
<patdk-wk> the kernel using that memory for something else :)
<RoAkSoAx> kirkland: ok, nqa seems to be working now  YaY \o/
<kirkland> RoAkSoAx: \o\  /o/  \o\  /o/
<RoAkSoAx> kirkland: i'll upload it to ppa in  a bit
<david5345> I added a user using the normal useradd -m newuser && passwd newuser command, but auth.log says "Failed password for invalid user newuser... " Oddly enough I've tried two different user names, and I have done this about 100 times before, why won't it work now ? grepping /etc/passwd and /etc/shadow shows the user has been created and ls /home shows the user's directory. Ubuntu 8.04 LTS
<phoenixsampras> does 10.04 have older packages than 10.10? e.g. ruby
<Jayro> hello, could anyone help me get sound working on Ubuntu 10.10 server edition. I am using mp3blaster but when i try to play a song i get "failed to open sound device"
<guntbert> Jayro: excuse me - sound and server ? how does that fit?
<Jayro> jukebox with a http interface
<guntbert> !info ruby
<ubottu> ruby (source: ruby-defaults): An interpreter of object-oriented scripting language Ruby. In component main, is optional. Version 4.5 (maverick), package size 21 kB, installed size 120 kB
<guntbert> !info ruby lucid
<ubottu> ruby (source: ruby-defaults): An interpreter of object-oriented scripting language Ruby. In component main, is optional. Version 4.2 (lucid), package size 20 kB, installed size 100 kB
<guntbert> phoenixsampras: obviously not ^^
<RoAkSoAx> kirkland: btw.. cobbler package keeps adding diffs automatically
<kirkland> RoAkSoAx: yeah, wassup with that?
<RoAkSoAx> kirkland: no idea... one of the weirdnesses of quilt :)
<RoAkSoAx> kirkland: i think it's because of a leftover from the .pc dir
<zul> RoAkSoAx: wha?
<phoenixsampras> what is the latest ruby package on 10.04 ?
<RoAkSoAx> zul: get the source from cobbler
<RoAkSoAx> zul: and you'll see that there's
<guntbert> !info ruby lucid
<ubottu> ruby (source: ruby-defaults): An interpreter of object-oriented scripting language Ruby. In component main, is optional. Version 4.2 (lucid), package size 20 kB, installed size 100 kB
<RoAkSoAx> zul: and you'll see that there's debian-changes-2.1.0-0ubuntu4
<RoAkSoAx> n debian/patches
<guntbert> phoenixsampras: ^^
<phoenixsampras> 4.2? but they are on 1.9
<guntbert> phoenixsampras: who is?
<phoenixsampras> ruby
<guntbert> phoenixsampras: strange
<Jayro> could someone please help me get sound working on ubuntu server
<guntbert> phoenixsampras: see http://packages.ubuntu.com/lucid/ruby/ please
<guntbert> Jayro: excuse me - sound and server ? how does that fit?
<Jayro> jukebox with a http interface
<Jayro> guntbert, ^
<SpamapS> RoAkSoAx: right, thats because you have differences between orig and your tree that aren't tracked in quilt
<guntbert> Jayro: I don't get it - but nvm - it was only curiosity
<raphink_> guntbert: we have a kernel expert in our team that loves to code sound support early in the boot process to make servers do a duck sound when they turn on ;-)
<raphink_> apart from that, dunno ;-)
<Jayro> guntbert, do you think you could help me get the sound working?
<guntbert> Jayro: really - no - sorry
<RoAkSoAx> SpamapS: yeah, which shows that those changes shouldn't have appeared in the first place. I think the way to resolve it is quilt pop -a and then rm -rf .pc, and that should be it
<Jayro> does anyone know of a way to test the sound so i can see if it is just mp3 blaster?
<TeTeT> Jayro: maybe aplay some wav file?
<SpamapS> RoAkSoAx: yeah. Not sure. You guys still trying to fix the "bug" of it not being as usable as you'd like? ;)
<Jayro> TeTet, i ran speaker-test and got alsa errors :S
<RoAkSoAx> SpamapS: nah.. it is usable now. I just fixed koan (and virt-inst) to install KVM instances, and if using kirkland's NQA preseed, then everything is automatic :)
<SpamapS> RoAkSoAx: glad you fixed that "bug"
<SpamapS> :)
<TeTeT> Jayro: hmm, maybe the wrong model is used, do you have the lspci -vvnn output somewhere on pastebin or so?
<Jayro> Tetet, give me a sec
<Jayro> TeTeT, lspci -vvnn output gives me the help for lspci
<uvirtbot`> New bug: #751959 in cobbler (universe) "koan cannot deploy Ubuntu iKVM instances" [Wishlist,Confirmed] https://launchpad.net/bugs/751959
<TeTeT> Jayro: huh? you sure you used only one '-'?
<Jayro> lspci -vvnn output is exactly what i used
<soren> RoAkSoAx: pong
<RoAkSoAx> soren: do you maintain/care about virtinst?
<TeTeT> Jayro: what's the output of lspci then?
<soren> RoAkSoAx: Do I detect a trick question?
<Jayro> TeTeT, it just gives me the help file
<soren> RoAkSoAx: Why do you ask? :)
<RoAkSoAx> soren: Installation from an HTTP source fails (I was testing with cobbler/koan). So I patched it, and just wondering if you cared enough/know about it enough, to give an opinion
<soren> RoAkSoAx: Ah. No, not at all.
<Jayro> TeTeT, http://pastebin.com/i9QCujnL
<RoAkSoAx> soren: alright, thanks though
<TeTeT> Jayro: hmm, which ubuntu version? 10.04?
<Jayro> server 10.10
<gagarine> As I understand it, we can set a umask on a process but not on a directory... So there is no way to set a default set of permissions for child directories?
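gagarine's follow-up is right that umask is per-process, but default ACLs come close to per-directory defaults: children created inside the directory inherit the ACL. A hedged sketch (requires the acl package and acl support on the filesystem; www-pub is the group from earlier, so the command is expected to fail gracefully on a box without it):

```shell
DEMO=/tmp/acl-demo
mkdir -p "$DEMO"
# Give the directory a *default* ACL that new children will inherit:
setfacl -d -m g:www-pub:rwx "$DEMO" 2>/dev/null \
    || echo "setfacl unavailable or group missing (expected on a stock box)"
# Default entries show up prefixed with "default:" in getfacl output:
getfacl "$DEMO" 2>/dev/null | grep '^default' || true
```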
<Jayro> TeTeT,
<Jayro> ^
<TeTeT> Jayro: still digging through google links. how does your alsa-base.conf look like?
<Jayro> TeTeT, where is that lcoated?
<Jayro> located*
<TeTeT> Jayro: maybe this of help: https://help.ubuntu.com/community/SoundTroubleshooting
<TeTeT> Jayro: /etc/modprobe.d/alsa-base.conf
<TeTeT> Jayro: for example my soundcards line there has options snd-hda-intel power_save=10 power_save_controller=N model=toshiba
<TeTeT> Jayro: as I use a toshiba laptop the card has some sort of special wiring
<Jayro> yeah, ill pastebin my /etc/modprobe.d/alsa-base.conf
<TeTeT> Jayro: this page also contains relevant info: https://wiki.ubuntu.com/DebuggingSoundProblems
<Jayro> ill look, but here is my alsa-base.conf
<Jayro> http://pastebin.com/a9vTZRKy
<Jayro> TeTeT, ^
<RoAkSoAx> kirkland: ok, so I just uploaded cobbler and virtinst to ubuntu-virt PPA
<RoAkSoAx> kirkland: please give it a test
<RoAkSoAx> kirkland: you'll need to add the nqa seed at /var/lib/cobbler/kickstarts/
<RoAkSoAx> kirkland: and then modify the profile for the server ISO you've imported with cobbler
<RoAkSoAx> kirkland: to reflect the changes in the kickstart (pointing at the nqa seed) and add the kernel options (priority etc etc)
<RoAkSoAx> kirkland: you should be able to launch it with a command such as:
<Jayro> TeTeT, hows it look? Im guessing i have to add a line but idk what it is
<TeTeT> Jayro: this thread seems to suggest all is fine with your card on 10.10. http://ubuntuforums.org/showthread.php?t=1666715 any chance to try the desktop cd / usb key for checking it?
<RoAkSoAx> kirkland: sudo koan --server=192.168.1.118 --virt --profile=ubuntu-server-i386 --virt-bridge virbr0
<TeTeT> Jayro: I would guess option snd-hda-intel model=<something>
<TeTeT> Jayro: but I have no idea what something should be
<Jayro> hmm.. maybe Nvidea?
<TeTeT> Jayro: https://help.ubuntu.com/community/HdaIntelSoundHowto
<Jayro> thanks
<TeTeT> Jayro: good luck, I need to logoff now, midnight here
<Jayro> TeTeT, thanks cya
<phoenixsampras> how do I know what version my ubuntu server is?
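The question never gets an answer in the log; either of these shows the release on an Ubuntu server:

```shell
# lsb_release is the usual tool; /etc/os-release is the fallback
# when the lsb-release package is not installed.
lsb_release -a 2>/dev/null || cat /etc/os-release
```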
<zertyui> hello anyone there ?
<adam_g> /window 4
<parkdriver> How can you configure SSH to block IP's on the basis of the number of times the IP tries to connect to the SSH service?
<soren> parkdriver: You can't. Look at fail2ban instead.
<parkdriver> soren: thanks i'll look into it
<parkdriver> soren: does that work with ufw?
#ubuntu-server 2011-04-06
<qman__> actually you can
<qman__> with rate limiting in iptables
<parkdriver> qman__:  Thanks, that was what I was looking for: rate limiting.
<qman__> parkdriver, pretty sure UFW has this built in now
<qman__> something like sudo ufw allow ssh limit
<qman__> you'd have to RTFM to verify
<parkdriver> Oh, that's great.
<parkdriver> I'll Read The Fine Manual :)
<qman__> glad to help :)
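Doing the manual-checking qman__ recommends confirms both routes: the ufw spelling is `ufw limit` (not `ufw allow ssh limit`), and underneath it is the iptables `recent` match. A sketch, to be run as root; the 6-hits-in-30-seconds thresholds are illustrative:

```shell
# ufw's built-in version of SSH rate limiting:
sudo ufw limit ssh/tcp
# Roughly the same thing in raw iptables, using the "recent" match:
# track new SSH connections, then drop a source that opens too many.
sudo iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --set --name SSH
sudo iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --update --seconds 30 --hitcount 6 --name SSH -j DROP
```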
<SpamapS> parkdriver: you may also want to look into denyhosts
<SpamapS> parkdriver: it has a giant list of known bad hosts that have tried too many times on other servers.
<parkdriver> SpamapS: Is that list already included with Ubuntu?
<SpamapS> parkdriver: its in universe
<SpamapS> parkdriver: its one of the first things I install on any box I need to open up ssh to the world on. ;)
<parkdriver> I've the feeling that I still have to do that and that my ISP delivered a quite open Ubuntu box
<parkdriver> SpamapS: you only open up ssh to certain IP's?
<SpamapS> parkdriver: if I can live with that sort of config, yes
<SpamapS> parkdriver: I always have one or two machines out there that I can ssh to from anywhere that become my "bounce" hosts.
<SpamapS> parkdriver: running sshd on another port also helps.
<parkdriver> SpamapS: Yes, I was planning on something obscure like port 39327.
<parkdriver> SpamapS: I don't have the luxury of multiple boxes that I can SSH through so I've got to be careful.
<parkdriver> SpamapS: Don't want to end up locking myself out.
<parkdriver> SpamapS: Is an Ubuntu Server installation open to everything by default?
<SpamapS> parkdriver: yes, denyhosts has locked one of my users out because he was on a public wifi in .nl that had been used to hack before. ;)
<parkdriver> SpamapS: In other words: is an unconfigured Ubuntu Server vulnerable?
<SpamapS> parkdriver: no, ubuntu server by default has only avahi open. OpenSSH server is an optional package.
<SpamapS> parkdriver: another good option is to always have SSH keys with you, and turn off password auth
<parkdriver> SpamapS: Yeah, good one.
<parkdriver> SpamapS: I just scanned the newly installed host and it says hundreds of ports are open
<SpamapS> parkdriver: you must have added services then
<parkdriver> SpamapS: I guess that only is a potential risk when there's a service running..
<SpamapS> parkdriver: did you pick things like "LAMP server" ?
<parkdriver> SpamapS: Well, no. Only openssh-server and htop.
<SpamapS> parkdriver: sudo netstat -tnlp will show you all listening TCP ports and programs listening on them
<parkdriver> SpamapS: No, it's a clean install. No out of the box services except for openssh-server.
<SpamapS> I doubt that
<parkdriver> SpamapS: Just ran the command and it only returns 'sshd'
<SpamapS> then you either a) are mistaken with your scan, or b) are rooted, and the rootkit is hiding all of the services running.
<parkdriver> SpamapS: Discovered open port 49101/tcp on x.x.x.x
<parkdriver> SpamapS: That's what zenmap (nmap gui) says to me.
<parkdriver> SpamapS: I guess the port is 'open' but there's nothing listening on the port..
<SpamapS> parkdriver: another possibility is your ISP is running a tarpitting firewall that shows open ports but responds VERY SLOWLY to slow down port scanners.
<parkdriver> SpamapS: I'm probably misinterpreting the output of zenmap.
<SpamapS> parkdriver: yeah. ;)
<jjohansen> hggdh: any news?
<SpamapS> anyway, I'm off. Good luck!
<parkdriver> SpamapS: Then my ISP is doing a good job. Scanning takes ages to complete.
<parkdriver> SpamapS: Thanks for the help. Bye!
<hggdh> jjohansen: no, hallyn was going to have a go there this evening
<jjohansen> hggdh: I haven't been able to reproduce locally
<jjohansen> well at least not that bug, I have plenty of other bugs popping up
<jjohansen> hggdh: it may also be worth trying the oneiric kernel
<bastidrazor> Apr  5 20:43:44 servitude unbound: [11904:0] notice: quit on signal, no cleanup and statistics, because installed libevent version is not threadsafe
<bastidrazor> launchpad reports there was a fix but i'm still getting this.
<madteckhead> Hey, just setting up my first ec2 server to serve django and wondering what a good base AMI is? Should I use amazon's AMI or the Ubuntu Canonical ones?
<madteckhead> any advice much appreciated.
<hggdh> jjohansen: where can I find the oneiric kernel? the ppa?
<jjohansen> hggdh: hrmm, I don't think it's been pushed to a ppa yet, I can build one for you
<jjohansen> hggdh: but the other tests take priority, the oneiric kernel is 2.6.39 so it just gives us another point on the kernel timeline to look for patches
<hggdh> jjohansen: OK. Let's see what hallyn comes up with, then
<uvirtbot`> New bug: #752172 in logwatch (main) "Ubutu-specific afpd configuration missing" [Undecided,New] https://launchpad.net/bugs/752172
<ruben23> hi guys i changed a username password on my ubuntu server but when i reboot i cant log in with the new password, any help, im lockdown
<ruben23> locked out i mean
<woonix> From the Grub bootloader, select the (recovery mode) option
<woonix> then you will be able to change the password and reboot
<Kartagis> hello
<Kartagis> can you help me with http://pastebin.com/6u4hDr2G please?
<jmarsden> Kartagis: I am about to go to bed, but that really looks like a file or directory permissions issue, so check that perms on each directory in /var/mail/vhosts/bilgisayarciniz.org/ allow directory traversal by the user concerned, and check permissions on the /var/mail/vhosts/bilgisayarciniz.org/bilgi file itself also, if it exists.
<sky> hello all.... which config file do i have to look at to trace a connection over ldap on port 389 to an MS AD server, ubuntu 10.04 LTS
<sky> i mean log file
<MetaJake> might anyone suggest some web-server alternatives to apache?
<joschi> MetaJake: nginx and lighttpd are rather popular
<Kartagis> MetaJake: tomcat, nginx, lighttpd
<joschi> MetaJake: what's your usecase?
<MetaJake> joschi, sorry what is usecase? My situation? Just some static html and css. Down the road maybe some dynamic content with python + mysql.
<MetaJake> kartahgis, I see. thank you
<MetaJake> kartagis * ^
<Kartagis> MetaJake: why not use apache?
<MetaJake> kartagis, apache is what I started with. I am new to linux web-servers in general and I wonder what options are out there, and how their capabilities compare to Apache.
<twb> MetaJake: http://en.wikipedia.org/wiki/Comparison_of_http_servers
<twb> MetaJake: key questions are: do you need PHP?  SSI?  CGI?  Authentication?  vhosts?
<twb> A naïve HTTP server can be implemented in a dozen lines of sh.
<twb> After apache, lighttpd and nginx are perhaps the most popular.
<MetaJake> twb I see thank you. What do you mean by sh?
<MetaJake> (dozen lines of SH)
<twb> shell
<twb> MetaJake: /bin/sh
<MetaJake> twb, I see
<twb> For just serving files out to anonymous clients, I use thttpd or (internally) busybox httpd
<uvirtbot`> New bug: #752361 in cloud-init (main) "grub prompts for install device on upgrade" [High,New] https://launchpad.net/bugs/752361
<wok> hey guys. i've created an ubuntu instance on ec2 using an official canonical image. how do i enable ssh on there without any current access?
<Daviey> wok, ssh should already be enabled... what you may find is that you need to modify your security group to allow access on port 22... This is an aws issue
<wok> Daviey: sorry, yea, id mistyped the port in the group :p
<wok> thanks anyhow :)
<Daviey> cool
<uvirtbot`> New bug: #752429 in drbd8 (main) "drbd8-utils dependancy on drbd8-source can't work with maverick & natty kernel backports" [Undecided,New] https://launchpad.net/bugs/752429
<uvirtbot`> New bug: #752487 in php5 (main) "Segmentation Fault in libapache2-mod-php5    5.3.2-1ubuntu4.7" [Undecided,New] https://launchpad.net/bugs/752487
<RoAkSoAx> morning all
<pmatulis_> why do bots get a backtick?
<Kartagis> if I want to deliver to ~/Maildir rather than /var/mail/%u, should I edit dovecot.conf namespace private location?
<jpds> pmatulis_: I think that's the supybot default for alternative nicks (nice tail by the way)
<krux> Kartagis, if you're using namespace private { you can use location << yes..
<phoenixsampras> when i do upgrade from 10.04 to 10.10, how can i keep using grub1?? i dont want grub2
<SharkOn>  hey im trying to conf one of my pages to run apache on port 88, i have added NameVirtualhost *.88 and listen 88 to ports.conf, and my page is using virtualhost *:88 in sites-enabled, i have also opened port 88, but still not working, does anyone know if i have forgot something?
<semiosis> SharkOn: service apache reload ?
<semiosis> s/apache/apache2/
<SharkOn> i do that after every conf :)
<SharkOn> ooh, now it suddenly works
<SharkOn> :D
<SharkOn> thanks anyway
<semiosis> lol
<SharkOn> service apache2 reload = /etc/init.d/apache2 restart
<SharkOn> right?
<SharkOn> i did the second one always, now i tried the first one and it seems to work
<pmatulis_> jpds: hm, not sure why i have a tail in this channel and not in others
<semiosis> no, reload != restart
<SharkOn> but a restart reloads the conf?
<semiosis> reload causes apache to reread its configs but the process never goes away... restart quits & re-starts, which of course also rereads configs at startup like always
<chrismat> Is it possible to use SystemTap on 10.04?
<chrismat> I need to trace down a source of latency in our file server
<SharkOn> semiosis: ok thats what i thought, strange that it works now then, but good :D
<semiosis> SharkOn: but usually "service xxx command" is the same as "/etc/init.d/xxx command", except when its not
<SharkOn> okey :)
<semiosis> SharkOn: (such as with upstart jobs)
<SharkOn> when i have a page on some port other than 80, is it possible for someone to know which port it is? cuz now i write say www.mydomain.com:88
<SharkOn> is it possible for someone to find out it's 88 for that page ?
<semiosis> well if you tell them the url, the port is clearly right there
<SharkOn> how would they see that if i have ServerSignature Off on apache?
<SharkOn> just asking, fun to know
<semiosis> if you mean can people discover it without being told, yes they can do a port scan such as with nmap
<SharkOn> ooh i forgot about nmaping :)
<semiosis> SharkOn: if you wanted to hide your web server even from a port scan, one way to do that is with a technique called "port knocking" which is very well explained here http://en.wikipedia.org/wiki/Port_knocking
<SharkOn> ah okey didn't know about that, i don't want to do that for now, but good to know so i will read about it, thanks :)
<semiosis> have fun!
<robos> Hi: So I'm about to inherit a bunch of ubuntu machines, so it's pretty new to me. What is the general thought of compiling vs. packages on ubuntu?
<semiosis> use packages and apt-get on with your life ;)
<semiosis> that's my general thought anyway
<robos> semiosis, will things break if I mix the two?
<robos> For example, with rhel you always want to use RPMs because that's how all system updates and etc. are handled
<robos> so to avoid duplicate packages and making a mess of the system it's best to only use RPMs. Not sure if it's the same with ubuntu or not
<semiosis> robos: you'll probably want to stay on that track.  ubuntu, being a derivative of debian, has a very large "universe" of packages it draws from
<semiosis> robos: many more than you're used to coming from RPM-based distros
<semiosis> robos: you can search for packages here http://packages.ubuntu.com/ and on a running ubuntu you can use the very nice package manager 'aptitude' to search/install/remove/etc packages
<semiosis> robos: aptitude isnt the only way, but it's a good place to start
<robos> does ubuntu use dpkg for system updates?
<semiosis> robos: i assume you're using CLI, since we're in #ubuntu-server, but if you were using the GUI there's nice graphical package managers as well
<robos> yeah, all CLI
<semiosis> robos: dpkg is the "back end" that does the heavy lifting, but you'll usually interact with higher-level utilities like apt-get & aptitude
<robos> but are all system updates handled through the package manager?
<semiosis> robos: what else would manage system updates?
<robos> semiosis, well, i guess it could compile certain updates?
<RoAkSoAx> kirkland: Howdy!! Where you able to test the NQA qith cobbler/koan?
<semiosis> robos: i suppose that's an option at your disposal, but usually (by default) everything is done through packages
<RoAkSoAx> s/where/were
<kirkland> RoAkSoAx: I haven't, sorry
<semiosis> robos: it's anyone's guess what's going on in those systems you're inheriting though
<kirkland> RoAkSoAx: in a virtual-sprint today
<robos> cool. Okay, another question... say I download apache/httpd using apt-get/aptitude or whatever and it's managed through the package manager. Who's responsible for updating that software package (in this case apache httpd.)  Is Ubuntu or someone else?
<RoAkSoAx> kirkland: no worries :)
<semiosis> robos: the apache developers write the code, of course.  then the debian developers & maintainers package it up, then the ubuntu developers & maintainers import (and possibly modify) the debian package and it gets distributed through the Universe repository
<RoAkSoAx> kirkland: when you have the time you can test it so that you can review the patch for virtinst of bug #751979 and sponsor it
<uvirtbot> Launchpad bug 751979 in virtinst "virt-install fails to install Ubuntu ISO when it is located in an HTTP location" [Medium,Confirmed] https://launchpad.net/bugs/751979
<semiosis> robos: that's the gist of it
<robos> okay. Will the debian developers include any features in that update or do they only include bug fixes?
<robos> For example, rhel will "backport" only bug fixes.. they will not include any new features for stability purposes. I was wondering if ubuntu did the same
<RoAkSoAx> robos: yes we do. we call it SRU
<robos> SRU.. sweet. I'll google and see how all that works
<semiosis> robos: well there are source code versions, debian package revisions, ubuntu package revisions, ubuntu distribution releases (lucid, maverick, etc)... so it really depends on the details of the package & versions you're talking about
<semiosis> robos: ubuntu distribution releases come every 6 months, which is usually when new features come out, updates within a single release are usually bug-fix.
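The release-vs-update split robos is asking about shows up in Ubuntu version strings; the version below is a made-up example:

```shell
# "apt-cache policy apache2" shows which pocket (lucid-security,
# lucid-updates, ...) a candidate version comes from; within a release
# those pockets carry SRU-style fixes, while new features wait for the
# next 6-month release.

# A typical version string encodes the whole chain:
ver='2.2.14-5ubuntu8.4'   # upstream 2.2.14, Debian rev 5, Ubuntu rev 8, SRU 4
echo "upstream version: ${ver%%-*}"      # prints: upstream version: 2.2.14
echo "packaging revision: ${ver#*-}"     # prints: packaging revision: 5ubuntu8.4
```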
 * semiosis feels like he's writing a wikipedia article here...
<semiosis> robos: actually, thats a great place to start: http://en.wikipedia.org/wiki/Ubuntu_(operating_system)
<robos> Is there a list of packages that ubuntu uses SRU releases for?
<robos> I'm assuming ubuntu does SRU releases for apache/httpd?
<semiosis> robos: https://wiki.ubuntu.com/StableReleaseUpdates sounds to me like it applies to all packages.. RoAkSoAx can you confirm?  is it just Universe, also Multiverse...?
<semiosis> robos: btw, Universe is the core package repository where all the main packages live (including apache of course).  Other repositories are Multiverse, Partner, and then there are Personal Package Archives (PPAs) which anyone can publish and anyone else can subscribe to
<robos> ah, gotcha
<robos> so if you're running a production server you want to stick with Universe
<semiosis> robos: yeah, unless a package you need is in Multiverse, or not in either (I for example set up a PPA because I needed a package not in ubuntu at all)
<robos> gotcha
<robos> so I suppose, let's say httpd comes out with a new version. Ubuntu probably won't release that new version into universe because of SRU's.. but you can probably find the new version in multiverse?
<semiosis> robos: please see that wikipedia article for a better description of the various official repos, i'm not doing it justice
<robos> yeah, i'm reading that and the wiki page about SRU's and other things
<robos> looks like Universe and Multiverse are not supported by Ubuntu
<robos> only Main is
<semiosis> robos: jump to the section "Package classification & support" there's a nice semiotic (!) square there that shows where things live
<robos> yup, reading that as we speak
<semiosis> robos: see i told you i wasn't doing it justice
<semiosis> robos: sorry for the confusion
<robos> np
<robos> i have a couple weeks to figure all this out :-)
<robos> i'm running an ubuntu desktop as we speak
<robos> i can probably stand up a dev server too and figure all this out
<robos> cool; ty for the info guys
<semiosis> robos: gah cant believe I mixed up Main & Universe... since I dont use Canonical's paid support they're pretty much the same thing to me.
<semiosis> robos: you're welcome & have fun learning ubuntu, it's a great distro
<robos> I'm just a little concerned about using it as a server in a production environment
<robos> but i'm sure that will go away. Having SRU's is a bit of a relief
<robos> semiosis, I think Universe and Main both use SRu's. RoAkSoAx, can you verify
<robos> ?
<robos> semiosis, here is how I get that: http://ubuntuforums.org/showthread.php?p=8474169
<robos> Looks like Main and Universe joined teams
<semiosis> neat!
<Kartagis> if I am keeping my mails in /srv/vmail/domain/user, what should the userdb static args home be? dovecot question,
<zul> morning
<boxybrown> on one of my machines, when you log it it automatically tells you which packages are out of date
<boxybrown> where is this configured? I'd like to have this on my other servers as well
<boxybrown> log in*
<jmarsden> boxybrown: That might be the output of landscape-sysinfo ?  That info is placed into the MOTD automatically.  Does running landscape-sysinfo by hand show you the info you want to see?
<boxybrown> jmarsden: it says it isn't currently installed
<jmarsden> Even on the one that displays the package update info?  OK... then that wasn't it :)
<boxybrown> jmarsden: correct.  the one thing I can think of is that I have bcfg2-server installed on the server that displays it
<boxybrown> I'm just surprised that would automatically do that...
<jmarsden> I'm not familiar with that package.  apticron can send you emails about package updates, but I don't think it generates info at login time.
<boxybrown> jmarsden: I think I found it.  I'm pretty sure it has to do with automatic updates functionality
<boxybrown> https://help.ubuntu.com/10.04/serverguide/C/automatic-updates.html
<boxybrown> and update-notifier
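On 10.04 that login notice typically comes from update-notifier-common, whose apt-check helper feeds the dynamic MOTD; the paths and output format here are best-effort assumptions, so verify them on your release:

```shell
# Show the pending-updates summary by hand, if the helper is present:
if [ -x /usr/lib/update-notifier/apt-check ]; then
  /usr/lib/update-notifier/apt-check --human-readable
fi
# To get the notice on other servers (assuming the same package layout):
#   sudo apt-get install update-notifier-common

# apt-check's machine-readable form is "updates;security", e.g. "5;2";
# an illustrative formatter for that:
summarize() { IFS=';' read -r upd sec; echo "$upd packages can be updated. $sec are security updates."; }
echo '5;2' | summarize   # prints: 5 packages can be updated. 2 are security updates.
```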
<Kartagis> can someone help me with dovecot? #dovecot isn't answering
<asadeddin> hey all. I need some help setting up VNC. I m getting this error trying to start up vncserver   Starting VNC server: 2:cmlserver                           [FAILED]
<asadeddin> any help appreciated!
<jjohansen> hggdh, hallyn: ping re 746751
<hggdh> jjohansen: I am here; hallyn was having some fun with the systems, and is probably more up-to-date
<Egonis> Is there such thing as an easy way to balance bandwidth per-ip on an internet gateway box running Ubuntu Server 10.10?
<jjohansen> hggdh: okay, just trying to come up to date so I know what is needed out of me
<phoenixsampras> how to install Ubuntu-server without GRUB2 ?
<hallyn> jjohansen: it's seeming to me like a problem between libvirt and kvm.  still looking at libvirt strace logs
<jjohansen> hallyn: okay, let me know if you want anything/find anything
<hallyn> jjohansen: thanks
<jjohansen> hallyn: really more thank you, you're doing all the work atm :)
<uvirtbot> New bug: #747387 in samba "smbd crashed with SIGABRT in make_connection_snum()" [Undecided,Triaged] https://launchpad.net/bugs/747387
<Kartagis> can someone help me with dovecot? #dovecot isn't answering
<hallyn> jjohansen: it appears to be the guest kernel
<jjohansen> hallyn: what kernel is the guest?
<hallyn> jjohansen: when I add the '-kernel' argument, i get "couldn't allocate memory" as I do in the tests.  When I drop that and just do '-boot c', then it doesn't.
<hallyn> no idea
 * jjohansen needs to find the bug and read it again
<hallyn> uh, the kernel is plaintext
<hallyn> hggdh: I think it was a euca error all along.
<hggdh> hallyn: not really surprising... what was it?
<hallyn> cat /var/lib/eucalyptus/instances/eucalyptus/cache/eki-E98E1B85/kernel
<hallyn> on mabolo
<jjohansen> hallyn: plaintext???!!!??
<hallyn> I don't know where that file comes from originally but it looks like something went bad during setup
<hallyn> <euca:EucalyptusErrorMessageType xmlns:euca="http://msgs.eucalyptus.com"><euca:EucalyptusMessage><euca:correlationId>ab036b7a-2f02-402a-a8ee-7fd51f4184e6</euca:correlationId><euca:userId>admin</euca:userId><euca:_return>true</euca:_return></euca:EucalyptusMessage><euca:source>Bukkit</euca:source><euca:message>
<hallyn> caching failure</euca:message><euca:requestType>GetDecryptedImageType</euca:requestType></euca:EucalyptusErrorMessageType>root
<hallyn> to be precise
<hggdh> what the hell...
<zul> im thinking hggdh is becoming a nervous wreck
<hggdh> zul: not becoming, already there...
<hggdh> hallyn: I will clear up the cache, and try again
<zul> just dont find a large tower and use your high powered rifle :)
<hallyn> zul, were you recently in fla talking to a pastor about kerosene and matches?
<hallyn> 'just don't do THAT, whatever you do.'
<zul> hallyn: hehe
<bastidrazor> how do i get mutt to use a maildir from another local machine. mutt fromlaptop to access maildir on local lan server
<RoyK> bastidrazor: iirc man muttrc
<bastidrazor> RoyK: thank you.. i'll read up
<RoyK> bastidrazor: you probably want an imap server on that server, though
<hallyn> ppetraki: could you look at and comment on, if necessary, bug 737027?
<uvirtbot> Launchpad bug 737027 in multipath-tools "kpartx udev rule is broken" [Medium,New] https://launchpad.net/bugs/737027
<hallyn> ppetraki: it sounds to me like they're coming to a nice consensus.  I don't really want to deviate from debian, but it seems like the right path to take.
<hallyn> be nice to make natty with this, if unlikely.
<ppetraki> hallyn, sure
<hallyn> thanks
<ppetraki> so, in general, afaic, kpartx is "the way"
<hallyn> so a dmraid patch would be appropriate?
<ppetraki> yes
<ppetraki> it doesn't make sense to duplicate the partition handling
<RoyK> I've been reading a bit about UEC, and it seems rather complicated with one or two servers in front and the cluster nodes in the back. To make this failsafe, a lot of hardware is needed... Is there a way to create a cluster with only the cluster nodes and have failover between them, something like what's done in vmware/hyper-v?
<ppetraki> this is the "fake raid" stuff right?
<hallyn> yeah
<hallyn> so we should be able to reproduce in theory.  (but i haven't)
<hallyn> so then, multipath-tools only needs to be patched to s/dmraid/DMRAID in the kpartx rule
<ppetraki> RoyK, sounds like a Stratus box would solve your problem nicely
<ppetraki> hallyn, so that rule isn't provided by the dmraid package?
<ppetraki> if there is such a thing
<hallyn> i'm just going based on comments in the bug.  i've not looked at the package
<mjeanson> hi, does anyone run mcollective on hardy?
<hallyn> if i get a chance i'm thinking i'll try using dmraid on top of lvm to test multipath on a laptop.  possible in theory no?  :)
<ppetraki> hallyn, if you have a supported fakeraid chipset
<jorenl_> Hey everyone! I just tried to install ubuntu server 10.10, I ran through the whole setup process (all on default settings I think) but now this is happening when I try to boot: http://imgpaste.com/i/dbrsz.jpg http://imgpaste.com/i/konnd.jpg
<jorenl_> the first image is the error on normal boot, the second is recovery mode
<jorenl_> does anyone have an idea about what I can do?
<ppetraki> hallyn, there appears to be a "dos SW RAID" vector, maybe you can try that
<ppetraki> jorenl_, um, that looks bad, so, what kind of hardware does this box have, storage specifically?
<RoyK> ppetraki: stratus?
<jorenl_> ppetraki: I don't know exactly, some old 40GB hard drive that was running windows perfectly fine an hour ago
<jorenl_> should I open it up and check?
<ppetraki> RoyK, yeah, fault-tolerant, lockstep checkpointing
<ppetraki> RoyK, I know there's a single point of failure (or two) in the UEC design.
<ppetraki> RoyK, if that's the one you're talking about
<RoyK> it is
<hallyn> SpamapS: hey SRU padawan - can you take a look at bug 748834 and tell me if you'd require me to split that into two separately SRU'd bugs?
<uvirtbot> Launchpad bug 748834 in libvirt "libvirt segfaults on networkIsActive or networkIsPersistent" [Undecided,In progress] https://launchpad.net/bugs/748834
<ppetraki> RoyK, then, you're either going to have to live with the blackout, and restart all the VMs. Or install a ft solution that won't blackout your application
<ppetraki> RoyK, VMware vmotion only gets you so far, you still lose what you're working on.
<ppetraki> RoyK, so if you can't afford *any* downtime. then the tech Stratus offers will suit you.
<RoyK> ppetraki: you'd still have to restart the VMs unless you run them in fault tolerant mode (on vmware), and that is rather expensive
<ppetraki> RoyK, right, so what if it was transparent?
<ppetraki> RoyK, because stratus machines literally mirror the CPUs
<RoyK> well, how on earth can this Stratus solution mirror the contents of memory without slowing down the cluster by XXXXX%?
<ppetraki> RoyK, you can walk up to one, rip the primary processing unit out, and you might see a 1-2 sec pause
<ppetraki> RoyK, that's it, no loss of data, connectivity, nothing
<RoyK> rather cool networking between them, then
<jorenl_> please someone help :/ my box is bricked and I hav eno clue what to do to fix it
<ppetraki> RoyK, I used to work there, yeah it is cool :)
<RoyK> 10gigE won't take you long for such a setup
<ppetraki> RoyK, the entry level price tag is about 12K, but when you consider what you're getting, it's cheap
<RoyK> serious-looking infiniband, probably
<RoyK> sounds like a good alternative to vmware
<ppetraki> unless you're transaction based, with journaling etc etc, it won't matter
<ppetraki> it runs vmware
<hallyn> jorenl_: how did you install?
<ppetraki> the platform is ft, it can run windows, RH, and vmware esx
<ppetraki> so now you can virtualize as much as you want, and literally never worry about downtime
<ppetraki> great than 5 9's
<ppetraki> s/great/greater
<hallyn> jorenl_: try booting again, and hit shift or whatever to catch the grub menu,
<hallyn> then look at what the command line is and what it gives for a 'root=' option
<hallyn> jorenl_: the screenshot you supplied offers the valid choices :)
<ppetraki> RoyK, and no 'yet another custom HA' solution for IS to  maintain :-)
<jorenl_> hallyn: I downloaded the ubuntu server 10.10 install ISO; burned it to a disc and ran through the installation process, using mostly default settings (simple setup, only 1HD installed and ubuntu server as the only OS)
<jorenl_> hallyn: ok I'll try now and see, thanks.
<hallyn> jorenl_: I'm going to guess you want 'root=/dev/sda5',
<SpamapS> hallyn: looking now
<ppetraki> jorenl_, sounds more of an install bug than a HW thing.
<SpamapS> hallyn: is it fixed in natty yet btw? still shows as in progress.
<jorenl_> I'm in grub. 'c' for command line?
<uvirtbot> New bug: #752730 in autofs5 (main) "NFS mounts fail due to upstart condition on 'mounting TYPE=nfs'" [Undecided,New] https://launchpad.net/bugs/752730
<hallyn> jorenl_: no i think 'e' to edit the option
<jorenl_> well wrong button I guess, retry :D
<hallyn> SpamapS: yes, fixed in natty.  i was in the middle of updating that when i decided i wasn't sure what to do for maverick
<hallyn> bc half of it is fixed in mav, half not
<hallyn> there updated for natty :)
<jorenl_> hallyn: ok I'll type what it says in pastebin brb (thanks for the help)
<SpamapS> hallyn: both patches are so small and come from upstream, fixing one bug report. I'd accept it.
<hallyn> SpamapS: \o/
<SpamapS> hallyn: don't forget to subscribe ubuntu-sru after you upload to lucid-proposed. :)
<CrunchyChewie> whenever I try to SSH into my 10.10 VPS it says "Connection closed by xxx.xxx.xxx.xxx"
<CrunchyChewie> I have a current SSH session open in another terminal I am afraid to close for fear of being permanently locked out
<hallyn> SpamapS: hm, i seem to misunderstand
<hallyn> SpamapS: i've always subscribed ubuntu-sru first and waited for permission to push to lucid-proposed
<jorenl_> Ok here it is: http://paste.ubuntu.com/590367/
<hallyn> SpamapS: (which, of course, is bc i couldn't do it myself anyway :)
<hallyn> SpamapS: so i should push it to -proposed first, then subscribe -sru, and then it goes from there?
<hallyn> man i was off base then
<hallyn> SpamapS: i'm not hitting the trigger until you confirm :)
<SpamapS> hallyn: yes thats the best way, because really ubuntu-sru doesn't get involved until that point
<hallyn> jorenl_: that is very wrong :)
<SpamapS> hallyn: if you want to check that it will be accepted first before starting, then thats another time to subscribe ubuntu-sru
<jorenl_> hallyn: how? :/
<SpamapS> hallyn: also you should probably add the maverick task, even if you may not fix it, so its known that maverick is affected.
<jmarsden> CrunchyChewie: Read your logs to try to find out *why* that connection is being closed.  /var/log/messages and /var/log/auth.log is where I would start
<hallyn> jorenl_: oh, i see
<hallyn> SpamapS: oh, i can fix the maverick one, np on that
<jorenl_> hallyn: ?
<CrunchyChewie> jmarsden: on the server or the client?
<jmarsden> CrunchyChewie: On the server.
<hallyn> SpamapS: i'm doing too many things at once to do that now and not mess it up though :)
<hallyn> jorenl_: it might not be wrong actually, but the 'root=/dev/mapper/HERENT--SERVER-root is probably the problem
<hallyn> jorenl_: please try replacing that with 'root=/dev/sda3' and see what happens
<jorenl_> hallyn: ok...
<hallyn> (and if that doesn't work, then with root=/dev/sda1)
<jorenl_> so not root='(hd0,msdos1)'
<jorenl_> but
<jorenl_> where it says /dev/mapper etc :)
<hallyn> no no, leave that
<jorenl_> I'll try
<hallyn> right
<hallyn> jorenl_: good luck :)  i'll be back in 10 mins
<CrunchyChewie> jmarsden: bind to port xxxxx on 0.0.0.0 failed: Address already in use
<jorenl_> hallyn : ok :D
<monaDeveloper> Hi I'm trying to create ami from scratch using this http://alestic.com/2007/11/ec2ubuntu-build-ami
<SpamapS> hallyn: I must commend you on the healthy stream of fixes flowing into lucid's libvirt/kvm/etc. :)
<monaDeveloper> but I needed to understand what's the meaning of this: Pick which instance type and kernel version you want in your Ubuntu AMI. Start an instance of the matching Amazon Fedora Core AMI
<jmarsden> CrunchyChewie: So you seem to have multiple server processes trying to use the same port xxxxx .  Don't do that.
<monaDeveloper> how to start an instance of the matching amazon fedora core ami
<jorenl_> hallyn: now it says Kernel panic - not syncing: No init found. Try passing init= option to kernel. See linux Documentation/init.txt for guidance.
<monaDeveloper> hello?
<CrunchyChewie> jmarsden: changing the port in sshd_config seemed to fix it, is the port range 54xxx bad to use?
<jmarsden> CrunchyChewie: Only bad if something else is using it :)
<CrunchyChewie> jmarsden: thanks!
<jmarsden> CrunchyChewie: You're welcome.
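"Address already in use" means another process had already bound that port; a quick way to check before picking one (the port number is an example):

```shell
# See what is listening right now (run on the server):
#   sudo netstat -tlnp        # -p maps sockets to process names
port=54321
if netstat -tln 2>/dev/null | grep -q ":$port "; then
  echo "port $port is already taken"
else
  echo "port $port looks free"
fi
```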
<Aison> damn, my nfs4 mounts are not automounted at startup :( even when I do  post-up mount /home || true   in network/interfaces
<Aison> something is really missing
<monaDeveloper> Hi I'm trying to create ami from scratch using this http://alestic.com/2007/11/ec2ubuntu-build-ami
<monaDeveloper> how to start an instance of the matching amazon fedora core ami
<hallyn> jorenl_: hm, that's not good.  try root=/dev/sda1?
<jorenl_> ok
<hallyn> jorenl_: you can try 'init=/sbin/init', but it should try that by default...
<hallyn> jorenl_: d'oh!  did i say root=/dev/sda3?  i meant /dev/sda5'
<jorenl_> hallyn: oh; I'll try sda5 then.
<hallyn> yeah that's the best bet
<jorenl_> Kernel Panic - not syncing: VFS: Unable to mount root fs on unknown block-(8,5)
<jorenl_> that's the original error
<jorenl_> hallyn: forgot to mention your name if that matters :p
<monaDeveloper> hello?
<hallyn> jorenl_: well, try /dev/sda1, but if that fails  then i have to assume something went wrong at install
<hallyn> jorenl_: you could boot from the installcd and nose around the hard disk to see what is on /dev/sda1 and /dev/sda5
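From the install CD's rescue shell, the partitions can be inspected directly; device names follow the thread, and `check_rootfs` is just an illustrative helper:

```shell
# Overview of what the installer actually created:
#   sudo fdisk -l /dev/sda                 # partition table
#   sudo blkid /dev/sda1 /dev/sda5         # filesystem types / UUIDs
#   sudo mount /dev/sda5 /mnt              # then look around in /mnt

# A bootable root filesystem should contain an executable sbin/init:
check_rootfs() {
  [ -x "$1/sbin/init" ] && echo "looks like a root fs" || echo "no init here"
}
# e.g. after mounting the candidate partition:
#   check_rootfs /mnt
```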
<jorenl_> hallyn: I'll try sda1 first :/
<jorenl_> No init found. Pfffffffffffffffffffffffff
<jorenl_> hallyn: Just go through the install again and reformat and everything?
<hallyn> jorenl_: sure if you don't have anything on there yet
<jorenl_> nope
<hallyn> jorenl_: when you get to disk partitioning, take note of what's there.  the /dev/sda2 being 1 block is suspicious
<jorenl_> hallyn: ok. Reinstalling. (thanks a heap for all the help, again)
<jorenl_> hallyn: reinstall or rescue?
<jorenl_> buhh. I'll try reinstall. nothing to lose.
<jorenl_> hallyn: Ok, there's the partitioning part. I must have taken a wrong choice so I'll check up here first. Partitioning method: 1) Guided - use entire disk 2) Guided - use entire disk and set up LVM 3) Guided - use entire disk and set up encrypted LVM 4) Manual
<jorenl_> someone? please?
<hallyn> jorenl_: use 1
<jorenl_> hallyn: ok there we go.
<hallyn> SpamapS: so i wonder how many bugs i have ubuntu-sru subscribed to for which i've not uploaded the branch...
<jorenl_> hallyn: select disk to partition: SCSI (0,0,0) (sda) - 40.0GB ATA WDC WD4000BB-60DG ; only choice :)
<addisonj> okay, riddle me this, what would allow me to "wget myhost.dom.com" and get back my index.html like one would expect, but when I "wget myhost.dom.com/index.html" I get a connection refused, but the same command does work on a different subnet
<hallyn> ok, so then presumably it offers /dev/sda1 as /boot, /dev/sda2 as extended and /dev/sda5 as / ?
<hallyn> jorenl_: ^
<jorenl_> Remove existing logical volume data <Yes>
<SpamapS> hallyn: heh.. no telling. ubuntu-sru is subscribed to thousands of bugs
<jorenl_> hallyn: I don't know, there was no way to check
<hallyn> jorenl_: hm.  ok, well at some point it'll look like it's offering you a final summary of what it's going to do,
<hallyn> and there is a button 'advanced options'
<hallyn> make sure to click that
<hallyn> and make sure it is going to install grub onto /dev/sda
<maccam94> i'm having trouble configuring the openssh sftp server. when my user tries to log in, they get "Received message too long 1131570529", which i believe is because the motd is being printed
<jorenl_> hallyn: Logical volumes to be removed: root, swap_1; volume groups to be removed: HERENT-SERVER; Physical volumes to be removed: /dev/sda5
<maccam94> does anyone have sftp working? (this is ubuntu 10.04)
<hallyn> SpamapS: feh, there is bug 750565 at least.  so go ahead and rebase that on top of the other one i just uploaded to -proposed, and dput it?
<uvirtbot> Launchpad bug 750565 in libvirt "Unable to attach an EBS volume" [High,In progress] https://launchpad.net/bugs/750565
<hallyn> jorenl_: weird.  ok, just do it :)
<SpamapS> hallyn: you can stack them but they all have to verify before they hit -updates ...
<jorenl_> hallyn: There's the summary. Soon to be on pastebin :p
<jorenl_> hallyn: ok here it is. http://paste.ubuntu.com/590393/
<jorenl_> hallyn: do it?
<uvirtbot> New bug: #299677 in mysql-dfsg-5.0 (universe) "package mysql-server_5.0.51a-3ubuntu5.4_all.deb: subprocess pre-installation script returned error exit status 1 (dup-of: 382713)" [Undecided,New] https://launchpad.net/bugs/299677
<jorenl_> hallyn: well I guess I'll just do it and see what happens. ;P
<hallyn> SpamapS: of course :)
<hallyn> jorenl_: no need to be squeamish, you can always start over :)
<hallyn> jorenl_: if it fails again next time, we might try 'rescue' to figure out what happened
<hallyn> but, lunchtime.  bbl
<jorenl_> hallyn: bye! :D
<adam_g> does anyone know where amazon makes requests for inclusion of patches such as https://bugs.launchpad.net/ubuntu/+source/linux/+bug/634316 ? also, is there a running changelog or central repository somewhere? AWS doesn't really document it anywhere on the official amazon linux AMI page
<uvirtbot> Launchpad bug 634316 in linux "include amazon EBS performance patch in -virtual kernel" [Undecided,Fix released]
<hallyn> feh.  lucid-security, forgot about that one
<hallyn> jdstrand: a comment on libvirt and quilt :)  merging trees is *so* much easier in lucid's libvirt, where patches are not kept applied, than in maverick's, where they are kept applied.
<hallyn> in lucid i can actually do bzr merge.  in maverick i have to re-do the change by hand
<jorenl_> hallyn: back :p
<hallyn> jorenl_: all set?
<jorenl_> hallyn; I was away for (late) dinner and  left the install idling, I'm selecting software currently :p
<hallyn> SpamapS: drat, bug 584048 is another.  i'll have to wait until someone approves the lucid-proposed push i made earlier today, bc i've already deleted my local copy of it.  zounds.
<uvirtbot> Launchpad bug 584048 in qemu-kvm "kvm images losing connectivity w/bridged network" [High,In progress] https://launchpad.net/bugs/584048
<jorenl_> hallyn; Install the GRUB to the master boot record? (Ubuntu server is the only OS)
<hallyn> yes
<jorenl_> ok :)
<jorenl_> installation complete: let's give it a try...
<jorenl_> hallyn: YEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAH!! xD
<uvirtbot> New bug: #752833 in clamav (main) "apparmor deny freshclam mask="r" on /var/run/samba/unexpected.tdb" [Undecided,New] https://launchpad.net/bugs/752833
<AtomicSpark> So I'm trying to tunnel the tcp/ip connection for postgresql (jdbc) over ssh because my college has a whitelist for outgoing ports.
<jorenl_> Ok. So far for the OS :) Now into the configuring. Hopefully I can do that on my own.
<hallyn> jorenl_: cool, have fun :)
<jorenl_> hallyn: thanks a heap, really...
<hallyn> np
<AtomicSpark> ssh -L 80:serverhost:80 serverhost works, ssh -L 8080:serverhost:80 serverhost works, ssh -L 5432:serverhost:5432 serverhost does not. Neither does changing the client port to other ports. My pg_hba is set correctly for I can access it at home from the servers public ip.
<AtomicSpark> I dont know what else to check. :<
<jorenl_> I just tried pinging a site to check my internet connection and now it doesn't stop. How do I make it stop pinging? (noob alert xD)
<AtomicSpark> I fixed it. I used localhost in the port host whatever area (I didn't use it before because I didn't have localhost set up for md5). But I guess thats allowed in host 0.0.0.0/24
<AtomicSpark> Yay, etc.
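A sketch of the tunnel that ended up working; the hostnames and ports are examples from the discussion:

```shell
# Forward a local port through ssh, then aim the client at localhost:
#   ssh -N -L 5432:localhost:5432 user@serverhost
#   psql -h localhost -p 5432 mydb
# The middle field of -L is resolved *on the server*, so "localhost"
# there is the server's loopback -- which is why pg_hba.conf typically
# needs a loopback "host ... md5" entry even for tunneled connections.
spec='5432:localhost:5432'    # the -L argument: localport:host:hostport
echo "client connects to 127.0.0.1:${spec%%:*}"   # prints: client connects to 127.0.0.1:5432
```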
<AtomicSpark> jorenl_: Ctrl+C
<jorenl_> AtomicSpark: thanks xD
<AtomicSpark> You're welcome.
<hggdh> hallyn: on the kvm '-append <string>' -- shouldn't 'string be enclose in quotes?
<hallyn> hggdh: yes
<hallyn> hggdh: it's possible that euca's logger is just not quoting those
<hallyn> hggdh: until there is a valid kernel i wasn't going to worry about it :)
<hallyn> but yes, if it's doing it without quotes then that's wrong
<hggdh> hallyn: it's the /var/log/libvirt/qemu logs I am looking at
<hggdh> but yes, may be just printing
<hallyn> yes i saw it there too
<hallyn> have you got a valid kernel and still getting problems?
<hggdh> I am still having the problem, and should have a valid kernel. I am unsure how correct the error is; still digging in
<jorenl_> What's the default web directory for apache?
<genii-around> jorenl_: /var/www
<jorenl_> thanks
<suigeneris> hi
<suigeneris> is anyone available for help? #dovecot is not alive
<guntbert> suigeneris: you can try with your real question :)
<suigeneris> guntbert, I have virtual users, and I can't authenticate
<guntbert> suigeneris: I didn't say that *I* can help, but somebody might
<the_ramink> has your dovecot ever worked?
<suigeneris> the_ramink, yes, with local users
<suigeneris> the_ramink, please don't tell me to go back to local
<suigeneris> should I pastebin my dovecot -n ?
<the_ramink> I'm not sure what the means. All your users are local or virtual?
<suigeneris> the_ramink, at the moment they are virtual
<suigeneris> they are kept in a database
<the_ramink> and local users work and virtual users never have?
<suigeneris> yes
<jorenl_> I just made some changes to the Samba configuration
<jorenl_> how do I make it reload the config?
<suigeneris> jorenl_, sudo service smbd restart on ubuntu
<the_ramink> suigeneris: can you log into Mysql using the credentials you have configured in Dovecot?
<the_ramink> from the commandline
<jorenl_> suigeneris thanks :D
<jorenl_> Oh but I found this: "By default Samba will read the configuration file every 60 seconds so no HUP is needed."
<jorenl_> and it's working. I'm happy :)
<suigeneris> the_ramink, I found a working configuration but it wasn't talking about a sql file
<suigeneris> let me show you
<suigeneris> the_ramink, http://wiki1.dovecot.org/HowTo/SimpleVirtualInstall
<the_ramink> suigeneris: You trying to be obtuse?
<suigeneris> what does that mean?
<the_ramink> well I suppose this counts as a database
<RoyK> where's the obtosity and database parts of this?
<hggdh> hallyn: yeah, it is a timeout within eucalyptus -- walrus waiting for the kernel
<the_ramink> suigeneris: what in the error log when you fail to authenticate
<hallyn> hggdh: hopefully easily fixed in userspace...
<hggdh> rrrrriiiiighhhttt, yeah
<suigeneris> the_ramink, http://pastebin.com/vy5Xa9vt
<jorenl_> Can someone help me with some samba config trouble? :/
<jorenl_> I just can't seem to allow myself (from a windows pc) to add and edit files
<suigeneris> jorenl_, what is it?
<jorenl_> to my share
<suigeneris> jorenl_, directory permissions?
<jorenl_> in samba or in ubuntu?
<suigeneris> ubuntu
<jorenl_> Oh
<jorenl_> that might be the problem
<suigeneris> the_ramink, are you there?
<jorenl_> let me try
<the_ramink> suigeneris: these are virtual users so you need to use the entire email address as the username.  From the logs it appears that you're just using a username
<suigeneris> the_ramink, I used that too
<suigeneris> I telneted into mail server
<suigeneris> a login completeemailaddress pass
<suigeneris> no authentication
<the_ramink> what's the passwd file look like for that user? you can xxx out the password has
<the_ramink> hash
<suigeneris> it's ssha
<suigeneris> wait
<jorenl_> suigeneris: now it even says that I don't have read access ><
<suigeneris> jorenl_, write list = yourusername
<suigeneris> jorenl_, valid users = yourusername
<jorenl_> yourusername, is that my ubuntu username?
<suigeneris> yes
<suigeneris> the_ramink, may I msg it to you? I don't want to pastebin
<the_ramink> sure
<jorenl_> no access to \\HERENTSERVER\www pffffffffffffffff
<suigeneris> the_ramink, now it says password mismatch, and shows the password I typed
<suigeneris> jorenl_, can you pastebin that share's conf part?
<jorenl_> yes, in a sec
<jorenl_> I just did sudo chmod ugo=rwx www and that fixed it; far from secure though :/
<suigeneris> that's evil
<jorenl_> suigeneris why? >:D
<suigeneris> make a group, make your users part or that group, and have the dir owned by that group
<suigeneris> and 770
<suigeneris> not 777
<suigeneris> 777 is evil, it means world writable
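suigeneris's group-based alternative to 777, demonstrated on a scratch directory; the group and share paths are examples, and the real share would need sudo:

```shell
share=$(mktemp -d)            # stand-in for the real share directory
chmod 2770 "$share"           # rwx for owner+group, nothing for world;
                              # the leading 2 = setgid, so new files
                              # inherit the directory's group
ls -ld "$share"               # mode column shows drwxrws---
rmdir "$share"

# On the actual share (e.g. /var/www), the same idea:
#   sudo groupadd webshare
#   sudo usermod -aG webshare youruser
#   sudo chgrp -R webshare /var/www
#   sudo chmod -R 2770 /var/www
```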
<jorenl_> hmm
<jorenl_> The only user that will ever be editing it is me :)
<suigeneris> *shrug*
<jorenl_> But I can't manage to get samba to ask for my login :(
<suigeneris> whatever floats your boat
<jorenl_> No really I want to do it properly
<jorenl_> I just suck :/
<suigeneris> jorenl_, does your current user in windows have a password?
<jorenl_> no
<suigeneris> give it a password, log out/in, it will ask
<suigeneris> trust me
<jorenl_> :/
<suigeneris> the same thing happened to me
<jorenl_> I really think I just misconfigured samba
<suigeneris> no
<foxbuntu> suigeneris, actually 777 is world write + exec
<suigeneris> trust me on this
<jorenl_> It asked a password earlier today, when the server was still using Windows XP :/
<suigeneris> jorenl_, generate a password
<suigeneris> foxbuntu, yes you're right
<jorenl_> first, what does guest = ok mean
<suigeneris> the_ramink, 503 error
<suigeneris> jorenl_, anybody can enter
<jorenl_> just read?
<suigeneris> yes,
<genii-around> jorenl_: It means people who do not have a username and password on the system can view or use shares
<genii-around> ( if you assign a guest account to point to some local account you make for this purpose)
<jorenl_> hey wait, it did ask for my pasword some time ago, so maybe windows just stored it.... very well possible
<uvirtbot> New bug: #752946 in drbd8 (main) "package drbd8-source 2:8.3.7-1ubuntu2.1 failed to install/upgrade:" [Undecided,New] https://launchpad.net/bugs/752946
<jorenl_> I managed to put some files in there so I guess I'll just leave it.
<jorenl_> ehhh. is an empty apache httpd.conf normal?
<suigeneris> look into apache2.conf
<jorenl_> Yeah, I saw it. I'm acting like an awful noob and it's embarrassing :/
<jorenl_> the apache2.conf has some include in it for virtual hosts; should I be using <VirtualHost> anymore?
<jorenl_> shouldn't*
<cole> out of the box i think you want to look in /etc/apache2/sites-avalable/default for the def virtual host
<cole> available*
<jorenl_> Thank you cole
<cole> jorenl_: np
<queso> If I want to make some space by removing old kernels, how do I do that?
<SpamapS> tremor in LA
<SpamapS> I think
<SpamapS> could just be me breathing heavily from a workout
<cole> queso: if you are looking to save space, I'd look at what else is causing issues...executing du -sh /path or * should help you pinpoint large amounts of data
<queso> cole: I already know there is nothing else using the space.
<cole> just make sure you don't delete the kernel you are booting...you can look in /boot and /usr/src for kernel related things you don't want...rm works on those files as well as any other...
<queso> cole: Should I instead remove the package?
<cole> if you know what kernel packages you've installed and don't want...sure
<cole> clear
<queso> So I need to uninstall the old linux-image packages, but it won't because the current linux-image isn't installed (thus leaving an unresolved dependency) . . but I can't make room until the dependency is resolved, which can't happen until there's room.  Help?
<queso> How do I remove packages and ignore unresolved dependencies?
<jorenl_> Eh
<cole> queso: check out the --ignore-depends arg to dpkg
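A safer first step than `--ignore-depends` is simply listing which kernel images are installed besides the running one. A sketch with simulated package names (the 2.6.32 version strings are made up for illustration; on a real box you'd feed it `dpkg -l 'linux-image-[0-9]*' | awk '/^ii/ {print $2}'`):

```shell
# Keep the currently booted kernel, list everything else as removable.
current="linux-image-$(uname -r)"
installed="linux-image-2.6.32-21-server
linux-image-2.6.32-24-server
$current"
removable=$(printf '%s\n' "$installed" | grep -vx "$current")
printf '%s\n' "$removable"
```

Each name printed can then be removed with `sudo apt-get remove <pkg>`, which also cleans up /boot; `dpkg --ignore-depends` is a last resort for queso's chicken-and-egg disk-full case.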
<jorenl_> what's the sun java package name? :p
<semiosis> jorenl_: its in the partner repo, sun-java6-jdk or something like that
<cole> joren: sun-java6-bin is the JRE
<jorenl_> partner repo :(
<cole> joren: sun-java6-jdk is the jdk
<jorenl_> yeah, sun-java6-jre is probably what I want right?
<semiosis> jorenl_: just uncomment the line in your /etc/apt/sources.list then apt-get update & install
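The partner lines semiosis means look like this in /etc/apt/sources.list (the release name `lucid` is an assumption; substitute your own, and just remove the leading `#` if the lines are already present but commented):

```
deb http://archive.canonical.com/ubuntu lucid partner
deb-src http://archive.canonical.com/ubuntu lucid partner
```

Then `sudo apt-get update && sudo apt-get install sun-java6-jre`.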
<jorenl_> ok
<queso> How do I re-generate the grub boot list?
<suigeneris> queso, grub-install
<suigeneris> (I think)
<queso> sorry, in Hardy.  update-grub?
<suigeneris> no idea about hardy
<queso> got it to work, thanks
<suigeneris> the_ramink, ++
<hallyn> jbernard: hey, just checking, any progress on security libcgroup update for natty?
#ubuntu-server 2011-04-07
<bastidrazor>  when setting mutt to use imap and remote folders.. i don't know the path i should use for INBOX.
<jeeves_> ok, after an hour or so of pulling my hair out, I'm now down to figuring that the one port on my server isn't responding to internal requests.  I currently have a server with port forwarding on the router, and now that I have BIND setup, it's refusing to talk to the internal clients on the secondary NIC
<jeeves_> so, what would cause the server to forget its brains when it's serving up requests from other networks?
<dku> Is there anything wrong with this iptables rule? "iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 10000 -j DNAT --to-destination 192.168.3.95:3000"  I'm trying to accept connections to my box on port 10000 and forward them to an internal IP at port 3000. Still getting connections refused to port 10000 after running that rule, so it doesn't seem to be working...
<dku> never mind, figured it out
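For anyone hitting the same wall: the DNAT rule alone is usually not enough, since the kernel must also be forwarding and the FORWARD chain must accept the redirected traffic. A dry-run sketch using the addresses from dku's question; the commands are only printed here, so drop the leading `echo` and run as root to apply them:

```shell
# Dry run: print the full rule set a DNAT port-forward needs.
IPT="echo iptables"   # remove 'echo' to execute for real (as root)
$IPT -t nat -A PREROUTING -p tcp -i eth0 --dport 10000 \
    -j DNAT --to-destination 192.168.3.95:3000
$IPT -A FORWARD -p tcp -d 192.168.3.95 --dport 3000 -j ACCEPT
echo sysctl -w net.ipv4.ip_forward=1
```

Depending on topology you may also need an SNAT/MASQUERADE rule so replies from 192.168.3.95 route back out through this box.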
<aliverius> i have installed a server inside a kvm in my server to isolate certain services accessible from the internet, for example a web server. i will need an sql server for that. can i use the sql server of the host or would that imply a security risk?
<aliverius> i have installed postgresql lighttpd and php5-cli
<aliverius> since i am a beginner i wanted to ask if that is all i need to run a webserver
<aliverius> i wanted to avoid apache and mysql
<mithran1> hi all, what is the correct way to set the ip address of an ethernet interface is it 'ifconfig eth0 <ip> netmask <netmask>'?
<mithran1> i actually have an ubuntu server installed on a machine and that server is not responding to a network boot request, it says no DHCP offers received, where do I start to troubleshoot this problem?
<twb> mithran1: an interfaces can have zero or more addresses, not just one
<mithran1> twb: ok..1 of my interfaces is getting a funny address (something that is not part of the public network), but it needs an address that's part of the public network for us to work with the server, the hardware cabling seems to be fine (ie the DHCP server and this network interface are on the same VLAN)
<mithran1> can someone give me some pointers on trouble shooting DHCP issues, with ubuntu or linux in general?, this is the first time i would be doing that, so I dont know much :(
<twb> mithran1: what is this "funny address"?
<mithran1> 192.168.5.xxx <- is the public address, 192.168.112.1 is what my ubuntu server is getting
<twb> Maybe you have a rogue DHCP server on your network
<twb> Run dhclient -v and see where the DHCPACK comes from
<mithran1> that command just seems to give me some version information?
<twb> Ugh, one moment
<twb> OK, no -v on lucid
<mithran1> twb: if i statically set my ip using ifconfig, will it remain if i restart the server?
<twb> No.
<mithran1> ok good
<mithran1> twb: so when i statically set the ip using the 'ifconfig eth0 <ip> netmask <netmask>' command, I am not able to ping computers on the public network..
<mithran1> DHCPDISCOVER on br1_101 to 255.255.255.255 port 67 interval 6, ok i just picked that up from when the server was rebooting....
<mithran1> twb : is there a way to say ask this ip for a dhcp request?
<twb> Yes -- DHCPREQUEST instead of DHCPDISCOVER
<twb> But I don't know how you configure that on the filesystem
<mithran1> No DHCPOFFERS received. \n No working leases in persistent database - sleeping.\n, please let me know if there is anything you want me to try, im trying to see if there is some issue with the DHCP server here...
<asadeddin> Hey all. I have a problem that i need some help with. We're moving from one ip to another and I need to update the MX record, doing this will give me some downtime because i have to wait for the thing to propagate and then swtich the hardware and software setup
<asadeddin> is there a way i can setup the new MX record while keeping the old one
<asadeddin> so that the new one propagates and then i can swtich the hardware and software setup quickly
<asadeddin> then i can delete the old record
<shauno> asadeddin: you can have multiple MXes and give a different weight to them
<asadeddin> priorities
<asadeddin> yeah
<asadeddin> and? how can i use that to my advantage, because i see my problem is that I need it to point to two different ip's, so one will give an answer while the other no response
<asadeddin> anyone?
<shauno> asadeddin: that's how I understand having multiple records should work?  eg, I have mail1 with priority 10, mail2 with priority 20.  so mail2 only gets used when mail1 is unavailable
<twb> shauno: MXs?
<joschi> asadeddin: just lower the TTL (e. g. 1h) for the respective zones in your name servers a few days before you need to do the switch.
<joschi> asadeddin: then do the switch and raise the TTL again
<shauno> twb, mail exchange records for dns
<twb> Yeah, you can have lots
<twb> Note that ill-behaved peers (read: spammers) might decide to try the "wrong" MX first
<asadeddin> so basically... Start a new MX record pointing to the new IP address with a lower priority than the current one. When the switch happens, the old one will fail and the new one will take over as the MX record. Is that correct?
<asadeddin> then i can delete the old one and raise the new MX priority
<twb> You may also want an entry like this one:
<twb> keegel.id.au.           8643    IN      MX      900 tarbaby.junkemailfilter.com.
<shauno> asadeddin: that's my understanding, yeah.  they 'should' use the highest priority they can connect to, and fall down the chain in order until one answers
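Putting joschi's TTL trick and shauno's two-record overlap together, a sketch of the zone during the transition (domain, host names, and TTL values are placeholders):

```
$TTL 3600                                        ; lowered from e.g. 86400 a few days ahead
example.com.  IN  MX  10 mail-old.example.com.   ; current server, preferred
example.com.  IN  MX  20 mail-new.example.com.   ; new IP, takes over at cutover
```

After the switch: delete the old record, promote the new one to priority 10, raise the TTL again, and check propagation the way twb suggests, e.g. `dig @8.8.8.8 example.com MX +short`.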
<asadeddin> ok
<asadeddin> how do i know if the new one propagated? i mean it's difficult no?
<twb> asadeddin: you ask the DNS server you want to know if it propagated to.
<twb> e.g. dig @8.8.8.8 to test one of google's DNS servers
<twb> You probably want the old host to -j REJECT rather than -j DROP, though, so it falls through faster
<asadeddin> i dont know how to do that
<asadeddin> i m currently doing all of this in the domain host web panel
<asadeddin> let me see what's infront of me
<asadeddin> thanks tho for the excellent support. i was sure the ubuntu-server guys would know ;)
<asadeddin> ok
<asadeddin> basically what i see infront of me is a bit weird
<asadeddin> there are no custom MX records listed, but under A records I see the @ is pointed to our mail server
<asadeddin> anyideas?
<xampart> asadeddin: we did isp change recently. it was quite easy, as we first put 2NICs to our server, and configured our new ip to that. then we made dns changes and monitored the traffic
<twb> I don't support "web panels", sorry
<asadeddin> i like the NIC thing lol
<asadeddin> creative
<asadeddin> my problem is that i m not seeing any custom made MX records. I see the @ A record is pointing our IP
<xampart> so you should add an mx record pointing to either your mail server or (like our mx) to your email virus service ip
<asadeddin> i understand
<asadeddin> i should call network solutions and see what they have to say about this
<asadeddin> maybe i can get an idea of my current setup better
<asadeddin> but thanks a lot all! i really appreciate the help. :)
<eagles0513875|2> what is the right way to setup a cron job on ubuntu server lucid
<eagles0513875|2> hey guys anyone here
<joschi> eagles0513875|2: run `crontab` or edit the files in /etc/cron.*
<raphink> hi eagles0513875|2
<eagles0513875|2> joschi: either way will work right
<eagles0513875|2> joschi: if it requeres root to run this script would crontab be better to use
<raphink> eagles0513875|2, you can use /etc/cron.d/* whatever user is required
<joschi> eagles0513875|2: it depends. you could also run the skript with sudo in your user crontab
 * eagles0513875|2 is totally confused
<eagles0513875|2> thing is i need this script to run as root as i then need cron to send the root user an email if there is an error
<eagles0513875|2> ill use root crontab
<eagles0513875|2> joschi: what would i need to put in the cron tab to setup a daily run of my script at a given time
<joschi> eagles0513875|2: see `man 5 crontab`
<eagles0513875|2> thanks shoudl be able to figure it out now
<SuperRoach> Hello there. Has anyone here tried to and been able to update their php version to 5.3 (from 5.2) to help patch its security?
<eagles0513875|2> joschi: how can i update an already existing crontab entry
<eagles0513875|2> to change when its run etc
<joschi> eagles0513875|2: just edit the crontab file
<eagles0513875|2> my next question is how can i get it to send an email to root if there are errors with the backup script
<eagles0513875|2> if the script produces stderr
<eagles0513875|2> is that possible to do?
<joschi> eagles0513875|2: look for MAILTO in `man 5 crontab`
<eagles0513875|2> ok
<eagles0513875|2> joschi: now in the case of ubuntu if i want to mail to root would i use sudo?
<eagles0513875|2> or a user whose listed in the sudoers file
<joschi> eagles0513875|2: look for MAILTO in `man 5 crontab`
<eagles0513875|2> joschi: i did but seeing as ubuntu doesnt have root but uses sudo hence why im asking
<joschi> eagles0513875|2: try `id root` and you'll see that there is a root user
<eagles0513875|2> if leave MAILTO= "" no mail is sent otherwise mail will be sent to the owner of the crontab
<eagles0513875|2> so since i used sudo crontab -e it will send it to root right
<eagles0513875|2> if im reading the man page right
<xampart> eagles0513875|2: you could "echo 'root: your.user@domain' >> /etc/aliases && newaliases"
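Tying the thread together: a sketch of a root crontab entry with MAILTO (edit with `sudo crontab -e`; the time and script path are examples):

```
MAILTO=root
30 2 * * *  /usr/local/bin/backup.sh
```

and the /etc/aliases line xampart suggests so root's mail lands in a real mailbox (run `newaliases` afterwards; the address is a placeholder):

```
root: your.user@example.com
```

cron mails any stdout/stderr the job produces to the MAILTO address, which covers the error-reporting eagles0513875|2 wants.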
<eagles0513875|2> O_o
<eagles0513875|2> is my understanding of the man page correct though
<eagles0513875|2> xampart:  i do mail to and pass what you pasted above
<eagles0513875|2> wow just found something similar to what you just mentioned
<eagles0513875|2> outa curiosity is it possible to have the root crontab one job runnign sending mail to root daily another sent to lets say user a to back up his data and let him know its backed up
<eagles0513875|2> is that possible
<xampart> what exactly?
<eagles0513875|2> lets say you have multiple users runnign their respective cron jobs via the root crontab
<eagles0513875|2> is it possible to have Mailto send an email to one user about their job and then to a different user about a different job
<xampart> "By default cron jobs sends a email to the user account executing the cronjob."
<eagles0513875|2> so in my case root
<eagles0513875|2> which makes the mail to redundant
<xampart> why not use users crontab
<eagles0513875|2> this one im running a backup into /mnt and i need root to back up my files to that location
<xampart> it is possible
<eagles0513875|2> well i have it setup in root crontab for now
<sky1> which packages for ubuntu do i need for basic authentication only with the form login page from request tracker .. no samba or other stuff is necessary
<sky1> on the client side
<jussi> Hrm, Im having an issue with my server, I want to have all pages in domain.com/subdir/ redirect back to domain.com/subdir/index.html - How would I acheive this?
<jussi> alternately, is it possible that I only allow trafic to there from one ip address?
<eagles0513875|2> jussi: traffic from one ip i think is done with .htaccess if im not mistaken
<eagles0513875|2> if not that im sure you can use hosts.allow as well
<eagles0513875|2> and route accordingly in iptables
<shauno> htaccess is easiest, as it'd only affect that vhost & directory.  something like http://paste.ubuntu.com/590661/
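The paste is gone, but the IP restriction shauno describes looks roughly like this in the directory's .htaccess (Apache 2.2-era syntax; the IP is a placeholder, and the vhost's `AllowOverride` must permit `Limit` directives):

```
Order deny,allow
Deny from all
Allow from 203.0.113.7
```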
<shauno> redirecting everything back to index.html would be some mod_rewrite magics that I'm not capable of doing off my head :)
<mithran1> is there a command like top for ifconfig??
<uvirtbot`> New bug: #753330 in samba (main) "The Samba 'panic action' script, /usr/share/samba/panic-action, was called for PID 1783 (/usr/sbin/winbindd)." [Undecided,New] https://launchpad.net/bugs/753330
<raphink> mithran1, why would you like such a command?
<raphink> mithran1, you want to watch your IP address?
<soren> mithran1: You mean ilke iftop?
<soren> s/ilke/like/
<jussi> shauno: thanks. I ended up googling for it, found post no 2 here: http://www.webmasterworld.com/apache/4085501.htm
<jussi> (which worked)
<shauno> jussi: looks convincing, but you can see why it's harder to produce blind :)
<jussi> shauno: yeah. :)
<mithran1> raphink: thank you so much, it looks awesome
<raphink> mithran1, you mean soren right?
<mithran1> ya raphink: I did mean soren, sorry..
<mithran1> soren: thanks a lot, that is a really good tool, helps me do exactly what I want to
<soren> np
<pluesch0r> hi everybody. i'm trying to install grub on a disc that i restored from a backup. i've booted my box from a install disc.
<pluesch0r> after mounting the disc, bind-mounting dev and supplying proc on the correct position, i'm trying to find /boot/grub/stage1 inside the grub shell.
<pluesch0r> however, the file is not found (it is there, though).
<pluesch0r> what could i be doing wrong?
<pluesch0r> i'm using grub 0.97-29ubuntu53 on an ext4 disc.
<jussi> Hrm, just downloaded lucid server, and Im getting:  Unknown keyword in configuration file gfxboot.
<jussi> vesamenu.c32:  not a COM32R image
<jussi> any way to boot?
<eagles0513875|2> lucid server has worked for me like a charm
<eagles0513875|2> brb need to reboot work laptop
<jamespage> Daviey: fix for bug 749720 uploaded and proposed if you would to review :-)
<uvirtbot> Launchpad bug 749720 in mod-wsgi "Wrong symlink in libapache2-mod-wsgi-py3 and incompatible with python3.2" [Medium,Confirmed] https://launchpad.net/bugs/749720
<Daviey> jamespage, super
<Daviey> what was wrong with the detecting python version line?
<Daviey> jamespage, fancy adding DEP-3 headers to the patch?
<jamespage> Well I'm no regex guru but it mapped 3.2-1 -> 3.2-1 instead of 3.2
<Daviey> ahh
<jamespage> Daviey: ack - occurred to me just after proposed the patch
<Daviey> super!
<jamespage> give me 5
<jamespage> Daviey: branch updated as requested :-)
<Daviey> jamespage, awesome, just reviewing something else... will then sort it out
<xampart> how are your central loggin systems set up?
<TeTeT> SpamapS: hi, could you fix bug 561750 for Lucid as well?
<uvirtbot> Launchpad bug 561750 in squid "squid starts and stops immediately (after upgrade from karmic to lucid)" [Medium,Fix released] https://launchpad.net/bugs/561750
<a7ndrew> less less
<daxroc> Hey all
<kaipanoi> Mornin
<kaipanoi> or afternoon or evening, as appropriate
<daxroc> Having a problem with mysql-cluster-server package, http://pastebin.com/WG74xyFb
<daxroc> any one know how I could fix the following error ?
<mok0> daxroc: line 16 says it
<daxroc> mok0: libmysqlclient is causing the conflict, I can't remove it tho ?
<raphink> daxroc, that looks like a bug in lucid
<daxroc> raphink: on 10.10
<raphink> daxroc, I doubt so
<daxroc> The version I am using is 10.10, with that error
<raphink> mysql-cluster-client-5.1 7.0.9-1ubuntu7 and libmysqlclient16 is from stock lucid
<raphink> sorry
<raphink>  mysql-cluster-client-5.1 7.0.9-1ubuntu7 is from stock lucid
<raphink> and libmysqlclient16 5.1.41-3ubuntu12.10 is from lucid-updates
<raphink> maverick has higher versions of both
<daxroc> raphink: sorry I am using lucid
 * daxroc hides
<raphink> anyway, this is a bug, there should be a conflict between the two packages
<raphink> is there something that prevents you from removing the libmysqlclient16 package?
<daxroc> not sure how? apt-get remove libmysqlclient16 ?
<raphink> that, or use apt-get install mysql-cluster-client-5.1 libmysqlclient16-
<raphink> which will remove libmysqlclient16 at the same time as it installs mysql-cluster-client-5.1
<daxroc> raphink: not letting me remove libmysqlclient, php5-mysql depends on it , when I try and remove php5-mysql it gives an error about mysql-cluster-server not installed correctly
<raphink> daxroc, if you have packages depending on libmysqlclient16, you can use equivs to fix that
<raphink> (and do report the bug, please)
<daxroc> raphink: not sure what I should do wiht equivs
<daxroc> can I force uninstall and reinstall after ?
<N3> good domain registrar with privacy?
<asadeddin> hey all. quick question. I'm planning on moving ISP's and I found out we have no MX record, although we have a mail server in the office that's working. Our IP on the A records is set to our mail server
<asadeddin> now if we should move ISP's, all i have to do is change the IP's for the A records?
<raphink> sorry daxroc I was afk
<raphink> see http://www.debian.org/doc/manuals/apt-howto/ch-helpers.en.html about equivs
<raphink> build a fake package that provides libmysqlclient16 so php doesn't complain
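An equivs control file for this looks roughly like the following (a sketch; the version string is a made-up value high enough to satisfy php5-mysql's dependency). Save it, run `equivs-build <file>`, and install the resulting .deb with `dpkg -i`:

```
Section: misc
Priority: optional
Standards-Version: 3.9.1
Package: libmysqlclient16
Version: 5.1.41-999fake1
Description: empty placeholder for libmysqlclient16
 Satisfies php5-mysql's dependency while the real client
 library is supplied by the mysql-cluster packages.
```

Note this only works if the cluster packages really do ship a compatible libmysqlclient.so; the dummy package itself contains no files.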
<RoAkSoAx> morning all
<semiosis> morning RoAkSoAx
<RoAkSoAx> morning semiosis
<Klight> Hello all, anyone good with wpasupplicant, I've got it installed okay and even got a connection, however now the server crashes (hangs) on shut down or reboot. I'm very new at all this so I thought I ask the pros :)
<raphink> hi RoAkSoAx
<RoAkSoAx> hi raphink
<pmatulis_> quick one.  i created an alias with 'ip address add 192.168.7.100/24 dev br0'.  but this never shows up with ifconfig.  normal?
<uvirtbot> New bug: #753580 in dhcp3 (universe) "dhclient does not strip or escape shell meta-characters" [Undecided,New] https://launchpad.net/bugs/753580
<zul> SpamapS: ping
<uvirtbot> New bug: #753605 in mysql-5.1 (main) "removing mysql with apt doesn't delete user mysql" [Undecided,New] https://launchpad.net/bugs/753605
<hggdh> Daviey: there?
<uvirtbot> New bug: #753661 in nut (main) "upsd write() failed for 127.0.0.1: Broken pipe" [Undecided,New] https://launchpad.net/bugs/753661
<Daviey> hggdh, o/
<andygraybeal> can a group be inside of another group?  so for instance the group 'lpadmin' is inside the group  'coordinator'; so every time i assign someone to the group 'coordinator' they also are inside the group 'lpadmin' ... or is this a ridiculous thought pattern?
<SpamapS> zul: pong
<MTecknology> I'm having some troubles... http://dpaste.com/529722/ ... the physical volume has 1013.6 GB available to it - but pvdisplay indicates that it thinks there is only 704.00 GB available
<SpamapS> MTecknology: did you pvresize it?
<MTecknology> SpamapS: I wanna hug you
<SpamapS> MTecknology: please refrain .. people will talk
 * patdk-wk wants a hug too!
<MTecknology> well...
<patdk-wk> hmm, I couldn't locate any amavisd-new 2.7.0 packages anywhere :(
<MTecknology> SpamapS: i won't hug you then.. i'll just kiss your nick on the screen
<patdk-wk> just finished building one, took a few hours :(
<patdk-wk> now to throw it onto my production test server :)
<screen-x> Hi all, I'm having trouble building a debian package for a perl module with dh-make-perl. First error is "Too early to specify a build action 'vendor'. Do 'Build vendor' instead." full output: http://scsys.co.uk:8002/96335
<MTecknology> SpamapS: point is - thanks :)
<SpamapS> MTecknology: glad you got it going
<patdk-wk> hmm, slow launchpad day
<SpamapS> screen-x: seems like that module isn't built right for CPAN
<SpamapS> patdk-wk: the fact that you can notice when its slow is a testament to how much faster it has gotten of late. ;)
<patdk-wk> I know :)
<patdk-wk> sometimes I had builds take an hour to even notice I submitted them
<screen-x> SpamapS: hmmmm, so theres a bug in the module itself?
<patdk-wk> let alone build
<SpamapS> screen-x: or in its packaging.
<SpamapS> screen-x: meaning, its perl packaging
<RoAkSoAx> kirkland: ping
<RoAkSoAx> zul: ping
<screen-x> SpamapS: ok, thanks
<zul> RoAkSoAx: pong
<RoAkSoAx> zul: by any chance do you have some free time and a cobbler server ready to netboot?
<RoAkSoAx> zul: I'm getting an error during install that says
<RoAkSoAx> "No kernel modules were found. etcetc"
<zul> RoAkSoAx: not right know i dont
<zul> the iso import failed or soemthing?
<RoAkSoAx> zul: nope, the imported ISo is the same, and was working fine on Monday as far as I can remember
<RoAkSoAx> s/same/same I had for quite a while now/
<zul> weird...did you guys break something? ;)
<RoAkSoAx> zul: well only one patch was added from the time it was working fine till now, so that might be it
<zul> RoAkSoAx: which patch is this?
<RoAkSoAx> 36_tainted_file_path.patch
<RoAkSoAx> zul: but that really shouldn't affect in any way
<zul> RoAkSoAx: try it withouth
<RoAkSoAx> zul: yeah building now :)
<MTecknology> SpamapS: how about this one? http://dpaste.com/529732/
<MTecknology> OH!... I grew the physical volume wrong.... and now it's beyond it's actual capacity
<MTecknology> and i learned yet more today....
<MTecknology> working perfect now
<MTecknology> online resize of fs taking place and no issues expected
<patdk-wk> looks like amavisd-new is working good :)
<shaiguitar> hey, my main.cf looks like this: http://pastie.org/pastes/1768470/text?key=bt9yd6xho5kiie5ditgiq
<shaiguitar> andI was wondering how to config to forward everything (including local mail) to an external
<shaiguitar> @gmail address?
<shaiguitar> I'm a total noob at this, so TIA :P
<shaiguitar> specifically I guess I should change procmail -a "$EXTENSION" also? I don't know though TBH
<robbiew> jamespage: JamesPage: do you have a wordpress account?
<shaggy2> g'day helpful people, I need some help, I have changed the network card that the server uses, I need to know how to find out what network cards are installed and what short name (eth*) the system has given them
<robbiew> Daviey: fyi...granted you admin rights to the wordpress blog
<Daviey> robbiew, ta
<shaggy2> FYI, I had 3 cards installed, my network was on eth1. I removed 2 cards for use in another system and now I only have the onboard card, I thought it would just be eth0 but it's apparently an unknown interface. assistance please, anyone?
<kaipanoi> ifconfig -a
<shaggy2> thank you
<kaipanoi> np
<shaggy2> I almost have everything back up and running after a complete network overhaul... I changed from using my netgear router from handling everything to installing freebsd with pfsense (complete installer package) and using that to handle 2 different IP ranges
<kaipanoi> I love pfsense
<shaggy2> I am learning it.
<kaipanoi> its the only fw we use at work
<shaggy2> was told to just run it in a VM for a while till I learn it, I just went right ahead and installed it, stuffed it up then reinstalled it, and now got it fully working and handling everything todo with networking
<shaggy2> can I disable NAT on one adaptor (ip range) but not the other?
<kaipanoi> might be better to just use 1:1
<shaggy2> what is 1:1
<kpettit> I haven't had to deal with hardware in awhile.  What's the best mid-high end processor for running virtual machines, probally with virtualbox?
<shaggy2> is that like DMZ?
<kaipanoi> how much money do you have, kpettit? ;)
<kaipanoi> shaggy2, kinda/maybe. with 1:1 NAT you can make a private IP appear to have a public IP
<kaipanoi> http://doc.pfsense.org/index.php/1:1_NAT
<RoyK> kpettit: you can probably do well with an elderly opteron
<kpettit> kaipanoi, I was thinking of spending 700-1k on a desktop.
<RoyK> kpettit: the important part in most cases is memory, not cpu
<RoyK> kpettit: anything will do in that price range
<RoAkSoAx> zul/win 18
<RoAkSoAx> argh
<kpettit> So there isn't any specific processor feature or anythning that I should be looking for?
<RoyK> all new processors, except some atoms, support what you need
<kpettit> I know some of the older processors are missing some virtualzation features.  Just wanted to make sure i didn't miss something
<RoyK> and for a desktop, you wouldn't really want an atom
<kpettit> RoyK, perfect.  Thanks
<RoyK> older, yes, but that's like 3 years old or so
<kaipanoi> ensure it has AMD-V or TV-x
<kaipanoi> http://en.wikipedia.org/wiki/X86_virtualization
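kaipanoi's AMD-V / VT-x check can be done from a running Linux box by looking for the `svm` (AMD) or `vmx` (Intel) CPU flags:

```shell
# Report hardware-virtualization support advertised by the CPU.
# vmx = Intel VT-x, svm = AMD-V. The flag can be present yet
# disabled in the BIOS, so also check firmware settings.
flags=$(grep -Eow 'vmx|svm' /proc/cpuinfo | sort -u | tr '\n' ' ')
echo "VT support: ${flags:-none}"
```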
<kpettit> I've got a dual-core xeon now with 4gb.  And it just isn't keeping up wiht me doing 1 vm and regular desktop stuff
<RoyK> kpettit: also, kvm will probably do better than vbox
<RoyK> (just my 2c)
<shaggy2> kaipanoi: ok here is my setup, 1 nic is the WAN, next is the Private IP's (eg 192.168), 3rd is the public ip's a /29 setup. on the public IP nic I have 3 servers. that have the /29 ips (eg 150.101)
<kpettit> kpettit, I've been anxious to try that.  It's been awhile.  kvm wasn't quite there last time I tested, but it's been a good year or two sense I tried last
<RoyK> kpettit: talking to yourself? :)
<kpettit> apparently.  I think I forgot to take my meds.  kpettit, no you didn't
<shaggy2> lol we all talk to ourselves, we are allll NUTS :)
<RoyK> kpettit: kvm is in at least lucid and so on
 * RoyK doesn't talk tohimself, yes!, no, he doesn't
<kpettit> :)
<MTecknology> sudo -s
<RoyK> first sign of sanity, really
<MTecknology> sorry
<RoyK> you should hear me if I debug something bad :P
<kpettit> haha.  I bet.  I usually have to make sure my kids aren't around when I start coding.
<shaggy2> you should hear me when I stuff something up with the network or the servers
<shaggy2> I abuse myself
<MTecknology> I'm a good coder, so no bad language comes from me
<kpettit> :) THat was pretty good
<RoyK> hah - the windoze guys at work had decided to use Ahsay for windoze backup instead of sticking with bacula for it all and then, suddenly, ahsay added another 800% to their pricing - boss decided to switch to bacula in a fraction of a second
<kpettit> RoyK, gotta love that.  Sticker shock seems to make a lot of opensource converts
<RoyK> bacula is _fast_ btw
 * patdk-wk wants to play with bacula
<patdk-wk> I have a 25 lto lib, doing nothing
<patdk-wk> cause the expensive corperate software doesn't work
<RoyK> with 220TB worth of backup storage, we don't need further investments in a few years :P
<patdk-wk> royk, but what happens when you fill that next month? :)
<RoyK> we won't
<RoyK> if we do in a year or two, we get another disk shelf or two
<patdk-wk> when the commercial backup thing worked, it took >2weeks to do a backup
<patdk-wk> it never finished, I killed it
<patdk-wk> our data is completely replaced within 2 weeks
 * patdk-wk just doing an rsync was faster than 2 weeks :)
<RoyK> we were considering getting disk-based backup for Legato, but the pricing was hilarious - you have to pay for the amount of storage space available to Legato
<RoyK> and when 100TB doesn't cost much, paying > 10x the price of that for licenses, well, bacula was a better choice :P
<jamespage> robbiew: I do
<shaggy2> umm anyone here know anything about dns servers??? I have bind9 installed and have it set with a domain name and the ip address of where it is hosted, I have set the name servers with my registrar to that of my server but it's not happening
<MTecknology> time to start writing a bash app... and also time to start music so i don't lose it
<MTecknology> this thing is going to be a minimum of 5 billion lines.....
<RoyK> shaggy2: what's the IP of the host and the domain name (zone) it's supposed to service? I can test form here if you like
<RoyK> MTecknology: in bash...
<shaggy2> admin.shaggyweb.net and the ip is 150.101.191.139
<RoAkSoAx> SpamapS: can I close this one as kirkland already worked on getting the cobbler-web package working? bug #705691
<uvirtbot> Launchpad bug 705691 in cobbler "cobbler-web should include a working configuration and a README file detailing the steps necessary" [Wishlist,Confirmed] https://launchpad.net/bugs/705691
<RoyK> Dora:~ roy$ host admin.shaggyweb.net 150.101.191.139
<RoyK> ;; connection timed out; no servers could be reached
<RoyK> shaggy2: I guess, either bind isn't started, or a firewall blocks it
<shaggy2> ok I'll look into it
<MTecknology> RoyK: ya... oddly enough i think that's the best choice except possibly python - but i'm also not a python fan
<RoyK> MTecknology: bash scripting is for tiny stuff, not for writing applications
<RoyK> MTecknology: bash is parsed, not precompiled
<RoyK> use something sane like python, php, perl, even mono
 * RoyK likes perl
<MTecknology> RoyK: it's not really an app.. it's a very simple management interface for sentinel servers. the hard part is going to be all the whiptail i'll be using
<MTecknology> or dialog.. not sure yet
<MTecknology> and saving configs
<RoyK> just use a sane programming language with database support (which includes them all, the sane ones)
<MTecknology> they need to be able to edit the config manually too
<shaggy2> can someone ping 150.101.191.139 for me please
<MTecknology> shaggy2: no
<MTecknology> RoyK: i'm still considering python for this - just not sure - it's the best tool, but i don't like it
<MTecknology> best tool for 'this'
<MTecknology> job
<MTecknology> I need to stop swapping keyboards
<RoyK> MTecknology: python, or java, or perl, or mono, or php, or ruby, or anything, really, will do the job nicely. which one you choose is only a matter of which one you know the best or like the most
<kpettit> I love me some python.  Espically for cli apps.
 * RoyK uses perl for that :P
<MTecknology> perl or python would be best suited for this i'm sure
<RoyK> I have to admit I use shell scripts for easy stuff, but when it comes to saving state, shell scripts rather suck
<kpettit> so many languages, so little time
<RoyK> jÃ¡, Ã©g veit
<kpettit> yeah, I agree.  I started using python because it was easier for me to pick up than some of the other ones at the time.  And I needed to be able to do cli, gui, and web stuff.
<RoyK> kpettit: then python is probably the best to you for you
<kpettit> My brain can't handle learning too many different things at the same time :)
<kpettit> last time I tried, I forgot my kids names.
<RoAkSoAx> zul: apparently it wasn't cobbler, but rather an issue with the archives, as I imported today's ISO and no error whatsoever
 * RoyK once attended German and Icelandic courses in parallel - NOT a good idea
<zul> RoAkSoAx: cool beans
<pittstains> can someone tell me what the aolserver4-nsd application does?  i see that it's running on a server I administer, but I have no recollection of installing it
<RoyK> pittstains: AOL Web server AFAICS
<RoyK> pittstains: if you don't recall installing it, I guess running chkrootkit might be a good idea :P
<pittstains> RoyK: the man page says only "Nsd is the AOLserver binary."
<RoyK> yeah, but are you running aolserver?
<pittstains> it was running when i logged in, and it was hogging port 80 so Apache couldn't listen on it
<RoyK> pittstains: did you install the server?
<pittstains> no, i am a little concerned about how it got there...
<RoyK> !chkrootkit
<RoyK> stupid bot
<pittstains> haha
<RoyK> !google chkrootkit
<ubottu> I have no google command, use http://www.google.com/
<pittstains> hm, i'm also seeing a /home/sysgames directory that doesn't look familiar
<pittstains> aaaaaaaaaaaaaaaarg
<RoyK> pittstains: see above :P
<Pici> !info chkrootkit
<ubottu> chkrootkit (source: chkrootkit): rootkit detector. In component main, is optional. Version 0.49-4 (maverick), package size 301 kB, installed size 824 kB
<pittstains> yup, and there's a new user called sysgames in /etc/passwd... the newest user, even
<RoyK> pittstains: I'd download chkrootkit from the source, not the ubuntu package, to be sure
<RoyK> pittstains: what's the id of that user?
<RoyK> `id sysgames`
<pittstains> royK: 503 according to /etc/passwd
<RoyK> k
<pittstains> why from source?
<RoyK> well, most of it is perl, plus some binaries, but then you know it's not been tampered with
<RoyK> use the source, luke...
<pittstains> RoyK: any suggestions for tracking down the entry point of this attack?  i'd prefer to close the hole in addition to eliminating the installed garbage
<RoyK> check all logs
<RoyK> and dates on new files
<RoyK> if the attacker has gained root access, which it seems, better reinstall
<RoyK> there may be setuid binaries around you can't find very easily
<RoyK> people can add additions to existing cron jobs to open tunnels to the outside as well
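RoyK's checklist (fresh file dates, hidden setuid binaries) can be sketched as a pair of find(1) helpers. This is only a rough triage aid, assuming GNU find for `-newermt`; as RoyK says, a rooted box should be reinstalled regardless.

```shell
# Rough triage helpers for a suspected compromise (GNU find assumed).
# Not a substitute for reinstalling a rooted box.

# Files under DIR modified since DATE (e.g. "2011-04-01"):
recent_files() { find "$1" -xdev -type f -newermt "$2" 2>/dev/null; }

# Setuid binaries under DIR -- attackers like to hide these:
setuid_files() { find "$1" -xdev -type f -perm -4000 2>/dev/null; }

# Typical use on the whole filesystem:
#   recent_files / "2011-04-01" | less
#   setuid_files /
```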
<bluethundr> hey guys...  how do I find what ethernet / wlan driver is loaded in ubuntu?
<RoyK> if the box is rooted, reinstall
<bluethundr> lspci turns up nothing
<RoyK> lspci is rather old
<bluethundr> it's 8.04 LTS
<RoyK> lshw is a bit better
<RoyK> both scan the bus
<bluethundr> hmm ok thanks I'll give that a try
<RoyK> to see what drivers are loaded, lsmod
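bluethundr's question can also be answered directly from sysfs, which maps an interface to its kernel module. A sketch (Linux-specific; the optional second argument is an alternate root that exists only to make the function demonstrable outside a real `/sys`):

```shell
# Map a network interface name to the kernel module driving it.
net_driver() {
    # /sys/class/net/<iface>/device/driver/module is a symlink whose
    # last path component is the module name (e.g. e1000e)
    readlink "${2:-}/sys/class/net/$1/device/driver/module" 2>/dev/null \
        | awk -F/ 'NF {print $NF}'
}

# Typical use:  net_driver eth0
# lsmod lists everything loaded; newer pciutils also shows the driver
# per device with `lspci -k`.
```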
<bluethundr> right thanks
 * RoAkSoAx off to lunch
<pittstains> RoyK: i hate computers
<RoyK> pittstains: hehe - so do I - I also love them :P
<pittstains> i think you're probably right.... thanks for your help
<RoyK> np
<pittstains> RoyK: huh!  somehow the date on the aolserver4-nsd file (in /usr/sbin) is november 5, 2008
<pittstains> not sure i've even had the server that long!
<patdk-wk> maybe it came out of a tar file with timestamps preserved :)
<RoyK> pittstains: file dates can be changed
<RoyK> pittstains: or perhaps it's just false alarm
<RoyK> pittstains: when was the last reboot? when was the last time apache was running?
<petani> i have a problem with php5
<pittstains> $ uptime
<pittstains>  14:10:02 up 14:08,  1 user,  load average: 0.00, 0.01, 0.00
<petani> some body help me
<patdk-wk> petani, can't do that
<patdk-wk> you have supplied no info to help you
<petani> why does php5 in ubuntu not support image anti-aliasing in phpgd
<petani> my php5 does not support image anti-aliasing
<pittstains> RoyK: not sure how to figure out last time apache was running
<petani> or image rotating
<RoyK> petani: no idea - perhaps that's a newer feature or perhaps it's in a module not installed?
<pittstains> also not sure why uptime is only 14 hours
<RoyK> pittstains: smells bad...
<RoyK> pittstains: if you're close to the server (the server not being on the other side of the planet in some colo etc), I'd recommend reinstalling it
<petani> RoyK: I installed phpgd
<RoyK> petani: did you restart apache after you did that?
<petani> but image anti-aliasing does not work
<petani> yes
<petani> i restarted
<petani> on centos it works
<RoyK> sorry - no idea - might be a module missing
<pittstains> RoyK: no physical access to the machine :-/     .... looks like i have a long day ahead of me tomorrow
<RoyK> try asking on #php - maybe they know
<pittstains> also, the existence of the sysgames user is troubling
<RoyK> pittstains: can you give me its IP?
<RoyK> pittstains: it'd be fun to scan it to see what I can find from here :)
<petani> my friend tells me it is because of security issues
<pittstains> RoyK: sent in a PM
<petani> join #php
<pittstains> please do tell me what you find :-)
<RoyK> pittstains: I'll send you a report - openvas just started :)
<petani> RoyK, my problem related it  http://www.jibas.net/content/fordis/fordisisi.php?kode=SISFO&page=59
<RoyK> petani: I don't quite understand your language, Malay?
<petani> not malay
<petani> is indonesian
<RoyK> ok, sorry
<RoyK> I still don't understand shit, though
<RoyK> better ask on #php
<petani> ok, thanks
<RoyK> perhaps someone there can point you to where to find a download
<petani> i will try recompiling php5 to support phpgd anti-aliasing
<petani> and image rotation
<pittstains> RoyK: also have new users tor and messagebus...
<pittstains> don't really have time to dig into my logs until tomorrow, but suffice it to say i'm irritated
<RoyK> having a server hacked tends to make a sysadmin annoyed :P
<pittstains> gah, i'm a programmer first, sysadmin second
<pittstains> i hate having to wear so many damn hats
<pittstains> i'll be locking down SSH logins to a small set of IPs on all my machines tomorrow
<pittstains> that will at least minimize my exposure
<pittstains> i repeat: aaaaaaaaaaaaaaaaaaarg!
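Restricting sshd to a handful of source addresses, as pittstains plans, can be done with tcp-wrappers or a firewall rule. A sketch with example addresses (203.0.113.0/24 is a documentation range), assuming Ubuntu's sshd of this era is built with libwrap:

```
# /etc/hosts.deny
sshd: ALL

# /etc/hosts.allow -- only these networks may connect
sshd: 203.0.113.0/255.255.255.0

# or, equivalently, as firewall rules:
#   iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
#   iptables -A INPUT -p tcp --dport 22 -j DROP
```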
 * RoyK is a sysadmin first, a photographer second ..... and out there somewhere perhaps a programmer :)
<RoyK> pittstains: see pm
<pittstains> RoyK: i hate this report!
<RoyK> pittstains: it doesn't show any security holes
<pittstains> that SSH one doesn't look too nice!
<RoyK>   Successful exploits will allow attackers to obtain four bytes of plaintext from
<RoyK>   an encrypted session.
<RoyK> four bytes
<pittstains> :-)
<RoyK> you'll have to be seriously interested in hacking the site to gain anything from that
<pittstains> okok, reading comprehension
<pittstains> very interesting report, though!  i will be looking at openvas in more depth later!
<RoyK> pittstains: openvas rocks :)
 * RoyK runs another scan against the office
<geekbri> so is it correct that you can't use | in cron jobs?
<uvirtbot> New bug: #753924 in php5 (main) "package php5-fpm 5.3.5-1ubuntu6 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/753924
<RoyK> geekbri: use a shell script
<geekbri> RoyK: i guess i'll have to if | isn't supported in crontab.  IS that the case?
<SpamapS> RoyK: as long as it works "out of the box" and explains itself in the README, then I'd agree, that feature is implemented. :)
<RoyK> geekbri: I've seen variable success with using piping with cron
 * RoyK sticks to shell scripts - they work
<geekbri> RoyK: honestly i was just trying to pipe output to "logger" so i didnt have to rotate a new log :'(
<RoyK> sorry - I don't know if it works - I usually handle logs in a script
<RoyK> there's something rather fishy about cron/upstart in lucid
<RoyK> the bug is filed, but last I checked, it wasn't even accepted
<geekbri> yeah im just getting complained to by our DB about how they dont want a shell script calling a php script etc etc :)
<RoyK> a php script should be able to handle its logs quite easily
<geekbri> RoyK: yes i brought that up to them already :)
<RoyK> so... 380TB and counting :D
<RoyK> perhaps we'll reach 1PB next year
<geekbri> RoyK: it's valid to call something like source my.env && command in cron though, right?
<RoyK> geekbri: just try - I've lost track of cron since Lucid - it seems to be a bit buggy
<geekbri> RoyK: thanks! i will :)
<RoyK> btw
<RoyK> source 'something' probably won't work
<geekbri> well i'd include the full path
<RoyK> since cron wants to run an executable
<RoyK> 'source' isn't
<geekbri> oh hrm... so can you not use source??
<RoyK> source is an internal bash command
<RoyK> just create a script that does the 'source' bit
<geekbri> yeah im just facing pressure from some high level folks to not create a separate script thats run and to put it all just in cron directly
<RoyK> those 'high level folks' should be high level enough to write a script that is cronable
<geekbri> hehe you would think that right!
<RoyK> I work in a research institute - we have scientists complaining about all sorts of things
<RoyK> if you just tell them how things work, it's usually not a problem
<geekbri> the problem is, and i have no idea why they've designed it this way, but their php scripts rely on bash environment variables.
<RoyK> then create a script as a wrapper
<RoyK> you won't get the user environment into cron
<geekbri> right, but thats where all the complaining comes in, because they say, well why do we need to write a script to run a script... and i've tried to explain but they just dont seem to get it
<RoyK> because cron doesn't read .bashrc
<geekbri> yeah but, it's actually a totally separate file with just a bunch of exports in it.
<geekbri> so it resides in /etc/web/conf.d/stage2.env or something to that effect
 * RoyK gets tired
<geekbri> anyway i'll have to try to see if the source will work, and if it doesn't which it probably wont, i'll have to write that wrapper and tell them thats just the way it'll have to be
<RoyK> geekbri: google cron environment
<geekbri> RoyK: rgr that
<RoyK> rgr?
<geekbri> roger
<geekbri> roger that
<RoyK> sorry, didn't know that tla
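A minimal version of the wrapper RoyK recommends: cron runs each job line through `/bin/sh` by default, where bash's `source` keyword is spelled `.`, so the robust route is a tiny POSIX script or function that loads the export file and runs the real job. The env-file path below comes from geekbri's description; everything else is illustrative.

```shell
# run_with_env ENVFILE CMD [ARGS...]
# Load a file of VAR=value lines, export them, then run the command.
# POSIX sh compatible, so it works from cron's default shell.
run_with_env() {
    env_file=$1; shift
    set -a            # auto-export every variable the env file assigns
    . "$env_file"
    set +a
    "$@"
}

# Example crontab entries (pipes do work in crontab lines, since cron
# hands the whole line to $SHELL):
#   0 * * * *  /usr/local/bin/runjob 2>&1 | logger -t runjob
#   0 * * * *  . /etc/web/conf.d/stage2.env && /usr/bin/php /srv/app/job.php
```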
<hallyn> hggdh_, any update on bug 746751 ?
<uvirtbot> Launchpad bug 746751 in linux "kernel: [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 30)" [Critical,In progress] https://launchpad.net/bugs/746751
<hggdh_> hallyn: it seems, really, to be related to walrus issues, I opened bug 753779 on it
<uvirtbot> Launchpad bug 753779 in eucalyptus "walrus fails to retrieve images on instance startup" [Undecided,New] https://launchpad.net/bugs/753779
<RoAkSoAx> robbiew: systemd won't yet be considered till the LTS?
<RoyK> what's sysdamd?
<RoyK> ah
<RoyK> I guess we'll have to get pissed another 18 months with upstart before we can attempt that one
<RoyK> [offtopic] http://www.youtube.com/watch?v=kNxX4SDqpVU
<amber285> Hi All, my company has just recently migrated from a Windows to a Linux farm.  At the moment we don't have a document storage system so we are using Google docs at the present time. We don't see this as the safest method of document storage so I have been assigned the task of finding a better solution. I have been advised to set up an FTP server but this method seems dated and the search functionality isn't very good and I've a
<RoyK> heh - amber285 spent two whole minutes waiting for an answer :)
<semiosis> RoyK: she didnt even finish writing her
<RoyK> semiosis: agre
<amstan> i have around 6 comps in my house, all need to download latest debs
<amstan> i'm looking for the simplest clone of apt-proxy
<amstan> it would be nice if i didn't have to change all lines from sources.list
<jjohansen> hallyn: any new info I should be aware of on Bug #746751
<uvirtbot> Launchpad bug 746751 in linux "kernel: [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 30)" [Critical,In progress] https://launchpad.net/bugs/746751
<shauno> on postfix, should I have an alias for MAILER-DAEMON, since he's signing off on bounces? or is that meant to disappear to stop cyclic loops
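shauno's question goes unanswered here; for reference, a stock Postfix install on Debian/Ubuntu normally resolves MAILER-DAEMON through /etc/aliases rather than a real mailbox, so bounce signatures still have somewhere to land. Typical entries (the final address is an example; run `newaliases` after editing):

```
# /etc/aliases
mailer-daemon: postmaster
postmaster: root
root: you@example.com
```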
#ubuntu-server 2011-04-08
<shaggy2> anyone here got any exp with pfsense?
<uvirtbot> New bug: #754210 in bacula (main) "package bacula-director-mysql 5.0.1-1ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/754210
<overrider> I'm having a strange issue with 10.04.2 - it slows down after a few days of uptime or heavy data copying to it over the network. Gets so slow that it's tough to login and run commands. Once top or similar actually ran though, no CPU load or runaway process visible at all. iostat, free, top, anything does not show high load.
<overrider> Running a apt-cache search sensors (example) , is SLOW.. after a system reboot, everything is peachy again, up until sometime later, especially after copying a lot of files to it (its a backup fileserver)
<genii-around> Why does mail from apt-listchanges go to /var/mail/mail when it says it is emailing root? I don't see an alias for  user "mail" in /etc/aliases
<hggdh> Daviey: just FYI for your morning: today's ISO was missing python-psutil; as such, euca does not install from ISO
<awanti> Hi, when I log in to my server it takes a long time...
<awanti> How do I reduce this login time?
<amstan> why do ppl think irc is the fastest technology?
<amstan> you can't just go in a crowd and start yelling about things
<twb> amstan: *I* can
<amstan> especially if the crowd is afk
<amstan> lol
<lifeless> the trick is to take all your clothes off
<lifeless> then start yelling
<jmarsden> lifeless: In the IRC context, clothing removal would seem somewhat pointless and be unlikely to have much effect -- unless IRC clients are secretly enabling webcams these days?? :)
<twb> Only xmms2-irc
<lifeless> jmarsden: I was refusing the rl analogy;)
<twb> "real" life... who needs analogue signalling anyway
<gemclip> anyone around that could help me evaluate whether Ubuntu Server would be a right fit for what I want to do?
<genii-around> gemclip: It might help to know first what plans you have
<gemclip> I have been a windows server guy for years and I am trying to migrate over and test *nix servers at home. I would like to setup a NAS server and a Multimedia server (streaming etc.) but I haven't worked with nix much
<gemclip> I will be setting them up under vmware
<gemclip> Only stuff ive worked with in linux is BackTrack at work
<gemclip> I have a mac book, a ubuntu desktop and 4 pc's running windows 7 in the lab env
<gemclip> the machine I am setting up the vmware on is an i7 980x with 12gb ram and 10tb drive space
<twb> If you just want a NAS anything will do
<gemclip> streaming is a biggie I would like to find an open source solution
<twb> IMO the main benefit of Ubuntu is what it gets from Debian -- a strong Policy document which requires packages to "play nice" with one another, and infrastructure (dpkg and apt) to reliably distribute binary packages and their dependencies and updates.
<twb> Ubuntu also has better security than Debian (thanks to Kees Cook et al) and a "release train" model rather than a "when it's ready" release model.
<twb> The latter is useful for strategic planning in corporate environments.
<gemclip> I really like what Ive seen so far with ubuntu. The learning curve is a little rough but I have been trying to stay away from the gui til I get comfortable under the hood
<lifeless> theres a gui ?
<twb> No server should have a GUI
<gemclip> lifeless:you can start one lol
<twb> We advise against doing so
<gemclip> I was talking linux in general
<gemclip> not just for server. I am trying to move over to linux a little at a time
<gemclip> but ultimately server is the goal
<amstan> servers are a good start for learning linux
<gemclip> do you have any good websites you would point a newbie who wanted to setup a NAS and webserver. Any how-to's?
<twb> !RUTE
<ubottu> documentation is to be found at http://help.ubuntu.com and http://wiki.ubuntu.com - General linux documentation: http://www.tldp.org - http://rute.2038bug.com
<gemclip> thank you. Ill start my searching there
<amstan> gemclip: you should look into sftp, samba, and software raid
<amstan> gemclip: that should get you covered for a nas
<amstan> unless you wanna get fancy with nfs
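amstan's samba suggestion amounts to one stanza per share plus a samba password for each user (`smbpasswd -a <user>`). A minimal example with illustrative names and paths:

```
# /etc/samba/smb.conf -- one share stanza per exported directory
[media]
   path = /srv/media
   browseable = yes
   read only = no
   valid users = gemclip
```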
<gemclip> I bought a block of static IP's I wanted to setup a server to be able to access away from home and Let some of my neighbors connect to it as well
<amstan> gemclip: static ips are overrated, i just do with noip
<gemclip> I use 2 for work. I have to have them but it leaves me 3 that I can use personally
<gemclip> plus it only 10 bucks a month
<gemclip> I have a exchange server running 2008 right now but I have to keep that on its current ip
<gemclip> I could setup a 2008 server but I wanted to broaden my horizons
<gemclip> lol getting bored with windows and its expensive
<amstan> yes
<gemclip> is there anything windows 2008 can do that ubuntu server cant? functionality wise
<amstan> unix people like to do admining in different ways
<amstan> usually custom solutions
<amstan> so there's not a lot of polish in those areas
<amstan> (read: active directory)
<gemclip> are there group policies and user level control in U/S, equivalent to win servers?
<amstan> well.. on unix you have groups and users, you can associate permissions for those on files
<genii-around> Better
<amstan> and since everything is a file, ...yes
<gemclip> cool. I'll do some reading. Thanks for all your time
<amstan> gemclip: are you a developer as well?
<gemclip> yeah but mostly .NET stuff
<amstan> well.. regardless
<amstan> it's a lot nicer deving under unix
<amstan> with automated package management, you need a new library.. you can install it and have it end up in the proper place for compiling
<gemclip> from what I've read there is a lot of C compiling type stuff
<amstan> oh yeah, that stuff is pretty simple
<gemclip> fat fingers tonight
<gemclip> are there different types of Ubuntu servers or is it just what you install on the server?
<genii-around> gemclip: Usually you have a base install of something like LAMP ( linux-apache-mysql-php) and then customise
<amstan> gemclip: it's pretty much the same OS, with different software preinstalled
<amstan> you can switch between them at will
<gemclip> Should I read up on UEC or wait?
<twb> Do you care about cloud stuff?
<gemclip> Not right now. It will be nice to know for any clients I may work with down the road. But I think I should just start with basic server install and go from there
<gemclip> I found the server guide so Im reading that atm
<gemclip> just waiting for the ISO to finish downloading
<SpamapS> gemclip: which version?
<jussi> Im looking to get my server set up with LVM/RAID in a mirrored config. can someone guide me through this?
<TeTeT> jamespage: would you have some time for questions and general info on hudson and automated server testing later today?
<jamespage> TeTeT: I do; TBH have time now or can do later if you are busy
<TeTeT> jamespage: I'm about to leave for lunch (5 mins), so later would be better. I need to get myself up to speed with the blueprints and would love to do some testing myself. So I guess right now I'm looking more for guidance on getting started than real questions
<jamespage> OK; lets catchup later then - say 1500 UTC?
<TeTeT> jamespage: 1500 UTC is fine, just after my 1:1
<jamespage> TeTeT: great - speak then
<Error404NotFound> every now and then i would see only 5M free ram with a lot of it eaten by the buffers, sometimes even around 1G. Is there a way i can limit the buffers?
<shauno> it's not much of an answer, but I don't believe that's really a problem.  buffers will be dropped as soon as the memory is required for something more useful
<shauno> but 'free' is the least useful state for ram
<oCean> buffers are for your benefit
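shauno's and oCean's point can be checked directly in /proc/meminfo: memory held by buffers and page cache is reclaimable on demand, so it effectively counts as free. A sketch (the file argument exists only for demonstration; real use reads /proc/meminfo):

```shell
# Sum the free-plus-reclaimable figures from a meminfo-format file, in
# KiB. MemFree + Buffers + Cached is the classic "-/+ buffers/cache"
# value old `free` printed; newer kernels expose MemAvailable directly.
mem_avail_kib() {
    awk '/^MemFree:|^Buffers:|^Cached:/ {s += $2} END {print s + 0}' \
        "${1:-/proc/meminfo}"
}

# Typical use:  mem_avail_kib          # reads /proc/meminfo
```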
<jussi> How does one set a static IP from the CLI?
<binBASH> moin \sh
<binBASH> jussi: do you want to set it up permanent?
<jussi> yes
<\sh> moins
<jussi> this server needs to sit at the same place always
<binBASH> so check http://www.cyberciti.biz/faq/setting-up-an-network-interfaces-file/
<jussi> ahh, excellent, thank you
<binBASH> welcome
<jussi> binBASH: what are the => Network ID: 192.168.1.0 => Broadcast IP: 192.168.1.255 for?
<binBASH> jussi: read this http://en.wikipedia.org/wiki/Broadcast_address
<binBASH> and Network id is the subnet your pc is on ;)
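The fields jussi asked about fit into a standard ifupdown stanza. An example /etc/network/interfaces entry with illustrative addresses (the network and broadcast lines are optional on modern ifupdown, which derives them from address/netmask):

```
# /etc/network/interfaces -- static address, example values
auto eth0
iface eth0 inet static
    address   192.168.1.10
    netmask   255.255.255.0
    network   192.168.1.0
    broadcast 192.168.1.255
    gateway   192.168.1.1
```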
<uvirtbot> New bug: #754522 in autofs5 (main) "autofs not starting if no nfs-mount in fstab" [Undecided,New] https://launchpad.net/bugs/754522
<Saturn2888> I'm having some really strange issue. I rebooted my machine and get a "mountall: mount / [xxx] terminated with status 32"
<uvirtbot> New bug: #749402 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.37-1ubuntu5.5 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/749402
<uvirtbot> New bug: #752298 in bind9 (main) "package bind9 1:9.7.0.dfsg.P1-1ubuntu0.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/752298
<hggdh> a bug on the ISO (package not installable) should be opened against which package?
<ScottK> hggdh: What's not installable?
<hggdh> ScottK: python-psutil
<ScottK> Do you know why?
<hggdh> it seems to be missing from the ISO
<ScottK> Why is it needed on the ISO?
<hggdh> eucalyptus -- eucalyptus-common depends on it
<ScottK> It's in Universe.  That's why it's not on the ISO.
<ScottK> Looks like it's never been in Main, so it needs a Main Inclusion Report.
<jbernard> hallyn: ive made some progress, yes
<hallyn> jbernard, do you need any help?
<jbernard> hallyn: i was a bit ambitious and thought I could squash several other bugs in addition to the security issue, but that quickly turned into a small nightmare
<ScottK> hggdh: So the bug should be filed against python-psutil and ubuntu-mir subscribed.
<hallyn> jbernard: are all the bugs listed in lp?
<jbernard> hallyn: i have a package ready for 0.37.1 with the security issues fixed, but there is still a problem with upgrade
<jbernard> hallyn: between lp and bts, yes
<jbernard> hallyn: i now understand why tmpfs is needed
<hggdh> ScottK: duh. I really should have done my homework :-( thank you
<hallyn> ah, yes.  oh, right, i'd forgotten - is one of the things you're adding my upstart job?
<jbernard> hallyn: yeah, that is another thing i need to do. As I understand, dh_installinit in debian was updated to include knowledge of upstart jobs, but it seems it's not fully supported yet
<jbernard> hallyn: vorlon posted an explanation in debian-devel, ill see if I can dig it up
<hallyn> jbernard: oh boy
<jbernard> hallyn: my plan now is to upload 0.37.1 tonight, and then tackle the other issues in a subsequent upload(s)
<hallyn> what does .1 address?
<jbernard> hallyn: mostly the security issues
<jbernard> hallyn: and a couple of other smaller patches
<jbernard> hallyn: nothing major
<hallyn> ppetraki: anything in particular you want to ask for in bug 644489
<uvirtbot> Launchpad bug 644489 in multipath-tools "constantly changes /dev/disk/by-id/{scsi,wwn}-* LUN symlinks with multipathing" [High,Confirmed] https://launchpad.net/bugs/644489
<hallyn> jbernard: all of the security issues though?
<jbernard> yep, both of them
<hallyn> cool
 * ppetraki looking
<hallyn> jbernard: someone on #lxcontainers yesterday had a problem with libcgroup and lxc
<hallyn> jbernard: libcgroup was mounting each cgroup separately iiuc.  is that expected?
<jbernard> hallyn: separately in different locations? It should just create the dir structure under /sys/.. but it would depend on which version they're on
<hallyn> they said they were on natty
<hallyn> i've not been able to (and am not able to) reproduce right now.
<ppetraki> hallyn, I wish they would have pointed out where in the udevadm monitor log the "legitimate event" was observed
<hallyn> ppetraki: yeah, i'm going to ask for a whole new set of data.  starting with running his stuff with -v4 and posting daemon.log
<hallyn> ppetraki: i noticed it was another netapp so figured you might have ideas :)
<jbernard> hallyn: that sounds very odd, ive not seen that myself. do you remember their nick?
<ppetraki> hallyn, can't wait to get one or two :)
<hallyn> jbernard: it was dhubbard_ if i recall right
<jbernard> hallyn: cool, thanks
<jbernard> hallyn: i have to run to work, ill get .1 uploaded tonight
<jbernard> hallyn: and then we can go from there
<hallyn> great, thanks. ttyl.
<jbernard> hallyn: cheers
<ppetraki> hallyn, those additional logs should be enough, udev monitor alone can never tell you why the kernel emitted this event
<ppetraki> hallyn, if I had to guess, I would say a LIP occurred, or something on the FC bus that forced discovery
<hallyn> all right, let me ask for those logs now before i get back into what i was trying to do
<ppetraki> cool, thanks
<RoAkSoAx> morning all
<Daviey> RoAkSoAx, o/
<RoAkSoAx> Daviey: o/
<Daviey> RoAkSoAx, Having fun?
<RoAkSoAx> Daviey: well my day is just starting but I guess I will :)
<Daviey> RoAkSoAx, python-psutil needs a MIR :)
<Daviey> hggdh noticed it.
 * RoAkSoAx checks
<RoAkSoAx> Daviey: alright I'll add it to my todo for today
<Daviey> RoAkSoAx, awesome
<hallyn> zul: argh
<hallyn> zul: lxcguest bug - containers won't shut down all the way
<zul> hallyn: do you have a fix for it?
<CrazyGir> to anyone using either puppet or chef: why choose one over the other?
<RoyK> hi all
<hallyn> zul: yeah
<zul> hallyn: bzr branch me baby
<hallyn> zul: will do.  creating a bug first.
<RoyK> I just installed this box, and at the end of the installation, it asks if it should update security updates automatically - I was a bit quick and chose no automatic updates - how can I change that after installation?
<Daviey> RoyK, https://help.ubuntu.com/10.04/serverguide/C/automatic-updates.html
<RoAkSoAx> Daviey: MIR's need FFe?
<RoyK> Daviey: thanks
<Daviey> RoAkSoAx, I believe so.
<RoAkSoAx> Daviey: does this look good enough to you then? bug #754661
<uvirtbot> Launchpad bug 754661 in python-psutil "[FFe] [MIR] python-psutil" [Undecided,New] https://launchpad.net/bugs/754661
<Daviey> RoAkSoAx, Yeah... the thing i am missing is how it got introduced as a new depends.
<Daviey> Ahh.. i see
<Daviey> scrub that
<hallyn> zul: bug 754655, bzr tree attached.
<uvirtbot> Launchpad bug 754655 in lxc "lxc guests on natty are not shutting down" [High,In progress] https://launchpad.net/bugs/754655
<hallyn> s/attached/linked/
<zul> hallyn: cool...gimme a sec
<RoAkSoAx> Daviey: hehe yeah it was a bug filed against it by hggdh :) I guess we all assumed it was in main as the installation didn't really fail
<RoAkSoAx> Daviey: I now see it in component-mismatches though
<hggdh> RoAkSoAx: I was also assuming it -- I needed ScottK to delicately cattle-prod me to the MIR
<zul> freaking firefox
<RoAkSoAx> hggdh: hehe yeah fortunately it is not really a difficult one as it is a small package
<hggdh> RoAkSoAx: yes, I was starting to look at it, seems clean
<RoAkSoAx> indeed
<RoAkSoAx> Daviey: is the problem with upstart and the IFACE thing still present?
<zul> hallyn: uploaded
<Daviey> RoAkSoAx, yes
<Daviey> RoAkSoAx, SpamapS proposed a solution, which i've not yet tested.
<RoAkSoAx> Daviey: ok, so apparently bug #726769 was closed by mistake :)
<uvirtbot> Launchpad bug 726769 in eucalyptus "package eucalyptus-common 2.0.1 bzr1255-0ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [High,Fix released] https://launchpad.net/bugs/726769
<Daviey> RoAkSoAx, yup :(
<RoAkSoAx> Daviey: I think hggdh did the testing with not very successful results
<hggdh> indeed. From not upgrading, to not starting
<Daviey> hggdh, hmm.. you didn't test SpamapS proposed solution, did you?
<hggdh> Daviey: SpamapS told me to add 'env IFACE=manual-start' in the eucalyptus-network.conf. I added it, and *then* upgraded
<Daviey> hggdh, Ahh... i thought it would need adding after dpkg craps out
<hggdh> Daviey: end result was the upstart processes went into what looked like an infinite wait
<hggdh> well, not really infinite, just a long sleep
<Daviey> then sudo apt-get upgrade again
<hggdh> then this is not a solution
<Daviey> hggdh, no, it is - that would mimic it being in a new package.
<hggdh> ah well. I did ask -- a few times, mind you -- about what was to be done, and how. I worked on what I got back
<Daviey> hggdh, Ah, ok..  Once we've got the present issue resolved.. we'll revisit upgrades i think.
<hggdh> Daviey: indeed. We go back to beta1, and try again
<hggdh> and even better, we go from maverick
<hallyn> zul: thanks!
<zul> np
<uvirtbot> New bug: #754699 in eucalyptus (main) "Eucalyptus node (nc) fails to install, when done from PackageInstall" [Undecided,New] https://launchpad.net/bugs/754699
<TeTeT> jamespage: back to get more info on jenkins/hudson and the automated testing :)
<jamespage> TeTeT: fire away:-)
<TeTeT> jamespage: so I installed a Natty vm and saw that apt-cache search hudson | jenkins does not show anything - it has not landed in Natty, what were the problems?
<jamespage> TeTeT: so the challenge has been around the size of this piece of work.
<TeTeT> jamespage: it's massive I take from the look at the PPA ppa:hudson-ubuntu/testing
<jamespage> TeTeT: :-)
<TeTeT> jamespage: so simply too much work for a release cycle to get this into the archive? Or was the change from hudson to jenkins also an issue?
<jamespage> TeTeT: yes it is quite big; however not far off original estimates; the intent for natty was to deliver alongside main release in PPA
<jamespage> TeTeT: so that we could be in a good place for Oneiric.
<TeTeT> jamespage: as it is Java, did you have to package all the specific versions of the build dependencies (I guess through maven)? Or was there a shortcut somehow?
<TeTeT> jamespage: yeah, wearing my corp services hat I count on 12.04 for delivering this tech to customers :)
<jamespage> TeTeT: no shortcuts; some of the versions of dependencies are slightly different, some are major versions different.
<jamespage> TeTeT: however it is hanging together OK; we have been using the package in the testing PPA for all of the ISO and ec2 testing for Natty and its been solid
<TeTeT> jamespage: ok, let's stay on jenkins for a bit, will query on automated testing in a bit
<jamespage> TeTeT: the newer versions of maven-debian-helper (not yet in Ubuntu) are much better and automate most of the package production;
<jamespage> TeTeT: TBH the rename from Hudson to Jenkins did not create too much work
<TeTeT> jamespage: I now have a jenkins webpage on port 8080 on my vm open. If I want to test something with jenkins, what are my next steps? URL is sufficient - is this one a good start or are there better tutorials? https://wiki.jenkins-ci.org/display/JENKINS/Use+Jenkins
<jamespage> TeTeT: thats a good place to start; the thing to understand about Jenkins is that although there is a lot of functionality in the core product (enough for most basic requirements)
<jamespage> TeTeT: the really neat stuff is all managed through plugins
<TeTeT> jamespage: ok, not sure if I understand right now, but most likely will when I had a closer look at it
<TeTeT> jamespage: how much effort is it to test something? E.g. I have a couple of python scripts for testing UEC that themselves could use testing when I develop them. As an alternative I have an open source board gaming tool that I would want to test - however, can jenkins assist in testing graphical apps?
<jamespage> TeTeT: so for your python scripts do you have unit tests?
<TeTeT> jamespage: I fear not
<TeTeT> jamespage: would this be a first step to implement them?
<jamespage> TeTeT: absolutely
<jamespage> TeTeT: All of the work I've done using Jenkins has been centred on integration
<TeTeT> jamespage: ok - do I see this right that jenkins will help me to a) build some software, b) run built-in tests, c) run integration tests
<jamespage> TeTeT: whatever Jenkins is doing under the hood should be executable by the developers who are developing the code.
<jamespage> TeTeT: It will help you automate those steps and report on results - yes
<jamespage> TeTeT: lemme just dig out a tutorial for working with python - that might help as well.
<jamespage> TeTeT: http://www.rhonabwy.com/wp/2009/11/04/setting-up-a-python-ci-server-with-hudson/
<TeTeT> jamespage: thanks, guess that will give me more than enough to get started on jenkins. Onto automated testing for Natty
<jamespage> TeTeT: OK
<TeTeT> jamespage: so the PPA is used for driving the automated iso testing for Natty?
<jamespage> TeTeT: URL first: http://jenkins.qa.ubuntu-uk.org/
<TeTeT> jamespage: he he, thanks, need a folder for jenkins and automated testing now ;)
<jamespage> TeTeT: so I've helped put together 2 other projects which actually do the work under the hood
<TeTeT> jamespage: so this is the test results page for Ubuntu 11.04 testing?
<jamespage> TeTeT: filtered for 11.04 ISO tests - http://jenkins.qa.ubuntu-uk.org/view/ISO-server-Natty/
<jamespage> TeTeT: but broadly yes it is.
<jamespage> TeTeT: Jenkins provides us with all of the glue to make this stuff happen automatically
<jamespage> TeTeT: Monitoring for new images, triggering jobs when they appear and reporting on results.
<jamespage> TeTeT: the actual work is done by two other projects: ubuntu-server-iso-testing and ubuntu-server-ec2-testing
<jamespage> TeTeT: https://launchpad.net/ubuntu-server-ec2-testing and https://launchpad.net/ubuntu-server-iso-testing
<TeTeT> jamespage: these projects are about defining and implementing the tests I take?
<jamespage> TeTeT: that's right - the -iso-testing project uses libvirt and KVM to do automated installs of Ubuntu, and then run tests within those installs once completed.
<jamespage> TeTeT: It's all written in python; we used normal unittests to actually implement the tests we wanted to run
<jamespage> TeTeT: they are executed using subunit and the subunit2junitxml filter
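The pipeline jamespage describes looks roughly like this: stream the unittest results through subunit and convert them to JUnit XML for Jenkins. The module name `lvm_tests` is a placeholder, and this assumes the python-subunit package is installed.

```shell
# Run the tests as a subunit stream, then filter it into JUnit XML,
# which is the format Jenkins knows how to chart (requires python3-subunit).
python3 -m subunit.run lvm_tests | subunit2junitxml > results.xml
```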
<TeTeT> jamespage: so when I look at http://jenkins.qa.ubuntu-uk.org/job/lucid-server-amd64_lvm/1/testReport/test/LvmTest/ I see that 3 (unit?) tests passed
<zul> RoAkSoAx: can you push your koan/ubuntu patch upstream
<jamespage> TeTeT: Jenkins is good at understanding Junit XML output - thats what you see in that link
<TeTeT> jamespage: where would I find the actual test definition for testLvs, testRootMount, testVgs?
<RoAkSoAx> zul: I already did
<zul> k
<RoAkSoAx> zul: fw 5 patches yesterday to the cobbler ML
<jamespage> TeTeT: http://jenkins.qa.ubuntu-uk.org/job/lucid-server-amd64_lvm/ - the unittest file is called 'lvm'
<jamespage> It gets archived there for future reference - it's actually part of the underlying ubuntu-server-iso-testing package
<TeTeT> jamespage: awesome
<TeTeT> jamespage: do you collect any statistics on how many tests fail during a release cycle and how many pass? Just for getting an overview
<jamespage> TeTeT: not really - its more used as a healthcheck throughout the dev cycle and for reporting on candidate image testing prior to milestones
<TeTeT> jamespage: and how do I find out what's going wrong when it fails: http://jenkins.qa.ubuntu-uk.org/job/natty_server_ec2/ looks like all is broken?
<jamespage> TeTeT: however you can see this
<jamespage> TeTeT: http://jenkins.qa.ubuntu-uk.org/job/natty-server-i386_samba-server/
<jamespage> this one runs daily
<TeTeT> jamespage: so 5 tests are run; most times one fails, a few times two?
<jamespage> TeTeT: yes - there is a known issue with the unittest which causes the one failure - the others are actual failures
<jamespage> TeTeT: http://jenkins.qa.ubuntu-uk.org/job/natty-server-i386_samba-server/55/testReport/ (click on an error in the graph to see this view)
<jamespage> TeTeT: nmbd failed to start for some reason
<jamespage> TeTeT: So this project
<jamespage> http://jenkins.qa.ubuntu-uk.org/job/natty_server_ec2/
<TeTeT> jamespage: great, thanks for all the infos!
<jamespage> Is using a different type of job 'Matrix'
<jamespage> used for testing different combinations of setup;
<TeTeT> jamespage: is it testing the deployment rather than the execution?
<jamespage> TeTeT: not sure I understand the question
 * popey takes "ubuntu-uk" off hilight :)
<TeTeT> jamespage: nevermind
<jamespage> sorry popey :-)
<popey> not your fault :)
<jamespage> I'll stop pasting links in a minute :-)
 * popey glares at Daviey :)
<Daviey> popey needs a better IRC client that lets him exclude regexes :)
<TeTeT> jamespage: I'll need to take a look at all this now, will most likely come back with some questions next week
<jamespage> TeTeT: no problem
<TeTeT> jamespage: thanks a ton so far, much easier to get an insight than reading the blogs/wikis/documentation - overall it's huge
<jamespage> TeTeT: So the Ubuntu use is quite simple at the moment; other projects are doing more in-depth stuff - take a look here http://jenkins.openstack.org/
<TeTeT> jamespage: nice, there are builds happening
<jamespage> TeTeT: that's quiet - look at Drizzle's - http://jenkins.drizzle.org/
<craigbass1976> I'm following along here: http://www.howtoforge.com/perfect-server-ubuntu-10.04-lucid-lynx-ispconfig-2 and can't start chrooted bind.  This is in messages... : operation="open" pid=16789 parent=16787 profile="/usr/sbin/named" requested_mask="r::" denied_mask="r::" fsuid=103 ouid=103 name="/var/lib/named/etc/bind/named.conf"
<Daviey> hggdh, seeing S3 failure?
<TeTeT> jamespage: guess we're really lightweight users in comparison
<Daviey> hggdh, I tested the IFACE upstart change, and it seems to resolve the issue...
<hggdh> Daviey: OK
<hggdh> Daviey: back to kernel sru testing
<Daviey> hggdh, Did you see S3 failing tho?
<hggdh> Daviey: I saw failures on walrus -- what causes the instances to fail
<Daviey> hggdh, This is *NEW*... wtf.
<RoAkSoAx> zul: we do not have xen kernels right?
<zul> RoAkSoAx: depends on what type of xen kernels you are talking about
<RoAkSoAx> zul: in archive.ubuntu.com/ubuntu/dists/maverick/main/installer-amd64 for example
<RoAkSoAx> zul: if I were to try to netboot into xen using virt-install
<semiosis> isn't the linux-ec2 package a xen kernel?
<hggdh> RoAkSoAx: there are the kernels in http://uec-images.ubuntu.com -- they are DomUs
<RoAkSoAx> hggdh: right, but talking about this :) https://www.redhat.com/archives/virt-tools-list/2011-April/msg00036.html
<hggdh> RoAkSoAx: as semiosis points out, the linux-ec2 are DomU kernels
<zul> RoAkSoAx: what are you trying to do then i can provide you with a better answer
<RoAkSoAx> zul: basically, the doubt came when I was looking into forwarding a patch to virtinst, and I read this: https://www.redhat.com/archives/virt-tools-list/2011-April/msg00036.html
<zul> RoAkSoAx: the virtual kernels i think are domU kernels we dont support dom0 (yet)
<zul> smoser: do we run the virtual kernels or the server kernels on ec2?
<smoser> virtual
<smoser> also, there is a difference between "xen kernel" and dom0
<smoser> dom0 kernel is a linux kernel that runs under the xen kernel as the privileged guest named 'dom0'
<smoser> for that you need dom0 support in your kernel (which is what zul is asking for for 'o')
<RoyK> smoser: using xen on an ubuntu host?
<MetaJake> where should I save variables that I need to run each time I login to my server?
<zul> smoser: but virtual/server is/was the same kernel so it shouldnt matter
<smoser> to run xen, you need xen (a package that installs a hypervisor on your computer that then runs ubuntu as dom0)
<RoyK> MetaJake: .bashrc
<smoser> virtual/server is *close* to being the same kernel.
<RoAkSoAx> smoser: yeah that I understand, and I didn't mean dom0 :) I guess I should have phrased my question correctly
<MetaJake> thanks royk thats what I thought. I'll try this again. something didn't work last time I tried that.
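RoyK's suggestion for MetaJake boils down to exporting the variables from `~/.bashrc` (on Ubuntu the default `~/.profile` pulls that file in for login shells). The variable name and value here are made-up examples.

```shell
# Append an export so every new shell gets the variable.
echo 'export MYAPP_HOME="$HOME/myapp"' >> ~/.bashrc

# Reload for the current shell; new logins pick it up automatically.
. ~/.bashrc
echo "$MYAPP_HOME"
```

A common reason "something didn't work last time" is setting the variable without `export`, which keeps it out of child processes.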
<zul> smoser: right server just has a few more modules ;)
<RoAkSoAx> smoser: but yeah In my question was xen kernel as a guest (not as dom0)
<RoyK> last I checked, Xen wasn't very well supported on ubuntu - use kvm
<smoser> RoAkSoAx, you *need* dom0 support in your kernel, or "ubuntu and xen" dont really work
<smoser> (which is the case right now)
<RoyK> kvm works very well, though
<smoser> zul, its not exactly that any more
<RoyK> far less hassle
<smoser> -virtual is no longer a "sub flavour"
<smoser> it is a full "flavour"
<zul> smoser: ah ok
<zul> smoser: since when?
<semiosis> what's the difference between linux-virtual & linux-ec2 kernels?  even ec2 instances use linux-virtual
<RoAkSoAx> smoser: right right. But what I mean was "If I have a xen running (let's say Debian), do Ubuntu have kernels to run as guests?"
<smoser> natty and maverick.
<semiosis> some doc I can read up on?
<smoser> -ec2 is gone
<semiosis> ohhh ok :)
<smoser> it was a heavily patched ubuntu kernel
<smoser> we dropped that in maverick
<smoser> and moved to using the '-virtual' kernel, which has 'pv_ops' support
<smoser> which is what enables the generic -virtual kernel to run as a xen domU
<smoser> this has proven somewhat buggy
 * RoyK wonders why people use xen when kvm is so much easier to get around
<smoser> RoAkSoAx, i tried this once, and had some issues. but short answer is "yes, they should"
<smoser> and even "yes, they do"
<RoAkSoAx> smoser: ok thanks :)
<smoser> ubuntu kernels (-virtual) run on EC2, which is CentOS-ish xen install.
<semiosis> i've got a centos 5 xen server that i host several ubuntu VMs on (lucid & maverick) it works very well, but i've only been able to boot linux-ec2 kernels, never could get linux-virtual working
<semiosis> going to upgrade it to UEC sooner or later ;)
<semiosis> thx for explaining that smoser
<smoser> semiosis, well the -virtual kernels *do* work on xen
<smoser> i'm not going to pretend that amazon's xen is "pure xen"
<smoser> i've had some issues too, and have not even been able to do an install from CD into an hvm xen instance the one time i tried (centos 5.5)
<smoser> which, in theory, should have "just worked"
<semiosis> yeah hvm didnt work too well for me either
<zul> RoAkSoAx: the linux-ec2 kernels caused much grief and sorrow..they should die a fiery death
<RoAkSoAx> zul: ok :) thanks for the info
 * semiosis takes note of that
<shaggy2> is there a Sip/VOIP pbx server that ubuntu supports or will run on ubuntu server?
<RoyK> shaggy2: asterisk should work
<zul> SpamapS:  around?
<RoyK> shaggy2: just don't ask me about issues there - the code isn't very good, or at least, it wasn't, last time I was using it http://karlsbakk.net/fun/asterisk_architecture.jpg http://karlsbakk.net/fun/asterisk-installation.wav
<SpamapS> zul: yes , OTP, but will be off in 10 minutes
<NoqturnalX> has anyone ever had a problem with cups?
<NoqturnalX> I installed it and something happened to it, so I went to remove it and reinstall it and now it hangs when I apt-get remove cups or even apt-get purge cups
<NoqturnalX> and when I do apt-get install cups it hangs on setting up cups
<RoyK> NoqturnalX: can you try to strace that?
<NoqturnalX> what do I strace?
<NoqturnalX> i've only used strace once and it was a couple years ago so i'm not very familiar with it
<pmatulis_> how do i configure PAM to not prompt for a password during the login of a specific user?
<RoyK> pmatulis_: eeeeerm - why do you want some users to login without a password?
<pmatulis_> RoyK: i'd rather not say
<RoyK> if it's for ssh logins, just use ssh keys
<pmatulis_> RoyK: it's not SSH
<RoyK> if it's for local logins, don't
<pmatulis_> RoyK: it's to automate jobs basically
<NoqturnalX> i just killed everything that had cups in it lol
<NoqturnalX> so I want to reinstall cups, is there a way I can make sure it's completely gone before I try it?
<pmatulis_> NoqturnalX: purge i guess
<NoqturnalX> I had to interupt purge
<NoqturnalX> it locked up
<NoqturnalX> so I had to kill the tasks
<NoqturnalX> so I tried apt-get purge cups, then I had to kill the process stop cups and a few others, rm -rf /etc/cups, /usr/lib/cups, /usr/share/cups & did dpkg --configure -a
<NoqturnalX> what should be my next step?
<NoqturnalX> should I try apt-get install cups?
<NoqturnalX> or is there something I should try before that?
<NoqturnalX> apt-get install cups, made it to "Setting up cups (1.4.4-6ubuntu2.3) and is just sitting there now
<NoqturnalX> :(
<RoyK> did you try to strace it?
<RoyK> strace -f apt-get install ....
<NoqturnalX> i'll try that right now
<NoqturnalX> brb switching computers
<airtonix> protip : don't use apple ipod hard-disks as server disks
<NoqturnalX> strace -f apt-get install cups
<NoqturnalX> right?
<RoyK> yes
<RoyK> strace output is sent to stderr
<RoyK> so 2>somefile
<NoqturnalX> ok, I did "strace -f apt-get install cups | tee -a strace"
<RoyK> will send the data there
<RoyK> 2>&1 | tee
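RoyK's point about redirection can be shown with a stand-in command: strace writes its trace to stderr, so `2>&1` must come before the pipe for `tee` to capture it. On the stuck machine the real invocation would be `strace -f apt-get install cups 2>&1 | tee -a strace.log`.

```shell
# A stand-in for strace that writes to stderr; without 2>&1 the pipe
# would only see stdout and tee would capture nothing.
sh -c 'echo "sample trace line" >&2' 2>&1 | tee strace.log
```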
<NoqturnalX> well it's stuck on "read (0, "
<RoyK> erm... that's reading from stdin
<NoqturnalX> *gulp*
<NoqturnalX> ok so instead I should do "strace -f apt-get install cups 2>&1|tee -a strace" ???
<NoqturnalX> dpkg -l cups shows "Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend"
<gemclip> is there a url of all the man pages as html pages?
<NoqturnalX> Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
<NoqturnalX> still get the same thing with 2>&1|tee
<gemclip> duh a little google goes a long way
<NoqturnalX> Rawr Can I kill it
<FoolsRun> Hi, does anyone have any good personal experience using Ubuntu as a backup server for Windows clients?
<genii-around> FoolsRun: I had 4-5 Windows boxes here backing themselves up to an Ubuntu box using a Windows version of rsync. Worked pretty good
<NoqturnalX> there's supposed to be a /var/run/cups/cups.sock but it doesn't exist
<NoqturnalX> could this be part of the problem?
<semiosis> gemclip: manpages.ubuntu.com
<NoqturnalX> how do you create a .sock?
<semiosis> NoqturnalX: i think the daemon (cups in this case) would usually do that for you
<NoqturnalX> I think I may have accidentally removed it
<NoqturnalX> and I can't get the daemon to even run :(
<semiosis> NoqturnalX: just scrolled back to catch up... did you even get the package installed ok?
<NoqturnalX> Nope, It always hangs up starting or stopping cups
<NoqturnalX> so when I install cups it gets to setting up cups and just freezes up on start cups process
<NoqturnalX> when I try to remove it same thing but with stop cups
<semiosis> NoqturnalX: anything in /var/log/messages, daemon.log, or syslog about cups?
<semiosis> NoqturnalX: or /var/log/cups
<NoqturnalX> /var/log/cups is completely empty folder there are no logs at all
<semiosis> NoqturnalX: ok what about the system logs?  any cups messages?
<NoqturnalX> daemon.log has init: cups pre-start process (958) terminated with status 1 and init: cups post-start process (1684 killed by TERM signal
<airtonix> if `sudo lshw -C network` reveals both my NICS are disabled, and thus they don't appear in the list output of ifconfig, what do I do to enable them ?
<NoqturnalX> /var/log/messages has Apr  8 12:28:18 FLHS-SERVER kernel: [40271.443951] type=1400 audit(1302290898.847:53): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1736 comm="apparmor_parser"
<NoqturnalX>  a bunch of times
<semiosis> airtonix: network interface manual: http://manpages.ubuntu.com/manpages/maverick/en/man5/interfaces.5.html (also 'man interfaces' in your shell)
<NoqturnalX> name= either "/usr/sbin/cupsd" or "/usr/lib/cups/backend/cups-pdf"
<NoqturnalX> theres 25 of each of those messages in /var/log/messages
<gemclip> could someone give me a hand setting up my resolv.conf I am not understanding what I am doing wrong
<gemclip> I set my server to a static ip
<airtonix> semiosis: already tried that, entering desired ifname in /etc/network/interfaces has no effect
<gemclip> I am running under vmware
<semiosis> gemclip: have you looked over the resolv.conf man page?  it's pretty straightforward, what are you trying to do?
<gemclip> i see in my gateway my dns ip numbers but im not sure about the search line
<semiosis> NoqturnalX: sorry i'm not too familiar with cups wish i coudl be more help.  wondering if you're able to install other packages, or if this is really just a cups issue.  if the cupsd binary is installed, can you launch it by hand?  on my system its running as "/usr/sbin/cupsd -C /etc/cups/cupsd.conf"  hope that helps you out at least a little
<gemclip> do i just enter my gateways ip address?
<semiosis> gemclip: what are you trying to do?  what do you expect to happen that's not happening?
<gemclip> resolving outside addresses
<gemclip> and seeing named servers
<gemclip> following along on the page said once I change my server ip to static I needed to change the resolv.conf file
<semiosis> gemclip: can you ping other hosts by IP, specifically, can you ping the DNS server IP address you'd like to use as your resolver?
<gemclip> sec
<semiosis> airtonix: have you tried 'ifup eth0' or whatever the interface name is you want to use?
<airtonix> semiosis: yes, no effect
<gemclip> said "network is unreachable"
<airtonix> semiosis: I'd provide a pastebin but the computer is obviously not network connected
<gemclip> everything in ifconfig looks right. I cleared the resolv.conf file
<semiosis> gemclip: generally speaking, it's not enough for everything to 'look right' in ifconfig, there also has to be an appropriately configured network on the other end of the wire attached to the interface.  sounds like you've got connectivity issues.
<semiosis> gemclip: can you even ping your default gateway address?
<gemclip> before switching over to static ip's I was able to get a dhcp address ok
<gemclip> checking
<gemclip> yep
<semiosis> gemclip: you may be missing a default route... does the command 'route -ne | grep ^0.0.0.0' return a line beginning with 0.0.0.0   <default gateway address> ?
<gemclip> seems if I try to ping anything outside my network i get "network unreachable" internal; pinging works
<semiosis> airtonix: what kinda network interface is this you want to get running?  ethernet, wireless, something exotic?
<airtonix> semiosis: it's two gigabit ethernet onboard NICS that worked before with lucid 10.04.2 on a different hard-drive. but switching to another lucid 10.04.2 install on a different hard drive now reveals they are disabled.
<gemclip> no but i did a route -ne i get Destination 108.78.41.40 gateway 0.0.0.0 genmask 255.255.255.248 flags U MSS 0 Window ) irtt 0 iface eth0
<airtonix> semiosis: /etc/NetworkManager/nm-system-settings.conf contains (among other things) : [ifupdown] managed=false
 * NoqturnalX wonders what just happened
<semiosis> airtonix: are you using ubuntu desktop or server?
<airtonix> semiosis: desktop functioning as a server (bind, sshd, apache, gitolite, ldap, dovecot, nfs-kernel-server)
<semiosis> gemclip: you need a default route.  i suspect you're missing (at least) a 'gateway' line in /etc/network/interfaces
<gemclip> ok looking
<semiosis> airtonix: i dont know about networkmanager, someone else here might be able to help with that, or possible another #ubuntu channel geared toward desktop
<airtonix> I'm not convinced it an issue with NetworkManager
<airtonix> it's*
<semiosis> airtonix: cool, neither am i :)
<gemclip> its there here is my interfaces:
<gemclip> iface eth0 inet static
<gemclip> address 108.78.41.45
<gemclip> netmask 255.255.255.248
<gemclip> network 108.78.41.0
<gemclip> broadcast 108.78.41.7
<gemclip> gateway 108.78.41.6
<gemclip> my resolv.conf is empty
<semiosis> gemclip: first things first, you need a default route to reach any system outside your local subnet, then adding the DNS server to resolv.conf is easy
<trimeta> I recently upgrade my DSL, but now I only have a "sticky" IP address, not a static IP address. What's the best way to automatically update my domain's A record when my IP changes?
<semiosis> gemclip: you say the 'route -en | grep ^0.0.0.0' returned nothing, that means you dont have a default route, but interfaces file is configured with one... maybe 'service network-interface restart INTERFACE=eth0' will reload the config from file
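The check semiosis suggests reads like this: a default route shows up as destination 0.0.0.0 in `route -ne` output. Shown here against canned output (gateway address taken from gemclip's config) so the filter itself is visible; on the server you would run plain `route -ne | grep '^0.0.0.0'`.

```shell
# Canned routing table: first line is a default route, second a local subnet.
# The grep keeps only the default-route line; no output means no default route.
printf '%s\n' \
  '0.0.0.0         108.78.41.6     0.0.0.0         UG    0 0 0 eth0' \
  '108.78.41.40    0.0.0.0         255.255.255.248 U     0 0 0 eth0' |
grep '^0\.0\.0\.0'
```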
<gemclip> semiosis: should I look at the man pages for 'route' sorry im really new at this
<semiosis> gemclip: i'd rather get it loaded from the config file, which is how it will work at boot... if you use route cmd directly (which is an option) it will not persist across reboot
<semiosis> gemclip: try that network-interface restart command, replace eth0 with the name of the interface if it's not actually eth0 of course
<gemclip> yep tried no luck
<gemclip> let me bring it down and back up
<gemclip> blah gateway still missing
<semiosis> gemclip: oh i just noticed something about your interfaces file... your IP address is not in the subnet, but your default gateway is.  thats not going to work
<gemclip> oops
<semiosis> gemclip: your subnet is ...41.1 - ...41.6, and .6 is the gateway, so you can only use IPs 1-5
<gemclip> grr nice catch
<semiosis> gemclip: ...assuming your network & netmask & broadcast are correct that is
<gemclip> yeah I set it wrong
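The arithmetic behind semiosis's catch can be sketched in shell: mask the last octet of each address with the last octet of the netmask and compare. With netmask 255.255.255.248 (/29), address .45 lands in the .40-.47 block while gateway .6 is in the .0-.7 block, so as configured they are not on the same subnet.

```shell
# /29 subnet math on the last octet: network = addr AND mask,
# broadcast = network OR (inverted mask), hosts are everything between.
mask=248
for last in 45 6; do
  net=$(( last & mask ))
  bcast=$(( net | (255 ^ mask) ))
  echo ".$last -> network .$net, hosts .$((net + 1))-.$((bcast - 1)), broadcast .$bcast"
done
```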
<semiosis> gemclip: well fix the file, do the network-interface restart cmd, and you'll probably be able to ping your DNS server's IP address... verify that, then setting up the resolv.conf is super easy
<semiosis> gemclip: all you really need is one line, 'nameserver <DNS Server IP>'
<semiosis> gemclip: the most popular optional parameter after that is 'domain', then sometimes also 'search' check out the resolv.conf man page for info about those.
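The resolv.conf semiosis describes amounts to a few lines; the nameserver address and domain below are placeholders for your own. Written to a sample path here so it can be checked before touching the real `/etc/resolv.conf`.

```shell
# Minimal resolver config: one required nameserver line,
# plus the optional domain/search parameters semiosis mentions.
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 8.8.8.8
domain example.com
search example.com
EOF
cat /tmp/resolv.conf.sample
```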
<genii-around> trimeta: Probably to have a script which replaces the date/serial number in format yyyyMMddss in your zonefile with the current moment, does same for the A record, then restarts rndc
<genii-around> on restart it will push the new record out since the timestamp is newer than the cached versions out there
<semiosis> trimeta: check out 'dynamic dns' on google or wikipedia
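genii-around's approach can be sketched with sed: bump the zone serial (yyyymmddNN) and rewrite the A record, then tell bind to reload the zone. The zone file name and the `www` record here are invented for illustration.

```shell
# New address and a date-based serial (two trailing digits for same-day edits).
new_ip=203.0.113.7
serial="$(date +%Y%m%d)01"

# Replace the 10-digit serial (marked by a "; serial" comment)
# and the www A record in place.
sed -i \
  -e "s/[0-9]\{10\}\(\s*;\s*serial\)/${serial}\1/" \
  -e "s/^\(www\s\+IN\s\+A\s\+\).*/\1${new_ip}/" \
  db.example.com

# rndc reload example.com    # push the updated zone to bind
```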
<semiosis> airtonix: idk what to say, maybe check out the /etc/network/interfaces on the old HDD and compare, if it worked on the old system, then that system has all the answers, hopefully you can still read the disk
<semiosis> good luck, and have a nice weekend everyone :)
<cosmin_s> hello
<cosmin_s> I have a question about license of ubuntu server
<cosmin_s>  I want to use ubuntu server on a server at my company - should I buy it or is it completely free?
<gemclip> semiosis: got the interface fixed and I can ping the dns server
<gemclip> woohoo i can ping yahoo! the little things in life that matter lol
<kirkland> RoAkSoAx: hey
<kirkland> RoAkSoAx: you still need me to review/sponsor anything for you this week?
<RoAkSoAx> kirkland: now that you are here, sure :)
<RoAkSoAx> kirkland: bug #751979
<uvirtbot> Launchpad bug 751979 in virtinst "virt-install fails to install Ubuntu ISO when it is located in an HTTP location" [Medium,New] https://launchpad.net/bugs/751979
<RoAkSoAx> kirkland: i proposed another patch first, which was uploaded, but didn't really fix the problem as it enabled something and disabled something else. This new patch (which is actually an update to the same patch) handles both situations
<RoAkSoAx> kirkland: and my cobbler patches have already been applied upstream :)
<samira-t> who can help me to run an embedded web server?
 * RoAkSoAx -> EOW
<amstan> samira-t: what do you mean?
<samira-t> amstan: something like this http://www.gnu.org/software/libmicrohttpd/
<samira-t> i can't run it
#ubuntu-server 2011-04-09
<gholms|work> Daviey: You around?
<Datz> what makes the server version distinct. Is it only the kernel?
<Patrickdk> technically the only difference is the installer :)
<ScottK> And a different kernel on amd64.
<ScottK> Of course a very different default package selection.
<gholms> Any rampartc maintainers around?  :)
<Patrickdk> kernels and packages can all be changed after install though :)
<Datz> ScottK: ah, ok thanks
<Datz> and Patrickdk
<Datz> I was just thinking if I built from minimal, would it be supported here :P
<gholms> Ok, I'll just ask my question generically.  I'm working on a security issue and upstream doesn't appear have a way of securely reporting them.
<gholms> If I file it in LP then I can't be sure that upstream will get the patch before a new package is released, but if I file it in upstream's tracker it immediately becomes public.
<gholms> Is there a preferred way of handling this?
<ryoohki> is there a seperate cloud channel for ubuntu?
<ryoohki> for eucalyptus?
<gholms> You mean like #ubuntu-cloud?
<ryoohki> gholms: thanks!
<^robertj> help i need 3 harry-potter themed server names in a hurry! database, web, and reporting!
<^robertj> hagrid, buckbeak, luna, dumbledore, hermione, hedwig, fakes, and ron are already taken
<anth_> hi all
<patdk-lap> heh
<gemclip> harry, snape,moody
<phocho> hi
<phocho> al
<phocho> all
<phocho> I am a new one.
<samira-t> who can help me to run an embedded web server?
<cyril_> Hello, I would like advice about memory required to install ubuntu server 10.04 + apache tomcat with mediawiki / phpBB3 / subversion. I want a test server for developing web apps in java. Is 2Gb of RAM enough for such a test server?
<remix_tj> cyril_: i suggest you more ram
<remix_tj> if you develop apps for tomcat
<cyril_> remix_tj: thanks. 4GB then?
<remix_tj> yes
<remix_tj> because as far as i know tomcat is pretty heavy
<cyril_> yes, it is possible
<cyril_> but I also think it depends on the traffic.. For a test environment, there would be very little traffic...
<cyril_> Do you know about any online resources where I could compare RAM usage figures for different scenarios of Ubuntu Server + Apache tomcat usage? I have a hard time finding clear figures for now
<jfb_h20> for 10.10 should I use pm-utils or laptop-power-manager
<eichi> hello. I try to make some cronjobs for users in group users with crontab -e, but they did not work
<eichi> 1 4 * * *            /home/dev/backupDb.sh      this does not work. but this has worked: 0 16 * * *            /home/dev/backupDb.sh
<eichi> i dont understand why
<eichi> its just another hour...wtf
<padhu> eichi: try with double digits
<eichi> 04 ?
<padhu> 01 04 * * *
<eichi> padhu: thanks
<padhu> yw
<padhu> :-)
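For reference, a crontab entry is five time fields (minute, hour, day-of-month, month, day-of-week) plus a command, and both `1 4 * * *` and `01 04 * * *` are syntactically valid per crontab(5), so the leading zeroes are belt-and-braces rather than required. A quick field-count sanity check:

```shell
# A crontab line needs at least 6 whitespace-separated fields:
# five time fields plus the command.
line='01 04 * * * /home/dev/backupDb.sh'
echo "$line" | awk 'NF >= 6 { print "looks ok" }'
```

If the syntax checks out but the job still fails, the usual suspects are cron's minimal PATH and a script that isn't executable.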
<uvirtbot> New bug: #755613 in squid (main) "package squid 2.7.STABLE7-1ubuntu12.2 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/755613
<samira-t> who can help me to run an embedded web server?
<ScottK> !ask | samira-t
<ubottu> samira-t: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-)
<samira-t> ScottK: i asked my question :), but for more information something like this http://www.gnu.org/software/libmicrohttpd/
<ScottK> People are unlikely to volunteer to provide unspecified general assistance.  If you have specific questions, ask them.
<ScottK> samira-t: libmicrohttpd is packaged in Ubuntu.  https://launchpad.net/ubuntu/+source/libmicrohttpd
<samira-t> ScottK: okey, thanks, but i'm confused in how to use it
<ScottK> You install libmicrohttpd-dev and then it should be available like any library for build against.
<ScottK> Specifics of how to develop libmicrohttpd based applications are probably best asked upstream.
<samira-t> ScottK: thanks again, i'll try more
<uvirtbot> New bug: #755672 in php5 (main) "package php5-fpm 5.3.5-1ubuntu6 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/755672
<uvirtbot> New bug: #755673 in mysql-dfsg-5.1 (main) "package libmysqlclient-dev (not installed) failed to install/upgrade: trying to overwrite '/usr/include/mysql/decimal.h', which is also in package mysql-devel 0:5.5.11-2" [Undecided,New] https://launchpad.net/bugs/755673
<atari2600a> hey, I'm installing this server VM & I accidentally my workspace
<atari2600a> or the bash equivelant
<atari2600a> I don't know where the installer is or more importantly how to get there
<atari2600a> obviously CTRL-ALT acts on the host instead of the VM
<JanC> atari2600a: I think a word is missing there somewhere?
<JanC> in your first sentence
<atari2600a> I accidentally it D:
<atari2600a> it's just gone
<JanC> switched, closed, ... ?
<atari2600a> switched :P
<JanC> are you using byobu or screen?
<atari2600a> bash, I would assume
<atari2600a> it's the ubuntu server installer
<JanC> and Alt+Fx doesn't work?
<atari2600a> well look at that
<atari2600a> I guess I'm too used to CTRL-Alt to switch workspaces
<atari2600a> thanks
<JanC> Ctrl+Alt is needed in X
<eri82> hi there
<eri82> i'm havin a problem with a gre tunnel
<eri82> the tunnel is build over a ipsec connection
<eri82> can ping each gre interface from the 2 servers
<eri82> from server 1 can ping vpn interface
<eri82> but from server 2 cannt ping vpn interface of server one from the gre tunnel
<eri82> i see the traffic from tcpdump going and commin
<eri82> but not comming on the ping program
<eri82> ??
<uvirtbot> New bug: #755755 in postfix (main) "package postfix 2.8.2-1ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 75" [Undecided,New] https://launchpad.net/bugs/755755
<eri82> is there any way to find where the ping gets blocked
<Gadu> why is java limited to using 2658MB on my server that has 8GB of RAM and over 6.5GB free?
<Gadu> Ubuntu Server 10.04 32-bit
<Dr_Jekyll> i'd say you gave the answer yourself - you're on a 32bit system
<Dr_Jekyll> i'm no java expert though
<Datz> I think ubuntu servers 32bit kernel is pae anyway
<Dr_Jekyll> yes, but the java binary is 32bit, so it has certain boundaries on how much memory it can use
<JanC> still limited address space per application
<Datz> ah
<JanC> it uses 32-bit pointers remember
<Datz> ok :)
<Gadu> so using a 64-bit Ubuntu Server is recommended to get more memory for the application
<Gadu> or does each java application have a limit (like a memory limit per program)
<Dr_Jekyll> both is correct - you need 64bit to access large memory in the first place, and there can be a memory limit per program on the java side of things. that's configurable, however.
<Dr_Jekyll> does anyone here use likewise-open for windows domain integration? does it work well?
<Gadu> Is Ubuntu Server 64-bit any less secure than 32-bit? (paranoia from the 64-bit root exploit)
<Andre_Gondim> I've tried to install cacti with apt-get install cacti, I already have lamp-server rrdtools snmpd snmp, but I don't have success with localhost/cacti - where may I see the error? the log just says it can't find /var/www/cacti
<ScottK> More likely the other way around.  There are security features that are only enabled on amd64 and not i386.
<ScottK> Gadu: ^^^
<Gadu> so now that the exploit involving 32-bit calls on a 64-bit system has been discovered and fixed, 64-bit is more secure than 32-bit?
<ScottK> That's my opinion.  It may be wrong, of course.
<Gadu> I appreciate your opinion, thank you
<Gadu> I don't suppose there is a program that can detect what packages have been installed or uninstalled on a system since its creation?
<Gadu> would love to be able to quickly determine what changes I've made in that area for a reinstall should I switch to 64-bit
<ScottK> Gadu: You can use dpkg --get-selections to get a list of what you've installed (then dpkg --set-selections to install on the new system)
<ScottK> That will get your package list back to where it is.
<ScottK> It won't track configuration, data, and other changes.
<guntbert> !clone
<ubottu> To replicate your packages selection on another machine (or restore it if re-installing), you can type « aptitude  --display-format '%p' search '?installed!?automatic' > ~/my-packages », move the file "my-packages" to the other machine, and there type « sudo xargs aptitude --schedule-only install < my-packages ; sudo aptitude install » - See also !automate
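ScottK's dpkg recipe in script form: snapshot the package list on the old system and replay it on the new one. The filename is illustrative; as ScottK notes, this carries only the package selection, not configuration or data.

```shell
# On the old system: record everything currently marked installed.
dpkg --get-selections > my-packages.txt

# Copy my-packages.txt to the new machine, then (run there):
#   sudo dpkg --set-selections < my-packages.txt
#   sudo apt-get dselect-upgrade     # installs everything marked "install"
```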
<ry> Dr_Jekyll, i have used likewise-open on a few machines, it seems to work pretty good -- i'm not sure what other options there are to begin with? likewise was the best/easiest i found at the time
<ry> ubottu, that was on my vast list of things to figure out, thanks =)
<ubottu> Error: I am only a bot, please don't think I'm intelligent :)
<ry> x_x
<ry> guntbert, is ubottu's database "open" ? can it be viewed?
<guntbert> !brain | ry
<ubottu> ry: Hi! I'm #ubuntu-server's favorite infobot, you can search my brain yourself at http://ubottu.com/factoids.cgi | Usage info: http://ubottu.com/devel/wiki/Plugins | Bot channels and general info: https://wiki.ubuntu.com/IRC/Bots
<ry> awesome, thanks
<guntbert> ry: you're welcome :-)
<guntbert> ry: and I can really talk too - not give only bot commands :)
<ry> lol yes i figured as much
<koolhead17> hi all
<guntbert> hi koolhead17 --  Do you have an ubuntu support question?
<koolhead17> guntbert, no.
<guntbert> koolhead17: :)
<shauno> any suggestions for a sensible jabber daemon to use for a small (4 users) site?
<nimrod10> shauno, try ejabberd or openfire
<shauno> nimrod10: thanks
<nimrod10> shauno, np
#ubuntu-server 2011-04-10
<CrazyGir> shauno: I would advocate for ejabberd, you could be msg'ing in 30 minutes
<MetaJake> if something like Gunicorn says it is for Unix, does that mean it is not for Linux?
<MTecknology> you guys think it's possible to use a variable and pass it to su as a password?
<patdk-lap> shouldn't be
<patdk-lap> isn't that what sudoers was made for?
<MTecknology> su, not sudo
<patdk-lap> sudo su
<MTecknology> that would require something I can't do
<patdk-lap> I'm pretty sure su won't take input from stdin
<qman__> not sure why you'd use su
<qman__> sudo can do everything su can and more
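For what MTecknology is attempting: su indeed won't read the password from stdin, but sudo's -S flag does. A sketch (the variable name and target user are hypothetical; keeping a password in a shell variable is usually a bad idea, since it can leak via history or the process environment, and a NOPASSWD sudoers rule is the cleaner route):

```shell
PASS='secret'                                   # hypothetical password variable
printf '%s\n' "$PASS" | sudo -S -u someuser whoami
```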
<uvirtbot> New bug: #756406 in ntp (main) "there is no easy way to edit/include servers for NTP to sync from like it used to" [Undecided,New] https://launchpad.net/bugs/756406
<liekzomg> just out of interest, what is python used for in the minimal ubuntu-server install?
<DigitalFlux> Hi Guys
<DigitalFlux> I was wondering if someone can give me some hints/docs on how to port some Debian package to Ubuntu ?
<DigitalFlux> may be the o/p should go into a PPA or something ..
<eichi> http://codepad.org/2JXu0y0x someone can see the problem with this cronjobs? they do not work :(
<joschi> eichi: are both .sh files executable and accessible for the user "dev"?
<eichi> joschi: yes, works without problems with ./backupDb.sh as user
<joschi> eichi: are you using relative paths inside your scripts?
<eichi> funniest thing: 10 16 * * *            /home/dev/backupDb.sh works
<eichi> 00 04 * * *            /home/dev/backupDb.sh not
<joschi> eichi: try 0 and 4 instead of the padded values
<joschi> eichi: although this *should* work with any decent crond
<eichi> i did already
<koolhead17> DigitalFlux: https://wiki.ubuntu.com/PackagingGuide/Basic is this what you were looking for?
<joschi> eichi: post your scripts
<eichi> oh, damn, now i can see something ;D its mysqldump and then > folder/whatever
<eichi> means this is relative. never thought about that. wtf. where can this be now?
<joschi> eichi: use `find` ;)
<koolhead17> DigitalFlux: http://daniel.holba.ch/temp/guide/udd-intro.html  or this :)
<DigitalFlux> koolhead17: I guess, I just wonder if someone wrote/developed a way to quickly hack Debian packages to be done for Ubuntu
<DigitalFlux> koolhead17: That second link is regarding the PPA ?
<eichi> joschi: what about ~/backup/whatever_filename.sql in script? is this *like* absolute?
<DigitalFlux> koolhead17: aha, i see it now :)
<eichi> at the moment its just "backup/whatever_filename.sql"
<DigitalFlux> koolhead17: Checking, Thanks ;)
<joschi> eichi: this should work if you want to write the files into the home directory of the user for which the cron job is running
<joschi> eichi: you could also use $HOME for that purpose
<eichi> yes, thats what i want
<eichi> and which one is "better" ?
<joschi> I like $HOME better for this purpose
<eichi> ${HOME}/backup/prod_mysql_dump_${timestamp}.sql
<eichi> is okay?
<joschi> yes
<koolhead17> DigitalFlux: welcome :)
<eichi> joschi: thanks ;)
<eichi> joschi: last question today ;D: can I use 00 04 * * *            ${HOME}/backupDb.sh        in crontab too?
<eichi> then all users have the same crontab, would be nice
<joschi> eichi: no
<joschi> eichi: wait, you mean this line in all user's crontabs or in one crontab?
<eichi> in all users' crontabs
<joschi> eichi: first case should work, second doesn't
<joschi> eichi: try it ;)
<eichi> okay, thats nice
<eichi> okay
<eichi> thanks, bye and have a nice day then ;)
<joschi> eichi: btw you should also use the absolute paths for all the programs used in your script *or* set PATH inside the crontab to a reasonable value
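A runnable sketch of the relative-path pitfall joschi diagnosed: a script that writes to "backup/..." puts the file relative to the working directory it is run from (under cron that is $HOME), not relative to the script's own location. All paths below are temp-dir placeholders, not eichi's real layout.

```shell
# Set up two directories: the script lives in $workdir, but we run it
# from $workdir/elsewhere, as cron effectively does from $HOME.
workdir=$(mktemp -d)
mkdir -p "$workdir/backup" "$workdir/elsewhere/backup"

# Stand-in for backupDb.sh, using a relative output path:
cat > "$workdir/backupDb.sh" <<'EOF'
#!/bin/sh
echo "dump" > backup/out.sql
EOF
chmod +x "$workdir/backupDb.sh"

# Run it from a different working directory:
cd "$workdir/elsewhere"
"$workdir/backupDb.sh"

# The output landed relative to the caller's cwd, not next to the script.
test -f backup/out.sql && echo "file landed in the caller's cwd"
test ! -e "$workdir/backup/out.sql" && echo "not next to the script"
```

Writing to ${HOME}/backup/... (or any absolute path), and setting PATH at the top of the crontab as joschi suggests, removes both ambiguities.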
<arvandy> good evening all, is there anyone here who speaks Indonesian?
<jMCg> Hello happy people, I'm trying to get a kvm started on Ubuntu, however, I'm hitting some obstacles:
<jMCg> This is what happens when I start up the domain: http://dpaste.com/530507/
<jMCg> Now I dug a little deeper and found: http://dpaste.com/530518/
<jMCg> This doesn't make a terrible lot of sense since libvirt automatically creates apparmor profiles (/etc/apparmor.d/libvirt/) -- which explicitly allow these actions.
<jMCg> the created profile: http://pastebin.com/PMVuj9bx
<jMCg> The internet isn't being very helpful either: http://serverfault.com/questions/220238/kvm-guest-does-not-boot-qemudparsepcidevicestrs
<jMCg> Interestingly, /dev/pts/* isn't allowed, but nobody complains about that.
<rotten777> good afternoon folks... anyone conscious?
<rotten777> i'm looking to find a way to remotely (ssh) unlock the encrypted filesystems on a server after a reboot... has anyone done this?
<Error404NotFound> whats different between php-memcache and php-memcached ?
<rotten777> memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
<rotten777> Memcache module provides handy procedural and object oriented interface to memcached, highly effective caching daemon, which was especially designed to decrease database load in dynamic web applications.
<rotten777> memcache = module; memcached = daemon
<jMCg> no.
<jMCg> http://pecl.php.net/package/memcache vs http://pecl.php.net/package/memcached
<jMCg> Error404NotFound: ^
<rotten777> http://php.net/manual/en/book.memcache.php
<rotten777> http://php.net/manual/en/book.memcached.php
<Error404NotFound> thanks
<uvirtbot> New bug: #596298 in php5 (main) "impossible to install - dokuwiki and its dependencies cannot be configured (dup-of: 654165)" [Undecided,New] https://launchpad.net/bugs/596298
<uvirtbot> New bug: #654165 in php5 (main) "update-manager error when upgrading php5-idn" [Undecided,New] https://launchpad.net/bugs/654165
<jeeves_moss> how can I make a script in /etc/init.d/ so that my CS game servers will auto start with the server and I can stop/start/restart them like a service?
<HazRPG> jeeves_moss: hmm, not really done much of that... but might be an idea to have a look at this: http://www.debian-administration.org/articles/28 (title: Making scripts run at boot time with Debian)
<HazRPG> its about adding scripts in /etc/init.d/ folder
<HazRPG> also, if you browse to that folder, you can check out some existing scripts and see how they do things, should be pretty straight forward I think :)
<jeeves_moss> HazRPG, thanks!!  I had it running once before, and for the life of me, I can't remember what I did to make it work!
<HazRPG> jeeves_moss: no problem, hopefully its helpful... that site makes for interesting read in general btw :). Also, it could be possible that it already has one in there... or you need to copy something into there maybe?
<jeeves_moss> HazRPG, yea.  I'm thinking that I'm going to get this dumb game server to run properly, then offload it to a dedicated box
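A minimal sketch of the kind of start/stop wrapper jeeves_moss is after. The daemon command and paths are placeholders (a real /etc/init.d script should use start-stop-daemon and an LSB header block so update-rc.d can order it at boot):

```shell
# Write a tiny init.d-style script; "sleep 1000" stands in for the
# real game-server binary.
cat > /tmp/cs-server.sh <<'EOF'
#!/bin/sh
DAEMON="sleep 1000"            # placeholder daemon command
PIDFILE=/tmp/cs-server.pid

case "$1" in
  start)
    echo "Starting cs-server"
    $DAEMON & echo $! > "$PIDFILE"
    ;;
  stop)
    echo "Stopping cs-server"
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}" >&2
    exit 1
    ;;
esac
EOF
chmod +x /tmp/cs-server.sh

/tmp/cs-server.sh start
/tmp/cs-server.sh stop
```

Installed for real, it would go in /etc/init.d/ followed by something like `sudo update-rc.d cs-server defaults` to create the boot symlinks, per the debian-administration article HazRPG linked.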
<uvirtbot> New bug: #756894 in ocfs2-tools (main) "mount.ocfs2 protocol not available mounting error" [Undecided,New] https://launchpad.net/bugs/756894
<CrazyGir> https://help.ubuntu.com/community/KVM/CreateGuests <--- this mentions a "bug" requiring "restricted" in the --compontents list. is this current?
<CrazyGir> Bug notice: it seems that the package 'linux-package' is not found during the machine building process unless 'restricted' is also specified as a component option to ubuntu-vm-builder.
<jMCg> Okay, so nothing at all happens, when I try to boot this KVM domain (http://pastebin.com/bw877Yt3) with virsh, and it's not apparmor's fault, which I disabled: http://pastebin.com/bw877Yt3
<jMCg> And starting the whole thing by hand it seems that my network config is wrong.
<jMCg> There we go: http://pastebin.com/RU3SmKPL
<jMCg> So the question is, of course: Considering this http://pastebin.com/bw877Yt3 (or this KVM startup line: http://pastebin.com/RU3SmKPL) -- what's wrong with this network setup: http://dpaste.com/530664/ -- and the blindingly obvious counter question is, of course: That depends on what you're trying to achieve.
#ubuntu-server 2012-04-02
<mgw> any tips on getting cobbler replicate to work?
<mgw> on oneiric
<mgw> cobbler 2.1
<uvirtbot> New bug: #971248 in libpam-ldap (main) "pam_ldap passwd entry when using kerberos" [Undecided,New] https://launchpad.net/bugs/971248
<journeeman> Hello. I am trying to install Precise Beta2 AMD64 in Virtualbox. The installation menu appears and once I select `Install Ubuntu Server' the console goes blank. It's not accessing the disk or the network at all.
<journeeman> Any suggestions? Thanks in advance.
<twb> journeeman: hit f6, add nomodeset (IIRC)
<journeeman> twb: okay
<twb> IOW force it to use 80x25 VGA console instead of fancier graphics
<twb> That's just a guess, I don't have vbox
<journeeman> Oh okay
<journeeman> twb: It worked once. There was a segfault error and I got the installation steps menu. Any option I selected failed. Tried to save the debug logs but that failed too. Tried rebooting with nomodeset but it just hangs like before.
<uvirtbot> New bug: #969937 in mysql-dfsg-5.1 (main) "package libmysqlclient16 5.1.61-0ubuntu0.10.04.1 failed to install/upgrade: trying to overwrite '/usr/lib/libmysqlclient.so.16.0.0', which is also in package mysql-cluster-client-5.1 0:7.0.9-1ubuntu7" [Undecided,New] https://launchpad.net/bugs/969937
<uvirtbot> New bug: #971314 in ntp (main) "1:4.2.6.p3+dfsg-1ubuntu3 on Precise generates a memory corruption" [Undecided,New] https://launchpad.net/bugs/971314
<twb> sparticus: I notice precise still has that problem where it doesn't chvt 1 when plymouth has finished
<twb> Er, SpamapS
<Daviey> Gooooooooooooooood morning.
<koolhead17> moring Daviey :)
<koolhead17> *morning
<Daviey> How is everyone this rather sunny day?
<jamespage> morning all
<twb> That's interesting; lucid's installer won't let me manually place the 1M "biosgrub" partition at 0% offset; it demands to put it after
<twb> i.e. so there's a 1M gap at the start of the disk, then the stupid 1M grub stage umpteen, then the rest of the system
<lynxman> jamespage, Daviey: morning!
<lynxman> morning o/
<Daviey> mornin' lynxman
<jamespage> morning lynxman
<RoyK> hi all. is there a ppa somewhere to allow using recent bacula versions prebuilt for ubuntu lucid?
<uksysadmin> blimey Daviey - you had an extra Weetabix today?
<uksysadmin> how's OpenStack Essex looking on Ubuntu? any plans to increase deb updates in the run up to Essex final release?
<DoomGuy> hello all
<DoomGuy> I want to install this servers : FTP, Mysql, Apache, LDAP, SAMBA, DNS, SVN.
<DoomGuy> but I am confused about what can be regrouped in the same server and what can't
<DoomGuy> any idea?
<koolhead17> DoomGuy: https://help.ubuntu.com/10.04/serverguide/C/  See if it helps
<DoomGuy> koolhead17 thanks
<RoyK> DoomGuy: you can install all of those on the same server
<DoomGuy> RoyK but DNS is crucial and has nothing to do with the application layer, and if the server crashes everyone will stop working :P
<RoyK> DoomGuy: that's why you have more dns servers
<DoomGuy> RoyK I am using DNS server for sharing virtual hosts, allowing clients to access servers by name, and doing cache DNS in the same time.. So I really need it.
<RoyK> DoomGuy: you always need more dns servers
<RoyK> always
<koolhead17> nijaba: will you be coming for ubuntucloudday :D
<koolhead17> Daviey: was looking for you the other day :(
<Daviey> koolhead17: yeah, contentless pings tend to be pretty useless.
<koolhead17> Daviey: yes sir.
<koolhead17> something urgent had come up at that point of time and thats why :(
<Daviey> koolhead17: what was it?
<koolhead17> Daviey: will talk on it now during UDS, am making list of things :P
 * koolhead17 needs some suggestions
<Daviey> koolhead17: groovy
<koolhead17> Daviey: :)
<uvirtbot> New bug: #971494 in mysql-dfsg-5.1 (main) "package mysql-server-core-5.1 (not installed) failed to install/upgrade: trying to overwrite '/usr/bin/my_print_defaults', which is also in package mysql-cluster-server-5.1 0:7.0.9-1ubuntu7" [Undecided,New] https://launchpad.net/bugs/971494
<zul> Daviey:  btw glance ftbfs if the testsuite fails now as well
<Daviey> zul: oh dear, have you investigated why?
<zul> Daviey: pep8 from one of my patches ;) but before it didn't fail when the testsuite failed; now it does
<zul> although some of the tests require a full swift install now as well
<Daviey> O_o
<Daviey> *sigh*
<Daviey> thanks zul
<jamespage> lynxman, any opinion on bug 941922?
<uvirtbot> Launchpad bug 941922 in puppet "do-release-upgrade races puppet for file contents" [High,New] https://launchpad.net/bugs/941922
<lynxman> jamespage: let me have a look
<jamespage> lynxman, ta
<_ruben> that's one tricky scenario...
<lynxman> jamespage: I'd say it's plenty justified, puppet shouldn't be running during d-r-u otherwise it'll try to enforce the wrong things, also it shouldn't start automatically afterwards either IMHO
<lynxman> jamespage: but I'd say that's something a bit more on the sysadmin side of planning rather than the OS trying to outsmart him, restarting puppet after d-r-u can also be fatal
<Kiall> Just noticed Ubuntu MAAS on the 12.04 CD boot menu, Is it just orchestra renamed? Can't seem to see too much info on it
<tarvid> when I switched from dhcp to static networking broke on 12.04
<tarvid> http://paste.ubuntu.com/911465/
<tarvid> this project is dead in the water without networking
<tarvid> I can ping a neighbor but I cannot ping my gateway
<Kiall> tarvid, what does "broke" mean exactly?
<Kiall> Also.. It looks like those settings are wrong
<tarvid> what looks wrong
<Kiall> "70.167.242.233" with a netmask of "255.255.255.224" means you are assigning the network address to the server
<tarvid> broke means I can ping a neighbor but I cannot ping my gateway
<Kiall> actually
<Kiall> nevermind
<Kiall> My CIDR-fu has a momentary lapse there ;) Could you ping the GW beforehand with DHCP?
<tarvid> on my laptop I set a static address with the network manager GUI and it works
<tarvid> I have a different gateway on DHCP but indeed I did updates
<tarvid> but I can't get back to DHCP either
<Kiall> Okay, and from other servers using the same GW, can you ping the GW? (Sometimes, ping is blocked etc etc)
<Kiall> The config syntax is correct. Assuming the actual settings are also correct, the issue is likely on the GW
<tarvid> yes I can ping the gateway from the neighbor
<tarvid> the route is very different
<tarvid> I have two nics in the neighbor
<Kiall> Okay, well.. all I can tell you in that case is - the config syntax is correct, considering you can ping the neighbor, I believe the ip/mask are correct.. I would double check the GW IP, and double check the settings have been applied correctly, and after that it's most likely an issue on the GW
<Kiall> `route -n` and `ifconfig eth0` should let you check the settings have been applied..
<tarvid> the neighbor is the only host I can ping, all others are broken
<Kiall> maybe an IP conflict so?
<tarvid> Interesting Bcast:0.0.0.0 on the one that works
<RoyK> IIRC that's disabling broadcasts, which might not be what you want
<RoyK> 15:40 < Kiall> "70.167.242.233" with a netmask of "255.255.255.224" means you are assigning the network address to the server
<RoyK> erm... .233 isn't a network address
<Kiall> RoyK, yea.. see my "ignore me" message ;)
<Kiall> <Kiall> My CIDR-fu has a momentary lapse there ;)
<RoyK> :D
<tarvid> yes - an IP conflict
<amarcolino> does anyone know the solution to ubuntu 12.04 not booting even though the install was successful? all I get is a black screen
<tarvid> grub is probably broken
<amarcolino> ...
<tarvid> grub is written last; if your install was waiting for a response it didn't get, grub was not installed
<amarcolino> maybe, but based on the install process it did get installed
<amarcolino> just trying to avoid doing another clean install considering I might get the same result
<greppy> amarcolino: a quick google shows this: http://askubuntu.com/questions/110315/ubuntu-11-10-and-12-04-blackscreen-on-boot
<ahxcjb> hi all
<ahxcjb> smoser, utlemming - hi, I'm trying to create a new AMI using ec2-bundle-vol from an Ubuntu EC2 10.04 image. I've successfully done it using an AMI image, but received an error when running it against an EBS store
<hallyn> amarcolino: try alt-leftarrow?
<ahxcjb> http://pastebin.com/9MGn1W9m
<smoser> ahxcjb, well, you probably want create-image for EBS
<ahxcjb> smoser: ah
<smoser> what error did you get i wonder?
<ahxcjb> smoser: I can run create-image within your ec2 image?
<hallyn> amarcolino: the problem (if that works) is grub.conf is saying to switch to vt 7, which normally runs x but you don't have it.
<smoser> ahxcjb, you can, but you dont have to run it there.
<smoser> basically , for ebs root, you can create a snapshot of the volume
<smoser> and then register that
<ahxcjb> simple as that?
<smoser> the cleanest way to do that is to shut the instance down and snapshot the root
<ahxcjb> ok
<ahxcjb> thanks smoser
<ahxcjb> will give it a go now
<smoser> there is a option even in the aws UI that will do that.
<smoser> and it basically does those things..
<smoser>  * shut down instance
<smoser>  * create-snapshot
<smoser>  * register-snapshot
<smoser>  * start instance
<ahxcjb> I notice I can right click on the ec2 instance and there's a 'create-image' option?
<smoser> and 'ec2-create-snapshot' is the command line that will do that.
<smoser> right. thats it.
<ahxcjb> will it capture all my bash history etc?
<ahxcjb> if so, i guess i better clear that all out :)
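smoser's four steps, sketched with the classic ec2-api-tools command line (instance/volume/snapshot IDs are placeholders, and the exact register flags are worth checking against `ec2-register --help`; the console's right-click 'create image' option ahxcjb noticed does all of this in one go):

```shell
ec2-stop-instances i-12345678          # shut down instance
ec2-create-snapshot vol-12345678       # snapshot its root EBS volume
ec2-register -n "my-server-image" -a x86_64 \
  --root-device-name /dev/sda1 \
  -b /dev/sda1=snap-12345678           # register the snapshot as an AMI
ec2-start-instances i-12345678         # start instance again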
<Amoug> why isn't Xinetd used/installed by default ?
<amarcolino> hallyn, for it to go to v7 it has to boot at least I get no disk activity after the bios splash screen
<hallyn> ah
<hallyn> i did have one case where all kernels under /boot seemed to disappear
<hallyn> pretty sure it was my own fault, but dont' knwo what i did.
<amarcolino> hallyn, maybe I ain't asking the right question considering I omitted certain things; is there an issue with 12.04 with an lvm install while boot is on a separate partition/HD?
<hallyn> amarcolino: the only issue I'm aware of is the one that can cause up to 2min wait while waiting for a timeout (i'm hoping that's fixed by now, but not sure)
<amarcolino> hallyn, ok, will try another clean install and see what happens, thanks for the help
<ahxcjb> smoser: looks like it worked.. :)
<ahxcjb> smoser: only very slight issue is that the key I tell it to use doesn't get added to authorized_keys
<ahxcjb> that does work when I created an instance based store using the ubuntu ec2 image, mind.
<smoser> ahxcjb, it should still work
<smoser> you launched a new instance from that image ?
<ahxcjb> yep
<ahxcjb> but, i didn't shut down the image first before I snapshotted..
<ahxcjb> let me try again
<smoser> well, you should always shut down the instance, basically  you're snapshotting a dirty filesystem (which will force a fsck)
<ahxcjb> ok, stopping image
<hallyn> stgraber: any problems with my pushing an lxc just to add the mediate_deleted flag?  (I now stop at /dev/mapper/ accesses, but trust you'll work aroudn those with device cgroup?)
<stgraber> hallyn: nope, sounds good, go ahead. I indeed fixed all devices access using the device cgroup, so that part works as expected
<tarvid> I see no evidence that dns-nameservers 68.105.28.16, 68.105.29.16 works in either 11.10 or 12.04
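For reference, `dns-nameservers` in /etc/network/interfaces takes space-separated addresses with no comma, and on these releases it only takes effect when the resolvconf package is installed. A sketch (the addressing apart from tarvid's nameservers is placeholder):

```
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    dns-nameservers 68.105.28.16 68.105.29.16
```

Without resolvconf, nameservers go directly into /etc/resolv.conf instead.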
<hallyn> stgraber: trying to figure out whether to add 'LP:' to the changelog or not :)  it works around the bug, doesn't fix it...
<stgraber> hallyn: well, it fixes the problem, so we should close the bug against lxc at least and maybe keep an apparmor task open for these extra flags
<hallyn> the bug isn't open against lxc :)
<hallyn> i'll open it right quick
<hallyn> stgraber: oh you mean an open bug to remove the extra flags.  i see.
<hallyn> haven't done that yet
<uvirtbot> New bug: #969299 in apparmor (main) "apparmor prevents dpkg-divert and localedef from working in a container" [Critical,Confirmed] https://launchpad.net/bugs/969299
<uvirtbot> New bug: #971596 in cloud-init (main) "landscape module errors if no /etc/landscape/client.conf" [Undecided,New] https://launchpad.net/bugs/971596
<SamV522> Can I setup an ubuntu server on a network with windows pcs?
<zul> glance-rc2 uploaded
<ahxcjb> SamV522: sure?
<ahxcjb> Samba will integrate with windows for file-sharing and so forth
<SamV522> That's more what I was looking for^. Thanks
<SamV522> Are there any compatibility issues with file sharing and what not?
<SamV522> Cause I recently built a pc as a dedicated server for files and what not in my home, and I was hoping I'd be able to install ubuntu server instead of paying 200 or so dollars for windows 7 AGAIN.. :/
<ahxcjb> I think the majority of Samba issues with Windows 7 have been ironed out
<ZarroBoogs> You shouldn't have any issues.
<ahxcjb> use 10.04 LTS so you won't need to upgrade the OS anytime soon
<SamV522> Okay, cool. Thanks.  I just downloaded 10.04 LTS this afternoon.  I can install it via booting to USB, correct?
<lynxman> SpamapS: ping, whenever you're around :)
<zul> and nova-rc2 uploaded
<ahxcjb> SamV522: correct
<SamV522> Excellent.  Can't wait to get the faulty parts replaced :D
<lynxman> zul: \o/
<mtaylor> hey guys - how do I file a FFE request to re-sync a package from debian?
<ahxcjb> ad
<ahxcjb> oops
<mtaylor> yay. I figured it out
<amarcolino> hallyn, weird that it now works, inserted the server cd and chose boot from hard disk, at first I thought nothing was happening but then it actually booted, did the update and upgrade and now it seems to have fixed the issue. I am content that it is fixed but would've liked to know what the issue was
<hallyn> amarcolino: yeah, i hate when that happens :)  most likely seems an upgrade went bad and grub/initramfs were not ok
<amarcolino> hallyn, probably, heck it works :)
<Amoug> which one to use, tcp wrapper or iptables ?
<ahxcjb> depends on your usage
<ahxcjb> if you want application level blocking - ie, user foo can login, but user bar can't
<ahxcjb> then use tcpwrappers
<ahxcjb> as iptables can't do that
<ahxcjb> iptables can, however, block people attempting to login to your SSH server or allow access as set (IP, magic packet etc)
<ahxcjb> therefore, people often use BOTH.
<Amoug> Any example on users restrictions for a certain service using TCP wrapper ?
<ahxcjb> SSH logins
<ahxcjb> FTP logins
<ahxcjb> http://www.cyberciti.biz/faq/restrict-ssh-access-using-tcpd-tcpwrapper/
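A minimal hosts.allow/hosts.deny pair in the spirit of that link (the source network and host are placeholders; this only affects services linked against libwrap, which sshd on these releases is):

```
# /etc/hosts.deny -- deny sshd to everyone not matched in hosts.allow
sshd: ALL

# /etc/hosts.allow -- then whitelist trusted sources
sshd: 192.168.1.0/255.255.255.0, 10.8.0.5
```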
<jamespage> mtaylor, you found 'requestsync' right?
<mtaylor> jamespage: I did!
<mtaylor> jamespage: thanks!
<mtaylor> jamespage: if you happen to feel like doing anything with https://bugs.launchpad.net/bugs/971663
<uvirtbot> Launchpad bug 971663 in weechat "FFe: Sync weechat 0.3.7-1 (universe) from Debian testing (main)" [Wishlist,New]
<mtaylor> jamespage: I'll buy you yet another beer next time we chat
<jamespage> mtaylor, you will need to link to the upstream changelog as well so the release team can see what changes are included
<adam_g> roaksoax: ping
<jamespage> mtaylor, have you tried it out (i.e. tested it?)
<jamespage> (I will sync it assuming the release team approve BTW :-))
<mtaylor> jamespage: running it right now
<mtaylor> jamespage: and the debs from unstable worked without upgrading anything other than weechat itself
<roaksoax> adam_g: pong
<adam_g> roaksoax: ah, nvm. was scratching my head about resource-agents vs cluster-agents, but im good now
<roaksoax> adam_g: hehe ok
<adam_g> roaksoax: im considering an upload to fix bug #957913 and a couple of other similar related bashisms, which affect ubuntu similarly
<uvirtbot> Launchpad bug 957913 in resource-agents "pgsql resource agent uses == in test operator" [Low,New] https://launchpad.net/bugs/957913
<adam_g> roaksoax: what do you think?
<roaksoax> adam_g: sure, go for it
<zul> hallyn: have you seen this before? http://paste.ubuntu.com/911772/
<mgw> is there a way to get apt-get to abort if it needs interaction?
<mgw> -y -q sometimes still brings up debconf
<hallyn> zul: no.  does the .xml verify the error msg???
<zul> hallyn: no
<mgw> How can I pass preconfigure options to apt-get?
<mgw> e.g., if I want to configure with priority=critical
<patdk-wk> -o=
<mgw> with what option?
<mgw> Dpkg::Priority?
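A sketch of the usual fully non-interactive apt-get invocation mgw is after (the package name is a placeholder): DEBIAN_FRONTEND and DEBIAN_PRIORITY are the documented debconf environment variables (rather than a Dpkg::Priority option), and the Dpkg::Options entries settle conffile prompts.

```shell
sudo DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical \
  apt-get -y -q \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold" \
  install some-package            # placeholder package name
```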
<_johnny> hi. any users of rrdtool? :) i've been looking at examples all day, and read a very nice intro ( http://oss.oetiker.ch/rrdtool/tut/rrd-beginners.en.html ), but i can't seem to do the fetching (or updating) properly. this is based on an example: http://pastebin.com/1v5rycxA . am i missing something obvious here? even with examples of defining the average with 1 datapoints (no averaging) i still get no values.
<hallyn> zul: any more info?  can you open a bug?
<zul> still looking
<zul> hallyn: it looks like this bug: https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/897750 but my config might be broken as well
<uvirtbot> Launchpad bug 897750 in qemu-kvm "libvirt/kvm problem with disk attach/detach/reattach on running virt" [Low,Confirmed]
<level15> hi, all: I really need help migrating a kvm guest from one server to another. both hosts run 10.04 lts. host 1 is called server1 and has a domain called vm01. When I try virsh migrate vm01 qemu+ssh://user@server2:port/system I am prompted for user@server2 password, and then I get a msg sayin "error: unknown failure". Anyone here has experience doing VM migrations?
<level15> Perhaps you can help me find out what I might be missing
<uvirtbot> New bug: #959294 in keystone "Can't delete users" [Undecided,New] https://launchpad.net/bugs/959294
<ha1dfo> hi all. I'm running dhclient on ubuntu 10.10 on a network interface that is renamed programmatically later on. How can I stop dhclient gracefully (not killing)?
<tarvid> is there a cli version of network manager?
#ubuntu-server 2012-04-03
<uvirtbot> New bug: #971981 in quantum (universe) "usptart and missing bin files" [Undecided,Fix released] https://launchpad.net/bugs/971981
<uvirtbot> New bug: #972019 in quantum (universe) "quantum-common missing /etc/quantum" [Undecided,Fix released] https://launchpad.net/bugs/972019
<uvirtbot> New bug: #960731 in quantum (universe) "Missing: quantum, quantum-server, python-quantum" [High,Fix released] https://launchpad.net/bugs/960731
<adrien2> Hello
<adrien2> I have reason to believe 5 or so people are attempting to hack my computer
<adrien2> is there any recent security flaws in ubuntu 11.10?
<twb> Yes.
<adrien2> I'm scared
<adrien2> I logedd off from there though
<twb> Then unplug all your computers and leave them off forever
<adrien2> Why? I did nothing wrong
<twb> That's the only way to be safe
<adrien2> You guys sure are helpful
<twb> Also run an angle-grinder through your hard disks
<adrien2> what is your problem?
<twb> Man, he was uptight.
<fyrfaktry> lol
<twb> I was just trying to give him some perspective.
<twb> https://en.wikipedia.org/wiki/Computer_security offers some background theory
<twb> Also http://cwe.mitre.org/top25/
<CyberAlejo17> Hello, does anyone speak Spanish?
<virusuy> CyberAlejo17: yes
<CyberAlejo17> Hi :) what a pleasure
<virusuy> :-D
<CyberAlejo17> could you help me with a small problem I have on an openvpn server?
<CyberAlejo17> more precisely with the iptables configuration with a default DROP policy
<virusuy> oof.. I know little about iptables
<CyberAlejo17> see more info here: http://www.ubuntu-es.org/node/166700
<twb> CyberAlejo17: are you using ucf, or iptables directly?
<uvirtbot> New bug: #972043 in samba (main) "smbd crashed with SIGABRT in rep_strlcpy()" [Undecided,New] https://launchpad.net/bugs/972043
<CyberAlejo17> ucf?
<twb> ucf is Ubuntu's wrapper around iptables
<twb> http://cyber.com.au/~twb/doc/iptab is an example using iptables directly
<CyberAlejo17> I'm applying iptables directly.
<CyberAlejo17> Via a script.sh
<twb> CyberAlejo17: please pastebin your script.sh
<twb> Also please read http://jengelh.medozas.de/documents/Perfect_Ruleset.pdf, and consider joining #netfilter (which is English only)
<CyberAlejo17> ohhh, I don't have access to the server right now. I can't connect via SSH, I think it was left configured with DROP when I logged out.
<CyberAlejo17> Not much can be done this way :(
<twb> CyberAlejo17: I'm sorry to hear that.
<twb> CyberAlejo17: ask me again, when you have access to your server.
<CyberAlejo17> ok. I'll do that. Thank you very much. Sorry for the trouble.
<stgraber> hallyn: I pushed a very minor fix to ubuntu:lxc (won't upload just for that), just adds a missing space before the = sign of "lxc.network.hwaddr"
<stgraber> hallyn: http://paste.ubuntu.com/912396/
<psusi> bug #919281 appears to be an iso spin error for the server iso... kernel modules are missing... what is the correct package that should be assigned to?
<uvirtbot> Launchpad bug 919281 in ubuntu "devmapper kernel modules missing from precise server cd" [Critical,Triaged] https://launchpad.net/bugs/919281
<Hodgy> I just installed Ubuntu Server 11.10 and I am accessing it from SSH, is there anything neat I could try out with it?
<FrozenFire> For some reason, on my gateway/router server running Ubuntu Server, the internal interface which is supposed to always be set to 192.168.0.1, dhclient is reconfiguring that interface with a DHCP address.
<FrozenFire> The interface is set to static in /etc/network/interfaces
<FrozenFire> And I've even set supersede fixed-address 192.168.0.1 in dhclient.conf
<FrozenFire> But it keeps happening
<FrozenFire> It's getting frustrating as heck, because something's causing dhclient to reconfigure the interfaces on a regular basis
<FrozenFire> Any ideas as to why?
<FrozenFire> https://bugzilla.redhat.com/show_bug.cgi?id=556001 Essentially equivalent to this
<uvirtbot> bugzilla.redhat.com bug 556001 in dhcp "dhclient sets wrong interface when the correct one is disconnected" [High,Closed: worksforme]
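A debugging sketch for the symptom FrozenFire describes (eth1 is a placeholder for the internal interface): find the stray client, release its lease, and reapply the static stanza.

```shell
ps aux | grep '[d]hclient'            # which interfaces have a dhclient attached?
sudo dhclient -r eth1                 # release the lease and stop that client
sudo ifdown eth1 && sudo ifup eth1    # reapply the config from /etc/network/interfaces
```

If a client keeps coming back, something is restarting it; worth checking that the stanza really says `iface eth1 inet static` and that no leftover dhcp stanza for the same interface survives elsewhere in /etc/network/interfaces.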
<RoyK> hi all. I have two kvm nodes setup with ssh auth between them, both running oneiric. When I try to migrate this vm, it tells me http://paste.ubuntu.com/912552/ - source kvm host has been running for some weeks, dst host has just been rebooted for good measure
<RoyK> oh, and after that attempt, the vm loses contact with its root device
<RoyK> anyone awake?
 * RoyK += 0xc0ffee
<lynxman> morning o/
<twb> RoyK: surely you should XOR that
<RoyK> twb: heh
<jamespage> morning all
<twb> What's nomodeset called in lucid?
<twb> post-install I want it to fuck off and stop loading a fuzzy, larger font than the 80x25 it starts with
<twb> That is, vga16fb
<twb> Fuck it, blacklisting it in modprobe.d works
<twb> grub-common has an update in lucid/updates, but grub-efi-amd64 doesn't -- WHY?  They're from the same source package.
 * smb chants iscsitarget into Daviey 's generic direction...
<Tm_T> language ...
<_ruben> iscsitarget as in iet? yuck :P
<twb> _ruben: it's OK provided you remember to use a non-terminated BLACK goat
<_ruben> twb: hehe
<twb> OK, this is confusing.  lucid's grub-efi-amd64 says that it needs an EFI partition, and doesn't take a device.  The same program in precise, DOES take a device, and if you just do "grub-install /dev/sda" without an EFI partition on that device, it succeeds without output.  Looking at the partition table afterwards shows there is still no EFI partition.  WTF?
 * twb reboots to see if anything has changed
<twb> rebooting makes it boot from the MBR still
<kokyu> is anyone here (kind of) experienced with Ubuntu plus OpenStack?
<uksysadmin> kokyu: what do you need to know? dev support or end user help/
<kokyu> I just installed 12.04 (daily build), and despite the fact that I enabled OpenStack during the installation process, it did not succeed (without telling me why), so I excluded it again and continued the install
<kokyu> (still writing my issue :-) )
<kokyu> now, after the post-install reboot, I see that it actually has installed OpenStack, but Swift failed to start up; Compute (nova) seems to be running, at least when checking the process list.
<kokyu> I am kind of new to both, Ubuntu (not Linux et al) and OpenStack, so I am a little confused on how to fix things now. is OpenStack now half or fully installed, or am I missing just some db configuration bits?
<kokyu> maybe choosing 12.04 wasn't that great an idea, but this is going to be the next LTS and is to be released in a few weeks, so I chose this one (also due to the recent kernel and userland :)
<kokyu> uksysadmin: I kind of need someone to give me hints to get OpenStack initially running :)
<kokyu> we're currently using Proxmox 1.9 with OpenVZ, with LTS 8.04 and would like to switch to OpenStack for the new hardware with 12.04 LTS ideally
<kokyu> and since OpenStack seems to be core part of Ubuntu now, it just seems ideal.
<uksysadmin> kokyu: check this out: http://uksysadmin.wordpress.com/2012/03/28/screencast-video-of-an-install-of-openstack-essex-on-ubuntu-12-04-under-virtualbox/
<kokyu> oha, interesting. I'll watch that. many thanks so far ;)
<uvirtbot> New bug: #972268 in clamav (main) "clamscan crashed with SIGSEGV in pthread_cond_timedwait@@GLIBC_2.3.2()" [Undecided,New] https://launchpad.net/bugs/972268
<kokyu> uksysadmin: thanks :)
<uksysadmin> np kokyu - check out #openstack too - other guys can help you in there with OpenStack issues
<kokyu> uksysadmin: is your vimeo video guide really without sound, or is it my audio messing up right now?
<uksysadmin> ah - sorry - I should put a message up - it is without sound
 * uksysadmin went for the 1900s silent movie genre ;-)
<kokyu> lol damn it, and I was searching for the issue locally :D
<uksysadmin> I'll update my blog. sorry! :S
<kokyu> never mind, now that I know. I really appreciate ppl doing screencasts; however, I can only encourage you to actually speak over it, it is really much more helpful with audio :)
<kokyu> uksysadmin: I also found this one, btw: http://www.hastexo.com/resources/docs/installing-openstack-essex-4-ubuntu-1204-precise-pangolin
<koolhead17> uksysadmin: ^^
<uksysadmin> that's a great tutorial too
<uksysadmin> and don't encourage koolhead17 to make me do a voice over
<uksysadmin> ;-)
<uksysadmin> There are some great guides coming out now that accompany the documentation
<kokyu> hehe
<koolhead17> lol. you should definitely do that man
<uksysadmin> ok ok, I'll try and find some time to add some audio
<sergevn> hi
<uvirtbot> New bug: #972299 in ntp (main) "package ntp 1:4.2.6.p3+dfsg-1ubuntu3 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/972299
<ironm> !daily
<ubottu> Daily builds of the CD images of the current development version of Ubuntu are available at http://cdimage.ubuntu.com/daily/current/ and http://cdimage.ubuntu.com/daily-live/current/
<sergevn> hi
<ironm> !ubuntu-server+1
<ironm> !ubuntu-server
<ubottu> Ubuntu Server Edition is a release of Ubuntu designed especially for server environments, including a server specific !kernel and no !GUI. The install CD contains many server applications. Current !LTS version is !Lucid (Lucid Lynx 10.04) - More info: http://www.ubuntu.com/products/whatisubuntu/serveredition - Guide: https://help.ubuntu.com/10.04/serverguide/C/ - Support in #ubuntu-server
<hlan> hello, I'm using the ppa ondrej/php5 because I need php 5.4 but php segfaults so I also need the debug symbols so I can do a gdb backtrace... how do I proceed?
<ironm> hello. Please allow me one question. I use XCP 1.5 (xen) as host and I can successfully install ubuntu-server 11.10 as a VM, but I am running into a "CD-ROM mount" issue with ubuntu-server 12.04. More details are in: xcp1.5-ubuntu-server12.04.error.txt
<ironm> http://paste.debian.net/161870/
<ironm> can anyone confirm this issue? Thank you in advance for any hints.
<ironm> would be #ubuntu+1 channel better for the question above?
<alein> Hi can I ask in the channel?
<uksysadmin> alein: I can't speak for everyone, but I'll let you. :p
<alein> Ok thanks, I have one big problem with one of my servers. The problem is that some bad person made a huge ddos syn flood on port 80. That overloaded my server: http stopped working and other services became slow and useless.
<ahxcjb> your front end firewalls should mitigate against syn floods
<alein> I don't talk about webserver flood - I mean tcp syn flood on port 80
<alein> I talk about 200Mbps syn flood traffic
<uksysadmin> +1 on ahxcjb.
<alein> ahxcjb what you mean? my ISP should stop the flood?
<rbasak> alein: what's your question?
<uksysadmin> ok, few solutions - set up your systems to not allow the creation of many syn connections without the full ack etc... firewall, ips.
<alein> how to deffend myself
<uksysadmin> if its to port 80... another quick win - check out www.cloudflare.com and have your site live behind that
<ahxcjb> set your front end firewall to mitigate against the attack
<ahxcjb> what is your front-end firewall?
<ahxcjb> is it Ubuntu?
<rbasak> Enabling syn cookies should protect against a plain syn flood
<alein> ahxcjb iptables doesn't help at all
<ahxcjb> http://www.lainoox.com/tag/syn-flood-iptables/
<ahxcjb> alein: of course it does!
<alein> I can drop all the traffic and the server gets overloaded
<ahxcjb> then you're not doing it correctly
<alein> lol
<ahxcjb> if you're getting a 200Mbps syn flood, you should involve your ISP
<alein> iptables -I INPUT -i eth0 -s 0.0.0.0/0 -j DROP
<patdk-wk> http://www.symantec.com/connect/articles/hardening-tcpip-stack-syn-attacks
<ahxcjb> well that's just silly
<alein> that doesn't help
<ahxcjb> it doesn't help because you clearly haven't got a clue
 * patdk-wk has never been syn flooded
<alein> ;)
<patdk-wk> at least not more than my server could handle by itself
<patdk-wk> without adjustments
<patdk-wk> people do love to do POST floods for some reason
<ahxcjb> if you're suffering a major DoS you need to involve your ISP
<ahxcjb> as they can mitigate against the flood far better than a home user can.
<alein> ahxcjb I call them, wtite them a letter and the answer was "We can't handle it"
<ahxcjb> alein: then change ISPs
<alein> write*
<alein> its not that easy
<alein> so  http://www.lainoox.com/tag/syn-flood-iptables/ should help?
<ahxcjb> I think if you are being hit by the size of DoS  that you state, then you have to involve your ISP
<ahxcjb> and if your ISP doesn't act, then MOVE ISP
<alein> I have a 1gbps connection so they can hardly overload my bandwidth
<alein> but that last flood was ugly
<ahxcjb> if you have a 1gbps connection, then you're a fixed line user
<ahxcjb> and should have budget for proper firewalls
<ahxcjb> which i suggest you purchase
<ahxcjb> to allow you to mitigate against such attacks
<alein> Nope, I don't have it
<ahxcjb> then how and why do you have a 1gbps connection? Are you a business?
<alein> Nope, I'm not a business, I have a little game server
<ahxcjb> on 1gbps? pull the other one
<alein> just have friends in that ISP and I have 1 Gbps
<ahxcjb> so are you paying for this co-lo
<ahxcjb> ?
<alein> Yes with 9 years work in that company
<patdk-wk> sounds like an "I'll put it under my desk" deal
<alein> "possible SYN flooding on port 80. Sending cookies" ... I'm sick of this
<dork> how distributed is it
<dork> are the bots leaving any sort of fingerprint in the access log?
<dork> try tarpit'ing the string of their client
<dork> http://www.spinics.net/lists/netfilter/msg17583.html
<dork> but bottom line is anyone who runs a box and gets dos'd only mitigates and eventually contacts upstream carrier to filter
<dork> so do it the right way
<dork> and stop making excuses :)
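For anyone following along, the two mitigations suggested in this exchange (syn cookies per rbasak, connection rate limiting per ahxcjb) look roughly like this; the thresholds are illustrative starting points, not tuned values, and the iptables lines must be run as root:

```
# /etc/sysctl.conf -- enable syn cookies (rbasak's suggestion); apply with `sysctl -p`
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048

# illustrative iptables sketch of ahxcjb's approach: rate-limit new
# connections to port 80, drop the excess
iptables -A INPUT -p tcp --dport 80 --syn -m limit --limit 50/s --limit-burst 200 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 --syn -j DROP
```

As ahxcjb and dork both say, this only buys headroom on the box itself; a 200Mbps flood still has to be filtered upstream by the ISP.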
<rbasak> jamespage: I apt-get installed jenkins on oneiric, but it failed to start because /etc/init/jenkins.conf uses JAVA_HOME=/usr/lib/jvm/default-java which doesn't exist. Changing it to /usr/lib/jvm/java-6-openjdk fixed it. Is this a bug? Any idea why I don't have a /usr/lib/jvm/default-java?
<jamespage> rbasak, bug 971952
<uvirtbot> Launchpad bug 971952 in jenkins "Java home not correct causes jenkins to crash at start" [Undecided,New] https://launchpad.net/bugs/971952
<rbasak> thanks :)
<jamespage> jenkins depends of default-jre-headless | java6-runtime-headless...
<jamespage> rbasak, you can fix it by installing default-jre-headless
<jamespage> it works differently in precise, so it's an oneiric-specific issue
 * jamespage wishes for good JAVA_HOME detection
<jamespage> virtually every server type package has this problem
<jamespage> rbasak, hence things like bigtop-utils....
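The JAVA_HOME detection jamespage wishes for can be sketched like this (bigtop-utils does something along these lines); `detect_java_home` is a hypothetical helper that returns the first existing directory from a candidate list:

```shell
#!/bin/sh
# Hypothetical JAVA_HOME detection: walk a candidate list, print the
# first directory that exists, fail if none do.
detect_java_home() {
    for d in "$@"; do
        if [ -d "$d" ]; then
            printf '%s\n' "$d"
            return 0
        fi
    done
    return 1
}

# typical oneiric-era use in an init script would be something like:
# JAVA_HOME=$(detect_java_home /usr/lib/jvm/default-java /usr/lib/jvm/java-6-openjdk)
```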
<ironm> hello. Does anyone run ubuntu-server 12.04 on XCP 1.5 host? (free xen-server)
<rbasak> jamespage: that worked - thanks!
<jamespage> rbasak, its a PITA
<jamespage> rbasak, BTW I've been backporting precise jenkins to ppa:hudson-ubuntu/backports if you want something a bit more up-to-date for the next few weeks
<rbasak> jamespage: thanks, I'll use that if I find something I need that's missing
<ironm> hello. Anyone around running a kvm  based host on ubuntu-server 11.10 or 12.04? I am looking for some documentation about configuring and running kvm VMs
<SpamapS> lynxman: pong, was out yesterday.
<SpamapS> ironm: https://help.ubuntu.com/8.04/serverguide/C/libvirt.html
<ironm> thanks a lot SpamapS :)
<lynxman> SpamapS: hey I'm having some problems with the splice command, wanted to pick your brain for a bit :)
<SpamapS> hrm.. why does google insist on giving me the hardy docs? We need to setup a sitemap for help.ubuntu.com
<lynxman> SpamapS: http://pastebin.ubuntu.com/913051/ <-- already converted all the soft links to regular files, still errors
<SpamapS> ironm: https://help.ubuntu.com/11.10/serverguide/C/libvirt.html probably more current :)
<ironm> SpamapS,  I will check this one too. Merci :)
<SpamapS> lynxman: hmm
<lynxman> SpamapS: easily reproducible, negronjl suggested to me that soft links wouldn't work so I ran a small script to convert them
<lynxman> SpamapS: the error message is kinda confusing, that's why I wanted to ask you :)
<SpamapS> lynxman: the logic looks a bit out of order
<lynxman> SpamapS: what would you suggest?
<lynxman> SpamapS: do the charms need to be in any special order?
<lynxman> SpamapS: even if I try to do one it's failing I'm afraid
<SpamapS> lynxman: yeah I think the lack of tests for splice is showing. ;)
<lynxman> SpamapS: hah yeah :)
<koolhead17> Daviey: around?
<SpamapS> lynxman: I believe the simple fix is to add an os.mkdir before the proxy_relation calls
<lynxman> SpamapS: just create a silently failing os.mkdir before the proxy_relation call to create the dir if it doesn't exist?
<SpamapS> lynxman: yeah, or perhaps move the make_hook calls before the proxy relation calls.
<SpamapS> (they do a mkdir)
<lynxman> SpamapS: could you pass me a small diff so I know where to look at quickly? :)
<lynxman> SpamapS: ah neverminds scripts/splice
<lynxman> SpamapS: I've been too long doing ruby, almost forgot python now ;)
<lynxman> SpamapS: yeah that worked \o/
<SpamapS> lynxman: I think its probably time that we merge splice into charm-tools
<lynxman> SpamapS: would be good, it's extremely useful
<SpamapS> lynxman: still feels very experimental though.. hrm
<ironm> SpamapS, I am wondering a bit why virtinst has not been installed even though I chose the host tasksel option (for kvm)
<SpamapS> ironm: dunno, I have to admit, my libvirt knowledge is pitiful, I usually just use virt-manager
<ironm> SpamapS, virt-manager hasn't been installed either
<SpamapS> ironm: its a GUI so thats no surprise
<ironm> oh .. I see :)
<hlan> I'm trying to automate apt and I'm copying sources.list however that makes apt hang on /var/lib/dpkg/info/base-passwd.postinst
<hlan> I guess some trust/security files must also be copied...  what more files do I need to copy except /etc/apt/sources.list
<hlan> ?
<SpamapS> hlan: what exactly are you trying to automate?
<SpamapS> hlan: sources.list would have nothing to do with /var/lib/dpkg/info/base-passwd.postinst ..
<SpamapS> hlan: if you want to create a new, tiny ubuntu, you want debootstrap, not apt
<hlan> SpamapS: apt-get spawns that process and it waits for some kind of user prompt
<SpamapS> hlan: that process is the post install script for a package. dpkg is spawning it, not apt
<hlan> SpamapS: what kind of information is it asking for?
<hlan> it's trying to read from stdin
<hlan> unfortunately I can't see stdout
<SpamapS> hlan: no idea, but if you want to not be prompted you can use export DEBIAN_FRONTEND=noninteractive
<SpamapS> hlan: it will then choose defaults for all questions
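SpamapS's suggestion in script form, roughly (the package name is a placeholder; the Dpkg options additionally keep dpkg from prompting about conffile changes):

```
#!/bin/sh
# run apt-get without any interactive prompts (a sketch; requires root)
export DEBIAN_FRONTEND=noninteractive
apt-get -y \
    -o Dpkg::Options::=--force-confdef \
    -o Dpkg::Options::=--force-confold \
    install some-package
```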
<konradb> hi, is it possible to make dist-upgrade without rebooting?
<NGNTNT> hi everybody
<konradb> everypony*
<NGNTNT> can anyone help me with my stucked-at-the-boot ubuntu server ?
<NGNTNT> noone ?
<SpamapS> NGNTNT: can you be more specific than "stuck at the boot" ?
<SpamapS> konradb: yes you can upgrade almost anything without rebooting.. notable exceptions are upstart and the kernel (though there is 'ksplice' for kernels, I don't know how stable it is)
<NGNTNT> at the boot sequence the server goes to a busybox prompt. The previous lines said mounting /dev to /root/dev failed
<NGNTNT> I tried to launch fsck booting from a live cd but nothing worked yet
<jamespage> Ursinha,  http://reports.qa.ubuntu.com/reports/ubuntu-server/triage-report.html is looking better (if a little scary)
<Ursinha> jamespage, is the data correct? I removed one constraint that was making that miss some bugs
<jamespage> Ursinha, well I could see bugs moving through the queue so I think so
<jamespage> Ursinha, ~260 New bugs was the scary bit (was 275 this morning :-))
<raubvogel> Does ubuntu now do disk alignment when partitioning hard drives?
<SpamapS> Ursinha: thanks for fixing that
<SpamapS> jamespage: and well done noticing it was wrong ;)
<Ursinha> thanks guys for using it
<Ursinha> :)
<jamespage> Ursinha, makes my life easier (well it does now)
 * jamespage thinks we need to have a blitz on New bugs
<jamespage> ubuntu-server team meeting in #ubuntu-meeting about to start...
<ironm> SpamapS, I used the following line to create VM. I am not sure if it is correct syntax. How can I connect to the install console?
<ironm> virt-install -n web70 -r 2048 --disk path=/dev/sdd -c /var/lib/libvirt/ubuntu-11.10-server-amd64.iso  --network network=default --connect=qemu:///system --graphics none -v
<ironm>  10 web70                running
<uvirtbot> New bug: #972578 in rabbitmq-server (main) "rabbitmq-server 2.7.1-0ubuntu4 failed to start due to wrong directory owner" [Undecided,Invalid] https://launchpad.net/bugs/972578
<ironm> hmm... : ironm@dev10:~$ virt-viewer --connect qemu:///system 10
<SpamapS> ironm: you're asking the wrong person. ;)
<SpamapS> hallyn: ^^ perhaps you can help ironm ?
<ironm> ok .. thanks a lot anyway SpamapS :)
<ironm> it looks like the VM is running. I don't know how to connect to console using virt-viewer
<ironm> console of this VM ...
<hallyn> i don't use virt-install.  but perhaps 'virsh console 10', if you have a serial console hooked up inthe guest
<ironm> thank you hallyn  .. i will check it
<ironm> hmm ... I am getting the following output but nothing happens anymore and I am not able to type in ...
<ironm> Connected to domain web70
<ironm> Escape character is ^]
<ironm> hallyn, has the following line a correct syntax? virt-install -n web70 -r 2048 --disk path=/dev/sdd -c /var/lib/libvirt/ubuntu-11.10-server-amd64.iso  --network network=default --connect=qemu:///system --graphics none -v
<hallyn> ironm: as I say I don't use virt-install.  looks fine based on what i know
<hallyn> i wonder if mdeslaur uses it...
<hallyn> i'll give it a whirl though
<ironm> thanks a lot hallyn
<hallyn> looks fine especially per https://help.ubuntu.com/11.10/serverguide/C/libvirt.html
<mdeslaur> ironm: I don't think you can connect to a virt-install console
<hallyn> ironm: why exactly did you say --graphics=none?  if you do vnc, you'll get the console over vnc
<hallyn> which i think is what you need right now
<hallyn> it won't cause x to be installed
<mdeslaur> ironm: ah, I take it back, hallyn is right
<ironm> hallyn, I am on console of the host (ssh)
<ironm> it looks like I need a client with vnc ...
<hallyn> ironm: i'm afraid we have terminology confusion.  'ssh' gives you a pty, fwiw.  'console' usually means a getty running on /dev/ttyX
<hallyn> right.  once it's all set up you can then ssh into the guest
<hallyn> virsh console itself "works", but I don't know if virtinst is setting /dev/ttyS0 up, nor do i think ubuntu server is setting it up
<hallyn> so virsh console gives nothing bc there is no getty running
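To make `virsh console` useful, the guest needs a getty on its serial port, as hallyn says; on an oneiric-era guest an upstart job along these lines would do it (a sketch: the file name and baud rate are the conventional choices, not anything virtinst sets up for you):

```
# /etc/init/ttyS0.conf -- run a getty on the serial console so
# `virsh console <domain>` gets a login prompt
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]
respawn
exec /sbin/getty -L 115200 ttyS0 vt102
```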
<utlemming> kirkland: ping
<ironm> hallyn, yes .. I thought it is possible to use an install console also from the KVM host
<hallyn> i don't know what you mean
<smoser> kirkland, you see https://bugs.launchpad.net/ubuntu/+source/byobu/+bug/966686
<uvirtbot> Launchpad bug 966686 in byobu "byobu clears screen on login" [High,New]
<ironm> hallyn, I try to follow you
<smoser> RoyK, you were asserting somewhere that a cd install in a vm results in an empty console on server install, is that right?
<smoser> gah
<smoser> s/RoyK/RoakSoax/
<roaksoax> smoser: i'm here
<smoser> roaksoax, then look above. stupid caps change.
<smoser> anyway
<roaksoax> lol
<roaksoax> smoser: yeah, I've seen that issue
<smoser> can you open a bug.
<smoser> hallyn, ^
<roaksoax> smoser: sure, let me test it again to confirm and will open a bug
<smoser> and we should determine if thats vm only.
<roaksoax> k ;)
<hallyn> smoser: eh what?
<hallyn> if you install a non-server iso without x, grub.conf still redirects you to vt7, which is empty.  is that what you're referring to?
<hallyn> it's not only in vms
<hallyn> you can edit /etc/default/grub, or just hit alt-left to get a console
<smoser> this was server iso install
<smoser> but admittedly possibly via preseed and cobbler/maas
<hallyn> and what does /proc/cmdline show
<roaksoax> hallyn: yes
<roaksoax> hallyn: that's it, it shows a black screen with cursor, but changing ttys gives you the login prompt
<hallyn> I assume there is vt.handoff=7 in /proc/cmdline
<roaksoax> let me check, doing a new install
<kirkland> utlemming: howdy
<utlemming> kirkland: have you perchance seen my bug on byobu clearing the screen on login?
<utlemming> bug 966686
<uvirtbot> Launchpad bug 966686 in byobu "byobu clears screen on login" [High,New] https://launchpad.net/bugs/966686
<kirkland> utlemming: yeah, haven't had time to look into that
<kirkland> utlemming: is that a difference between tmux and screen, perhaps?
<kirkland> utlemming: I think that's because the older byobu used the /usr/bin/byobu-shell to launch a shell
<kirkland> utlemming: which cats the motd
<utlemming> kirkland: yeah, the tmux version is the one that clears the screen
<kirkland> utlemming: do you think this is release critical?
<kirkland> utlemming: I reckon it is, since it removes the landscape commercial, huh?
<utlemming> kirkland: yes...we are putting some logic in to warn people of invalid or uninstalled locales. There is a problem with some packages where, if LC_* variables are exported, the package may not install.
<utlemming> kirkland: and it removes the blatant commerical advertising too
<kirkland> utlemming: well, byobu is off by default now, so meh :-)
<utlemming> kirkland: hence the reason I filed it as "high" instead of critical. Although, the tmux version of byobu is pretty slick. I'm using it a whole lot more myself.
<kirkland> utlemming: okay, I'll get that one fixed, please assign it to me, mark it triaged/high and milestone it appropriately
<kirkland> utlemming: i *love* it ;-)
<kirkland> utlemming: 1920x1080 with about ~6 splits usually
<kirkland> utlemming: and rarely more than 1 window
<kirkland> utlemming: okay, i'll get on that today
<utlemming> kirkland: thanks :) Marked triaged, assigned and targeted
<hallyn> roaksoax: i really don't know what's to be done about that :)  unless we have the core x package do the appending of the vt.handoff line
<bobweaver> Does anyone know where to get this or how much it costs? Is it real? Does it work on deb systems or only rpm? etc. http://www.hepsia.com/ Talk about bad advertising; all I can find is a demo lol
<jamespage> adam_g, around? want to discuss squid3?
<uvirtbot> New bug: #967887 in glance "Glance's auto-recovery of db connections is incompatible with newer sqlalchemy" [High,Fix released] https://launchpad.net/bugs/967887
<dexter76> hello, on a fresh ubuntu 11.10 server virt-install raises a "Could not find an installable distribution at" error whatever iso/http/ftp i give to --location
<dexter76> any ideas what to check?
<adam_g> jamespage: sorry, lost in an email. yea
<adam_g> jamespage: still around or did i miss you?
<kklimonda> huh, it seems like idmapd doesn't start early enough on precise when used with autofs..
<kklimonda> ah, it's a different issue - my network doesn't start early enough so idmapd can't figure out the domain..
<kklimonda> but that makes no sense
<imjustmatthew> I'm having some trouble getting an upstart job to work right, is there an event that fires when a DHCP lease is accepted?
<smoser> hallyn, ping
<hallyn> .
<KM0201> !ping
<ubottu> another contentless ping... sigh...
<KM0201> lol
<smoser> how would you boot a kvm instance with networking other than guest net
<smoser> ie, i'd like to use kvm without libvirt, but the only time i ever do something other than guest net is with libvirt
<hallyn> smoser: hold on lemme pastebin what i do
<hallyn> smoser: http://paste.ubuntu.com/913492/
<hallyn> or are you asking for libvirt xml to do that?
<smoser> hallyn, thats what i wanted
<smoser> minus...
<smoser> how do you get a network interface there
<hallyn> you mean br0?
<smoser> in the guest
<smoser> static?
<hallyn> i don't understand.  the cmds above will give you an eth0 in the guest
<smoser> right.
<smoser> but how does it get an IP
<hallyn> depends on how br0 is set up
<smoser> ah
<smoser> i see
<smoser> br0 from libvirt ?
<hallyn> it should ping the same dhcp server as your host does
<hallyn> or br0 that eth0 is slaved to
<smoser> ah.
<smoser> yeah.
<smoser> i need to provide it one then.
<smoser> k.
<hallyn> or,
<hallyn> you can set one in the guest by hand i suppose, but it probably won't talk to the network right
<smoser> hallyn, thanks.
<hallyn> np
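hallyn's paste isn't reproduced here, so this is only a generic sketch of the technique he describes, not his exact commands: create a tap device, enslave it to a bridge that eth0 is already on, and hand the tap to kvm. Device names, image path, and MAC address are all illustrative, and the commands need root:

```
# assumes br0 already exists (e.g. defined in /etc/network/interfaces)
# with eth0 enslaved to it
ip tuntap add dev tap0 mode tap    # or: tunctl -t tap0
brctl addif br0 tap0
ip link set tap0 up

kvm -m 1024 -drive file=guest.img \
    -net nic,macaddr=52:54:00:12:34:56 \
    -net tap,ifname=tap0,script=no,downscript=no
```

The guest's eth0 then sits on the same segment as br0, which is why (as hallyn says) it picks up whatever DHCP server the host sees.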
<hlan> will the log rotator process any log directly stored in /var/log ?
<hlan> or just "syslogs"
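To answer hlan's question: logrotate only rotates logs that some config file declares, in /etc/logrotate.conf or a snippet under /etc/logrotate.d/; files merely sitting in /var/log are not picked up automatically. A custom log needs its own stanza, e.g. (file name and schedule illustrative):

```
# /etc/logrotate.d/myapp -- illustrative stanza for a custom log
/var/log/myapp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```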
<uvirtbot> New bug: #972786 in ipsec-tools (main) "racoon does not bind to interfaces brought up afterwards" [Undecided,New] https://launchpad.net/bugs/972786
<zul> adam_g:  so novnc doesnt have tarballs per se, like release tarballs, so i think we should be doing another snapshot with the horizon patches applied
<alein> hi all
<alein> I would like to ask, is there any way to catch NULL TCP packets with tcpdump?
<RoyK> alein: "Null Packets are neither sent nor acknowledged when not received."
<RoyK> 2.1. Formal Definition
<RoyK> [This section is intentionally left blank, see also Section 0 of [NULL].]
 * RoyK loves april fool RFCs :D
<alein> RoyK I'm trying to catch the true ip address of spoofed syn flood attack.
<alein> Can I do this with tcpdump and wireshark
<RoyK> how do you want to catch the real IP when it's spoofed?
<RoyK> spoofed means it's overwritten
<uvirtbot> New bug: #972845 in tftp-hpa (main) "after upgrade to precise, service did not start" [Undecided,New] https://launchpad.net/bugs/972845
<RoyK> and the routers don't track what they do
<alein> The only way to detect the default ip is to look for NULL TCP packets (meaning no TCP flags set) with destination ports of 0.
<alein> but I'm not sure that I can do this with tcpdump or only with an intrusion-detection system
<RoyK> alein: what should generate that null packet, then?
<alein> meaning no TCP flags set
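For what alein describes, tcpdump can match this directly: byte 13 of the TCP header is the flags byte, so a "null" packet (no flags set) is `tcp[13] = 0`, and the destination-port-0 condition he mentions can be added on top. A sketch (interface name illustrative; needs root):

```
# capture TCP segments with no flags set ("null" packets)
tcpdump -ni eth0 'tcp[13] = 0'

# narrower: null-flag packets with destination port 0
tcpdump -ni eth0 'tcp[13] = 0 and tcp dst port 0'
```

So an IDS is not strictly required; whether those packets actually reveal the real source of a spoofed flood is a separate question, as RoyK points out.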
<zul> adam_g: we 600 the keystone config files dont we?
<adam_g> zul: keystone.conf, yeah, we should
<adam_g> zul: also, /var/lib/keystone/keystone.db if it exists
<zul> k
<rmk> Alright so the Ubuntu dhcp client seems to just give up and die if the dhcp server is down during the time a request is bad..
<rmk> s/bad/made
<uvirtbot> New bug: #950942 in glance (main) "glance-registry upstart should also include 'started' for mysql/pgsql" [Low,Invalid] https://launchpad.net/bugs/950942
<rmk> So, when we lose our dhcp server, our dhclient process retries for about a minute then exits rather than sleeping.  Ubuntu 11.10 64-bit server.  Is this expected behavior and is there a way to change that?
<rmk> I can obviously script aorund it but I figure there has to be a cleaner way.
<jiboumans_> hi smoser, just tried to launch a new ami (ami-37af765e) in us-east-1a and cloud-init exited with code 1. Using the slightly older ami-3e9b4957 everythings works just dandy. figured you'd want to know.
<dork> it is expected behavior because it's assuming you chose the wrong interface, meant to provision a static ip, etc
<dork> just hit go back and do it again
<smoser> jiboumans_, i suspect mirror issues. but will give a quick check.
<smoser> hm.. i dont know of ami-37af765e
<jiboumans_> http://uec-images.ubuntu.com/query/lucid/server/released.current.txt
<jiboumans_> smoser: it's listed there ^
<smoser> ah. k.  my cache was just out of date
<jiboumans_> smoser: this is the last bits in the syslog: https://gist.github.com/2296262
<smoser> well, i can't be sure why your puppet died.
<dork> rmk: oh nevermind thought you meant during installation
<smoser> perhaps it could not reach the master ?
<rmk> no I need it to retry forever
<dork> rmk: try dhclienf.conf
<jiboumans_> smoser: possibly, but it left the ami in a non-good state and appeared to exit the run.
<dork> dhclient.conf
<jiboumans_> am i seeing that wrong?
<dork> looks like the params are in there
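The dhclient.conf knobs dork is pointing rmk at are the timing statements; something like this keeps the client retrying rather than giving up (values are illustrative, see dhclient.conf(5)):

```
# /etc/dhcp/dhclient.conf -- illustrative timing settings
timeout 300;   # how long to try before deciding no server is reachable
retry   60;    # how long to wait before trying again after a failure
```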
<jiboumans_> there was no /etc/puppet generated for example
<smoser> jiboumans_, console output (get-console-output) is more helpful. it will have more info. i suspect it has an apt-get update failure.
<smoser> but there will probably be something meaningful to you there.
<jiboumans_> smoser: i've scrapped the instance, but happy to respin one if it helps you diagnose
<smoser> (and note, in later releases, you should set
<smoser>  output: {all: '| tee -a /var/log/cloud-init-output.log'}
<smoser> you have access to the instance up to 1 hour after termination
<smoser> jiboumans_, so with the above, you'll have everything output by cloud-init or its subprocesses in that log file.
<smoser> just easier to get at than the console
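smoser's `output` setting lives in cloud-config user-data; as a complete snippet (later releases only, per his caveat):

```
#cloud-config
# send everything cloud-init and its subprocesses write to a log file,
# instead of relying on the (1-hour) console output buffer
output: {all: '| tee -a /var/log/cloud-init-output.log'}
```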
<jiboumans_> smoser: i don't seem to be able to start it up again from the console though.. am i missing something?
<jiboumans_> thanks, adding that to our start up script
<smoser> you cant start it up again
<smoser> but at least from the tools, you'll be able to get console output
<smoser> its just stored for 1 hour.
<smoser> ie, euca-get-console-output <i-iabababab>
<jiboumans_> checking
<jiboumans_> smoser: you're right. updated the gist: https://gist.github.com/2296312
<jiboumans_> W: Failed to fetch http://us-east-1.ec2.archive.ubuntu.com/ubuntu/dists/lucid-updates/main/source/Sources.bz2  Hash Sum mismatch
<smoser> jiboumans_, you  might be a good candidate for our s3 mirrors.
<jiboumans_> smoser: i'm listening :)
<smoser> which (given disabled apt pipelining, which is current in daily images, or anything with up-to-date cloud-init)
<smoser> should be more stable.
<smoser> we'll have officially released amis later this week that have the option already disabled inside them
<utlemming> smoser, jiboumans_: lucid was officially released today with the update
<smoser> but you can either launch the daily, or set the option in apt yourself before update.
<jiboumans_> sorry, you mean that new images will use the s3 mirror by default?
<smoser> jiboumans_, so there ya go.
<smoser> jiboumans_, no, they use the other mirrors. but you can tell them fairly easily to use the s3
<smoser> utlemming, can tell you how
<utlemming> run 'sed -i "s,ec2.archive.ubuntu.com,ec2.archive.ubuntu.com.s3.amazonaws.com,g" /etc/apt/sources.list'
<jiboumans_> then i didn't quite follow; what's the apt-pipelining option? I see the lp repo, but not the rationale behind it
<utlemming> APT uses a micro-enhancement (HTTP Pipelining) to eke out a few microseconds of performance. S3, well, it doesn't get along with pipelining. If you disable apt's pipelining, then S3 works well.
<jiboumans_> ah, that makes sense
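Putting the two pieces together: utlemming's sed to point sources at the S3-backed mirror, plus the pipelining override. The apt.conf.d file name here is an illustrative choice; any snippet in that directory works:

```
# switch sources to the S3-backed mirror (run as root)
sed -i 's,ec2.archive.ubuntu.com,ec2.archive.ubuntu.com.s3.amazonaws.com,g' \
    /etc/apt/sources.list

# disable HTTP pipelining, which S3 does not get along with
echo 'Acquire::http::Pipeline-Depth "0";' \
    > /etc/apt/apt.conf.d/99-no-pipelining
```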
<jiboumans_> utlemming/smoser: is the s3 apt repo code viewable somewhere? it's on my bucket list to do that internally for our own apt repo too
<utlemming> yup... lp:s3aptmirror
<jiboumans_> thanks utlemming smoser, very helpful :)
<smoser> utlemming, could you open an RT about the apt mirror issue
<smoser> utlemming, oh, its the stale issue
#ubuntu-server 2012-04-04
<smoser> utlemming, could you push on that mirror update ? looks like they're stale again.
<tkeith> I apt-get installed python-django-doc, which automatically installed libpython2.6, python2.6, and python2.6-minimal. I then apt-get removed python-django-doc, and then ran apt-get autoremove. This removed libpython2.6, but not python2.6 or python2.6-minimal. Why aren't these other dependencies autoremoved?
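A likely explanation for tkeith's question: `apt-get autoremove` only removes packages marked as automatically installed, so if python2.6 ended up marked manual (or something else still depends on it), it stays behind. apt-mark can show and change that state; a sketch:

```
# is python2.6 considered automatically installed?
apt-mark showauto | grep python2.6

# if not, mark it auto so a later autoremove can take it
apt-mark auto python2.6 python2.6-minimal
```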
<rwilson> I wonder if google uses linux
<twb> rwilson: what do you think android and chromeos are based on
<uvirtbot> New bug: #880023 in juju "machine agent disconnects from zoopkeeper on heavy loads (dup-of: 846106)" [High,Confirmed] https://launchpad.net/bugs/880023
<rwilson> I was speaking in terms of servers
<rwilson> Yes they do http://en.wikipedia.org/wiki/Google_platform
<uvirtbot> New bug: #883988 in glance (main) "db migration failing when upgrading glance - trying to create existing tables" [High,Confirmed] https://launchpad.net/bugs/883988
<utlemming> smoser: ack
<hodgy> Is it possible to setup Ubuntu server to stream music and play it by SSHing into it through Putty and running cmus on it?
<hodgy> Say the music is stored on the server, and I ssh on it to it with Putty from a windows machine. I run cmus on the server in the putty window. And decide to play music. How can I have that music stream and come through the speakers on the windows machine?
<twb> Apart from cmus, that all sounds fine to me
<twb> What I used to do was simply run an httpd (e.g. thttpd or busybox httpd) on the music directory, and then make a little m3u linking to all the tracks
<twb> Then your client side just pulls them down magically as ordinary HTTP files (instead of a stream)
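twb's approach can be sketched as a small helper that walks the served music directory and writes an .m3u of plain HTTP URLs; `make_playlist`, the base URL, and the paths are all illustrative, and the httpd (thttpd, busybox httpd) simply serves the same directory:

```shell
#!/bin/sh
# make_playlist DIR BASE_URL OUT: write an m3u playlist whose entries are
# HTTP URLs for every .mp3 under DIR, relative to the served directory.
make_playlist() {
    dir=$1; base=$2; out=$3
    printf '#EXTM3U\n' > "$out"
    find "$dir" -name '*.mp3' | sort | while read -r f; do
        # the path relative to DIR becomes the URL path
        printf '%s/%s\n' "$base" "${f#"$dir"/}" >> "$out"
    done
}

# example: make_playlist /srv/music http://server:8000 /srv/music/all.m3u
```

The Windows client then just opens the .m3u in any player that speaks HTTP, which is the "ordinary HTTP files instead of a stream" trick twb describes.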
<hodgy> There is no way I can make cmus do this? It's my player of choice.
<twb> hodgy: I am simply not familiar with cmus
<hodgy> I happen to love it.
<twb> For all I know it works fine
<twb> cmus appears to be a unix audio *client*, i.e. it can't serve music streams to a Windows consumer
<hodgy> My idea is, music and software is on the server(no speakers/kb/screen), and the ssh session is from the laptop or desktop of choice, running cmus in the terminal window. Making audio play through the system that is remotely connecting
<hodgy> The Linux server would actually be running cmus... Only thing active on the connecting system is an SSH window
<twb> hodgy: are the speakers hooked up to the ubuntu server, or the windows desktop?
<hodgy> Like right now I am on my windows desktop, in putty running IRSSI through SSH on the linux box.
<twb> hodgy: where are the speakers
<hodgy> Connected to the windows system.
<twb> I don't think you can use cmus for this, then
<hodgy> What about mpd?
<twb> mpd and cmus both assume they run on the host where the speakers are
<twb> you need more like icecast or thttpd on the server and winamp on the desktop
<hodgy> I could try it I assume, transfer over one or two mp3's via ftp
<hodgy> I want to try and avoid client side music players if possible.
<hodgy> twb: what is DECnet?
<twb> where did that question come from?
<delinquentme> zookeeper?
<twb> delinquentme: no, that was where hodgy asked out of the blue about decnet
<twb> Presumably he's reading a REALLY silly howto blog or something
<delinquentme> twb, ? Im asking about its functionality
<twb> delinquentme: ok, carry on then
<delinquentme> O_o
<geek0091> I have achieved true geek, a server rack in my room.
<whoaski_> anybody awake?
<greppy> whoaski_: nope.
<whoaski_> thats what I thought
<whoaski_> hey greppy if you were deploying a production ecommerce site what would you use?
<whoaski_> right now I'm liking Ruby on Rails on ubuntu server but the rest is up in the air. I don't know what's best or fastest
<greppy> whoaski_: it would depend on lots of things.
<greppy> are you writing an ecommerce site? or just using something off the shelf?
<whoaski_> yes, I'm currently getting my ducks in a row to host it myself and all. kind of a project of mine
<whoaski_> yes writing it
<twb> That's probably not the best attitude for a production ecommerce site
<whoaski_> would you say that b/c of the risk involved or.... my experience level?
<twb> Because it's a hard problem and you're bound to fuck it up
<twb> Frankly, the whole problem domain makes me nauseous with its crawling salesmen and optimistic puppy developers
<whoaski_> I'm not trying to make millions I'm just trying to learn about the whole process
<twb> Well then it's for pedagogy, not production
<twb> And being a cowboy is acceptable
<twb> As soon as it has actual peoples' actual money in it, you'd better be sure you have your shit squared away
<whoaski_> even if I fuck it up,  karaoke bars still stay open even when people can't sing
<whoaski_> twb i understand you and what you're saying, it's just something I wanna do you know, after a few falls I might be good at it
<whoaski_> who knows
<whoaski_> I've looked at usps api's and I want to incorporate Arduinos to help automate fulfillment, and currently trying to learn everything I can about ubuntu servers and hardening them
<twb> fulfillment, eh?  This is some kind of humanist ecommerce site, I take it
<whoaski_> no more like a giant vending machine
<whoaski_> have you ever written your own kernel?
<whoaski_> yes I want to sell things but I want to get it from the shelf to a place where a mailman can pick it up automatically
<uksysadmin> morning all
<uksysadmin> can someone let the ubuntu.com team know the "download the final beta" link is pointing to the Beta1 page, not Beta2.
<whoaski_> but my question is what would you use for a secure webserver? I'm thinking ubuntu server, ruby on rails, apache2, mysql. and what do you think about nginx?
<twb> uksysadmin: #ubuntu-devel might be a better place to mention it
<twb> Make that #ubuntu-release
<jamespage> morning all
<uksysadmin> cheers twb
<lynxman> morning o/ morning jamespage
<mrintegr1ty> hi, can anyone tell me what the number prefixed to various config files represents (any in /etc/apt/apt.conf.d for example)
<xranby> jamespage: morning
<jamespage> morning xranby!
<mrintegr1ty> if it is a documented feature then i dont know how to find it!
<greppy> mrintegr1ty: it's to control the order that they are executed in most of the time.
<mrintegr1ty> greppy: but they are config files.. not executed
<greppy> ok, fine, "read"
<mrintegr1ty> hmm ok
<mrintegr1ty> so for /etc/apt/apt.conf.d it contains 01autoremove and 01ubuntu.. does that mean that it doesn't matter which is read first or something else?
<greppy> most likely, it will sort on the number first and then the alphas that follow.
<mrintegr1ty> something doesn't quite sound right about that.. in the absence of a different explanation i will go with what you say though :)
<mrintegr1ty> seems like an ultra basic dependency system
<greppy> this is a bad thing? :)
<greppy> why make it more complicated than it has to be?
<mrintegr1ty> no, guess not
<mrintegr1ty> should be documented somewhere though as it's not 100% obvious
<mrintegr1ty> i guess udev does it too..
<greppy> mrevell: man apt.conf
<mrevell> mrintegr1ty, ^^
<mrintegr1ty> greppy: hmm i was looking there.. must have missed it
<greppy> mrintegr1ty: bah, man apt.conf
<greppy> description, item 3
<greppy> dangit, item 2
<greppy> ( too much multitasking )
<mrintegr1ty> greppy: i don't have an ubuntu system with man pages installed so reading this: http://linux.die.net/man/5/apt.conf and I can't see it
<uksysadmin> hey all - anybody know if OpenStack Swift 1.4.8 is landing in 12.04? (ttx?)
<greppy> mrintegr1ty: http://manpages.ubuntu.com/manpages/natty/man5/apt.conf.5.html
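A quick demonstration of the ordering rule greppy describes: apt (like run-parts) reads conf.d fragments in plain lexical order, so the numeric prefix only exists to force that order, and two files sharing the same number (`01autoremove`, `01ubuntu`) are then ordered by the letters that follow. Sketched against a throwaway directory, not a real /etc/apt/apt.conf.d:

```shell
# Create fragments in random order, then list them the way a conf.d
# reader would: byte-wise lexical sort.
d=$(mktemp -d)
touch "$d/99synaptic" "$d/01ubuntu" "$d/70debconf" "$d/01autoremove"

# echo $(...) collapses the sorted listing onto one space-separated line.
order=$(cd "$d" && echo $(ls | LC_ALL=C sort))
echo "$order"
```

Whether two `01` files "conflict" doesn't matter much in practice: later fragments simply override earlier settings, so equal prefixes just mean neither author cared which wins.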
<ttx> uksysadmin: err, should already be in there
<ttx> uksysadmin: interesting
<ttx> uksysadmin: Daviey should know more about that.
<ttx> I no longer presides over the contents of Ubuntu Server :)
<uksysadmin> cheers ttx - just did an apt-get update and still on 1.4.7
<greppy> mrintegr1ty: is there a reason that you don't have man pages installed?  ( the irony of you complaining about a lack of documentation and not having man pages installed is... priceless )
<uksysadmin> I'll wait for Daviey to respond
<mrintegr1ty> greppy: fantastic, thanks
<mrintegr1ty> greppy: haha. i looked extensively through the online docs, i do have manpages installed just not on any of the ubuntu servers. locally on my fedora desktop (which wouldn't help here obviously)
<mrintegr1ty> greppy: reason being minimal virtual machines don't (usually) need them installed
<greppy> mrintegr1ty: ah, I normally install them for the occasion when I don't have access to anything else and only have the man pages to check things against.
<mrintegr1ty> also, i wasn't really complaining. i actually assumed that it was probably documented somewhere but i just didn't know what terms to search for
<greppy> they don't take up that much space.
<mrintegr1ty> true
<iclebyte> where can I find v2.5 debs of puppet?
<alamar> any ideas on what tool to use to generate "forged" ethernet packets in the shell (meaning i'd like to set src,dst,type etc. manually)
<jibel> jamespage, I'm re-running raid1 i386. It failed on grub.
<jamespage> jibel, oh - nice
<jamespage> amd64 was OK?
<jibel> jamespage, yes, same version of grub
<uvirtbot> New bug: #965410 in horizon (universe) "More explicit Apache config file name" [Low,Fix released] https://launchpad.net/bugs/965410
<dogmatic69> Hi all, I am looking for help changing my dns config permanently. editing /etc/resolv.conf doesn't stick as it is reset during startup.
<dogmatic69> using http://tinyurl.com/cafkwbm in the gui seems to work, but I need to do it on a server.
<dogmatic69> I cant seem to find what that is changing to replicate on the server. Any ideas? thanks
<rbasak> dogmatic69: are you using dhcp on the server?
<smb> dogmatic69, If you set static ip addresses in /etc/network/interfaces add dns-nameservers and dns-search to the interface stanza
<dogmatic69> rbasak: the router is doing dhcp and I am not able to change it.
<dogmatic69> smb: the thing is, that window I pasted does not do the config in /etc/network/interfaces
<smb> dogmatic69, You are using dhcp anyway. Maybe somewhere under /etc/resolvconf/resolv.conf.d/
<smb> or update.d
<dogmatic69> smb: I have gone through all the files in /etc/resolvconf and /etc/dhcp and /etc/dhcp3
<rbasak> dogmatic69: dhclient does it automatically.
<dogmatic69> cant find anything with the ip I added
<smb> rbasak, I think it needs to be different from what dhcp sets for some reason
<rbasak> dogmatic69: on a really old server (hardy I think) I can override DNS using a file that does new_domain_name_servers="$old_domain_name_servers" in /etc/dhcp3/dhclient-enter-hooks.d but I don't know if that's still the recommended way and can't remember where it's documented
<dogmatic69> smb: I set up a dev server with a dns, so I have *.dev for sites I build.
<dogmatic69> running my desktop through the server
<smb> I have not done that variation... so unlikely of much help... :/
<rbasak> dogmatic69: see /sbin/dhclient-script - looks like that's still current
<rbasak> dogmatic69: it allows for overrides
<dogmatic69> I will eventually get a better router and that will be easier. Stupid internet provider locks you to their routers
<rbasak> dogmatic69: that's the script that actually changes your resolv.conf (when not using resolvconf)
<dogmatic69> thanks for the help any how
<dogmatic69> better get back to actual work now
<rbasak> dogmatic69: how about a cron job to replace your /etc/resolv.conf every minute? :-P
<dogmatic69> rbasak: :/
<dogmatic69> I cant believe this is so unknown. the amount of blog posts that say edit /etc/resolv.conf (which works, till reboot) and then comments on the post with dodgy hacks like that :D
<smb> There is a lot of new thinking needed now, as having resolvconf manage /etc/resolv.conf is new
<rbasak> The simple reason is that DHCP isn't generally used on servers.
<rbasak> (or even if it is, DHCP servers are expected to serve the right thing)
<rbasak> And on the desktop, Network Manager allows you to set DNS overrides, and that covers most DHCP users.
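None of the suggestions above quite closes the loop for dogmatic69's DHCP-on-a-server case. For what it's worth, dhclient.conf has a `supersede` statement intended for exactly this; a sketch, with a placeholder address standing in for the local *.dev nameserver:

```
# /etc/dhcp/dhclient.conf (dhclient-enter-hooks.d is the heavier
# alternative rbasak mentions) -- override whatever nameservers the
# router's DHCP hands out. Untested sketch; 192.168.1.10 is a
# placeholder for the dev DNS server.
supersede domain-name-servers 192.168.1.10;
supersede domain-search "dev";
```

After editing, renewing the lease (or rebooting) should leave /etc/resolv.conf pointing at the overridden server, and it survives restarts because dhclient itself writes it.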
<zul> morning
<Adri2000> zul: hi. why didn't you merge my branch for the horizon bug fix? you included the fix, that's ok, but now I've got a branch + an open merge request lying around in LP. if you really merged the branch, I guess those would have been marked "merged" accordingly?
<zul> Adri2000: i did merge it its in the archive right now
<jamespage> zul: can you give me an opinion on https://bugs.launchpad.net/ubuntu/+source/dovecot/+bug/970782
<uvirtbot> Launchpad bug 970782 in dovecot "Please merge new upstream dovecot version 2.0.18-1" [Wishlist,New]
<jamespage> I've checked the merge and fixed up another issue that the upstream author pointed out to me - mail-stack-delivery works OK
<jamespage> but wanted a second opinion on the upstream stability of 2.0.x branch before I talk to the release team
<Adri2000> zul: why is https://code.launchpad.net/~adri2000/horizon/rename-apache-config-file/+merge/99574 still open then? I suppose if you did "bzr merge" it would have updated that page, etc.
<zul> Adri2000: i merged it against our ci branch, ill close it in a couple of minutes when im done
<Adri2000> ok. that's not really important anyway, just wondering why you did it that way
<zul> jamespage: are those the only fixes in 2.0.19?
<zul> jamespage: and you scanned the ml to make sure there are no regressions?
<zul> jamespage: im happy with 2.0.19 but i would talk to ivoks as well
<jamespage> zul, lemme take a scan of the ML
<jamespage> zul: nothing that I can see - I'd probably just push it up to 2.0.18 as thats been in Debian for a while now
<zul> jamespage: better safe than sorry
<jamespage> ivoks, if you are around would value your opinion on the above as well :-)
<ivoks> jamespage: sure.. let me see what's that we are talking about :)
<ivoks> jamespage: oh, dovecot
<ivoks> jamespage: lucid's dovecot is of 1.4.x series; meaning that 'mbox: Fixed accessing Dovecot v1.x mbox index files without errors' is a - must have?
<jamespage> ivoks, thats a good point
<ivoks> that changelog doesn't tell much
<ivoks> is that actually fixed, or was it never broken and just printing errors? :)
<ivoks> if it's just printing errors, while everything works, maybe we can stay with debian
<jamespage> ivoks, looking at the commit does not tell much more - but it does do an explicit upgrade step to sort out old indexes
<ivoks> sorry, i haven't looked at the code; give me a minute
<ivoks> jamespage: right; it looks like it does an upgrade of the mailbox
<ivoks> errr. index
<jamespage> agreed
<ivoks> so, that's a must have
<jamespage> ivoks, ack - building now (I'd only merged 2.0.18 so far)
<ivoks> it's also something that can be pushed later
<ivoks> or just patched to 2.0.18
<jamespage> ivoks, TBH the delta for 2.0.19 is quite small - I'd be more comfortable taking the whole release I think
<ivoks> jamespage: ack
<ivoks> i should find some time for that mail-stack stuff again :)
<jamespage> ivoks, its been a bit neglected for the last couple of releases TBH
<jamespage> I keep finding time about now in the cycle :-)
<jamespage> which is a little late
<ivoks> my plan is to work more on it in next releases
<jamespage> the drac plugin is completely borked in the current package (Timo pointed this out to me...)
<ivoks> too bad it's after lts
<ivoks> jamespage: and it supports only ipv6
<ivoks> typos, damn typos...
<ivoks> jamespage: ipv4 of course
<uvirtbot> New bug: #973377 in nova (main) "/var/lib/nova/instances/_base has wrong permissions" [Undecided,New] https://launchpad.net/bugs/973377
<ivoks> oh, that reminds me...
<ivoks> zul: around? when i install dashboard, it tries writing in /var/www/.novaclient/ as www-data user; is there a bug about that?
<zul> ivoks: yeah fixed in rc3
<ivoks> awesome
<zul> which i just uploaded like 2 minutes ago
<ivoks> zul: you rock :)
<zul> ivoks: thats novaclient doing it
<ivoks> oh, could be; i wasn't investigating
<kirkland> smoser: do you have time to review https://code.launchpad.net/~kermit666/ubuntu/precise/ssh-import-id/newline-fix/+merge/100000 ?
<Daviey> kirkland: you need to get the beers in for capturing review number 100000
<smoser> kirkland, if i were going to touch that code one more time, i think i'd just remove all the checking.
<smoser> you're trusting the other side of the https connection
<smoser> the sanity checking has only hurt us
<cr3> hi folks, I'm getting this error when running virsh on a configuration containing <os><type>hvm</type></os>: unknown OS type hvm
<jcastro> mrevell: hey can we talk maas docs real quick?
<mrevell> jcastro, Sure. What works for you. Hangout?
<jcastro> IRC is fine
<jcastro> I just need to know the canonical location of the docs
<jcastro> https://wiki.ubuntu.com/ServerTeam/MAAS
<jcastro> and there's also, http://people.canonical.com/~gavin/docs/lp:maas/install.html
<kirkland> smoser: yeah
<kirkland> smoser: my original implementation had no checking
<kirkland> smoser: and just hoped that you knew what you were doing when you put that key
<kirkland> smoser: I think jdstrand added most of that checking, IIRC
<mrevell> jcastro, The link on the Ubuntu wiki is what we're giving people. That'll always be either the docs themselves or have a link to them. Gavin's docs are generated from the restructured text in the MAAS branch. I'm going to pare back the docs in the branch and link out from them to the wiki. Within time we'll have a chapter in the server guide.
<jamespage> SpamapS, OK if I pull your proposed upstream patch for http://code.google.com/p/memcached/issues/detail?id=252 into memcached for precise?  looks like we hit this on i386 during the most recent rebuild test...
<mrevell> jcastro, So, ignore Gavin's docs unless you're interested in hacking MAAS.
<jcastro> mrevell: ok awesome, that's all I needed, thanks!
<mrevell> great :)
<jcastro> mrevell: ok so one thing we're trying to figure out
<jcastro> is where in the docs we tell people to test.
<jcastro> so is it like "follow these 2 pages, and then run the checkbox tests" or are the tests done along with the instructions step by step
<mrevell> jcastro, matsubara has been working with balloons to get some checkbox tests ready. matsubara is just at lunch. I'll get him to fill you in as soon as he's back.
<jcastro> ok awesome, I'll just chill  until he returns
<mrevell> cool
<SpamapS> jamespage: totally ok. I talked to dormando about it the other day and he said that instead of 5, we should just make it 20
<SpamapS> jamespage: I was looking at that FTBFS the other day actually, so glad you're picking it up. :)
<jamespage> SpamapS, it should probably re-check the assertion after the loop as well so if it really does fail we pick it up on that test - I'll revise the patch and attach to the upstream bugtracker
<SpamapS> jamespage: I talked to dormando about it.. the only reason I didn't merge it was that he was going to do exactly that, but he just hasn't gotten to it yet
<jamespage> SpamapS, ack - on it now
<jcastro> jamespage: what flavor of hadoop is in our charm/hadoop PPA?
<jamespage> jcastro, hadoop hadoop
<jcastro> Apache Hadoop then?
<jamespage> vanilla upstream 1.0.1 (will be 1.0.2 once it gets released)
<jamespage> jcastro, spot on
<jamespage> well almost vanilla - I had to pull in build system patches for precise and arm
<MagicFab> If you use Zabbix or will be using it in Ubuntu 12.04 LTS, please mark this as "affects me too": https://bugs.launchpad.net/ubuntu/+source/zabbix/+bug/972881
<uvirtbot> Launchpad bug 972881 in zabbix "Please update to 1.8.11" [Wishlist,Triaged]
<EvilResistance> aren't 12.04's repos under freeze?
<jamespage> MagicFab, EvilResistance: that does look like a bug fix only release so once its in Debian we should sync up - hopefully early next week looking at the DM's comment in the Debian bug report.
<EvilResistance> you assume it'll be uploaded to Debian in time
<EvilResistance> if its not in Debian by release, you'd have to have an update done which would dump it in precise-updates
<EvilResistance> then everyone would need precise-updates repo added who wants that update
<jamespage> EvilResistance, lets hope it is then :-)
<zul> jdstrand: fyi https://launchpadlibrarian.net/99952594/buildlog_ubuntu-precise-i386.keystone_2012.1~rc2-0ubuntu1_BUILDING.txt.gz
<jdstrand> zul: cool, thanks. am commenting in the bug on your latest
<zul> jdstrand: cool
<adam_g> jamespage: ping
<dork> anyone know if it's possible to use compiz over vnc and a xen domu?
<jdstrand> zul: commented
<zul> jdstrand: thanks
<Rapid2214> Hiya, does anyone have a working Bond (LACP) - Bridge with VLAN support?
<patdk-wk> Rapid2214, sure, many
<patdk-wk> but then, what do you mean by bridge?
<zul> adam_g: i updated the changelog to get it ready for tomorrow
<adam_g> zul: for what?
<zul> for release tomorrow
<adam_g> what pkg are you talking about?
<matsubara> jcastro, hola!
<matsubara> jcastro, mrevell asked me to join this channel as you're looking for me
<jcastro> hey
<jcastro> hey is there a maas channel or just this one?
<jcastro> matsubara: https://wiki.ubuntu.com/ServerTeam/MAAS/Testing
<jcastro> so I've been working on the wiki pages
<jcastro> and the one thing left to do is fill in "Testing"
<jcastro> which balloons will be filling out
<matsubara> jcastro, AFAICT, there's no #maas channel here on freenode
<jcastro> ok so this is the home channel then, good. :)
<matsubara> I think this is the right place to talk about maas stuff
<jcastro> matsubara: ok so I have an appointment so I have to step out for an hour, but my general idea is to build up this set of wiki pages
<jcastro> with step by step instructions on what we want people testing
<jcastro> and then we should be good to go
<MagicFab> EvilResistance, yes, but I'm hoping to get a sync anyways.
<matsubara> jcastro, how about the checkbox tests? would they help?
<matsubara> they're fairly complete now
<marcoceppi> Can I get some clarification on MAAS, the setup docs recommend having two servers for MAAS, is it that MAAS doesn't use virtualization and only uses the entire hardware or does it virtualize/create containers on the baremetal?
<matsubara> marcoceppi, if you're using real hardware, then you'd need at least two machines. one for the maas server and another for the node (and if you add juju to the picture you'd need at least 2 nodes + the server)
<matsubara> marcoceppi, what the setup guide doesn't cover yet is how to setup everything using virtual machines but I'm pretty sure there's some doc in the maas tree with that info
<zul> jdstrand: we already supply a keystone.8 btw
<marcoceppi> matsubara: So, in the case of Juju + maas, if I run a juju bootstrap, does that use an entire server for the bootstrap node?
<jdstrand> zul: yes, I was saying add the lack of SSL to it :)
<matsubara> another thing you can do is to install the maas server on the host and spin up a couple of virtual machines to act as the nodes. I'm trying that use case today and couldn't get it working just yet
<matsubara> marcoceppi, yes
<zul> jdstrand: ah ok
<marcoceppi> matsubara: Ah, okay. I've got some beefy hardware. Using one of these machines for a bootstrap doesn't make sense. But if it's possible to do with virtual machines that might be better. So, I know this is all still new, just fishing: in the event of using maas with virtual machines as nodes, each node would have to exist prior to it being used, correct? Or would maas be able to create these nodes? (doesn't seem like it would, given what I know it does now)
<hallyn> roaksoax: hey, would you mind adding guest uuid support to testdrive?
<matsubara> yes, you'd need to set up your virtual machine to boot from a precise-server iso and then choose the option to enlist into an existing MAAS
<matsubara> marcoceppi, ^
<marcoceppi> matsubara: cool, I look forward to playing with this more tonight
<zul> jdstrand: done
<matsubara> marcoceppi, cool! if you have any trouble setting it up, feel free to ping me
<marcoceppi> will do, is there a preferred vm software to use?
<claude2_> anyone here having issues with the megaraid_sas driver for a perc 5i with ubuntu 11.10?
<marcoceppi> I guess, virtualizing method
<matsubara> marcoceppi, I couldn't get enlistment working on my local machine but it might be a problem with my laptop. I'm debugging that
<claude2_> the raid keeps coming up read only
<hallyn> roaksoax: there's been a request to do it in kvm directly, but while "nothing is impossible" there is no good clean place to do that there.
<matsubara> marcoceppi, I use virtual box
<roaksoax> hallyn: sure, could you please file a bug with the details?
<hallyn> roaksoax: I'd mark bug 959308 as affecting testdrive
<uvirtbot> Launchpad bug 959308 in qemu-kvm "kvm does not generate a system uuid by default" [High,Confirmed] https://launchpad.net/bugs/959308
<matsubara> marcoceppi, but that's because I already had it set up. I think you can use any virtualization method you prefer as long as a) it can boot from the cd b) the guest can reach the host through the network
<roaksoax> hallyn: cool thanks
<hallyn> roaksoax: I don't know if it makes sense to specify a uuid in testdriverc, but it's also been suggested on qemu m-l that you could use the fs uuid of the root fs
<marcoceppi> matsubara: I've got a spare proliant dl380 lying around, I may try to setup a few Xen machines. Would it be preferable for the maas server to be on the same server, or would it be fine to run it from say my desktop or laptop?
<marcoceppi> Then treat each virtual machine on the 380 as a node
<roaksoax> hallyn: cool, i'll look into it as soon as I have some free time
<hallyn> roaksoax: thanks!
<matsubara> marcoceppi, it should be fine to run on your desktop/laptop as long as they're in the same network
<marcoceppi> matsubara: awesome, thanks. I'll play around this this more when I get home
<matsubara> marcoceppi, ok. let me know how it goes. if i'm not around email the list ( maas-devel@lists.launchpad.net) and someone might be able to help you
<matsubara> marcoceppi, and thanks for helping testing this!
<marcoceppi> matsubara: will do!
<jamespage> adam_g, pong
<adam_g> jamespage: hey about squid
<adam_g> missed you yesterday
<amarcolino> I have added my user to the www-data group and made /www writable to users within that group, however, I would like to know how I can make it so that files and folders I create get owner and group www-data instead of my user?
<jamespage> adam_g, ah yes we did
<jamespage> so what's your opinion on how to resolve this?
<jamespage> I think we should not be aiming to transform the configuration - it risks too much in terms of edge cases and exposing things accidentally.
<adam_g> jamespage: i agree. i think the only thing we can really do is warn very loudly if we find that they are using a custom squid(v2).conf
<jamespage> I'd rather place the users existing squid config somewhere accessible and let them know that they need to rationalise that into a squid 3 config
<adam_g> jamespage: perhaps give an option to move it to the squid3 location with a warning that it may not parse or produce the same service
<jamespage> adam_g: maybe
<adam_g> jamespage: squid.conf (in both v2 and v3) is generated at build time, i went back to lucid and got the hashes for squid(v2): http://paste.ubuntu.com/915009/
<jamespage> adam_g, I'll give it a bit of thinking over the next few days
<jamespage> anyway have to dash now - later...
<uvirtbot> New bug: #973663 in quota (main) "quota returns nothing on 12.04 " [Undecided,New] https://launchpad.net/bugs/973663
<adam_g> jamespage: k
<thys> hi
<thys> where do I configure the  php limit in ubuntu?
<thys> ubuntu server
<thys> php memory limit
<amarcolino> Hi I have added my user to the www-data group and made /www writable to users within that group, however, I would like to know how I can make it so that files and folders I create get owner and group www-data instead of my user?
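This question goes unanswered in the channel; the usual mechanism is the setgid bit on the directory, which makes files created inside it inherit the directory's group rather than the creator's primary group. Demonstrated here on a throwaway directory; on the real tree it would be something like `sudo chgrp -R www-data /www && sudo chmod -R g+s /www` (the asker's path and group).

```shell
# setgid on a directory: the "2" in 2775. New files created inside
# inherit the directory's group automatically.
d=$(mktemp -d)
chmod 2775 "$d"            # rwxrwsr-x -- note the "s" in the group triad
touch "$d/newfile"         # would now get the directory's group
mode=$(stat -c '%A' "$d")
echo "$mode"
```

(Ownership, as opposed to group, cannot be inherited this way; only root can chown files to another user.)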
<amarcolino> thys,  /etc/php5/apache2/php.ini
<thys> I use lighttpd still the same place?
<amarcolino> I actually wouldn't know
<amarcolino> a quick google search gave this /etc/php5/cgi/php.ini
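Whichever SAPI's php.ini it turns out to be (apache2, cgi or cli under /etc/php5/), the edit itself is one line; sketched here against a throwaway copy rather than the real file, with made-up before/after values:

```shell
# Stand-in for /etc/php5/cgi/php.ini (lighttpd typically runs PHP via
# the cgi/fastcgi SAPI, so that is the likely file).
ini=$(mktemp)
printf 'memory_limit = 128M\n' > "$ini"

# Bump the limit in place.
sed -i 's/^memory_limit = .*/memory_limit = 256M/' "$ini"
limit=$(grep '^memory_limit' "$ini")
echo "$limit"
```

The web server (or the fastcgi backend, for lighttpd) has to be restarted before the new limit takes effect.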
<Zaitzev> hello
<Zaitzev> I just installed proftpd-basic and wonder what I can do about users, shares and whatnot
<Zaitzev> anyone here that can help me out a little?
<jcastro> matsubara: ok what do you think of this:
<jcastro> https://wiki.ubuntu.com/ServerTeam/MAAS/Testing
<matsubara> jcastro, it looks great to me. I'm really glad we won't have to duplicate the checkbox test in the wiki :-)
<jcastro> is there a summary page with all the maas testing information?
<jcastro> or is that on each person's hwdb-submissions launchpad page?
 * robbiew just did a header cleanup of MAAS pages...fyi
<jcastro> saw that, ta
<jcastro> matsubara: what's the bare minimum # of machines to test maas?
<jcastro> I'm thinking 3 right? one for maas and then 2 nodes for juju?
<matsubara> jcastro, yep, for maas only, 2 is enough, so you can test that one node enlists into the server
<matsubara> but if you want to use juju, then you'd need 3
<matsubara> one for the server, and one for the bootstrap env and another for the service you're deploying
<matsubara> jcastro, I don't think there's any other page summarising all the maas testing info. I think we should use https://wiki.ubuntu.com/ServerTeam/MAAS/Testing as it looks good to me
<jcastro> ok, doing the call for testers now then
<matsubara> and we can improve from there, so let's make that the official page from now on :-)
<jcastro> rock
<matsubara> jcastro, great! thanks a lot for helping with this
<matsubara> jcastro, also, you've probably seen it, but Nicholas blogged about it here: http://www.theorangenotebook.com/2012/04/testing-maas-no-mas-no-poco.html and the blog post has all the contact info in case people get blocked
<jcastro> I have 3 people already
<jcastro> but with real hardware they can't just up and test right away, so it gave me some time to fix up the docs
<jcastro> matsubara: yeah I am reblogging what he posted since he's not on planet yet.
<matsubara> cool
<matsubara> jcastro, and feel free to tell people to ping me here or email me if they need help. I'll help as much as I can, and if I don't know I'll find someone who can help
<jcastro> oh don't worry I got one guy who was so lost in the docs I am sure he'll have an earful for ya. :)
<jcastro> though I do like how in one day we're already better than the orchestra docs ever were. *snicker*
<matsubara> :)
<jcastro> matsubara: hey so, I promise this isn't a trick question. But did we forget to announce MAAS on the server list?
<matsubara> jcastro, don't know. mrevell will be able to help you with that as he's doing the announcement
<sako> how can i check which software raid is configured?
<jMCg> Hey folks o/~
<jMCg> since I enabled ufw, I see tons of log lines in syslog which look like this:
<jMCg> Apr  4 21:43:09 monitoring kernel: [1539121.328202] [UFW BLOCK] IN=eth0 OUT= MAC=52:54:00:9f:f6:ef:fe:54:00:9f:f6:ef:08:00 SRC=176.9.39.37 DST=176.9.55.236 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=48470 DF PROTO=TCP SPT=879 DPT=40222 WINDOW=14600 RES=0x00 SYN URGP=0
<jMCg> Yeah. Someone scanning for an exploit.. in.. exchange?
<jMCg> Or something.
<jMCg> Anyway. I think I should like to reduce the noise.
<jMCg> Looks like I'll have to disable logging, because it already *is* on low
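Besides turning logging off entirely (`ufw logging off`), Ubuntu routes these kernel messages through rsyslog, so a drop rule can keep them out of syslog while ufw itself stays as-is. An untested sketch using the old-style rsyslog discard action:

```
# /etc/rsyslog.d/20-ufw.conf -- discard UFW block messages before they
# reach /var/log/syslog. Sketch only: check the file Ubuntu already
# ships under /etc/rsyslog.d/ before adding a rule like this.
:msg, contains, "[UFW BLOCK]" ~
```

Restart rsyslog afterwards for the rule to take effect.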
<jMCg> I see I'm talking to myself again.
<jbicha> hi, this is awfully late in the release cycle but would updating libvirt to 0.9.11 be considered?
<jbicha> Debian just uploaded it to unstable today
<jbicha> I was asking because gnome-boxes needs libvirt-glib 0.0.5 which needs libvirt 0.9.10
<hallyn> zul: ^
<hallyn> Daviey: ^
<zul> ummm.....i would say no
<hallyn> jbicha: we had a merge of 0.9.9 from unstable, but nacked it bc we had no compelling need
<zul> what we are like a month a way from release arent we?
<hallyn> less
<zul> so yeah...nack from me
<jbicha> ok, if it's a no, I think I'll backport the quirky libvirt (when it's ready) to the GNOME3 PPA then
<jbicha> thanks, I was assuming it wouldn't happen but thought it was at least worth making sure! :)
#ubuntu-server 2012-04-05
<adam_g> win 17
<stgraber> hallyn: would you consider adding shutdown.conf to lxcguest in the ppa for lucid?
<Zac_o_O> hello!  The drives on my server spin up every time a windows PC on the network is woken up from sleep.  This PC backs up over SMB.  How can I make this not happen?
<Zac_o_O> It's driving me crazy!
<twb> not spinning them down would be one way
<twb> Making sure the windows PC is speaking DNS to the DNS resolver, instead of speaking NetBIOS to the samba server, might help.
<twb> It depends why it's talking to the server on resume
<Zac_o_O> how would i know if it's speaking DNS?
<twb> you go into its registry somewhere
<twb> IIRC DNS is the default since about vista
<twb> Prior to that it would prefer netbios
<twb> ##windows probably can help with that
<twb> And/or do packet sniffing
<Zac_o_O> ah.  ok.  is there anything I can do on the server side?
<twb> 14:21 <twb> And/or do packet sniffing
<Zac_o_O> hm.  so no change in a smb config file eh?
<uvirtbot> New bug: #973953 in nova (main) "euca-describe-instance returns error VolumeNotFound" [Undecided,New] https://launchpad.net/bugs/973953
<koolhead17> morning all
<Zac_o_O> morning?  late here :)
<koolhead17> Zac_o_O: i am in India :P
<lynxman> morning o/
<jasonmsp> hey all.  I've installed munin and nagios on our server. It's been great.  But I would like to see which programs are contributing most to the CPU%.  Is there a way to do this with, or without, either of these?
<lynxman> jasonmsp: I reckon something like this might be useful http://serverfault.com/questions/185804/monitor-and-graph-cpu-usage-per-process-and-per-thread
<lynxman> jasonmsp: unfortunately there's no such default plugin for munin, so as it says on the post, enabling process accounting then writing a very quick munin plugin would be the most convenient option
<jasonmsp> lynxman: thanks
<lynxman> jasonmsp: my pleasure :)
<Oxygen02> hello everyone. Running ubuntu-server 12.04LTS beta. I have to add a repo to sources.list: deb http://repo.percona.com/apt lenny main. is it lenny or?...
<Oxygen02> cat /etc/debian_version shows me something different.
<Oxygen02> it shows wheezy/sid. What should I add as the repo? anyone?
<__sjh> lo all, just tried in #ubuntu to no avail, it's kinda noisy, figured i'd try here too:
<__sjh> using 10.04, trying to change locale. i followed https://help.ubuntu.com/community/Locale, but after running update-locale lang=en_GB.UTF-8 and relogging, running locale now shows LANG= ... and nothing else. any ideas? ... just noticed i gave en_GB.UTF-8 to locale-gen, but locale -a reports en_GB.utf8 - any significance to that?
<__sjh> if anyone is listening: if i log in as root i get the locale i want, but as a normal user i don't ... any ideas how that could happen, following on from the previous comment?
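One plausible culprit (an educated guess, not something confirmed in the channel) is the lowercase `lang=` in the command above: update-locale writes the assignments it is given to /etc/default/locale, so `lang=` defines a useless lowercase variable instead of setting LANG; root may look fine simply because its environment picks the locale up elsewhere. The uppercase form:

```shell
# Generate the locale, then set LANG with the variable name uppercased.
sudo locale-gen en_GB.UTF-8
sudo update-locale LANG=en_GB.UTF-8
# Check what actually got written:
cat /etc/default/locale
# Note: "locale -a" normalises codeset names, so it lists en_GB.utf8;
# that is the same locale as en_GB.UTF-8, not a mismatch.
```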
<Vivek> nick58b: Are you around ?
<Vivek> nick58b: Please do check your pm.
<uvirtbot> New bug: #974172 in lxc (universe) "lxc-ls show duplicated names" [Undecided,New] https://launchpad.net/bugs/974172
<Rapid2214> Hi, does anyone have a HP NC375T PCI Express Quad Port Gigabit Server Adapter working in a bond > bridge ? My bond works with my internal NICs but a bond on this card fails - I have seen bug reports but cannot find drivers to replace it
<patdk-wk> Rapid2214, should work fine, you have more details?
<Rapid2214> Its using the netxen_nic - Im using bond > Bridge with VLANS - The bond I have works with my VMs if I am not using this NIC, thanks :)
<Rapid2214> Onboard drivers are using the bnx2 drivers
<Rapid2214> From mii-tool: SIOCGMIIPHY on 'eth5' failed: Operation not supported
<hallyn> stgraber: sure (shutdown.conf in lucid lxcguest) - note lxcguest is purely in ppa in lucid, btw
<hallyn> hm, my backgrounds have gone away and been replaced by way too bright ones.  So I'm back to a black background.  Old times!
<hallyn> stgraber: ok i'll go ahead and push shutdown.conf to lxcguest in lucid today.
<hallyn> it's definitely not going into upstart there?
<stgraber> hallyn: thanks
<stgraber> hallyn: no, it's a new "feature" so won't ever get into an update
<hallyn> is it?  i would consider it a bugfix :)
<stgraber> hehe, I guess you could argue that ;)
<stgraber> if you want to be safe, rename it to lxc-shutdown.conf to avoid potential conflicts
<hallyn> good point
<uvirtbot> New bug: #974256 in nova (main) "Exception during the installation of Nova Essex-rc4 on Ubuntu 12.04" [Undecided,New] https://launchpad.net/bugs/974256
<jcastro> matsubara: any testing feedback from people? I have one
<matsubara> jcastro, nope
<hallyn> stgraber: pushed
<jdstrand> SpamapS: hi! what are your plans with mysql-5.1 in precise? it is currently in universe and trails behind oneiric. I thought I remembered reading it was going to be removed
<smoser> rbasak, around ?
<rbasak> smoser: yes
<smoser> you think you might have time to improve the fix for bug 971820 ?
<uvirtbot> Launchpad bug 971820 in squid-deb-proxy "squid-deb-proxy needs special handling of Release, Packages, Source" [Undecided,In progress] https://launchpad.net/bugs/971820
<smoser> i'm concerned that what we have there does not really fix the development release issue.
<rbasak> I'd love to but I need to better understand how it's broken first
<smoser> so you think that the change you put in makes squid check the source headers versus what it has cached, right?
<smoser> what headers explicitly does it check ?
<rbasak> I don't know - squid calls it a "refresh" and supposedly does the right thing
<rbasak> I can investigate that
<smoser> well, whatever it does, i had a case where the server had new stuff, but squid was caching old stuff.
<smoser> (and the old stuff was bad)
<rbasak> what were the timestamps?
<smoser> one possibility is that the server is busted. that it does not correctly update those timestamps.
<smoser> yeah... it sucks.
<smoser> i'm a bad bug report
<smoser> i know
<rbasak> As in: had the server's new stuff really gone up in timestamp, and was the previous fetch at a previous timestamp?
<rbasak> I've got one thought about this
<rbasak> I could run a cron job
<smoser> i stopped trying to find the correct fix after losing 2 hours on it. and realizing that orchestra had done the deny business.
<smoser> the one thought i had is to just hard code the development release to deny
<smoser> but let others go to cache
<rbasak> That hits the archive with squid and without squid (with current squid-deb-proxy) and logs the timestamps and md5s
<smoser> oh, and always deny -updates and -security
<smoser> right. to catch it.
<smoser> that would be good.
<rbasak> yep
<rbasak> then we'd also hopefully have the timestamp information which should help pin it down
<smoser> but the hack of "if devel release, or -updates or -security or -backports, then deny"
<SpamapS> jdstrand: yes, it should be removed. I believe the rdeps are all gone.
<smoser> would cache the largest hunk of data (as the main repo is the biggest by far)
<SpamapS> jdstrand: re mysql 5.1
<rbasak> does the dev archive tend to move as fast over the weekend as it does during the day?
<jdstrand> SpamapS: would you mind double checking that, and filing a bug for its removal?
<smoser> rbasak, just make sure you store the headers that you get back in both requests. and i think you'll get enough info to catch it.
<smoser> oh, and you might as well pin your server too
<jdstrand> SpamapS: feel free to assign it to me
<smoser> ie, avoid the round-robin failures.
<SpamapS> jdstrand: looks like libmyodbc and sphinxsearch are the last hold-outs.. will investigate today
<jdstrand> SpamapS: awesome, thanks!
<rbasak> smoser: I'm going to leave archive.u.c unpinned. I'll test ports.u.c in parallel which is effectively pinned because there is only one. That gives us both because I want to understand how well it works unpinned as well. I will only be downloading the files - not giving them to apt - so the Release/Packages mismatch won't matter.
<rbasak> smoser: do you think I need to download the whole file for the non-cached version, or will a direct HEAD request do? I think we can trust the ETag, can't we?
<smoser> but ports.u.c gets updated less frequently i suspect.
<smoser> ping the host
<smoser> you'll get random failures we're not trying to address if you do not.
<smoser> rbasak, i don't know. i think you might need to download the file
<smoser> to verify that it is different.
<smoser> i guess the headers should tell you that.
<smoser> but the only way i can explain the issue is bad headers.
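rbasak's HEAD-request idea can be sketched with curl; the URL is just an example path on the Ubuntu archive:

```shell
# A HEAD request (-I) returns the headers squid would revalidate
# against -- Last-Modified and ETag -- without downloading the body.
curl -sI http://archive.ubuntu.com/ubuntu/dists/precise/Release \
  | grep -iE '^(last-modified|etag|content-length):'
```

As smoser notes, though, if the origin serves stale or inconsistent headers, comparing them is only as trustworthy as the mirror itself.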
<rbasak> smoser: what makes you think it was a squid issue rather than a mirror out of sync with itself?
<smoser> because if i used check-archive at the same time, it was fine
<smoser> and, actually... i tried to get check-archive to fail going through squid, but could not.
<smoser> the reason for that is the way that check-archive works. it resolves the IP address and sets 'Host: ' header.
<smoser> so squid didn't then recognize that 'http://archive.u.c' was the same as what check-archive was asking for.
<smoser> and check-archive got a refreshed version.
<smoser> i debugged that a bit, but in squids logs, check-archive's hits were logged by server IP address, where the 'apt-get update' was getting dns name.
<jcastro> robbiew: best MAAS comment so far: http://www.reddit.com/r/Ubuntu/comments/rufub/metal_as_a_service_canonical_announces_ubuntu/
<chmac> I'm seeing incredibly slow disk performance on one of our servers hosted with OVH.
<robbiew> jcastro: heh
<chmac> I'm not sure how to investigate. Looking at the output of `sudo iostat -k 15 12` shows disk writes about 500K/s.
<chmac> Any suggestions on how I can debug what's causing the slow disk access?
<chmac> cpu utilisation looks like 25% iowait, 75% idle
<Rapid2214> chmac: Is it a VM? or dedicated?
<chmac> Rapid2214: Dedicated hardware, two SATA2 2T disks running software raid
<Rapid2214> Have you checked for a degraded disk? /proc/mdstat ?
<chmac> Rapid2214: Not sure how to read the output, nothing screams error to me...
<chmac> I sent the smartctl output of each disk to the host, but waiting for a response, also not sure how to read that.
<Rapid2214> chmac: [UU] means both disks are good
<chmac> Rapid2214: Both disks are good
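For reference, the [UU] notation Rapid2214 mentions comes from /proc/mdstat; the sample below is illustrative output for a two-disk RAID1, not taken from chmac's machine:

```shell
cat /proc/mdstat
# md2 : active raid1 sda3[0] sdb3[1]
#       1936094848 blocks [2/2] [UU]    <- both members present: healthy
#
# a degraded mirror would instead show [2/1] [U_], the underscore
# marking the missing or failed member
```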
<Rapid2214> anything in dmesg?
<chmac> Rapid2214: Anything I could grep for? Not familiar with the output, so not sure what to look for...
<chmac> Rapid2214: Thanks a lot for your help btw, I'm at a bit of a loss on this one.
<Rapid2214> chmac: Would be obvious if anything was wrong - constant messages are a tell-tale sign. any processes in particular using high wait? Is it a new server, or has it been in use a while? No probs :)
<chmac> Rapid2214: It's a reasonably new server, I think we've had the issue since it was brought online.
<chmac> Rapid2214: dmesg looks good to me, `tail -f /var/log/dmesg` doesn't show anything new being added.
<chmac> Rapid2214: The issue seems to be primarily with MySQL because it's the biggest user of disks.
<chmac> Rapid2214: I had thought to try some kind of test copying data between partitions just to see what's going on.
<Rapid2214> chmac: I have a MySQL server on software raid, personally i wouldn't recommend it, but 20% isn't terrible for that application tbf - What does # hdparm -tT /dev/<your-main-disk> return?
<chmac> Rapid2214: Ok, I think I'm onto a red herring, I just copied a 2G file from one partition to another and saw one report of 40M/s to md2
<chmac> Seems like it's an issue with MySQL somehow then, that's what got me started on this, mysql is abysmally slow.
<chmac> Although, a 2G file takes longer on this machine than on another single disk machine of a much lower spec (ram/cpu).
<Rapid2214> chmac: Humm yeh, thats the issue I seem to have, its a backup server so it's not so important - good luck
<chmac> Rapid2214: Same here, it's also a backup machine.
<chmac> Rapid2214: You have the same issue where mysql is abysmally slow on software raid?
<Rapid2214> chmac: Its noticeably slower with high waits, Im on software raid, but have put it down to old SCSI disks - I'm gonna be virtualising it soon
<chmac> Rapid2214: It took me >1hr to import a 1.7M file that takes <2 minutes on our primary server, and 8m on a crappy £17 a month dedicated box.
<chmac> Rapid2214: The mysql slave is 44h behind master :-(
<Rapid2214> chmac: That is impressive, have you been using mk toolkit?
<chmac> Rapid2214: I'd like to install mk-heartbeat, but wasn't aware of the toolkit
<Rapid2214> chmac: Its quite good, it allows checksum checks, and sync to fix replication issues
<PhotoJim> normally Ubuntu does that shallow reboot that lets it do a kernel reload on server edition.  but how do you force a proper full reboot, if desired?
<mgw> for use in d-i partman, how can I determine the partition id (for a raid recipe)?
<smoser> PhotoJim, i'd look in the kexec-tools man pages
<mgw> e.g., in this example it uses sdX1, sdX5, etc
<mgw> http://d-i.alioth.debian.org/manual/en.amd64/apbs04.html
<smoser> PhotoJim, 'coldreboot'
<smoser> holy crap.
<smoser> and i just looked at the implementation of that.
<smoser> it looks like it can easily be changed by /etc/default/kexec, but as it is
<smoser> NOKEXECFILE=/tmp/no-kexec-reboot
<smoser> any user can touch that file, and next reboot will not be kexec.
<PhotoJim> smoser: thanks much.
<PhotoJim> k
<smoser> PhotoJim, but the right way is to use 'coldreboot'
<PhotoJim> smoser: greatly appreciated.
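The behaviour smoser describes can be sketched in two commands (the sentinel path is the one he quotes from /etc/default/kexec of that era; verify against your installed version):

```shell
# Any local user can force the next reboot to be a full cold reboot,
# because the sentinel file lives in world-writable /tmp:
touch /tmp/no-kexec-reboot

# The supported way to do the same thing:
sudo coldreboot
```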
<mgw> partman-auto-raid: mdadm: device /dev/sda5 not suitable for any style of array
<mgw> any ideas what exactly that means? Here is my preseed:
<mgw> https://gist.github.com/d3be3b4e2b5a9ec9221a
<Rapid2214> mgw: Has the superblock been cleared?
<mgw> How would I know?
<mgw> I don't see anything about superblocks in rsyslog
<Rapid2214> Oh, i don't know that program, but i would imagine it could be something like that
<Rapid2214> mgw: Does the program run inside of a running ubuntu box?
<mgw> no, it's a preseed
<mgw> for the installer
<mgw> there's no OS on the system at all
<Rapid2214> Oh ok, do you get any other errors before that one?
<hallyn> smb: sorry... to be clear, I should add https://launchpadlibrarian.net/99981918/no-vfb-for-hvm.debdiff  to my next libvirt push?
<smb> hallyn, Yes, sorry for the confusion. We now agree to agree.
<hallyn> ok
<zul> hallyn:  404 on the libvirt-rbd debdiff
<hallyn> hm
<hallyn> zul: retry?
<zul> hallyn: got it now
<hallyn> zul: weird thing is it had a one-letter typo, but it was all done with cut/paste!
<hallyn> perplexing
<zul> hallyn: doesnt the ceph stuff need to be enabled in libvirt?
<hallyn> <shrug>
<hallyn> i don't get why they want it
<hallyn> maybe they just want to have a ppa with it enabled but with exact same source as the archive version?
<hallyn> or maybe it does make a difference
<hallyn> they said it makes their testsuite pass
<zul> o...k
<hallyn> ack
<zul> well if it doesnt mess with anything sure why not
<hallyn> right, it takes a few hours away from me but other than that is fine
<hallyn> (i'm using the rig on which i was going to start the kvm perf tests, to run regression test)
<hallyn> s/rig/wussie laptop/
<zul> heh...im ok with it check with daviey
<hallyn> yup, i did
<hallyn> thx
<zul> SpamapS: around?
<zul> SpamapS: unping
<SpamapS> zul: mooping
<zul> SpamapS: i think i have a juju problem
<zul> SpamapS: i have an ec2 keypair that i use for instances on openstack that doesn't seem to be used when i do juju status
<esuave> is there a way i can lock the crontab while im editing, so nobody else can edit the same crontab and overwrite my changes?
<SpamapS> zul: how did you tell juju about them?
<SpamapS> zul: and, maybe -> #juju ?
<zul> hold on
<hallyn> esuave: i thought crontab in olden days used to flock...  i suppose you could wrap crontab in a script that flocks a file to make that happen
<esuave> hallyn: yeah.. ill have to check it out
<esuave> hallyn: running 8.04
<hallyn> esuave: it doesn't do it in 12.04 either.  by olden days i was thinking sunos 4.x
<esuave> ahh.. yeah thats a pisser.. cause while im editing.. someone else can edit the same crontab and overwrite my changes if they finish after me
<stgraber> hallyn: I updated https://wiki.ubuntu.com/UDS-Q/Plenaries with details on the LXC plenary, I'll probably poke you to review the details when I start preparing it
<hallyn> lxc plenery?
<uvirtbot> New bug: #974450 in irqbalance (main) "irqbalance classifies network interfaces with custom/renamed interfaces as class other" [Undecided,New] https://launchpad.net/bugs/974450
<hallyn> stgraber: cool, thanks :)
<esuave> ok so im trying to create an alias.. is there a way i can do an IF this file exists.. execute this command?
<esuave> like alias='if /home/test.txt exists; execute crontab -e'
<esuave> i just don't know the syntax lol
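The syntax esuave is after: `[ -e file ]` tests existence and `&&` chains the command. The path and alias name come from his example; the mktemp demo just shows the construct firing:

```shell
# alias form (single-quoted so the test runs when the alias is used):
alias ct='[ -e /home/test.txt ] && crontab -e'

# the same test, demonstrated on a file that certainly exists:
tmp=$(mktemp)
[ -e "$tmp" ] && echo "file exists"
rm -f "$tmp"
```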
<MarKsaitis> so can somebody help me? Or nobody knows a solution? I have vnc4server (tried tightvncserver too) installed, using hidden virtual desktops (the default). When I click logoff in my session, IT DOESNT DO A DAMN THING, DOESNT LOGOFF! FTW? Ubuntu 11.04 LXDE!!!
<MarKsaitis> I bet nobody knows, as usual
<patdk-wk> hmm, this is the server channel, we don't deal in gui
<hallyn> i wonder...
<hallyn> !gui
<ubottu> The graphical user interface (GUI) in Ubuntu is composed of many elements, including the !X server, a window manager, and a desktop environment such as !GNOME or !KDE (which themselves use the !GTK and !Qt toolkits respectively)
<hallyn> not *particularly* helpful
<hallyn> looks like #lxde is on irc.oftc.net
<zul> utlemming: ping http://paste.ubuntu.com/916324/ whats nov.ec2?
<utlemming> zul: I have no idea...what's your cloud-config look like?
<zul> /etc/cloud/cloud.cfg?
<utlemming> zul: /var/lib/cloud/instance/cloud-config.txt
<zul> utlemming: http://paste.ubuntu.com/916337/
<zul> utlemming: http://paste.ubuntu.com/916324/ is the console-log, this is running on openstack as well
<utlemming> zul: can you get me the output of "ec2metadata" (scrub appropriately)
<uvirtbot> New bug: #912267 in python-keystoneclient (universe) "[MIR] python-keystoneclient" [Undecided,Fix released] https://launchpad.net/bugs/912267
<zul> utlemming: http://paste.ubuntu.com/916349/
<utlemming> zul: at face value, this is a bug in cloud-init
<zul> utlemming: yeah
<zul> utlemming: http://paste.ubuntu.com/916324/
<zul> doh..
<zul> utlemming: https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/974509
<uvirtbot> Launchpad bug 974509 in cloud-init "Issues resolving ubuntu archives." [Undecided,New]
<hallyn> jdstrand: in http://people.canonical.com/~jamie/libvirt/qatest.tar.bz2 for qa-regression-testing, the qatest.xml really shouldn't have <emulator>/usr/bin/qemu</emulator>
<smoser> zul, use opendns or google dns
<smoser> 8.8.8.8
<zul> smoser: yeah
<smoser> i did know of this potential issue when i did this.
<smoser> dns redirectors suck
<smoser> oh. shoot. you're not even hitting the bug i programmed for.
<smoser> you can pass cloud-config to specify apt_mirror and it wont do that.
<smoser> or you can fix your dns.
<uvirtbot> New bug: #974509 in cloud-init (main) "Issues resolving ubuntu archives." [Undecided,New] https://launchpad.net/bugs/974509
<utlemming> smoser: he's canadian
<utlemming> and that means he has rogers isp, which is known for crappy DNS games
<smoser> as are others.
<smoser> but you can set what you use.
<smoser> 8.8.8.8 will not give you such bogus stuff
<smoser> neither will opendns
<utlemming> opendns does now...I stopped using them becuase of that
<utlemming> http://paste.ubuntu.com/916401/
<jdstrand> hallyn: re the emulator line> how come?
<hallyn> jdstrand: bc /usr/bin/qemu doesn't exist any more
<hallyn> though maybe we have to explicitly use qemu-system-i386
<jdstrand> test_emulators() in test-libvirt.py will need to be adjusted as well
<hallyn> drat
<hallyn> i don't suppose that could be related to test_guest_aa_attach_detach_physical hanging
<jdstrand> I would expect massive failures if the emulator was wrong
<jdstrand> hallyn: this is going to have to be changed in a way that is compatible going to lucid. one way to do that would be to conditionally update the xml on 12.04 and higher
<peter_> hi all.. i got a bunch of servers running ubuntu all giving disk speeds of 30MB/s, then i noticed that installs with my newer template give 130MB/s.. just can't seem to find the reason behind this. all are running the same ubuntu server version+kernel. must be some module or config? they are xenserver virtualized. any pointers/help appreciated
<jdstrand> hallyn: that could be done in LibvirtTestCommon::_setUp()
<hallyn> jdstrand: the <emulator> line really shouldn't ever have been all that important...  qemu-kvm picks /usr/bin/kvm if it exists
<hallyn> maybe _setUp() should just create the qemu symlink :)
<jdstrand> hallyn: well, maybe... but this dates back to karmic
<jdstrand> hallyn: yeah, that could happen too
<hallyn> jdstrand: any advice on test_guest_aa_attach_detach_physical() hanging?
<jdstrand> hallyn: it hangs forever? there is a 30 second sleep in there
<hallyn> jdstrand: no, virsh -c qemu:///system attach-interface qatest-i386 network default --mac 52:00:00:00:00:00  actually hangs forever
<hallyn> i was thinking it was the script, but it's just the virsh action hanging
<hallyn> checking git log...
<jdstrand> hallyn: that was going to be my next suggestion (checking virsh)
<Daneil54> hi
<hallyn> reckon 'virNetSocketReadWire:996 : End of file while reading data: Input/output error' is involved
<hallyn> actually maybe this is where the command.c patch i was going to put in but hadn't yet would help.
<jdstrand> exit
<jdstrand> heh
<Daneil54> i have a probleme with subversion
<Daneil54> can someone help please
<Daneil54> ?
<hallyn> Daniel54: I'd recommend looking for a #subversion
<tash> shouldn't setting root: test@test.com in /etc/aliases mean that an crons that email root should be sent to test@test.com?
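tash's question goes unanswered in the channel. It should work as described, with one common gotcha (offered here as a likely cause, not a diagnosis): most MTAs read a compiled database, not /etc/aliases itself, so the alias only takes effect after rebuilding it:

```shell
# after editing /etc/aliases to contain:  root: test@test.com
sudo newaliases    # rebuild the aliases database the MTA actually reads
```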
<hallyn> feh, how many times does it re-run the same (failing) attach-interface test?
<chmac> Rapid2214: Apologies, got disconnected earlier, thanks again for your help, it seems to a be a mysql specific issue so I'll start investigating in that direction :-)
<chmac> Think I got dropped before that got through a few minutes ago :-)
<adam_g> smoser: ping
<adam_g> dah, nvm
<hallyn> jdstrand: actually, when I connect to qatest-i386 with vnc, it seems the problem is that it doesn't boot.  just hangs at 'uncompressing linux'
<hallyn> i think the attach-interface is just hanging while qemu waits for the os to ack it
<hallyn> i guess 'booting the kernel' is where it hangs
<hallyn> hm, i see.  seems perhaps a regression with unaccelerated qemu
<tash> this is really strange, has anyone ever had this happen? Boot up and you've got a black screen with a blinking cursor?
<tash> After a few reboots, eventually able to login, then after a few moments, things locked up again .... maybe SATA controller going bad or hard drive issue?
<adam_g> zul: gonna start populating this: https://wiki.ubuntu.com/Openstack/Essex/Notes
<RoyK> tash: check dmesg
<RoyK> tash: or setup syslog logging to another host
#ubuntu-server 2012-04-06
<Alternity> I have a 128MB vps, is that enough memory to run ubuntu server + cherokee + php + mysql?
<RoyK> Alternity: isn't cherokee a java thing?
<RoyK> Alternity: 128MB is probably the minimum for an ubuntu server, add mysql and an app server to that, I'd double or quadruple the memory
<RoyK> oh, cherokee, that's that minimal http server...
<RoyK> still, with mysql, another small lot of memory would be nice, but 128MB might do
<Alternity> :/
<Alternity> hmm
<Alternity> maybe i should find a more lightweight distro
<Alternity> i feel like that shouldn't be impossible.....
<RoyK> ubuntu is lightweight
<RoyK> that is, what's lightweight is linux
<Alternity> hmm
<Alternity> cherokee is just lightweight httpd
<Alternity> lighthttpd might be smaller tho
<Alternity> isn't there some like 40mb distros?
<RoyK> no, listen, what uses memory is linux and the base setup
<RoyK> 128MB is somewhat a minimum
<RoyK> with gentoo and friends, you can cut that down to 64MB or perhaps 32MB
<RoyK> but performance is bound to suck
<Alternity> ye doesn't matter that much tho
<Alternity> i am only going to run like 2 pages
<Alternity> to like 10 ppl a day
<Alternity> but they are php and mysql..
<RoyK> which will do well on 120MB
<RoyK> 128 even
<Alternity> hmm
<Alternity> okmight try
<Alternity> nginx
<RoyK> it doesn't matter much
<RoyK> even apache will do
<Alternity> pass
<RoyK> nginx is good, btw
<Alternity> ye haven't used it before
<Alternity> be fun to setup ^^
<RoyK> it can be used with fastcgi for php
<RoyK> and some new stuff, don't remember the name
<zul> adam_g: cool
<blendedbychris> i'm trying to determine why my system won't boot using a rescue disk... last time i upgraded from lucid to maverick the menu.lst in grub screwed up
<blendedbychris> i mounted /dev/sda1 to /mnt/boot and grub seems to have binary files in it instead of configuration files
<blendedbychris> what's up with that?
<blendedbychris> bunch of .mod file
<qman__> that's how grub2 works
<qman__> the text config is in /etc/grub.d and /etc/default/grub
<qman__> which is then built into binary config with update-grub
<blendedbychris> qman__: any idea how to fix this? i'm guessing mount /boot and chroot into my other drive to run that junk?
<blendedbychris> my other install has a menu.lst
<blendedbychris> in /boot/grub/
<qman__> yeah, reconstruct your / and /boot in a folder, chroot there, fix config, update-grub
<qman__> grub2 has been default since 9.10, but if you upgraded from a system that had grub1, it kept it unless you manually replaced it
<qman__> though I don't know about a lucid to maverick upgrade, I've not done one
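qman__'s "reconstruct, chroot, fix, update-grub" recipe, sketched from a rescue environment. The device names are assumptions about blendedbychris's layout, not known values:

```shell
sudo mount /dev/sda2 /mnt             # root filesystem (assumed sda2)
sudo mount /dev/sda1 /mnt/boot        # the separate /boot from above
for fs in dev proc sys; do sudo mount --bind /$fs /mnt/$fs; done
sudo chroot /mnt
update-grub                           # regenerate the grub2 config
grub-install /dev/sda                 # reinstall grub2 to the MBR
```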
<blendedbychris> menu.lst is grub2 or grub1?
<qman__> grub1
<blendedbychris> chances are after the maverick update i should run update-grub
<qman__> I stick to LTS whenever possible, I'd honestly upgrade to precise before maverick at this point
<blendedbychris> well i'm trying to get to oneiric but i have to follow the upgrade path
<blendedbychris> should i just not "restart" when it asks during the upgrade?
<blendedbychris> (sounds like a bad idea)
<qman__> no, you should restart when asked
<blendedbychris> this is a vm so i'm stuck either upgrading their image or trying some fancy network boot junk
<qman__> what type of VM system
<blendedbychris> actually i lied it's a dedicated box... heh
<blendedbychris> softlayer
<blendedbychris> all i have access to is a resculayer because their kvm is no bueno
<qman__> well, you don't have to restart immediately, you can try to fix whatever it's not doing right first
<qman__> but don't try to upgrade again or install new software without restarting
<blendedbychris> ya i'm reloading lts right now to try that
<blendedbychris> what a pain in the butt
<imyousuf> Hi
<imyousuf> I am trying to use port forwarding along with UFW.
<imyousuf> In UFW log I can see port forwards to the host I am asking it to forward but UFW blocks the forward request
<imyousuf> My configuration and log output - http://pastebin.com/ATuCacJk
<imyousuf> Note: eth1 is the public interface and eth0 is private network interface
<imyousuf> I believe I am missing something for which UFW is blocking the packet
<imyousuf> another note I have 'ufw default deny' set
<imyousuf> can someone please help me?
<blendedbychris> what's the idea behind using /srv vs /var
<blendedbychris> i think /srv is more "correct"?
<mgw> does anybody have replication working in cobbler/orchestra?
<blendedbychris> what cluster config sync do you guys recommend?
<SpamapS> blendedbychris: "cluster config sync" ?
<koolhead11> hi all
<uvirtbot> New bug: #974966 in openldap (main) "package slapd 2.4.28-1.1ubuntu3 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/974966
<Oxygen02> Hello guys. I Having a problem with qemu/kvm. I don't really know where the problem relies. but this is the error. I also asked at #qemu.
<Oxygen02> error: Failed to start domain hk-mysql2
<Oxygen02> error: operation failed: failed to retrieve chardev info in qemu with 'info chardev'
<Oxygen02> running ubuntu 10.04.
<Oxygen02> Did someone have the same problem?
<imyousuf> I am running a Ubuntu 10.04.2 LTS in VirtualBox environment
<imyousuf> I want to access "Safe Mode" during startup, but no grub menu is showing up
<qman__> imjustmatthew, hold left shift down during the boot process
<qman__> oops, imyousuf ^
<imyousuf> qman__: Tried but does not help, but trying again
<qman__> that's definitely how grub is accessed, trying to think if there's an option to make it always show
<imyousuf> qman__: It happens so fast that it does not register the event in remote desktop mode :(
<imyousuf> I keep the left shift key down from the moment I see the VirtualBox logo, but nothing happens
<imyousuf> it shows me the login prompt
<imyousuf> Finally worked :)
<qman__> you want the timed display mode, looking up how to set it
<imyousuf> I had to reboot like 10 times to see it :)
<RoyK> you can enable the menu in the grub config too
<qman__> here: https://help.ubuntu.com/community/Grub2#Configuring_GRUB_2
<qman__> edit that file, remove grub hidden stuff, and add grub timeout
<qman__> then sudo update-grub to apply the changes
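qman__'s edits to /etc/default/grub, spelled out as commands (the 5-second value is a typical choice, not from the conversation):

```shell
# Comment out the hidden-menu settings and give the menu a visible
# timeout, then regenerate the config:
sudo sed -i -e 's/^\(GRUB_HIDDEN_TIMEOUT=\)/#\1/' \
            -e 's/^\(GRUB_HIDDEN_TIMEOUT_QUIET=\)/#\1/' \
            -e 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=5/' /etc/default/grub
sudo update-grub
```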
<chmac> I upgraded mysql-5.5 on oneiric from a ppa, I'm trying to downgrade, but getting "version not found"
<chmac> The previous version is showing as superseded in the ppa package list
<qman__> remove the ppa first
<chmac> qman__: I want to downgrade to the previous version in the ppa though
<qman__> alternatively, download the .deb manually, remove the existing version, and dpkg -i the version you want
<qman__> then hold the one you want so it doesn't upgrade
<chmac> qman__: Ok, sounds like that's what I'll need to do
<chmac> Ok, thanks, I'll do that
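qman__'s manual-downgrade recipe as commands; the .deb filename is a placeholder for whatever superseded build is fetched from the PPA's package page:

```shell
sudo apt-get remove mysql-server-5.5
sudo dpkg -i mysql-server-5.5_OLDVERSION_amd64.deb   # placeholder name
# hold the package so apt doesn't immediately re-upgrade it from the PPA:
echo "mysql-server-5.5 hold" | sudo dpkg --set-selections
```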
<uvirtbot> New bug: #975074 in mysql-5.1 (universe) "package mysql-server-5.1 5.1.58-1ubuntu1 failed to install/upgrade: Package is in a very bad inconsistent state - you should  reinstall it before attempting a removal." [Undecided,New] https://launchpad.net/bugs/975074
<uvirtbot> New bug: #975085 in nova (main) "nova mysql DB can't be restored from backups" [Undecided,New] https://launchpad.net/bugs/975085
<uvirtbot> New bug: #974959 in nova (main) "nova daemons are not conforming to the LSB standard" [Undecided,New] https://launchpad.net/bugs/974959
<imyousuf> I have defined 'Host_Alias HOME = 192.168.1.101' and '%admin HOME = (ALL) ALL' using visudo; this blocks sudo from every IP. Can someone please help me in what I am doing wrong?
<chmac> imyousuf: I'm not very familiar with sudo config, what are you trying to achieve?
<imyousuf> http://pastebin.com/0XUR7Ke2
<imyousuf> I want to allow sudo command to server if user logins from certain host/network
<chmac> imyousuf: Spacing issue? Does it need to be LAN=(ALL) ?
<imyousuf> but not from untrusted network
<imyousuf> chmac: tried that too
<uvirtbot> New bug: #975167 in samba (main) "package samba 2:3.6.3-2ubuntu1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/975167
<imyousuf> just tried again chmac, doesn't help
<chmac> imyousuf: How about just LAN=ALL instead of (ALL) ALL ?
<imyousuf> lets try
<chmac> imyousuf: Actually, probably a red herring
<chmac> I'm reading up on sudo host aliases
<imyousuf> does not help chmac
<chmac> imyousuf: How about commenting out the 3rd line, where you reference two host aliases within another host alias
<imyousuf> ok trying that too
<chmac> I'm really just guessing though :-(
<imyousuf> deleted it :)
<imyousuf> lets see
<imyousuf> No improvement chmac
<chmac> imyousuf: You are logging in from the machine 192.168.1.1 to test right?
<imyousuf> good question, let me check :)
<chmac> imyousuf: Or do you want Host_Alias LAN = 192.168.1.0/24 ?
<imyousuf> Host_Alias specifies network address should be 192.168.1.0/255.255.255.0 and I want to achieve it for a single host first
<chmac> imyousuf: You can list multiple hosts within one alias like `Host_Alias LAN = 192.168.1.1,192.168.1.101`
<chmac> imyousuf: I'm seeing /24 notation here http://onlamp.com/pub/a/bsd/2002/09/12/Big_Scary_Daemons.html
<imyousuf> chmac: I read it from - http://www.gratisoft.us/sudo/sudoers.man.html
<chmac> imyousuf: Both might be accepted...
<imyousuf> yes chmac, that might be the case
<imyousuf> but why ain't single host working? strange
<chmac> You're logging in from 192.168.1.1 and you're in the admin group but not the sudo group?
<imyousuf> yes the user is in admin group and I am logging from 1.1
<chmac> gtg
<imyousuf> anyway to confirm from where I logged it?
<chmac> imyousuf: anything in the logs?
<imyousuf> chmac: auth.log only gives Apr  6 20:27:08 vmguest4 sudo:   sample : user NOT authorized on host ; TTY=pts/2 ; PWD=/home/sample ; USER=root ; COMMAND=/bin/ls
<chmac> imyousuf: Not sure what else to suggest, this stuff is all double dutch to me :-)
<imyousuf> :)
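A note on why imyousuf's approach likely can't work as intended (inference from sudoers semantics, not something established in the channel): the host portion of a sudoers rule is matched against the machine sudo runs on, not against the address the user connected from. A Host_Alias of 192.168.1.101 therefore only matches if the server itself has that address; the client's source IP never enters into it, which would explain why the rule blocks everyone. Restricting privileges by source network is better done in sshd (e.g. a Match Address block) or PAM. For completeness, the sudoers shapes discussed:

```shell
# /etc/sudoers fragment (edit with visudo). These match the *server's*
# own hostname/IP -- useful when one sudoers file is shared across many
# machines, not for filtering where a login came from:
#   Host_Alias LAN = 192.168.1.0/24
#   %admin LAN = (ALL) ALL
```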
<journeeman> Hi all. I am running MaaS on precise beta2 and the text in the web interface appears some 20 seconds after the page has loaded. Anyone knows the reason for this?
<ROmeyro> hello guys, i'm about to install a mail server on my ubuntu, on which i already have a webserver. do you guys have some tips or advice for me?
<koolhead11> !mailserver
<ubottu> Ubuntu supports the Simple Mail Transfer Protocol (SMTP) and provides mail server software of many kinds. You can install a basic email handling configuration with the "Mail server" task during installation, or with the "tasksel" command. See also https://help.ubuntu.com/community/MailServer and https://help.ubuntu.com/10.04/serverguide/C/email-services.html
<ROmeyro> ubottu: thank you :)
<ubottu> ROmeyro: I am only a bot, please don't think I'm intelligent :)
<journeeman> seems to be happening only in chrome
<Cygnus-X1> ROmeyro: Actually, you might want to look at iRedMail.  I've only used it on CentOS, but it works like a charm on that platform.  That will save you a ton of heartache.
<ROmeyro> Cygnus-X1: oh thanks, actually I already started with postfix, but if its not going good, i ll try iRedMail. But can i use SSL/TLS with that ?
<qman__> never heard of it before, but that looks like exactly the software I use anyway
<mcloy> hi, i want to learn how to send email (with postfix or any other MTA), configuring the server (on which postfix is installed) so that mail is sent as me@MYDOMAIN.COM. i think i will need to set up dns and rdns? where can i read about it in detail...?
<qman__> LAMP, postfix+dovecot, roundcube, and all the extras
<rvba> journeeman: I can't reproduce your problem.  I tried with 0.1+bzr415+dfsg-0ubuntu1 (version in precise) and with the latest dev version.
<rvba> journeeman: is your problem happening just with the homepage?
<koolhead11> ikonia, :P
<journeeman> rvba: our versions of chromium are different looks like. it happens with all the pages. not just the home page.
<rvba> journeeman: I use chrome 18.0.1025.142.  The versions above are maas' versions.
<roaksoax> rvba: what's the issue?
<roaksoax> ah I read it
<journeeman> rvba: maas and chromium versions are the same
<roaksoax> rvba: that's weird
<journeeman> firefox works fine though
<uvirtbot> New bug: #960547 in cloud-init "Chef provisioning fails if validation_cert not defined" [Medium,Fix committed] https://launchpad.net/bugs/960547
<roaksoax> journeeman: try cleaning your chromium cache and try again maybe?
<journeeman> okay
 * koolhead11 hates css
<rvba> roaksoax: my first idea was that this was a js problem related to the dashboard… but if it happens on all the pages it can't be the problem.
<journeeman> roaksoax: there would hardly be any cache in it as MaaS is the only thing I have opened after installation
<roaksoax> rvba: yeah, but i haven't really experienced any issues with that rev...
<roaksoax> journeeman: would you be willing to try a newer package version?
<journeeman> roaksoax: um.. i am trying to get openstack running right now. later, i guess.
<mgw> has anyone tried installing cobbler 2.2.2 on oneiric?
<roaksoax> journeeman: sure, it shouldn't affect you in any way though. So, as soon as you're available just let me know and I'll point you to a newer version
<journeeman> roaksoax: cool
<koolhead11> mgw, whats the issue
<mgw> with 2.1.0?
<mgw> koolhead: or with installing 2.2.2?
<mgw> koolhead11 ^
<koolhead11> mgw, are you hitting any bug?
<mgw> koolhead11: cobbler replicate seems broken
<mgw> It eventually works after running a few times and restarting cobbler between runs
<koolhead11> mgw, please file a bug with the logfile, it would be really helpful
<TonyTheTiger> I'm trying to install opsview as outlined in http://docs.opsview.com/doku.php?id=opsview-community:ubuntu-installation but when I try to install the package I get the error "depends: libperl5.10 (>= 5.10.1) but it is not going to be installed." what do? i'm running 11.10
<mgw> (I just installed 2.2.2 on a dev system from the 12.04 repo, and it seems to work fine, no special dependencies)
<mgw> don't know yet if it solves the replicate problem
<mgw> 2.1 is also a dev branch of cobbler… cobbler folks say you shouldn't use it in production
<mgw> anybody from the orchestra team have input on that?
<qman__> TonyTheTiger, according to the site you linked, 11.10 is not supported by them, only 8.04 and 10.04
<qman__> in all likelihood it wants an older version of libperl than is available in 11.10
<TonyTheTiger> ok, that's kind of what I thought, the only reason we're using 11.10 is because we're trying to run it on hyper-v (derp), I guess we'll have to either find another virtualization method for this or just run it on a physical box. thanks
<koolhead11> mgw, on 12.04 we have maas
<koolhead11> https://wiki.ubuntu.com/ServerTeam/MAAS
<mgw> koolhead11: no more orchestra?
<mgw> anyway, i'm mostly interested in cobbler for now
<koolhead11> mgw, it's a bigger, better version of orchestra, that's what i can tell :)  <-- roaksoax
<roaksoax> mgw: if you could file a bug it would be great!
<mgw> roaksoax: I'll capture the output next time I try to replicate
<mgw> So there've been no other reports of replicate breaking?
<mgw> I have to run twice to get all profiles (sub profiles don't come in first time)
<mgw> then restart cobbler
<mgw> then I can sometimes replicate the systems
<mgw> roaksoax is there any inherently bad thing about running 2.2.2 on oneiric?
<mgw> it's all python, right?
<mgw> I seamlessly installed the precise python-cobbler, cobbler-common and cobbler dpkgs on an oneiric dev machine
<roaksoax> mgw: yeah there shouldn't be any issues
<mgw> (I did this b/c #cobbler says to avoid 2.1)
<roaksoax> mgw: and TBH we haven't tested the replicate feature ;)
<mgw> oh
<mgw> I seem to recall reading a bug on it somewhere
<mgw> that's been fixed
<mgw> in the fedora project
<mgw> ok, i'm going to upgrade to 2.2.2
<mgw> if the bug is fixed, i'll let you know that as well
<roaksoax> mgw: so if you find that link and the path to it I'll be more than glad to apply it
<mgw> this is part of it
<mgw> https://github.com/cobbler/cobbler/issues/63
<mgw> and this
<mgw> https://github.com/cobbler/cobbler/issues/54
<journeeman> rvba: roaksoax: koolhead11: seems to be an http proxy configuration problem. thought i didn't need to set up proxy for chromium as i had to access maas on localhost. works fine now.
<rvba> journeeman: cool, thanks for the heads up!
<roaksoax> mgw: cool, could you please file that bug and paste those link. It would help me a lot
<journeeman> rvba: sure :)
<uvirtbot> New bug: #958831 in samba (main) "Samba rebroadcasts information it should not" [Undecided,New] https://launchpad.net/bugs/958831
<uvirtbot> New bug: #972603 in amavisd-new (main) "package amavisd-new-postfix 1:2.6.5-0ubuntu3 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/972603
<andrew___> hello
<andrew___> please how can I install php 5.4 on ubuntu server? I've done aptitude versions php and there's only 5.3.10. Do I have to build it or is there 5.4 package somewhere hidden? Thanks
<patdk-wk> download the .tar.gz file, and compile it
<andrew___> patdk-wk: ok, thanks. I'll try
<excalibour> hi how do i know what ftp daemon is installed on a pc
<ikonia> excalibour: what version of ubuntu ?
<excalibour> 10.10
<ikonia> excalibour: if you do dpkg -k | grep -i ftp , you should see the ftp version you are using
<excalibour> ty ikonia, trying right away
<ikonia> excalibour: dpkg -l sorry
<ikonia> I made a typo, I didn't notice
<ikonia> not -k , -l
<excalibour> yep :) returned an error :)
<excalibour>  dpkg -l | grep -i ftp
<ikonia> that's better
<ikonia> -l is correct
<excalibour> ii  ftp                                 0.17-19build1                                   The FTP client
<excalibour> ii  proftpd-basic                       1.3.2c-1ubuntu0.1                               Versatile, virtual-hosting FTP daemon - bina
<ikonia> there you go,
<excalibour> i didnt understand virtual hosting thingy but it says proftpd i guess
<ikonia> it's just wording
<ikonia> the daemon is proftpd
<excalibour> has different versions? like basic/pro..,etc?
<ikonia> nah, don't worry about that
<ikonia> it's "proftpd"
<excalibour> ty very much ikonia, i needed to know that as i cant ftp to my WWW
<excalibour> lemme go to phase B: is it user permissions that are preventing uploads to www?
<ikonia> what's the actual problem ?
<excalibour> i can ftp to my test server but cant upload via ftp
<ikonia> what's the error
<excalibour> perm denied error on filezilla
<ikonia> ok, so there you go "permissions"
<uvirtbot> New bug: #950193 in cobbler (main) "[FFe] [MIR] Cobbler" [High,In progress] https://launchpad.net/bugs/950193
<excalibour> i guess so, i think it s because settings i ve done before to harden security
<excalibour> as a newbie
<jdstrand> adam_g: hey, can you point me to docs on how I might setup horizon to work with my openstack? (MIR review)
<adam_g> jdstrand: sure
<adam_g> jdstrand: actually its very straight forward, just modify /etc/openstack-dashboard/local_settings.py , set OPENSTACK_HOST to the address of your keystone host, restart apache and you should be good
<jdstrand> oh, hey
<jdstrand> heh
<jdstrand> that'll work :)
<adam_g> jdstrand: the dasboard is essentially just a web-front end to the various openstack client libraries
<adam_g> jdstrand: oh, also, you should probably 'apt-get install python-memcache memcached' and set CACHE_BACKEND='memcached://127.0.0.1:11211/' in local_settings.py as well, otherwise you'll have to re-login a lot
<jdstrand> ok
<jdstrand> I'll add this to the wiki page when I get it working
<adam_g> jdstrand: cool. let me know if you have any problems
<jdstrand> adam_g: thanks!
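adam_g's two tweaks amount to appending a couple of settings to the dashboard config and restarting apache. A sketch of that edit, with the keystone address as a placeholder and the file written to the current directory here rather than the real /etc/openstack-dashboard/local_settings.py:

```shell
# Sketch of the local_settings.py changes discussed above.
# The keystone address is a placeholder; we append to a local copy of the file.
SETTINGS=./local_settings.py    # real path: /etc/openstack-dashboard/local_settings.py
cat >> "$SETTINGS" <<'EOF'
OPENSTACK_HOST = "192.0.2.10"                    # address of your keystone host
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'   # needs python-memcache + memcached
EOF
grep 'OPENSTACK_HOST' "$SETTINGS"
```

After editing the real file, `sudo service apache2 restart` picks the settings up.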
<excalibour> ikonia>> can u have a look at my conf file @ http://pastebin.com/7QUteC7c
<Daviey> adam_g: Please can i see a diff of your horizon change for juju support before you upload please?
<adam_g> Daviey: yeah, i wasn't planning on applying before getting an OKAY
<adam_g> Daviey: http://paste.ubuntu.com/918025/
<adam_g> Daviey: actually thats a lot bigger than it should be
<Daviey> adam_g: oh, i knew that.. but wanted to give it a smoke before you did :)
<Daviey> adam_g: good to see there is a test.. Do you think this would be accepted for Folsom ?
<adam_g> Daviey: actually that test is just cp'd from the ec2 panel, in fact the whole juju panel is. :)
<Daviey> lol
<adam_g> Daviey: im not sure there'd be interest in having such a deployment-specific add-on
<Daviey> adam_g: is it viable to use inheritance better, if there is duplication ?
<Daviey> adam_g: yeah, i think you are right.. it opens upstream to having to support everything.
<adam_g> Daviey: could be, this was my first crack at ever even touching django. ill see if i can squash some of the dupe
<adam_g> Daviey: right, plus i think the idea is that deployers would customize the dashboard to fit their offering
<Daviey> adam_g: Yeah
<uvirtbot> New bug: #975519 in samba (main) "package samba-common-bin 2:3.6.3-2ubuntu1 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 2" [Undecided,New] https://launchpad.net/bugs/975519
<zul> adam_g: thats a rather large patch
<jdstrand> adam_g: worked great. that was easy! :)
<zul> Daviey: arm session has been accepted
<adam_g> jdstrand: nice :)
<adam_g> zul: as i said, that diff is larger than it needs to be
<hazmat> hallyn, i was trying out the lxc ubuntu-cloud template.. it documents a userdata parameter (-u/--userdata) but the getopt invocation doesn't include it so it barfs if you try to pass it in
<hallyn> hazmat: pls talk to utlemming about that, i have no idea
<utlemming> hazmat: looking
<hazmat> utlemming, thanks
<hazmat> i'm running out to dinner, bbiab
<utlemming> hazmat: real fast, can you give me a paste of what you're seeing?
<Daviey> zul: schweet
<excalibour>  can u have a look at my conf file @ http://pastebin.com/7QUteC7c i cant upload to my WWW folder
<adam_g> zul: have you seen this test failure before?  https://launchpadlibrarian.net/100498070/buildlog_ubuntu-precise-i386.glance_2012.1-0ubuntu5_FAILEDTOBUILD.txt.gz
<zul> adam_g: thats new
<adam_g> zul: yeah, passed fine on the last glance upload
<adam_g> hmph
<zul> adam_g: just rebuild it
<adam_g> disabling tests for now on the PPA, will look at getting the test suite passing again once i verify my fixes
<zul> adam_g: ubuntu5?
<tech936> hey
<adam_g> zul: im testing a bunch of changes in a PPA
<zul> k
<zul> adam_g: when you put it in to proposed compress the debian/changelog into -ubuntu2 please though
<adam_g> zul: yup
<tech936> would anyone like to help run a server with a couple of friends?
<tjf> dumb question, how do I update libssl without actually removing it?
<tjf> can't I just compile it and overwrite the current version?
<tech936> libssl?
<tech936> u wanna just update yeah
<tjf> openssl, I guess
<tech936> well theres a simple way
<tech936> sudo apt-get update
<tech936> then sudo apt-get upgrade
<tjf> that doesn't work.
<tjf> it just uses 0.9.8
<tech936> what version you got?
<tech936> openssl version
<tjf> 0.9.8
<sbeattie> tjf: perhaps the better question is, what problem are you trying to solve?
<tech936> hmm have you tried this? "  cd /usr/src
<tech936> hold on cba to type the whole thing heres a link for you http://www.directadmin.com/forum/showthread.php?t=163&page=1
<tjf> sbeattie: a bunch of root@techessentials:~/openssl-1.0.1# openssl version
<tjf> OpenSSL 0.9.8k 25 Mar 2009 (Library: OpenSSL 0.9.8o 01 Jun 2010)
<tjf> err
<tjf> a bunch of packages I want don't work because the version of OpenSSL I have is too old.
<tech936> anyone looking for a server to help monitor? as im currently working on a server and looking for people to help maintain it?
<tjf> so my question is, how do I overwrite the library I already have with the one I just compiled?
<tech936> why not save your self the Hassel dude and just re install the entire thing
<tjf> that won't work.
<tech936> what server you running?
<tech936> cause the last problem i had with OpenSSL i just ran sudo apt-get purge openssl then, sudo apt-get install openssl
<tech936> and voilà, everything was working fine and dandy. the reason why some packages may not be working with yours will mainly fall down to the fact you haven't kept up to date with package updates
<tjf> tech936: when I try to install libssl-dev it says...
<tjf>   libssl-dev: Depends: libssl0.9.8 (= 0.9.8k-7ubuntu8.8) but 0.9.8o-7ubuntu1 is to be installed
<hazmat>  utlemming http://pastebin.ubuntu.com/918189/
<utlemming> k, I'll look at that
<uvirtbot> New bug: #975616 in nova (main) "package nova-compute-uml  not installed  failed to install/upgrade: trying to overwrite  /etc/nova/nova-compute.conf , which is also in package nova-compute-kvm 2012.1-0ubuntu1" [Undecided,New] https://launchpad.net/bugs/975616
<uvirtbot> New bug: #975617 in nova (main) "package nova-compute-qemu  not installed  failed to install/upgrade: trying to overwrite  /etc/nova/nova-compute.conf , which is also in package nova-compute-kvm 2012.1-0ubuntu1 (dup-of: 975616)" [Undecided,New] https://launchpad.net/bugs/975617
<hazmat> utlemming, looks like its one line fix to the getopt invocation.. options=$(getopt -o a:hp:r:n:Fi:CLS:u:T:ds: -l arch:,help,path:,release:,name:,flush-cache,hostid:,auth-key:,userdata:,cloud,no_locales,tarball:,debug,stream: -- "$@")
<hazmat> hmm.. maybe not
<hazmat> that parses the args, but it doesn't seem to actual use the provided init
#ubuntu-server 2012-04-07
<excalibour> how do i know what port is mysql listening to?
<wdroberts> check '/etc/my.cnf' and run 'netstat -antp'
<wdroberts> default is tcp/3306
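For reference, the mysqld line in `netstat -antp` output looks roughly like the fabricated sample below; the port is whatever follows the last ':' in the local-address column. (On Ubuntu the config usually lives in /etc/mysql/my.cnf rather than /etc/my.cnf.) A small parsing sketch:

```shell
# Fabricated sample netstat -antp line for mysqld; column 4 is the local
# address, so the port is the part after the last ':' in that field.
line='tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1234/mysqld'
port=$(printf '%s\n' "$line" | awk '{n = split($4, a, ":"); print a[n]}')
echo "mysqld is listening on port $port"
```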
<excalibour> thanx
<excalibour> i m trying to install forum software on 10.10. what's the username format for mysql? user or user@pc ?
<wdroberts> mysql uses the 'user'@'host' format, but web software will usually ask for them separately
<excalibour> ty wdroberts, silly me, changed hostname from ubuntu to localhost and i m done :(
<violinappren> Hello. What is the proper upstart command enable and disable services?  in Lucid and Precise
<violinappren> Editing /etc/init/*.conf is the proper way?
<wdroberts> i'm not terribly familiar with upstart, but in ubuntu i generally use the update-rc.d command to control automatic startup of services in different runlevels. it edits the scripts in the /etc/rc#.d directories
<violinappren> wdroberts: thanks,  im reading the man page
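Worth noting that update-rc.d only manages sysv-style scripts; for native upstart jobs, Precise-era upstart lets you disable a job without editing its .conf by dropping an override file next to it, while Lucid's older upstart lacks override support, so there you comment out the "start on" stanza instead. A sketch, using a scratch directory in place of /etc/init:

```shell
# Disable the ssh upstart job without touching ssh.conf (Precise-era upstart).
# Using a scratch directory here; the real path would be /etc/init/ssh.override.
INITDIR=./init-sketch
mkdir -p "$INITDIR"
echo manual > "$INITDIR/ssh.override"   # 'manual' keeps the job from auto-starting
cat "$INITDIR/ssh.override"
```

Removing the override file re-enables the job on the next boot.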
<delerium_> Hi guys, I have a dedicated server and I have two friends seeking for web hosting.  So Instead of setting up an apache vhost for them, and manage their thing (no time for that!)  I thought of installing something like cPanel.  Is there a free and great alternative to cPanel? Thx!
<wdroberts> delerium_: webmin comes to mind
<Rallias> Is there any way to have 2 separate instances of php fastcgi for different parts of my web site simultaneously on one server?
<delerium_> wdroberts: Thanks, I'll take a look at it.  Appreciated.
<SpamapS> Rallias: yes definitely
<SpamapS> Rallias: just have them listen on different sockets/ports
<Rallias> How would I modify my /etc/init.d/php-fcgi script to accomodate that?
<SpamapS> Rallias: I'd recommend php5-fpm for this, it is now quite superior to php5-fastcgi and has a specific method for doing this
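SpamapS's suggestion of two independent FPM pools on different sockets looks roughly like this in php5-fpm pool config files (pool names, user, and socket paths are examples):

```
; /etc/php5/fpm/pool.d/site-a.conf  -- names and paths are examples
[site-a]
user = www-data
group = www-data
listen = /var/run/php5-fpm-site-a.sock

; /etc/php5/fpm/pool.d/site-b.conf
[site-b]
user = www-data
group = www-data
listen = 127.0.0.1:9001
```

Each part of the site's web server config then points at its own pool's socket or port.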
<uvirtbot> New bug: #975777 in cobbler (main) "cobbler 2.2.2-0ubuntu31 refuses to add apt repository" [Undecided,New] https://launchpad.net/bugs/975777
<uvirtbot> New bug: #975781 in quota (main) "package quota 4.00-3 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/975781
<azert> hello
<azert> i can't delete an mounted partition
<azert> what is the command to be able to delete a partition during the next reboot ?
<pmatulis> azert: unmount it now and create a new filesystem on it
<azert> ok, is it possible to transfer a file over 54 MB via TFTP ?
<pmatulis> azert: dunno but why are you using TFTP?
<azert> Trivial_File_Transfer_Protocol as it says just for file transfer
<pmatulis> azert: dunno but why are you using TFTP?
<azert> ok i understand
<pmatulis> azert: maybe if you explain your situation someone can help better
<azert> to transfer firmware
<azert> pmatulis: is that enough?
<JanC> azert: TFTP supports up to 4 GiB (unless your software only supports the old pre-1998 protocol version which only supports up to 32 MiB)
<azert> how to check software version ?
<Rapid2214> azert: What software are you using?
<Rapid2214> dpkg -s <package>
<hazmat> utlemming, fwiw, i made some minor changes to fix the userdata passing issue with the lxc cloud template http://paste.ubuntu.com/918869/
<hazmat> unfortunately that even with that in place cloud-init refuses to run the full cloud-init module suite against it
<hazmat> still trying to debug cloud-init wrt to that
<hazmat> smoser, what actually runs the runcmd part? i see a handful of upstart confs for cloud init, and i've tried tracing through the code, but all i see it doing is writing out the cloud-init part
<hazmat> never executing it
<hazmat> er.. the cloud-config part
 * patdk-lap gives apache2.4 a test on precise
<uvirtbot> New bug: #819329 in juju "Tests depend on AWS_ACCESS_KEY_ID being set" [Medium,Fix released] https://launchpad.net/bugs/819329
<uvirtbot> New bug: #975985 in openssh (main) "please start sshd as early as possible during boot" [Undecided,New] https://launchpad.net/bugs/975985
<itgeo> hello guys I installed a mail server, iRedMail, on my server, but i dont know why i can't send and receive email to/from my gmail account
<ikonia> where did you get iRedMail from ?
<itgeo> from the official website
<itgeo> everything is working
<itgeo> except that i dont receive email when i send it from my server to my gmail
<itgeo> hi guys i just install iReadMail as mail server on my ubuntu, i can receive mail only for users of my server. But I cant send an email to myself on my gmail account, or receive an email from  my gmail account to my email server. Is there something that I had to setup ?
<Doonz> hey guys, anyone here familiar with force10 networking switches?
<Doonz> http://cgi.ebay.ca/ws/eBayISAPI.dll?ViewItem&item=130677305627&fromMakeTrack=true&ssPageName=VIP:watchlink:top:en#ht_3469wt_881 <-- can someone tell me if this is a good buy to set up a 10gb backhaul network between 10 of my servers?
<RoyK> Doonz: looks cheap enough
<RoyK> Doonz: but you might need media modules in addition to that
<RoyK> no, wait, it's CX4
<RoyK> 10 servers all on 10G?
<Doonz> yeah
<Doonz> im just not sure how the 10gb switches work
<RoyK> it's just a switch
<RoyK> nothing fancy
<Doonz> would it be the same as a unmanaged 1gbe switch?
<RoyK> some of them have L3 support
<RoyK> I *really* doubt you'll find a 10Gbps unmanaged switch
<Doonz> yeah i dont need anything like that since its just a private backhaul between the servers
<Doonz> RoyK yeah i figured that
<RoyK> VLAN support can be nice if you want to split the switch somehow
<Doonz> ive used managed before just nothing on the 10gbe level
<RoyK> or LA support if you want to use two links at a time to get failover and better bandwidth
<RoyK> it's the same thing
<RoyK> it's just a switch
<Doonz> kool
<Doonz> i cant find any documentation on that switch but it looks like dell bought that company a few years back
<Doonz> but a new switch from dell is 15k for that one
<RoyK> dell sells a 24port SFP+ switch from Delta Electronics, same thing as Super Micro sells, for <$10k
<Doonz> yeah this is for a temp setup while i work on a project so if i can keep cost as cheap as possible that means more money for me
<RoyK> then just get that thing off ebay
<RoyK> if it fails, well, it didn't cost too much :þ
<Doonz> im not a networking guy so i was looking for another opinion
<Doonz> it only has to last 90 days or so
<RoyK> networking is mostly about closed, black boxes, so you never really know what's inside, meaning you can gamble it works (read: buy something cheap) or you can pay for something you know works (read: get something that costs the same as a nice house)
<Doonz> yeah like i said tho its only for about 90 days
<RoyK> then I'd get that thing off ebay, if you can get it quickly ;)
<Doonz> ill pay for next day shipping
<RoyK> $5k for a 10G switch and eight 10G boards is rather cheap
<Doonz> those cards are going from 200 - 600 on ebay
<Doonz> brand new there 479
<RoyK> I have a few of those
<Doonz> seem to be a good card
<RoyK> "Switching fabric capacity of 480 Gbps and forwarding capacity of 360 Mpps"
<Doonz> whats that mean
<patdk-lap> 48GB/sec max, and 360M iops max
<RoyK> meaning total switching capacity is 480Gbps, which is good (10G*2 per port), and that it can handle 360 million packets per seconds
<RoyK> patdk-lap: I don't think "iops" is the right thing for a switch...
<Doonz> cool
 * Doonz is outta his league
<patdk-lap> royk, fcoe switch?
<patdk-lap> fc switch?
<RoyK> patdk-lap: ethernet switch
<patdk-lap> ah
<patdk-lap> iscsi switch? :)
<RoyK> patdk-lap: no such thing...
<Doonz> its being used to transfer large simulation packages between systems and time is money
<patdk-lap> doonz, infiniband
<patdk-lap> 40gbit per port :)
<Doonz> not supported from the custom os
<RoyK> "custom os"?
<Doonz> its a freebsd modified os for running simulation packages for and ideas simulation package
<Doonz> very small acl
<Doonz> hcl*
<Doonz> and of course they have a sweet deal with the hardware vendors. Nothing like buying a dell server at 1000000000% markup
<RoyK> what do you use this for?
<Doonz> its used for process simulation
<RoyK> k
<RoyK> large datasets?
<RoyK> (since you need 10Gbps)
<Doonz> 4 - 5 TB
<RoyK> what do you use for storage?
<Doonz> pcie ssd cards
<RoyK> nothing like zfs for a large storage pool?
<Doonz> yeah i have two large file servers
<RoyK> !define large :)
<ubottu> RoyK: I am only a bot, please don't think I'm intelligent :)
<patdk-lap> poor ubottu
<Doonz> http://www.supermicro.com/products/chassis/4U/847/SC847E26-R1400U.cfm x 4 with 2TB ent drives
<RoyK> on linux?
<Doonz> freenas actually
 * RoyK uses openindiana
<Doonz> come sept im going to be redoing my complete setup
<Doonz> i started out on my own 2 years ago with only a few small projects
<RoyK> I'd recommend something on native ZFS
<RoyK> and I have close to half a petabyte net space on zfs now
<Doonz> but now ive surpassed that so im looking at moving thigs into colo
<Doonz> Royk nice
<Doonz> im also looking at moving into blade systems as well because of the density factor
<RoyK> striped mirrors where we need it, raidz2 on backup stuff, and raidz3 for secondary backup
<Doonz> cool
<RoyK> and lots of neat SSDs to do our bidding for caching
<Doonz> yeah my wifes pissed at me cause of the set up in our basement and the fact that im out of power drops at home
<RoyK> lol
<Doonz> like i said  i never intended on getting this much work
<RoyK> how much storage is it you need on this?
<RoyK> 10-20TB? 100?
<Doonz> but ive gotten a few large customers now who really like the work i do and my next project wont start until Jan of 2013
<Doonz> im going to be going to around the 500TB usuable mark here
<Doonz> but ive gotten such a mismash of systems that need to be consolidated upgraded decomissioned
<patdk-lap> how exactly do you run out of power drops?
<RoyK> if you go for zfs, keep in mind that adding new VDEVs (that is, more drives, instead of replacing small drives with bigger ones), will result in a badly balanced pool
<Doonz> only have 100amp service to the house
<patdk-lap> oh, that is like nothing :(
<Doonz> to get 200amp service i have to upgrade my service from the curb at $30k
<patdk-lap> yuk
<Doonz> so there is a dc not to far away from me so im looking at 2 46u racks
<RoyK> 100amp on 230V is like 23kW, sufficient to heat rather a lot
<Doonz> my basement is refered to as the sahara
<Doonz> :/
<RoyK> hey, you could sell heat :D
<Doonz> SWEET Lunch time
<Doonz> bb in 45
<patdk-lap> a full rack, loaded with disks, will use like 15amps at 230v
<RoyK> patdk-lap: a full rack with disks is like 252 or 462 or perhaps 500 drives
<RoyK> patdk-lap: meaning 1PB minus redundancy, meaning rather a lot :p
<RoyK> or 2PB with New And Fine Four Terabyte Drives
<patdk-lap> I figured 480 disks
<patdk-lap> ah, ok, 16amps :)
<patdk-lap> 8watts per disk, 480 disk, 230v
<patdk-lap> work fine for my setup
<patdk-lap> I install two 5kva ups, rack is good for 24amps at 240v
<patdk-lap> assuming 2tb disks, that is like 900gb
<patdk-lap> 10.4amps if you use the 5400rpm disks :)
<Doonz> im back
<itgeo> hi guys i just install iRedMail as mail server on my ubuntu, i can receive mail only for users of my server. But I cant send an email to myself on my gmail account, or receive an email from  my gmail account to my email server. Is there something that I had to setup ?
<qman__> DNS
<qman__> you need an MX record on the internet pointing to your server as the mail server for your domain, and you need to make sure you're not on a blacklist
<qman__> no one can send you mail without the MX record, and most people won't accept mail from you without it to prevent spam
<itgeo> i used the SPF to put my info
<itgeo> but what am I suppose to put in MX record, because in the SPF file i have "v=spf1 a mx ptr mx:mydomaine.com include:mydomain.com -all"
<qman__> the MX record contains a priority number, and the IP of the server accepting the mail
<qman__> lower numbers are higher priority
<qman__> should be something like yourdomain.com IN MX 10 1.2.3.4
<qman__> also keep in mind that DNS can take up to 48 hours to propagate
<qman__> and check your IP for blacklists on mxtoolbox
<itgeo> I don't really have a file for this, i m using no-ip and my domain name is with them. but when i click on my hostname, i have mx record and  mx priority
<qman__> you need a static IP address to run a mail server
<itgeo> the first entry is mydomaine.com 5
<itgeo> not file but field***
<qman__> and most mail services will block mail from services like no-ip
<qman__> because it's usually used by spammers
<itgeo> awww :S
<qman__> domains are cheap, get you one
<itgeo> i have 1 domaine
<itgeo> i moved it to no-ip
<itgeo> because with my ISP i have a dynamic IP
<qman__> can't run a mail server on a dynamic IP
<itgeo> well what i did is I install noip2 on my ubuntu server to get access from outside of my home to my webserver
<qman__> also, many residential ISPs silently block mail traffic to prevent spam
<itgeo> my ISP allow me 1 server of each kind
<itgeo> i can run only 1 mail server, 1 webserver, I setup the fowarding from my modem
<qman__> you can't run a mail server on a dynamic IP
<qman__> it must be static
<itgeo> ohhh ok ok
<qman__> even if your IP doesn't change very often, each time it does you'd suffer a day or two mail outage, and that's only if people would even accept mail from you in the first place, which they won't
<itgeo> there is no solution for that ?
<qman__> get a static IP
<qman__> if you can't, get a VPS and run your mail server there
<qman__> if you can't afford either of those and you just need one account, get a gmail account and configure your server to send mail through that
<patdk-lap> qman__, it's invalid to use an ip address in a mx line
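As patdk-lap points out, an MX record must name a host rather than an IP address; the target hostname then needs its own A record. A zone-file sketch (domain and address are documentation placeholders, not from the conversation):

```
example.com.        IN  MX  10  mail.example.com.
mail.example.com.   IN  A       203.0.113.25
```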
<itgeo> I already have a gmail account, but i was using my email@mydomaine.com too :S and since I move it to noip2  i dont have my email working :S
<itgeo> I had the choice: $40/year or install a mailserver
<qman__> my ISP charges $4/mo for a static IP
<patdk-lap> hmm, arin is charging me 3.2c per month per ip
<itgeo> oh thats cool, I ll ask to my ISP how much it will cost me for a static
<qman__> it gets better value if you buy more, but just one on one business internet account, it's an extra $4
<itgeo> well I don't think I have arin here, I m from Quebec. I m using bell
<patdk-lap> itgeo, you have arin :) but you probably won't be dealing with them
<patdk-lap> arin is where you go to get your own ip addresses
<patdk-lap> my isp wants to rape me for a static ip
<patdk-lap> they want $15 per ip
<patdk-lap> plus upgrade to business account, for a $150 a month
<patdk-lap> ontop of the $150 a month I already pay
<itgeo> thats what I m afraid lol the 150$/month
<guntbert> !ot
<patdk-lap> and that only gets me 1mbit up, and 5mbit down
<ubottu> #ubuntu is the Ubuntu support channel, for all Ubuntu-related support questions. Please use #ubuntu-offtopic for other topics (though our !guidelines apply there too). Thanks!
<patdk-lap> guntbert, last I knew, setting up static ip on ubuntu was ontopic
<guntbert> patdk-lap: sure, but talking about how much an ISP charges ??
<patdk-lap> still ontopic as far as I'm concerned
<patdk-lap> he was concerned about the price
<qman__> maybe a little gray but it's discussing options to get his mail on ubuntu, directly accessory
<itgeo> he is telling me the differents possibilities
<itgeo> so with a static IP it should work ?
<patdk-lap> with a static ip you have a hope in making it work :)
<patdk-lap> you will need to configure your reverse dns and forward dns correctly
<patdk-lap> and also setting up your server correctly
<patdk-lap> so others don't automatically mark you as a spammer
<itgeo> i just made a check up with mxtoolbox.com my ip is not on the blacklist and everything look good
<itgeo> but do i have to mention a value for a and ptr ?
<patdk-lap> you will want to check reputation sites
<patdk-lap> more than blacklists
<itgeo> actually my reputation is neutral
<itgeo> what is the difference between a VPS and a hosting webserver ?
<qman__> itgeo, Shared web hosting only gives you access to certain features, usually through a web panel. VPS gives you a whole server under your control, but it runs in a virtual environment on shared hardware. Dedicated server gives you an actual hardware server under your control.
<itgeo> qman__: thank you :).. I ve never heard about VPS before. And the prices are interesting
<qman__> itgeo, the idea is to give you the features and control of a dedicated server, without the steep price, the cost being reduced performance and resources
<pacemkr> hello. i have a question about the init system. i see that there is both sysv init ssh in /etc/rc2.d and an upstart job ssh.conf in /etc/init. shouldn't one be enough? don't they conflict?
<qman__> pacemkr, there is only upstart, but upstart still has a sysv mode where it manages sysv style scripts for compatibility
<qman__> the one in rc?.d is usually a notice of how it has been moved to upstart and to use that instead
<pacemkr> qman__:  i dont know if it matters, but im on 10.04 LTS and both files have startup procedures for sshd
<pacemkr> and upstart starts rc.d
<qman__> I just checked on a lucid server and I don't have any ssh scripts in rc.d
<pacemkr> hmm. this is on a linode image, maybe they put it there....
#ubuntu-server 2012-04-08
<patdk-lap> what is whoopsie?
<uvirtbot> New bug: #975973 in samba4 (main) "integrate with bind package" [Medium,Triaged] https://launchpad.net/bugs/975973
<wdroberts> patdk-lap: not sure, but i noticed it on my 12.04 build. maybe try 'man whoopsie' or 'vi /etc/init.d/whoopsie'.
<patdk-lap> I tried
<patdk-lap> missing
<patdk-lap> I've installed 12.04 many times, for iso testing, but this is the first time I actually looked at it
<wdroberts> the name 'whoopsie' reminds me of 'oops' in the kernel, not sure if they're related or not
<patdk-lap> seem to be
<patdk-lap> but no info about it or how it works or anything
<delerium_> Hi, Ok... I was stupid.... I upgraded my server to 12.04 and I have some software (Zimbra) that doesn't work anymore.  Is there a way to roll back this upgrade ?
<delerium_> (I was coming from 10.04)
<sysc> no way to roll back afaik...
<sysc> unless you're running vmware and took a snapshot pre-upgrade
<delerium_> sysc: nha.... I confirm, I was stupid ;(
 * pehden is away: I'm busy
<darkqaos> Hello
<darkqaos> installing windows server 8
<darkqaos> right now
<darkqaos> to test
<darkqaos> anyone played with it?
<koolhead17> Daviey, ping
<TeLLuS> Hi I noticed after upgrading that vservers is not working in precise. Since they moved to another server I have not been using it myself for some time.. I wonder if there is any other similar containment option available so I do not have to patch a kernel every time. If I look around only KVM is looking ok, but from what I can see it is not handling memory in an efficient manner like the vserver does.
<darkqaos> so
<darkqaos> does anyone ever discuss anything here
<jacobw> people discuss problems to which they desire answers
<jacobw> and sometimes other things :)
<azertyi> hello there
<azertyi> is there anyone know how to mount using nfs ?
<darkqaos> yeah
<darkqaos> NFS
<darkqaos> what version of Ubuntu
<darkqaos> are you running?
<sw> !nfs | azertyi
<ubottu> azertyi: nfs is the network file system. See https://help.ubuntu.com/community/SettingUpNFSHowTo for information on installing and configuring NFS.
<darkqaos> wow
<darkqaos> very nice!
<darkqaos> okay ubunt-server
<darkqaos> I just put ubuntu on an old p3 server
<darkqaos> :) used to run windows server 2000
<darkqaos> i dont know what to do, its in my lab right now im SSH'd in
<darkqaos> any fun tricks? Thought id do some web hosting for fun
<azertyi> 11.10
<azertyi> ubuntu 11.10 darkqaos
<sw> darkqaos: fun tricks?
<darkqaos> @sw yeah - I've only used ubuntu as a samba server so far
<darkqaos> along with a web host (LAMP)
<darkqaos> also ran qemu, libvirt, and more for an Archipel project
<darkqaos> now im looking to get some life out of this old machine
<darkqaos> htop currently shows it running with only 21mb of ram consumed :)
<pehden> http://ubuntuforums.org/showthread.php?p=11828468#post11828468
<pehden> anyone?
<virusuy> gents
<pehden> Hi
 * pehden is away: I'm busy
<patdk-lap> pehden, kill the autoaway, and why would you be attempting to compile software yourself?
<patdk-lap> especially if you don't know how to solve simple issues like that
<patdk-lap> you should attempt to follow a tutorial that is for the os you're using
 * pehden is back (gone 00:10:08)
<pehden> it wasnt an auto away
<pehden> it was a switch
<pehden> its noticeable
<pehden> that tut didn't work. that's why i came here.
<pehden> the file is there. it just says its not. my system is 64bit
<Monkeypaws> can someone chat with me about LVM on my NAS?  I want to do it, i think it makes sense for me, but i dont want to paint myself into a corner with something complicated...  I want to add in drives of varying sizes regularly and have it presented as 1 store.  I don't think i want the snapshot stuff, just the ability to add in drives as I go.
<KM0201> what do you mean, "paint yourself into a corner"?
#ubuntu-server 2013-04-01
<devral> my ubuntu server install keeps hanging and I don't get it..it hangs right after "Freeing unused kernel memory: 1102k freed"
<devral> two things that may help are: "VFS: Mounted root ext4 filesystem readonly" and "ext3-fs sda2 error: couldn't mount because of unsupported optional features (240)" and also "ext4-fs sda2: mounted filesystem with ordered data mode Opts: null"
<dandkburt> Devral
<dandkburt> is it the server or the desktop you're trying to install
<devral> it's the server. I've had it installed for months. it was working, then just started doing it suddenly after a power failure
<dandkburt> what ver
<devral> 12.10
<dandkburt> can I pm you
<devral> sure
<petey> hello would anyone know why my /etc/mysql/my.cnf would be empty?
<sarnold> petey: it's easy to empty a file with > -- just > /etc/mysql/my.cnf would do it
<petey> how do i repopulate it with the information it needs to run properly?
<petey> mysql -V and all the jazz of creating databases works, i just cant optimize my mysql config!
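The truncation sarnold describes can be demonstrated safely on a scratch file rather than the real my.cnf:

```shell
# A redirection with no command truncates the target file to 0 bytes --
# shown here on a throwaway file created with mktemp:
f=$(mktemp)
echo "some config" > "$f"
> "$f"              # this alone empties the file
wc -c < "$f"        # prints 0
```

To get a default config back, one option (hedged: the exact package name varies by release) is reinstalling with missing-conffile recovery, e.g. `sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall mysql-server-5.5`.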
<Yuvraj> hi
<jolaren> I was fiddling and trying to get my certificate to work but now after switching everything back I can't get my website to be displayed outside the lan :(.. what did I Do?
<jolaren> I solved it by disabling 443 in ports.conf
<jolaren> hmm
<lolyp0p> hi all, i'm trying to set up a vpn connection on a server I can only access via ssh (or a 6-hour train trip), so I'm a bit afraid to do it: if I make the server lose its internet connection, it'd be very annoying for me... I thought about using this step-by-step http://cviorel.easyblog.ro/2009/02/09/how-to-set-up-a-vpn-server-on-ubuntu/ , does anyone have any "better" guide? (server : ubuntu 10.04)
<lolyp0p> (ifconfig output : http://paste.ubuntu.com/5667063/) (my main problem is I'm not used to working on servers and the ifconfig output isn't the same as on a normal computer; I don't know what I should put in "remoteip" and "localip")
<GeekDude> What is the best/easiest way to view and edit your firewall?
<resno> ufw
<resno> comes to mind
<resno> GeekDude: best / easiest is subjective
<resno> some would say, iptables is the best (direct control)
<resno> others would say ufw is good
<resno> i personally use cfs...
<shauno> iptables wins points for viewing, simply because it'll show them no matter what frontend (ufw etc) created them.  but if you even have to ask, it's probably going to fail you on all other fronts.
<halvors> I'm trying to setup nagios3 on ubuntu-server 13.04.
<halvors> I found that there are a lot of commands in the /usr/lib/nagios3/plugins folder. But none of these can actually be used as a service in my switch.cfg file...
<halvors> Is there any way i can actually use all these plugins that i've installed?
<Pici> halvors: you'd need to define commands for them.
<halvors> hmm.
<halvors> In the /etc/nagios3/conf.d/services_nagios2.cfg?
<Pici> You may want to check out nagios's online documentation, it can get kind of complicated.
<halvors> Shouldn't this have been included by the package?
<Pici> I don't know what a default nagios3 install looks like anymore, sorry.
<halvors> Pici: Nagios's documentation and ubuntu's nagios package seems to differ when it comes to configuration files...
<halvors> Pici: Do you know any services.cfg file i can get from the net that covers all the plugins?
<Pici> halvors: its all very fluid. You can call the files whatever you want as long as they are referenced in your nagios.cfg file
<halvors> Pici: Do you know the command for monitoring the port status using snmp
<ZarroBoogs> halvors: Sorry, I don't have any snmp commands setup on my nagios install here, and even if I did, I don't know the MIB for that.
<halvors> hmm. The commands seems to be "check_snmp!-C public -o ifOperStatus.1 -r 1 -m RFC1213-MIB" in the nagios docs, but that doesn't work on ubuntu.
<halvors> Also with the ubuntu package...
<guma> when I am trying to do gcore I am getting at times this messages... warning: Memory read failed for corefile section, 1048576 bytes at 0x7f700846c000. What can cause this? (Ubuntu 12.10 x64)
<sarnold> guma: does /proc/pid/maps show that area lacking read permission?
<guma> sarnold: let me check
<guma> sarnold: I never looked at this before. I am new to this. I am assuming you're asking me to check the range for 0x7f700846c000?
<sarnold> guma: yeah
<guma> sarnold: 7f700846c000-7f700866b000 ---p 0000c000 08:12 2097441                    /lib/x86_64-linux-gnu/libnss_files-2.15.so
<guma> I guess so :)
<guma> why would this be?
<sarnold> good question :) I wonder the utility of a mapping with no privileges..
<halvors> I'm trying to use snmp with nagios, but are getting the error "Error: Service check command 'check_snmp' specified in service 'Uptime' for host 'infected-tech' not defined anywhere!"
<halvors> Anyone knows why?
<andol> halvors: Well, have you defined the service command check_snmp anywhere? :)
<andol> halvors: Unsure on how familiar you are with Nagios, but having a check_snmp service command is separate from possibly having a binary with the same name.
<andol> halvors: As an example, take a look in /etc/nagios-plugins/config/ping.cfg and you will see the service command check_ping using the check_ping plugin, but also a bunch of other service commands relying on the same plugin
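As a sketch of what andol describes, a command definition plus a service using it might look like the following. The file name and command name are illustrative; the plugin path is where Ubuntu's nagios-plugins package installs, and the check_snmp arguments come from the invocation halvors quoted:

```
# e.g. /etc/nagios3/conf.d/snmp_ifstatus.cfg (any filename works,
# as long as nagios.cfg includes the directory)
define command {
        command_name    check_snmp_ifoperstatus
        command_line    /usr/lib/nagios/plugins/check_snmp -H '$HOSTADDRESS$' -C '$ARG1$' -o ifOperStatus.1 -r 1
}

define service {
        use                     generic-service
        host_name               infected-tech
        service_description     Port status
        check_command           check_snmp_ifoperstatus!public
}
```

The `!`-separated arguments in `check_command` fill `$ARG1$` and friends in the command definition, which is why one plugin can back many differently-parameterized service commands, as in the ping.cfg example.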
<guma> sarnold: I am wondering is ubuntu has some special security "enabled" that does lock some areas...
<sarnold> guma: not that I'm aware of..
<guma> sarnold: what makes me think about this that this is libnss.
<sarnold> guma: on my bash, libc, libdl, libtinfo, libnss_files, libnss_nis, libnsl, libnss_compat all have mode ---p ...
<guma> ok
<guma> ok so perhaps this is how it is and gdb should just be "silent" about this...
<guma> I was just worrying that something really odd going on. But looks like you have this too :)
<Quest> hi
<halvors> andol: But why isn't it there by default?
<Quest>  Is there ANY way, if I have multiple dsl / internet connections (e.g 2mbps, 1 mbps, 2 mbps) , to use them as a combined strength of 5 mbps?
<halvors> andol: What will the command be then?
<sarnold> Quest: chapter 4 of the LARTC guide may help: http://lartc.org/howto/
<Quest> sarnold,  hm.. so its possible?
<RoyK> Quest: not for single connections, but you can combine them so that different connections use varying paths
<sarnold> Quest: yeah; I'd have thought it would require running a routing daemon to do a good job, but that guide makes it look surprisingly feasible.
<Quest> k
<RoyK> sarnold: no point in running a BGP or OSPF daemon without one on the other end of the links
<sarnold> RoyK: indeed.
<sarnold> RoyK: aka "well beyond my pay grade" :)
<RoyK> but yes, it's possible with some LARTC tweaks to make it work (somehow)
<Quest>  when i press the tab key while typing a path name like /etc/apache2, at the point /etc/apac, it does not auto-complete the name to apache2; rather it adds a tab space. this behaviour is different from all the vps I ever used. why is that? i'm on ssh
<zatricky> Quest: Is bash-completion installed and are you using bash or a different shell?
<andol> halvors: Well, does any of the commands in /etc/nagios-plugins/config/snmp.cfg satisfy what you need to check? Otherwise, just create your own, and use them as examples/inspiration.
<andol> halvors: I mean, the way the service commands are defined is all about what parameters you want to be able to pass to the plugin.
<Quest> zatricky,  i'm on ssh via console
<Quest> zatricky,  I don't know about bash completion
<RoyK> Quest: which shell?
<RoyK> Quest: ps $$
<sarnold> Quest: dpkg -l bash-completion will tell you about the bash-completion package
<RoyK> sarnold: bash doesn't need that for tab expansion - that package just makes it better
 * RoyK guesses Quest is using dash
<RoyK> (or tab completion, even)
<tonyyarusso> Does anybody actually understand what makes the MOTD say "X packages are security updates"?  I just had that in there, but doing an 'apt-get -s upgrade' everything was from -updates, not -security.
<jdstrand> tonyyarusso: /etc/update-motd.d/90-updates-available
<Quest> RoyK,  I have command concole in my linux OS. and the vps iam sshing is also linux
<jdstrand> tonyyarusso: note that security updates are copied from security.ubuntu.com (ie, -security) to archive.ubuntu.com (ie, -updates) within an hour of publication
<RoyK> Quest: ps $$ ?
<jdstrand> tonyyarusso: for mirroring
<tonyyarusso> jdstrand: aaah, hrm.  So there must be some separate marker then.
<Quest> sarnold,    bash-completion                            1:1.3-1ubuntu8                             programmable completion for the bash shell
<Quest> RoyK,  ps $ ? whats that
<zatricky> type the "ps $$" in the shell without the quotes
<RoyK> ps $$ prints the process info of the current process (your shell)
<zatricky> its two dollar-signs
<sarnold> Quest: is it 'ii ' or some other status? 'ii ' indicates installed without error
<Quest> RoyK,  yes. i noticed the difference between other vps and this one is that it only shows $ rather than user@someip$
<Quest> sarnold,  ii
<RoyK> Quest: its output should be something like this http://paste.ubuntu.com/5668131/
<jdstrand> tonyyarusso: looks like /usr/lib/update-notifier/apt-check is what generates the actual message
<Quest> mine is like this
<Quest> PID TTY      STAT   TIME COMMAND
<Quest>  3371 pts/0    Ss     0:00 -sh
<RoyK> Quest: chsh -s /bin/bash
<sarnold> RoyK: another guess well guessed. :D
<RoyK> sarnold: :)
<Quest> RoyK,  did that
<Quest> RoyK,  what now?
<RoyK> Quest: then log out and in again
<Quest> hm
<Quest> what was actually wrong?
<RoyK> wrong shell
<RoyK> you had dash as your default shell, and dash sucks
<Quest> dash?
<RoyK> dash
<RoyK> or sh, which normally is a symlink to dash
<sarnold> dash does its best to be a small and fast POSIX-compliant 'sh'. But 'sh' is not very friendly. 'bash' is friendlier to more people.
 * RoyK has no idea why dash should be the default shell, since it's not like we're killing our computers running bash
<sarnold> RoyK: it might be a reasonable VPS setting though, they might give him only 128 M or something. if you're not using those extra features, that extra megabyte might be awesome :)
<Quest> hm
<RoyK> setting SHELL=/bin/bash in /etc/default/useradd makes bash the standard shell for new users
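The diagnosis above boils down to two checks: what shell is running now, and what login shell the account is configured with. A minimal sketch:

```shell
# Identify the shell you are currently in ("sh"/"dash" vs "bash"):
ps -p $$ -o comm=

# Check the login shell recorded for your account (7th passwd field):
getent passwd "$(id -un)" | cut -d: -f7

# To switch, as RoyK advises (commented out because it changes your account;
# the new shell takes effect at the next login):
# chsh -s /bin/bash
```

On Ubuntu, `/bin/sh` is a symlink to dash, which is why an account created with `-sh` as its shell gets neither tab completion nor the `user@host$` prompt.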
<Quest> )
<Quest> :)
<Quest> hm
<Quest> the admin made my account incompetently
<RoyK> sarnold: really? normally you're not logged into the box most of the time, and using dash really sucks
<RoyK> in my 'raidtest' vm, ps v $$ tells me RSS for bash is 3376kB - not a whole lot
<Quest> how to give /home/usename folder to the 'username' ?
<sarnold> RoyK: I'd prefer to have it for tab-completion, but on a slow-enough link, tab completion uncertainty makes me prefer typing commands entirely...
 * RoyK generally doesn't use links that slow :P
<sarnold> RoyK: it's jarring when on poor-quality wifi.. :)
<Quest> the admin has sucked
<RoyK> sometimes, ok, some Edge link out in the middle of nowhere - that can be rather bad
<RoyK> Quest: what's bad? just the shell?
<RoyK> that's default, btw, not the admin's fault
<zatricky> or when the server you're looking at has unexplained high load and you need to figure it out
<Quest> the shell is ok now. but he made my account by my name and gave my home folder as root owner
<Quest> RoyK, ^
<RoyK> Quest: hehe
<Quest> RoyK,  how to own my home folder?
<RoyK> Quest: should have just added you with 'useradd -m <username>' to have that created automatically owned by you
<RoyK> Quest: do you have sudo access?
<Quest> I guess
<RoyK> sudo -i
<Quest> I guess i couldnt without sudo access?
<RoyK> no
<RoyK> that's part of standard unix security
<Quest> ya. i can sudo
<RoyK> sudo -i; cd ~yourusername; cp -R /etc/skel/. .; chown -R yourusername:yourusername .
<Quest> what wil  cp -R /etc/skel/ do
<RoyK> the 'skeleton' of what should be in a homedir is in /etc/skel. that's stuff like .bashrc. if those files are there already, you don't need that part
<RoyK> I just guessed they're not, since the dir was owned by root
<Quest> hm
<Quest> RoyK, http://paste.ubuntu.com/5668208/
<RoyK> Quest: first, check if you have those files in /etc/skel/ in your homedir - just 'ls -a' in both $HOME and /etc/skel/
<RoyK> Quest: then, if you don't, first sudo chown -R $USER:$USER $USER
<zatricky> strictly-speaking not sure if this is server-related - battling to find a ppa or repository for sickbeard - any hints on where I should look?
<RoyK> then cp -R /etc/skel/. . # with the dots and spaces
<Quest> RoyK,  chown Masood /home/Masood              made drwxr-xr-x  2 Masood root 4096 Mar 27 20:45 Masood
<RoyK> Quest: $USER:$USER, not just $USER
<RoyK> you want the group to be changed too
<Quest> hm
 * RoyK wonders what sort of admin creates usernames with initial caps
<Quest> :) dumb admin
<sarnold> heh
<RoyK> no, it doesn't matter, it's just bad practice
<zatricky> I was under the impression that a username could *not* be uppercase
<Quest> done . Masood@li53-245:/home$ sudo chown Masood:Masood /home/Masood
<Quest> drwxr-xr-x  2 Masood Masood 4096 Mar 27 20:45 Masood
 * RoyK hasn't even thought of using anything but lowercase
<RoyK> Quest: looks ok - pastebin 'ls -a $HOME'
<sarnold> don't forget the -R ...
<RoyK> sarnold++
<RoyK> Quest: sudo chown -R Masood:Masood /home/Masood
<RoyK> otherwise you'll only chown the dir, not its content
<Quest> thanks !
<Quest>  ls -a $HOME
<Quest> .  ..
<RoyK> ok, do 'cd; cp -R /etc/skel/. .'
<RoyK> cd without arguments will send you to $HOME
<Quest> where do i need to be while doing cd; cp -R /etc/skel/. .
<RoyK> logged in as your user. cd alone will take you to your homedir
<RoyK> Quest: or cp -R /etc/skel/. $HOME if you're nervous
<Quest> RoyK, http://paste.ubuntu.com/5668248/
<RoyK> Quest: what? no .bashrc or anything in /etc/skel/?
<sarnold> o_O
<Quest> http://paste.ubuntu.com/5668250/
<RoyK> well, copy those into your homedir
<RoyK> the cp -R command should certainly do that for you
<Quest> RoyK,  i should cp -R    while iam in /etc/skel?
<RoyK> just tried locally here, and it does
<RoyK> 20:36 < RoyK> Quest: or cp -R /etc/skel/. $HOME if you're nervous
<sarnold> yeah, my cp -R works as expected and copied the dotfiles too...
<Quest> Masood@li53-245:/etc/skel$ cp -R
<Quest> cp: missing file operand
<RoyK> Quest: did you try the full command I gave you, or just cp -R? ;)
<Quest> :/etc/skel$ cp -R /etc/skel/. .
<Quest> cp: `/etc/skel/.' and `./.' are the same file
<RoyK> Quest: run cd alone first to go to $home, or cp to $HOME
<RoyK> I'm rather certain I said that
<Quest> iam at home now. what should i type?
<RoyK> 20:39 < RoyK> 20:36 < RoyK> Quest: or cp -R /etc/skel/. $HOME if you're nervous
<Quest> k
<Quest> done
<RoyK> ls -a $HOME
<Quest> .  ..  .bash_logout  .bashrc  .profile  www
<RoyK> then try to log out and in again
<Quest> .  ..  .bash_history  .bash_logout  .bashrc  .cache  .profile  www
<Quest> RoyK,  ok?
<sarnold> woo.
<sarnold> Quest: now try your cd /etc/apa<tab> and see what happens. :D
<Quest> sarnold,  that worked ages ago :)
<Quest> RoyK,  so all that was for making these .bash*, .cache, .profile files visible? why were they so important?
<RoyK> your prompt should be good now too
<Quest> yes it is
<RoyK> Quest: no, it was to make your shell work as it should
<Quest> hm
<Quest> why were those files important?
<sarnold> Quest: they're mostly important if you want to make changes to them; having the skeleton versions gives you a reasonable starting point and good comments in the files showing you some simple changes you can make
<Quest> RoyK,  you said the admin must have done useradd -m <username> . is that the correct way of doing it? if so, what would be the group name I would be associated with?
<Quest> sarnold, hm
<sarnold> Quest: 'Masood' appeared to work fine (chown didn't complain, anyway) so that should be your group. the 'id' command will show you your id, primary group, and supplementary groups
<Quest> great
<RoyK> Quest: not *must* have done, *should* have done
<Quest> hm
<RoyK> but don't worry about that
<Quest> RoyK,  sarnold  thanks guys !!!!!!
<sarnold> Quest: have fun :)
<Quest> I will
<Quest> I have another question that no one answered . I have this error while mounting a ntfs partition. what can be wrong http://paste.ubuntu.com/5668301/ ?
<Quest> windows doesn't check it. it says to format it
<sarnold> Quest: I've successfully used The SluethKit to recover data from damaged NTFS filesystems in the past: http://www.sleuthkit.org/
<Quest> hm
<Quest> recuva is also good. but I dont want to see my partition go in recovery
<RoyK> Quest: then don't use NTFS :P
<Quest> hm
<RoyK> I use ext4 and xfs on my systems, depending on how large disk systems I have
<Quest> RoyK,  I heard linux ext4 data cannot be recovered as linux deletes the files from the hd. ntfs or fat32 only deletes the reference
<RoyK> Quest: well, most filesystems don't have undelete built-in as bad design like fat32 and perhaps ntfs has. they either remove a file when told to, or keep snapshots to allow rollback. the fat32 way is just very bad design
<RoyK> Quest: keep a backup - that works - on all filesystems
<Quest> which is more difficult to recover, ext3 or ntfs?
<RoyK> neither is hard to recover with a backup ;)
<sarnold> Quest: ext4 _can_ be configured to tell the block layer to throw away the data when it is no longer needed, via the 'discard' mount option, but since the TRIM command is a non-queuable command, it somewhat stalls the IO pipeline...
<Quest> hm
<Quest> which is more difficult to recover, ext3 or ntfs? (assuming there's no backup)
<sarnold> Quest: the source code for ext3 is easily available. NTFS source is not. ext3 is likely more recoverable, but .. you should not be in that position in the first place. backups. :)
<RoyK> sarnold: I think Quest means he wants extN or whatever fs to misbehave like fat32
<sarnold> RoyK: oof. the number of mistakes in fat32...
<RoyK> Quest: I want my filesystems to remove a file when I tell them to
<RoyK> Quest: if you want snapshots, zfs has it, btrfs has it, lvm has it (somewhat buggy, but hell, better than nothing), xfs might get it soon
<Quest> i just wanted to know on which files system files can be recovered most easily if damaged
<RoyK> good filesystems don't get very easily damaged
<RoyK> fat32 gets very easily damaged, since it's a lousy filesystem
<RoyK> ext4 checksums metadata, meaning it'll know if something goes wrong on the lower levels
<RoyK> zfs/btrfs checksums everything, meaning it'll know if a single bitflip happened
<sarnold> is btrfs really ready for production use?
<RoyK> those are self-healing. ext4 is pretty good in that aspect too
<RoyK> sarnold: no
<RoyK> sarnold: I have it on a couple of machines for testing - works for that, but wouldn't recommend it in prod yet
<sarnold> RoyK: okay :) keep me updated. It's sounded promising for .. uhh .. six years? :) but I'm still hesitant to try.
<Quest> RoyK,  there are many recovery tools for ntfs. are there as good as for ext4?
<sarnold> the wikipedia infobox on filesystems needs another entry, "developer keeps home directory in this format: yes/no"
<RoyK> sarnold: btrfs raid-[56] was just released in 3.8 (iirc)
<RoyK> Quest: I can hardly remember a recovery problem with ext4
<RoyK> Quest: it happens very rarely, since the filesystem is rock solid
<RoyK> Quest: so keep a good backup and ask if something goes wrong
<RoyK> Quest: I've only managed a few hundred linux machines the last 5 years or so, so I may be wrong, but I trust myself on the filesystem part
<Quest> ntfs is not as solid as ext4?
<RoyK> just use native filesystems
<RoyK> on whatever platform
<Quest> hm
<Quest> I have installed php5 and apache2 but php is not working. i mean .php files are not parsed. rather downloaded. why?
<thesheff17> try to restart apache2
<genii-around> Hopefully you've installed libapache2-mod-php5 and then for good measure issued:  sudo a2enmod php5     and then sudo service apache2 restart
<shodan45> what's the bare minimum space required to get a bootable OS?
<Rack27> Hey, I'm having a hard time getting nullmailer to send a mail. The mail log always shows "Temporary error in gethostbyname". Any idea what to do next?
<sarnold> Rack27: can you use nc or telnet to connect to your relay's hostname?
<Rack27> sarnold, no. Seems that this is not possible
<sarnold> Rack27: I asked about nc or telnet since they'll use the standard resolver configuration; iirc, ping has its own resolver configuration.
<sarnold> Rack27: can you do any dns from that host?
<Rack27> sarnold, sorry, I'm not quite experienced. Trying to learn this, but I can't find any good tutorial for nullmailer online. Someone wrote I have to set up /etc/hosts properly so this can work.
<sarnold> Rack27: maybe...
<sarnold> Rack27: if your mail relay doesn't have a resolvable hostname, that's a good second option. best is if it actually resolves. :)
<Rack27> sarnold: okay, thanks. Will try something ^^
<sarnold> Rack27: good luck :)
<Rack27> :)
<Quest> can any one tell why i cannot hibernate? $ sudo hibernate now .       hibernate:Warning: Tuxonice binary signature file not found. /usr/sbin/hibernate: 481: shift: can't shift that many
<holstein> are you running the server edition?
<holstein> that might be it..
<keithzg> So wait, how in the world would I retrieve a prior version of a file from S3? With versioning enabled, I would've thought it'd be easy, but even the web interface gives no option to download specific older versions, much less unofficial tools like s3cmd.
<keithzg> Does anyone have any idea? Surely there must be *some* way of retrieving prior versions of files from Amazon S3.
#ubuntu-server 2013-04-02
<premera> keithzg: Have look here http://docs.aws.amazon.com/AmazonS3/latest/dev/RetrievingObjectVersions.html
<keithzg> premera: Soooo I'll probably have to use curl or such after setting up an ACL? The latter saddens me (I find Amazon's documentation on permissions extremely unfriendly), but oh well. I'll tackle this tomorrow I guess.
<premera> keithzg: I am not sure, from the example on that page it looks like you can pass version number in a url, if you know the version that is
<pmatulis_> does one typically edit iptables filter rules created from iptables-save ?
<lifeless> smoser: hey, are you planning on publishing arm cloud images ?
<lifeless> smoser: (also, https://wiki.ubuntu.com/UbuntuCloud/Images/Publishing has stale releases; is it otherwise up to date?)
<anepanaliptos> hey guys, you know how in linux we can make our /etc/network/interfaces look like  iface eth0 inet dhcp and then also put a line that says iface eth0:1 inet static
<anepanaliptos> any way to do this in windows? so my laptop always has a static ip but whatever wifi im connected to i also get the 'local' dynamic one?
<anepanaliptos> (only reason why im asking here is cos there is no way in hell a windows user would know what eth0:1 is)
<ScottK> Doesn't particularly make it on topic.
<ScottK> Personally, it's been about a decade since I'd have known that.
<tgm4883> anepanaliptos, you can set a second IP address on a NIC in windows, I'm unsure if it will work like you want it to though
<dandkburt> can I get some help diagnosing my eggbot
<ScottK> !ask | dandkburt
<ubottu> dandkburt: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<dandkburt> cannot get eggbot to connect
<blag> I can't get postfix to send mail to gmail servers, i always get either "connection timed out" or "network is unreachable" errors. i know that my server can connect to the internet, so can anybody help me?
<blag> dandkburt:  why did you pm me to connect to ddb.ddbirc.net ?
<dandkburt> because that was the only place that was able to help me on my postfix prob
<dandkburt> all I got here was go to a http here
<dandkburt> and I found out it was a problem with another area
<chrislabeard> is there anyway to mount an afp share in ubuntu? I tried afpfs-ng but there seems to be an outstanding bug on it.
<cusion> hi there, i tried to set up a proxy server on my ubuntu server 12.04 using squid3. after i set "http_access allow all" and ran squid, tcp port 3128 did begin to listen; however, when i configured the proxy on another PC and attempted to surf the Internet, it did not seem to work as intended. I found that the server did not send back an ack packet in response to my client's request at all. how is
<cusion> that possible, and could anyone help me? Thanks
<bestdnd> hi. i want to record all network trafic with a certain server over several minutes. what program should i use? i need the info about ports, protocols and the time. thanks
<workerbee> Hello. How would I install/run a webserver without root access (portable)? Perhaps in the home or any other directory?
<workerbee> Im not a privileged user there.
<Jeeves_> workerbee: You can run it from /home and as your own user, you will just not be able to listen on ports < 1024
<doogy2004> Python has a webserver built into it.  Check this out: http://www.linuxjournal.com/content/tech-tip-really-simple-http-server-python
<workerbee> Jeeves_: so ports >= 1024. but is that bad ?
<workerbee> I'm about to install drupal on that one.
<workerbee> doogy2004: ok. ill check.
<Tzunamii> Q: I'm getting garbled kernel logs running 12.04.2 server with LXC. Anyone with a solution, please?
<workerbee> doogy2004: unfortunately the Simple server is not installed. it's not actually ubuntu, but a debian system.
<Jeeves_> workerbee: Well, depends on what you want to do with the site :)
<elithrar> Hi all. Having an issue with 12.10 + ufw on EC2. apt-get won't work/resolve hostnames. ufw default deny incoming/outgoing, but have ufw allow out 53 & ufw allow from <dnsip>
<elithrar> join ##aws
<workerbee> Jeeves_: I just want to test a drupal7 modules locally on a terminal machine. So I dont have root rights. I hoped theres a hit-and-run webserver application for this purpose.
<Jeeves_> workerbee: You can run any php-capable webserver
<Jeeves_> As any user
<Jeeves_> So what you want is possible, just not on port 80
<Jeeves_> But any port > 1024 will do, unless it's in use
<Jeeves_> Just visit the site using http://localhost:port/
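What Jeeves_ describes can be tried in one line with the Python server doogy2004 linked; the port number 8123 is an arbitrary unprivileged choice, and python3 is assumed to be present:

```shell
# Serve the current directory on a port > 1024 -- no root needed:
python3 -m http.server 8123 >/dev/null 2>&1 &
pid=$!
sleep 1

# Fetch / and print the HTTP status code:
status=$(python3 -c 'import urllib.request; print(urllib.request.urlopen("http://127.0.0.1:8123/").status)')

kill "$pid"
echo "$status"    # 200
```

For the actual drupal test, the same principle applies to any PHP-capable server built and run as an unprivileged user: bind a high port and browse http://localhost:8123/.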
<thejoecarroll> i'm having trouble with sshd on 12.04.1 after trying to restart the daemon. 'sudo service ssh start' results in errors like the following in syslog: 'Apr  2 14:53:13 dev kernel: init: ssh main process (10234) terminated with status 255'. i'm currently connected to the VPS via ssh and absolutely must get this fixed ASAP or the server will become unmanageable :-(
<elithrar> thejoecarroll: Do you have a malformed config file?
<thejoecarroll> i don't think so
<thejoecarroll> i've looked throug it for problems but haven't found any
<freesbie_> try starting sshd by executing it directly, then you may get a more usable output
<thejoecarroll> good idea. not much help in any logs
<freesbie_> checked daemon.log ?
<thejoecarroll> that's empty
<thejoecarroll> i wanted to tighten the security settings in /etc/ssh/sshd_config there must be some error i've introduced
<freesbie_> then best action is to execute directly to get the error
<thejoecarroll> oops. just spotted a typo: i was comparing my previous etckeeper/bzr-committed and new versions of ssh_config instead of sshd_config
<thejoecarroll> no wonder they were the same
<thejoecarroll> let's have another look :-)
<thejoecarroll> i reverted to the last working version of the config and it's working again. phew! now i'll see what i messed up and do it right
<thejoecarroll> thanks for the support :-)
<thejoecarroll> turns out i had mistyped "no" after PermitRootLogin
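For reference, the directive in question, with a hedged tip: `sudo sshd -t` parses sshd_config and reports errors without restarting the daemon, which would have surfaced the typo before the running sshd died with status 255.

```
# /etc/ssh/sshd_config
PermitRootLogin no    # a stray character after "no" is enough to make sshd refuse to start
```

Keeping the existing ssh session open while testing a config change (and only logging out after a fresh connection succeeds) is the other safety net when the server is remote-only.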
<workerbee> Jeeves_: I think you assumed the webserver is already installed.
<Jeeves_> workerbee: If it isn't. Just download the source from $website and compile it
<Jeeves_> I didn't say it would be easy :)
<workerbee> Jeeves_: alright :) Still, it should be made an easy thing, don't you think?
<Jeeves_> workerbee: No.
<workerbee> Jeeves_: one click webserver sandboxing.
<Jeeves_> workerbee: You have vps suppliers for that :)
<workerbee> Jeeves_: you mean a sandbox hosted somewhere else, not locally? I would not feel comfortable with that, because the data to test on is sensitive (user accounts, passwords).
<Jeeves_> workerbee: Your test-environment is sensitive on security?
<Jeeves_> workerbee: Anyways, nevermind.
<Jeeves_> I think you should set up a vps or vm on your own hardware to do proper testing, instead of fiddling with a webserver on a machine without proper permissions
<workerbee> Jeeves_: yes, it's probably the best. thanks a lot.
<thejoecarroll> i'm doing an rsync backup from a hosted VPS to a local, physical server and twice it has gotten stuck at the same point on the same large file. the verbose progress output includes the following errors, which are making me concerned about the possibility of a pending hardware failure or unreliable storage where the VPS is hosted:
<thejoecarroll> https://gist.github.com/thejoecarroll/5292211
<dandkburt> who is hosting the vps
<thejoecarroll> OVH
<zul> yolanda: https://code.launchpad.net/~zulcss/nova/rc2/+merge/156558
<yolanda> done
<zul> yolanda: thanks https://code.launchpad.net/~zulcss/horizon/rc2/+merge/156562
<yolanda> ok
<zul> yolanda:  one more after this one https://code.launchpad.net/~zulcss/keystone/rc3/+merge/156564
<zul> yolanda:  https://code.launchpad.net/~zulcss/swift/rc2/+merge/156569
<zul> yolanda:  one more https://code.launchpad.net/~zulcss/python-cinderclient/1.0.3/+merge/156575
<dizopsin> hi! now this really sounds like a faq but i have so far been unable to research anything conclusive ever since i first used 10.04:
<dizopsin> how can i get "traditional" output during boot, i.e. at least one line of output per service started, with an "OK" if everything worked out or the corresponding error msg in case of an error?
<zul> jcastro:  ping
<jcastro> pong
<zul> jcastro:  do we have a list of whats in the charm store and what servers are being used by the charms?
<jcastro> what do you mean by "what servers"?
<zul> jcastro:  ie if the charm is using nginx or apache
<jcastro> http://jujucharms.com/charms/precise has the full list for precise
<jcastro> that depends on the charm author
<jcastro> other than wordpress most use apache off the top of my head.
<zul> jcastro:  cool thanks
<jcastro> if a charm has support for nginx but not apache I consider it a bug
<jcastro> but like wordpress has an option to switch, etc.
<jcastro> actually, the policy is to default to things in main if available
<zul> jcastro:  gotcha thanks
<zul> adam_g:  http://people.canonical.com/~chucks/ca when you get a chance
<hallyn_> roaksoax: guess we'll chat later :)
<roaksoax> hallyn_: :)
<Darkstar1> hi all anyone here can soundboard with me on how to configure apache to proxy for tomcat?
<Darkstar1> I have installed mod jk
<Darkstar1> configured virtual hosts
<Darkstar1> and the tomcat server.xml
<stercor> I'm installing Ubuntu 12.10 server. It stops at "Unable to install busybox-initramfs".  10.04LTS, 11.* all do this. Console 4: apt-install or in-target is already running, so you cannot run either of them until the other instance finishes. You may be able to use 'chroot /target' instead.  Where to from here?
<RoyK> stercor: what sort of hardware? or is it a vm?
<stercor> e-machines AMD64
<stercor> 4G RAM; 2Tb hd
<stercor> Currently no system.
<RoyK> first hit on google: http://ubuntuforums.org/archive/index.php/t-1103751.html
<stercor> I have another box to get the boot USB drive
<RoyK> !bug 904021
<uvirtbot> Launchpad bug 904021 in ubuntu-meta "Problem during setup" [Undecided,New] https://launchpad.net/bugs/904021
<RoyK> seems others have the same issue
<RoyK> stercor: which is rather strange - I've installed 12.04 (and 10.04) on numerous machines and never seen this
<RoyK> stercor: what make of machine is this?
 * RoyK hasn't heard of e-machines
<genii-around> Dell crap
<sarnold> RoyK: they were first to market with a <$1000 USD PC!
<RoyK> http://en.wikipedia.org/wiki/EMachines this one?
<ogra_> e-machines definitely isn't dell
<ogra_> it's even cheaper
<genii-around> Yeah, I said Dell when it was Acer
<sarnold> heh, somehow I missed both the gateway _and_ acer acquisitions
<genii-around> I took a few apart before, mostly OEM boards from MSI in them
<RoyK> perhaps they have some funny BIOS?
<sarnold> hah, love the bios update that bricked machines. way to test it, guys...
<genii-around> sarnold: Weirdly, it would still boot from the floppy though. No video. So you had to boot to an older bios disk and type the commands in blind
<RoyK> sounds like an engineering masterpiece
<jcastro> utlemming: Hey are we going to try to get our images listed on vagrantbox.es?
<utlemming> jcastro: we are :)
<utlemming> jcastro: I just need to update the list
<jcastro> utlemming: oh I see
<stercor> I'm installing Ubuntu 12.10 server. It stops at "Unable to install busybox-initramfs".  10.04LTS, 11.* all do this. Console 4: apt-install or in-target is already running, so you cannot run either of them until the other instance finishes. You may be able to use 'chroot /target' instead.  Where to from here?
<jcastro> we're under "Official Ubuntu blah blah" and not under "Ubuntu"
<utlemming> yup
<sarnold> genii-around: wow. just wow.
<genii-around> sarnold: Yeah, pretty messed up.
<RoyK> !patience | stercor
<ubottu> stercor: Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com/ or http://ubuntuforums.org/ or http://askubuntu.com/
<RoyK> is there something like vagrant for kvm/libvirt?
<Free99> hey everyone. I'm getting a lot of spam from cron regarding a php session cleanup, short of just putting the output into /dev/null, does anyone know of a better method to only email me if there is an error?
<RoyK> fix the script ;)
<Free99> (shrug) came from the repos :D
<tonyyarusso> Free99: Either modify the script or create a wrapper that only gives output on error.
<RoyK> Free99: which script from what package? what ubuntu version?
<Free99> /etc/cron.d/php5 on ubuntu 12.04.2
<Free99> x64 by the way
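cron mails whenever a job produces any output, so the usual fix for this kind of spam is to discard routine stdout while leaving stderr alone; real errors then still generate mail. A sketch of the shape only, not the actual packaged /etc/cron.d/php5 (the script path, schedule, and filename below are assumptions):

```shell
# /etc/cron.d/php5-quiet (hypothetical): discard routine stdout,
# keep stderr so genuine errors still trigger a cron mail
09,39 * * * *   root   /usr/lib/php5/sessionclean > /dev/null
```

Redirecting with `> /dev/null 2>&1`, by contrast, silences errors too, which is exactly what Free99 wanted to avoid.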
<dandkburt> anyone here know php, css, and html5
<RoyK> !ask | dandkburt
<ubottu> dandkburt: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<RoyK> dandkburt: but then, I don't think this is the best channel for web coding
<dandkburt> Royk that is my question
<stercor> I'm installing Ubuntu 12.10 server. It stops at "Unable to install busybox-initramfs".  10.04LTS, 11.* all do this. Console 4: apt-install or in-target is already running, so you cannot run either of them until the other instance finishes. You may be able to use 'chroot /target instead.'  Where to from here?
#ubuntu-server 2013-04-03
<lifeless> smoser: ping
<petey> hey how do i change the permissions of a file?
<petey> i want to change the ownership from root to peter but i can't quite get it
<petey> peter ah got it, chown
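petey's answer, sketched on a throwaway file: `chown` changes who owns a file (taking ownership from root needs sudo), `chmod` changes the permission bits. Something like `sudo chown peter: /path/to/file` would be the shape of the actual fix; the demo below only uses `chmod` so it runs unprivileged:

```shell
f=$(mktemp)
chmod 640 "$f"              # rw for owner, r for group, nothing for others
mode=$(stat -c '%a' "$f")   # GNU stat: print the octal mode
echo "$mode"
rm -f "$f"
```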
<elithrar> Hi all. Having issues with an EC2 instance + Ubuntu 12.10 + ufw. A `ufw default deny outgoing` breaks aptitude, ping & DNS, despite allowing 53/80/443 out, and allowing established connections through modifying ufw.before.rules
<sarnold> elithrar: ping uses the icmp protocol; I don't think ufw does anything with icmp. Are you sure you aren't fighting the aws security groups?
<elithrar> sarnold: You're right re: ping. Definitely not an AWS sec-group issue, as the problem disappears if I disable ufw/turn off the deny outgoing rule
<sarnold> elithrar: aha :) that's a bit conclusive then
<elithrar> Extremely similar to this issue: http://serverfault.com/questions/416727/ufw-blocking-apt-and-dns
<sarnold> elithrar: ha! my mistake. ufw does do icmp: https://help.ubuntu.com/community/UFW#Allow_Access
<sarnold> ... and out of battery, good luck :)
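For the record, a common culprit with `default deny outgoing` is allowing DNS only over TCP; most queries go out as UDP, which then breaks name resolution and, with it, apt. A sketch of the rule set described above (illustrative, not elithrar's actual config):

```shell
sudo ufw default deny outgoing
sudo ufw allow out 53/udp     # DNS is mostly UDP
sudo ufw allow out 53/tcp
sudo ufw allow out 80/tcp
sudo ufw allow out 443/tcp
```

ICMP, as the linked help page notes, is handled in /etc/ufw/before.rules rather than by `ufw allow`.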
<casemanspaceman> Hey y'all! First time ever on IRC... Have a question that is not easily answered by combing FAQs/archives etc: I've been playing with Ubuntu variants for a couple of years now, and I'm just installing Ubuntu Server for my first time (on a hand-me-down rackmount server: HP/Xeon 2.8 ghz {x2} w 8gb ram, 2 36 gb ultrascsi's). Install was surprisingly easy, but here is my question: how many services/functions is it reasonable
<casemanspaceman> to run on a unit like that: i.e. I'm interested in the following uses: hosting my own e-mail (citadel), hosting a wordpress site/blog, deploying an instance of mediagoblin, running mythbuntu and one or two other functions. Should I have a box for each service, or is combining any or all of these going to work? Suggestions?
<casemanspaceman> Is it crazy to use an Ubuntu Server box for more than one function?
<sw> casemanspaceman There's not a simple answer other than to test yourself. You could host one blog with a couple of K hits a day or a couple of K blogs with a handful of hits per day.
<sw> No, not at all, depending on the scenario ...
<sw> Or, if you don't have to.
<sw> :b
<casemanspaceman> I won't have a ton of hits by any means
<sw> So you'll be fine.
<sw> You could host that on a PC, without issues, probably, lol.
<casemanspaceman> just playing around to learn my way around the system and personal use mostly
<casemanspaceman> Sweet
<sw> So just install them and test.
<casemanspaceman> Experimentation it is then.
<casemanspaceman> Sounds like how I like to roll...
<casemanspaceman> Thanks y'all.
<casemanspaceman>  : )
<feisar> hi, if my Nagios server is outside my LAN do I need a public IP for each NRPE client I want to monitor?
<andol> feisar: Sounds like your LAN is doing that NAT thing? Well, without having any specific experience of NRPE it feels like an individual tcp port ought to do the trick?
<feisar> andol: thanks, yeah only a couple of servers are natted to a public IP but I'd like to monitor them all... so you're saying maybe 1 public IP with a different port for each NRPE client?
<andol> feisar: Well, that is what my gut is telling me at least.
<andol> feisar: Or you could start doing the IPv6 thing, and not have to worry about NAT anymore :)
<feisar> andol: yeah i'm planning on setting up ipv6 later this year but I need it to work sooner... how about a tunnel between the remote server and the LAN?
<andol> feisar: Individual tunnels between each client, or one tunnel giving the Nagios server access to the NAT ip range? The latter does actually sound like a potentially good idea.
<feisar> andol: 1 tunnel giving the nagios server access to the ip range is what I was thinking
<feisar> OpenVPN I guess? (it's not something I have done before)
<andol> Would have been my choice.
<feisar> andol: thanks
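Before they settled on the tunnel, the one-public-IP-many-ports idea andol floated would look roughly like the following DNAT rules on the NAT gateway. All addresses and external ports below are made up; NRPE's default listening port is 5666. The tunnel avoids this per-host bookkeeping, which is why it is the nicer option:

```shell
# forward one distinct external port per internal NRPE client
iptables -t nat -A PREROUTING -d 203.0.113.1 -p tcp --dport 15666 \
    -j DNAT --to-destination 192.168.0.10:5666
iptables -t nat -A PREROUTING -d 203.0.113.1 -p tcp --dport 15667 \
    -j DNAT --to-destination 192.168.0.11:5666
```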
<zetheroo1> how do I get something like this to work:
<zetheroo1> echo "world" | mail -s "hello" me@email.com
<zetheroo1> I am doing this with my email address in the path and it reports no errors but I am not getting any new email in my email account either
<zetheroo1> do I need to configure some email SMTP settings on the server?
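As the later discussion with ogra_ shows, `mail` only hands the message to a local MTA (exim4, postfix, ...); if none is installed and running, the message goes nowhere. A quick presence check, safe to run anywhere (the two command names checked are common MTA entry points, not an exhaustive list):

```shell
# does this box have anything to actually deliver mail with?
if command -v sendmail >/dev/null 2>&1 || command -v exim4 >/dev/null 2>&1; then
    mta="MTA present"
else
    mta="no MTA installed"
fi
echo "$mta"
```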
<James_L> Hi I have a Windows 7 machine that I want to change in to a Ubuntu server, so I downloaded the Ubuntu server .iso, burned it to disc and selected boot from disc but then it just goes to Windows a couple of moments after.
<James_L> Any help?
<ogra_> "selected boot from disc" ... you mean you set your BIOS to boot from CD ?
<James_L> Yes
<ogra_> and you did a raw burn of the iso to the CD ? not just drag and drop the iso file into a burning app ?
<James_L> Erm, there's an .iso on the disc
<James_L> Is that bad?
<ogra_> yes
<ogra_> thats wrong :)
<James_L> Is that what's stopping this?
<ogra_> your burning app should have an option to write the iso directly to the CD
<ogra_> if you put it in afterwards you should actually see a lot of files and folders
<ogra_> so burn it again (i think there are wikipages on the ubuntu wiki about how to burn an iso under windows)
<James_L> Windows 7 and 8 just have built-in burning apps now don't they?
<James_L> I think, anyway
<James_L> I did that one on a Mac, but I'm on Windows now at work
<ogra_> try right clicking the iso file, might be there is an option ...
 * ogra_ hasn't used any windows since win 98 ... 
<ogra_> look at the wiki, i'm sure there are pages describing how to do it right
<James_L> Ok, thanks!
<James_L> :D
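A correctly burned disc is byte-for-byte identical to the .iso, so comparing checksums verifies the burn. Sketch below, with a temp file standing in for both the image and the disc so it runs anywhere; against real media the commented `dd` line would read the disc back instead:

```shell
iso=$(mktemp)                                  # stand-in for the .iso file
printf 'fake iso payload' > "$iso"
sum_iso=$(md5sum "$iso" | cut -d' ' -f1)
# against real media, read the disc back instead:
#   dd if=/dev/cdrom bs=2048 count=$(( $(stat -c%s "$iso") / 2048 )) | md5sum
sum_disc=$(md5sum "$iso" | cut -d' ' -f1)      # disc stand-in: same bytes
[ "$sum_iso" = "$sum_disc" ] && echo "burn matches iso"
rm -f "$iso"
```

If the disc shows a single .iso file in a file manager, as James_L's did, the image was burned as data rather than raw and the check is moot.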
<thejoecarroll> can anyone help me more accurately identify an apparently troublesome device on a hosted VPS? some files from /sys/devices/LNXSYSTM:00/device:00/PNP0A03:00/device:13/physical_node/ tell me the following:
<thejoecarroll> vendor = 0x15ad (VMware)
<thejoecarroll> device = 0x07a0
<thejoecarroll> class = 0x060400
<thejoecarroll> unfortunately i couldn't find the device or class information here: http://www.pcidatabase.com/vendor_details.php?id=391
<thejoecarroll> path = \_SB_.PCI0.PE51
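The sysfs `class` value decodes by hand: the first byte is the base class, the second the subclass, the third the programming interface. 0x06/0x04 is a PCI-to-PCI bridge, which is consistent with a virtual PCI topology on VMware rather than a disk or NIC:

```shell
# base class 06 = bridge device, subclass 04 = PCI-to-PCI bridge
class=0x060400            # the value quoted above
code=${class#0x}          # -> 060400
short=${code%??}          # drop the prog-if byte -> 0604
case "$short" in
    0604) desc="PCI-to-PCI bridge" ;;
    *)    desc="unknown ($short)" ;;
esac
echo "$desc"
```

`lspci -nn` on the guest would show the same IDs with names resolved, where available.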
<jotterbot1234> has anyone used carddav-php here? https://github.com/graviox/CardDAV-PHP
<jotterbot1234> I would love an example of correct usage
<Sander^work> Does anyone have any software to recommend for keeping track of virtual and physical servers and their IP addresses? I'm not sure if a spreadsheet is best to use.
<jotterbot1234> I have setup a LAMP server but cannot seem to pass the right command to it?
<zetheroo1> how do I get something like this to work:
<zetheroo1> echo "world" | mail -s "hello" me@email.com
<zetheroo1>  I am doing this with my email address in the path and it reports no errors but I am not getting any new email in my email account either
<zetheroo1>  do I need to configure some email SMTP settings on the server?
<ogra_> check your logs :)
<zetheroo1> all mail logs in /var/log are empty
<zetheroo1> and nothing pertinent in syslog
<ogra_> well, your mail server daemon should definitely  log something to mail.info ...
<zetheroo1> hmm ... mail server  ...
<ogra_> is it actually running (whatever you installed for delivery, check it's up)
<zetheroo1> is there a default mail server package for Ubuntu?
<ogra_> i think postfix is the default
<zetheroo1> when I try to install postfix I am told that exim4 will be removed
<ogra_> oh, so you installed exim4 already ... just configure and start it then
<zetheroo1> ok
<thejoecarroll> anyone here using the combination of backupninja and duplicity for backups?
<BenyG> Some people should not even be allowed near a computer, let alone a server...
<zetheroo1> very strange issue here ... I have a samba/cifs share on a server that was working perfectly on Windows and Linux clients (mounting the share with no issue), then I changed the disk that the share was on and now the share is no longer mountable in both Linux and Windows ... the only diff between the two disks was that the one that worked was 1TB and the other that did not work is 2TB
<zetheroo1> also the drive that worked was ext3 and the drive that does not work is ext4
<Nico_O> Hi installing Ubuntu server 12.10 via CD-ROM but receive error 'Your installation CD-ROM couldn't be mounted'.
<Nico_O> The CD-ROM is ok though, otherwise the installer wouldn't load!
<Nico_O> Any suggestions?
<a11235> Hi, I'm unable to connect to my webserver. Nginx is listening on port 80 and I have set up ufw to allow any connection to port 80. What can I do to diagnose the problem?
<a11235> hem never mind, the problem seems to be my isp..
<rigorm0rtis> Hello. I'm having issues with a freshly-upgraded 10.04 -> 12.04 Ubuntu server. I have a RAID1 device that is no longer assembling on boot. However, I can assemble it manually after boot if I stop the incorrect devices that are there and use MDADM to assemble manually. How can I get auto-assembly working again?
<xnox> rigorm0rtis: what's the output of "ls /etc/udev/rules.d/65-mdadm.vol_id.rules" ?
<xnox> does that file exist?
<rigorm0rtis> No
<xnox> ok, good.
<xnox> rigorm0rtis: and you have all updates applied? there were mdadm updates released.
<rigorm0rtis> xnox, aptitude shows no updates
<rigorm0rtis> It's really strange. This system is supposed to have a RAID1 device md0 with devices sdb1 and sdc1. When it comes up, it has md0 with sdc1 as spare and md127 with sdb as spare. Neither device can be activated. In order to get raid going, I have to stop md0 and md127, then use mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 and then everything works.
<xnox> rigorm0rtis: so for some reason, it does not identify that raid as matching "homehost" hence it's punting it to md127. *sigh*
<soren> hallyn_: I'm having trouble with kvm on Raring. I'm trying to run the installer (using the quantal mini.iso) and it freezes when it gets to the "detecting hardware" stage. At 0%.
<rigorm0rtis> xnox, so, what does that mean?
<genii-around> rigorm0rtis: Does: raid1     exist in /etc/initramfs-tools/modules   ?
<soren> hallyn_: Does that sounds familiar at all?
<rigorm0rtis> genii-around, no, it doesn't
<hallyn_> soren: i normally get that when mini.iso is out of date i think
<genii-around> rigorm0rtis: I would suggest adding it there with admin privileges, then sudo update-initramfs -u
<hallyn_> (i.e. it downloads some kernel modules which don't match the kernel)  otherwise no.  i can give it a shot right now...
<rigorm0rtis> genii-around, I will try that.
<soren> hallyn_: Oh, hang on. I may be doing something silly.
<hallyn_> soren: yeah i got past the partitioner just now
<rigorm0rtis> genii-around, It is still not assembling on boot. Still comes up as md0 and md127.
<soren> hallyn_: I just ran "kvm -hda blah -cdrom mini.iso -boot d".
<soren> hallyn_: I'm guessing I may simply be out of RAM.
<hallyn_> yeah it's too bad we lost that patch setting minimum ram.  I could re-add it, but have to make sure to do it only for x86 target builds.
<soren> hallyn_: Yeah, adding a gig of RAM totally solved it.
<soren> hallyn_: Sorry about the noise.
<hallyn_> np :)
<genii-around> rigorm0rtis: Apologies on lag, work is requiring me, plus trying to assist in another channel
<rigorm0rtis> genii-around, No problem.
<genii-around> rigorm0rtis: Can you pastebin result of sudo mdadm --detail /dev/md0       ?
<rigorm0rtis> genii-around, do you want that in the broken (right after boot) state, or in the fixed state (after I stop the bad md devices and assemble manually)?
<genii-around> rigorm0rtis: Both would be interesting... but the fixed state is what I would like
<genii-around> rigorm0rtis: After you re-assembled it the first time, did you fsck it before mounting?
<rigorm0rtis> genii-around, No, I didn't fsck it before mounting.
<rigorm0rtis> genii-around, here's md0 in its working state: http://pastebin.com/d5jFKW5U
<genii-around> Meh, boss wants me for 5-10 minutes... url copied will view on return
<rigorm0rtis> No problem, I'm gonna reboot and grab the broken state as well.
<soren> hallyn_: Did we use to have a higher default RAM allocation on x86?
<rigorm0rtis> genii-around, Here's mdadm --detail information in the broken state. I also included some other info. http://pastebin.com/MhEhbJxB
<hallyn_> soren: yes, 386 or something - kirkland introduced that a long time ago
<kirkland> hallyn_: ?
<kirkland> hallyn_: qemu?
<kirkland> soren: oh, yeah, once upon a time, I upped the default ram in qemu/kvm to whatever was needed to boot an ubuntu iso
<kirkland> hallyn_: did that get dropped?
<genii-around> rigorm0rtis: Please pastebin mdadm.conf
<hallyn_> kirkland: yeah we kept it around a long time, but we can't just blindly apply it now because the same source pkg builds other targets
<hallyn_> kirkland: mjt even tried to push it upstream, but upstream pointed out that some arches have max 256M, and other problems
<rigorm0rtis> genii-around: mdadm.conf: http://pastebin.com/VBqNrGQQ
<hallyn_> so i do want to re-add it only for x86 target builds, but it's sort of low prio, and i don't want to get that wrong :)
<kirkland> hallyn_: gotcha
<kirkland> hallyn_: okay, thanks
<kirkland> soren: if you use testdrive to launch your iso, it'll handle that for you
<soren> kirkland: I'll keep that in mind.
<mjeanson> Hi, anyone using cinder from the folsom-updates cloud archive with the nexenta driver?
<hallyn_> kirkland: to be honest '-m 1024' is so reflexive i never don't type it, so i forget the patch is gone
<kirkland> hallyn_: yep
<kirkland> hallyn_: frankly, if I'm launching kvm to boot an iso, I use testdrive
<kirkland> hallyn_: it handles all of the niceties
<LargePrime> does Boot up manager work for Ubuntu server?
<LargePrime> all the guides seem to assume a gui
<LargePrime> how do you all do boot automation?
<Pici> LargePrime: automation?
<LargePrime> stuff starting at boot
<LargePrime> automagically
<LargePrime> Pici:
<Pici> LargePrime: drop things in /etc/rc.local or make an upstart job for it.  For non admins, you can define a cronjob
<LargePrime> NooB asks what is " upstart job "
<LargePrime> Pici:
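To flesh out Pici's answer, which LargePrime never got: on Ubuntu of this era an upstart job is just a small file in /etc/init/. A minimal sketch; the job name, description, and script path are placeholders:

```shell
# /etc/init/myjob.conf (hypothetical)
description "run my script at boot"
start on runlevel [2345]
stop on runlevel [!2345]
exec /usr/local/bin/myscript.sh
```

`sudo start myjob` runs it immediately; otherwise it starts on the next boot, and its output lands under /var/log/upstart/.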
<rigorm0rtis> Okay, so after running  mdadm --examine --verbose --scan, I see that /dev/sdb is being detected as a RAID device, how can I make sdb not be detected as RAID, but continue allowing sdb1 to be detected as raid
<xnox> rigorm0rtis: sounds like superblock got messed up or is misread by the new mdadm.
<xnox> rigorm0rtis: I'd recommend you to back up the superblock and possibly ask the linux-raid mailing list how to "correct" the superblock. Very odd though.
<LargePrime> Pici: it seems rc.local happens when you login, not on boot?
<LargePrime> bah, that's not right
<rigorm0rtis> xnox: it looks like the drive has an old superblock on it that the old mdadm elected to ignore. The creation date is older than the correct superblock that is in the partition(s).
<xnox> rigorm0rtis: interesting. If you take this dump of superblocks and if possible attach to launchpad bug & then maybe forward it to linux-raid mailing list. It's really a regression if we pick the "wrong" one.
<rigorm0rtis> xnox: How should I dump the superblocks?
<xnox> rigorm0rtis: do you have anything in /var/backups/ ?
<xnox> rigorm0rtis: looks like $ mdadm -Esc path-to-backup.dump but I somehow thought there is more to it than just that.
<rigorm0rtis> xnox: that command is just giving me my array list and not saving anything.
<rigorm0rtis> xnox: I filed this bug earlier and have since added the bit about the old superblock being detected. https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1164008
<uvirtbot> Launchpad bug 1164008 in mdadm "mdadm devices not recognized correctly at boot" [Undecided,New]
<rigorm0rtis> I'm thinking that if I zero the superblock on /dev/sdb that would stop the incorrect detection.
<xnox> rigorm0rtis: sure, but you will need to recreate a new one.
<xnox> rigorm0rtis: see https://raid.wiki.kernel.org/index.php/RAID_Recovery
<rigorm0rtis> xnox: but it seems like there are two superblocks on /dev/sdb? One at /dev/sdb and another at /dev/sdb1. Are those not separate superblocks?
<xnox> rigorm0rtis: sure. but I don't know what --zero-superblock will do. E.g. wipe all superblocks it can find, or not.
<rigorm0rtis> xnox: I took a backup before messing with this, so I think we'll get to find out! :P
<xnox> rigorm0rtis: good luck =D
<rigorm0rtis> Perfect! I ran mdadm --zero-superblock /dev/sdb and after a reboot everything appears to work as it should.
<xnox> rigorm0rtis: \o/
<rigorm0rtis> xnox: Thank you for all of your help! genii-around too, but s/he seems to be gone.
<genii-around> rigorm0rtis: I had a work emergency earlier :(    . Did you get anywhere yet with your RAID issue?
<sarnold> genii-around: < rigorm0rtis> Perfect! I ran mdadm --zero-superblock /dev/sdb and after a reboot everything appears to work as it should.
<sarnold> genii-around: < rigorm0rtis> xnox: Thank you for all of your help! genii-around too, but s/he seems to be gone.
<genii-around> sarnold: Nice, thanks!
<rigorm0rtis> genii-around: Basically there was an old superblock living on /dev/sdb that had to be zeroed. Ubuntu 10.04 mdadm was happy to ignore it, but in 12.04 it did not ignore it.
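One cautious step worth adding before any `mdadm --zero-superblock`: image the region first so the operation is reversible (a v1.2 md superblock sits 4 KiB from the start of the device; v0.90 lives near the end). The sketch below uses a temp file in place of /dev/sdb so it is safe to run anywhere:

```shell
dev=$(mktemp)                                            # stand-in for /dev/sdb
dd if=/dev/zero of="$dev" bs=1M count=2 2>/dev/null      # fake 2 MiB "device"
dd if="$dev" of="$dev.backup" bs=1M count=1 2>/dev/null  # image the first MiB
size=$(stat -c %s "$dev.backup")
echo "$size bytes backed up"
rm -f "$dev" "$dev.backup"
```

rigorm0rtis did take a backup before zeroing, which is what made the experiment low-risk.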
<mjeanson> are there plans to update the folsom openstack packages to further point releases (12.2.x) in the cloud archive?
<zul> mjeanson:  yes we are in the process of doing that
<zul> mjeanson:  they follow the same SRU process
<mjeanson> zul, are bugs related to cloud archive packages tracked in launchpad?
<atdprhs> I just installed ubuntu 12.10 and my apache2 settings don't seem to enable "test.com/test" instead of "test.com/test.php" (I want to enable multiviews)
<atdprhs> when I compare the settings for server 12.10 to a server on 12.04, they seem the same
<atdprhs> but it doesn't work on 12.10
 * RoyK has never used that
 * RoyK neither uses non-LTS stuff on servers
<Pici> atdprhs: Did you enable MultiViews explicitly?
<atdprhs> yes
<atdprhs> they are supposedly enabled
<atdprhs> according to what I see
<atdprhs> I'm restarting the server
<atdprhs> You wanna see it?
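For completeness, since the MultiViews question goes unanswered: on the Apache 2.2 shipped with 12.10 it needs the option on the relevant directory plus mod_negotiation enabled (`a2enmod negotiation`). A sketch only; the vhost path is an assumption:

```apache
# in the active vhost, e.g. /etc/apache2/sites-available/default
<Directory /var/www>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
```

After editing, `sudo service apache2 reload` picks up the change.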
<atdprhs> and apt-get update doesn't work
<atdprhs> err http://us.archive.ubuntu.com quantal-update InRelease
<RoyK> atdprhs: please pastebin any errors such as from apt-get
<RoyK> !pastebin | atdprhs
<ubottu> atdprhs: For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imagebin.org/?page=add | !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<atdprhs> okayz RoyK, Sorry
<RoyK> atdprhs: no problem - pasting a single line isn't an issue ;)
<atdprhs> How can I pastebin from the server directly, since the server doesn't have a gui?
<Pici> you could use pastebinit
<RoyK> !pastebinit
<ubottu> pastebinit is the command-line equivalent of !pastebin - Command output, or other text can be redirected to pastebinit, which then reports an URL containing the output - To use pastebinit, install the « pastebinit » package from a package manager - Simple usage: command | pastebinit -b http://paste.ubuntu.com
<atdprhs> it seems apt-get not working :S
<atdprhs> failed to fetch
<atdprhs> I tried to ping www.googe.com and it says unknown host
<RoyK> dns issues?
<atdprhs> let me check
<RoyK> try "host google.com"
<GrueMaster> Anyone here that can help woth oem-config on 12.04 server?  It seems to crash on me when I try to reinitialize it on a running system.  My steps are "apt-get install oem-config;oem-config-prepare" and reboot.
<atdprhs> timeout
<atdprhs> where is the nameserver configurations?
<RoyK> atdprhs: /etc/resolv.conf
<atdprhs> OK, it says do not edit this file by hand -- your changes will be overwritten. I remember when I installed that server today, it wasn't like that
<atdprhs> do I reinstall the server?
<RoyK> then configure /etc/network/interfaces and add to your iface section "dns-nameservers x.x.x.x"
<RoyK> atdprhs: no need for a reinstall ;)
<atdprhs> iface is configured properly
<RoyK> with dns-nameservers (ip.to.working.dns-server)?
<atdprhs> Yes
<atdprhs> I'm thinking about actually remove 12.10 and install 12.04
<atdprhs> it seems less problems
<RoyK> can you do a "host google.com ip.to.dns.server"?
<atdprhs> Because I've been using google for 5 hours to fix the multiviews and it seems stupid
<RoyK> atdprhs: networking works on my quantal machines
<atdprhs> let me check
<RoyK> atdprhs: first of all, check if /etc/resolv.conf contains the right values. if not, it won't work regardless of what /etc/network/interfaces has
<RoyK> the values in /etc/network/interfaces are used during bootup to write /etc/resolv.conf
<atdprhs> no values in resolv.conf, it says that I shouldn't edit it by hand because it will be overwritten
<RoyK> reboot, then, and it should be written correctly
<RoyK> or
<RoyK> wait
<atdprhs> I rebooted already 3 times
<atdprhs> ok
<RoyK> edit it manually - just add "nameserver x.x.x.x"
<RoyK> to that file
<RoyK> then you should be able to update and use pastebin, and perhaps we can find the real problem
<atdprhs> that completely worked
<atdprhs> what possibly edited that?
<RoyK> nothing should have edited that
<RoyK> pastebin /etc/network/interfaces
<hallyn_> stgraber: on some of the arm platforms you're on, is 12k on the stack for pathnames and temp copy buffers too much to demand?
<atdprhs> I installed pastebinit, and typed "pastebin /etc/network/interfaces" and it says pastebin not found
<RoyK> pastebinit, not pastebin
<atdprhs> 5674699
<RoyK> no dns settings there
<atdprhs> yes because the default gateway handles the dns
<RoyK> you still need to tell linux which dns server to use :p
<RoyK> you don't have smart anycasts on ipv4, you know
<atdprhs> alright
<atdprhs> dns x.x.x.x,x.x.x.x Right?
<GrueMaster> Is there a different channel I should be asking in regarding oem-config and ubuntu 12.04 server?
<RoyK> no, see above
<RoyK> 22:22 < RoyK> then configure /etc/network/interfaces and add to your iface section "dns-nameservers x.x.x.x"
<atdprhs> nameserver x.x.x.x then new line nameserver x.x.x.x ?
<RoyK> if you read what I type, this may take a wee bit shorter time...
<atdprhs> okayz
<atdprhs> done
<RoyK> perhaps add dns-search mydomain.tld to that
<atdprhs> 5674707
<RoyK> please post full URLs
<atdprhs> http://paste.ubuntu.com/5674707
<RoyK> dns-nameservers - in plural - followed by the name servers available, separated by spaces
<RoyK> 22:34 < RoyK> 22:22 < RoyK> then configure /etc/network/interfaces and add to your iface section "dns-nameservers x.x.x.x"
<atdprhs> done
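Putting RoyK's instructions together, the iface section would look something like this (all addresses are placeholders); at boot, ifup writes the dns-* lines into /etc/resolv.conf via resolvconf:

```shell
# /etc/network/interfaces sketch
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    dns-nameservers 192.0.2.1 8.8.8.8
    dns-search example.com
```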
<atdprhs> do you know how multiviews can be enabled in 12.10?
<RoyK> 22:07  * RoyK has never used that
<RoyK> !patience | atdprhs
<ubottu> atdprhs: Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com/ or http://ubuntuforums.org/ or http://askubuntu.com/
<atdprhs> Thank you Royk!
<pythonirc101> Seagate Constellation ES.3 2 TB  vs Western Digital Caviar Black 2 TB  -- anyone with any experience here running them? I need to buy some and hence I ask. Planning to run ubuntu software raid.
<sarnold> pythonirc101: storagereview.com (used to be?) the place to get drive reviews..
<pythonirc101> sarnold: thanks. I've not gone to that site in ages.
<sarnold> pythonirc101: the downside is that for every manufacturer, you can find someone whose opinion you respect who will never buy another one of their drives again, ever. heh.
<RoyK> pythonirc101: most drives work. is it a 512 or 512e or 4096 sector size disk?
<sarnold> "512e"?
<keithzg> Hmm, I've been trying to upgrade an old VM running 10.04, but it's failing on "Failed to fetch ....groff-base_1.21-7_i386.deb Hash Sum mismatch".
<yousaf> Hi all
<yousaf> dmesg shows several [    0.000000] *BAD*gran_size: 256K 	chunk_size: 2G 	num_reg: 8  	lose cover RAM: -1G
<yousaf> i am a complete beginner and I don't know what that means
<yousaf> Google searches aren't returning helpful results either
 * keithzg tried a third time and now there's no hash sum mismatch on the package, weird
<yousaf> Anyone?
<sarnold> yousaf: smells a bit like bad memory
<yousaf> as in physical ram, right?
<sarnold> yousaf: yeah
<yousaf> phew
<yousaf> thank you
<sarnold> yousaf: .. or something broken in the memory controller or something related
<yousaf> again, physical?
<sarnold> yousaf: the linux kernel has the ability to ignore bad memory, if you want to go to that effort
<sarnold> yousaf: yeah
<yousaf> Can't believe that my host was refusing to even look at the physical stuff
<yousaf> and was telling me it is software related
<sarnold> yousaf: you could run memtest86+ from the 12.04 disc (not the 12.10 disc, that one is apparently broken) and keep note of the reported bad addresses, and then you can give a range of addresses at the kernel command line for it to avoid
<yousaf> sarnold... tonight I learned what "dmesg" does
<yousaf> I think what you are asking is a tiny bit beyond me :D
<yousaf> </sarcasm>
<sarnold> yousaf: heheh
<yousaf> Thank you though
<yousaf> Helpful to know that my host should deal with it
<sarnold> yousaf: here's a nice little guide.. "option 3" describes it: http://gquigs.blogspot.com/2009/01/bad-memory-howto.html
<RoyK> pythonirc101: the Black disk has crippled firmware with scterc disabled, so it's not very good for raid
<RoyK> sarnold: 512e is 512b emulation, it reports 512b sectors using 4k sectors under the hood
<yousaf> sarnold that brings me to an interesting question
<yousaf> how do I know that my host isn't cutting corners i.e. doing option 2 and option 3?
<yousaf> they are running mem/cpu tests as we "speak"
<sarnold> yousaf: nothing wrong with knocking out a few broken megabytes here or there...
<yousaf> But shouldn't they use a new ram?
<sarnold> yousaf: though it does mean you've to make sure that host's command line _always_ has those memmap= commands...
<yousaf> given that i am paying for a dedicated server
<yousaf> with 16GB ram?
<sarnold> yousaf: well, that -is- different. I wouldn't notice one megabyte or two here or there, but it -is- a maintenance burden for you for as long as you have that machine. it might be better for them to rededicate the machine to VM use for someone else...
<yousaf> actually, 24GB RAM
<sarnold> on the other hand, 24 gigs RAM ought to be something like $100, right? :)
<yousaf> so you are saying that I should ideally get a different server?
<yousaf> roughly $200 a month
<yousaf> I am trying to find some info your comment "make sure that host's command line _always_ has those memmap= commands..."
<sarnold> yousaf: dunno if it is an option or not, but here's a 32 gb system that's 59EUR/mo: http://www.hetzner.de/hosting/produkte_rootserver/ex4s
<yousaf> sarnold I started off with hetzner
<sarnold> cripes I oughta get one. that's crazy.
<sarnold> yousaf: oh? what'd you think?
<yousaf> probably the worst experience ever
<yousaf> infact it was that very server
<yousaf> Support is obviously non-existent but so is their portal
<yousaf> couldn't log into my portal for two days straight after I signed up
<halvors> Anyone knows how to use the check_ifoperstatus command?
<sarnold> yousaf: damn :/ thanks for the heads up :/
<yousaf> Current server is with Incero
<yousaf> And I might get this for another app https://www.datashack.net/cart/?id=191
<sarnold> yousaf: anyway, the command line -- you'd need to have something in your grub configuration to ensure that memmap=.... is passed to the linux kernel at boot. you'd probably want to even write a little script to check /proc/cmdline to make sure it is there for every boot.
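To make that concrete, a sketch of the pieces sarnold describes; the address below is a placeholder, not a real bad page, and the escaping of '$' can vary between GRUB versions:

```
# /etc/default/grub -- reserve a (placeholder) 4K bad region at 0x12345000;
# memmap=<size>$<addr> hides that range from the kernel's allocator.
# The '$' typically needs escaping to survive both shell and GRUB expansion:
GRUB_CMDLINE_LINUX_DEFAULT="memmap=4K\\\$0x12345000"

# then: sudo update-grub, reboot, and check it stuck on every boot:
#   grep -o 'memmap=[^ ]*' /proc/cmdline || echo "WARNING: memmap missing"
```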
<sarnold> that's a lot of IPs. wow.
<Dovid> hi all. i am working on a project for a client. should i go with the latest and greatest (12.10) or stick with 12.04.2?
<sarnold> Dovid: probably 12.04.2 -- it'll have longer support and give you more time before you have to do anything with it again.
<Dovid> ok. but 12.10 will have latest drivers i assume? client orig. wanted 11.10. i have lots of issues that i think are driver related.....
<sarnold> Dovid: 12.04.2 has a "hardware enablement" stack of kernel, xorg, and maybe a handful of other 12.10 things backported
<sarnold> Dovid: those specific things (e.g., kernel) will end support when 12.10 ends support, iirc, so they'll need updating to whatever HWE stack replaces it, in about 13 months, but it'll probably be less drastic than everything new
<pinPoint> GrueMaster:
<pinPoint> oops
<pinPoint> I need some advice
<GrueMaster> heh, ok.
<pinPoint> I have a system that has 6GB ram but it is running ubuntu 10.04.4 lts. Originally it had 2GB but I merged two computers into one. So now the assembled machine has 6GB, 2 HDDs(one ubuntu, one win server 2012), I don't need the windows drive's content anymore. Is there a way for me to upgrade to a 64bit Ubuntu 12.xx possibly and move my apache/mysql/php/etc to the new updated system?
<pinPoint> and all of my /home too
<pinPoint> main drive with ubuntu is 500gb, secondary drive is 640GB(it can be wiped).
<pinPoint> my old system was a core2duo 1.8, now the new sys is core2quad, 6gb.
<pinPoint> also 2gb on the old sys, now 6GB available
<yousaf> Thanks for your help sarnold. need to get some sleep
<yousaf> G'night
<GrueMaster> pinPoint: I'd recommend installing 12.04 to the 640G then rebooting and migrating that way.
<sarnold> g'night yousaf :)
<GrueMaster> It's kind of what I plan on doing for my system (although I will need a spare 1T drive).
<pinPoint> GrueMaster: is there a way for me to migrate all my server stuff without a lot of hacking/tweaking?
<pinPoint> and all the stuff in /etc/init.d/*
<pinPoint> sorta clueless about it all but I know I have messed around with stuff there before
<GrueMaster> You shouldn't need to change the init.d stuff.  The configs for other things in /etc can be migrated/merged.
<GrueMaster> One possible solution would be to clone the 500 to the 640, upgrade the 500 to 12.04 32bit, then go from there.
<GrueMaster> Any way you look at it, plan on a few hours to go from 32bit to 64bit.
<pinPoint> but if I upgrade the 500GB to 32bit, I still need to do some sort of move to 64bit
<pinPoint> 12lts^
<pinPoint> gawd, I should have seen this crap coming but this was almost 4years ago
<GrueMaster> Right, but if you upgrade first, then do a fresh install on the other drive, you can migrate the configs over easily, as the upgrade will do most of the config migration to the later versions.
<GrueMaster> What all are you running app wise that will need migrating?  Apache and what else?
<pinPoint> GrueMaster: apache,mysql,php, wordpress, phpmyadmin
<pinPoint> plus I play videos on this thing sometimes
<GrueMaster> Apache and mysql should be fairly easy.  The rest are mainly web apps stored in your /var/www directory.
<pinPoint> the apache is hacked up with dynamic hacks in /etc and a timer somewhere I setup from an online readup yrs ago
<GrueMaster> Mysql stores its data in /var/lib/mysql iirc.
<pinPoint> so clone 500->640GB, then move 500GB to precise, then?
<GrueMaster> after cloning, upgrade one or the other.  Test it thoroughly.
<GrueMaster> If it works fine, you can wipe the other with 12.04 64bit then migrate configs & data.
<pinPoint> GrueMaster: someone recommended I move to server, would you agree?
<GrueMaster> Only if you are running this as a dedicated server.  No reason it can't do both.
<pinPoint> 64bit is the crucial one really
<pinPoint> it is a private dedicated for me
<GrueMaster> It all depends on usage model.
<pinPoint> but I don't need it to be a server though
<donvito2> i put this line on rc.local /var/etc/CCcam2.x86 -C /var/etc/CCcam2.cfg &
<GrueMaster> Then there is no reason to move it to server only.
<donvito2> but it reads only /var/etc/CCcam2.x86 -C /var/etc/CCcam2.cfg &
<donvito2> not the line after -C
<GrueMaster> My home desktop is a Core2Quad with 8G.  Currently running 12.04 32bit with PAE (so I can at least use the memory).  It runs KDE (Kubuntu), but also has apache, mysql, postgresql, and many VMs.
<pinPoint> PAE does what exactly GrueMaster ?
<sarnold> allows 32 bit x86 to access up to 64 gigabytes of memory, though in practice anything beyond 16gigabytes might be "difficult" on 32bit...
<GrueMaster> Exactly.  It works by setting up page tables for each 4G region.  It will not allow the system to address more than 4G at a time per app, but it does allow the kernel to move apps into larger memory spaces.
<pinPoint> essentially it would utilize all of 6GB then?
<pinPoint> why is there no possible way to upgrade in ubuntu or linux in general?
<sarnold> converting from 32 to 64 bits is an odd one. I've had debian machines that I'd upgraded for seven or eight years from release to release without hassle. I've upgraded ubuntu machines for years without trouble, though probably only three or four years worth..
<pinPoint> what is the limiting factor though? In different OSes it can be done.
<GrueMaster> Different OS's???  I don't know of any that have a 32->64 bit migration path.
<pinPoint> GrueMaster: you cannot install/upgrade a win machine from 32->64bt by just planting the install disk during bootup?
<GrueMaster> Windows is a very different beast.  Until recently (Vista), a lot of the code was still 16 bit.
<GrueMaster> Plus they have a huge amount of backwards compatibility layers built in.
<pinPoint> ok
<halvors> May anyone please help me out with the nagios-snmp-plugins package? :)
#ubuntu-server 2013-04-04
<MFSOT> Hi all, intermediate linux user here, have my home boxes running linux mint and just switched from win2k8 to ubuntu server with a webmin gui, I have an external hdd on my server that I would like to be able to access/share/write to and it's proving to be difficult via the webmin, I've read I should SSH into the server, this that and the other thing, just looking for opinions and some help
<ScottK> http://www.catb.org/esr/faqs/smart-questions.html is not a bad place to start.
<MFSOT> thanks bud
<MFSOT> took a lot of effort to be a dick
<qman__> MFSOT, firstly, that was a perfectly valid response, and secondly
<qman__> !webmin
<ubottu> webmin is no longer supported in Debian and Ubuntu. It is not compatible with the way that Ubuntu packages handle configuration files, and is likely to cause unexpected issues with your system.
<qman__> all of that said, you need to get a bit more specific in what it is you want to accomplish
<histo> Why would outlook completely fail at sending mail from my server when other clients work fine
<histo> ughhh
<histo> I'd assume it has to do with starttls or something goofy. Anyone experience this?
<histo> Running postfix/dovecot setup with virtual users controleld by text files
<qman__> because microsoft follows standards about as well as square pegs fit in round holes
<qman__> outlook, and only outlook, requires a non-standard auth line to work on your server
<histo> qman__: what is the non-standard line and where in the config do I put it?
<qman__> everybody else uses "AUTH PLAIN", but outlook requires "AUTH=PLAIN" to be accepted
<qman__> there's a bit in the default config file regarding this
<qman__> you need to make sure it's enabled
<histo> qman__: in postfix's config?
<qman__> yes
<qman__> this may be helpful: http://holdenweb.blogspot.com/2008/04/outlook-is-driving-me-nuts.html
<histo> qman__: I don't see it anywhere
<histo> k will read
<histo> i've added both those changes
<histo> already
<histo> qman__: With no luck
<MFSOT> qman__ I appreciate your response as at least you asked for something and didn't point me to a useless page to try to make me feel stupid, there's too much of that shit on irc - I know webmin isn't supported any longer but I've read that it works just fine and to be honest it's a lot cleaner and easier to use than the other gui's I've been messing around with. I ultimately want to be able to share files on my home network and pipe
<MFSOT> in via my vpn that I have set up and browse and use my home file system, I also want to either have an FTP site or SSL site, a way to share larger files with friends basically
<qman__> "works just fine" is anecdotal, I've seen it break plenty of stuff, and if you come here with broken stuff and webmin, we really can't help
<bobbyz> histo: broken_sasl_auth_clients = yes
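In config-file terms, the Outlook workaround qman__ and bobbyz are describing is a couple of main.cf lines (a sketch, not histo's full config):

```
# /etc/postfix/main.cf
smtpd_sasl_auth_enable = yes
# advertise the non-standard "AUTH=..." form that old Outlook/Outlook Express expect
broken_sasl_auth_clients = yes
```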
<qman__> but if you want some basic instructions, try the server guide
<MFSOT> I'm not opposed to giving webmin the axe, this is a setup I just started today, I was looking at using ISPConfig, or just putting an gui on until I got things where I wanted them
<histo> bobbyz: yes I have that set
<ScottK> MFSOT: If you think that page is useless, go back and reconsider.  No one here is under any obligation to help you at all.  Personally, I'm done with the prospect.
<histo> bobbyz: also have plain login cram-md5 in my dovecot.conf as well
<histo> I noticed though when I telnet to my server port 25 it doesn't display auth anything
<MFSOT> well you haven't helped in anyway, considering you never started, glad you're done
<bobbyz> histo: this might be a dumb question, but you issued a 'postfix reload' after you set that in /etc/postfix/main.cf right?
<histo> bobbyz: It's been set that way the entire time i've been trying let me restart just to be sure
<ScottK> Also look in /var/log/mail.log for information about what's going on.  Both postfix and dovecot log there.
<histo> bobbyz: qman__ still getting relay access denied from windows live clients
<histo> ScottK: yeah i'm tailing the log but it just rejects it and says relay access denied
<qman__> histo, are the clients attempting to log in? you have to specifically tell them to do SMTP auth, they try anonymous by default
<histo> Thunderbird works perfectly with my setup
<qman__> even if you have pop3/imap configured
<histo> qman__: yes
<histo> I'm testing on a windows 7 box with windows live mail. And it will not send friggen mail.
<histo> Just 554 relay denied
<histo> So it has to be something with the AUTH
<ScottK> Dunno if it's still the case, but for a long time Microsoft clients didn't support TLS properly and you had to use SMTPS on port 465.
<qman__> also, with many typical virtual user configs, the username is "user+domain.com"
<ScottK> There's a commented out entry in master.cf for that service.
<qman__> rather than "user" or "user@domain.com"
<qman__> but that should show against any client
<ScottK> IIRC the server guide has a decent description of setting up dovecot for SASL auth with postfix.
<histo> ScottK: I've been following the server guide
<ScottK> OK.
<ScottK> If it works for non-MS clients, I'd try SMTPS on port 465 then.
<histo> qman__: I know as I can configure thunderbird to login and send mail just fine. Windows live mail is the problem
<qman__> just thinking aloud, my go-to ideas have been exhausted
<histo> ScottK: If I enable ssl it still errors denying the relay I haven't tried port 25 although I don't see how that matters
<ScottK> port 465/SMTPS is a different technology than TLS, so it was worth a shot.
<ScottK> pastebin the log entry for the entire transaction.
<histo> ScottK: which the mail.log?
<ScottK> yes
<ScottK> Also pastebin the output of postconf -n.
<bobbyz> histo: share your dovecot.conf too?
<histo> ScottK: http://paste.ubuntu.com/5675238/ is the mail.log http://paste.ubuntu.com/5675239/ is the postconf
<histo> bobbyz: ScottK http://paste.ubuntu.com/5675244/ is my dovecot.conf
<histo> This setup is from the wiki postfix virtual users and domains with clamav
<histo> I believe the dovecot.conf has some obsolete settings from the wiki however there are only warnings in the dovecot logs
<histo> I added the disable_plaintext_auth = no to the dovecot.conf to get outlook to login for pop and imap
<ScottK> So there's no indication of SSL/TLS or SASL in that log snippet.
<histo> ScottK: No that's the problem outlook isn't using either. Thunderbird does starttls and just works
<histo> I don't have enough experience with these setups to know what needs to be enabled for outlook to just work.
<ScottK> What if you enable SMTPS and set outlook to connect via port 465?
<histo> I have pop working
<histo> ScottK: How would one do that?
<histo> I'm kind of confused by the whole listener thing in the dovecot.conf so is postfix actually listening or dovecot??? I'd assume it's just for internal communication between dovecot and postfix but I don't know why they would even need to communicate
<ScottK> histo: Look in /etc/postfix/master.cf and you'll find a commented out entry that starts: smtps     inet  n       -       -       -       -       smtpd
<ScottK> Uncomment that and the succeeding lines then stop/start postfix.
<ScottK> (not reload; for a new master.cf service to be recognized, you need to stop then start.)
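Once uncommented, the Ubuntu master.cf entry ScottK points at looks roughly like this (the exact -o options vary by release; check your own file):

```
# /etc/postfix/master.cf -- SSL-wrapped SMTP (SMTPS) on port 465
smtps     inet  n       -       -       -       -       smtpd
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
```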
<ScottK> Dovecot only talks to the outside world via pop3 and imap.  Any mail sending/delivery between other mail servers is handled by postfix.  So when postfix receives mail that is to be delivered locally, it has to give it to dovecot (thus communication required)
<histo> ahh
<histo> ScottK: what would the service imap and service pop lines in the dovecot.conf be for. I don't believe they are even needed
<ScottK> I believe they aren't in the default setup if you have the right packages installed, but I mess with dovecot config rarely enough that I tend to go look things up to be sure.  Don't quote me.
<ScottK> One problem at a time though.
<histo> It's still denying it
<histo> New error though, client host rejected
<ScottK> Progress.  Paste the log again
<histo> Apr  3 21:07:45 bremecmail postfix/smtps/smtpd[7524]: connect from h230.171.189.173.dynamic.ip.windstream.net[173.189.171.230]
<histo> Apr  3 21:07:46 bremecmail postfix/smtps/smtpd[7524]: NOQUEUE: reject: RCPT from h230.171.189.173.dynamic.ip.windstream.net[173.189.171.230]: 554 5.7.1 <h230.171.189.173.dynamic.ip.windstream.net[173.189.171.230]>: Client host rejected: Access denied; from=<admin@bremecgroup.com> to=<histoplasmosis@gmail.com> proto=ESMTP helo=<KimPC>
<histo> Apr  3 21:07:46 bremecmail postfix/smtps/smtpd[7524]: disconnect from h230.171.189.173.dynamic.ip.windstream.net[173.189.171.230]
<histo> Only 3 lines in log there
<histo> ScottK: Those are the new entries in the log. Same generic errors
<ScottK> Oh.
<ScottK> I thought you said there was a new one.
<histo> ScottK: On the client side there is a new error
<ScottK> Oh.
<ScottK> What's the exact error there?
<histo> let me restart postfix to be sure
<histo> ScottK: Server response 554 5.7.1 IPblah...: Client host rejected: Access denied
<ScottK> Actually the error is different.
<histo> Gawd I hate microsoft
<ScottK> Before it said "Relay access denied".  Now it says "Access denied".
<histo> Yes and keep in mind thunderbid on my linux laptop works perfectly fine
<ScottK> So that's an indication you've switched from refusal to relay unauthenticated mail to an authentication failure.
<ScottK> I don't have any MS clients here to mess with, so I'm not sure exactly what to do next and I can't mess with it.
<ScottK> You're making progress though.
<histo> It's still a 554 error though.
<histo> This can't be that random of a question getting MS clients to work with a linux mailserver.
<qman__> you're sure you have SMTP authentication set up on the client? from what I see it looks like it's not even trying
<histo> qman__: I'm ticking "my server requires authentication" in the windows live config
<histo> and it's set to use the same settings as my incoming mail server
<qman__> they bury it, http://blog.arvixe.com/how-to-turn-on-smtp-authentication-on-windows-live-mail/
<qman__> ok
<qman__> just checking, I've seen that problem derail lots of people
<histo> qman__: yes I have those settings there. If I try port 465 and requires ssl I get that client host rejected. If I try port 25 without ssl I get the relay denied
<ScottK> Does that user name and password work from thunderbird?
<histo> in the mail.err log there is an error fatal: no SASL authentication mechanisms
<histo> ScottK: yes
<ScottK> Ah.
<histo> ScottK: nvm that is an old error
<ScottK> OK
<histo> from hours ago when I was mucking around in configs
<histo> It's fixed now
<ScottK> Anything in mail.err should also be in mail.log, so that would have been weird.
<histo> well the error is fixed but the problem persists from outlook
<ScottK> What SASL mechanisms do you have enabled?
<histo> ScottK: dovecot
<histo> ScottK: and the path is private/auth-client
<ScottK> No, I mean like PLAIN, LOGIN, etc.
<histo> ScottK: plain login cram-md5
<histo> in the dovecot.conf
<ScottK> OK  That should be fine since login is what MS clients like.
<histo> ScottK: in the postfix main.cf I see smtpd_sasl_security_options noplaintext is that potentially the issue?
<ScottK> Yes
<ScottK> Good point.  Since you're using SSL/TLS you don't need that.
<ScottK> LOGIN is a plain text mechanism.
<qman__> yeah
<qman__> I forget the names but the way it works is, you have to allow plaintext logins, but then there's another setting that says only allow plain over SSL/TLS
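The two settings qman__ can't recall the names of are, in Postfix terms, roughly these (a sketch; both are real postconf parameters, but check them against your own main.cf before copying):

```
# /etc/postfix/main.cf
# forbid plaintext mechanisms (PLAIN, LOGIN) on unencrypted connections...
smtpd_sasl_security_options = noanonymous, noplaintext
# ...but allow them once TLS is established, which is what Outlook needs
smtpd_sasl_tls_security_options = noanonymous
```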
<histo> jesus christ it works
<histo> stupid outlook
<histo> okay so now to enable pop3s imaps and disable the others
<histo> That way everything is ssl and outlook can be happy plaintexting passwords
<ScottK> Good catch.
<histo> ScottK: qman__ TY guys for helping me think through this
<ScottK> You're welcome.
<histo> Nother notch in my belt for reasons to never use M$
<qman__> your linux client must have been using cram-md5
<qman__> because that setting in particular is not outlook specific
<histo> qman__: yea it was
<histo> looking back at old linux client logs
<histo> ScottK: I noticed in my master.cf smtpd isn't enabled but bsmtp is that normal?
<histo> nvm i'm seeing stuff now
<ScottK> No
<ScottK> OK
<histo> Been at this awhile now i'm seeing things
<histo> Now trying to enable imaps and pop3s
<histo> actually dont' care about those nvm
<hachre> I've just installed a fresh server and for some reason it is setting some of my vars like LC_ADDRESS to de_DE while the german language pack isn't even installed. I have LANG set to en_US.UTF-8 in /etc/default/locale but that doesn't seem enough.. What's missing here?
<hachre> I also keep seeing this error message on many things apt does: "locale: Cannot set LC_ALL to default locale: No such file or directory", I'm not sure what it's referring to when it speaks of no such file or directly though since locale-gen works fine
<sarnold> hachre: 'locale' will report the values of all the locale variables..
<sarnold> hachre: maybe grep -r de_DE /etc/ would be useful?
<hachre> sarnold: locale reports en_US only for LANG and the rest is on de_DE... grep shows only the normal aliases in /etc/locale.aliases
<sarnold> hachre: how about your ~/.??* files?
<hachre> sarnold: I can solve the issue by setting LC_ALL in /etc/default/locale but I'm wondering where it's getting the idea for de_DE from
<sarnold> hachre: I think _some_ environment variables can come through via ssh; any chance you've ssh'd in from a de_DE terminal?
<hachre> sarnold: nope
<hachre> sarnold: oh wait... wow you're right
<hachre> sarnold: my machine here has the same issue
<sarnold> hahaha
<hachre> sarnold: but it isnt complaining about it
<hachre> running raring here on the desktop
<sarnold> I wonder why (both counts...)
<hachre> and I wonder why my desktop doesnt complain about it like the server does
<hachre> I have no german language pack installed here either
<hachre> you're probably right and the server takes it from my ssh
<hachre> im gonna set all my LC variables right and ssh back in to check
<sarnold> at least the desktop I can flummox my way to changing the setting system-wide.. in the settings, language support, there's an "apply system-wide" button
<hachre> ok
<hachre> the server is fine
<hachre> its really coming from my ssh
<hachre> nice catch :)
<sarnold> woo :)
<sarnold> now you get to answer the same question on your desktop. hehe.
<hachre> on my desktop the /etc/default/locale has this craziness set up
<hachre> was easy to find :)
<hachre> who knows how that happened
<sarnold> I've got a vague feeling I've seen some geolocation code in the installer to pre-select "the right timezone" and stuff, perhaps it also automatically picks locales, too? (though that'd put more faith in the geolocate stuff than I'd care to use myself...)
<hachre> yea I've been thinking that too
<hachre> and it detected rubbish for me when i installed
<hachre> but i didnt care
<hachre> maybe thats why
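The forwarding sarnold guessed at is ssh's environment passing; Ubuntu ships these defaults, and commenting out either side stops the LC_* leakage:

```
# client side, /etc/ssh/ssh_config: send locale variables to the server
SendEnv LANG LC_*

# server side, /etc/ssh/sshd_config: accept them into the login environment
AcceptEnv LANG LC_*
```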
<gazoombo> utlemming: I'm still interested in learning more about the build process for the Vagrant boxes, if you have time.
<gazoombo> utlemming: CentOS is also looking to start releasing official Vagrant boxes: http://bugs.centos.org/view.php?id=6365
<yousaf> Could do with some help
<yousaf> dmesg output shows multiple instances of [    0.000000] *BAD*gran_size: 128M 	chunk_size: 2G 	num_reg: 8  	lose cover RAM: -904M
<yousaf> its a dedicated unmanaged server, the hosts says they have done CPU/MEM test and are confident that its fine
<yousaf> so what do those msgs in the dmesg output mean then?
<R1ck> hard to tell from a single line, there are probably a lot more
<R1ck> could you pastebin the entire dmesg?
<Arun_GP> Hello
<Arun_GP> Is someone listening?
<R1ck> yes
<Arun_GP> :)
<Arun_GP> I am writing a tutorial on LAMP stack
<Arun_GP> My public_html has permissions 750. The problem is when my webserver uploads a file/folder it takes ownership of that file/folder. So I can't modify them.
<Arun_GP> Do you know a method in which a newly created file/directory (by other users) inherits the ownership of that directory?
<Arun_GP> I know ACL won't work, because it inherits POSIX permissions only
<Arun_GP> SGID inherits GID of directory
<R1ck> Arun_GP: you mean when a file gets uploaded the owner is whatever Apache is running as (ie www-data) ?
<Arun_GP> and POSIX states a newly created file/directory inherits ownership from the UID of that process
<R1ck> don't pm me.
<Arun_GP> Pardon me...new to IRC
<Arun_GP> And, Yes that is what I meant
<R1ck> ask your questions in here so multiple people can answer them or learn from the answers to your questions
<Arun_GP> ok
<R1ck> so, when you say "So I can't modify them.", what do you mean exactly, in what way are you trying to modify them?
<Arun_GP> R1ck I went to stackoverflow and saw many webmasters are confused by this
<Arun_GP> And some of them uses scripts to periodically check the permissions
<Arun_GP> But I got another solution fsniper that uses inotify instead of continuous polling - http://freecode.com/projects/fsniper
<R1ck> Just answer my question please
<Arun_GP> Suppose Apache creates a directory
<R1ck> Just answer my question please
<Arun_GP> I do not have write access to that directory
<Arun_GP> Because Apache takes ownership of that directory
<Arun_GP> Did you get it R1ck?
<R1ck> answer my question and I'll try to help you.
<Arun_GP> Creating a subdirectory inside a directory can also be considered as 'modifying'
<Arun_GP> If you knew how to solve this, you would have already did
<Arun_GP> DOn't be a D1ck R1ck
<Arun_GP> Bye
<R1ck> *how* are you trying to create a subdirectory
<R1ck> in a shell? ftp? website? rsync?
<R1ck> ugh.
<shafox> i added the ppa:nathan-renniewaldock/ppa for the latest php and mysql-server , but when i did sudo apt-get install php5 it installed the php fine but couldnt install the mysql-server5.5 showing me this error. https://gist.github.com/anonymous/5309328 ... any solutions for this one ??
<shafox> i am on lucid btw
<R1ck> shafox: find out why it fails to start
<roniez> anybody know if it's possible to move the /home folders to another disk by creating a new partition on a 2nd disk for only the /home? what would be the best way to go about that.
<R1ck> roniez: create the new partition, copy all files from the old /home to the new location, add the partition to /etc/fstab, clean the old /home and mount /dev/newpartition /home
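R1ck's fstab step in file form (the UUID is a placeholder; get the real one from blkid):

```
# /etc/fstab -- mount the new partition as /home (placeholder UUID)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2
```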
<halvors> system.sysLocation.0
<halvors> obs.
<halvors> May anyone please help me out with the Ubuntu nagios-snmp-plugins package? :)
<halvors> I can't get snmp to work properly :(
<halvors> I get the error in Nagios "ERROR: Description table : No response from remote host "192.168.0.2"." Does that mean that it can't reach the snmp host or that no information could be pulled from it?
<halvors> I'm using the check command "check_snmp_int_v1!"GigabitEthernet-[1-6]"" :)
<R1ck> halvors: either snmp isnt running or the community is invalid
<R1ck> halvors: verify snmp is working correctly with snmpwalk first
<halvors> R1ck: It is, but maybe my snmp community isn't set properly?
<halvors> I have set the snmp community using this command in /etc/nagios3/resource.cfg
<zul> yolanda:  ping https://code.launchpad.net/~zulcss/python-novaclient/2.13.0/+merge/157095
<yolanda> hi zul
<halvors> $USER7$=-C public -2
<halvors> R1ck: Any idea? Is it set properly then? Any more detailed log files i can check for nagios?
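For anyone following along, a hypothetical command definition that wires that $USER7$ macro into the Manubulon check_snmp_int plugin; the plugin path and name are assumptions, verify them with `dpkg -L nagios-snmp-plugins`:

```
# /etc/nagios3/commands.cfg -- sketch; plugin path is an assumption
define command{
        command_name    check_snmp_int_v1
        command_line    /usr/lib/nagios/plugins/check_snmp_int.pl -H $HOSTADDRESS$ $USER7$ -n $ARG1$
        }
```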
<halvors> Rick: ? :)
<ndee> I have a fresh server and want to install all the packages from another server. This is what I found: http://askubuntu.com/questions/17823/how-to-list-all-installed-packages when I try this, I get following message: http://pastebin.com/D643sp9m <-- how can this be on a total new server?
<patdk-lap> easily, likely it isn't a real server, but some openvz thing
<ndee> patdk-lap, no, it's a physical server, that I'm sure.
<halvors> I get the error "ERROR: Description table : No response from remote host "192.168.0.2"." when trying monitor a cisco switch using the nagios-snmp-plugins ubuntu package. May anyone please help me out?
<rbasak> ndee: are both servers running the same release?
<ndee> rbasak, yes
<patdk-lap> you did do an, apt-get update, first?
<ndee> patdk-lap, yes
<ndee> I made apt-get update, apt-get upgrade on the new server
<ndee> then got the packages from other server with dpkg --get-selections, then set the new selection with dpkg --set-selections < packages on the new server and now that error shows up.
<zul> yolanda:  ping https://code.launchpad.net/~zulcss/python-glanceclient/0.9.0/+merge/157106
<ndee> adduser, cron and upstart are also installed so I'm really baffled.
<ndee> I cleared the selections again with: dpkg -l '*' | grep '^in ' | awk '{ print $2 " deinstall" }' | dpkg --set-selections so everything is now as before. Is there another way to get server foo to the same state as server bar?
<patdk-lap> personally, I use aptitude search '~i !~M' -F "%p install"
<patdk-lap> do a diff between the two systems
<patdk-lap> then install the diff
<halvors> How do i set the SNMP community in the nagios commands in nagios-snmp-plugins?
<hallyn_> zul: !  uh, can i get you to consider packaging libvirt 1.0.4 for raring at the last minute? :)
<hallyn_> zul: it fixes bug 1157626 without a custom hack in qemu
<uvirtbot> Launchpad bug 1157626 in qemu "Unable to use "virsh migrate" on two hosts after moving to raring" [High,Triaged] https://launchpad.net/bugs/1157626
<zul> hallyn_:  sure
<hallyn_> i'm not opposed to fixing it in qemu, but that's going to end up more different from upstream and debian, which will hurt in the long run
<hallyn_> of course, with raring there isn't really a long run :)
<hallyn_> zul: awesome, thanks
<RoyK> any idea when this fix will make it to the repos? http://www.postgresql.org/about/news/1456/
<RoyK> seems to be pretty critical
<ScottK> RoyK: I know that the security team knew it was coming, so I'd imagine "soon".
<RoyK> ScottK: I hear from #ubuntu-bugs that the fix was released minutes ago. guess it takes a while to reach my repos
<ScottK> Yes.
<ScottK> Also, since the release included more than just the security fixes, for -security, the Ubuntu security team will probably have to segregate out the security changes to apply separately.
<ScottK> Actually I'm wrong.
<ScottK> It looks like 9.1.9 is published to security already.
<ScottK> RoyK: Published 20 minutes ago, so it should be hitting mirrors real soon now.
<zul> hallyn_:  do we need the enable qemu-spice patch anymore?
<hallyn_> uh, lemme check
<hallyn_> zul: no
<hallyn_> zul: though, if a vm.xml specifies /usr/bin/kvm-spice, will that then still work?
<zul> dunno
<zul> ill leave it in then
<mardraum> hallyn_: re 1157626 I'm here for a little while longer (1 hr?) until I need to sleep if you want me to try anything :)
<ScottK> RoyK: http://www.ubuntu.com/usn/usn-1789-1
<hallyn_> mardraum: after you migrate a vm with my patch applied in qemu, have you logged in and really used it?  Do you see any corruption with things coming from disk (which weren't yet in page cache)?
<RoyK> ScottK: thanks
<hallyn_> mardraum: basically there are 3 ways to go:  (1) my patch, (2) a probably better patchset from upstream, (3) newer libvirt.  so zul is packaging newer libvirt right now, andI'll run a testsuite on it
<hallyn_> if it looks ok, we'll try to push that as a cleaner fix
<mardraum> hallyn_: yes, I have logged in, run new binaries, rebooted etc
<mardraum> all ok
<halvors> Anyone know how to use the check commands in the nagios-snmp-plugins package?
<mardraum> still using the 12.10 test vm
<hallyn_> mardraum: awesome, thanks
<mardraum> I'll install some new software now
<hallyn_> mardraum: (just saw your email reply sorry :)
<mardraum> no worries
<mardraum> just installed nginx for example, and the depends all ok
<zul> hallyn_:  just doing a local build now
<hallyn_> mardraum: cool.
<hallyn_> zul: ^ newer libvirt is the *cleaner* fix but after spending 1.5 days on that one-line qemu fix, i'm attached to it dammit :)
 * hallyn_ biab
<zul> hallyn_:  ftbfs :( ill fix it up
<patdk-lap> :)
<mardraum> hallyn_: thanks for your time on this one. bbl
<zul> hallyn_:  around?
<hallyn_> zul: yup
<zul> hallyn_:  im just in the middle of uploading libvirt to http://people.canonical.com/~chucks/libvirt can you see why the testsuite ftbfs please
<hallyn_> ok
<zul> yolanda:  ping https://code.launchpad.net/~zulcss/nova/2013.1/+merge/157138
<yolanda> ok
<zul> yolanda: ping https://code.launchpad.net/~zulcss/swift/1.8.0/+merge/157142
<hallyn_> zul: fwiw at least the first error appears to be a real bug in the test driver...  just doing '../tools/virsh -c test:///default list' should show the test domain, and doesn't.  <huh>
<bobbyz> hallyn_: I haven't seen all of your above, but have you added '--all' flag to that command?  If the domain is not running, you won't see it in 'list' output unless you tag --all onto that
<hallyn_> bobbyz: this is the test driver.  it does show up without --all in 1.0.2, and the testcases (upstream) expect it
<hallyn_> zul: i mean ffs, 1.0.2..1.0.4 has a 729-line diff in src/test/test_driver.c.
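The failing check hallyn_ describes can be reproduced directly; a minimal sketch, assuming a libvirt install with virsh available (the `test:///default` URI is libvirt's built-in in-memory test hypervisor, which always ships one predefined domain named "test"):

```shell
# The test driver needs no privileges or running hypervisor -- it exists
# exactly for checks like this. A healthy build shows the "test" domain
# in plain "list" output (it is predefined and running by default).
virsh -c test:///default list
virsh -c test:///default dominfo test
```

testOpen() accepts a connection only when the URI scheme is literally "test", which is the comparison the broken package was failing.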
<yousaf> hi all
<yousaf> I am having issues with twitter oauth, after debugging i found out that my server can not literally find twitter.com
<yousaf> any reasons why?
<Fieldy> i'm trying to get an ipv6 tunnel up. I have done this many times on many distributions and kernel versions, however for some reason on this ubuntu server system, i'm getting a problem. any ideas? commands and error output: https://gist.github.com/anonymous/5312013/raw/4ccccaba09acfaf5239bba852ed1031d52475d93/stdin.txt
<Fieldy> i've done a lot of searching on that error, with nothing solid as a result
<Daviey> roaksoax: I know you are committed with a tonne of stuff right now, but in the next few days.. can you work with plars to coordinate MAAS ISO tests for Raring?
<roaksoax> Daviey: sure thing
<sarnold> Fieldy: a few shots-in-the-dark -- does your kernel.modules_disabled sysctl prevent loading one of the ipv6 modules on demand?
<Fieldy> sarnold: i'll have a look, that's actually what i was just now poking around in -- i had formerly disabled ipv6 (because I didn't need it yet) in 3 sysctl lines however I reversed that (by echoing the opposing value by hand to them)
<Fieldy> and confirmed that worked by catting them
<Fieldy> I don't have kernel.modules_disabled at all in /etc/sysctl.conf or /etc/sysctl.d/*
<sarnold> Fieldy: okay, then it's probably not changed (you could check with sysctl directly) -- are the ipv6 modules blacklisted in /etc/modprobe.d/ ?
<Fieldy> ooh, no i didn't. i'm tired and was doing crap like echo "0" > net.ipv6.conf.all.disable_ipv6   ... FAIL. let me do that right
<Fieldy> that'll teach me to get with the times -- i'm so used to just echoing and catting
<sarnold> Fieldy: heh, well... the kernel devs have wanted to kill off sysctl for years. I'm not sure they're ever going to do it, but it's been on their minds for ages. blah.
<Fieldy> heh heh heh. that was it.
<sarnold> Fieldy: nice :D
<Fieldy> cough: ls                   bin  mysql-backups  net.ipv6.conf.all.disable_ipv6  net.ipv6.conf.default.disable_ipv6  net.ipv6.conf.lo.disable_ipv6
 * Fieldy deletes those and pretends they never existed
<Fieldy> sarnold: unless they have something that much better... leave it alone imo
<sarnold> Fieldy: yeah. I've always liked sysctl. it's easier to sysctl -a | grep ... than it is to find /proc -name '*...*'  :)
<Fieldy> no doubt
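Fieldy's "FAIL" above is worth spelling out: a dotted sysctl key and its /proc path name the same kernel setting, but echoing to the dotted name just creates a plain file in the current directory. A sketch of the two correct forms:

```shell
# Wrong: creates a literal file "net.ipv6.conf.all.disable_ipv6" in $PWD,
# leaving the kernel setting untouched (run as root for the real thing):
#   echo 0 > net.ipv6.conf.all.disable_ipv6
# Right, via sysctl(8) -- dots in the key map to slashes under /proc/sys:
sysctl -w net.ipv6.conf.all.disable_ipv6=0
# ...or equivalently via the /proc path itself:
echo 0 > /proc/sys/net/ipv6/conf/all/disable_ipv6
```

This is also why `sysctl -a | grep ipv6` beats hunting through /proc by hand.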
<qhartman_> I'm trying to install ubuntu server 12.04.2 i386 on an older P4 IBM server. The installer gets to 77% complete and then asks for a disk change. Has anyone found a work around for this? I've checked all the MD5's and done a disk validation, so I know the media is good.
<qhartman_> I also know the drive is good, I've used it to install other distros and versions on this machine w/o problems
<zul> hallyn_:  sucks doesn't it? have you figured out what's going on with libvirt?
<Quest> how to know how much bandwidth is taken on a specific port?
<hallyn_> zul: no i haven't
<hallyn_> i'm building from git right now
<hallyn_> zul: i mean yes - the test:///default is not properly registering domain test
<Quest> rephrase:  I need to know what bandwidth is taken on trafic on a specific port. how it can be done?
<rbasak> Quest: look into iftop
<Quest> hm
<rbasak> (though that's interface-based, not port-based)
<zul> hallyn_:  awesome
<zul> adam_g:  how do you want to review these 2013.1 merge requests do you want them one by one or do yout want me to queue them up for you
<adam_g> zul, doesnt matter. im actually about to push a keystone bug fix that would be great to get out /w 2013.1
<zul> adam_g: okies i havent gotten to keystone yet so that would be cool
<zul> adam_g: btw https://code.launchpad.net/~zulcss/glance/2013.1/+merge/157196
<adam_g> zul, https://code.launchpad.net/~gandelman-a/ubuntu/raring/keystone/lp1158563/+merge/157199
<Quest> rbasak,  how to see traffic on a specific port by iftop?
<plars> Anyone here who could help with the maas tests or iscsi tests on iso tracker?
<Quest> rbasak,  ?
<hallyn_> zul: well, with libvirt from git i don't get that problem.  wtf?
<zul> hallyn_:  maybe something is missing from the tarball (wouldnt surprise me)
<hallyn_> true
<hallyn_> but, 2 other possibilities:  1. i had to apt-get purge libcurl4-gnutls again to get git version to build.  maybe it built but wrongly bc of that from package.  (re-building to test)
<hallyn_> 2. could be some patch we have.
<hallyn_> do you want to try building from the release tarball?
<hallyn_> (else i'll try that third)
<vedic> I have removed .bash_history file but I still see output of about 10 commands when I write 'history' on the terminal. How to clear history completely?
<zul> adam_g:  https://code.launchpad.net/~zulcss/quantum/2013.1/+merge/157205
<sarnold> vedic: see the HISTIGNORE and HISTFILE environment variables in the bash(1) manpage
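The leftover commands vedic sees come from the current shell's in-memory history, which bash writes back to $HISTFILE on exit, so deleting the file alone isn't enough. A sketch of a full wipe:

```shell
# Clear the in-memory history list of the running shell:
history -c
# Remove the on-disk file bash would rewrite on exit:
rm -f "${HISTFILE:-$HOME/.bash_history}"
# Optionally stop this shell from writing any history file at all:
unset HISTFILE
```

HISTIGNORE and HISTCONTROL (see bash(1)) control what gets recorded in the first place.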
<hallyn_> zul: well huh, for some reason it does not get past
<hallyn_>     if (!conn->uri->scheme || STRNEQ(conn->uri->scheme, "test"))
<hallyn_>         return VIR_DRV_OPEN_DECLINED;
<hallyn_> (it being testOpen())
<zul> which file is that in?
<hallyn_> zul: src/test/test_driver.c
<hallyn_> zul: OMG conn->uri->scheme is 'qemu', despite "-c test:///default"
<zul> hallyn_:  can you do me a favor? can you drop 9002-better_default_uri_virsh.patch
<hallyn_> yup
<hallyn_> (gonna take a few mins to build)
<zul> k
<hallyn_> zul: all tests pass with that patch dropped
<zul> hallyn_: hah thought so
<hallyn_> zul: odd, it seems to have been ported correctly at first glance
<zul> i noticed debian isnt using that patch anymore either
<hallyn_> zul: i think i see why
<zul> oh?
<hallyn_> zul: commit abfff210060625af8914e28601f1ec6ed96b05ae switched the order to first call vshInit(), then parse argv
<zul> ah
<jackln> I'm trying to set up a PPTP VPN server, what IPs do I use?!
<jackln> for remoteip and localip
<hallyn_> zul: building with that reversed
<red82> can anyone point me at a good resource that I can read up on the process of replicating a mysql database from server A to B such that server B can function as a failover in the even that A goes down?  Application is e-commerce.
<plars> is http://iso.qa.ubuntu.com/qatracker/testcases/1463/info still valid? I'm not seeing these options listed in preferences
<jackln> Do you know anything about VPNs?
<hallyn_> zul: yeah with http://people.canonical.com/~serge/9002-better_default_uri_virsh.patch it works
<hallyn_> aaaand.... the machine i was doing that on just shut down for no reason
<hallyn_> hardware sucks
<hallyn_> zul: OTOH, virsh list now gives me segfault.  <fume>
<hallyn_> oh, heh.
<hallyn_> left some debug cruft in my pkg
<hallyn_> zul: i dunno, pkg isn't working for me - won't show my installed vms
<hallyn_> or have they been deleted
 * hallyn_ slaps self
<hallyn_> yeah nm it's working.  now to run the qa tests
<cwinebrinner> echo $-
<cwinebrinner> echo $T
<cwinebrinner> woops >.>
<adam_g> zul, not sure if its related to that keystone fix from earlier, but i can't get the test suite to pass for the life of me.
<hallyn_> zul: so the qa tests mostly pass, except there is some domxml-to-native weirdness
<Quest> what is the package name to install GUI  on ubuntu server? (i think its unity or gnome for ubuntu?) (and i would prefer kubuntu GUI (kde) ?
<sarnold> Quest: try: 'sudo apt-get install kubuntu-desktop^'
<Quest> whats ^ for?
<sarnold> (the ^ asks to install the _task_ named kubuntu-desktop, which is different from the package named kubuntu-desktop)
<Quest> really
<Quest> if i omit ^. will it install kde?
<sarnold> Quest: it'll install the metapackage kubuntu-desktop. you might never know the difference between the two.
<Quest>   Temporary failure resolving 'us.archive.ubuntu.com'
<sarnold> worked for me locally, and with both 8.8.8.8 and 4.2.2.1 public recursors
<Quest>  I just ssh through public ip to my server. but it seems i cannot ping -s 25 google.com or apt-get update.
<Quest> are all ports open by default on ubuntu-server (which is contrary to ubuntu desktop)?
<hallyn_> zul: ffs - it dies unless you have a mem-balloon specified in the xml
<Quest> sarnold,  ^?
<sarnold> Quest: yes, all ports are open by default on both ubuntu-server and ubuntu-desktop. if you've configured iptables (or one of its frontends like ufw) then of course that'll be different.
<Quest> sarnold,  i just installed ubuntu-server  . it updated fine during installation . but now its not
<sarnold> Quest: check dmesg. check 'ping 4.2.2.1'.
<Quest> let me see
<Quest> I just installed ubuntu-server . it boots with a black / blank screen and does not show anything. what can be wrong?
<patdk-lap> kernel graphics driver issue probably
<Quest> patdk-lap,  what can be done?
<Quest> whats the main difference b/w ubuntu server and ubuntu desktop. ? i am having problems installing server . can i go with desktop?
<sarnold> Quest: server doesn't come with a gui and doesn't use NetworkManager to manage its interfaces. The desktop variant will use NetworkManager, and come with a GUI.
<sarnold> Quest: you sure can use the desktop variant as a server platform if you wish -- I've done that for my pandaboard, which has no monitor connected
<Quest> hm
<Quest> one add is ssh
<Quest>  i can install openssh my self. any thing else that is different?
<patdk-lap> there is 0 difference between ubuntu server and desktop
<patdk-lap> just what default packages are installed
<sarnold> Quest: you can compare the package manifests at e.g. http://releases.ubuntu.com/precise/
<sarnold> oh. uh. there's no server manifest there.
<Quest> i am having a general error mounting file systems. i am at a shell while installing kubuntu. how do i install from the shell?
#ubuntu-server 2013-04-05
<Quest> sarnold,  any clues
<sarnold> Quest: it's hard to debug a problem without any idea of what commands you ran or what errors you got. :)
<Quest> just says general error. error mounting hard disk
<Quest> i am installing by live cd
<acidflash> hello
<acidflash> i have a pci-express ssd that i installed into my server yesterday, and i can see it in lspci, but i cant see it in fdisk, can someone point me in the right direction as to how to get this ssd viewable in fdisk?
<sarnold> acidflash: does it show up with a device name in dmesg?
<sarnold> acidflash: does fdisk just not read the partition table or do you not even know how to get fdisk to talk to that specific drive?
<acidflash> sarnold: the entire drive doesnt show up in fdisk, im checking dmesg for any signs of life
<sarnold> acidflash: dang, unreadable partitions might have been easier. :) grep for 'sd' or 'hdd' or something...
<acidflash> [    2.232470] scsi 15:0:0:0: Processor         Marvell  91xx Config      1.01 PQ: 0 ANSI: 5
<acidflash> [    2.232596] scsi 15:0:0:0: Attached scsi generic sg3 type 3
<acidflash> im fairly certain thats it.
<sarnold> but sg3 is the closest you get? hrm. :(
<sarnold> that's way out of my experience, good luck :)
<acidflash> hehe, thanks
<acidflash> sarnold: all other ssd's show up after the areca raid controller; this is the only thing other than onboard devices which shows up before the areca raid controller, so i'm pretty sure that's it
<acidflash> 3 hdd's, then this, then areca controller, then the rest of my ssds
<sarnold> acidflash_: hrm, I don't see anything in /lib/modules/`uname -r`/kernel/drivers/scsi/ that looks like it'd fit the bill either.
<acidflash_> i see
<sarnold> good luck :)
<acidflash> i think i should be looking for mvsas
<acidflash> thank you
<ashley_w> i built a vm using vmbuilder, and now i have a qemu qcow image, but i'm not sure what to do with it. i've built a couple of other VMs using virt-install
<zul> adam_g: ill take a look tomorrow
<zul> hallyn_:  gah?
<adam_g> zul, this is causing the failures: http://bazaar.launchpad.net/~openstack-ubuntu-testing/keystone/grizzly/revision/181  it was passing before because we weren't shipping the sql backend enabled by default. with it the default, we need to create a test db and overrides for the test suite
<adam_g> zul, or we can only test the kvs backend at build time but default to sql on installation, i prefer to test sql though
<pii3> what is the minimum hardware requirement for ubuntu 12.04 or 12.10 server? especially for disk and ram?
<smb> Daviey, when you are around, how far did the ffe handling for raring xen get?
<Daviey> smb: hey.  I couldn't quite work out if XiongZhang (xiong-y-zhang)  was concerned about the TSC change?
<smb> Daviey, Let me read the report again, but I believe it was more me not finding the support bit because the user-space tool I used was lying...
<Daviey> ah!
<smb> Yeah, I think it was that
<smb> Basically I had a machine supporting tsc offset (at least after I installed the upgrade) but cpuid is not telling the truth for cpuid[7]
<Daviey> smb: approved.. please progress :)
<Daviey> (Thanks!)
<smb> Daviey, If that means upload... hey could you sponsor the source package I prepared. :)
<Daviey> smb: sure
<caraconan> Hi there. Can somebody please provide to me the URL to install ubuntu server from internet? I have (I think) the one to install desktop version, which is http://archive.ubuntu.com/ubuntu/dists/quantal/main/installer-amd64/
<caraconan> Nobody? The url that I pasted is the right one?
<Daviey> caraconan: that is the correct one
<Daviey> mini.iso covers all flavours
<caraconan> <Daviey> Maybe I'm missing something, but I can't see anywhere the option to choose ubuntu server
<caraconan> How will this end up installing the flavour that I want?
<Walther> Okay, completely stupid question. I have a new server here with 128GB of ram and only 2x146GB (-> 146GB after raid) for the host install. Ubuntu server wants to grab >130GB for *swap partition* when I try to configure LVM
<Walther> Obviously there will be more disk space afterwards for virtual servers etc, but those will be in ZFS pools and configured later
<Walther> the partitioner in the ubuntu installer doesn't seem to allow changing the swap partition size
<xnox> Walther: sounds about right. You should use manual partitioning, it does allow to "autopartition using lvm" and then delete/resize swap.
<xnox> Walther: unfortunately the rule of thumb SWAP ~= RAM doesn't work well these days where it can be the case that RAM >> HDD space
<Walther> mmh. However it seems that if I now go to manual partitioning, the lvm volume groups etc have already been written to disk -> it doesn't allow me to properly modify them, even if I "go back" in the installer
<Walther> so how do I go about to "just blank the disk / ignore partition tables and start again" in the installer?
<Walther> Er, how can I shrink the lvm swap partition in the installer?
<xnox> Walther: enter manual partitioning -> delete swap -> enter lvm submenu -> delete swap volume.
<xnox> Walther: then create a new volume.
<xnox> Walther: none of it is written to disk, but it does preserve / show the model. Or reboot =)
<Walther> ahh, now i got it. the menu is a bit backwards
<Walther> (and yeah, perhaps something should be done to that swap ≈ ram thing
<Walther> at least in server installs, the partitioner could be smarter)
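If the installer's oversized swap does land on disk, it can also be fixed after the first boot. A sketch, where the VG/LV names `vg0`/`swap` and the 8G target are assumptions, not Walther's actual layout:

```shell
# Shrink an oversized swap logical volume and return the space to the VG.
sudo swapoff /dev/vg0/swap          # stop using the swap LV first
sudo lvresize -L 8G /dev/vg0/swap   # shrink it to the desired size
sudo mkswap /dev/vg0/swap           # re-create the (now smaller) swap area
sudo swapon /dev/vg0/swap           # and re-enable it
```

Unlike a filesystem, swap carries no data worth preserving, so shrink-then-mkswap is safe here.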
<wwoo> hello, I want to update mysql on my ubuntu VPS to version 5.6. How do I do that? Is there a command? Can I just run installation?..
<wwoo> I don't want to lose any data though
<shadeslayer> hi, does anyone have any ideas on how to limit mysql mem consumption by using cgroups?
<Daviey> smb: Your build FTBFS.. mce.c:150:1: internal compiler error: Segmentation fault .. erk
<Daviey> I've hit retry.
<smb> Daviey, ok. I am pretty sure I did do a test sbuild
<Daviey> smb: well this is i386.. so not sure i care much about that :)
<smb> Daviey, Hm, probably only did one on 64bit but still...
<smb> Daviey, Started a local 32bit build as well. If the failure persists we should let the compiler people know
<Daviey> smb: https://launchpad.net/ubuntu/+source/xen/4.2.1-0ubuntu2/+build/4469971
<_BAMbanda> on setup, i accidentally skipped over all the various options such as lamp server, ssh server, dns, etc...
<_BAMbanda> how can I bring that up again
<Nafallo> tasksel
<_BAMbanda> thats the command?
<Nafallo> maybe. I'm not fond of doing it that way myself.
<smb> It is the command and actually brings up the selection screen
<Daviey> smb: same error
<smb> Daviey, Bugger, ok I will look into it
<hallyn_> zul: yeah, gah.  libvirt is requiring mem-balloon in .xml for domxml-tonative qemu-argv to work, and i don't see why yet.
<zul> thats dumb
<hallyn_> yup
<zul> yolanda: ping https://code.launchpad.net/~zulcss/cinder/2013.1/+merge/157348
<yolanda> done
<zul> thanks
<ftpd> Hi guys. Is there a problem with installing Ubuntu on disks/partitions larger than 3 TB? I'm trying to install 12.04.2 from .iso on a remote dell iDrac machines. On machine with 6 x 600 GB RAID5 it works, on 8 x 900 GB or 8 x 480 SSD still (also with raid5) fails with grub's error 'out of disk'.
<ftpd> It worked once, when I've used 'guided partitioning'. When I'm trying to use just one big partition (as customer wants) without swap, it fails. I'm creating 'bootdisk' partition 1.0 MB 'large', as installer wants, but still nothing.
<maswan> ftpd: Nope, we always slice out a fairly small raid volume for OS and keep the large data volume separate
<ftpd> maswan, I would do that, but my customer just 'wants' to have one big partition. But I think I'll slice it and he'll have to deal with it ;-)
<xro> Hi, i'm looking for a cheap cloud to interface with JUJU... Do you know one?
<xro> nobody knows cheap cloud ?
<utlemming> xro: HPCloud and EC2 are the two with Juju support
<utlemming> or you can use the local provider against LXC
<xro> utlemming, you can't say amazon is cheap! 600$ a year per small vm...
<utlemming> xro: well, short of finding an openstack cloud with API access that is in beta, you aren't going to find both cheap and Juju support. However, cheap is in the eye of the beholder. 600$/year isn't that bad considering the infrastructure you're getting.
<xro> utlemming, i thought about building my openstack cloud... but it's really expensive
<xro> utlemming, 600$ per year is cheap or expensive depending on the use
<utlemming> xro: if you have a spare machine (albeit beefy) you can look at DevStack. That'll run on commodity hardware
<xro> and of the client
<tgm4883> utlemming, he wants to sell managed services
<xro> tgm4883, i try to do a business plan
<alimj1> Hi. I have a question with iptables. I want people who connect to my box through PPTP to have some specific port redirection. I am a little bit new to iptables
<alimj1> current rule is: iptables -t nat -A POSTROUTING -s 192.168.4.0/24 -o eth0  -j MASQUERADE
<alimj1> I want ALL incoming TCP PPTP connection to be redirected to 127.0.0.1:9040
<alimj1> I managed to configure it with two Network Adapters and NAT. However, It does not work with PPTP connections
<alimj1> The rule that works for NAT is: iptables -t nat -A INPUT ! -o lo ! -d 192.168.0.0/16 -s 192.168.0/24 -p tcp -m tcp -j REDIRECT --to-ports 9040
<alimj1> Forget the above line, it is for local redirection
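No one answered alimj1 in channel, so here is an untested sketch of the likely shape of the rule. It assumes PPTP clients arrive on `ppp+` interfaces and come from the 192.168.4.0/24 pool seen in the MASQUERADE rule above; REDIRECT belongs in the nat table's PREROUTING chain (not INPUT) and sends the traffic to a port on the router's own address:

```shell
# Redirect all TCP from PPTP clients to the local transparent-proxy port
# 9040. "ppp+" matches any ppp interface; pool and port are from the
# session above -- adjust to your pptpd config.
iptables -t nat -A PREROUTING -i ppp+ -s 192.168.4.0/24 -p tcp \
    -j REDIRECT --to-ports 9040
```

Note REDIRECT targets the box's own address rather than 127.0.0.1 literally; traffic arriving over ppp can reach a locally bound service that way.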
<Quest> whats the recomended one for package for NX (like freeNX) to share full desktop with mouse, keyboard events where the server can be on linux ubuntu and the client may be either linux or windows?
<genii-around> Server edition doesn't come with a desktop to share.
<adam_g> zul, did you say you got keystone building?
<zul> adam_g:  almost
<zul> i thought i did but it still errors out
<adam_g> zul, might work if you just set all backends to kvs in the test_overrides.conf
<zul> adam_g: ill try that next
<adam_g> zul, you need to specify all of them explicitly or they'll default to sql (which the keystone.conf.sample specifies)
<zul> adam_g:  they are all kvs
<adam_g> zul, what is your test_overrides.conf ?
<zul> adam_g: the default one in debian/tests
<zul> wait never mind i suck
<adam_g> zul, theres a bunch of other backends that it doesnt address, which the tests will configure to use what is set in keystone.conf.sample. we patch that file to specify sql for all of them, so unless they're overridden in test_overrides.conf, you'll be testing sql
<zul> adam_g: gotcha
<flickorfly> Is there a command to list the current applicable security updates for a server without all the rest on the latest LTS?
<flickorfly> I used to grep apt-get upgrade -s, but they don't seem to be differentiated from any other updates anymore.
<sarnold> flickorfly: if you remove 'updates' from your sources.list (or have a different sources.list for answering this question) you'll get just the security updates
<flickorfly> no way to just do it on the fly?
<sarnold> I've not tried the new configuration file myself, but presumably it'd be a simple little shell script, function, or alias, to use the different configuration..
<flickorfly> yeah, I'm basically supposed to be auditing systems so I avoid making changes without approval. That's why I'm attempting not to do something like that.
<flickorfly> I guess I could point apt at a sources.list in my home dir though
<flickorfly> I'm sure there is an option to specify a config
<sarnold> flickorfly: apt.conf(5) mentions a "sourcelist" configuration option, maybe just -oDir::Etc::Sourcelist=~/audit.list ...
<flickorfly> sweet, thanks
<rbasak> Just make sure that you've run apt-get update recently and that audit.list is a subset of sources.list. Otherwise I think you might get a silently invalid result.
<flickorfly> rbasak, I did " grep security /etc/apt/sources.list > sources.list" so that should meet your subset requirement right?
<rbasak> flickorfly: yeah that should be fine. Assuming you have security lines in /etc/apt/sources.list already that aren't commented out :)
<flickorfly> lol, yeah, It's good. Appreciate your attention to detail though
<Quest> iam following https://help.ubuntu.com/community/FreeNX . at step 5. it says Some packages could not be installed. This may mean that you have
<Quest> requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: freenx : Depends: freenx-smb but it is not going to be installed. E: Unable to correct problems, you have held broken packages.
<flickorfly> Thanks sarnold and rbasak. It's working great. (but now 157 security updates... looks like I'll be coming back ot this customer.)
<sarnold> flickorfly: oof. :)
<flickorfly> it'll keep me out of managing windows systems so I'm happy
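Putting the security-audit recipe from the exchange above in one place; a sketch, with `~/security.list` a hypothetical path of your choosing:

```shell
# Build a security-only source list (a strict subset of the system one,
# per rbasak's caveat, so results can't be silently wrong):
grep security /etc/apt/sources.list > ~/security.list
# Refresh indexes and simulate the upgrade against only those sources --
# the system apt configuration is never modified:
sudo apt-get update -o Dir::Etc::SourceList=~/security.list
apt-get upgrade -s -o Dir::Etc::SourceList=~/security.list | grep ^Inst
```

The `-s` (simulate) flag means nothing is installed, which suits an audit-only engagement.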
<bkerensa> pmatulis: are you still the serverguide POC?
<pmatulis> bkerensa: yeah, but i've been sleepy this cycle in terms of actually doing anything
<bkerensa> pmatulis: ok I just got a simple pending merge I hoped to land for raring if possible
<pmatulis> bkerensa: link?
<bkerensa> https://code.launchpad.net/~bkerensa/serverguide/fix-for-1165007/+merge/157459
<resno> hey can you suggest any books?
<resno> im trying to learn a bit more about sys admining etc
<bkerensa> resno: Essential System Administration (O'Reilly)
<resno> thanks bkerensa
<flickorfly> resno, Rute Users Guide is another good one.
<resno> heh, i like one of the opening statements
<resno> flickorfly: ^
<resno> I Get Frustrated with UNIX Documentation
<resno> That I Don't Understand
<zul> adam_g:  https://code.launchpad.net/~zulcss/horizon/2013.1/+merge/157464
<cereal> so i'm having a hard time tracking down the cause of a system freeze on my server :(  did memtest for hours, added some additional air cooling, and when it happens its not overly taxed
<cereal> all it has is a single kvm guest running moderate load, suggestions on where to look next?
<zul> adam_g:  one more https://code.launchpad.net/~zulcss/ceilometer/2013.1/+merge/157467
<hallyn_> zul: well as long as i'm wasting *this* much time on libvirt 1.0.4, maybe i can also get past the annoying gnutls compilation bug
<zul> hallyn_:  hehe
<zul> hallyn_:  feel the love
<Quest> is it easy to get the password of a sudoer if the system harddisk is given to someone?
<maswan> yes, if it is a weak password
<Quest> ignore bruteforce
<patdk-lap> Quest, heh?
<patdk-lap> there is only two things
<patdk-lap> what kind of hashing was used
<Quest> ok?
<patdk-lap> and how long brute force or rainbow tables would take
<Quest> hm
<Quest> how about i just replace the /etc/shadow file with my own?
<Quest> it will change the passwords?
<patdk-lap> yes
<patdk-lap> just make sure the old copy on the disk is wiped
<patdk-lap> or it might be able to be recovered
<Quest> hm
<Quest> great
<Quest> thanks
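A narrower variant of what patdk-lap describes: rather than swapping the whole shadow file, replace one user's hash on the mounted disk. A sketch; the user name `admin`, mount point `/mnt/target`, and password are illustrative only, and `openssl passwd -6` needs a reasonably recent OpenSSL (older systems only offer `-1`, MD5 crypt):

```shell
# Generate a SHA-512 crypt hash for the new password:
hash=$(openssl passwd -6 newpass)
# Swap it into the hash field (field 2) of the target user's shadow entry
# on the offline-mounted filesystem. "|" as the sed delimiter avoids
# clashing with the "/" characters crypt hashes may contain.
sed -i "s|^admin:[^:]*:|admin:${hash}:|" /mnt/target/etc/shadow
```

As noted above, the old hash may still be recoverable from unallocated disk blocks unless wiped.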
<kaje> I have screwed up my pam settings somehow and now users can log in by typing any password. Here are my pam files: http://pastebin.com/yrmtQPJL
<kaje> I should have said, users can ssh in.
<kaje> If someone tries to connect with an unknown user, it rejects them. If they try to ssh with a known user, any password they enter will work.
<Quest> while updating . i got W: Failed to fetch http://ppa.launchpad.net/freenx-team/ppa/ubuntu/dists/quantal/main/source/Sources  404  Not Found.    i cannot install freenx. any solution?
<patdk-lap> quest, dunno, that isn't part of ubuntu
<Quest> what can i do
<Pici> !crosspost
<ubottu> Please don't ask the same question in multiple Ubuntu channels at the same time. Many helpers are in more than one channel and it's not fair to them or the other people seeking support.
<soren> kaje: You've commented out pam_deny in common-auth.
<soren> kaje: That's probably why.
<kaje> Yeah, that was my REALLY dumb mistake. I thought pam-auth-config was going to overwrite that.
<kaje> Thanks very much
<soren> kaje: No problem.
<Praxi> I'm trying to change the ownership of a drive, specifically /dev/mapper/sdk  I do a sudo chown bacula /dev/mapper/sdk and it doesn't change the owner.  I did some googling and come up with a possible fstab issue, but, I checked that and nothing in there.  Any ideas?
<sarnold> Praxi: what's ls -l /dev/mapper/sdk look like?
<Praxi> rebooting the server real quick, be up in just a second
<Praxi> lrwxrwxrwx 1 root root 7 Apr  5 14:37 /dev/mapper/sdf -> ../dm-2
<Praxi> this is an autofs usb drive
<sarnold> Praxi: aha :) that's just a symbolic link. do you really want to change the ownership of a symlink?
<Praxi> no just the drive :)
<sarnold> aha :) how about ls -l /dev/dm-2 ?
<Praxi> there we go, lets see if that fixed my problem
<Praxi> thanks sarnold
<sarnold> :)
<Praxi> ok that was the issue, lets see if I get further now lol
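Why Praxi's first chown looked like a no-op, sketched out (paths and the `bacula` user are the ones from the session above):

```shell
# /dev/mapper entries are symlinks to the real dm-N device nodes:
ls -l /dev/mapper/sdk          # lrwxrwxrwx root root ... -> ../dm-2
# chown without -h dereferences the link and changes the *target*,
# while ls -l on the link keeps showing the link's own root ownership:
sudo chown bacula /dev/dm-2    # what actually needed changing
sudo chown -h bacula /dev/mapper/sdk   # -h would change the link itself
```

Also worth noting: /dev is recreated at boot by udev, so a udev rule is the durable way to set device-node ownership.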
<irv> my /boot partition seems to be 100% full and only 226mb or so
<irv> is this a problem?
<sarnold> irv: it'll be a problem next kernel upgrade..
<irv> ya it's failing
<irv> how could this have happened?
<sarnold> old kernels are not automatically removed
<irv> and i guess during setup it only makes it large enough to just fit the one?
<irv> any way to automagically clear up some old ones so i can update?
<sarnold> irv: check dpkg -l linux-image* | grep ^ii   to find the installed kernel packages
<sarnold> irv: check uname -r to find the currently running kernel
<sarnold> you probably won't want to remove the package corresponding to the currently running kernel, since any new required modules will be missing..
<sarnold> irv: as for the default size, I don't think the installer creates a /boot by default. It'll let you make one, of course..
<sarnold> 226 would seem plenty to me :)
<irv> okay i have 8 installed
<irv> so just manually remove the ones i'm not using
<irv> well that's certainly not working ahah
<irv> i try to uninstall linux-image-3.2.0-29-generic
<irv> currently using 3.2.0-37-generic
<irv> it says i have unmet dependencies, and to run apt-get -f install
<irv> but when i do, it fails due to no space to extract
<irv> D:
<sarnold> irv: try to remove with dpkg --purge, not apt-get purge ..
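irv's cleanup as one sequence; a sketch, using the kernel versions that appear in the session above (substitute your own `uname -r` output):

```shell
# Identify what to keep and what is installed:
uname -r                            # the running kernel -- never remove this
dpkg -l 'linux-image*' | grep ^ii   # all installed kernel image packages
# apt can refuse to operate while /boot is completely full, so remove an
# old kernel with dpkg directly, then let apt repair the dependency state:
sudo dpkg --purge linux-image-3.2.0-29-generic
sudo apt-get -f install
```

Repeat the purge for each unneeded old version until /boot has room for the pending upgrade.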
#ubuntu-server 2013-04-06
<ruben231> hi guys
<Iapetus> Hello!
<Iapetus> bastebin for wads of text right?
<Iapetus> I'm trying to set up a L4D2 ubuntu server, I have most of it correct but I'm missing something and confused
<lifeless> pastebin, yes
<Iapetus> anyone alive in here?
<Iapetus> http://pastebin.com/NcrcHJC7
<Iapetus> there it is
<Iapetus> I need to add my steam user ID to the srcds_run file don't I?
<Iapetus> and pass
<Iapetus> right?
<Fieldy> what
<Iapetus> what
<Fieldy> it's a proprietary app last i knew of, consider asking their own support
<lifeless> Iapetus: I don't know anything about running L4D2 servers, sorry; it is unlikely that folk here will know, because it isn't open source.
<lifeless> Iapetus: I would look on the steampowered forums
<Iapetus> well balls, thanks, i'll go hunting
<lifeless> Iapetus: you might try the -debug option the program recommends
<Iapetus> yup, saw that in something I googled a bit ago, required a plugin of some sort. Didn't get to it yet. Also bugger no sticky on the steampowered about ubuntu! BOOO
<Neytiri> i am having a issue getting my bind server to start getting the error: permission denied on opening the config files
<overrider> Hello. I am on 10.04, how do i get a more recent version of php on my server? Is there some repo i can use without fear of breakage?
<Neytiri> apt-get install php5
<overrider> Not quite the newest php5
<Quebert> Morning everyone
<[twisti]> hey, when i log into my ubuntu lts 12.04, i get this: *** /dev/md2 will be checked for errors at next reboot *** for all my partitions. but i just sudo reboot'ed, and it just rebooted in the normal time, and still says that.
<[twisti]> the answer at http://askubuntu.com/questions/182804/12-04-indicates-filesystem-check-on-next-boot-but-never-does-one sounds sensible, but im worried that the fsck will sit there with something like "found errors, want me to fix them? y/n"
<[twisti]> its a headless server, so that would effectively brick it
<maxb> Do you have a reason to distrust the system boot scripts that much
<maxb> ?
<[twisti]> none at all
<[twisti]> but it doesnt say anywhere that fsck at boot time is non interactive
<[twisti]> before i brick our server id like a better reason to assume it will run without interaction than "maybe we'll get lucky"
<[twisti]> in my experience, things that check hdds ask for user input if they find anything out of the usual
<[twisti]> i have no reason to assume that fsck is any different
<Hoffa_> need help with an openfiler OS krasch
<Hoffa_> hello
<Hoffa_> need help
<Hoffa_> #openfiler
<Hoffa_> is there anyone who knows anything about mdadm
<claco> morning. I'm looking for the right place to report issues with a real live ubuntu.com servers.
<syncsys_> Can any one tell a package / app for getting network stats/ speed for individual ports numbers . (it would be much  better if it would be web based)?
<Quest> can anyone tell why in iptraf in port 80 (breakdown by ports) the "in" is always zero and the "out" at bottom shows speed stats for download and for upload as well. on other strange thing. its showing download and upload . both speeds in "out"  and  "in" in is  0 ..  ?
<Quest> anyone home?
<X-ian> I need to stop postfix from delivering mail temporarily while still accepting incoming mails
<geofft> hallyn: (or anyone else) Is there current work on libtpms + qemu packaging?
<Quest> i cannot connect to freenx server by nx client. it quites at establishing display.
<geofft> ppa:serge-hallyn/tpm is kinda old. (Also some parts of this landed in upstream qemu very recently)
<elnur> anyone using ksplice? if yes, how's the experience?
<NUKEESIMONHD> dracula
<Quest> for LAN file sharing for Linux as server and windows as most clients.  simple sftp:/ip  from windows would be suffice (provied every user has an acccount in the server and a home folder) ? or an ftp server is needed? 2. what is fast and how much generally, sftp of ftp?
<RoyK> Quest: if   you want to share data between a linux server and windows clients, use samba
<RoyK> !samba | Quest
<ubottu> Quest: Samba is the way to cooperate with Windows environments. Links with more info: https://wiki.ubuntu.com/MountWindowsSharesPermanently and https://help.ubuntu.com/12.04/serverguide/C/windows-networking.html - Samba can be administered via the web with SWAT.
<Quest> RoyK,  samba cant give acces over wan internet
<Quest> we need that
<Quest> what different is vsftpd from ftp over openssh server?
<RoyK> security
<RoyK> traditional ftp sends passwords in cleartext
<RoyK> you really don't want that
<RoyK> just setup ssh and have the clients use filezilla
<Quest> vsftpd and  ftp over ssh is  same underneath.    s ftp.
<Quest> ya that what i meant
<RoyK> no
<RoyK> with ftp over ssh, sshd controls the business
<geofft> "sftp" is a misnomer, it involves no FTP at all.
<RoyK> with vsftpd, you can choose between traditional plaintext ftp or ftp over ssl
<geofft> You might be thinking of "ftps", which is to ftp what https is to http.
<geofft> sftp is an SSH thing with no relation to FTP servers or FTP clients
<geofft> although many FTP clients also, independently, happen to support SFTP too
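A quick way to keep the names straight — the client commands are entirely different tools (host and user names below are placeholders, and lftp is just one of several clients that speaks both):

```shell
sftp user@example.com              # SFTP: file transfer over SSH, usually port 22
lftp -u user ftps://example.com    # FTPS: classic FTP wrapped in SSL/TLS, port 21/990
```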
<Quest> hm
<RoyK> Quest: just setup ssh and forget about the ftp server
<RoyK> Quest: then setup rssh if you want to limit the clients' access to other parts of the system
<Quest> sftp is bascially ftp over ssl and file transfer over ssh is on 22 and does the same as ftp
<Quest> right?
<maxb> sftp is not really ftp over ssl
<RoyK> Quest: no, sftp is the built-in ftp-like thing in ssh
<geofft> SSH doesn't involve SSL.
<Pici> SFTP != FTPS
 * RoyK hands out popcorn
<geofft> (I kind of wish they'd picked a different name for sftp.)
<Quest> whats the difference b/w ftp server like vsftpd and ftp of ssh (i typed sftp:// for access over ssh)
<Quest> confusing
<Quest> and whats the advantage of an ftp server ls vsftpd
<Quest> 3. how to restrict users not going outside their /home dir if ftp over ssh is used.
<RoyK> 23:22 < RoyK> Quest: then setup rssh if you want to limit the clients' access to other parts of the system
<geofft> You can restrict users to just running sftp-server as the target command, I believe.
<geofft> (I'm not familiar with rssh, but it sounds likely to work too)
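geofft's suggestion — forcing the SFTP server as the only allowed command — can be done in stock OpenSSH without rssh. A minimal sshd_config sketch (the group name `sftponly` is made up for the example):

```
# /etc/ssh/sshd_config
Match Group sftponly
    ForceCommand internal-sftp   # SFTP only: no shell, no scp, no arbitrary commands
    AllowTcpForwarding no
    X11Forwarding no
```

Add users to the group (e.g. `adduser username sftponly`) and reload sshd for it to take effect.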
<RoyK> we use it for 17k users at work
<Quest> RoyK,  rssh? whats that. similar to open ssh?
<RoyK> Quest: try man rssh first, please
<Quest> RoyK,  no manual. please inform
<RoyK> then install it
<Quest> geofft,  any ftp server will do like vsftpd?
<Quest> RoyK,  is it just an ssh server like open ssh server is?
<Quest> y/n
<Quest> rssh is a restricted shell, used as a login shell, that allows users to perform only scp, sftp, cvs,
<geofft> Quest: I'm not familiar with vsftpd, I haven't used it.
<Quest> geofft,  but you meant an sftp server?
<RoyK> Quest: http://bit.ly/Z0jxk1
<geofft> Quest: The SFTP server is part of the SSH server
<Quest> RoyK,  rssh is a restricted shell, used as a login shell, that allows users to perform only scp, sftp, cvs,
<geofft> Quest: If you want to run SFTP (instead of FTPS), you don't need a separate FTP / FTPS server
<Pici> SFTP is poorly named. It is not an FTP server, although many things that function as FTP client are able to connect to it if they have the proper support.
<geofft> Quest: It sounds like it is worthwhile for you to do some googling about SFTP and securing SSH, or about FTPS
<geofft> Quest: If you intend to deploy this across the WAN, you should be very confident you understand the security of what you're deploying
<Quest> Pici,  oh. so sftp is just file transefer over ssh?
<Pici> FTPS is horrible and no one should use it.
<RoyK> Quest: if your users have ssh login access, they can access anything they are given access to, also for file transfer
<RoyK> Quest: if your users only need file transfer, rssh is good
<Pici> Quest: essentially.
<geofft> Quest: I don't think the scattered advice of folks on an IRC channel is sufficient for setting up a secure file server.
<RoyK> geofft: I don't think FUD is any better :P
<geofft> Sure. I'm not trying to spread FUD -- there are good ways to learn about this
<geofft> And they're all reasonable for being in a good position to set up a server.
<Quest> RoyK,  doesnt an sftp server like vsftpd restricts users to get out of their /home dirs by default?
<geofft> I'm just pointing out that IRC is a bad way.
<RoyK> Quest: no, rssh does
<Pici> Quest: vsftpd is not an SFTP server.
<Quest> Pici,  what is it then?
<Quest> Pici,  its an ftps server theN?
<geofft> vsftpd is an FTP or FTPS server
<RoyK> Quest: but if your users have ssh login access already, they can get whatever they can access out of there by other means
<Quest> oh
<Pici> Quest: what geofft said.
<geofft> Oh, I guess the fact that "vsftpd" contains "sftp" in its name is also confusing :-(
<RoyK> Quest: so, please, tell us, do your users have ssh login access?
<Quest> ok. 2nd last question. can windows explorer get sftp and ftps dirs ?
<geofft> Windows Explorer cannot.
<RoyK> Quest: no, but filezilla can
<geofft> Frankly there are few good solutions for Windows file access across a WAN
<Pici> Quest: fyi, getting connected to an FTPS server if you are behind a corporate firewall is hell.
<geofft> I suspect SMB will work as well as anything. You could also try using WebDAV over https, but performance is very poor on Windows
<RoyK> geofft: well. SMB2 works well over a closed WAN
<RoyK> over the internet - wouldn't dare it
<Quest> and last question. 1. I need WAn and LAn access for users, 2. i will not give users a login to console. just make their /homes and login pass. 3. i have encrypted their /homes 4.
<RoyK> Quest: rssh
<RoyK> again
<Quest> RoyK,  explorer cant do for both? i remember i did sftp or ftps once in windows exploror
<RoyK> no, explorer can't
<RoyK> period
<RoyK> filezilla can
<Quest> RoyK,  ok
<Pici> explorer (iexplore) can do straight ftp.
<RoyK> Quest: when you say "over wan", do you mean "over internet" or "over a closed wan"?
<Quest> and what do i need. sftp or ftps?
<Quest> Pici,  not iexplorer. windows explorer
<Quest> RoyK,  ^
<RoyK> Quest: !
<Pici> ahh
<Quest> ok. period..
<Quest> i recall the period
<Pici> Quest: sftp is what we suggest. If you want ftps you're on your own.
<Quest> hm
<Pici> RoyK suggests doing this with rssh, as he has said many, many times.
<Quest> Pici,  ok. why ftps is discouraged?
<RoyK> Quest: can you please take an advice for once?
<Quest> RoyK,  i do need to configure rssh. ? ( i may want some users to rom around outside /home
<RoyK> ssh combined with rssh is very secure
<RoyK> Quest: then google it
<Quest> RoyK,  ofcourse. your advice is admired
<RoyK> it's not hard
<Pici> Quest: In my experience, it works horribly.  Extra configuration for clients needs to be done if you are behind a corporate NAT.
<Quest> ok
<Quest> nice
<Pici> rather, if the clients are behind a nat
<Pici> I've spent many hours trying to get things working at my employer, where I need to upload files to other companys' FTPS servers.
<Pici> With SFTP, things just work the first time.
<Quest> hm great
<Quest> hm
<RoyK> Pici: out of interest - what would be troublesome with NAT for FTPS? It's just TCP, after all
<geofft> I think the security story (in terms of vulnerabilities in the server) for OpenSSH is a lot better, btw
<Quest> Pici,  RoyK  I have read a bit tutorials about adduser and useradd. and chatted in freenode. i still didnt got an expert answer on how 1. to add a user as is added by default by ubuntu GUI with prober /home/userlongname  constructing the /home and the .files dot file like .cashe . bash etc      and 2. how to add a user with /home but no priviliges and NO console login . just files in his folder (for sftp)
<RoyK> Quest: using a gui isn't really a server question
<geofft> RoyK, Pici: isn't this the active/passive FTP nonsense?
<geofft> Quest: the "adduser" command should do #1
<RoyK> geofft: ah - right - you need a nat helper etc etc etc
<RoyK> ftp sucks rather badly at nat
<Pici> RoyK: From what I understand (and I don't feel like researching it in detail this moment) is that the additional port needed for extra communication between your client and the FTPS server is sent encrypted between yourself and the ftps server. So if you don't have the right range of ports open for that particular ftps server's configuration, it just won't work.
<Quest> RoyK,  ya. thats why i need console
<RoyK> Pici: makes sense
<geofft> Quest: http://www.debian-administration.org/articles/94
<Quest> geofft,  whats better. add user or useradd
<geofft> Quest: looks like the comments in that page suggest using rssh for #2
<RoyK> useradd -m blah
<geofft> Quest: on Ubuntu / Debian, "adduser" does everything. "useradd" creates the account but doesn't initialize it
<geofft> Quest: I basically always use "adduser"
<Pici> adduser should generally be used unless you want to do something special.
<RoyK> geofft: well, useradd -m creates the user, adds it to its default group, creates the homedir, copies /etc/skel there etc
<geofft> Oh, does -m do that? Good to know
<RoyK> man useradd ;)
<RoyK> -m == create homedir
<geofft> I just use adduser, or edit /etc/passwd by hand :-)
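The difference in one sketch (both need root; "alice" is an invented username). adduser takes its default shell from its own config, useradd from a different file:

```shell
# adduser (Debian/Ubuntu wrapper): creates the home dir, copies /etc/skel,
# and uses DSHELL from /etc/adduser.conf (/bin/bash by default)
adduser alice

# useradd (low-level): -m creates the home dir and copies /etc/skel,
# -s overrides the SHELL=/bin/sh default from /etc/default/useradd
useradd -m -s /bin/bash alice
```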
<Quest> RoyK,  useradd -m dont give bash as its defual shel
<RoyK> Quest: edit /etc/default/useradd
<Quest> geofft,  useradd" creates the account but doesn't initialize it ? whats that mean
<RoyK> Quest: it just means geofft hasn't rtfm
<geofft> I have in fact never rtfm for useradd since I haven't had a use for it :)
<RoyK> well, then don't blame it
<Quest> how users are added in all linux distros? ubuntu is different?
<geofft> I think all Debian-based distros do the same thing with adduser and useradd.
<geofft> Red Hat-based distros tend to just have adduser as an alias for useradd. I don't know if it's the same useradd.
<Quest> in user add and add user . when the password is set? and can i just give the login name to clients and ask them to setup their password themselft?
<Quest> at first login i mean
 * Quest thanks all in advance!
<Quest> oh yes. most important question. I need to auto block ssh acces to ips that repeatedly entered incorrect login passwords 5 or 7 times
<RoyK> on redhat, adduser is a symlink to useradd
<RoyK> adduser is a debian thing, it's a wrapper script
<Pici> hmm, indeed it is.  /me adds to notes
<Pici> (perl, for those playing at home)
<geofft> Quest: I've heard of fail2ban and DenyHosts used for this, but I've never used either
<RoyK> Quest: I use denyhosts
<Pici> I use fail2ban.
<RoyK> default config is a bit nazi - perhaps good to slow it down a bit
<Pici> I haven't thoroughly researched either though, but it works well with zero config for me.
<RoyK> the good thing about denyhosts, is that it works distributedly
<RoyK> so you can deny a host from all your servers if it tries n times on one machine
<RoyK> both work, though
<Quest> need easy for starters
<Quest> Pici,  oh so no need to config it?
<Pici> Quest: correct.
<Pici> The defaults are sane.
<Quest> it bans 5 incorrect logins for how much time by default?
<Quest> just a rought i dea Pici
<Quest> i hope it works for every ssh based app. which includes sftp
<geofft> Yes. sftp happens by opening an ssh connection and requesting an sftp-server instead of requesting a shell
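To put rough numbers on the fail2ban questions above: on Ubuntu of this era the ssh jail ships enabled, and the stock defaults are on the order of 6 failed attempts and a 600-second (10-minute) ban — check /etc/fail2ban/jail.conf for the exact shipped values. Overrides belong in jail.local, e.g.:

```
# /etc/fail2ban/jail.local -- overrides for the stock ssh jail (illustrative values)
[ssh]
enabled  = true
maxretry = 5        # ban after 5 failed logins
bantime  = 600      # ban length in seconds
```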
<jgcampbell300> is it ok to ask questions about ispconfig3 here ?
<RoyK> Quest: omg, can you just try to read the config before asking? or the manual? or google it?
<Quest> ya.. i should not get consious
<RoyK> good idea
<Quest> has anyone happen to run freenx or vnc4server?
<RoyK> weren't you in the process of learning how to secure your server?
<Quest> I wonder apps like  spotflux.com and hotspotshield  use what protocol. https i guess?
 * RoyK ignores Quest 
<Quest> RoyK,  yes. and a remote gui is an addon
<Quest> in daily work.
<maxb> Is there any way to safely flush the history stored in an LVM2 metadata area? I have a weird GRUB issue and I'd like to eliminate the possibility it's reading an old version of the LVM metadata
<Quest> and https is for filtering  services which lets browsing facebook.com  if its blocked in firewall for example
<Quest> right Pici  and geofft ?
 * RoyK wonders if Quest  is a bot
<geofft> Quest: no idea, never used those services
<RoyK> maxb: what's the issue?
<maxb> grub-probe is returning some warning like: grub-probe: warning: Couldn't find physical volume `pv1'. Some modules may be missing from core image..
<maxb> The system boots, but I'm interested in figuring out what grub is unhappy about
<RoyK> did you upgrade from an older version?
<maxb> Yes, but the warnings didn't start appearing precisely then
<Quest> thx
<maxb> I guess I could force pvcreate to reinitialize the on-disk metadata from a backup file
<RoyK> maxb: single drive?
<maxb> Two
<Quest> RoyK,  adduser -m makes the default shell as sh. it should be bash.  any solution Pici ?
<RoyK> Quest: do you even read what I write?
<RoyK> 23:52 < RoyK> Quest: edit /etc/default/useradd
<RoyK> moron
<geofft> Quest: I think you're wearing folks' patience thin. If you're unfamilar with the process of getting answers to these questions, you can ask
<geofft> Quest: But I'm pretty sure nobody has told you anything that's unavailable in package man pages, Wikipedia, HOWTOs, debian-administration.org, etc. etc. etc.
<geofft> Quest: And you'll get more thorough answers that way.
<Quest> RoyK,  iam talking about add use rnot useradd . /etc/default/useradd is for useradd. it says SHELL=/bin/sh
<RoyK> tried to change it?
<Quest> why should why when 1. i use adduser and not useradd 2. users added by GUI have bash. why?
<Quest> geofft,  those articales made me confuse. you guys solved it
<geofft> Quest: as mentioned earlier, adduser is a wrapper around useradd
<Quest> if i have not insisting questions though.
<geofft> Quest: The GUI tool probably has different defaults.
<Quest> geofft,  hm
<Quest> so I did needed to change the sh to bash
<Quest> ok
<Quest> geofft,  how to add the user , make /home but dont allow it to login /home folder. (only login to sftp by ssh)?
<Quest> is there a way
<RoyK> Quest: before you complain any more - try to add the file I've mentioned a couple of times - try to add a user
<Quest> i have
<RoyK> and it worked?
<geofft> Quest: Did you read the article I linked earlier and the comments?
<Quest> it made the account but i can login.
<geofft> Quest: Did you look at rssh, which RoyK has recommended a few times?
<RoyK> geofft: I don't think Quest reads anything we link to, he just asks here
<Quest> geofft,  RoyK  is rssh the only way. if i recall correct . there are other ways not to make the user login console?
<geofft> Quest: There are lots of ways. Is there a problem with rssh?
<RoyK> Quest: can you spell google?
<Quest> geofft,  i have mostly read it
<geofft> Have you tried it?
<Quest> geofft,  by ways. i mean natvie ways. not third party apps like rssh
<geofft> Install it on a machine, set it up, see if it does what you want.
<Quest> rssh is good. but iam asking for learning
<geofft> You can see what rssh does.
<geofft> Look at the configuration changes it makes, or look at its source. No better way to learn. :)
<Quest> hm
<geofft> if you haven't read the sshd_config manpage, read that too
<Quest> there is no native way?
<geofft> it's incredibly dense, but everything's in there
<Quest> i mean a way (if i recall) that was some arg at adduser/useradd
<Quest> hm
<geofft> Did you see the debian-administration.org article I linked to?
<RoyK> geofft: just drop it - Quest will just ask here - he doesn't read
<Quest> yes
<geofft> It talks about a way to do this that doesn't involve using rssh.
<Quest> ok. if you guys are annoyed. ill stop asking
<Quest> geofft,  whats that. i missed it
<geofft> Er, that's the entire article.
<Quest> k
<RoyK> any ops around?
<Pici> hm?
<Pici> Quest: they're annoyed.  We're here to help, and it seems you're ignoring advice that has already been said and asking again.
<Quest> advice admired and regarded. stoped asking
<Pici> thank you.
<Quest> did i said thank you to RoyK ?
<RoyK> Quest: maybe - but I'd still say "read the manual, google it" etc before asking too much in here
<Quest> my question was valid. as sftp is command execution. and so is same as ssh. if i need sftp but dont want to console login. even rssh might not help (just making a jail around /home is not enough)
<RoyK> Quest: as I said
<RoyK> earlier
<RoyK> we
<RoyK> are
<RoyK> using
<RoyK> rssh
<RoyK> for
<RoyK> 17000
<RoyK> users
<geofft> Quest: Why "might not help"? Have you tried it?
<Pici> "rssh is a restricted shell for use with OpenSSH, allowing only scp and/or sftp. It now also includes support for rdist, rsync, and cvs. For example, if you have a server which you only want to allow users to copy files off of via scp, without providing shell access, you can use rssh to do that."
<RoyK> Pici: don't help him - he may even find the command "man" one day
<Pici> RoyK: if I don't help then I'm going to say things that I might regret.
<geofft> http://catb.org/~esr/faqs/smart-questions.html
<geofft> specifically the "Before you ask" section.
<Quest> Pici,  great!!!
<Quest> RoyK,  just saying rssh does it. is less better than what Pici  explain. now iam confident.
<Quest> i was nervous
<Pici> ...
<Quest> but thanks alot RoyK
<Pici> Quest: fyi, that was a direct quote from rssh's website. which you could have looked for yourself. It took me two seconds to find it.
<geofft> Quest: All of us here learned what we know by reading documentation and trying things.
<Quest> looked. not quit efficeitly looked as a start. contrary to you
<Quest>  the ssh-terminal can be locked out, by adding <sftplib> to /etc/shells, and setting that shell for the user logon
<geofft> Quest: It takes time. I've been sysadminning ssh for years. But I now know it very well.
<geofft> Quest: You're going to get better answers by reading things slowly and being patient with web searches than expecting other people to do things for you.
<Quest> k
<geofft> Guessing at the purpose behind your questions, you're trying to do something very complicated.
<geofft> (thinking about the file transfer question, and the FreeNX question, and ...)
<Quest> ya
<geofft> That's fine! You can do complicated things with Linux.
<geofft> It will just take you more than a few hours to learn how to do.
<Quest> hm
<geofft> And especially for things like secure file transfer and remote logins, you want to make sure you understand it.
<Quest> tring to assemble a huge server in two days
<geofft> Because otherwise you risk setting up the security wrong.
<Pici> geofft: thanks for explaining this all :)
<Quest> i have already done network reporting. now with these things
<geofft> Again, you totally _can_ understand it. It's just not the sort of thing you want to try to rush to understand
<Quest> 2 day deadline is  a rush
<geofft> I would not feel confident setting up a remote file access + networked login server in two days, even knowing what I do about these things.
<Quest> hm
<geofft> I would feel confident in taking, oh, maybe a week to set up a dev server.
<Quest> am i correct about  the ssh-terminal can be locked out, by adding <sftplib> to /etc/shells, and setting that shell for the user logon
<geofft> I don't know.
<Quest> k
<geofft> I mean, I could think about it, and figure it out, but I'm not the one doing this project. :-)
<geofft> And it will take me more than five seconds to figure that out.
<Quest> hm
<Pici> but after that week you'd know what to do and probably know how to fix many of the problems that might occur after it is live.
<geofft> Depending on the constraints of your project, you might also want to hire an experienced sysadmin / consulting firm to do this
<geofft> and to maintain its security long term
<geofft> but anyone you could hire that's worth hiring will refuse to do it in a mere two days.
<Quest> Pici,  ya
<Quest> just to confirm, geofft  pici that its a good practice . to not allow console login for a user
<Quest>     change the user's shell to /bin/false (in /etc/passwd)
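Setting a non-interactive shell is indeed the standard way; a sketch (needs root, "alice" is hypothetical). /usr/sbin/nologin prints a polite refusal where /bin/false just exits:

```shell
# Block interactive logins:
usermod -s /usr/sbin/nologin alice
# Caveat: with the default external sftp-server, sftp is launched via the user's
# shell, so this blocks sftp too. Pair it with "ForceCommand internal-sftp" in a
# Match block, which runs SFTP inside sshd and ignores the login shell.
```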
#ubuntu-server 2013-04-07
<halvors> Hi! Anyone know how to setup and enable nagiosgrapher? I've installed it and enabled "process_performance_data=1" and "service_perfdata_file_processing_command=process-service-perfdata" in the nagios.conf :) But no hosts is showing up :(
<ruben231> hi gusy how do i find huge file size on my ubunt server
<ruben231> i ahve storage of 98 pecent but cant see whihc file are having this huge file size any idea..?
<RoyK> find  / -size +something
<RoyK> man find
<RoyK> best place to start is /var/log
<RoyK> or start with
<RoyK> du -sch /var /home
<ruben231> 82G     /var
<ruben231> 58G     /home
<ruben231> 140G    total
<RoyK> so how large is the filesystem?
<RoyK> I need  to sleep - talk tomorrow
<ruben231> http://pastebin.com/XKvuhyaE
<ruben231> please
<ruben231> wait
<bobbyz> ruben231: You could always 'find / -type f -printf "%s %p\n" | sort -n'
<bobbyz> ruben231: keep in mind sort uses space in /tmp though
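Putting RoyK's and bobbyz's suggestions together — a sketch for drilling down from the biggest directories to the biggest files (the /var paths are just examples):

```shell
# Biggest directories first, staying on one filesystem (-x):
du -x --max-depth=1 /var | sort -rn | head

# Ten biggest files under a suspect directory (size in bytes, then path):
find /var -type f -printf '%s %p\n' | sort -rn | head
```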
<vedic> Which virtualization software do you recommend? I have an 8 core Intel server with 8gb RAM and 300GB HDD. Need to create a virtual machine which can run 24x7 without any need for maintainace or reboot.
<vedic> Locally I have been using Virtualbox. But not sure if that is suitable for long running as a server
<geofft> I wouldn't use Virtualbox on a server; that's not its intended use case, really.
<geofft> kvm (possibly via virt-manager) and Xen are both quite fine
<vedic> geofft: I was thinking XEN but need second opinion. Is it easy to setup?
<geofft> I haven't used it on recent Ubuntu versions, but in my experience, yes
<vedic> geofft: Which guest type is preferred ? PV or HVM ? My needs include mathematical application and a lot of number crunching. That may include clock timing requirements for random numbers etc.
<snowdrop> Greetings from Sweden all. Just installed ubuntu-server on a virtual server, and all works, but I haven't understood one basic thing: Which user is "logged" when the virtual server reboots? For example, which users crontab will be in effect after a reboot?
<geofft> vedic: PV. Xen's HVM support is primarily for OSes that can't be run as PV
<yousaf> how I do list the current list of php processes?
<loostro> hi
<loostro> i'm haveing a problem setting up a apache2 server on ubuntu, is this the right channel to ask?
<loostro> (and i suspect this is not apache2 issue, but something wrong with my router/connection settings/port forwardning)
<duckstep> i just created a new raid5 volume with mdadm, got it configured and rebooted to confirm all settings were correct
<duckstep> volume wouldn't mount properly
<duckstep> i try to mount it manually and find the device in /dev has changed from /dev/md0 to /dev/md127, but all of my data is still there
<duckstep> any thoughts on what i might have done wrong?
<duckstep> here is what i have in /etc/fstab
<duckstep> UUID=f70b9a0f-cf0aa0a2-9e5cf3fd-c44046b8 /media/storage    ext4   defaults 0 0
<Quest> how can I restrict users to only use one command ("passwd") to just be able to change their password. and cannot do anything else in console? how is it possible?
<hallyn> geofft: yeah, that ppa is years old, and yeah i saw tpm related patches flying by the list recently...  sorry i don't kno wof anyone working on tpm+qemu right now.
<Quest> Pici,  geofft  you there?
<Quest> how can I restrict users to only use one command ("passwd") to just be able to change their password. and cannot do anything else in console? its for just giving them sftp acces. how is it possible? cant do it with rssh
<Quest> may be RoyK  would  know
<RoyK> if you use rssh, users can't login
<RoyK> how many users?
<RoyK> and why so paranoid?
<patdk-lap> only sftp? without scp?
 * patdk-lap goes paranoid though
<patdk-lap> users can change their password in webmail
<patdk-lap> and on the sftp box, I unset all sticky user/group settings from all programs
<yousaf> To start my application I need to run "start socialapi", but I can't find "start" anywhere
<Quest> RoyK, 50-100
<Quest> RoyK,  chroot might be better option?
<Quest>  what does %h means in http://www.fpaste.org/DQdA/
<Quest> and should the  ForceCommand internal-sftp    be   ForceCommand /usr/lib/openssh/sftp-server
<RoyK> Quest: chroot means you'll need to link in libs an other binaries
<RoyK> Quest: if the system is secure, like most are, allowing logins shouldn't be a problem
<RoyK> Quest: at work, we use a homegrown webinterface for users to change their passwords across several systems. I guess there should be some around for just changing unix passwords
<RoyK> Quest: as usual - please google first
<Quest> how do you give web interfaces for changin password?
<Quest> RoyK, ^
<Quest> RoyK,  i just dont want even anyone the use ifconfig eth0
<RoyK> why not?
<RoyK> it'll just show the ip address and mac address and so on
<Quest> no one should do anything that their dont need to
<Quest> first rule
<RoyK> no, the first rule is "noone should be able to administer the system"
<Quest> no one should do anything that they* dont need to
<RoyK> so what if they can run ifconfig?
<Quest> RoyK,  thats not a rule. thats implicit
<Quest> RoyK,  nothing... but why give info for a hacker that . look heres my ip config for all ehos.  ipconfig is just one example
<RoyK> oh
<RoyK> ipconfig doesn't exist, btw
<wickedpuppy> ...
<RoyK> the problem with newbies, is that they are afraid of users
<Quest> sory ifconfig
<Quest> thats a good problem then
<Quest> how do you give web interfaces for changin password?
<RoyK> Quest: have you even tried googling that?
<Quest> ok
 * RoyK ignores Quest 
<Quest>  what does %h means in http://www.fpaste.org/DQdA/
<Quest> and should the  ForceCommand internal-sftp    be   ForceCommand /usr/lib/openssh/sftp-server
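For the record, since neither question gets answered below: per sshd_config(5), %h is a token that expands to the home directory of the authenticated user, and `ForceCommand internal-sftp` should stay as-is — internal-sftp runs the SFTP server inside sshd itself, which is what lets ChrootDirectory work without copying /usr/lib/openssh/sftp-server and its libraries into the chroot. A sketch (the "sftponly" group name is an assumption):

```
# /etc/ssh/sshd_config
Match Group sftponly
    ChrootDirectory %h           # %h = the user's home directory
    ForceCommand internal-sftp   # in-process SFTP; needs no binaries inside the chroot
```

Note that every component of the ChrootDirectory path must be root-owned and not writable by any other user or group, or sshd will refuse the login.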
<Quest> RoyK,  dont answer / chat with me if you do ignore on me one more time
<Quest> and use /ignore        not /me ignores Quest
<Quest> for good
<Quest> google that for its use
<RoyK> Quest: I've helped you with a lot of things, but I ask you, kindly, again, to please bloody google things before spamming this channel
<Quest> for that i am really thank full
<Quest> really appriciate it
<Quest> :)
<Quest> but saying ignoring is not friendly
<Quest> its like you are giving a peny to a begger and spiting on his hand as well. i dont need such penies
<RoyK> Quest: well, ignoring my repetitive requests for you to try to google things before you ask here, and then, when you get an answer, repeat the question, is not very friendly either. it makes people like me who likes to help newbies want to ignore them all the way
<RoyK> so please, jfgfi
<Quest> why theres a need to adduser to a group of its own name? why not add most users to one group only?
<shauno> that's an option, just not the default
<Quest> shauno,  adduser  userName      adds the userName to the groupd called userName by default (and makes /home/userName even -m is not supplied)
<maxb> Quest: The practice of making a user's primary group one dedicated to that user is indeed a bit obscure
<maxb> It has to do with the concept of 'umask'
<maxb> umask determines what the access privileges assigned to newly created files are
<Quest> maxb,  so adding users to their own group name is neccesry?
<maxb> It's not necessary, but it is the de facto standard way to implement the ability to share write access to files using groups
<Quest> maxb,  ok. whats the command to add a user and while adding, add the user to its own group (named as the user name) and to 2 more groups?
<maxb> The idea goes as follows: If you set up users with their own group, then you can set the default umask to one which allows the group write access bit for new files to be on, without actually giving access to other people
<Quest> hm
<maxb> Then, when you want a directory tree where write access *is* shared between a group of users, you can chgrp that tree and set the directory setgid bit so that new files are also group-owned by that group
<maxb> It is a fairly obscure use case
<Quest> i see
<Quest> ok
<Quest>  whats the command to add a user and while adding, add the user to its own group (named as the user name) and to 2 more groups?
<maxb> But it is the only concrete reason I've ever come across for the pattern of defaulting to creating these 'usergroups' as they are typically known
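maxb's shared-tree recipe as a sketch (the path and the "devs" group are made up; chgrp assumes you are a member of the target group):

```shell
# Create a shared tree where new files inherit the "devs" group:
mkdir -p /srv/shared
chgrp devs /srv/shared
chmod 2775 /srv/shared   # the leading 2 is the setgid bit on the directory
# With a umask of 002 (which usergroups make safe), files created here are
# group-writable and group-owned by "devs" rather than the creator's own group.
```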
<maxb> Are you using 'adduser' the Debian/Ubuntu friendly helper, or 'useradd' the lower level tool?
<Quest> yes
<Quest> adduser
<maxb> That was an either/or question, yes is not a valid answer :-)
<Quest> i stated adduser
<maxb> It looks like you need to create the user and then add the additional group memberships in a second command
<Quest> ok
<Quest> whats the commands?
<maxb> 1) adduser [options] username
<maxb> 2) adduser username groupname
<maxb> adduser does different things depending on whether you give it one or two names
<maxb> Which is a little obscure at first
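maxb's two forms spelled out (needs root; the names are invented):

```shell
adduser bob                # one name: create user "bob", group "bob", and /home/bob
adduser bob developers     # two names: add existing user "bob" to existing group "developers"
adduser bob backup
id bob                     # verify: primary uid/gid plus the supplementary groups
```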
<Quest> my /etc/groups stats testing:x:1005: but groups testing says groups: testing: No such user
<Quest> whats wrong
<maxb> Huh, weird. Somehow I've managed to go a decade plus of using Linux without coming across the groups command :-)
<maxb> But 'man groups' tells me that groups takes a username, and you appear to be misunderstanding it as taking a groupname
<Quest> k
<Quest> i just $ sudo service ssh restart  . it did restarted and iam on that shell (i didnt disconnected) but now i cannot ssh to that computer by any account. it says connection refused. whats wrong?
<Quest> I just installed fail2ban with no config editing. i restarted sshd with sudo service ssh restart. now i cant login by ssh by any ip. nmap says port 22 is closed. what can by wrong?
<geofft> hallyn: OK, thanks. (Was looking for something easy to learn with, since my laptop doesn't have a TPM)
<geofft> hallyn: may I ITP libtpms in Debian based on your packaging? (I'll also check with the Debian qemu team)
<maxb> Quest: Sounds like sshd failed to start to me.
<Quest> hm
<Quest> maxb,  but why ssh 22 is closed and so are other ports?
<maxb> closed just means nothing has it open...
<Quest> ok
<Quest> maxb, this config http://pastebin.ca/2352079 in the /etc/ssh/sshd_config  is not letting the openssh server to startup. whats wrong in it?
<Quest> anyone?
<Quest> this config http://pastebin.ca/2352079 in the /etc/ssh/sshd_config is not letting the openssh server start up. whats wrong in it? i commented the lines out to make it work. now the ssh server is running. the only logs i get are ssh status stop/waiting and Invalid user plant from 116.212.190.6
<Quest> more elaboration : this config http://pastebin.ca/2352079 in the /etc/ssh/sshd_config is not letting the openssh server start up. whats wrong in it? i commented the lines out to make it work. now the ssh server is running. the only logs i get are ssh status stop/waiting and Invalid user plant from 116.212.190.6 . if i follow this http://www.serverubuntu.it/SFTP-chroot it says this http://pastebin.ca/2352109
<Quest> Pici, ?
<maxb> My guess would be that the way sshd is being managed by upstart is unhelpfully causing the interesting error messages to be lost.
<maxb> Therefore I would try starting a second sshd running on an alternate port manually in a terminal, so I could observe whatever it's complaining about
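maxb's suggestion, as a hedged sketch (both commands need root; 2222 is an arbitrary spare port):

```shell
# sanity-check the config file first; this prints the offending line if broken
sudo /usr/sbin/sshd -t
# run a second sshd in the foreground, in debug mode, on an alternate port,
# so any startup error is printed straight to the terminal
sudo /usr/sbin/sshd -d -p 2222
```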
<Quest> maxb,  now iam on local host
<Quest> same problem
<Shogoot> Hi good people. I just made my ubuntu server have a static ip. I dont know if it is a consequence of this that i cannot ping any ip that is not 8.8.8.8 or 8.8.4.4. Anyone that can help me find out whats wrong?
<Shogoot> Those two ips are the two that i added as dns-nameservers in my /etc/network/interfaces
<patdk-lap> no idea
<vedic> I have created upstart script to start/stop a python script (its a tcp/ip server). There are two servers that I need to start (order is not a matter). When I start the first server using upstart script, it is starting well and works fine. But while first is running, if I start second server which is using prefork to spawn about 10 processes, it is not able to start.
<Shogoot> Hi good people. I just made my ubuntu server have a static ip. I dont know if it is a consequence of this that i cannot ping any ip that is not 8.8.8.8 or 8.8.4.4. These two ips are the only ones i added as dns-nameservers in my /etc/network/interfaces. Can anyone help me find out why i cannot ping other ips?
<Quest>  i just deleted /var/log/auth.log and i dont see it recreated. i recreated it with sudo, a blank file, but even after a reboot it is not being populated. still blank
<guntbert> Shogoot: dns-nameservers have nothing to do with the ability to ping a host by IP-address, thats more a problem of routing-tables
<Shogoot> can you help me find out what i need to do?
<Shogoot> i got a netgear WNDR3700 router
<patdk-lap> did you define a gateway?
<Shogoot> patdk-lap, this is my interface file http://pastebin.com/ZKeq0n6j
<Shogoot> quick answer: yes
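For reference, a minimal static ifupdown stanza with a gateway looks roughly like this; the addresses are placeholders, not Shogoot's actual network:

```
# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
```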
<Shogoot> im looking into whether it is my router...
<Shogoot> it does have a static routes config thing....
<Shogoot> image: http://imageshack.us/photo/my-images/708/staticipk.jpg/
<maxb> You really shouldn't be defining static routes on a home wifi router unless you're REALLY sure you need to do so
<maxb> In this case it looks a lot like you've told the router it needs to route to your network via itself
<Shogoot> i want to access it from outside
<maxb> Which could well be breaking stuff
<Shogoot> hmmm
<maxb> Static routes have very little to do with external access
<Shogoot> what i thought to... but im going nowwhere with this
<maxb> Delete all static routes on the router and see if your routing problem is fixed
<Shogoot> i had no static routes to begin with. so thats not the issue
<shauno> you had no static routes when you could ping the outside world.  you now have them and can't. so it makes sense to backtrack to a working config before you go forward
<Shogoot> yupp
<Shogoot> and i found the problem
<Shogoot> in interfaces i used 3 dns-nameservers, and apparently only 2 are allowed...
<Shogoot> i got rid of the last and now i got it up and running :)
<Shogoot> maxb, thanks for your time
<Quest> i have seen the rssh docs, and used chroot with sftp and openssh server. what i want to accomplish is: give users sftp access, jail them so they cant go outside their home, but let them log in to a console and only use those commands that i have allowed. how can it be done?
<patdk-lap> heh? you can't
<patdk-lap> chroot breaks all of that stuff
<Quest> I encrypted /home while installing ubuntu. how come i can browse other users homes?
<RoyK> if those are encrypted, you can't
<IdleOne> probably because you are switching to that user account
<Quest> no iam not
<IdleOne> More detail will be needed to diagnose
<Quest> there is .ecryptfs
<Quest>  in /home
<Quest> but i can cd to others /home
<IdleOne> as root or as a user
<IdleOne> How many times do we need to tell you not to cross post your questions in multiple Ubuntu channels, it is rude, and it divides the support.
<shauno> Do you actually see anything in their homes?
<Quest> IdleOne,  user
<Quest> I encrypted /home while installing ubuntu. how come i can browse other users homes? so if user1 was set up at install time and chose to encrypt the /home folder, who can go into other user accounts and who cannot?
<ashley_w> i used vmbuilder and now have a qemu qcow image. how can i boot this using libvirt?
<shauno> who can chdir to that folder is permissions, not encryption.  as you were trying to put users all in the same group earlier, rather than having a group created per-user, you're probably not seeing the default behaviour there anymore
<shauno> what you find inside those homedirs should be the result of encryption/lack of
<Quest> complete info . I encrypted /home while installing ubuntu. how come i can browse other users homes? so if user1 was set up at install time and chose to encrypt the /home folder, who can go into other user accounts and who cannot?
<Fieldy> it's based on permissions of that users home directory. 700  means only they can see it. 750 will let users in the same group in to read (but not write), 770 also write. 755 those in the same group, and everyone else, read. 777, everyone read/write (bad idea)
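Fieldy's table can be demonstrated without root in a scratch directory; a small sketch:

```shell
# create three directories and apply the permission modes from the table
cd "$(mktemp -d)"
mkdir home_700 home_750 home_755
chmod 700 home_700   # owner only
chmod 750 home_750   # owner rwx, same group read/enter, others nothing
chmod 755 home_755   # owner rwx, everyone else read/enter
stat -c '%a %n' home_700 home_750 home_755
```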
<Quest>  if a thief gets the HD, boots from a live cd, replaces the /etc/shadow file with his own, boots up, logs in as a sudoer, and changes all users passwords, can he get into the encrypted /homes of users?
<Fieldy> Quest: no, because /home (i'm assuming) is encrypted as a partition. they would need the password and/or key of the encrypted partition
<geofft> I'm pretty sure the context is that directories in /home are encrypted with eCryptFS.
<Fieldy> i'm not sure what that is, all i know of is luks
<Fieldy> and i'm assuming /home was encrypted in such a way
<patdk-lap> encryptfs is far away from luks
<Fieldy> okay, well whatever it is, if it's /home that is its own partition, and encrypted, the concept still applies
<geofft> Fieldy: Quest has said before that ecryptfs is what's being used.
<geofft> Fieldy: It's not partition-level encryption.
<Fieldy> right. but i don't know what that is. so i'm reverting to conceptual stuff
<patdk-lap> fieldy, the concept does not
<Fieldy> okay, i am ill-informed on this subject then, sorry
<patdk-lap> if you don't know what it is, you don't know the concept, so please don't confuse people
<Fieldy> short answer from me: with luks, an attacker won't get the user data as described. with this other thing, I have no idea.
<geofft> I'm really worried about what Quest is doing, since they're clearly doing something security-sensitive
<geofft> and are asking random folks on an IRC channel for advice
<geofft> and that's a great way to get yourself totally misconfigured by mistake and screwed over.
<geofft> If I say "no, you're fine, there's no security risk", why should you possibly trust me?
<geofft> Even if I'm competent, I may have misunderstood you.
<geofft> Or you may have failed to describe something else about the system that's relevant.
<geofft> So I strongly, strongly advise folks here to point Quest at thorough documentation instead of guessing at particular questions.
<Quest> hm
<geofft> Or at consulting resources. I hear you can pay Canonical to run this for you.
<geofft> Here's some ecryptfs documentation:
<geofft> http://bazaar.launchpad.net/~ecryptfs/ecryptfs/trunk/files/head:/doc/
<geofft> I think this all gets installed somewhere in /usr/share/doc
<geofft> I'm usually happy to answer questions, but the amount that these questions are security-sensitive
<geofft> and the way that they're being asked
<geofft> worries me a _lot_
<geofft> If you're doing this for fun, for learning, for personal use, great. It gets hacked, whatever.
<geofft> but it sounds like you have a deadline, which means someone is paying you to get this right
<geofft> so you should be appropriately conscientious about getting this right.
<Quest> Fieldy,  geofft  user3@server1:/home$ sudo ls user1/
<Quest> Desktop  Documents  Downloads  Music  nxclient_3.5.0-7_amd64.deb  Pictures  Public  Templates  Videos  wget-log
 * Fieldy is confused
<Quest> user1 home is supposed to be encrypted
<Fieldy> you're running the command as root. you will be able to see any file anywhere
<Fieldy> i don't really understand the encryption you're using though, i only understand luks. so i can't say if an attacker would be able to see that information or not
<patdk-lap> heh?
<patdk-lap> this is how *it works*
<patdk-lap> fieldy, please read the docs before commenting about
<geofft> Fieldy: This isn't LUKS, please stop talking about LUKS.
<patdk-lap> but I don't see the private folder
<patdk-lap> so that use likely was not created using encrypted home
<patdk-lap> user
<geofft> Quest: Yes, the encrypted directory is mounted by eCryptfs because that user has unlocked and mounted it.
<geofft> patdk-lap: I _think_ this is what you get if you encrypt your whole homedir and not just Private
<geofft> but I might be wrong there
<geofft> Quest: You should figure out how mounts work and what PAM is and how pam_ecryptfs fits in here.
<patdk-lap> maybe, I don't use encryptfs personally, just messed with it some
<geofft> Quest: I can't give you a proper explanation of all that in an IRC channel. I've given 2-hour lectures on that stuff before.
<patdk-lap> but I do what fieldy doesn, luks on all my drives
<Quest> geofft,  so if one user logs in, he has decrypted all the users /homes?
<geofft> Quest: That's not what I said.
<geofft> Quest: You should figure out how mounts work and what PAM is and how pam_ecryptfs fits in here.
<patdk-lap> if a user logs in, that users home is decrypted for all to see, assuming permissions
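One way to check patdk-lap's point on a given box is to see whether any eCryptfs mount is currently active (i.e. some home is decrypted). A minimal sketch, no root needed:

```shell
# list mounted ecryptfs filesystems, if any
mount -t ecryptfs
# or check the kernel's mount table directly
grep ecryptfs /proc/mounts || echo "no ecryptfs mounts active"
```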
<Quest> you mean, if the system is running, its mounted, of course, so decrypted
<patdk-lap> we are talking to a wall
<Quest> patdk-lap,  in my case, the user1 was not logged in but user 3 saw his home
<patdk-lap> default permissions set on home folders don't allow that, encrypted or not
<Quest> convinced
<Quest> drwx------ 19 user1 user1  4096 Apr  7 22:15 user1
<Quest> the install was by user1 who chose to encrypt home
<Quest> user3@server1:/home$ sudo ls user1/
<Quest> Desktop  Documents  Downloads  Music  nxclient_3.5.0-7_amd64.deb  Pictures  Public  Templates  Videos  wget-log
<Quest> patdk-lap,  geofft  any comments ^
<patdk-lap> comment about what?
<patdk-lap> you just ran ls as root, what did you expect?
<Quest> i thought even root cant go in encrypted homes
<patdk-lap> if they aren't mounted
<Quest> hm
<Quest> i see. so if they are mounted, root can go in those?
<patdk-lap> anyone can
<patdk-lap> as I said above
<Quest> now i understand what geofft  said
<Quest> patdk-lap,  thanks
<patdk-lap> for a server, it's generally pointless, as I see it for encrypted homes
<patdk-lap> unless you want to use it for some semi-private storage space, that does not need to be used by normal server operations
<patdk-lap> cause anything in it won't be accessible to normal server stuff, unless the user is logged in
<patdk-lap> or you auto-mount it
<Quest> Fieldy,  you said no. well if shadow is replaced, so are passwords. so they have the password and can boot the system.
<Quest> if a thief gets the HD, boots from a live cd, replaces the /etc/shadow file with his own, boots up, logs in as a sudoer, and changes all users passwords, can he get into the encrypted /homes of users?
<Quest> patdk-lap, ok
<geofft> Quest: Why not try it?
<geofft> Backup /etc/shadow, make a new one, see what happens.
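A hedged sketch of that experiment -- only in a throwaway VM, never on a production box; the user1 path follows the layout quoted earlier in the channel:

```shell
# keep a restorable copy before touching anything
sudo cp -a /etc/shadow /etc/shadow.bak
# ... replace /etc/shadow (or reset a password), reboot, log in ...
# the wrapped home is visible, but without the user's passphrase the
# cleartext should not be; recovery explicitly asks for that passphrase:
sudo ecryptfs-recover-private /home/.ecryptfs/user1/.Private
# undo afterwards
sudo mv /etc/shadow.bak /etc/shadow
```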
<Quest> geofft,  are you saying it because you are unsure?
<geofft> No, I know the answer. I just want you to figure it out. :)
<geofft> I know the answer because I know _how_ /etc/shadow interacts with ecryptfs.
<Quest> please tell it
<geofft> And so I can figure out the answer from that base knowledge.
<geofft> No, dude, you're not paying me.
<Quest> ok. tell me yes/no. ill find out how
<geofft> I'm here to help you figure out how to answer questions on your own.
<geofft> If you're going to be demanding of volunteers, I'm not helping you.
<geofft> This is a development channel, not a paid contractor. If you want a paid contractor find one.
<geofft> Doing experiments like this is exactly how I learned the answer to every question you have asked so far in the past two days.
<geofft> I am happy to help you learn, but I'm not doing your homework for you.
<Quest> ..
<geofft> And honestly, if I told you "no", why should you possibly believe me?
<geofft> Are you willing to risk your job on the chance that some guy you've never met before understands ecryptfs?
<Quest> i trust people here. thats why
<Quest> like you, Pici  and RoyK
<geofft> I don't even trust _myself_ to answer that question.
<Quest> k
<geofft> I have my guess, but if I had to do so on a real production system I'd do the experiment before guaranteeing the answer.
<geofft> So why don't you do that experiment and cut the middleman?
<Quest> i did and couldnt get in. maybe i did it wrong
<Quest> thats why asking
<geofft> Why did it fail? Did you get any error message?
<Quest> and i couldnt find on google. on how .
<Quest> nop.
<Quest> no error message
<Quest> geofft,  silenced?
<geofft> Dude, I'm doing three other things behind a bad internet connection
<geofft> What happens if you try to ecryptfs-mount the homedir?
<Quest> iam away from that system now. and i attached the HD again and re-replaced shadow
<Quest> geofft,  can you tell me what might be wrong?
<^Mike> How can I list which repositories have a given package available?
<geofft> ^Mike: apt-cache policy $package, or packages.ubuntu.com/$package
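For example ("bash" is just an illustrative package name):

```shell
# lists each candidate version and the repository/pocket it comes from,
# along with the pin priorities apt will use to choose between them
apt-cache policy bash
```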
<uvirtbot> geofft: Error: "Mike:" is not a valid command.
<geofft> ...
<geofft> I'm just going to call you "Carrot Mike" from now on.
<^Mike> cool, thanks
#ubuntu-server 2014-03-31
<teward> not really.  needs to work with IMAP clients and SMTP (I know postfix does SMTP), and needs SSL/TLS security if possible
<teward> i haven't had the patience to sift through the (virtually nonexistent) postfix documentation on getting this all set up though
<Joe_knock> teward if you're strapped for time and have some cash, somebody on freelancer/elance could do it for you.
<teward> Joe_knock, maybe it's just my being in a rush to get other things done that's prevented me from actually digging into it
<teward> and it's safer to assume I don't have cash :P
<Joe_knock> teward, I can understand where you're coming from. Is the mail server part of some web project?
<TJ-> teward: postfix with virtual aliases; deliver all email to a domain to a single user account on the server, and then have something like dovecot (IMAP4) for clients to access it
<teward> TJ-, it's the dovecot part i get lost at, I think
<teward> brb, firewall's being stupid again
<teward> for the 10th time today
<teward> back
<teward> Joe_knock, yeah, its going to be the web server for my entire site, I'm tired of using Google Apps as the backend, would rather move all the services in-house.
<teward> s/backend/email backend/
<teward> and email's the last thing to move
<teward> TJ-, it's getting dovecot set up where I can't find step-by-step documentation...
<Joe_knock> teward, maybe you should try my method of having the webserver do all outgoing automated mail and use an external mail hosting server for incoming and response mails
<TJ-> teward: Dovecot's config will depend on how you organise the mailboxes, so getting postfix configured in a way that supports the ways dovecot can access mailboxes is the issue, I think
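A rough sketch of TJ-'s suggestion; example.com and mailuser are placeholders, and the dovecot line assumes Maildir-style delivery:

```
# /etc/postfix/main.cf (fragment)
virtual_alias_domains = example.com
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual -- catch-all: every address at the domain
# is delivered to one local account
@example.com    mailuser

# rebuild the map and reload postfix:
#   postmap /etc/postfix/virtual && postfix reload

# /etc/dovecot/conf.d/10-mail.conf -- point IMAP at the same mailboxes
mail_location = maildir:~/Maildir
```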
<teward> Joe_knock, I could probably do that, I still want to move all the mail for the site off of Google Apps, though, and I do need the response emails.  The other problem is there's a ticketing system on the servers that needs to pull from an IMAP address for emailed-in tickets so meh
<Joe_knock> teward google apps isn't the only external mail server option. What does your app/site do?
<teward> Joe_knock, the site that's relevant is the ticket system, it's for a site that provides free IRC bouncers
<Joe_knock> interesting.
<teward> the ticketing system is the one which needs to pull from IMAP for emailed-in tickets
<teward> but meh
<teward> Joe_knock, I do have other domains I can experiment setting up, I'm trying to move from my gmail.com addresses to my own domain emails anyways so meh :P
<teward> Joe_knock, ultimately, this is an experiment, first off, before I move the data to a postfix/dovecot setup.  But I still need a guide for setting it up... :/
<sheptard> google?
<teward> sheptard, i'd use it if my firewall weren't causing hell...
<teward> I guess it's time to replace my network equipment... again...
<ahmadgbg> Hi, i have set up my web server and everything works as it should. The only problem i have is postfix. I cant get it to send mail outside the local network. I am a newbie so its possible that i have made some config error.
<Joe_knock> Hey RoyK
<Guegs> Is it considered a good idea to update to 14.04 from 12.04 when it is released?
<zanzacar> I have an ubuntu-server 13.10 and wanted to install a gui. I was looking at this site https://help.ubuntu.com/community/ServerGUI for tips and it does mention installing ubuntu-desktop as an option.
<zanzacar> I thought that I would just check to see if that is still up to date and everything, I thought maybe it might have changed since the site was modified in 2012.
<zanzacar> I would almost rather uninstall the server and install just ubuntu-desktop but I figured I could just install the desktop and it would be pretty much the same correct?
<zanzacar> eh nm I am still going to have the same problems I would have had with my other desktop. I need another desk/computer
<lordievader> Good morning.
<pmatulis> morning
<zul> coreycb: ceilometer rc1 is out lemme know when you are ready for me to upload it to the archive
<Phibs> can anyone tell me why this fails to add the default gw to the interfaces config file: http://p.bsd-unix.net/puaf10jim
<jamespage> bug 1293177
<uvirtbot> Launchpad bug 1293177 in samba "'net usershare' returned error 255: net usershare add: cannot convert name "Everyone" to a SID. Access denied." [Medium,Confirmed] https://launchpad.net/bugs/1293177
<zul> jamespage:  have you seen this? http://pastebin.ubuntu.com/7185546/ (trusty-trunk packages)
<jamespage> nope
<zul> jamespage:  nm...im getting hit by that utf8-bug
<jamespage> zul, yeah - that sucks
<zul> jamespage:  horizon rc1 is out
<patdk-wk> I need to diagnose a trusty upgrade issue :(
<patdk-wk> upgraded a machine yesterday, and it hung before grub finished loading :(
<cfhowlett> patdk-wk, trusty support is in #ubuntu+1
<patdk-wk> who was asking for support?
<cfhowlett> patdk-wk, pretty sure diagnosing an upgrade issue = support
<cfhowlett> !trusty
<ubottu> Ubuntu 14.04 (Trusty Tahr) will be the 20th release of Ubuntu.  See the announcement at http://www.markshuttleworth.com/archives/1295 for more info. support in #ubuntu+1
<patdk-wk> heh?
<patdk-wk> the first step to making a support issue, is asking for help
<patdk-wk> I don't remember doing that
<Phibs>  can anyone tell me why this fails to add the default gw to the interfaces config file: http://p.bsd-unix.net/puaf10jim
<jamespage> coreycb, zul: pls can you check for fixed ubuntu bugs when prepareing rc1's
<patdk-wk> I was making a comment about my *current* activities
<jamespage> patdk-wk, that sucks a bit
<jamespage> patdk-wk, tbh that's foundations territory
<cfhowlett> patdk-wk, until release, installing, configuring, troubleshooting and spotchecking of trusty belongs in the #ubuntu+1 channel.  thank you.
<Phibs> does anyone in here use preseed with a static net config
<patdk-wk> cfhowlett, but I didn't do any of those things, so thanks
<jamespage> zul: https://code.launchpad.net/~james-page/horizon/icehouse-rc1/+merge/213509
<zul> jamespage:  +1
<Phibs> nobody preseeds eh
<patdk-wk> phibs, I normally don't build from scratch like that
<patdk-wk> but build the machine, snapshot it, then clone
<jamespage> zul, tested and uploaded
<zul> jamespage:  awesome
<jamespage> pending approval from release team - I did not reference the FFe as I don't want to close it now
<Phibs> patdk-wk: nod
<luminous> is there anything special you can put into /etc/network/intefaces that would say "if you don't have a config for the interface, fallback to dhcp" ?
<zul> coreycb:  ceilometer uploaded
<coreycb> zul, thanks!
<Phibs> is there somewhere to set the default gw in ubuntu, where it is not tied to an interface config stanza ?
<sarnold> hey Phibs :) I don't think I've seen a way to do that; how would that help? you've got to have at least one interface configured with an ip in order to know how to route to the gateway anyhow..
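If an explicit default route is still wanted outside the `gateway` keyword, a hedged workaround is a post-up hook -- it still lives inside some interface's stanza, as sarnold notes it must (addresses are placeholders):

```
# /etc/network/interfaces (fragment)
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    post-up ip route add default via 192.168.1.1 || true
```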
<henrik> Anyone here interested in bug reports for the trusty serverguide? I found a couple of typos in the lxc section.
<pmatulis> henrik: sure, can you open a single bug for them?
<pmatulis> http://pad.lv/b/serverguide
<henrik> pmatulis: thanks, will create a launchpad account. (and do a quick search to see if anyone spotted the same things..)
<pmatulis> henrik: ty bro
<vlad_starkov> TJ-: I'm here
<TJ-> :)
<TJ-> The elevator=deadline is supposed to fix ultra-fast SSDs causing hangs
<TJ-> Which made me wonder if you might have also benefited from using "rootdelay=$SOMETHING"
<vlad_starkov> TJ-: ahh, I disconnected all drives
<TJ-> vlad_starkov: Oh yeah! Well, try it anyhow, we seem to be firing shotguns at the barn doors anyhow :)
<vlad_starkov> "rootdelay=5" ?
<sarnold> why wouldn't you use elevator=none with ssds?
<TJ-> I think it's 30 by default, so I usually use 90 when I suspect it might help... If I recall the delay between the last kernel message and the bug warning, is about 21 seconds
<TJ-> sarnold: Because there are no SSDs connected :)
<sarnold> err, elevator==noop of course..
<sarnold> oh :)
<TJ-> sarnold: Summary. Supermicro X7DBR-3, no IPMI, with 2x SDD and 4x HDD, new system trying to run even the installer, BUG soft lockup early in boot. In default config (with an initrd) hangs at init-bottom, without initrd, still hangs.   Let me link you to a pastebin of the kernel log, captured over serial, from last Friday: http://paste.ubuntu.com/7169894/
<vlad_starkov> TJ-: Here fresh logs with different set of boot params http://paste.ubuntu.com/7186017/
<TJ-> vlad_starkov: Was this the system that "noacpi" worked on? And one thing I was trying to do was isolate which ACPI facility was causing the hang?
<TJ-> vlad_starkov: We tried "nolapic", did we ever try "nolapic noapic" ?
<vlad_starkov> TJ-: yes we did
<jamespage> coreycb, zul: was the epoch bump on ceilometer intentional?
<jamespage> -queuebot/#ubuntu-release- Unapproved: ceilometer (trusty-proposed/main) [2014.1~b3-0ubuntu3 => 1:2014.1~rc1-0ubuntu1] (ubuntu-server)
<zul> jamespage:  er...no
<zul> i missed that when reviewing it
<vlad_starkov> TJ-: the interesting thing is that with some sets of boot params, for ex. "debug initcall_debug console=tty0 console=ttyS0,115200n8 elevator=deadline", the system does not hang and responds to Ctrl+Alt+Del
<jamespage> zul, lets get it rejected and re-upload with epoch :-)
<vlad_starkov> TJ-: "systemd-udevd[1024]: timeout: killing '/sbin/modprobe -qba dm-multipath' [1107]"
<zul> jamespage:  yeah
<vlad_starkov> TJ-: "dm-multipath" - is a module name?
<TJ-> vlad_starkov: Does the "elevator=deadline" cure it completely... with disks connected?
<vlad_starkov> TJ-: disks are currently disconnected, I'd like to boot successfully in a minimal environment and then enable devices one by one
<coreycb> jamespage, probably a mistake on my part if anything.  I thought it was supposed to be 1.
<henrik> pmatulis: took me a second to get the account due to grey listing, but does this look somewhat sensible? https://bugs.launchpad.net/serverguide/+bug/1300369
<uvirtbot> Launchpad bug 1300369 in serverguide "Typos in lxc unprivileged usage server guide" [Undecided,New]
<TJ-> dm-multipath is the device-mapper
<pmatulis> henrik: looks good, somebody will take a look
<vlad_starkov> TJ-: how about this boot params? console=tty0 console=ttyS0,115200n8 debug initcall_debug noapictimer edd=off clocksource=acpi_pm nohz=off highres=off
<henrik> pmatulis: thanks
<zul> jamespage:  ok we should be ok for stable/havana on friday
<TJ-> vlad_starkov: I'm out of definitive ideas so keep trying anything that looks likely! I'll be afk frequently here (dinner time)
<vlad_starkov> TJ-: Hey! It looks I'm booting slowly...
<vlad_starkov> TJ-: Confirmed. Server is loading...
<vlad_starkov> TJ-: with this set "console=tty0 console=ttyS0,115200n8 debug initcall_debug apic=verbose sysrq_always_enabled ignore_loglevel no_hz=off nmi_watchdog=0 nolapic_timer hpet=disable idle=mwait idle=poll highmem=512m nopat notsc acpi=off pci=nomsi"
<sarnold> cripes
<vlad_starkov> TJ-: OK, fine. The system looks booted. But I see blackscreen (had to add nomodeset boot param I think).
<vlad_starkov> TJ-: Now will try to connect single ssd and boot with it
<nszceta> anybody have experience with Dell EqualLogic systems?
<Phibs> why the heck is this giving me 500G swap: http://p.bsd-unix.net/pbfxjtdjl  http://www.bsd-unix.net/seitz/jing/2014-03-31_1457.png
<TJ-> vlad_starkov: OK, so you'll have to reduce that set to figure out which setting(s) actually have an effect :)
<vlad_starkov> TJ-: this will take awhile))
<TJ-> vlad_starkov: I know how it goes, but at least, once it works, you have a position to work back from, to isolate the workaround, which might also help to pin down a bug in the kernel eventually
<vlad_starkov> TJ-: yep
<TJ-> Phibs: I'd suggest "64000 1000000 10000000000 ext4" because you want that partition to really get the maximum size it requests
<Phibs> ok...
<vlad_starkov> TJ-: when I boot from SSD it boots very slooow...
<TJ-> Phibs: The priority value should be in terms of the size values; closer to either size value will make that size more likely to win
<Phibs> oh....
<TJ-> vlad_starkov: OK ... with all those options I'm not suprised :)
<vlad_starkov> TJ-: looks like the CPU is used intensively, but no lockup yet
<Phibs> I feel preseed partitioning... is retarded then :)
<Phibs> but this is good info
<Phibs> TJ-: seems to have fixed it thanks :)
<TJ-> Phibs: See debian-installer package, file "/usr/share/doc/debian-installer/devel/partman-auto-recipe.txt.gz"
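The recipe line TJ- quotes is a min/priority/max size triple (in MB) plus a filesystem; a partman-auto fragment using it might look like this sketch (sizes illustrative; see partman-auto-recipe.txt for the full grammar):

```
d-i partman-auto/method string regular
d-i partman-auto/expert_recipe string              \
    root-big-swap-small ::                         \
        8192 9000 16384 linux-swap                 \
            method{ swap } format{ } .             \
        64000 1000000 10000000000 ext4             \
            method{ format } format{ }             \
            use_filesystem{ } filesystem{ ext4 }   \
            mountpoint{ / } .
```

The huge max and a priority close to it are what make the root partition win the remaining space, which is the behaviour TJ- is recommending.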
<Phibs> THANKS
<Phibs> grr caps
<TJ-> CAPS are great when ITs raining on you :p
<Phibs> I was fine with the auto layout, except using 100% swap
<Phibs> 256G ram and 256G... no
<vlad_starkov> TJ-: At the very beginning of boot log I have lines like " *BAD*gran_size: 64K 	chunk_size: 512M 	num_reg: 8  	lose cover RAM: -256M". Does it mean that I have corrupted memory?
<vlad_starkov> TJ-: can't exit screen
<vlad_starkov> TJ-: Ctrl+A returns No other window
<vlad_starkov> TJ-: oh, exited :)
<vlad_starkov> I have to go. Will continue tomorrow I guess...
<vlad_starkov> Bye!
<zul> coreycb:  nova rc1 is out
<coreycb> zul, cool thx
<coreycb> zul, jamespage : https://code.launchpad.net/~corey.bryant/nova/2014.1.rc1/+merge/213291
<zul> +1
<rostam> Hi when 14.04 will be released? thx
<thumper> rostam: I think the schedule says 17th
<rostam> thumper, thx
<Daviey> mdeslaur: Hey, looking at CVE-2013-4311 it was fixed in libvirt (0.9.13-0ubuntu12.5) quantal .. but MITRE don't have it listed as affecting that upstream version.  Which one is incorrect ?
<uvirtbot> Daviey: libvirt 1.0.5.x before 1.0.5.6, 0.10.2.x before 0.10.2.8, and 0.9.12.x before 0.9.12.2 allows local users to bypass intended access restrictions by leveraging a PolkitUnixProcess PolkitSubject race condition in pkcheck via a (1) setuid process or (2) pkexec process, a related issue to CVE-2013-4288. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4311)
<[styx]> Im getting an error stating "Connection closed by [my local host]" this is weird because I was just able to connect last night and nothing has changed since. Anyhelp?
<[styx]> Ive tried google but nothing relevant came up. Most was about connecting remotely. Im on my local network still
<antarus> utlemming: ping
<antarus> utlemming: you notice anything weird with the udev 9.5 bump?
<sarnold> Daviey: the debdiff makes it look like the 0.9.13.x branch we had had problems: http://launchpadlibrarian.net/150603203/libvirt_0.9.13-0ubuntu12.3_0.9.13-0ubuntu12.5.diff.gz
<Daviey> sarnold: Yeah, i'm trying to work out why it's not listed on the CVE that this version is vuln.
<Daviey> (or was)
<xperia> hi to all. i created a btrfs partition on an SSD drive in ubuntu for the first time and i would like to know: what are the best mount options for this btrfs partition if read and write speed are very important when working with SSDs? what options do you use and where do you change them?
<antarus> ahh nevermind
<antarus> unshockingly it was a bug on our end ;p
<adam_g> jamespage, any chance of pushing libvirt 1.2.2-0ubuntu7~cloud0 out to icehouse-updates?
<Guegs> Anybody know of a good way to schedule FTP downloads from my Offshore server (ubuntu 12.04) to my home server (12.04)? to occur every ~20 minutes or something
<sarnold> Guegs: man 5 crontab  :)
<Guegs> groovy. thanks.
<Guegs> I was considering setting up bittorrent sync, but I need to do some more reading on groups. Just started using any sort of linux distro about a week ok.
<Guegs> Wow. That is quite simple. :-)
<sarnold> Guegs: to forestall the first question -- it is best to give full pathnames to scripts in cron jobs :)
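A crontab entry along those lines, with full paths per sarnold's advice (the host, paths, and the choice of rsync-over-ssh are illustrative, not Guegs's actual setup):

```
# m h dom mon dow command -- every 20 minutes
*/20 * * * * /usr/bin/rsync -az -e /usr/bin/ssh offshore:/srv/drop/ /home/guegs/incoming/ >> /home/guegs/sync.log 2>&1
```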
<Guegs> heh, good to know. :-P
<Guegs> I don't have the OS installed on my home server yet. I want to get stuff tested in a VM before I do anything too crazy.
<sarnold> nice, VM testing is a good idea :)
#ubuntu-server 2014-04-01
<Patrickdk> guegs, don't use ftp
<Patrickdk> atleast use rsync or sftp
<Guegs> yeah, sftp is what I meant.
<Guegs> Might even go for vsftp. Depends on how ambitious I'm feeling.
<Patrickdk> that would be horrible
<Patrickdk> vsftp is just ftp
<sarnold> don't do vsftp
<Patrickdk> sftp is a million times more secure
<pmatulis> a million?  wow!
<sarnold> yeah I think sftp has 7.3 million security units :)
<mwhudson> sftp is also not a completely horrible protocol
<Patrickdk> well, when you go from 0 security to 1 security, it's just infinitely better :)
<sarnold> so true. ftp scores a few million suckitude points. :)
<Patrickdk> it was a nice design, till nat was invented
<Patrickdk> then doing ssl+ftp became impossible
<Patrickdk> and after all that, I can't believe they went and designed sip to do the same stupid thing
<guampa> hello, a little question on amavisd-new. I know it supports listening on several inet sockets and plugging different policy banks to that, what I want to know is if amavis supports the same with unix sockets
<guampa> i only see a single $interface_policy{'SOCK'} in the default config
<guampa> hmm, if you can do $inet_socket_port = [10040,10041,10042,10043,10044]; maybe something alike can be done with $unix_socketname
<guampa> nope, inet sockets it is then
<lstefani> hello. how can i change a file with drwxr-xr-x 5 nobody nobody to drwxr-xr-x 5 root root? i ran chown root:root file_name, but it does not work
<ubunter> Any one have experience PXE booting?
<ubunter> Hello?
<ubunter>  Any one have experience PXE booting with Ubuntu server?
<ubunter> After completing Ubuntu installation through PXE booting, the client has no internet access, what would cause that issue?
<Phibs> ubunter: i use cobbler to do that
<Phibs> and it sets up the interface config post install
<ubunter> After using cobbler does your client have internet access after installations are complete?
<ubunter> Or do you have to make changes to the interfaces?
<Phibs> ubunter: yes
<ubunter> If possible could you briefly explain the steps involved in the process maybe like the 4 point summary. Let me give you mine: 1. Configured DHCP server 2. Install tftpd-hpa inetutils-inetd 3. Made Configurations for those tools 4. Download Ubuntu 12.04 ISO 5.Extracted and put proper files into proper directories. 6 PXE boot and installed Ubuntu 7. After installation no internet access for the client.
<ubunter> Where might I have made a mistake that causes my clients to lose internet access after installation is complete?
<Phibs> ubunter: it is your preseed
<Phibs> you have to customize it
<Phibs> cobbler ships with a post network config script that should work
<Phibs> it is possible it is not setting the default gateway
<Phibs> this might help, http://tech.five3genomics.com/cobbler-tips/
<Phibs> ubunter: I am going to sleep but if you still need help tomorrow I will give you my preseed/script
<ubunter> I see, Thank you very much I will see if I can fix the issue
<ubunter> Ok thank you very much been working on this for week for a non profit I volunteer for thank you
<hallyn> smb: suspect it's too early for you, but in any case - was there some option you said earlier that i have to give to mount nfs from a saucy host onto a trusty client?
<hallyn> it hangs while doing mount("10.42.43.16:/srv", "/srv", "nfs", 0, "vers=4,addr=10.42.43.16,clientad", but '-o nfsvers=2 or =3 is not supported
<hallyn> hm, seems to be working now <shrug>
<ruben23> hi guys any help when i run /usr/sbin/iptables rules  --> it says no directory..? how do i run the path for iptables in ubuntu server..?
<verdeP> ruben23: which iptables
<verdeP> err thats the command xD
<verdeP> its in /sbin/
<smb> hallyn, I certainly said nothing earlier anyway. But it should be ok with no special magic.
<lifeless> jamespage: I'm hoping you're in UK time :)
<lifeless> jamespage: cause, https://bugs.launchpad.net/tripleo/+bug/1300663 - I'm thinking its an upstart bug
<uvirtbot> Launchpad bug 1300663 in tripleo "upstart using 100% CPU" [Critical,Triaged]
<rbasak> lifeless: could that be an errant upstart job, perhaps, causing some sort of loop? If it is an upstart bug, you haven't provided the release or version of upstart or anything.
<lifeless> rbasak: oh sorry ! still gathering data but saucy
<lifeless> rbasak: so yes, certainly an errant job, but that should never be able to wedge upstart
<lifeless> rbasak: upstarts job is to be unwedgable ;)
<rbasak> jodh: ^^
<rbasak> lifeless: agreed, but is upstart actually wedged there? Or is it trying as hard as it can to do what an errant upstart job might have said, while still being able to process other things?
<lifeless> rbasak: service nova-compute stop hangs
<lifeless> rbasak: even though the nova-compute process can be killed (have done so) and is now a zombie
<lifeless> rbasak: also can't reboot the machine
<lifeless> we think we know how we're tickling this now
<jodh> lifeless: looks like that server needs to raise its limits. what does 'ls -l /proc/1/fd' show?
<lifeless> jodh: 0 through 1023
<lifeless> jodh: but no - its a genuine leak in one of our scripts - my complaint here is that upstart has allowed itself to become nonfunctional
<lifeless> jodh: can't reboot, can't stop services.
<jodh> lifeless: try modifying /proc/1/limits to raise max files to the hard limit
<lifeless> echo 2048 > /proc/1/limits
<lifeless> -su: echo: write error: Invalid argument
<lifeless> jodh: ^
<jodh> lifeless: I've updated the bug with questions and suggestions.
<lifeless> jodh: brilliant, many thanks
<lifeless> I've restarted that server, but I've 9 more with the symptom intact, will grab a stack from them
<jodh> lifeless: thanks
<lifeless> jodh: I'm not sure what you mean by raise the limits, since upstart starts before any limits are able to be set
<vlad_starkov> QUESTION: Ubuntu 14.04 Server 64bit. Does it support 16Gb memory?
<rbasak> vlad_starkov: http://askubuntu.com/q/142043/7808 suggests that it should be fine. I'm not aware of any other restriction.
<rbasak> (assuming your hardware supports it)
<vlad_starkov> rbasak: nice)
<vlad_starkov> THanks.
<bekks> hi
<bekks> how can I enforce iscsi target to be presented over a specific network only? I defined a public lan, and a separated iscsi lan, but targets are visible over public lan, too.
<rbasak> bekks: arrange for it to "bind" to the correct interface or address. I'm not sure how to do that, but the wording might help your search.
<bekks> rbasak: yeah, I'm gonna try that. thank you :)
<bekks> for the logs: binding to a specific iscsi interface can be done by setting ISCSITARGET_OPTIONS="--address a.b.c.d" in /etc/default/iscsitarget
<bekks> Thanks for the clue :)
<rbasak> No problem. Thanks for reporting back - useful to know next time someone asks :)
<jamespage> lifeless, I see that rbasak and jodh are helping you
<lifeless> jamespage: they are, thanks!
<lifeless> oh nuts, I just realised I didn't get the stacktrace from the host
<lifeless> I bulk-removed the cause that triggered the issue :(
<jamespage> adam_g, just promoting everything aside from the rc1's to -updates now
<jamespage> lifeless, ooops
<lifeless> assuming we a) analysed it right and b) the fix works
<lifeless> we won't tickle the problem again
<lifeless> :/
<lifeless> should be fairly easy to reproduce on demand with a little scripting
<lifeless> I'll see what I can do tomorrowish
<jamespage> zul, coreycb: we need to switch over the CI lab to use the milestone-proposed branches as they appear
<mdeslaur> Daviey: mitre descriptions are often wrong, you can't rely on them. Here's the upstream link: http://security.libvirt.org/2013/0012.html
<vlad_starkov> QUESTION: Ubuntu 14.04 Server 64bit. Successfully boots with 12GB RAM. Fails with 16GB RAM raising "mtrr_cleanup: can not find optimal value, please specify mtrr_gran_size/mtrr_chunk_size" errors. How to choose correct values for mtrr_gran_size and mtrr_chunk_size?
<cfhowlett> vlad_starkov, unreleased ubuntu support = 14.04 is in #ubuntu+1
<vlad_starkov> cfhowlett: ooops, didn't mention that I'm in #ubuntu-server. Sorry. But anyways, the same errors and boot fail I've got with 12.04.4 and 13.10.
<jamespage> vlad_starkov, please raise a bug - 16GB should be just fine with the 64 bit kernel
<vlad_starkov> jamespage: How to do it?
<jamespage> vlad_starkov, that will at least get your issue infront of the kernel team who can triage this sort of thing more effectively
<jamespage> vlad_starkov, use the ubuntu-bug tool
<jamespage> vlad_starkov, https://help.ubuntu.com/community/ReportingBugs
<vlad_starkov> jamespage: thanks
<jamespage> zul, did you upload coreycb's nova rc1?
<zul> jamespage:  yep
<zul> jamespage:  Daviey is sitting on it because of this https://launchpadlibrarian.net/171386104/buildlog_ubuntu-trusty-i386.nova_1%3A2014.1%2Bgit201403311446~trusty-0ubuntu1_FAILEDTOBUILD.txt.gz
<zul> jamespage:  i have narrowed down the commit that caused it
<jamespage> zul: oh joy
<zul> jamespage:  yeah
<jamespage> zul, can you reproduce that locally?
<zul> not yet..have to start the day first :)
<jamespage> zul, ack
<jamespage> zul, anything I can help with?
<zul> jamespage:  not yet
<rbasak> smoser: sometimes I see the "WARNING! Your environment specifies an invalid locale." message to run locale-gen, even after /var/lib/cloud/instance/boot-finished exists. This means that "uvt-kvm wait" still feels racy.
<rbasak> smoser: is this expected?
<vlad_starkov> YAY!!! My system boots with 16GB RAM. Finally!!!)
<vlad_starkov> Strange thing (possibly BUG). System doesn't boot with BIOS settings "Memory Branch Mode -> Interleave". But successfully boots with BIOS settings "Memory Branch Mode -> Sequential". Can anyone explain why this could happen?
<JBtje> My samba server stopped, can anyone help me find out the problem? (have tried for many hours now w/o success)
<shredding> What's the difference between $VAR and ${VAR} ?
<shredding> i have cd $CURRENT_DIR and my ide says i should use ${CURRENT_DIR}
<shredding> But lacks an explanation.
<ivoks> shredding: it's easy
<henrik> shredding: in certain contexts, you need to use ${VAR} - otherwise they're the same. consider "$VARsuffix" vs "${VAR}suffix"
<ivoks> shredding: this$CURRENT_DIRwill not work
<ivoks> while
<shredding> Ah, thanks.
<ivoks> shredding: this${CURRENT_DIR}will work
<shredding> So it's for string interpolation.
<ivoks> ${CURRENT_DIR} is always on the safe side
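A quick illustration of the point ivoks and henrik are making (a sketch; the variable value is arbitrary):

```shell
# Why the braces matter: without them the shell reads the longest possible
# variable name, so "$CURRENT_DIRwill" looks up an (unset) variable named
# CURRENT_DIRwill and expands to nothing.
CURRENT_DIR=/tmp
echo "this$CURRENT_DIRwill"    # prints just "this"
echo "this${CURRENT_DIR}will"  # prints "this/tmpwill"
```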
<smoser> rbasak, yes.
<smoser> its still racy.
<ivoks> hah
<smoser> although if you had a sane locale, i think you wouldnt see it.
<ivoks>  /etc/security/limits.conf is useless
<smoser> :)
<ivoks> no really, it is
<ivoks> i mean, you can set there whatever you want, it's ignored
<zul> jamespage:  neutron is available
<ivoks> why don't we include pam_limits in pam's common-session?
<rbasak> smoser: what, en_GB.UTF-8 isn't sane? :)
<smoser> LANG=en_US.UTF-8 ==> fix-released.
<rbasak> :)
<rbasak> So cloud-init runs stuff after boot-finished?
<rbasak> I'm a bit confused about that.
<smoser> i dont think so.
<smoser> i think you must be getting in before that.
<smoser> oh. wait, o. its simply expected behavior.
<smoser> no race.
<smoser> i think
<smoser> if you *don't* see it then something is wrong.
<zul> jamespage:  have you changed it over to the milestone-proposed branches already? if not ill do it right now
<rbasak> smoser: http://paste.ubuntu.com/7189555/ is what I'm running on the guest for the wait. I can amend it as needed.
<smoser> its correctly telling you "hey, i don't have locales generated for your exotic locale, if you want to generate them, here is how you can".
<rbasak> smoser: except that if I wait a bit, I don't get that prompt, I don't think.
 * rbasak will test
<rbasak> (not even once is what I mean; I'll check for that)
<smoser> hm..
<jamespage> zul, not yet
<zul> ill do it right now
<rbasak> smoser: yeah I get the message on first ssh if I don't wait, and don't get the message on first ssh if I do wait, on a precise amd64 image.
<smoser> rbasak, that makes sense.
<rbasak> I can also see the message by it winning the race.
<smoser> it runs once.
<smoser> only.
<rbasak> (even when I believe I'm checking for boot-finished)
<smoser> and i suspect its running on your non-interactive run
<jamespage> zul, did you want me todo neutron?
<smoser> we need:
<smoser>  [ -t 0 ]
<rbasak> Non-interactive run?
<zul> jamespage:  yes please
<rbasak> I made sure not to trigger any outside ssh.
<zul> if you dont mind
<rbasak> smoser: that's what I mean by "first ssh". There was no other ssh.
<smoser> hm.. . oh i thought you were running the 'wait' in that paste via ssh.
<rbasak> I am, but I disabled it for my test.
<jamespage> zul, no problem
<smoser> rbasak, /etc/profile.d/Z99-cloud-locale-test.sh
<smoser> thats what does it.
<smoser> i'm not sure why you would not see it.
<smoser> i just verified ssh'ing to an instance that'd been up for a couple days like:
<smoser> env LC_ALL=en_GB.UTF-8 ssh sstack-5
<smoser> and I see it. but only the first time.
<rbasak> smoser: LC_ALL does trigger it, but LANG does not.
<rbasak> smoser: once logged in (without seeing the message), "locale" gives me LANG=en_US.UTF-8. No sign of en_GB.
<smoser> rbasak, i think this is because ssh does not allow your LANG through
<smoser> but does allow LC_ALL
<rbasak> OK, but why the race then?
<smoser> oh. well, maybe.
<smoser> i don't know what the race is.
<smoser> i can't explain this, so i think that you must be doing something wrong :)
<smoser> you can look at how that works, i can't see how it could possibly result in not showing you that message.
<smoser> other than if it has run once on a non-interactive login (but actually, motd which runs *it* should only be running on interactive login)
<smoser> as showing that message to a computer isn't terribly useful
<rbasak> I have a theory
<smoser> rbasak, for what its worth, your 'wait for runlevel' has unsafe logic
<zul> jamespage:  after using my 300 baud modem we are using the stable branches now for icehouse
<jamespage> zul, milestone-proposed right?
<rbasak> smoser: what's wrong with it?
<zul> jamespage:  yes
<smoser> if 'runlevel | awk .. ' prints a non-integer  it will fail with bad syntax and drop from that loop.
<smoser> non-integer or "".
<jamespage> zul: ++
<zul> jamespage:  i cant reproduce the failure locally
<smoser> i think. maybe i'm wrong.
<rbasak> smoser: AIUI, the quotes fix that problem. I see people doing x$a = xfoo but I never understand why, since one can use quotes.
<smoser> oh. yeah, you're right. i was thinking the other end.
<smoser> xfoo is garbage.
<smoser> you're correct.
<smoser> i was thinking you were using -eq
<smoser> which would complain about non-integer. but you're just doing string compare.
<smoser> thats fine.
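The quoting point above can be demonstrated in a couple of lines (a sketch; variable names are arbitrary):

```shell
# With quotes, an empty expansion still yields one (empty) word, so the
# test is syntactically valid and the old "x$a" = "xfoo" idiom is unneeded.
a=""
if [ "$a" = "2" ]; then echo match; else echo no-match; fi   # prints "no-match"
# Unquoted, [ $a = "2" ] would expand to [ = "2" ]: a syntax error.
```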
<rbasak> While we're looking, I plan to implement a look for /run/.../boot-finished at some point.
<rbasak> I just hadn't because I figured that I need to do a version test of cloud-init first, and I was in a hurry.
<rbasak> The script is user-overridable, so it's not critical for Trusty I don't think.
<rbasak> Scripts that call uvtool could supply their own, and users can use the PPA.
<smoser> agreed.
<rbasak> The only restriction is that currently it must be an "sh" script.
<rbasak> It would probably be nice to fix that at some point, but I didn't worry about it.
<rbasak> (it's documented)
<zul> jamespage:  hey are you agreeable to push out one more oslo.messaging http://pastebin.ubuntu.com/7189681/
<tomixxx5> how can i found out which version of package <package1> is going be installed with "sudo apt-get install <package1>"
<cfhowlett> tomixxx5, apt-cache policy <package> will show you
<tomixxx5> cfhowlett: ty a lot
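For the record, a sketch of the check cfhowlett is describing (the package name is just an example):

```shell
# The "Candidate:" line in apt-cache policy output is the version that
# "apt-get install <package>" would pick.
apt-cache policy openssh-server
# Or extract just the candidate version string:
apt-cache policy openssh-server | awk '/Candidate:/ {print $2}'
```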
<jamespage> zul, +1
<jamespage> Daviey, just as a heads up - the neutron upload for rc1 includes some new binary packages; some of its renaming and some of its new since b3
<jamespage> zul, Daviey, I should really have pushed those changes in before rc1 - but hindsight is 20:20
<zul> jamespage:  agreed
<zul> Daviey/coreycb/jamespage:  I guess that nova test regression got fixed I dont see it anymore
<coreycb> zul, hmm ok
<zul> odd
<zul> jamespage:  oslo.messaging builds fine for me modulo a patch
<jamespage> zul, neutron uploading
<zul> jamespage:  huzzah
<jamespage> coreycb, I almost have nova-cloud-controller upgrading again
<jamespage> something is caching in do_openstack_upgrade
<coreycb> jamespage, ok great.  I hadn't attempted an upgrade in a few days so I hadn't come across any recent issues.
<jamespage> coreycb, I convinced quantum-gateway to do the switches between grizzly->havana->icehouse OK
<jamespage> still working on ncc
<coreycb> jamespage, cool
<coreycb> jamespage, btw, for the nova db updates
<coreycb> jamespage, my approach has been to compare old vs new databases after migration and make any changes to get the new db to look the same as the old
<coreycb> jamespage, I'm putting most of the changes into the new 216*.py version - should have something for you to look at in the next day or so
<jamespage> coreycb, OK - if we are going to put this in, it needs to happen this week
<jamespage> any later...
<zul> then it would make me nervous
<coreycb> jamespage, ok
<Haven|Work> installing ubuntu 13.10 server, I want to unlock encrypted drives with a USB Key, can I configure that during install or is it best to wait till the installer finishes and set up the crypt with the unlock file?
<rbasak> Haven|Work: I've done something similar before, and I set it up afterwards. You could set up an encrypted volume during install, and then change the passphrase later.
<rbasak> Change it with a random key that's only on your USB stick, and arrange for a keyscript to supply that.
 * rbasak isn't sure of any other installer option to achieve this
<vlad_starkov> TJ-: Hi! This is just to let you know. I won it :)
<Haven|Work> okay, let me give you a little background. I want to install the OS on an IDE Drive, then want to configure and unlock the encrypted RAID Array at boot with a USB Stick.
<TJ-> vlad_starkov: Fab... how!?
<Haven|Work> Probably best to make the array and everything after install rbasak ?
<TJ-> vlad_starkov: I think your case needs a bug report write-up, for others than might suffer the same issues
<rbasak> So I'm clear, your encrypted RAID array will not be on an IDE drive?
<Haven|Work> no its two 2.5tb sata drives
<Haven|Work> for storage
<rbasak> OK. Yes - then I'd arrange that all after install.
<rbasak> cryptsetup + keyscript in /etc/crypttab, etc.
<rbasak> /etc/fstab entry to mount it. I think with an auto mount from /etc/fstab, it'll correctly see /etc/crypptab and call the keyscript.
<rbasak> I can't remember the details, though.
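A sketch of the crypttab/fstab pairing rbasak describes (the UUID, mapping name, keyscript path and mount point are all assumptions):

```
# /etc/crypttab: unlock the array as /dev/mapper/storage, with a
# keyscript that fetches the key (e.g. from a USB stick).
storage  UUID=<uuid-of-raid-array>  none  luks,keyscript=/usr/local/sbin/usbkey.sh

# /etc/fstab: mount the opened mapping.
/dev/mapper/storage  /srv/storage  ext4  defaults  0  2
```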
<Haven|Work> I have a guide I intend to follow for the making the USB encryption. so that shouldn't be too bad
<Haven|Work> once I do that though if I install Zentyal will it overwrite the /etc/fstab?
<rbasak> No idea about how things will interact with Zentyal, sorry.
<Haven|Work> I'd suppose it wouldn't matter if i had the keyfile generated and on the USB stick I could still unlock the drives by telling it to look there :)
<Haven|Work> so that answers that question :)
<vlad_starkov> TJ-: Eventually it turned out that the system successfully boots in 2 cases: 1) When BIOS's "Memory Branch Mode" param is "Interleave" and max 12Gb RAM installed; AND 2) When BIOS's "Memory Branch Mode" param is "Sequential" and 16Gb RAM installed :-)
<rbasak> BTW, on my home server machine I supply the LUKS passphrase over the network (loopback cable) using a keyscript I wrote: https://github.com/basak/netkeyscript
<TJ-> Haven|Work: It's pretty straight-forward cryptsetup processes. An entry in "/etc/crypttab" will ensure udev/cryptsetup unlocks and create a DM device node, which is what /etc/fstab will refer to
<Haven|Work> TJ-, perfect thanks
<Haven|Work> thank you also rbasak
<TJ-> vlad_starkov: So, BIOS issue after all ... I was looking at those MTRRs so that might have been another route to fix it
<xnox> rbasak: how does that with plymouth?
<TJ-> Haven|Work: I've done extensive work with cryptsetup, so if you need assistance, ping me
<rbasak> xnox: you mean my netkeyscript, or keyscripts in general?
<hallyn> zul: were you planning any libvirt upload soon?
<xnox> rbasak: ideally i'd like something like that for a desktop machine such that i can enter password via plymouth or via external means.
<vlad_starkov> TJ-: By the way, the MTRR errors are fixed just by adding the following boot params "enable_mtrr_cleanup mtrr_spare_reg_nr=1 mtrr_gran_size=64K mtrr_chunk_size=1M"
<zul> hallyn:  no
<zul> hallyn:  1.2.3 is out though ;)
<rbasak> xnox: cryptsetup comes with some kind of keyscript/built in thing that can speak to plymouth, I presume.
<TJ-> vlad_starkov: Yes, that was one of the options I was going to suggest
<hallyn> zul: haha, yeah.  no i may be pushing the 2.0 qemu to trusty archive soon, so would need to push the corresponding libvirt
<rbasak> xnox: to integrate with my netkeyscript, I'd suggest some kind of keyscript multiplexing keyscript, that calls out to both a plymouth keyscript and my netkeyscript.
<zul> hallyn:  okies
<hallyn> i need to check how i said i would do it
<vlad_starkov> TJ-: So now I have working Ubuntu Server 14.04 64bit, 2xCPU (8 cores), 16Gb RAM, 2x80Gb SSD (RAID 1), 4x 2Tb HDD (RAID 10) :)
<TJ-> vlad_starkov: About time :) Glad it got sorted.
<vlad_starkov> TJ-: Thank you for all your time you have spent for helping me! I got many good lessons and learned many new cool things!
<TJ-> vlad_starkov: you're welcome
<Haven|Work> TJ-, have you ever done anything like I am attempting?
<TJ-> Haven|Work: Yes. I think I have an article about something similar from a few years ago, might not be precisely what you're wanting, but gives a good overview of the approach.
<Haven|Work> heh, good overview would be perfect, from there I can modify whatever I need to make it work
<TJ-> Haven|Work: http://tjworld.net/wiki/Linux/Ubuntu/HardyRAID5EncryptedLVM
<TJ-> Haven|Work: Nowadays many of the steps are built into the tools so the manual steps aren't required
<Haven|Work> okay, I really plan on encrypting the Raid1 Array and unlocking that with Key, that's all the further I need to go this looks almost perfect for what I'm doing
<TJ-> Haven|Work: I have all our laptops using LUKs full-disk encryption, including the /boot/grub/ partition, via GRUB_ENABLE_CRYPTODISK
<Haven|Work> so after install i make the Raid array then once that's done I run cryptsetup and it should walk me through the process at least somewhat :)
<TJ-> Haven|Work: If you're going to randomise the disk surfaces, use the 'quick' method: create an initial sacrificial LUKS container spanning all the space (luksFormat $LUKS_CONTAINER), do luksOpen $LUKS_CONTAINER $LUKS_DEVICE, then use "dd if=/dev/zero of=/dev/mapper/$LUKS_DEVICE bs=4M" to quickly randomise, then luksClose followed by a wipe of the LUKS header with "dd if=/dev/urandom of=$LUKS_CONTAINER bs=1M count=1" (against the raw device, not the mapper), then create the real LUKS containers.
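TJ-'s recipe as a dry-run sketch (the device name is an assumption; `run=echo` just prints the destructive commands instead of executing them):

```shell
# Dry-run sketch of the "quick randomise" recipe. Set run to empty
# (and use a real device) only on a disk you genuinely mean to wipe.
run=echo
DISK=/dev/md0
$run cryptsetup luksFormat "$DISK"                 # sacrificial container
$run cryptsetup luksOpen "$DISK" scratch
$run dd if=/dev/zero of=/dev/mapper/scratch bs=4M  # zeros become random ciphertext
$run cryptsetup luksClose scratch
$run dd if=/dev/urandom of="$DISK" bs=1M count=1   # destroy the sacrificial header
$run cryptsetup luksFormat "$DISK"                 # now create the real container
```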
<Haven|Work> I actually already have the partitions set and formatted on the disk space. I played with this forever in the install on Thursday and Friday and managed to get that far before the asshole janitor unplugged my server over the weekend
<Haven|Work> found out though CMOS battery is bad
<Haven|Work> so that was at least a partial help
<jamespage> zul, https://code.launchpad.net/~james-page/ceilometer/fixup-dbsync/+merge/213686
<zul> jamespage:  +1
<zul> jamespage:  looks like ceilometer needs a newer happybase
 * jamespage sighs
<Daviey> zul: How was the nova issue fixed?
<Daviey> jamespage: neutron accepted.
<jamespage> Daviey, thanks
<zul> Daviey:  I am not sure how; built it this morning no problems...going to be dropping the patch soon
<pycoderf> Hi all. I am troubleshooting an ltsp server issue and ran into problems but #ltsp seems dead. Anyone able to help?
<Daviey> zul: It's concerning having unknown test failures that now work... Sure you didn't change anything else? :)
<zul> Daviey:  no i didnt change anything
<zul> Daviey:  other than change to milestone branches in the lab
<jamespage> Daviey,zul: was the test failure in the trunk PPA?
<zul> yeah
<zul> jamespage: Re-uploaded with the patch that disabled the test failures to the ppa now it builds fine
<Daviey> zul: withOUT?
<zul> without
<Daviey> zul: Out of interest, why is >=0.7 keystoneclient needed?
<zul> Daviey:  https://github.com/openstack/requirements/commit/65a913ef036de59ad84a7fb369a5e6df93bb5ac0
<Daviey> zul: I wish they weren't so vague on WHY.
<zul> Daviey:  agreed
<Daviey> We want a newer version because we want newer shiiitz
<zul> its shiney
<zul> Daviey/jamespage: glance should be ready today
<jamespage> zul, great
<jamespage> Daviey, neutron built and binary NEW awaiting review :-)
<imdea> Hi one question, I've a user "roberto" in my machine and want that it be able to do "sudo su - fyf"  and execute commands as that user without entering a password, I have edited as root the /etc/sudoers file using visudo and added this:  http://paste.debian.net/91019/  but if I'm root and switch to this user as: sudo su - roberto and then do sudo su - fyf it asks me for a password, any ideas?
<Daviey> jamespage: accepted
<keithzg> Oh hey, the Subversion project is switching to Git: https://issues.apache.org/jira/browse/INFRA-7524
<keithzg> ;)
<sarnold> imdea: every NOPASSWD: in the sudoers(5) has a space afterwards
<jamespage> Daviey, ta
<imdea> sarnold: curious, since I have another entry like this one: git     ALL = NOPASSWD:ALL (with no space afterwards) and it works great.
<sarnold> imdea: drat. it was reaching for straws anyway, i didn't like it much as a suggestion. :)
<sarnold> imdea: OH! rather than 'sudo su - fyf' try 'sudo -u fyf -s'
<sarnold> imdea: I like this one :) this one ought to work
<imdea> sarnold: what's the difference?
<sarnold> imdea: in your version, you're switching to root and then running the 'su' command to switch to fyf. in my version, sudo switches to fyf directly and then starts a shell.
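A sketch of the sudoers rule this needs (edited via visudo; usernames taken from the discussion, the exact rule is an assumption based on sudoers runas syntax):

```
# /etc/sudoers fragment: let roberto run any command as fyf, no password.
roberto ALL = (fyf) NOPASSWD: ALL
```

Then, from roberto's shell, `sudo -u fyf -s` starts a shell as fyf directly, without the intermediate hop to root that `sudo su - fyf` takes.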
<imdea> sarnold, thanks!
<sarnold> :D
<keithzg> Had my company's email go down earlier today while I was asleep, I was more tempted to use "sudo su - fml" :P
<sarnold> keithzg: :)
<patdk-wk> hmm, apache is moving
<patdk-wk> https://issues.apache.org/jira/browse/INFRA-7524
<sarnold> I suspect it's aprilfoolsism.
<patdk-wk> :)
<patdk-wk> your no fun
<sarnold> indeed :)
<jamespage> zul: you need to use the setup-jenkins job to reconfigure the icehouse jobs for milestone-proposed btw
<zul> jamespage:  ack..i did :)
<jamespage> zul, sorry - so you did - I just happened to look at swift :-(
<jamespage> doh
<zul> jamespage:  heh
 * jamespage eod's
<tcstar> I have a quad core server, with 3953 MB of memory which runs apache and php...  it's running about 30 high traffic websites -- just wondering what an approximate acceptable load average would be when looking @ htop
<sarnold> tcstar: load average is just one measurement number to indicate the 'load' of the system; it's just one more metric along with e.g. swap use and paging requests to help you determine if something has -changed- on the system
<sarnold> tcstar: of course, whther or not you need to -do- anything about any of the measures is another thing -- probably best measured by request latencies on the websites in question
<tcstar> yeah...  i started optimizing my apache a little...  had the cpu use drop from about 35% to no more than 7%...  load from 1.4 down to 0.43 memory down to 500 megs and ive never used any of the 4 gig of swap
<sarnold> tcstar: wow :) that's cool
<sarnold> tcstar: the 'bo' and 'bi' columns of 'vmstat 1' output is one of my favorite quick performance tools
<tcstar> now whether or not that really means anything is another question that i can't give the answer lol
<tcstar> mine shows:  https://gist.github.com/anonymous/4f447a8b086198b27d7e
<sarnold> I don't know what kind of time just a bare 'vmstat' covers, but it sure looks like this machine is nearly asleep :) hehe
<tcstar> just noticed the '1' so ran it again...  this is what i've got so far...
<tcstar> https://gist.github.com/anonymous/a7b2db052e8870c27b6a
<sarnold> aha, looks like heavy logging or light file uploading or similar?
<tcstar> atm no file uploading, might be seeing the rsync stuff in there mirroring my 'upload server'...  i don't understand anything i see in vmstat honestly...  but we do have a crap ton of traffic going to different sites
<tcstar> usually get about 500 unique hits per minute per site
<sarnold> cool
<tcstar> had one of my dual core servers lock up on us yesterday causing a 20 minute outage.. so spent the time to migrate over to the quad core machines -- and trying to optimize so it doesn't happen again...  that's the goal anyway
<sarnold> machines die: hard drives, power supplies, etc etc. having a fail-over or N+1 redundancy in place from the start is a good idea when you can't tolerate downtime
<sarnold> look into haproxy, it may be a nice simple stepping-stone to get to where you want to be
<tcstar> can I run something like HaProxy over public ip?  2 of my servers are in one DC and 2 are in another
<sarnold> tcstar: hrm that's way beyond my experience. I think your options there are limited to DNS-based solutions or anycast; dns is probably far easier to configure..
<tcstar> Or I could just run dual HaProxy -- one on each set in each DC... then just seperate domains evenly between the servers
<tcstar> so half of the domains on servers in DC1 and other in DC2
<henrik> Anyone running unprivileged lxc containers here in trusty? The autostart stanza won't start unprivileged containers - is that intentional?
<cfhowlett> henrik, until official release, trusty support = #ubuntu+1
<henrik> 'k
<ubunter_> http://paste.ubuntu.com/7191170/  tail -f /var/log/syslog is saying http://paste.ubuntu.com/7191174/ but I dont see it. Can I get some assistance?
<ubunter_> Where am I missing a semicolon? http://paste.ubuntu.com/7191170/ I dont understand why I get this http://paste.ubuntu.com/7191174/  if everything is ok
<sarnold> ubunter_: check out those quotes on line 7
<sarnold> ubunter_: I suspect you copy-and-pasted from some website? :)
<ubunter_> yes
<ubunter_> cobbler dhcp server set up on ubuntu server 12.04
<ubunter_> get rid of the quotes?
<sarnold> or replace them with standard ascii quotes ""
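The fix sarnold suggests can be automated; this replaces the Unicode "smart" quotes with plain ASCII quotes in place (the filename is an assumption; point it at whatever file you pasted into):

```shell
# Plain string substitution of the exact multibyte sequences, so it
# works regardless of locale.
CONF=dhcp.template.pasted
sed -i -e 's/“/"/g' -e 's/”/"/g' "$CONF"
```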
<coreycb> zul, jamespage : https://code.launchpad.net/~corey.bryant/glance/2014.1.rc1/+merge/213293
<zul> coreycb: mind adding the bug number (LP: #1299055)
<coreycb> zul, sure np
<zul> ill upload it tonight
<coreycb> zul, pushed again
<zul> thanks ill take a look
<coreycb> zul, thanks!
<thumper> smoser: ping
<thumper> smoser: nm
<Cygnus-X1> Anybody else having a problem with libreoffice segfaulting?
<Cygnus-X1> Sorry, wrong channel
#ubuntu-server 2014-04-02
<zul> jamespage/coreycb: i switched over glance to use the milestone-proposed git branch
<Havenstance_> question
<sarnold> answer
<Havenstance_> Installed ubuntu 13.10 server, during install i had DHCP and the internet worked, now after install I have no DHCP
<Havenstance_> any ideas what might be wrong? I checked /etc/network/interfaces and eth0 is listed there as dhcp
<Havenstance_> but when I run a sudo service networking restart I get networking stop/waiting
<Havenstance_> never goes beyond it, router is giving out DHCP Addresses fine as I'm talking to you on it now
<sarnold> Havenstance_: yeah don't run "service networking restart", that busts dbus badly ...
<Havenstance_> ok, how would I get it to get a DHCP, that's what I don't understand it had one during install
<sarnold> Havenstance_: ifdown eth0 ; ifup eth0 ;  is the better way to cycle networking
<Havenstance_> but once install finished and it rebooted it gets nothing
<Havenstance_> ok
<Havenstance_> its sending DHCP Discover on 255.255.255.255
<Havenstance_> my netmask should be 255.255.255.0
<sarnold> Havenstance_: check logs, there might be something there, /var/log/syslog, /var/log/upstart/network-interface*
<sarnold> Havenstance_: dhcp discover is sent using a link-local broadcast packet; 255.255.255.255 is correct
<Havenstance_> okay :)
<sarnold> nice paranoia :)
<Havenstance_> well, i understand enough about networking but in linux it seems to be enough to get me in trouble lol
<Havenstance_> Can't a guy just shitcan windows, that's all I want to do :)
<Havenstance_> give bill gates some sign language
<sarnold> heh, it took me a few years to transition completely away from windows
<Havenstance_> i've been running ubuntu on and off for ever now
<Havenstance_> I still have disks from 5.04 and the such, when they used to give away the CDs to those of us unlucky people who had Dial Up :)
<Havenstance_> 4.10 too apparently :)
<Havenstance_> here's something in syslog idk if it means anything or not but it says -- apr 1 22:03:43 uss-enterprise rsyslogd-2039 could not open output pipe '/dev,console'
<sarnold> /dev,console ?? what an odd mistake..
<sarnold> Havenstance_: time for me to bail, good luck :) have fun
<Havenstance_> dhcp request of 192.168.1.104 on eth0 to 255.255.255.255 port 67
<Havenstance_> dhcp offer of 192.168.1.104 from 192.168.1.1
<Havenstance_> dhcpack of 192.168.1.104 from 192.168.1.1
<Havenstance_> bound to 192.168.1.104 -- renewal in 33966 seconds
<Havenstance_> but ifconfig still shows no dhcp addr
<jrwren> ubuntu systemd in the future will already benefit : https://plus.google.com/+TomGundersen/posts/eztZWbwmxM8
<koolhead17> hello world11
<ubunty_> Why do I get this message http://paste.ubuntu.com/7192789/  What could be the issue? http://paste.ubuntu.com/7192787/
<ubunty_> why do I keep getting "sudo: unable to resolve host (none)" during my PXE boot installation?
<ubunty_> It will complete the installation but with no internet access which I suspect is caused by this issue
<TJ-> ubunty_: Sounds like there is no DNS resolver configured
<ubunty_> How would I go about fixing this issue? Or I mean how would I get to the point where I am configuring the dns resolver?

<sarnold> ubunty_: does your /etc/hosts look sane? does your /etc/nsswitch.conf look sane?
<TJ-> add an entry in /etc/resolv.conf
<ubunty_> ok ill try that
<TJ-> ubunty_: If you did a PXE boot, the interface will be inherited from the kernel, and so you need to take care of such things
<ubunty_> and if it is inherited from the kernel how can I make those changes if needed?
<sarnold> ubunty_: yikes are you still doing this?? http://paste.ubuntu.com/7192787/
<ubunty_> nope got that worked out
<sarnold> ubunty_: after CIDR was introduced back in the 90s, I think support for non-CIDR style netmasks has long since atrophied; I wouldn't expect "subnet-mask 255.255.0.255;" to work all that often
<sarnold> oh good
<ubunty_> resolv.conf looks like this http://paste.ubuntu.com/7192972/
<TJ-> ubunty_: best to fix the typo
<sarnold> and there's no point in searching hsd1.tx.comcast.net, you'll never look up hosts under that domain..
<ubunty_> oops but it was a copy paste typo not on my actual resolv.conf file
<ubunty_> google.com for the domain?
<sarnold> none at all should work fine
<ubunty_> so take out domain and search?
<TJ-> If you're doing PXE boot, that infers a local BOOTP/DHCP server, is there not also a local network DNS server?
<ubunty_> yes my modem is giving out those DNS
<ubunty_> thats what it says on my dhcpclient.leases
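For the "sudo: unable to resolve host (none)" part of the problem: sudo looks up the machine's own hostname, so /etc/hosts needs an entry for it. A hedged sketch with a placeholder hostname ("myhost" — compare with the output of `hostname`), written to /tmp rather than the real file:

```shell
# A sane minimal /etc/hosts pairs 127.0.1.1 with the machine's hostname:
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1   localhost
127.0.1.1   myhost
EOF
# The local hostname should resolve from this file:
grep -w myhost /tmp/hosts.sample
```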
<jamespage> zul, Daviey: that nova test failure only happens in virtualized PPA builds
<jamespage> as found in openstack-ubuntu-testing and icehouse-staging for the CA
 * jamespage sighs
<allaga_> hey :)
<lordievader> Good morning.
<mischief> hello
<mischief> i'm trying to boot ubuntu 14.04 LTS server from a hard drive
<mischief> i dd the iso to the disk and reboot, and now the installer can't find the cd image.
<mischief> the server has a LSI Logic Fusion-MPT SAS card to which i believe the disk is attached to, but i can't find any kernel module for this in the installer
<cfhowlett> mischief until release, 14.04 support = #ubuntu+1
<mischief> oh, it's not released until the 17th :|
<mischief> well, i have a feeling 12.04 would result in the same issue, but i can try it too
<mischief> cfhowlett: is there a standard way to get raid drivers at install time?
<cfhowlett> mischief oh, my.  Never done that and so I don't know.  sorry.
<mischief> poop
<mischief> i really don't want to have to file a ticket at my dedicated server provider :<
<cfhowlett> mischief stay in channel and ask.  someone else will know.
<mischief> i am getting 13.10 server image now, to try instead
<mischief> but i think it will end up the same as 14.04
<mischief> same problem on 13.10 ;_;
<zetheroo> Is there a GUI for OpenIPMI?
<mischief> zetheroo: i wasn't able to find one
<ivoks> gui for ipmi?
<ivoks> why would anyone want that? :)
<mischief> so they dont have to use the stupid java clients ;)
<ivoks> isn't java client a gui?
<mischief> yes
<mischief> i'd rather a native client than java
<ivoks> why do you need GUI access to the server?
<ivoks> i mean, what's wrong with SOL?
<mischief> can you mount iso with that
<ivoks> you can't do ISO with SOL, true
<mischief> sorry if i sound dumb, it's my first day
<ivoks> but PXE is much faster than ISO anyway
<mischief> i dont have pxe on my host
<mischief> i mean, pxe exists, but my host doesn't have anything to boot to
<ivoks> let's go from start
<ivoks> you have a server isolated on the network to which you want to install ubuntu 12.04?
<mischief> uh
<mischief> yes?
<mischief> i'm not sure what you mean by isolated on the network.
<ivoks> PXE is network boot
<mischief> right
<ivoks> usually, datacenters have some kind of network installation setup
<ivoks> so that people don't walk around with CDs
<ivoks> so when i say isolated, i mean network without such infrastructure
<mischief> well i tried to pxe boot, but the tftp server makes no offers
<mischief> dhcp works
<ivoks> ok, then you probably don't have tftp/pxe setup on your network
<mischief> yea
<mischief> so i tried to mount an iso with the IPMI shit on my dedi
<mischief> some 'MEGARAC Aster' by AMI on a dell poweredge
<mischief> the kvm console it provides is a really poor java program, and i have openjdk on debian
<mischief> most of it works, *except* mounting an iso
<mischief> so i tried to write the ubuntu server iso to the second disk, and directly boot to that
<mischief> it boots, it scans for the cd image.. and it can't find it.
<ivoks> dell uses DRAC, so you should be able to open web browser and go to DRAC's IP
<mischief> and why? because the hard drive is on a fucking lsi fusion-mpt sas card and the ubuntu iso has no mptsas driver
<ivoks> well, watch your language
<ivoks> ubuntu iso does have mptsas driver
<ivoks> but it's not part of the kickoff installer
<ivoks> it gets added during discovery process
<mischief> sorry, i'm just very frustrated. i've been trying to get openstack running on this dedicated server since last week
<mischief> the web ui of the management console looks like -> http://blog.milford.io/wp-content/uploads/2012/03/megarac2.png
<ivoks> yeah... it's ugly, but you have to do it that way
<ivoks> at least to kick off installer
<mischief> what way?
<ivoks> mount the iso within the web ui and boot from cd
<mischief> there is no way to do it from the web ui!
<mischief> and it's full of bugs anyway
<mischief> the web ui runs on https, but frequently tries to redirect me to a url like https://1.2.3.4:80/...
<mischief> which doesn't work obviously
<ivoks> and you say this is dell poweredge?
<mischief> one moment and i can run dmidecode..
<ivoks> (that's not what dell's drac ui looks like)
<ivoks> http://wiki.hetzner.de/index.php/Datei:Idrac_logon_en.jpg
<ivoks> that's how it looks like ^
<mischief> it's definitely a dell system, albeit probably an older one
<mischief> now i can't even run the java applet, great
<ivoks> there are also drac cli tools
<ivoks> but i'm not sure they can handle isos
<ivoks> Attaching, Auto-Attaching, and Detaching Virtual Media using RACADM
<ivoks> racadm config -g cfgRacVirtual -o cfgVirMediaAttached 1
<zetheroo> so is the short answer that there is none?
<ivoks> http://support.dell.com/support/systemsinfo/document.aspx?~file=/software/smdrac3/drac5/145/en/ug/racugc1b.htm
<mischief> it's a dell poweredge c6100
<ivoks> ah, this doesn't have drac
<ivoks> just ipmi
<mischief> and is apparently incapable of mounting ISO's
<mischief> even though a virtual cd rom drive appears
<mischief> so there's really no way for me to reinstall ubuntu, unless i either ask the datacenter to do it, or build my own iso with the mptsas driver in the kernel
<ivoks> http://www.iptp.net/en/support_ipmi.php
<ivoks> looks like yours
<zetheroo> I have been using IPMIview from Supermicro for our Supermicro servers - but this time I am dealing with a HP server which is not being seen by IPMI view ...
<mischief> it's not the same
<mischief> this is AMI ipmi, not SuperMicro
<ivoks> zetheroo: hp uses ilo; not every ilo is ipmi compatible
<ivoks> mischief: look at the screenshots
<ivoks> mischief: it's the same, just rebranded
<zetheroo> I have also checked out the IPMI (iLO) of the HP via the browser but there is hardly anything in there - and no KVM viewer
<mischief> well, perhaps so. but my interface does not have such options as 'Virtual Media'
<ivoks> 1) Launch the remote console;
<ivoks> 2) On the top of window select Media->Virtual Media;
<mischief> it doesn't exist in the web ui man
<ivoks> zetheroo: some older ilos are IE only
<zetheroo> ok, so is there any utility (graphical) that can be used to connect to iLO ?
<mischief> http://i.imgur.com/VRrbnU9.png
<mischief> that's the java client with the iso error, next to the web page of the ipmi
<mischief> i think i should just ask the dc for a new server :)
<ivoks> mischief: well, that's your answer... can you redirect usb?
<zetheroo> I could even try a Windows-based on in wine I suppose ...
<mischief> ivoks: nope
<ivoks> mischief: ubuntu iso is cd/hdd hybrid; you can boot it as a disk or as a cd
<mischief> i can't redirect anything from the media menu
<mischief> cdrom, iso, usb, floppy
<ivoks> bad luck
<mischief> and.. i thought i made this clear. i did write the ubuntu server iso to the hard drive and try to boot that
<mischief> but the installer can't load itself from the cd image, because it can't read the disk, because there is no mptsas module in the installer's kernel
<mischief> bit of a chicken and egg problem
<ivoks> mischief: correct, it actually tries to read cdrom to fetch that module
<ivoks> if you can dd iso to a usb stick, that would solve the problem
<mischief> what usb stick :-)
<mischief> this server is 200 miles away
<ivoks> well, then you are managing a remote server that has pretty poor remote management capabilities
<ivoks> it can't boot from a virtual media
<ivoks> and you don't have pxe
<mischief> i've come to realize that
<ivoks> how about...
<ivoks> net boot iso
<ivoks> hm...
<ivoks> at what stage does installer fail?
<mischief> it's a possibility, if it can do dhcp and then load the mptsas module from the 'net
<mischief> ivoks: scanning for and mounting cdrom
<mischief> right after picking the language/keyboard pretty much
<ivoks> you can try net boot iso
<ivoks> i'm not 100% sure it will work
<mischief> might as well
<ivoks> http://www.howtoforge.com/boot-linux-over-http-with-netboot.me
<ivoks> that might be an option too
<ivoks> doh
<ivoks> latest ubuntu is 10.04
<mischief> ivoks: nice work man
<mischief> netboot is going
<mischief> now i just ask providence to help me install openstack without too many headaches
<mischief> ivoks: thanks again, saved me a lot of time. i would have either had to wait on the datacenter guys or build my own installer image with the driver, if it had not been for the netboot image
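For anyone hitting the same chicken-and-egg later: the netboot ("mini") image boots a tiny kernel and fetches the rest of the installer, including storage drivers such as mptsas, over the network. A sketch, where the URL path (release/arch) and the target disk are assumptions to adjust:

```shell
# Fetch the netboot installer image (trusty/amd64 path is an assumption):
wget http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/current/images/netboot/mini.iso
# Write it to the spare disk -- CHECK the device name first, dd is destructive:
sudo dd if=mini.iso of=/dev/sdb bs=4M
sync
```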
<ivoks> mischief: np
<zul> jamespage: glance rc1 still waiting to be accepted
<jamespage> zul, did you see my ping re nova in virt-ppa's
<zul> jamespage:  i did...not good
<jamespage> zul, still need to decide what todo about db encoding
<jamespage> zul, the tests must have passed in the lab
<zul> jamespage:  they did
<jamespage> otherwise it does not get to the PPA
<mischief> i'm installing openstack on trusty ^.^
<jamespage> mischief, good!
<jamespage> mischief, things to watch for - db table encoding
<jamespage> mischief, if you are deploying neutron you have to configure neutron to talk to nova for nic plugging notifications
<mischief> jamespage: any link to tips on that?
<mischief> jamespage: re table encoding - a bug got filed that has the fix
<mischief> https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1300814
<uvirtbot> Launchpad bug 1300814 in keystone "Tables "migrate_version" have non utf8 collation" [Undecided,New]
<jamespage> mischief, kinda
<mischief> i'm setting up nova as we speak
<mischief> so neutron is next :^)
<jamespage> mischief, https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L295
<jamespage> zul: do we have an upstream bug for the keystone utf-8 break?
<zul> jamespage:  gimme a sec
<zul> jamespage:  affects heat as well https://bugs.launchpad.net/oslo/+bug/1301036
<uvirtbot> Launchpad bug 1301036 in oslo "openstack.common.db.sqlalchemy.migration utf8 table check issue on initial migration" [Critical,In progress]
<zul> jamespage:  im going to test that patch in the openstack-ci lab and re deploy using the charms
<jamespage> zul, OK
<zul> jamespage:  ok we are just waiting for swift rc1 now
<jamespage> zul, ack
<jamespage> zul, I think we might have a bug in the image snapshotting process
<jamespage> 2014-04-02 13:33:59.067 32697 INFO nova.virt.libvirt.driver [req-c05e37f9-1d3f-49ad-a6a9-57b6d690979e fdb9cd48f1804034a98aebd2918a9bdc d12b5ca2e2ca4329ac3b232052bd6a5e] [instance: cbe3d0c6-ee35-450e-9cbe-614108f13d1a] Snapshot extracted, beginning image upload
<jamespage> 2014-04-02 13:33:59.297 32697 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Requested operation is not valid: No active operation on device: drive-virtio-disk0
<zul> uh? is there a traceback?
<jamespage> zul, yes
<jamespage> zul, http://paste.ubuntu.com/7194270/
<jamespage> zul, wanna bug?
<zul> jamespage:  yes please
<zul> jamespage:  interesting
<zul> jamespage:  anything in the /var/log/libvirt ?
<zul> jamespage:  trusty or CA?
<jamespage> zul, trusty
<jamespage> zul, https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1301393
<uvirtbot> Launchpad bug 1301393 in nova "Live image snapshotting failure on icehouse" [Undecided,New]
<Haven|Work> TJ- could I pick your brain for a moment?
<jamespage> zul, nothing in the libvirt logs
<zul> shazbut
<zul> im just trying to reproduce it now
<jamespage> zul: adam_g was hankering after new libvirt - wonder whether he's seen anything similar
<zul> jamespage:  like libvirt 1.2.3? or just an ubuntu cloud archive update
<jamespage> just ca
<zul> ah last time we touched it was for an libxl update...hallyn is doing a qemu 2.0/libvirt update for it though
<hallyn> but not just yet
<tomixxx5> what does the entry "*" in the gateway column mean when i call "route"?
<zul> jamespage:  i havent been able to reproduce it
<jamespage> zul, hhm
<zul> granted i tried devstack but the tests ran fine for me
<ivoks> damn
<ivoks> i have to kill highlight on 'ante' :)
<zul> ivoks: try highlighting chuck
<zul> as in "im going to chuck this thing in here"
<ivoks> wanted, granted...
<jamespage> zul: bug 1301154
<uvirtbot> Launchpad bug 1301154 in python-openstackclient "python-novaclient needs to bump the epoch like the Debian package" [High,Confirmed] https://launchpad.net/bugs/1301154
<jamespage> opinion?
<jamespage> might as well make it compat
<zul> jamespage:  why not
<seaninryan> quit
<jamespage> zul: added some libvirtd.log to bug 1301393
<uvirtbot> Launchpad bug 1301393 in nova "Live image snapshotting failure on icehouse" [Undecided,New] https://launchpad.net/bugs/1301393
<jamespage> zul, ah
<zul> hallyn:  ^^^
<jamespage> I might see the issue
<jamespage> zul, root disk filling up
<zul> ah...:)
<hallyn> that's bug 1301393 you're talking about, jamespage ?
<uvirtbot> Launchpad bug 1301393 in nova "Live image snapshotting failure on icehouse" [Undecided,New] https://launchpad.net/bugs/1301393
<jamespage> hallyn, yes
<hallyn> k
<jamespage> hallyn, zul: I've bumped the root-disk size for m1.medium in serverstack
<zul> jamespage:  cool
<zul> jamespage:  i was using /mnt for some devtsack testing
<hallyn> zul: i'm trying to migrate a vm from a saucy laptop to a trusty one.  have you done that, and had success?
<hallyn> zul: seems to hang on sh -c if nc -q 2>&1 | grep "requires an argument" >/dev/null 2>&1
<patdk-wk> I have done several precise -> trusty, without issue
<hallyn> patdk-wk: live migrations?
<patdk-wk> oh, migrate
<hallyn> those are supposed to be impossible :)
<patdk-wk> I read it as, migrate version :)
<zul> hallyn:  i havent
<jamespage> zul, hallyn: looks OK now - marked invalid
<hallyn> zul: ok.  hm.
<adam_g> jamespage, i haven't hit that but i needed the newer libvirt for something else
<adam_g> jamespage, you might be interested in https://review.openstack.org/#/c/74889/  someone is playing with the icehouse pocket and beating at it in the gate
<tsilenzio> hello
<tsilenzio> I have an ubuntu-server (13.10) box running nginx, php-fpm, mysql and postgres, etc.
<tsilenzio> its 8 cores, 20gb, etc. its a local development box that 4 developers use
<tsilenzio> right now when I do free -m as a command I get (Mem:         19070 total      18806 used       264 free         0 shared       250 buffers     16524 cached)
<tsilenzio> is it normal to be so low on free memory? or do i have a memory leak? :s
<Pici> you have 16GB cached. thats fine.
<tsilenzio> ah alright, thought i had a memory leak
<Pici> http://www.linuxatemyram.com/
<tsilenzio> why do i have some 6 bytes or 6kb of swap used? :s jw
<patdk-wk> looks like you have a harddrive leak
<patdk-wk> leaking all over your cached ram
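The linuxatemyram point in one line: memory used for buffers and cache is reclaimable on demand, so what is really available is roughly free + buffers + cached. A sketch over the numbers tsilenzio pasted (flattened into a sample line here):

```shell
# free -m output flattened to one sample line (numbers from above):
cat > /tmp/free.sample <<'EOF'
Mem: 19070 total 18806 used 264 free 0 shared 250 buffers 16524 cached
EOF
# free + buffers + cached = what is really available to applications:
awk '{ print "really available (MB):", $6 + $10 + $12 }' /tmp/free.sample
```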
<faiss> hi, how to rename p3p2 into eth0 on saucy?
<sarnold> faiss: check out /etc/udev/rules.d/70-persistent-net.rules
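A sketch of the kind of entry that file contains: each rule pins a NIC, matched by its MAC address, to a name. To rename p3p2, copy its rule's own MAC and change only NAME (the MAC below is a placeholder, and the file is written to /tmp here rather than /etc/udev/rules.d):

```shell
# Illustrative 70-persistent-net.rules entry (MAC address is a placeholder):
cat > /tmp/70-persistent-net.rules.sample <<'EOF'
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME="eth0"
EOF
```

A reboot (or a udev trigger) applies the rename.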
<bladernr_> hey, how familar are any of you with tweaking kernel routing?
<jamespage> adam_g, don't worry - it was just me being stupid (too small a root volume)
<bladernr_> there used to be a setting in /proc/sys/net/ maybe in ./ipv4 to force packets to ONLY go out and return via the interface they were supposed to.
<jamespage> adam_g, I'm fairly comfortable with the icehouse pocket
<jamespage> its smoking OK for me right now
<bladernr_> by default, it's possible to have packets go out eth1 and come in eth0 if both are on the same network
<bladernr_> and it's been well over 5 years since I last tried this and I forgot the magic switch.
<sarnold> bladernr_: check Documentation/networking/ip-sysctl.txt for the rp_filter variable -- that looks like the right thing on first glance
<bladernr_> thanks... that's a good start then...
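For reference, the sysctl in question: rp_filter=1 is strict reverse-path filtering (a packet is dropped unless the reply would leave via the interface it arrived on), which stops the out-eth1/in-eth0 asymmetry described above. A sketch of a persistent config fragment, written to /tmp here rather than /etc/sysctl.conf:

```shell
cat > /tmp/rp_filter.sample <<'EOF'
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
EOF
# Apply at runtime with: sudo sysctl -p /tmp/rp_filter.sample
```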
<Quintasan> Hi, is there any way to make sure which way the data is mirrored in RAID1? Currently I have /dev/md1 (RAID1) with only one disk which contains my data, today I have created /dev/md0 which is RAID0 made out of two 1TB disks, can I just add /dev/md0 to the /dev/md1 array as a device and the data won't get overwritten?
<wiredfool> openssl question: Starting a server in a vm at startup, it starts up O(1) seconds after boot and reads 32 bytes from /dev/urandom to seed openssl's random number generator. . In trusty, O(100) seconds after boot,  [   99.783379] random: nonblocking pool is initialized shows up in the logs.
<wiredfool> Is this as sketchy as it sounds?
<sarnold> wiredfool: every process that uses openssl's random number generator should seed it themselves when they need it; that specific message comes from the linux kernel when it has finally collected enough entropy for the pool to be safe to use
<wiredfool> sarnold: so, yes, it's sketchy?
<wiredfool> for instance, starting a ssl-enabled webserver @ system startup
<Diegonat> hi.. With iftop I see a connection to a IP. I'd like to understand what program is connecting to that ip. How can I do it?
<sarnold> wiredfool: it would be best if it could block, but that's not the way the linux /dev/urandom works :(
<sarnold> wiredfool: however, we've got a new feature that you could turn on, if it isn't on already... let me go find a nice link to pollinate :)
<wiredfool> is there an upstart event that could trap the kernel saying urandom is ready?
<sarnold> wiredfool: http://bazaar.launchpad.net/~kirkland/pollen/trunk/view/head:/README
<kirkland> sarnold: right, if you want it to block, it should be pointed at /dev/random, not urandom
<sarnold> wiredfool: one of our users put this together, but I didn't get around to trying to integrate it into our distribution in time for trusty: http://www.av8n.com/cgit/cgit.cgi/init-urandom/
<Diegonat> hi.. With iftop I see a connection to a IP. I'd like to understand what program is connecting to that ip. How can I do it?
<wiredfool> sarnold: that sounds like what I want
<sarnold> Diegonat: netstat -nlp is very useful for this
<sarnold> Diegonat: (sorry for missing it earlier..)
<sarnold> wiredfool: yes :) it sounds nice. granted it isn't perfect because the kernel's api really does suck here. :(
<wiredfool> sarnold: this is what I'm getting from pound, a ssl terminating load balancer: http://pastebin.com/UPU2EFhv
<Diegonat> thank you sarnold
<wiredfool> sarnold: looks like open, read, close.
<Diegonat> mh... sarnold I cannot see anything with your command
<Diegonat> I still see that ip
<sarnold> Diegonat: if you run it as root you'll see which process is doing the sending and receiving
<wiredfool> sarnold: incidentally,  when pound starts, /proc/sys/kernel/random/entropy_available == 0
<Diegonat> ok i see it
<sarnold> wiredfool: yeah, it'd be nice if you could delay that read() command until after the entropy pool has been filled some..
<Diegonat> but i see an IP
<Diegonat> a remote IP
<Diegonat> i want to understand what is connecting to it
<sarnold> wiredfool: if you set pound to wait to run until after pollinate, you can at least be certain that some entropy has been shoved into the kernel -- it won't change the kernel's conservative entropy measurements, but it -will- provide the pool with unique input
<sarnold> Diegonat: which program has the socket open?
<Diegonat> no which program is talking to this remote ip
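A sketch of mapping a remote IP back to the owning process: note that netstat's -l flag limits output to *listening* sockets, so for an established outbound connection drop it and run as root so program names are visible (`sudo netstat -ntp | grep <ip>`). The IP 203.0.113.9 and the sample row below are placeholders:

```shell
# Placeholder netstat -ntp row for an established connection:
cat > /tmp/netstat.sample <<'EOF'
tcp  0  0 192.168.1.5:44123  203.0.113.9:443  ESTABLISHED  1234/wget
EOF
# The last column is PID/program -- "what is talking to that remote IP":
awk '{ print $7 }' /tmp/netstat.sample
```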
<wiredfool> sarnold: I'm wondering if the urandom seed file is early enough in the process that I'm getting the seed from there, it's just not updating the estimate
<sarnold> wiredfool: hrm?
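A hedged sketch of the gating idea discussed above: read the kernel's entropy estimate and only start the SSL service once it looks sane. The 128-bit threshold and the echo standing in for the service start are arbitrary choices, not anything pound or upstart provide:

```shell
# Kernel's current entropy estimate (in bits):
ent=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$ent" -ge 128 ]; then
    echo "pool looks seeded ($ent bits) -- safe to start the SSL service"
else
    echo "only $ent bits -- wait (or feed entropy, e.g. via pollinate) first"
fi
```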
<hallyn> mdeslaur: jdstrand: hey, how do you feel about the lxc option in virt-manager and virt-install?  and in particular about disablign it?
<hallyn> zul: ^
<mdeslaur> hallyn: I don't really have an opinion on it
<hallyn> ideally we'd have a config file where we could leave lxc out by default and have user override it if they really really want
<hallyn> but if ppl are gonna stumble into this while just playing around, this is pretty much exactly what i didn't want (supporting another, untested lxc)
<sarnold> I know I've seen at least one person surprised that the libvirt "lxc" wasn't as safe as the lxc lxc..
<hallyn> i'd be surprised too if i didn't know
<ivoks> hallyn: ping
<ivoks> serious issue with pacemaker in 12.04
<ivoks> https://github.com/ClusterLabs/pacemaker/commit/03f6105592281901cc10550b8ad19af4beb5f72f
<ivoks> marked as low, but really has a big impact
<ivoks> pacemaker node might refuse joining existing cluster cause of this
<ivoks> roaksoax: ^
<roaksoax> jamespage: ^
<roaksoax> ivoks ^
<jdstrand> hallyn: disabling it makes some degree of sense. I have always hoped that libvirt could be made to use our lxc. that said, I thought I saw some patches go by on the list for apparmor integration for their lxc
<jdstrand> hallyn: maybe from suse? not sure where they came from... maybe within the last month?
<ivoks> roaksoax: really? i notified you :)
<zul> hallyn:  i dont have an opinon on it
<zul> jdstrand:  they came from suse
#ubuntu-server 2014-04-03
<hallyn> ivoks: so that's the patch to fix it?  is there an open bug for it?
<hallyn> jdstrand: yeah, i did see the patches go by.  i've not tested them, virt-manager integration hangs with it, and virt-install does not 'create' a container like ppl expect it to
<jdstrand> zul: thanks
<jdstrand> hallyn: yeah, me either
<coreycb> jamespage, zul: nova db migration thus far if you want to take a look:  lp:~corey.bryant/+junk/nova-migration-after  (and for comparison, this has a patch with no modifications to the stable/havana history: lp:~corey.bryant/+junk/nova-migration-before)
<coreycb> upgrade seems to work ok, downgrade not so good
<coreycb> also I've been using this to dump the db: http://paste.ubuntu.com/7196972/
<tapout> how can i compare 2 folders between 2 servers recursively and find the missing files or different files?  anyone know an app?
<david_linhand> hi , how to download from /archive.ubuntu.com ?
<andol> tapout: Something along the lines of rsync --recursive --delete --dry-run --verbose perhaps?
<tapout> andol, that may work eh
<tapout> find . -type f -print0 | xargs -n 1 -P 8 --null md5sum
<tapout> this is what i'm using right now
<tapout> script hourly.0
<tapout> ... and going to use mysql memory tables to delete the same md5's
<tapout> reading manpage of rsync tho
<tapout> cheers
<tapout> wow that worked great
<andol> tapout: Assuming you don't want to trust mtime+size you can also add the (obviously slower) --checksum option.
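tapout's md5sum approach, sketched end-to-end over two sample trees (paths are illustrative): checksum each tree, sort, and let `comm` print the lines unique to one side — those are the files missing or different on the other side, the by-hand version of `rsync --dry-run --checksum`:

```shell
# Build two small sample trees, one file present only on side "a":
mkdir -p /tmp/cmp/a /tmp/cmp/b
echo one > /tmp/cmp/a/same.txt
echo one > /tmp/cmp/b/same.txt
echo two > /tmp/cmp/a/only-in-a.txt
# Checksum each tree (sorted so comm can compare line-by-line):
( cd /tmp/cmp/a && find . -type f -exec md5sum {} + | sort ) > /tmp/cmp/a.sums
( cd /tmp/cmp/b && find . -type f -exec md5sum {} + | sort ) > /tmp/cmp/b.sums
# Lines unique to a.sums = files missing or different on the other side:
comm -23 /tmp/cmp/a.sums /tmp/cmp/b.sums
```

For the two-server case you would generate each .sums file on its own host and copy one across before comparing.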
<invinceable> what's the best way to attach a shell script i have to run at boot? running 12.04LTS server.
<sheptard> rc.local ?
<Corey> If it needs to daemonize look into pleaserun.
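A minimal /etc/rc.local sketch for 12.04 (written to /tmp here; /usr/local/bin/myscript.sh is a placeholder name). Commands run once at the end of boot, the trailing `exit 0` must stay last, and the file must be executable:

```shell
cat > /tmp/rc.local.sample <<'EOF'
#!/bin/sh -e
# rc.local -- executed at the end of each multiuser runlevel.
/usr/local/bin/myscript.sh &
exit 0
EOF
chmod +x /tmp/rc.local.sample
```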
<defuncprocess> I am running ubuntu server, with apache 2.4.7 on a server...  the load time has been slow so i'm playing with configuration... was suggested that if i didn't need it to turn off followsymlinks in the vhost directive which i did and now i have a 403 for the domain.. any ideas? ( i'll gist anything needed )
<lordievader> Good morning.
<Guest52699> Goodmorning
<jamespage> zul, I managed to foobar dh-python on my last merge for the CA
<jamespage> fixed now - all reverse BD's rebackported
<jamespage> zul, that's why glance won't test right now :-(
<bekks> hi
<bekks> someone please can take a look at my ntp.conf - I just cant get it to work so another local server can sync its time with my local ntp server? http://pastebin.com/M7VPsmWK
<bekks> All I get is a "no server suitable for synchronization found" when running ntpdate 192.168.1.11
<jamespage> zul, nova now building in PPA OK _ must have been a broken dependency due my dh-python foobar
<bekks> I dont get that. When removing "restrict 192.168.1.0 mask 255.255.255.0 notrust" from ntp.conf, clients are able to sync with my local ntp server. This is the opposite of what the config says. Any clues on that?
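The behaviour is actually consistent: in ntpd's restrict syntax, `notrust` means "refuse packets that are not cryptographically authenticated", not "don't trust this network" — so plain ntpdate clients are refused until the flag is removed. A hedged sketch of LAN-serving lines without it (addresses taken from the paste, flag set is illustrative), written to /tmp rather than the real ntp.conf:

```shell
cat > /tmp/ntp.conf.sample <<'EOF'
# default: locked down
restrict default kod notrap nomodify nopeer noquery
# LAN clients may query and sync, unauthenticated (no "notrust"):
restrict 192.168.1.0 mask 255.255.255.0 kod notrap nomodify nopeer
EOF
```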
<Cadero> Doesn't ubuntu server come with a preconfigured ntp client?
<Cadero> all our servers are running out of sync of each other
<rbasak> smoser: bug 1281767 is milestoned for 14.04. Is that a problem?
<uvirtbot> Launchpad bug 1281767 in simplestreams "simplestreams by-hash storage" [Medium,Confirmed] https://launchpad.net/bugs/1281767
<mardraum> Cadero: no, install ntp.
<zul> jamespage:  excelente
<jamespage> zul, I fixed up openstackclient
<jamespage> zul, we won't bump our epoch
<tomixxx6> does gateway have to be in the same subnet as the ip address when i define an interface?
<soren> tomixxx6: YEs.
<tomixxx6> soren: ty
<soren> tomixxx6: The gateway is where packets are sent if there's no other way to reach the destination. If the gateway wasn't on the same subnet, there'd be no way to reach it.
<soren> (Unless you had another route that enabled you to reach the gateway, but if you were doing something exotic like that, I don't think you'd be asking this question)
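soren's rule can be checked mechanically: the gateway is on-link iff (address AND netmask) equals (gateway AND netmask). A pure-shell sketch using tomixxx6's example numbers (addresses and the /24 mask are illustrative):

```shell
# Convert a dotted quad to an integer:
ip2int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
addr=$(ip2int 10.4.128.5)       # host on the 10.4.128.0 subnet
gw=$(ip2int 192.168.0.1)        # proposed gateway
mask=$(ip2int 255.255.255.0)
if [ $(( addr & mask )) -eq $(( gw & mask )) ]; then
    echo "gateway is on-link"
else
    echo "gateway is NOT on this subnet -- unreachable without an extra route"
fi
```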
<zul> jamespage:  sounds good
<tomixxx6> kk, the problem is that the gateway is on another subnet and i must have two different subnets
<soren> tomixxx6: You only need one gateway.
<patdk-wk> next you're going to tell me I only need one gf also
<soren> Actually, you can only have one. (unless you're doing policy routing, but again: If you were doing something like that, you wouldn't be asking these questions)
<tomixxx6> lets say gateway is on subnet 192.168.0.0. how can nodes of subnet 10.4.128.0 use gateway?
<soren> patdk-wk: If you can handle more than one, knock yourself out. You get multiple mothers in law, too, don't forget that.
<tomixxx6> do i need some NAT ?
<patdk-wk> tomixxx6, you don't
<soren> tomixxx6: I saw you ask something over in #openstack, too, so I'm assuming this is all OpenStack stuff?
<tomixxx6> soren: yes
<soren> tomixxx6: OpenStack handles all the NAT stuff for you.
<tomixxx6> soren: kk... but the VMs cannot connect to i-net
<tomixxx6> hypervisor can connect to i-net however
<soren> tomixxx6: You don't have to set it up yourself. If it's not working, you need to look at your OpenStack config, not your base network config.
<tomixxx6> kk
<jdstrand> hallyn, zul: do you guys have any pending libvirt uploads?
<jdstrand> hallyn, zul: we are preparing bug #1298611
<uvirtbot> Launchpad bug 1298611 in lxc "[FFe] apparmor signal and ptrace mediation" [High,In progress] https://launchpad.net/bugs/1298611
<zul> jdstrand:  i dont
<zul> jdstrand:  knock yourself out
<jdstrand> hehe, thanks :)
<jdstrand> this will be a nice improvement for libvirt/openstack
<jamespage> smoser, https://bugs.launchpad.net/cirros/+bug/1301958
<uvirtbot> Launchpad bug 1301958 in cirros "Please use mtu option provided by dhcp" [Undecided,New]
<jamespage> did it in the end
<jamespage> sorry it took me two weeks to write two sentences
<smoser> jamespage, how useful would it be for you to get that ?
<smoser> i'm planning on doing it for 0.4.X.
<smoser> and i can probably provide you with a hacked image that does it.
<smoser> i think
<jamespage> smoser, right now I can't use cirros for certain tests in openstack-on-openstack testing
<jamespage> specifically the neutron ones that actually ssh to the machines and check things
<jamespage> so it slows things down as the standard ubuntu images take a bit longer to boot
<smoser> jamespage, will a hacked version help you?
<jamespage> smoser, it would
<jamespage> I can stuff that in object storage for tests to use
<smoser> can you give me a place to test / see failure?
<jamespage> smoser, yes
<jamespage> zul, pushing rc1 to -updates in the CA - smoked OK
<smoser> its too bad that not every OS can be as good as cirros.
<spidernik84> hi guys, anyone had any luck in preseeding a disk setup with raid+lvm+crypto? from what I understand it's not possible to have crypto+raid
<hallyn> jdstrand: i don't, thanks
<zul> jamespage:  ack
<tomixxx6> hi, i can wget 173.194.116.120 but i cannot wget www.google.at
<tomixxx6> normally, eth1 get's its config by external dhcp, now i have configured eth1 static.
<tomixxx6> dns not working anymore
<tomixxx6> anyone know what to do?
<spidernik84> ubuntu 12.04?
<spidernik84> server?
<tomixxx6> YES
<tomixxx6> ubuntu 12.04.04 lts server
<spidernik84> GOOD :)
<spidernik84> where did you specify the dns?
<spidernik84> in the interfaces file?
<tomixxx6> no, i have not specified dns in interfaces file, i thought to define address, netmask and gateway is sufficient
<spidernik84> those params are needed to reach other networks
<spidernik84> but to map urls (www.google.at) to ips ( 173.194.116.120) you need DNS to work
<spidernik84> soren, https://www.stgraber.org/2012/02/24/dns-in-ubuntu-12-04/
<spidernik84> this is how it's done :)
<spidernik84> briefly
<spidernik84> The DNS configuration for a static interface should go as "dns-nameservers", "dns-search" and "dns-domain" entries added to the interface in /etc/network/interfaces
<spidernik84> copied from the page
<tomixxx6> TY
<spidernik84> once done, "ifup -a"
<spidernik84> or, if it does not work, service networking restart. This is discouraged though
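Put together, a static stanza with the resolver options looks roughly like this (addresses and the search domain are placeholders; written to /tmp here rather than /etc/network/interfaces):

```shell
cat > /tmp/interfaces.sample <<'EOF'
auto eth1
iface eth1 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
    dns-search example.lan
EOF
```

After editing the real file, `ifdown eth1 && ifup eth1` re-reads it and resolvconf picks up the dns-* entries.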
<tomixxx6> sudo reboot ftw!
<tomixxx6> nothing else is working here
<spidernik84> that works as well... :)
<spidernik84> bye, gotta leave
<tomixxx6> cu
<spidernik84> good luck
<tomixxx6> TY!
<spidernik84> np
<tomixxx6> hi, i have the following problem: i can wget for example www.orf.at but i cannot wget www.google.at -> results in "network is unreachable"
<mardraum> wget?
<tomixxx6> yes, wget
<mardraum> can you ping it?
<tomixxx6> yes, i can ping
<andol> tomixxx6: Any different with a wget -4?
<tomixxx6> "wget -4 www.googl.at"?
<andol> Yepp
<tomixxx6> "wget: invalid option -- 4"
<mardraum> "google" even?
<tomixxx6> i have a cirros virtual machine
<mardraum> cirros?
<tomixxx6> a small operating system for cloud computing
<mardraum> oh you are the guy with the broken openstack
<tomixxx6> yeah ^^
<tomixxx6> i'am very close to a working openstack, although
<mardraum> you think?
<andol> tomixxx6: Well, wget is supposed to have a -4 flag, the point being to explicitly test IPv4. One guess being that your system thinks it has IPv6 connectivity, but doesn't really.
<tomixxx6> andol: sounds reasonable
<andol> tomixxx6: That would have explained why www.google.at behaves differently, since it has an AAAA record, while www.orf.at doesn't
<tomixxx6> this would mean i have a working openstack deployment, omg
<tomixxx6> maybe i should try an ubuntu image
<andol> (Also, the ping command is always IPv4, with the ping6 command for icmpv6 pings)
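One way to act on andol's suggestion, assuming a full GNU wget (the busybox wget shipped in cirros lacks the -4 flag, as seen above):

```
wget -4 http://www.google.at/    # force IPv4 only
ping -c 1 www.google.at          # IPv4 ICMP
ping6 -c 1 www.google.at         # IPv6 ICMP; fails if v6 connectivity isn't real
```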
<tomixxx6> andol: wow, this would explain a lot
<tomixxx6> ty!
<tomixxx6> what does "~" mean in a path, like "~/.ssh/authorized_keys"
<tomixxx6> where is this "~" ?
<tomixxx6> ok, got it, it's the home directory of the logged-in user
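A quick way to see what `~` points at: the shell expands an unquoted tilde to the current user's home directory, taken from `$HOME`:

```shell
# "~" expands to the logged-in user's home directory (from $HOME).
home=~                                 # unquoted tilde expands in an assignment
printf '%s\n' "$home"                  # e.g. /home/tomixxx6
printf '%s\n' ~/.ssh/authorized_keys   # e.g. /home/tomixxx6/.ssh/authorized_keys
```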
<parallel21> anyone else experience slow updates?
<jpds> parallel21: That depends on SO many factors.
<parallel21> it no workee
<parallel21> What factors should I be looking at?
<jpds> parallel21: So, which country are you in, which mirror servers are you using, etc.
<parallel21> US
<parallel21> us.archive.ubuntu.com
<jpds> Now you'll have to do a traceroute to us.archive.ubuntu.com.
<parallel21> 15 hops
<parallel21> What am I looking for in the traceroute?
<jpds> Where your network is slow.
<parallel21> Once it hits sterlingnetwork.net, it jumps from 3ms to 73ms
<jpds> I have no problem getting to http://us.archive.ubuntu.com/ .
<parallel21> site loads fine
<parallel21> Just `sudo apt-get update` crawls
<jpds> parallel21: sudo apt-get -o Debug::Acquire::http=true update
<jpds> !ops | guoos, privmsg spamming
<ubottu> guoos, privmsg spamming: Help! Channel emergency! soren, lamont, mathiaz, Pici, Daviey, Tm_T or pmatulis
<parallel21> jpds: running that I get a list of 404's
<parallel21> W: Failed to fetch bzip2:/var/lib/apt/lists/partial/us.archive.ubuntu.com_ubuntu_dists_trusty_universe_binary-i386_Packages  Hash Sum mismatch
<jpds> parallel21: sudo rm -rfv /var/lib/apt/lists/*
<jpds> That'll purge your cache.
<parallel21> Then run that apt-get -o Debug::Acquire::http=true update again?
<jpds> parallel21: Yep.
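jpds's recovery sequence for the "Hash Sum mismatch", collected in one place (the `rm` only empties apt's downloaded package lists, which `update` then rebuilds):

```
sudo rm -rfv /var/lib/apt/lists/*
sudo apt-get -o Debug::Acquire::http=true update
```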
<parallel21> It's still updating at like 93kB/s
<jpds> parallel21: That's your network.
<parallel21> don't think so mate
<parallel21> I can download an image at near 20mB/s
<parallel21> Performing updates on centos machines screams
<parallel21> I've just had problems with my ubuntu machines
<jpds> parallel21: Tried another US mirror like mirror.anl.gov ?
<parallel21> I'll try that
<parallel21> Success! Thanks jpds
<jpds> s/your network/somewhere in the network between you and us.archive/
<jpds> :-)
<JanC> is us.archive still in the UK?
<jpds> JanC: No.
<JanC> I remember it used to be  :)
<jpds> It did.
<bekks> hi
<bekks> I'm stuck with kickstarting an Ubuntu installation. The installation itself goes fine, but grub installation fails with "executing 'grub-install (hd0)' failed." This is the relevant part of my kickstart file: http://pastebin.com/2cshnPPd
<bekks> Maybe someone has a clue?
<parallel21> zermobr yes
<parallel21> should read zerombr
<parallel21> I don't think that's your problem though
<bekks> Hmm, no it isnt. I fixed that typo, but I get the same grub install error.
<zul> adam_g: hey we are still on track for tomorrow arent we?
<adam_g> zul, for what? 2013.2.3?
<zul> yeah
<adam_g> zul, its released now. pushing the tarballs to LP
<adam_g> zul, tarballs should be up at tarballs.openstack.org
<Guegs> I am planning on building a home server with 8x3TiB drives in raidz2. Would 8 GiB of ram be enough? This is the RAM I would be using (already have it). http://www.ebay.ca/itm/Patriot-Memory-Sector5-8gb-2-x-4gb-DDR3-1333-Dual-RAM-DDR3-10666-PGV38G1333ELK-/291113083847?pt=US_Memory_RAM_&hash=item43c7b123c7
<parallel21> Are you certain grub was installed on the install process?
<bekks> parallel21: was that for me?
<parallel21> bekks: yeah, sorry
<bekks> parallel21: gonna check it again, please hold on
<bekks> parallel21: I guess grub is installed. I get these error messages on vt4(?) http://picpaste.com/Bildschirmfoto_vom_2014-04-03_21_47_04-peDj0Dbr.png with this particular kickstart file section: http://pastebin.com/YKj3wHwV
<bekks> I dont get where that vg "sda" might be coming from - and why hd0 isnt found.
<TJ-> Where's it getting "(hd0)" from, something internal, or something you've specified? Because that should be the system block device, which by the looks of it should be "/dev/sda"
<TJ-> The VG detection is because the lvm metadata hasn't been wiped
<bekks> hd0 / sda is the only harddisk attached. I'm gonna wipe it and try again, to get a somewhat clearer error message.
<bekks> This is the log with a wiped harddisk: http://picpaste.com/pic2-OZILkQPn.png
<bekks> the last os-prober line shows sda2 as swap, so I dont get why it doesnt find hd0 - which should be / is the one and only disk attached (besides the cdrom, which is detected as sr0).
<Guegs> Is ZFS stable running on Ubuntu?
<sarnold> Guegs: i intend to run it myself 'eventually', in the last few months of #zfsonlinux it seems that using zfs solely for data works well, but using it for root doesn't work as well
<mwhudson> is xen 4.4 going to be in trusty?
<Guegs> I would just be using it as a storage array sarnold. Thanks for that chan though. Didn't know it existed.
<sarnold> mwhudson: yes: https://launchpad.net/ubuntu/+source/xen
<sarnold> Guegs: it's good stuff, those guys are like magic with a 'zfs status' pastebinned output :)
<mwhudson> sarnold: oh cool
<mwhudson> sarnold: blargh, no arm64 builds?
<sarnold> mwhudson: eek. no idea there
<mwhudson> i think it should work
<mwhudson> but i'd better become informed first :)
<mwhudson> sarnold: do you know anything about xen packaging?
<mwhudson> is it one of these projects that has a debian directory upstream that is stupid?
<mwhudson> hm
<mwhudson> sort of
<adam_g> zul, okay its officially out
<sarnold> mwhudson: sorry, I don't know much about it beyond that smb takes care of xen for reasons I don't think I'll ever understand :) hehe
<mwhudson> heh heh
#ubuntu-server 2014-04-04
<WJB> can anyone tell me howto burn an iso of ubuntu on a mac so it is readable by mac or is that not possible?
<cfhowlett> !mac
<ubottu> For help on installing and using Ubuntu on a Mac, see: https://wiki.ubuntu.com/MactelSupportTeam/CommunityHelpPages
<metasansana> what is the universe-updates source for if they are not updated?
<metasansana> ^supported with updates
<law> evening all
<law> I have a most interesting problem, my Ubuntu server (12.04) forgot how to LVM during the grub boot process
<law> once grub drops to shell I'm able to see my /dev/mapper/vol* devices just fine
<law> I can mount them, e.g. to /root/ and whatnot
<law> but I cannot for the life of me get the system to boot normally
<law>  /boot is /dev/sda1, the rest of the partitions (/, /var, etc) are LVM
<law> can anyone help me?
<sbattey> My server's mail log has hundreds of lines that read "Apr  3 09:09:01 battey sendmail[10786]: s33991bS010786: from=root, size=831, class=0, nrcpts=1, msgid=<201404030909.s33991bS010786@battey.me>, relay=root@localhost
<sbattey> Apr  3 09:09:01 battey sm-mta[10787]: s33991iA010787: from=<root@battey.me>, size=1065, class=0, nrcpts=1, msgid=<201404030909.s33991bS010786@battey.me>, proto=ESMTP, daemon=MTA-v4, relay=localhost.localdomain [127.0.0.1]" Is this normal?
<sbattey> Anyone know if these lines in the mail log from sendmail are normal? http://pastebin.com/SG8mQP5d
<sheptard> sbattey: likely cron emailing you
<sheptard> sbattey: have you bothered to read the messages?
<sbattey> I don't know where to find them...
<sarnold> sbattey: /var/spool/mail/root ?
<faiss> no way to rename p3p2 on saucy, any advice folks?
<Daviey> hallyn, why are you using machine type 'trusty' in qemu?
<smb> mwhudson, No there is not an arm64 build. Maybe it would work but I have had no hardware or time to get emulation set up to try. infinity did want to play with it but then its known that infinity's time is pretty much not infinite.
<mwhudson> smb: oh does infinity actually sleep?
<mwhudson> i wasn't aware
<mwhudson> and yes, makes sense about not having time to evaluate
<smb> mwhudson, Rarely but even when he does not there seems to be an infinite number of other things to do. :)
<lordievader> Good morning.
<jamespage> zul, dealing with swift rc1 now
<jamespage> also tidying some bits
<sgo11> hi, I modified /etc/hostname and /etc/hosts. the command "hostname" returns the correct hostname. but "hostname -f" will always return "localhost". why?
<bekks> did you run "sudo hostname newhostname" yet?
<iKb> need restart the d
<sgo11> bekks, I reboot the machine
<sgo11> the machine is rebooted.
<sgo11> I gave "<ip> <domain> <hostname>" in /etc/hosts. after "127.0.0.1 localhost". but whatever I do, "hostname -f" always returns localhost. this is 13.10 ubuntu server.
<sgo11> I mean "<ip> <FQDN> <hostname>"
<iKb> in hosts
<iKb> 192.168.0.100   server1.example.com     server1
<iKb> ah ok
<sgo11> iKb, yeah, that is what I did.
<sgo11> but, hostname -f always returns localhost. I am confused.
<iKb> in /etc/hosts?
<sgo11> iKb, yeah.
<iKb> and in hostname?
<sgo11> iKb, /etc/hostname just has a hostname.
<iKb> echo server1.example.com > /etc/hostname
<iKb> do it
<iKb> than /etc/init.d/hostname restart
<sgo11> iKb, why? I remembered I should put hostname in /etc/hostname instead of FQDN.
<sgo11> that is what I always did before in old ubuntu-server release. and it worked.
<sgo11> I figured out why.
<sgo11> in /etc/hosts, the first line should be "127.0.0.1 localhost" instead of "127.0.0.1 localhost <hostname>". I don't know who or which program adds my hostname to that line. After removing it, everything works. I don't need to put FQDN in /etc/hostname.
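What sgo11 found matches how the resolver builds the FQDN: `hostname -f` looks up the short hostname and returns the first name on the matching /etc/hosts line as the canonical name, so a stray hostname on the "127.0.0.1 localhost" line makes "localhost" win. A rough sketch of that lookup (the sample line is hypothetical):

```shell
# The first name after the address is the canonical name -- roughly what
# `hostname -f` returns when the lookup is answered from /etc/hosts.
line="192.168.0.100   server1.example.com   server1"
fqdn=$(printf '%s\n' "$line" | awk '{print $2}')
printf '%s\n' "$fqdn"
```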
<Guest81700> hey guys, i have a few Qs about securing postgres on ubuntu 12.04
<ccapndave> Hey all, I am trying to install Ubuntu Server 14.04 from a usb memory stick, but when it gets to the partition page its not seeing either of the two installed drives, only the memory stick.  FreeBSD sees them fine.  Does anyone have any pointers?
<bekks> OK guys. So who broke the kickstart installation of 14.04? The _exact same_ kickstart file works fine with 12.04, while 14.04 bails out being unable to install grub2 on sda.
<bekks> Can someone please clue me in on how to submit a show stopper bug on this?
<jamespage> bekks, ubuntu-bug
<jamespage> zul, swift uploaded
<spidernik84> hi, anyone ever preseeded a software raid + lvm + crypto setup? Partman-auto does not support all of them together, apparently.
<xnox> spidernik84: you can have two out of the three, not all three. e.g. raid+lvm or lvm +crypto.
<spidernik84> xnox, I feared so :(
<spidernik84> any workaround you can think of?
<xnox> spidernik84: but if you provision a custom script to setup raid for you (e.g. partman/early-command) to do raid, then you can preseed to do lvm+crypto on top of that.
<zul> jamespage:  awesome
<spidernik84> xnox, I had that idea in mind. Nice to know it's possible :)
<spidernik84> thanks
<xnox> spidernik84: or it's all shell-scripts so patches to partman-auto-lvm, partman-auto-raid, partman-auto-crypto are welcome to support the triple combo.
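A sketch of xnox's workaround as a preseed fragment. The device names are placeholders, and the mdadm udeb may need loading first; partman/early_command runs before partman starts, so the md device already exists when the lvm+crypto recipe is applied:

```
# hypothetical members /dev/sda1 and /dev/sdb1; adjust to your disks
d-i partman/early_command string \
    anna-install mdadm-udeb ; \
    mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```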
<spidernik84> ah nice :) are you a maintainer of partman, by chance?
<bekks> jamespage: Do you have an idea which package could be the one with the bug?
<zul> jamespage/coreycb: LP: #1302575 for 2013.2.3 Openstack SRU
<jamespage> bug 1302575
<uvirtbot> Launchpad bug 1302575 in nova "Meta bug for tracking Openstack 2013.2.3 Stable Update" [Undecided,New] https://launchpad.net/bugs/1302575
<jamespage> zul, I love having everything happen at the same time :-(
<jamespage> zul, I was meant to release the grizzly updates yesterday - but it slipped my mind/time
<jamespage> zul, don't want todo it today so I'll push on monday
<jamespage> dosaboy, ^^
<zul> jamespage:  ack
<hallyn> Daviey: same reason why redhat uses 'rhel'.  If we had done that at precise, then we could now distinguish between people migrating vms from precise's qemu-kvm or from saucy's qemu
<hallyn> Daviey: they have different incompatible machine settings, so we cannot at the same time have working migration from precise and from saucy
<jrwren> anyone ever have trouble removing full lvm snapshots?  lvremove says the volume is in use, but it is not.
<coreycb> zul, jamespage : https://code.launchpad.net/~corey.bryant/horizon/2013.2.3/+merge/214280
<coreycb> zul, jamespage : https://code.launchpad.net/~corey.bryant/keystone/2013.2.3/+merge/214281
<zul> jamespage:  https://code.launchpad.net/~zulcss/nova/2013.2.3/+merge/214289
<zul> https://code.launchpad.net/~zulcss/glance/2013.2.3/+merge/214290
<zul> https://code.launchpad.net/~zulcss/neutron/2013.2.3/+merge/214291
<zul> https://code.launchpad.net/~zulcss/cinder/2013.2.3/+merge/214292
<coreycb> zul, jamespage : https://code.launchpad.net/~corey.bryant/heat/2013.2.3/+merge/214301
<jamespage> zul, swift doing something odd - permissions on rebalanced rings are limited.
<zul> jamespage:  ?
<jamespage> zul, yeah - the dep-8 tests is failing as /etc/swift/*.gz is 0500 perms
<jamespage> owned by root
<zul> jamespage:  oh ok...ill take a look
<jamespage> zul, I'm looking as well
<coreycb> zul, jamespage : https://code.launchpad.net/~corey.bryant/ceilometer/2013.2.3/+merge/214302
<jamespage> ownership by root is expected - it looks like rebalance is doing odd stuff
<jamespage> zul, this is icehouse btw - not havana :-)
<hehehe> heya
<hehehe> :D
<hehehe> how you migrate 1 remote server to another?
<jamespage> zul, got it:
<jamespage> +        tempf = NamedTemporaryFile(dir=".", prefix=filename, delete=False)
<jamespage> urgh
<jamespage> NamedTemporaryFile is always created with mode 0600
<jamespage> *quote
<jamespage> zul, just trying to decide whether that is actually a bug or not
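The behaviour jamespage describes is easy to confirm: Python's NamedTemporaryFile goes through mkstemp, which creates the file owner-only (mode 0600) under any normal umask. A quick check, assuming a stock python3:

```shell
# NamedTemporaryFile -> mkstemp -> file is created mode 0600 (owner-only),
# which is why the rebalanced ring .gz files end up unreadable to other users.
perm=$(python3 - <<'EOF'
import os, tempfile
t = tempfile.NamedTemporaryFile(delete=False)
t.close()
print(oct(os.stat(t.name).st_mode & 0o777))
os.unlink(t.name)
EOF
)
printf '%s\n' "$perm"
```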
<hehehe> emm
<hehehe> so
<hehehe> how do you do it?
<jamespage> hehehe, try rsync over ssh
<jamespage> zul, any thoughts?
<hehehe> jamespage but it will rewrite ssh files eventually if I copy everything
<hehehe> hmm
<jamespage> I might raise it upstream for an option
<hehehe> but it might work
<jamespage> hehehe, use it selectively
<jamespage> you can't really just migrate the entirety of one server to another
<hehehe> yes not yet
<hehehe> it could be default webhosting option
<hehehe> to help people :)
<hehehe> I can in theory ask host to write entire server shadow copy on new disk
<hehehe> issue is php5 is owned by root on box 1
<hehehe> hmm so is nginx
<zul> jamespage:  im not sure yet...raise it with upstream if you have to
<jamespage> zul, https://bugs.launchpad.net/ubuntu/+source/swift/+bug/1302700
<uvirtbot> Launchpad bug 1302700 in swift "Use of NamedTemporaryFile creates rings with restricted permissions" [Undecided,New]
<jamespage> zul, I can work around it in the tests and charms but its a change in behaviour that will catch people out
<zul> jamespage:  ack
<jamespage> zul, shall I tag it esp for ttx :-)
<jamespage> zul, well I have done anyway...
<zul> jamespage:  please ;)
<jamespage> zul, ack
<hehehe> is it easy to install some gnome on server?
<hehehe> I heard many people like gnomes
<cfhowlett> hehehe sudo apt-get install gnome
<hehehe> ok easy enough
<hehehe> apt installs all files into /etc?
<hehehe> I want to migrate server with nginx,php5 fpm maria db and wp, so I am thinking perhaps I can save time and copy paste directories
<hehehe> that got those libs
<zul> jamespage:  hey mind reviewing the stable/havana branches?
<jamespage> zul, OK - looking now
<zul> jamespage:  at least i can get it started testing in the lab
<sync0pate> Can anyone in here give me any idea where to start debugging a problem I have with BTSync?
<sync0pate> I'm getting "Don't have permissions to write to the selected folder." despite chmod 777
<jamespage> zul, nova commented
<zul> jamespage:  ack
<Tzunamii> sync0pate: http://blog.bittorrent.com/2013/09/17/sync-hacks-how-to-set-up-bittorrent-sync-on-ubuntu-server-13-04/
<sync0pate> that repo didn't work for me at all Tzunamii
<sync0pate> I had to install from source
<sync0pate> not source sorry
<sync0pate> from the website
<Tzunamii> I'm not talking about the repo, but what permissions you need etc. Get with the program
<sync0pate> well I've seen somewhere suggesting there is more than one version
<sync0pate> btsync and btsync-client or something
<sync0pate> so
<zul> jamespage:  nova fixed
<jamespage> zul, glance +1
<kpettit> Any suggestions for a simple/easy SMTP email server?  Need to setup an email server so my various phone systems, printers, faxes, etc can use some generic accounts to send email.  Used to use Google, CBeyond or whoever but just not reliable.
<rbasak> jamespage: could I have a quick sanity check on bug 1301919 please? I propose Breaks/Replaces: open-vm-toolbox (<< 2:0~). Does that choice of version seem right to you?
<uvirtbot> Launchpad bug 1301919 in open-vm-tools "package open-vm-tools 2:9.4.0-1280544-5ubuntu5 failed to install/upgrade: trying to overwrite '/etc/xdg/autostart/vmware-user.desktop', which is also in package open-vm-toolbox 2011.12.20-562307-0ubuntu1" [High,New] https://launchpad.net/bugs/1301919
<Tzunamii> sync0pate: btsync and btsync-common is what you need if you indeed use a server
<jpds> kpettit: postfix.
<kpettit> It would be for sending email only.
<kpettit> jpds: it's pretty easy for doing sending only?  Can it do TLS authentication out of the box or does it need other software to do that?
<rbasak> Or I could look up what version we synced, I suppose.
<jpds> kpettit: That's exactly what postfix is for
<kpettit> perfect.  Thanks.  I'll give it a go.
<jpds> kpettit: Possibly not 'easy' at first, but it's an industry standard.
<kpettit> I had done postfix before but it was for doing whole office email.  And it's been a few years.  Wasn't sure if there was something easier for my type of need
<jamespage> zul, cinder +1
<patdk-wk> kpettit, a nullmailer
<patdk-wk> kpettit, likely you want msmtp
<kpettit> I'm not sure the right term.  But it just needs to send voicemail, fax2email and crap like that.
<Tzunamii> msmtp is quite easy and definitely low weight
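For the send-only relay case being discussed, msmtp needs just one config file. A sketch of ~/.msmtprc, with every host/account value a placeholder; msmtp refuses to read the file if it is group- or world-readable, so chmod it 600:

```
# ~/.msmtprc -- placeholder values throughout
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        ~/.msmtp.log

account        default
host           smtp.example.com
port           587
from           fax@example.com
user           fax@example.com
passwordeval   "cat ~/.msmtp-pass"
```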
<kpettit> I was using ssmtp to connect to Google, CBeyond but they've been annoying to deal with lately.
<jamespage> zul, almost with nova
<kpettit> cool.  I'll check out msmtp now....
<jamespage> rbasak, thinking
<kpettit> Tzunamii: patdk-wk, it looks like msmtp is the same thing as ssmtp.  Which isn't working for me because Google keeps blocking my account if I don't login every X number of days or whatever.
<kpettit> so probably have to go the postfix route I think.  thanks though
<patdk-wk> what do you mean, google?
<zul> jamespage:  arrgh...try now
<patdk-wk> you have two options when doing email, you use a relay server, or you send direct
<kpettit> Google email.  I use Google Apps for email and such.
<patdk-wk> you're not allowed to use gmail as a relay server
<patdk-wk> in their terms
<patdk-wk> why they keep blocking you
<jamespage> zul, +1 - but needs clear runway for landing :-)
<sync0pate> Tzunamii, I can't do dpkg-reconfigure because it's not installed as a package,also none of the config dirs exist.. any idea where I should look next?
<kpettit> patdk-wk: which is why I'm trying to do my own thing now :)
<patdk-wk> ya, then you need postfix :)
<kpettit> It worked great for a few years though.
<patdk-wk> well, gmail has been ending up on blacklists, so they *care* now
<kpettit> I even tried using a SMTP relay service.  But they were crappy to deal with
<kpettit> I just hate dealing with email servers.  Guess I'll suck it up and do it :)  I'm turning into a lazy admin I think
<Tzunamii> sync0pate: I'm not sure why you didn't install it via the package manager, but I highly suggest you get rid of your own built install and do it the right way
<Tzunamii> The config-file(s) are simple enough
<sync0pate> because it wouldn't install via package manager
<sync0pate> ppa didn't work
<jamespage> rbasak, why not just the version where the change was introduced
<sync0pate> just said btsync couldn't be found
<jamespage> (<< 2:9.4.0-1280544-5)
<Tzunamii> sync0pate: Maybe you should have asked for help with the PPA first then. I installed it on a client's server a few days ago from the PPA and it worked fine for me
<jamespage> rbasak, looking https://launchpad.net/ubuntu/+source/open-vm-tools/+publishinghistory that should do it
<jamespage> (<< 2:9.4.0-1280544-5~)
<rbasak> jamespage: OK. Or (<= 2:9.4.0-1280544-5) for intent?
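The debian/control fragment that jamespage's latest suggestion would produce; the trailing `~` makes the bound also catch backport-style versions of that upload:

```
Breaks: open-vm-toolbox (<< 2:9.4.0-1280544-5~)
Replaces: open-vm-toolbox (<< 2:9.4.0-1280544-5~)
```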
<SpamapS> gah.. 23 hops between me and cloud-images.ubuntu.com
<patdk-wk> as long as all those hops are .1ms :) with 10gbit bandwidth
<sheptard> 10g aint what it used to be
<sync0pate> Tzunamii, I got the ppa working, still won't let me add a folder, saying I have no write permissions
<sync0pate> I've configured it to run as my user and group
<sync0pate> and the folder is 775 in my homedir
<Tzunamii> sync0pate: Know that I use a VPN to sync my folder(s) so you might want to use a relay server or something, but here's a working example http://pastebin.com/4S3W1haM
<sync0pate> I'm over a VPN too
<Tzunamii> sync0pate: put that config into /etc/btsync/<some name here>.conf
<Tzunamii> sync0pate: sudo chmod 400 /etc/btsync/<your config file>
<Tzunamii> after that: sudo service btsync start
<Tzunamii> make sure your user owns the config-file
<Tzunamii> as in: sudo chown <your user>:<your group> /etc/btsync/<your config file>
<sync0pate> ahh got it thanks Tzunamii
<sync0pate> it wasn't the sync folder that wasn't working..
<Tzunamii> my pleasure
<sync0pate> it was the storage path
<sync0pate> I saw that you'd set it in your config, changed mine, and bam
<sync0pate> ok.. well.. let's see if it actually syncs now! :D
<Tzunamii> Well, if you use iptables on the server you need to open up the relevant ports. Other than that it should work just fine
<sync0pate> yeah, I'm through openvpn
<Tzunamii> netstat -tldpu|grep -i btsync
<sync0pate> everything's open once I'm connected through the vpn
<Tzunamii> Perfect
<sync0pate> ahh.. it's done it again
<sync0pate> it's got halfway through the sync though, I have most of the files, and now it says it can't write to the folder
<zul> jamespage:  hey did you merge that branch? ;)
<Tzunamii> stop the btsync daemon, rm -r <where you store the files you want to sync>   (note: only if you don't have anything important there), re-create the dir, chmod it 775, restart the daemon
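Tzunamii's reset steps as a runnable sketch; a mktemp directory stands in for the real sync folder, and the service stop/start around it (`sudo service btsync stop` / `start`) is left out:

```shell
# Recreate a sync directory with the 775 permissions btsync expects.
SYNC_DIR=$(mktemp -d)        # stand-in for the real folder, e.g. ~/btsync-data
chmod 775 "$SYNC_DIR"        # user+group rwx, others r-x
stat -c '%a' "$SYNC_DIR"     # prints 775
```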
<jamespage> zul, sorry - which one?
<zul> jamespage:  adam_g's ironic MP
<jamespage> zul, nope
<jamespage> zul, can do now
<zul> jamespage:  thanks
<jamespage> zul, merged - still needs a release tho
<zul> jamespage:  the neutronclient dep is going to mess us up
<sync0pate> same thing again Tzunamii
<sync0pate> weird
<jamespage> zul, just revert it
<zul> jamespage:  yep
<sync0pate> gets halfway through, then stops with the error
<jamespage> zul, we have it patched at 2.3.0
<Tzunamii> sync0pate: When you start syncing from the remote client your dir on the local server is empty?
<zul> jamespage:  this looks pretty important though https://github.com/openstack/neutron/commit/15a912b1ca3c24ba8851b8b77d6de8027e120d78 (reason for newer python-neutronclient is needed)
<sync0pate> Yup Tzunamii , rm -rf and re-created it
<sync0pate> set to 755
<Tzunamii> Very strange
<sync0pate> *775
<sync0pate> yeah, it's dowloading like.. 60-70% of it
<sync0pate> before complaining it can't write to the dir
<Tzunamii> sync0pate: space issues?
<sync0pate> I've checked I still have disk space
<sync0pate> ;)
<Tzunamii> heh ok
<adam_g> zul, jamespage are you talking about neutronclient wrt stable/havana or ironic?
<zul> stable/havana
<Tzunamii> sync0pate: Have you tried to sync something else on the remote client? Just a simple 1 byte textfile or whatever?
<sync0pate> Yeah
<Tzunamii> Does it work?
<sync0pate> I tried another folder with a couple textfiles in it
<sync0pate> works fine
<sync0pate> just the folder that I actually want to sync that's giving me trouble ;)
<Tzunamii> Sounds like it might be a file/dir -name issue or something
<sync0pate> both running ubuntu..
<sync0pate> :\
<Tzunamii> Well, I'm sorry, but your daemon now works fine so it's something out of my control
<sync0pate> lol
<sync0pate> k well thanks
<sync0pate> guess I have a fun evening ahead
<adam_g> zul, im fairly certain you can drop this one patch from neutron to avoid patching the neutronclient requirements https://review.openstack.org/#/c/70178/7, or patch neutronclient ?  its a performance optimization
<sync0pate> :)
<Tzunamii> sync0pate: Just sync one thing at a time until you find what's giving you a headache
<sync0pate> oh, last thing
<sync0pate> the ppa worked on my "client"
<sync0pate> but still doesn't work on the server
<adam_g> zul, oh wait hold on
<sync0pate> any idea?
<zul> adam_g:  yeah im looking at the commit now
<Tzunamii> sync0pate: If you have updated the server it should work just fine. If you get a specific error message I guess you should Google it.
<adam_g> zul, IIRC you may not need to do anything
<sync0pate> Tzunamii, I mean it's just "package btsync not found"
<sync0pate> after adding ppa and updating
<zul> adam_g:  so what if i bump it back down to what we had in saucy?
<Tzunamii> sync0pate: alias whichppa='apt-cache policy $1'            # Checks a package for which PPA it belongs to
<Tzunamii> sync0pate: Add that to your Bash library to start with
<sync0pate> "unable to locate package btsync"
<sync0pate> ?
<Tzunamii> Clearly your PPA hasn't been added properly
<sync0pate> not giving me any errors when I add it..
<sync0pate> or update
<adam_g> zul, https://review.openstack.org/#/c/72754/ fixed a bug in 2013.2.2 that required the neutronclient bump, but the version bump (mistakenly) didnt happen until 2013.2.3. so it should be working fine against 2.3.0 if you dont feel like updating it
<adam_g> (ie, i think its safe to patch requirements.txt)
<sync0pate> Tzunamii, and it's showing up in /etc/apt/sources.list.d
<Tzunamii> remove the PPA and add it back again and make sure you doublecheck
<Tzunamii> make sure you do a apt-get update after the removal and before the re-adding
<sync0pate> k
<sync0pate> hmm
<sync0pate> it's still showing in /etc/apt/sources.list.d
<Tzunamii> sync0pate: use this link (scroll down) as a reference for how you can do it (multiple ways): http://askubuntu.com/questions/173195/how-do-i-remove-a-ppa-added-via-command-line
<zul> adam_g:  i think this is the cause of the change in neutronclient https://github.com/openstack/python-neutronclient/commit/02baef46968b816ac544b037297273ff6a4e8e1b
<sync0pate> yeah, I did the first method, now I've done the second
<zul> jamespage: ^^^
<adam_g> zul, yeah, but what i mean is 2013.2.3 *should* be working fine against python-nc 2.3.0  - 2.3.4, with or without that change
<zul> adam_g:  ok cool
<zul> adam_g:  im going to revert the requirements bump then
<adam_g> zul, give it a test tho after you do
<sync0pate> same again Tzunamii
<zul> adam_g:  i intend to :)
<Tzunamii> sync0pate: do #31 @ the link I gave you
<Tzunamii> sync0pate: also try the #12
<sync0pate> yeah I already did Tzunamii
<sync0pate> then re-added it
<sync0pate> and updated again
<sync0pate> and still package btsync not found
<Tzunamii> Did you run the update before re-adding the PPA?
<sync0pate> yup
<sync0pate> twice
<Tzunamii> sudo add-apt-repository ppa:tuxpoldo/btsync; sudo apt-get update; apt-get install btsync
<Tzunamii> darn
<Tzunamii> sudo add-apt-repository ppa:tuxpoldo/btsync; sudo apt-get update; sudo apt-get install btsync
<sync0pate> yup
<sync0pate> E: Unable to locate package btsync
<sync0pate> this is all on a vps if it makes a difference?
<Tzunamii> sync0pate: cat /etc/lsb-release
<sync0pate> I've been able to install other ppas
<sync0pate> DISTRIB_ID=Ubuntu
<sync0pate> DISTRIB_RELEASE=13.04
<sync0pate> DISTRIB_CODENAME=raring
<sync0pate> DISTRIB_DESCRIPTION="Ubuntu 13.04"
<sync0pate> (the one that works is 13.10)
<sarnold> yikes, 13.04 has been unsupported for over two months
<sarnold> https://wiki.ubuntu.com/Releases
<sync0pate> ugh
<sync0pate> I've only just noticed it's 13.04
<Tzunamii> sync0pate: You got your answer right there. On this server you need to install it from source. Scroll down a bit for that part  http://askubuntu.com/questions/284683/how-to-run-bittorrent-sync
<sync0pate> the vps provider set it up for me about 2 weeks ago
<sync0pate> nah Tzunamii I'll do a dist-upgrade
<Tzunamii> ok
<sync0pate> they told me it was latest v.
<sarnold> not very nice..
<sync0pate> ugh.. now dist-upgrade isn't working :\
<sync0pate> just thinks there's nothing to update
<sync0pate> wtf
<sync0pate> may have to change vps provider or something.. they only just installed this one for me because it wasn't updating from a 12.0something LTS
<sarnold> sigh
<sarnold> wish I'd returned a few minutes earlier..
<Tzunamii> He ran away before I could tell him that a dist-upgrade isn't what he wanted
 * patdk-wk would run too
<sarnold> sync0pate: hey :) I hope you found out that dist-upgrade isn't what you wanted, but rather do-release-upgrade
<sync0pate> command not found :\
<Tzunamii> sync0pate: sudo apt-get install update-manager-core
<patdk-wk> install it
<Tzunamii> sync0pate: do-release-upgrade
<sync0pate> k
<sync0pate> thanks
<sync0pate> sorry I'm just in a blind rage at my vps provider at the moment
<sync0pate> for putting 13.04
<patdk-wk> when did they setup the vps?
<sync0pate> about 3 weeks ago?
<Tzunamii> Personally I stick to LTS releases unless there's a specific need
<sync0pate> I asked them to put the latest LTS
<patdk-wk> that would be 12.04
<sync0pate> and they've put 13.04
<jcastro> We're doing a Juju Charm school at the top of the hour, The topic is Juju Plugins:  http://ubuntuonair.com, we'll be taking questions in #juju
<zul> jamespage:  neutron fixed
<sync0pate> OK I hope someone can help me
<sync0pate> I'm in the middle of a release upgrade to my vps
<sync0pate> and I lost ssh connection
<sync0pate> any way to regain it?
<parallel21> you've ssh'd back into the machine?
<sync0pate> yeah
<sync0pate> I've heard there's some kind of screen -r or something?
<parallel21> I think try screen -list
<sync0pate> There is a screen on:
<sync0pate> 	16588.ubuntu-release-upgrade-screen-window	(04/04/14 19:52:27)	(Attached)
<sync0pate> 1 Socket in /var/run/screen/S-root.
<parallel21> screen -r should work
<sync0pate> screen -r shows me the same thing as screen -list and says "there is no screen to be resumed" ..
<parallel21> screen -D -r 16588.ubuntu-release-upgrade-screen-window
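The reattach dance in one place: do-release-upgrade runs inside a root-owned screen session, so after reconnecting over ssh you look the session up and force-reattach (the `16588.` PID prefix will differ per machine):

```
screen -list                                              # find the session name
screen -D -r 16588.ubuntu-release-upgrade-screen-window   # detach it elsewhere, reattach here
```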
<sync0pate> :D
<sync0pate> that's it! thanks
<sync0pate> glad I found this room..
<parallel21> np :)
<sync0pate> OK I have another question actually
<parallel21> what's the question?
<sync0pate> well
<sync0pate> I had another question, but it seems the dist-upgrade might have fixed it
<verdeP> oh I just did a dist-upgrade too...it kept holding back that kernel q.q
<sync0pate> is there something that makes an ssh session time out, or stop responding after a certain time?
<Tzunamii> Some routers times out connections
<Tzunamii> sync0pate: There are a number of ways to fix this. This is one of them http://www.itworld.com/networking/397458/how-prevent-ssh-timing-out
<sync0pate> nice, thanks
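A common client-side fix for those drops is SSH keepalives, e.g. in ~/.ssh/config on the machine you connect from:

```
Host *
    ServerAliveInterval 60    # send a keepalive after 60s of silence
    ServerAliveCountMax 3     # give up after 3 unanswered keepalives
```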
<sync0pate> fyi, btsync is working now
<Tzunamii> Congratz
<sync0pate> it occasionally displays the same error (!?) but now it resumes after a couple seconds
<sync0pate> and starts working again
<sync0pate> so.. I have no idea, but it seems to be working
<keithzg> Hmm, is networking broken in current Trusty at the moment? Just did a test install of server from the latest daily and it's mysteriously unable to resolve addresses, despite having a nearly identical /etc/network/interfaces to the 12.04 servers I have on the same network.
<bekks> Works fine for me.
<Tzunamii> sudo service dnsmasq status
<keithzg> dnsmasq: unrecognized service
<sync0pate> hmm.. now btsync has stopped itself
<sync0pate> Tzunamii, any idea? log says "Received shutdown request via signal 15"
<sarnold> something sent it a SIGTERM
<sync0pate> any idea what it may have been or why?
<sync0pate> or how I can find out?
<sync0pate> I wonder if my vps host is doing it.. they do explicitly say they allow btsync
<Tzunamii> sync0pate: Sorry, new to me. Out of RAM or diskspace? Did you create a race-condition with two btsync config-files saving to the same local directory? You need to Google it as I have 7 terminals up helping two clients
<sync0pate> lol ok Tzunamii , you've helped me enough :D
<sarnold> sync0pate: what were the circumstances when it died?
<sync0pate> sarnold, was about 10 minutes after I started the service up..
<sync0pate> it had synced everything and was, I imagine, idling?
<sync0pate> i'll double check the log
<sync0pate> couple of lines saying "incoming connection from : <my ip> "
<sync0pate> then [20140404 19:44:21.961] UPnP: Could not map UPnP Port on this pass, retrying.
<sync0pate> [20140404 19:44:22.965] Received shutdown request via signal 15
<sync0pate> [20140404 19:44:24.454] Shutdown. Saving config sync.dat
<sync0pate> there's a bunch of those UPnP things above
<sarnold> hmm, maybe
<sarnold> sync0pate: can you disable the upnp firewall rules in btsync?
<sync0pate> yeah
<sync0pate> good idea
<sync0pate> I don't need it anyway
<Tzunamii> sync0pate: If you're using a VPN for btsync, use the settings from my config-file I shared with you
<sync0pate> will do Tzunamii
<sync0pate> I was changing them line-by-line until it worked
#ubuntu-server 2014-04-05
<AndChat164736> What is a command I can run to view incoming traffic on port 80?
<sarnold> AndChat164736: sudo tcpdump -i wlan0 tcp port 80
<sarnold> add more options if you want to see contents, etc
<AndChat164736> Thanks haven't learned much about tcpdump yet
<sarnold> AndChat164736: ah, you might also want -n to prevent nameservice lookups. that'll make it run way faster :)
<sarnold> hah, and of course wlan0 is because I tested it on my laptop :) your server may not have a wireless NIC :)
<AndChat164736> Yeah I changed that to eth0:0
<sarnold> did that work alright? I wouldn't be surprised if it can't tell one fake NIC from any of the others
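Pulling sarnold's tcpdump advice together, a typical invocation might look like the sketch below; the interface name varies per machine, and capturing requires root, so this is illustrative rather than something to paste blindly:

```shell
# Show TCP traffic on port 80; -n skips reverse-DNS lookups so output
# keeps up with the traffic. Use "tcp dst port 80" to see only incoming.
sudo tcpdump -n -i eth0 tcp port 80
# Add -A to print packet payloads as ASCII, or -w capture.pcap to save them.
```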
<sync0pate> goddamn.. it's happened again
<sync0pate> btsync done 200mb no problem, then 16mb left tells me "don't have permission to write to the selected folder"
<sync0pate> :\
<sarnold> sync0pate: is btsync storing data into /tmp or something while transferring?
<sarnold> sync0pate: that sounds -incredibly- annoying
<sync0pate> I've no idea sarnold
<sync0pate> and yeah, it is
<sync0pate> maddening
<sync0pate> I've just deleted and reconfigured everything from scratch, again, and it's *nearly* at the point where it normally fails
<sync0pate> I'm too close to success to give up on it
<sync0pate> I think it might be failing at writing something that's not the actual files, like an index or something
<sync0pate> but I've checked the data dir
<sync0pate> and I don't know what else it might be writing
<sync0pate> the config dir, the logs dir etc
<sync0pate> checked em
<sync0pate> sudo chmod -R 777 /* ?
<sarnold> ugh no please no never forget you've ever seen that command :)
<sync0pate> haha
<sync0pate> sorry, I've been drinking
<sync0pate> ARUGHGHAWGH
<sync0pate> it's done it
<sync0pate> with 15mb to go last time
<sync0pate> 17.6mb to go this time
<sync0pate> any idea where I can even start to look?
<mand0> its not a network problem?
<sarnold> sync0pate: next time you test, just pick a file that's small, like a megabyte or something
<sarnold> sync0pate: is there a configuration file that specifies destination directory?
<sync0pate> it's definitely not a network problem
<sync0pate> yup
<sarnold> sync0pate: how do you tell the thing where the data should be saved?
<sync0pate> checked all the dest directories
<sync0pate> it works absolutely fine with small dirs
<sync0pate> even fairly large ones
<sync0pate> this one is 200mb
<sync0pate> but I think crucially - about 5000 files
<sync0pate> I think this might be it : http://forum.bittorrent.com/topic/29233-bt-sync-seems-not-to-be-closing/
<sarnold> sync0pate: omg
<sync0pate> omg?
<sarnold> sync0pate: they released with -that- bug?
<sync0pate> looks like it
 * sarnold shakes his head
<sync0pate> any idea on a temp workaround?
<sync0pate> maybe just restart the service every minute in a cron job?
<sync0pate> heh, yeah, when I restart the service it keeps going a bit longer
<sync0pate> then fails again
<sync0pate> I think it leaves the smaller files til last
<sync0pate> and is failing every 500-1000 files or so
<sync0pate> 512 maybe
<sarnold> sync0pate: if you run the server-side client with strace you might be able to confirm/deny if it ever closes files..
<sync0pate> <sync0pate> sorry, I've been drinking
<sync0pate> what?
<sarnold> haha
<sync0pate> I liked my idea better
<sync0pate> I reckon I've got it working by restarting the process 5 or 6 times when it fails
<sarnold> sync0pate: just to get the hang of strace, "strace -o /tmp/out ls /tmp"  -- then look at /tmp/out, it'll show all the systemcalls that 'ls' makes
<sarnold> sync0pate: then you can try it on the btsync program..
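Following sarnold's suggestion, one way to turn an strace log into evidence of a leak is to compare open and close counts. The log below is fabricated so the grep logic is self-contained; against the real daemon you would generate it with something like `strace -f -e trace=openat,close -o trace.log btsync`:

```shell
# Fabricated strace output: two files opened, only one closed.
cat > trace.log <<'EOF'
openat(AT_FDCWD, "a.txt", O_RDONLY) = 3
openat(AT_FDCWD, "b.txt", O_RDONLY) = 4
close(3) = 0
EOF
opens=$(grep -c 'openat(' trace.log)
closes=$(grep -c 'close(' trace.log)
echo "opens=$opens closes=$closes"   # one descriptor left open
rm trace.log
```

A large, ever-growing gap between the two counts in a real trace is exactly the "never closes files" behaviour from the forum thread.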
<sarnold> sync0pate: time for me to bail :) good luck, have fun
<sync0pate> yeah check it out sarnold
<sync0pate> [20140405 03:05:39.168] Error downloading file vendor/symfony/routing/Symfony/Component/Routing/Matcher/ApacheUrlMatcher.php: WriteToDisk: Too many open files
<sync0pate> tons of them in the log
<sync0pate> but k, I should sleep anyway, night
<lordievader> Good morning.
<sync0pate> hello helpful people, any idea when 14.04 LTS is ready?
<cfhowlett> sync0pate 17th
<sync0pate> awesome thanks
<sync0pate> I wonder if anyone today can shed any light on the problem I was having last night with BTSync
<sync0pate> seem to be having the same problem as this person : http://forum.bittorrent.com/topic/29233-bt-sync-seems-not-to-be-closing/
<sync0pate> BTSync is syncing a ton of files (up to the per-process limit I think) then failing, presumably because it's not closing them
<sync0pate> presumably this is caused by a horrible bug in the software, but is there a temporary workaround?
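A way to watch the suspected leak happen is to compare a process's open-descriptor count against its per-process limit; shown here on the current shell (`$$`) as a stand-in for the btsync PID:

```shell
# Count open file descriptors for a process via /proc, and show the limit.
pid=$$                               # stand-in; use btsync's PID instead
count=$(ls /proc/$pid/fd | wc -l)
limit=$(ulimit -n)
echo "process $pid: $count of $limit file descriptors in use"
```

When the count approaches the limit, writes start failing with "Too many open files", matching the log message above. Raising the limit (ulimit -n, or /etc/security/limits.conf) only postpones a leak rather than fixing it.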
<Tzunamii> Morning
<sync0pate> Morning Tzunamii
<sync0pate> Thanks for all your help yesterday :)
<Tzunamii> any time
<Bill_croxson> hey there. would anybody be willing to help me? I have already made a thread, but I would like to draw some attention to it as the server is needed to be put up as soon as possible. If anybody would like to help, just ask for the thread URL.
<Bill_croxson> actually, ill just pop the URL in as i need to leave. Thank you to all who help!! http://ubuntuforums.org/showthread.php?t=2215192
<sync0pate> anyone use firehol? or is there something better?
<sync0pate> ufw or something?
<Tzunamii> I'm using my own made security solution
<dcosnet> Tzunamii: likewise
<bekks> sync0pate: firehol and ufw are just frontends for the same application: iptables. So there is no "better" at that point. :)
<sync0pate> yeah, I know bekks
<sync0pate> ufw seems to be working, and firehol isn't
<sync0pate> so
<sync0pate> I guess I have my answer :)
<bekks> Define "doesnt work" then.
<Guegs> Looking to build a home server, but need a CPU. It won't be used for anything really resource intensive, I just want something low power. Preferably intel. Suggestions?
<jasonmsp> hey all.  I'm running Ubuntu 10.04.4 LTS...  just ran an update and upgrade and got this error when it tried updating :  http://pastebin.com/xga6fJj7
<jasonmsp> it has to do with CA certs
<jasonmsp> errors started with:  Updating certificates in /etc/ssl/certs... unable to load certificate
#ubuntu-server 2014-04-06
<ska> Im trying to install 13.10 on a Asus Z87-Pro, but It fails in non-uefi mode as well as uefi.. Different problems..
<ska> I think UEFI fails right after partitioning
<ska> Its a new system with no other OS.
<vlad_starkov> QUESTION: Hi there! I'm setting up NFS4 server. Can't find rquotad service. Should I install it manually?
<sync0pate> hey, strange question, don't know if this is the right place but.. if I have a VPS running on a client's server, how secure is it from people who have access to the server?
<sync0pate> say the VPS is encrypting its data internally, but someone may have access to the box the vps is physically running on?
<sync0pate> sorry by vps I mean a virtual machine
<bekks> As long as someone has physical access to that machine, someone has full control over everything.
<sync0pate> as in they could read all the encrypted data within?
<andol> sync0pate: The main benefit of encrypted data on your VPS is that it helps against someone accidentally accessing the data, such as afterwards when another customer reuses the same disk blocks.
<andol> sync0pate: It also forces the owner of the physical machine to be more explicitly "evil" to be able to access your data.
<sync0pate> well to be honest, I'm mostly worried about accidental access rather than malicious intent
<sync0pate> and sorry
<sync0pate> when I said VPS, I meant a VM
<sync0pate> what I'm actually talking about, is a VM running at a client site, on one of their servers
<sync0pate> if they provide access to that physical server to other contractors
<sync0pate> how much access could they get to the VM?
<bekks> Then the above still applies.
<sync0pate> so they'd theoretically be able to access everything?
<bekks> Physical access to the server means that they have full control.
<sync0pate> how would they be able to decrypt everything?
<bekks> They dont have to encrypyt anything as long as your VM is running.
<bekks> *decrypt even
<sync0pate> what if there's encrypted data within the VM?
<andol> Not to mention that they can grab the encryption keys from RAM.
<sync0pate> right ok
<sync0pate> so, assume they could get full access
<sync0pate> but considering I'm worried more about accidental access than deliberate malicious intent
<sync0pate> from *that* perspective, it's fairly safe?
<andol> sync0pate: I would use the word safe, but there are certain scenarios you do mitigate.
<bekks> As long as your VM is running, they have full, decrypted access to it.
<andol> I wouldn't
<sync0pate> but they're unlikely to get full, decrypted access to it unless they deliberately wanted to?
<bekks> Read again:
<bekks> < bekks> As long as your VM is running, they have full, decrypted access to it.
<sync0pate> ok, how do they?
<bekks> We already told you.
<sync0pate> you told me how someone could get access through grabbing the encryption keys from ram or something
<bekks> Yes, thats one of the attacking vectors possible.
<bekks> It perfectly answers your question "Is it safe?" with a clear "No.".
<sync0pate> I think it shows "Is it secure?" would be a clear "no"
<sync0pate> would there be a better alternative though?
<bekks> Dont host valuable data at a site where others have physical control. :)
<sync0pate> that's not an option
<bekks> Thats the only option.
<sync0pate> OK, I'll explain the situation a bit better
<sync0pate> it's a client site
<sync0pate> a database system I have provided them
<bekks> Doesnt matter actually. If you dont want someone to be able to access your data, take care no one besides you has physical access.
<sync0pate> their internal IT has physical access to the machines
<sync0pate> so, I'm not really concerned about malicious attack
<sync0pate> just curious IT folk changing settings accidentally or something
<sync0pate> deleting the wrong thing etc
<bekks> You better create backups then.
<sync0pate> yeah
<bekks> Encryption is not providing any security against logical errors.
<sync0pate> well no, the encryption is more in case someone does take the data out of the office
<sync0pate> or for when I backup
<sync0pate> as I do
<bekks> As long as the VM is running, everyone with physical access has FULL ACCESS to ALL data.
<bekks> Is that clear now?
<sync0pate> that was clear from the start
<bekks> Then why did you ask it on and on?
<sync0pate> because it's not exactly what I'm asking
<sync0pate> thanks anyway though
<bekks> It is exactly what you are asking. You just don't want to accept the answer. Your proposed solution is not providing any security for the use case you are describing.
<sync0pate> well there isn't a solution that does provide the security I would like
<sync0pate> because I'm not allowed to host the data off site
<bekks> There are. But not for the price you would pay.
<sync0pate> there are?
<sync0pate> hey, I wouldn't be paying
<bekks> Or are you willing to license a full blown Oracle Enterprise Edition with Encryption Option?
<bekks> It will cost several hundreds of thousand of dollars for the license only.
<sync0pate> seems unlikely then :)
<bekks> Then the answer is "No."
<sync0pate> so then there isn't really an affordable solution that would provide the desired level of security
<sync0pate> is there anything more you would suggest to gently discourage curious IT people from poking at stuff?
<sync0pate> other than a stern talking to?
<bekks> Let them sign "Whatever I break, I have to fix it. No one else will help me."
<sync0pate> Aha
<sync0pate> yeah, that's probably what I need
<cfhowlett> bekks oh, I LIKE that one!
<andol> sync0pate: http://www.tacticalknives.biz/ImagesProductsLarge/926795.jpg
<sync0pate> also a good idea andol :)
<bekks> cfhowlett: :D
<sync0pate> sorry if I was unclear before!
<andol> Our Server QA team lead has something like that on his desk.
<sync0pate> tbh it's probably not even a concern
<sync0pate> in this case
<sync0pate> I just had a bad experience once when a marketing manager with a little knowledge started changing my SQL views to try and add extra data to his reports
<cfhowlett> sync0pate so you must also enforce the CLIENT policy; you break, you fix (or you pay the consultant 2X)
<andol> sync0pate: That kind of stuff appear to be more related to who has what kind of access credentials?
<sync0pate> yeah absolutely
<sync0pate> I mean, I still did charge for the fix
<sync0pate> but I would like to not have the headache
<sync0pate> how so andol ?
<sync0pate> like you said, they have physical access to the machine.. so..
<andol> sync0pate: Yeah, but I doubt a marketing manager would leverage physical access into root access on the vm just to get at the *sql database. More likely the marketing manager had been provided access to the database directly?
<sync0pate> that's kind of what I was getting at earlier andol
<sync0pate> like, I doubt the IT guys are gonna be pulling encryption keys from RAM to piss about with connection settings
<sync0pate> but yeah, in the earlier situation, the guy was given direct access to the DB
<sync0pate> again, nothing I had control over
<ska> Given a choice between UEFI and Legacy installation, what would you recommend?
<martisj> morning
<martisj> What does a + mean next to a file in a file ls?
<tcstar> not the right channel i'm sure, i'm attempting to use tsung to test my server setup -- it's not generating any traffic... #tsung has been dead all weekend.. anyone with experience with this that can help?
<hxm> can I follow symlinks in webdav server?
<hxm> i added Options Indexes FollowSymLinks to the configuration but it doesn't work
<martisj> How can I remove custom acl settings for a specific folder?
<martisj> is this in the directory listing: dr- -r- -rwx+ on my folders, what does the + mean?
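The `+` martisj asks about means the file carries an extended POSIX ACL on top of the normal mode bits; the acl package's tools inspect and strip it. A sketch (the path is illustrative, and the acl utilities must be installed):

```shell
getfacl mydir        # list the ACL entries hiding behind the "+"
setfacl -b mydir     # -b removes all extended ACL entries
ls -ld mydir         # the "+" should now be gone from the mode string
```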
<Aison> are there any big changes in apache server between raring and saucy?
<Aison> I updated one server and everything still works, except the apache2
<Aison> well, apache2 still works, but the virtual hosts are not found anymore
<Aison> sites-enabled is somehow ignored or whatever
<dzeko> Does anyone know how to setup ntp server authentication. I only found one with the centos, but none with the ubuntu.
<bekks> the service is the same ;)
<dzeko> bekks: you've tried it before?
<bekks> I dont see a reason to authenticate for ntp access. If you don't want clients being able to change the time of the ntp server, just use nomodify.
<bekks> So "no." :)
<dzeko> bekks: because i've heard that if you don't have this, then it is possible to ddos it.
<bekks> Every service may be ddos'ed, regardless of unneeded authentication.
<bekks> nomodify is totally enough for disallowing modifications.
<dzeko> ok
<dzeko> tnx
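For reference, the `nomodify` bekks mentions lives on ntpd's `restrict` lines in /etc/ntp.conf; a typical stanza looks like this (the loopback addresses are the usual defaults, everything else is an example):

```
# Serve time to everyone but refuse runtime reconfiguration;
# kod/notrap/nopeer/noquery further limit what remote hosts may do.
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
```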
<Aison> my apache server is no longer working under saucy
<Aison> the VirtualHosts are not matched
<Aison> always the default is taken
<Aison> what could be wrong?
<Aison> are there any big changes that I have to consider?
<Aison> when I try to access index.php of a virtual domain, always the default one is taken:
<Aison>  [:error] [pid 29379] [client 10.0.1.1:52937] script '/var/www/default/index.php' not found or unable to stat
<TJ-> Aison: https://wiki.ubuntu.com/SaucySalamander/ReleaseNotes
<Aison> thx
<Aison> TJ-, no luck. somehow VirtualHost matching is not working
<Aison> damn
<TJ-> Aison: That'll be because of the Apache 2.2 > 2.4 update. Lots of things changed. Run apache2's configuration test.
<Aison> apachectl configtest says Syntax OK
<TJ-> Aison: That's good then!
<TJ-> Aison: Did you check the apache 2.4 upgrade guide, particularly the "NameVirtualHost" changes?
<Aison> yes, I didn't use it before anyway
<TJ-> Is it HTTP or HTTPS?
<Aison> HTTP
<TJ-> Aison: have you confirmed that the site files are being parsed?
<Aison> you mean the files in sites-enabled?
<TJ-> Yes
<TJ-> Look at /etc/apache2/apache2.conf, and the "IncludeOptional" statement. Does it match the naming of the files in your "/etc/apache2/sites-enabled/" ?
<Aison> yeah, there is a IncludeOptional sites-enabled/*.conf
<Aison> well, I can try to add some syntax error to one of the files and config check again
<TJ-> Aison: And your sites files are all named $SOMETHING.conf ?
<Aison> aaaaaaahhh, damn *hit head on table*
<Aison> some have got no conf...
<Aison> lol
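Aison's bug in miniature: the `IncludeOptional sites-enabled/*.conf` glob only picks up files ending in `.conf`, which can be demonstrated without touching Apache at all:

```shell
# A scratch directory standing in for /etc/apache2/sites-enabled:
dir=$(mktemp -d)
touch "$dir/mysite.conf" "$dir/othersite"   # one named correctly, one not
matched=$(cd "$dir" && ls *.conf)
echo "$matched"    # only mysite.conf matches the glob
rm -rf "$dir"
```

The fix, as Aison found, is renaming the site files to end in `.conf` (and re-enabling them), since files without the suffix are skipped silently while configtest still reports Syntax OK.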
#ubuntu-server 2015-03-30
<Spyidonas> Hello guys can you help with a small issue i have?
<JanC> Spyidonas: I going to sleep, but just ask your question and if people can help, maybe they will (once they see it, have patience...)
<Spyidonas> its a pretty simple one , i followed this guide http://www.krizna.com/ubuntu/setup-mail-server-ubuntu-14-04/
<verdeP> mail server and simple...
<verdeP> <_<
<Spyidonas> but now i want to update the certificate , i change the lines about the keys inside my main.cf
<Spyidonas> but outlook doesn't receive the new certificate
<Spyidonas> it keeps on getting the old one
<JanC> did you restart?
<Spyidonas> does outlook 'caches' it?
<Spyidonas> yes
<Spyidonas> smtpd_tls_cert_file = /etc/ssl/key.crt
<Spyidonas> smtpd_tls_key_file = /etc/ssl/issued/myserver.key
<Spyidonas> smtpd_tls_CAfile = /etc/ssl/key.ca-bundle
<Spyidonas> smtpd_use_tls = yes
<Spyidonas> smtpd_tls_auth_only = yes
<Spyidonas> is what my main.cf says.
<Spyidonas> postfix restarted and the whole server, my pc too
<JanC> so you restarted both Postfix & Dovecot, and updated the keys in the right place(s)?
<Spyidonas> yeap
<Spyidonas> if the right place is only main.cf
<Spyidonas> (according to this tutorial its the only place)
<JanC> Dovecot?
<JanC> read the manuals for both Postfix & Dovecot
<Spyidonas> no, you think it has settings for that?
<Spyidonas> ok
<JanC> and maybe also Outlook manuals
<Spyidonas> thanks JanC
<JanC> Spyidonas: as verdeP indicated, mail servers aren't "simple", so it's important to understand the mail server(s) you run
<Spyidonas> JanC: Sure, but that specific part was simple. I've been fighting to get it to work for 2 weeks now >_<, and still Hotmail cuts off my email...
<Spyidonas> JanC: In the end i needed to add the new keys to dovecot ssl config file thing
<Spyidonas> JanC: I never understood why mail servers are so complicated :/
<verdeP> its really too bad yeah
<JanC> Hotmail isn't exactly an example of how to configure a mail server correctly...
<JanC> neither is gmail
<Spyidonas> JanC: Yes but my emails are cut off by Hotmail; they don't go to the spam folder either, they completely vanish
<JanC> Spyidonas: so?
<Spyidonas> JanC: Gmail on the other hand receives all my emails, and after setting up dkim and the other dmarc thing and some txt things they don't go to the spam folder
<Spyidonas> JanC: So when i send to whoever owns an @outlook, @live account they never receive anything.
<JanC> I don't care
<JanC> if their mail servers don't work right, their customers should move elsewhere
<Spyidonas> JanC: Good point, but imagine telling all your friends that Facebook is broken and they should move to Google+. Easy to say , not all of them will do it
<JanC> I've never had a Facebook account ;)
<JanC> but e-mail is based on standards
<Spyidonas> JanC: Apparently every company sets its own standards then; there are a bazillion standards out there, and some companies accept all of them, some only part of them
<Spyidonas> JanC: In the email case, while dkim and dmarc are standards, gmail chooses to ignore them while some others will completely cut you off. You can't deny that DMARC is a standard either.
<JanC> Spyidonas: actually, the problem is with companies which apply DMARC incorrectly and/or restrict their customers too strictly
<Spyidonas> JanC: Yeah its like CSS and HTML, its the standard sure, but noone implements them 100% or have their own version of the standard.
<JanC> e.g. making it impossible to use mailing lists
<JanC> Spyidonas: it's not like that
<Spyidonas> JanC: its all about the money...
<JanC> it's companies purposely obstructing their customers
<Spyidonas> JanC: yeah, you cant even send a properly styled email these days
<verdeP> ewww text only pls
<JanC> I prefer plain text mails  ;-)
<verdeP> JanC++
<JanC> but styled mails should work if they have no external links...
<JanC> and if really useful
<Spyidonas> JanC: well Gmail cuts off <style> tags
<JanC> no eternal included links
<Spyidonas> JanC: im sure Hotmail has its issues too
<JanC> no external *
<Spyidonas> JanC: you pretty much have to inline everything , and apply hacks to the inline styling
<Spyidonas> JanC: its fun :(
<JanC> Gmail also has an IMAP interface, even if it's broken
<JanC> gmail as a user agent has always been broken beyond repair anyway :P
<verdeP> i like gmail a lot i wish they had an open source google mail server i could install
<hehe> lol
<hehe> gmail is bs
<hehe> however  nice UI
<Spyidonas> JanC: google is evil
<Spyidonas> JanC: :P
<JanC> verdeP: do you use gmail with mailing lists?
<JanC> Spyidonas: I don't like to generalise
<verdeP> a few but i dont really like mail lists, i prefer just looking on e.g. github etc
 * JanC now really going to sleep (only about 3h of sleep left...)
<verdeP> ugh
<fishcooker> how to disable all cron job for 1 hour for maintenance?
<andol> fishcooker: The easiest is probably to simply shutdown the cron daemon, and making sure to remember turning it on again afterwards.
<fishcooker> which cron daemon service should be shutdown, andol?
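On Ubuntu the daemon fishcooker asks about is simply `cron`; the maintenance window andol describes is just a stop/start pair (upstart-era syntax, requires root, shown as a sketch):

```shell
sudo service cron stop    # no new cron jobs fire while it's down
# ... perform the maintenance ...
sudo service cron start   # the part andol warns not to forget
```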
<lordievader> Good morning.
<Sling> fcking uefi/gpt modern crap, always gives me issues
<Sling> fresh 14.04 install on a 8TB raid10 array @ LSI controller, installation goes fine, it finds disks, i used the guided partitioning, it made the bootable EFI partition, it installed grub in the end of the installation
<Sling> it added 'ubuntu' to the EFI menu, I boot into it, and all i get is a grub> prompt
<Sling> ls shows me (hd0) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1)
<Sling> why doesn't it just boot the OS? what am I doing wrong (i just followed the installer, no weird modifications made)
<jamespage> jpds, hey - which of the numerous strongswan packages is needed for the new neutron vpnaas support?
<jamespage> zhhuabj, maybe you can answer that question :-) ^^
<zhhuabj> jamespage, hua@hua-ThinkPad-T440p:~$ ipsec --version
<zhhuabj> Linux strongSwan U5.1.2/K4.0.0-040000rc2-generic
<jamespage> zhhuabj, yeah - I'm updating the packaging - need to know which strongswan package is required - just strongswan?
<jamespage> none of the plugins?
<zhhuabj> jamespage, strongswan 5.x is ok
<zhhuabj> jamespage, but I only test default strongswan package  5.1.2 in ubuntu 14.04 and 14.10, other 5.x version should be ok as well.
<jamespage> zhhuabj, awesome - thanks!
<zhhuabj> jamespage, thanks :)
<jamespage> zhhuabj, nice work btw - great to have a fully supportable vpn option in ubuntu again
<zhhuabj> jamespage, are you enabling charm support for strongswan now ?
<jamespage> zhhuabj, if I get to it in time, yes
<jamespage> zhhuabj, just working through the kilo-3 release atm
<zhhuabj> jamespage, I saw charm is using vpn-agent instead of l3-agent,  have charm supported openswan now ?
<jamespage> zhhuabj, "if I get to it in time, yes" so not yet :-)
<zhhuabj> jamespage, got it
<salih-emin> Last night I was reading the "server" chapter of the "Official Ubuntu Book [Published Jul 15, 2014 by Prentice Hall]", and in there it says that "One of the main differences of Ubuntu Server flavor is the customized kernel"... Is this still true? I thought desktop and server kernels are now one and the same (for about 2 years now). But nevertheless I searched the packages and I didn't find any linux-kernel-server... So is this a book erratum, or am I missing something?
<bekks> salih-emin: All Ubuntu favors share the same customized kernel.
<bekks> *flavors
<jamespage> jdstrand, hey - are your team likely to get to MIR bug 1430082 before release?
<jamespage> just assessing whether we need to disable the new token format support in keystone for vivid/kilo.
<jamespage> zul, coreycb: kilo-3 all in the pipeline for vivid - most things waiting for release team review.
<rbasak> strikov: how about https://bugs.launchpad.net/ubuntu/+source/openldap/+bug/1103353? Patch available. Just needs to be backported and verified for each relevant release.
<rbasak> strikov: could be a good one for you to handle SRU policy. https://wiki.ubuntu.com/StableReleaseUpdates
<strikov> rbasak: ok, i'll look into this! thanks!
<coreycb> jamespage, awesome
<coreycb> jamespage, zul, I have a round of new proposals for icehouse 2014.1.4.  there was a bad dependency in keystone's requirements.txt plus some other bugs fixed.
<jamespage> sarnold, hey - can you +1 https://bugs.launchpad.net/ubuntu/+source/python-pysaml2/+bug/1407695 from security team review - I think the pysaml stuff is all now resolved
<Arrick> hey all, what cmd do I run to find out what drives are attached?
<Arrick> my mysql wont allow me to connect, and I am thinking that the programs were all stored in the data dir (mounted as a folder) when I built the machine...
<lordievader> Arrick: Look in /dev ?
<lordievader> Or in /sys/class/block/
<jrwren> Arrick: df will show you disk usage per partition mounted, that sounds like what you want. See also, cat /proc/partitions
<Arrick> I'm thinking my data directory is no longer connected... it's on the san
<zkvvoob> Hello! I'm getting an Internal Server Error 500  when I activate the BuddyPress plugin on a WordPress based site. The site is on a Ubuntu 14.04 server running Apache 2.4.7. Here's the error message that is generated every time: http://pastebin.com/GCGSq6WD Could you help me find out why this is happening?
<Arrick> Ok... trying to start or stop mysql doesn't work.... using either "service mysql start" or "service mysql stop" or even the "stop mysql" or "start mysql" ... when I tried /etc/init.d/mysql stop it told me to use service or stop...
<jdstrand> jamespage: re 1430082, I'll ask the team and get people on it
<jamespage> jdstrand, thanks - much appreciated
<lordievader> Arrick: What is the exact output of 'sudo service mysql start'?
<Arrick> start: Job failed to start
<Arrick> this system has been running for the last 2 years, and I came in today to our site not being able to connect to the db
<lordievader> Arrick: What do the logs say about it?
<Arrick> not sure where to look
<lordievader> Arrick: As a start, syslog.
<Arrick> where would that be located?
<maxb> Or, guessing here, possibly in /var/log/upstart/, if the upstart job is failing
<Arrick> thanks
<lordievader> Arrick: Syslog is in /var/log/syslog, Mysql usually logs to the syslogger.
<jamespage> sarnold, hey - just poking through MIR bugs for openstack - and tripped over this one again:
<jamespage> https://bugs.launchpad.net/ubuntu/+source/conntrack/+bug/1381450
<maxb> Though, the specifics of the error message imply that the upstart job failed, so the mysql daemon probably never started
<jamespage> that's quite important to support router HA for neutron
<Arrick> correct...
<lordievader> Arrick: Does mysql error?
<Arrick> ls lordievader I have mysql.err in there, but they are empty
<Arrick> and mysql.log.1.gz etc..
<lordievader> Arrick: I was talking about the syslog, does mysql show errors there?
<Arrick> up through 7
<Arrick> mysql not found in syslog
<Arrick> and the last logs were written on the 27th (3 days ago)
<lordievader> You could invoke mysql manually, perhaps that shows what is going wrong.
<lordievader> According to the Debian init script it is:  /usr/bin/mysqld_safe
<Arrick> hrmmmm... lordievader I am unsure what to run to invoke it manually... I know when I use mysql -u username -p it says Error 2002 (HY000): Can't connect to local MySQL Server through socket '/var/run/mysqld/mysqld.sock' (2)
<lordievader> Arrick: As I said, according to the Debian init script it is: /usr/bin/mysqld_safe
<lordievader> Arrick: You are using the mysql client ;)
<Arrick> how do I invoke it?
<lordievader> Arrick: Run that command.
<Arrick> no such animal, looking at the files in there right now.
<strikov> rbasak: juju-1.22 should be in your inbox
<lordievader> Hmm, does the Ubuntu version differ on that?
<Arrick> no, I was missing a d
<Arrick> thanks
<lordievader> Arrick: Does it show errors?
<Arrick> looking
<Arrick> its stuck at starting mysqld daemon with database from /var/lib/mysql
<lordievader> Stuck, or is it actually running?
<Arrick> seems to be running now
<Arrick> I was able to logon
<lordievader> Hmm. What does the upstart log say?
<lordievader> Specifically the mysql upstart log.
<Arrick> there is nothing in the upstart log
<lordievader> Nothing?
<Arrick> I have a lot of .log.6.gz and stuff, but nothing that is a straight .log file
<Arrick> ls
<lordievader> Hmm.
<lordievader> Arrick: What version of Ubuntu do you run?
<Arrick> its currently 12.04
<Arrick> 12.04.3
<FunnyLookinHat> Ran updates on 15.04 this morning and now mysql won't start
<FunnyLookinHat> Anyone else having that issue?
<lordievader> Arrick, FunnyLookinHat: What version of the mysql server do you both have?
<FunnyLookinHat> mysql  Ver 14.14 Distrib 5.6.23, for debian-linux-gnu (x86_64) using  EditLine wrapper
<Arrick> I believe its either 5 or 5.5, one sec
<lordievader> Err, package version please.
<Arrick> 5.5.34
<lordievader> !info mysql-server-5.5 precise
<FunnyLookinHat> 5.6.23-1~exp1~ubuntu4
<FunnyLookinHat> ( vivid )
<ubottu> mysql-server-5.5 (source: mysql-5.5): MySQL database server binaries and system database setup. In component main, is optional. Version 5.5.41-0ubuntu0.12.04.1 (precise), package size 8523 kB, installed size 30579 kB
<Arrick> should be running the socket of /var/run/mysqld/mysqld.sock
<lordievader> Hmm, guess unrelated then.
<FunnyLookinHat> Ugh
<FunnyLookinHat> :-P
<lordievader> Arrick: There is an update available, perhaps that fixes things.
<rbasak> FunnyLookinHat: yes. Workaround are in the bugs. Fixes are ready and just waiting on the release team.
<Arrick> lordievader, I have 2 years of data in there, I dont want to muck that up.....
<FunnyLookinHat> rbasak, Ah - I can't find the bug on LP - got a URL?
<lordievader> Arrick: Then I assume you have a backup.
<rbasak> FunnyLookinHat: https://bugs.launchpad.net/ubuntu/+source/mysql-5.6/+bug/1435823 https://bugs.launchpad.net/ubuntu/+source/mysql-5.6/+bug/1436178
<FunnyLookinHat> rbasak, thank you!
<rbasak> FunnyLookinHat: also see https://bugs.launchpad.net/ubuntu/+source/mysql-5.6/+bug/1434758
<lordievader> Arrick: Else you should make one. Now.
<Arrick> I just found my backup
<Arrick> lol
<Arrick> I was looking for that first,
<Arrick> I would run a backup over ssh, but I am not sure how to do that
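A hedged sketch of one common way to do the over-ssh backup Arrick mentions; the database name, remote user, host, and destination path are all placeholders:

```shell
# Stream a compressed dump straight to another machine over ssh,
# so nothing large lands on the already-troubled local disk.
mysqldump -u root -p mydatabase | gzip \
  | ssh backupuser@backuphost "cat > /backups/mydatabase-$(date +%F).sql.gz"
```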
<Arrick> I do have a /var/log/mysql/error.log http://pastebin.com/jreED4hM
<zkvvoob> Hello! I'm getting an Internal Server Error 500  when I activate the BuddyPress plugin on a WordPress based site. The site is on a Ubuntu 14.04 server running Apache 2.4.7. Here's the error message that is generated every time: http://pastebin.com/GCGSq6WD Could you help me find out why this is happening?
<Arrick> lordievader start: Rejected send message, 1 matched rules; type="method_call", sender=":1.2" (uid=1000 pid=1472 comm="start mysql ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
<Arrick> I rebooted it.
<lordievader> Arrick: Did rebooting fix it?
<Arrick> nope
<Arrick> I had to start mysqld_safe again to access it, looking up commands to backup the db's now...
<oldsk00l> test
<lordievader> Arrick: /var/lib/mysql contains the db's. Simply backup that dir.
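Arrick asked earlier about a backup over ssh; one hedged sketch of that, with placeholder host/user names (not taken from the conversation). Note the raw-directory copy is only consistent while mysqld is stopped, whereas mysqldump wants the server running:

```shell
# Stream a logical dump of every database to the local machine over ssh;
# nothing is written on the server itself (user@dbserver is an example):
ssh user@dbserver "mysqldump --all-databases -u root -p | gzip -c" > all-dbs.sql.gz

# Alternatively, with mysqld STOPPED, copy the raw data directory:
rsync -az user@dbserver:/var/lib/mysql/ ./mysql-datadir-backup/
```

Since Arrick could only get the server up via mysqld_safe, the mysqldump route fits that situation; the rsync route needs the daemon down.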
<lordievader> Arrick: Anyhow, I get the idea that the upstart script is broken.
<lordievader> Arrick: Updating might fix that.
<Arrick> done, next?
<Arrick> not sure how to update from terminal, Havent played with it in so long
<lordievader> Arrick: sudo apt-get update && sudo apt-get dist-upgrade
<Arrick> that cmd doesnt make any changes, just tells me it read them all
<lordievader> ?
<Arrick> it says
<lordievader> !paste
<ubottu> For posting multi-line texts into the channel, please use http://paste.ubuntu.com | To post !screenshots use http://imgur.com/ !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<Arrick> hit http://us.archive.ubuntu.com precise-updates/ with all the packages
<Arrick> I know better than to paste, lol
<lordievader> ? Could you pastebin the full output?
<Arrick>  >http://paste.ubuntu.com/10707229/
<Arrick> sorry about the >
<lordievader> Arrick: What is wrong with that?
<lordievader> Standard apt-get update output, though not complete.
<Arrick> it doesnt do anything...
<lordievader> Could you pastebin the full output with the input command?
<Arrick> http://paste.ubuntu.com/10707280/
<Arrick> if I try them separate I get no space left on drive
<Arrick> I'll get back to you in a bit lordievader I am having our VM admin double my drive space
<lordievader> Ah, that is likely the root cause of all your problems :P
<Arrick> ok, they added more space, how do I expand it?
<zkvvoob> Hello! I'm getting an Internal Server Error 500  when I activate the BuddyPress plugin on a WordPress based site. The site is on a Ubuntu 14.04 server running Apache 2.4.7. Here's the error message that is generated every time: http://pastebin.com/GCGSq6WD Could you help me find out why this is happening?
<patdk-wk> ask php why it's crashing
<zkvvoob> patdk-wk: How do you propose I do this?
<patdk-wk> by knowing how your server works, I didn't propose and won't propose anything
<patdk-wk> I did not setup and configure that server
<patdk-wk> and it looks to be done VERY custom
<dannf> rbasak: hey - i was trying to test a fix for LP: #1427406, but ran into LP: #1429725 - do you have a workaround for that? i tried backing out the passwordless support, but things got too messy :(
<Arrick> hey lordievader I have the following: http://paste.ubuntu.com/10707443/
<Arrick> I need to expand the / partition, and there is 30 GB free on sda... I need to expand sda5 lvm /.
<patdk-wk> well, edit the partition table
<patdk-wk> then pvextend
<patdk-wk> then lvextend
<patdk-wk> then resize2fs
<patdk-wk> it couldn't be any more simple :)
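Assuming the layout that shows up later in the conversation (partition sda5 holding the PV for volume group SCGWB008-vg with LV root), those four steps look roughly like this. One correction: the PV step is `pvresize`; there is no `pvextend` command, which is likely why Arrick's attempts below fail:

```shell
# Sketch only: device/VG names are the ones from this conversation,
# every step needs root, and a backup should exist first.

# 1. Grow the partition. With fdisk you delete and recreate sda5 with
#    the SAME start sector; parted can resize in place. If sda5 lives
#    inside an extended partition, grow that container first.
parted /dev/sda resizepart 5 100%

# 2. Tell LVM the physical volume got bigger (pvresize, not "pvextend"):
pvresize /dev/sda5

# 3. Grow the logical volume into the freed extents:
lvextend -l +100%FREE /dev/SCGWB008-vg/root

# 4. Grow the filesystem to match the LV (online for ext4):
resize2fs /dev/SCGWB008-vg/root
```

No new partition and no formatting is needed, which matches lordievader's point below: the existing LVM partition just gets resized.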
<Arrick> yeah, except I have no knowledge of how
<rbasak> dannf: no idea, sorry. I've not looked at mariadb yet :-/
<Arrick> patdk-wk, I have the following when I run fdisk -l, which is the reason why I am confused on what to do... http://paste.ubuntu.com/10707544/
<Arrick> I know I need to add room to /dev/mapper/SCGWB008--vg-root
<rbasak> strikov: o/
<rbasak> strikov: sorry, was on the phone before.
<strikov> rbasak: o?
<rbasak> strikov: what's the state of Juju and systemd in your package please? I'm confused by pitti's "Fix Committed" change in https://bugs.launchpad.net/juju-core/+bug/1409639. Is this correct?
<strikov> rbasak: looking at the ldap bug now
<rbasak> strikov: it's a picture of my head and arm, waving at you :)
<strikov> rbasak: hm, systemd is not available in 1.22
<strikov> rbasak: let me double-check
<strikov> rbasak: is it possible to see why fix commited shows up?
<strikov> rbasak: i.e. it was manually changed or via package upload
<rbasak> strikov: manually changed.
<lordievader> Arrick: Like patdk-wk said, first extend the partition.
<Arrick> lordievader, thats what I am trying to learn how to do...
<Arrick> I have created /dev/sda6 with cfdisk
<Arrick> set it to a linux lvm
<lordievader> Arrick: parted or fdisk let you do that. But make sure the resized partition starts at the same sector as the old one.
<Arrick> but for some reason, it keeps telling me there is no /dev/sda6 when I try to format it
<Arrick> im trying to follow this
<lordievader> Arrick: Just resize your lvm partition.
<Arrick> http://stackoverflow.com/questions/16515739/extending-logical-volume-in-ubuntu
<lordievader> No need to format things.
<Arrick> pvcreate says no such animal
<lordievader> Arrick: Please first read up on how LVM works if you don't have experience with it.
<Arrick> in the meantime this is a production server I am trying to get back up
<lordievader> Even more reason to thoroughly read up on LVM before trying to mess with it.
<Arrick> you have a link to the right place to read then? there are so many links that provide "cfdisk, then fdisk, then format, then lvextend"
<lordievader> man <command> ;)
<lordievader> Man pages are your first source of information on commands.
<strikov> rbasak: please ignore my juju-1.22 package; juju team changed licenses for some subprojects; i'll fix and resend you the bits
<rbasak> strikov: no problem, thanks
<strikov> rbasak: updated and sent to you
<strikov> rbasak: i won't update this package anymore; any other changes will go to 1.23
<strikov> rbasak: ideal state is unachievable; this one seems to be good enough
<rbasak> strikov: OK. Thanks!
<strikov> rbasak: do we want -released blocking bug? we planned to manually move it to -released so technically bug is not needed
<rbasak> strikov: is it released upstream yet? If so then no need to block it.
<strikov> rbasak: my understanding was that you'll upload juju-1.22 to -proposed and bug is needed to avoid moving to -released automatically
<strikov> rbasak: is it correct?
<strikov> rbasak: ah, you opened one already: https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1416051
<strikov> rbasak: even before i joined the team
<strikov> rbasak: it's for trusty but we may add vivid there
<Arrick> lordievader, I found a directory in /totara that was a backup of over 10gb, and that is what caused the system not to start... couldnt hit the logs to generate errors because the disk was too full.
<rbasak> strikov: oh, OK. I think I misunderstood you.
<rbasak> Yes - the bug is there, and the Vivid task is tracked in the "main" task, since it's the development release.
<strikov> rbasak: okay, understood. thanks
<rbasak> strikov: I'll review tomorrow morning if that's OK?
<strikov> rbasak: sure, thanks
<lordievader> Arrick: Jup, systems will behave strangely when their disk is full.
<Arrick> lol
<fullstop> Hi, running 14.04 here, and this has cropped up twice in the last week:  http://pastebin.com/raw.php?i=Dc6Sgy83
<fullstop> 3.13.0-44-generic
<fullstop> the server runs fine, but the kern.log fills up quickly since this message repeats over and over and over again
<RoyK> fullstop: have you updated lately? seems 3.13.0-48 or 3.16.0-33 is current
<fullstop> I've updated things but not the kernel
<fullstop> I guess I'm looking to see if this is a known problem and that a kernel update would fix it.
<RoyK> this is a kernel thing
<fullstop> yes, clearly
 * RoyK updates office machine :P
<fullstop> :D
<RoyK^Work> there
<RoyK^Work> Linux roysk 3.16.0-33-lowlatency #44~14.04.1-Ubuntu SMP PREEMPT Fri Mar 13 10:51:41 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
<fullstop> I tried the LL kernel on my desktop for a while.  It didn't seem to make that much of a difference for me.
<RoyK> never underestimate the speed of a cheap SSD
<RoyK> just installed it to do some testing
<RoyK> I don't really need it
<RoyK> this box is a test thing - u1404 - using a mac for most things
<RoyK> (except servers, which are either debian or rhel/centos)
<fullstop> yes, most of our servers are ubuntu-server
<RoyK> ok
<fullstop> my desktop is arch (sorry!)
<RoyK> hehe
<RoyK> no offence ;)
<RoyK> I don't like ubuntu that much on the server side - too much cutting/bleeding edge stuff
<RoyK> debian works well, though
<RoyK> (I know I'm swearing in church)
<fullstop> I like the rolling releases.  I never feel the urge to wipe and start over.  :)
<fullstop> With debian, I have an urge to compile my own stuff since the packages are ancient.
<RoyK> well, mostly it works
<Patrickdk> haven't had that issue at all
<Patrickdk> I have updated my kernel though, to get new features
<fullstop> and I've run into problems with centos where things like e2fsck had bugs in them and I had to get a package from a newer release to actually recover the volume.
<RoyK> I haven't had too many issues with ubuntu server either, but some of them were rather annoying
<Patrickdk> heh? again? more mdadm issues? :)
<RoyK> fullstop: oops - never seen that
<RoyK> Patrickdk: not really - after I dropped ubuntu server ;)
<fullstop> it was a database with a large volume.  rather stressful, since it would take many hours to restore from backup.
<RoyK> Patrickdk: but it's just that ubuntu seems very eager to implement things not well tested
<RoyK> ok, debian is very conservative
<RoyK> so new stuff must be compiled
<RoyK> but sometimes that's good
<Patrickdk> or grab from sid/testing
<RoyK> yeah, pretty good idea to mess up every dependency on the planet
<fullstop> well, the kernel is updated.. just need to restart the server at some point to test.
<Patrickdk> hmm?
<Patrickdk> never had that issue
<Patrickdk> grab package from sid/testing, rebuild on your release
<RoyK> I've seen pretty bad things from Sid
<RoyK> well - so far - just running stock stuff except zfsonlinux
<Patrickdk> works for most things, rarely will you hit a depend issue
<RoyK> never fix a running system
<RoyK> oh - here's a new kernel hack that can make my I/O performance gain .01% - GO
<RoyK> (not)
<Patrickdk> no, one should not run it :)
<RoyK> we're using centos/rhel a lot at work - a bug in the ixgbe 10GE driver makes TX packets in ifconfig and friends return zero - reported more than half a year ago and still not fixed - those RHEL fanatics are still talking about how good RHEL is compared to debian/ubuntu
<Patrickdk> ya, I built my own ixgbe dkms package, it's in my ppa
<Patrickdk> haven't had to roll it out to rhel though yet
<RoyK> same problem in debian/ubuntu? not fixed?
<Patrickdk> well, depends on the kernel you're using
<Patrickdk> as it comes with the kernel
<jrwren> each has warts.
<Patrickdk> and I dunno if it was the same issue, as there where many issues, I was concerned with
<RoyK> my munin graphs would look a bit better if that thing was fixed
<Patrickdk> the xen nic issue was really screwing up all my aws vm's
<Patrickdk> that one is fixed
 * RoyK wonders why people use xen
<Patrickdk> not sure why amazon hasn't changed yet
<Patrickdk> they are basically mostly kvm now
<Patrickdk> but still using the xen tooling
<RoyK> k
<Patrickdk> at least all of mine in aws are kvm instances
<CappyT> Hi everybody... i set up a gre tunnel and i was wondering how can use one of my server as a gateway for the other... I'm not good with routes =/
<RoyK> we have a 1024core openstack cluster at work for spawning out VMs to students
<RoyK> to be reinstalled soon (all ubuntu)
<RoyK> trying to get those professors to understand that YES - we need IPv6 - not just RFC1918
<Patrickdk> I only have 100 cores :(
<Patrickdk> have it loaded up with just over 800 windows vm's
<Patrickdk> professors understand rfc1918?
<jrwren> my networking profs did. :)
<Patrickdk> royk, just use the cgn addresses :)
<RoyK> cgn?
<Patrickdk> carrier grade nat
<RoyK> we're the college/university in Norway that has the most advanced IPv6 setup - we'll take this further
<RoyK> and avoid NAT at all costs
<jrwren> RoyK: thank you for that. I look forward to awesome ipv6 support in OS someday soon  :)
<jrwren> RoyK: is your OS ipv6 config documented publicly anywhere?
<RoyK> jrwren: most of the IPv6 support is in most OSes now
<RoyK> jrwren: the problem is the network operators
<RoyK> jrwren: it's just plain IPv6 - not even DHCPv6
<jrwren> RoyK: i'll have to look again. It has been a while, and I have very simple OS needs, just nova, no neutron.
<jrwren> RoyK: OS manages the radvd ?
<RoyK> jrwren: not sure
<RoyK> jrwren: I mostly work with storage and linux and monitoring, but not much with hard core networking
<rberg_> I see there was a backported 3.13 kernel for 12.04.. are there any plans to provide the 3.16 kernel available in 14.04 to 12.04?
<Daviey> jamespage: Am i right in saying that python3-oslo.serialization needs promoting?
<bananapie> can I debootstrap Debian Wheezy 7 from Ubuntu 10.04 ?
<PryMar56> bananapie, ls -al /usr/share/debootstrap/scripts/ | grep whee
<bananapie> yea, it's not there. so I did ln -s sid wheezy
<bananapie> it seems to have worked.
<bananapie> Debian is close enough to Ubuntu, that I think I can figure the rest out.
<bananapie> it bootstrapped. :)
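Put together, what bananapie did is roughly the following; the target directory and mirror URL are examples (wheezy has since moved to archive.debian.org), not the exact ones used here:

```shell
# lucid's debootstrap predates wheezy, but wheezy uses the same
# bootstrap script as sid, so a symlink is enough:
sudo ln -s sid /usr/share/debootstrap/scripts/wheezy

# Then bootstrap into a target directory (path and mirror are examples):
sudo debootstrap wheezy /srv/wheezy http://archive.debian.org/debian
```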
<bananapie> I can't wait until ubuntu 16.04 LTS, I am really excited about systemd. I already installed 15.04 beta in a vm to test it out :)
<PryMar56> bananapie, you could have gone from lucid -> wheezy almost 3 years ago
<PryMar56>  2 years and 10 months to be exact
<bananapie> nah, I am staying on Ubuntu. I like Ubuntu. I am instaling wheezy out of necessity and not choice. I don't like that debian doesn't come with firmware for my nics.
<bananapie> and ubuntu LTS = 5 years support. Debian is usually 3 years.
<bananapie> I wanted to switch to CentOS which gives 10 years support, but I missed Ubuntu's working default configs when installing packages
<sarnold> bananapie: if you don't mind, what kind of tasks would you be doing on a machine with a ten-year supported OS? I'm surprised how many people find 14.04 is too out of date for what they want to do (mostly those are folks wanting a brand new openjdk..)
<bananapie> I run more recent versions of Ubuntu for my desktop.
<bananapie> But for my servers, I want peace of mind. If I install a server with a config today, I don't want to have to reinstall the server until I want to reinstall it.
<bananapie> if it ain't broke, don't fix it.
<PryMar56> bananapie, apologies, wheezy was released in May, 2013 (not 2012)
<bananapie> no worries.
<sarnold> that bit I get :) but ten years is a long time, hehe :)
<bananapie> Yea, ten years is longer than I would ever need, which is why I compromised 5 years to get working default configs
<bananapie> btw : 18:03:23 up 976 days, 22:01,  1 user,  load average: 0.85, 1.02, 0.92
<sarnold> hehe
<sarnold> I had a machine with >1000 days once...
<bananapie> I am 24 days away
<sarnold> I hope your power holds out another thirty.. ;(
<sarnold> :)
<bananapie> It will. I'm in a crazy ass data centre
<bananapie> My servers have redundant power supplies, connected to redundant batteries, connected to redundant generators. in 5 years, they have never had both power circuits off at the same time.
<bananapie> 976 days ago ubuntu released a kernel patch that affected my installation.
<sarnold> haha
<bananapie> affected this server*
<bananapie> most of my other servers were rebooted more recently.
<sarnold> I can't wait for our on-the-fly kernel patching for security issues.
<bananapie> Yea, but even with on the fly patching, I'd still have to do the work during the night just in case.
<bananapie> although, on the fly patching would be awesome, it would help me convert this machine into a debian box without downtime :)
<sarnold> hehe I suspect you can change all of userspace on the fly without reboots if you really wished..
<bananapie> ...
<bananapie> that is a brilliant idea
<sarnold> .. but the kernel changes, even with on-the-fly updates, is really best suited for small individual fixes, near perfect for most security issues.. wholesale replacement of kernels is just going too far
<bananapie> yea, true.
<bananapie> I am going to test the server from chroot. If it works, that means I only have to install the boot loader, kernel and 3rd party drivers during maintenance hours :)
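Testing a not-yet-booted system from a chroot usually means bind-mounting the kernel filesystems first; `/mnt/newroot` below is a placeholder path, not one from the conversation:

```shell
# Make the kernel-provided filesystems visible inside the chroot:
sudo mount --bind /dev  /mnt/newroot/dev
sudo mount --bind /proc /mnt/newroot/proc
sudo mount --bind /sys  /mnt/newroot/sys

# Enter the new root and poke around as if it had been booted:
sudo chroot /mnt/newroot /bin/bash

# On the way out, unmount in reverse order:
sudo umount /mnt/newroot/sys /mnt/newroot/proc /mnt/newroot/dev
```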
<bananapie> god I love linux
<bananapie> thanks for your help
<bananapie> I have to go
<beanbag> I dunno what "genius"  built 14.04 lts and decided that the bootloader and os should default to whatever back-ass-wards video mode as default, BUT YOU REALLY PICKED A WINNER
<beanbag> it's caused issues on every box I tried so far
<beanbag> can't edit the boot command line because it gets so mangled you can't see wtf you are doing
<beanbag> and if you boot it, you get a freaking annoying flashing white screen
#ubuntu-server 2015-03-31
<flyback> im beanbag and I am back for more berating of this fucking stupid graphical bootloader and the default mode of "fail"
<ircfox> Hello folks!
<ircfox> I am running a vpn server and I just configured but it is not connecting from my home computer, could someone help me try to figure what is going on at server side please?
<ircfox> I was just trying to analyse the server log but to be honest I don't know which file to check.
<davidbowlby> what kind of vpn are you using
<davidbowlby> PPTP (also known as the American Indian restroom)
<davidbowlby> L2TP
<davidbowlby> ?
<ircfox> pptp
<sarnold> davidbowlby: hahaha
<davidbowlby> sarnold, that one never gets old if you ask me
<ircfox> yes
<davidbowlby> ok, one sec
<sarnold> davidbowlby: I suspect I'll be 85 and start giggling to myself and be unable to convey to anyone else why I'm laughing...
<ircfox> ok
<davidbowlby> ircfox, if you check /var/log/syslog, you can grep on pptp
<davidbowlby> you'll see some stuff there
<davidbowlby> like some idiot from korea trying to log into your server
<ircfox> davidbowlby: I don't know how to use grep command :P
<davidbowlby> it's easy, I'll show you
<davidbowlby> cat /var/log/syslog | grep pptp
<ircfox> ok
<davidbowlby> cat writes the file out to the console
<davidbowlby> you "pipe" in secondary commands that handle that output
<davidbowlby> grep lets you filter on the content
<davidbowlby> now, something to remember
<davidbowlby> if you're looking for something with a space in it or two words
<davidbowlby> surround with quotes
<davidbowlby> cat /var/log/syslog | grep "monkey login"
<davidbowlby> for example
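A toy run of the same idea, with a scratch file standing in for /var/log/syslog (grep can also read the file directly, one process fewer than the cat pipeline above):

```shell
# Fake log file standing in for /var/log/syslog:
log=$(mktemp)
printf 'pptpd[19631]: MGR: Manager process started\nsshd[22]: session opened\n' > "$log"

# Same as `cat "$log" | grep pptp`:
grep pptp "$log"

# Patterns with spaces need quotes:
grep "Manager process" "$log"
```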
<ircfox> davidbowlby: and what do I do with the result?
<ircfox> something like Mar 30 22:20:30 webfox pptpd[19631]: MGR: Manager process started
<ircfox> and MGR: Maximum of 100 connections available
<ircfox> and Maximum of 100 connections reduced to 6, not enough IP addresses given
<davidbowlby> ok
<davidbowlby> so you gave 6 IPs, but said to use 100 connections
<davidbowlby> but it's smarter than you, so it fixed that ;P
<davidbowlby> now I will introduce you to another command
<davidbowlby> tail
<davidbowlby> tail is your friend
<ircfox> davidbowlby: to be honest I think this 6 ip's is default
<davidbowlby> tail -f /var/log/syslog | grep pptp
<davidbowlby> try your connection
<davidbowlby> see what she says
<davidbowlby> tail lets you watch the log as she populates
<ircfox> MGR: Maximum of 100 connections reduced to 6, not enough IP addresses given
<davidbowlby> you pipe in the grep and you only see what you care about
<davidbowlby> right
<davidbowlby> now try to log in
<ircfox> MGR: Manager process started
<davidbowlby> you should see pptp info messages
<davidbowlby> try to connect to you pptp
<ircfox> MGR: Maximum of 6 connections available
<davidbowlby> read the works that are coming out of my keyboard
<davidbowlby> *words even :)
<ircfox> My Mac says : The PPTP-VPN server did not respond. Try reconnecting. If the problem continues, verify your settings and contact your Administrator.
<davidbowlby> ah ok
<davidbowlby> but no log entries?
<ircfox> no
<davidbowlby> see, when I connect I get this:Mar 30 22:59:26 via pptpd[19944]: CTRL: Client 192.168.1.7 control connection started
<ircfox> no log entries
<davidbowlby> so, it sounds like you aren't getting connected to your pptp server
<davidbowlby> which is exactly what your mac is telling you
<sarnold> you might need to check logs on your os x
<davidbowlby> because Macs are awesome
<sarnold> there's some kind of event viewer there or console application that should show you your logs
<davidbowlby> ok, so you can do this
<davidbowlby> pptp default port is 1723
<davidbowlby> (which is how many folks fit in a pptp)
<davidbowlby> anyway
<davidbowlby> so you can just do a simple telnet check
<davidbowlby> open Terminal
<davidbowlby> I'm assuming you know how to use your Mac
<ircfox> ok
<ircfox> go ahead please
<davidbowlby> telnet <the ip of your VPN> 1723
<davidbowlby> you should get "trying... connected to..."
<davidbowlby> if you don't
<davidbowlby> rut row, your firewall is kicking your nads
<davidbowlby> yes folks, you need to open ports to allow traffic to run to your pptp server :)
<ircfox> trying is still trying and no log at tail
 * davidbowlby has to use the pptp
<davidbowlby> yeah, trying and trying means no connection
<davidbowlby> which means you can't get in
<davidbowlby> what are you using for your firewall
 * davidbowlby prepares for the facepalm
<ircfox> Mmmm.. not sure default Ubuntu perhaps
<davidbowlby> ....ok
<sarnold> it could also be a firewall on your mac, or any firewall/router between the two machines
<davidbowlby> ircfox, first you need to know your network infrastructure
<davidbowlby> ircfox, start there
<davidbowlby> for example, I have ((internet)) ---> <ubuntu server> ---> ufw firewall ---> pptp VPN
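For the ufw layer in a topology like that, PPTP needs TCP 1723 plus the GRE protocol; `ufw allow` has no GRE shorthand, so GRE usually goes into /etc/ufw/before.rules by hand. A sketch (needs root, rule lines are illustrative):

```shell
# TCP control channel:
sudo ufw allow 1723/tcp

# GRE (IP protocol 47) carries the actual tunnel traffic; ufw has no
# "allow gre" shorthand, so add raw rules to /etc/ufw/before.rules,
# above the COMMIT line, e.g.:
#   -A ufw-before-input  -p gre -j ACCEPT
#   -A ufw-before-output -p gre -j ACCEPT
sudo ufw reload
```

Forgetting GRE is a classic PPTP failure mode: the control connection on 1723 succeeds but the tunnel itself never passes traffic.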
<ircfox> I will try to port forward my wifi
<davidbowlby> oh boy... that statement
<davidbowlby> ircfox, ok, so how is your network configured
<ircfox> no, still not working
<davidbowlby> ircfox, do you put the internet directly to the wifi router?
<ircfox> no, I got a modem (not wireless) and I got a wifi router connected to it.
<ircfox> but I think I've bridged the modem and I can no longer access it, by the way it was a way long time ago
<davidbowlby> ok, you probably need to access the modem
<davidbowlby> you probably are natting on your wifi
<davidbowlby> which isn't going to go over well depending on your setup
<sarnold> davidbowlby: did you mean "probably don't need to access the modem"?
<davidbowlby> sarnold, I mean he will need to connect to the modem
<davidbowlby> sarnold, modems have firewalls too these days, depending on your setup
<ircfox> I can turn ethernet on
<davidbowlby> sarnold, I had to set mine to forward on to mine
<sarnold> davidbowlby: hmm. I could imagine some configurations might be easier that way, but I certainly hope this doesn't require replacing the router..
<davidbowlby> sarnold, no, just configuring it
<ircfox> would you mind testing whether my vpn server is working, for me?
<sarnold> davidbowlby: i'd hope most modems in bridge mode just forward packets without inspection
<davidbowlby> sarnold, sometimes you have to connect to the modem to tell it to forward requests
<davidbowlby> sarnold, some not all, we don't even know what he has
<ircfox> before it gets too radical please
<davidbowlby> sarnold, nevertheless, he should be able to access the modem UI
<davidbowlby> sarnold, if not, that's kinda a problem too
<davidbowlby> ircfox, what's your IP, I'll telnet
<ircfox> davidbowlby: I cannot access my modem config right now. As I said it is in bridge mode, I would have to reset it.
 * davidbowlby waits for the internal subnet
<sarnold> davidbowlby: heh, I've never once needed my modem's UI :)
<davidbowlby> sarnold, I wish that were the case for me
<sarnold> davidbowlby: self-bought modems or ISP-provided?
<davidbowlby> ircfox, actually that worked
<ircfox> davidbowlby: yes?
<davidbowlby> ircfox, sometimes modems don't like you hairpinning
<ircfox> my lord!
<davidbowlby> ircfox, actually, now that I think of it, there is a setting I believe
<ircfox> So you think it is my modem?
<sarnold> ohhhhh.. is the os x machine currently "inside" the same network?
<ircfox> sarnold: no
<davidbowlby> ircfox, can you put your mac on a 3g/4g hotspot and try the connection outside of your network?
<ircfox> davidbowlby: I am connecting to the same vpn server but with ssh right now. does it say anything about modem block?
<davidbowlby> no
<ircfox> davidbowlby: I don't have any 3g/4g device.
<davidbowlby> ...what...
 * davidbowlby is amazed
<ircfox> yeah :P
<davidbowlby> you are setting up pptp
<davidbowlby> and don't have cell data
<ircfox> I have a old nokia
<davidbowlby> ...whaaaat
<ircfox> yes, lol
<davidbowlby> ok
<davidbowlby> ok, is this running on a 486?
<davidbowlby> ;)
<davidbowlby> ok ok, sry sry
<davidbowlby> ok
<davidbowlby> try the telnet to your host using the port 22
<davidbowlby> this should work just fine
<davidbowlby> you'll see what happy path should look like
<ircfox> yes
<ircfox> fine
<ircfox> SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
<davidbowlby> ok, so that's happy path
<davidbowlby> try to telnet to 1723 again
<ircfox> no still not working
<davidbowlby> make sure you have port 1723 open on the wifi firewall and forwarded to your VPN server IP (should be on the wifi)
<davidbowlby> I should say, should be on the same network as the wifi or routable via the wifi
<davidbowlby> I still don't get your network topology
<davidbowlby> is the VPN server on a different network than the wifi router?
<davidbowlby> is the Mac on a different network than the wifi and VPN?
<davidbowlby> btw, to rule out the mac firewall, you can go to System Preferences > Security & Privacy > select Firewall tab, and turn it off
<ircfox> it said it is a invalid ip which should be set in the valid subnet of something
<davidbowlby> ok
<davidbowlby> let's start here ircfox
<ircfox> right
<davidbowlby> what is the "internal" IP of the wifi
<davidbowlby> router
<davidbowlby> 192. something or a 10. something
<davidbowlby> for example
<ircfox> 192.168.1.222
<davidbowlby> ok
<davidbowlby> very good
<davidbowlby> wait, sounds a little weird
<davidbowlby> but ok
<davidbowlby> that sounds more like a client IP
<davidbowlby> but ok
<davidbowlby> so that's the IP you configure the router with, right?
<ircfox> this is my mac ip
<davidbowlby> ..
<davidbowlby> what is the wifi router IP
<ircfox> this is the ip I am using on my wifi router
<davidbowlby> something like 192.168.1.1?
<ircfox> 192.168.1.254
<davidbowlby> no, that's the IP of your computer, not the wifi router
<davidbowlby> ahhh
<davidbowlby> now we are getting somewhere
<davidbowlby> hmmm, sounds like UVerse
<davidbowlby> anyway
<davidbowlby> can't be, you have a nokia
<davidbowlby> anyway
<ircfox> hahaha..
<davidbowlby> ok, so what is the IP of your VPN (internal only please, no real IPs here)
<ircfox> not sure, let me check
<davidbowlby> kinda important to know that
<davidbowlby> so you can set the firewall rule on the wifi router to point to it...
<ircfox> what is the command I use again please?
<davidbowlby> ifconfig
<davidbowlby> this right here is why I start with teaching IP and network configuration before ANYTHING ELSE
<ircfox> 127.0.0.2
<sarnold> that's a localhost address
<sarnold> check for an eth0 or similar address
<ircfox> inet addr is 127.0.0.1
<davidbowlby> you can use grep
<davidbowlby> ifconfig | grep eth0
<davidbowlby> or
<davidbowlby> ifconfig -a | grep eth0
<ircfox> nothing
<davidbowlby> ifconfig -a | grep "inet addr"
<ircfox> I think it is called venet
<davidbowlby> that last one should do it
<ircfox> It has lo, venet0 and venet0:0
<davidbowlby> yes
<davidbowlby> and is one a 192 address?
<ircfox> venet0:0 is
<ircfox> well not 192 but not 127 either
<sarnold> venet? o_O
<davidbowlby> ircfox, what is the damn ip
<ircfox> sarnold: because it is a vps perhaps?
<davidbowlby> it's not your public one, is it
<ircfox> yes, it is
<davidbowlby> wait, a vps
<davidbowlby> ... ok
<davidbowlby> who are you using
<davidbowlby> are they openstack?
<ircfox> crissic
<davidbowlby> because most VPS folks start with port 22 open, but everything else is locked shut
<sarnold> oh man so much more makes sense now!
<davidbowlby> sarnold yeah
<sarnold> I hadn't considered that possibility.
<davidbowlby> ircfox, first of all, you don't have to do anything on your wifi router
<davidbowlby> ircfox, because the VPN isn't on your network
<davidbowlby> ircfox, which would have been nice to know ;)
<ircfox> I did use this command when configuring the pptp : sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
<ircfox> perhaps it is not working because there is no eth0?
<davidbowlby> ircfox, yes, but with a hosted service, the system can have its own firewall, not just the VM
<davidbowlby> no, ircfox, the interface names are ok
<davidbowlby> they just told us you were running a different kind of network
<davidbowlby> ircfox, I'm sorry, I'm not familiar with how they open ports up
<sarnold> and in fact if you're getting connection timeouts they almost certainly have all ports firewalled off except 22 and maybe 80...
<davidbowlby> ircfox, there should be some kind of manual on how to open a port to your virtual machine
<davidbowlby> sarnold, definitely
<davidbowlby> ircfox, most of them have some kind of network profile you can edit to allow ports
<davidbowlby> on the virtual machine properties
<sarnold> ircfox: check around their admin panel for firewalling or security groups, probably you have to do something to specify ports and networks allowed to use those ports
 * davidbowlby needs a drink now
<davidbowlby> sarnold, I was able to connect to his 1723 though from my host
<davidbowlby> sarnold, so something is still a lil off
<sarnold> davidbowlby: ohhhhhh
<sarnold> color me confused then :)
<davidbowlby> sarnold, yeah, just remembered that ahah
<davidbowlby> sarnold, it's client configuration, we're good here
<jamespage> Daviey, yes it does and thankyou if you did it :-)
<lordievader> Good morning.
<DenBeiren> goodmorning,..
<DenBeiren> when running an rsync from a server to a dreambox (linux based sat decoder) on gigabit network i get max speeds of 1.7MB/s
<DenBeiren> shouldn't this be a lot more?
<lordievader> Perhaps, perhaps not.
<lordievader> Writing to a slow disk?
<DenBeiren> it's a new WD blue or black (don't remember exactly)
<DenBeiren> it's def. the "newer faster" technology,.. not ide
<lordievader> Congested network?
<DenBeiren> i'm guessing not,.. how could i find out? my (manageable) switch is not even breaking sweat
<DenBeiren> the server has bonding
<lordievader> DenBeiren: Do you get decent speeds when you download from some other server?
<DenBeiren> the server downloads ok,.. as fast as my internet connection will allow me
<DenBeiren> the dreambox, i don't know,.. i don't normally download stuff with it
<DenBeiren> i'm now just pushing movies to it located on my server
<lordievader> Try it, you are trying to find out where the bottleneck may lie.
<DenBeiren> wget doesn't give me a speed indication,.. am i forgetting an -option?
<lordievader> DenBeiren: It should. 'wget --output-document=/dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip' is what I use to test connections.
<DenBeiren> Connecting to speedtest.wdc01.softlayer.com (208.43.102.250:80)
<DenBeiren> null                   0% |                                               |  5098k  0:14:54 ETA
<Daviey> jamespage: python3-oslo.serialization is now in main, should fix the depwait of python.oslo.log
<jamespage> Daviey, thanks - the rest of kilo-3 is now trickling through as that's unlocked most depwaits
<Daviey> super
<jamespage> Daviey, just manually twiddling rebuilds to work around circular deps in neutron and its decomposed *aas packages...
<Daviey> jamespage: Yeah, i didn't envy you this cycle....
<jamespage> Daviey, we won't have all decomposed vendor drivers in for kilo
<Daviey> jamespage: But, looks like splitting out is the way of the future... see cinder?
<jamespage> in fact not many - the *aas ones are done - I'll probably target a few for release if possible
<jamespage> Daviey, yeah
<jamespage> Daviey, it makes sense
<jamespage> Daviey, but it does increase the packaging complexity and quantity
<Daviey> jamespage: Planning to have vendor PPA's?
<jamespage> Daviey, I think that's likely yes
<jamespage> Daviey, we'll probably use that for testing and then onboard into distro next cycle
<jamespage> depending on status
<jamespage> it actually gives us a nicer QA process rather than accidentally packaging all vendor plugins as we have done in the past
<jamespage> a lot had out-of-archive deps hidden in them
<Daviey> jamespage: What about UCA?
<jamespage> archive feeds UCA
<jamespage> still
<Daviey> right, but for pulled out vendor drivers?
<jamespage> well for the first 9 months at least
<jamespage> Daviey, we already do some UCA type things for vendors
<jamespage> its not UCA
<jamespage> 'partner package archives'
<Daviey> Sorry, i mean, Kilo on 14.04, will that have the vendor drivers from PPA's or a limited set from main on vivid?
<jamespage> Daviey, most likely PPA's
<Daviey> ok, thanks
<jamespage> Daviey, any of specific interest to you?
<jamespage> (I was thinking for seeing if I could get vmware-nsx in as I have charm stuff that depends on that)
<Daviey> jamespage: More curiosity than anything else.
<lordievader> DenBeiren: Is that a normal speed for your connection?
<memoryleak> Is there a storage service you guys can recommend for backing up data of servers, other than AWS S3?
<R1ck> hi. I have a Ubuntu 12.04 server, which has nsca 2.7.2. Ubuntu 14.04 has nsca 2.9.1, whats the best way to get that 2.9.1 package installed on a 12.04 server?
<lordievader> R1ck: See if there is a backports ppa?
<R1ck> well I found this: https://launchpad.net/~bli-linsang/+archive/ubuntu/nsca-backport
<R1ck> so thats the "best" way?
<DjangoPythonist> Hi everyone, i have some trouble with an ubuntu 12.10 web server. df tells me that i have 91% (around 89G) of 94.28G of total use on a disk partition, but ncdu and du tell me that the total amount used is 44G. It's a production server and i need to free some disk space without paying for more gigabytes, and this big difference between df and du makes me crazy. Can somebody give me some light about what happened? Thks
<lordievader> R1ck: Does it have pesky dependencies?
<lordievader> R1ck: I suppose upgrading to Trusty ain't an option?
<R1ck> unfortunately not :)
<lordievader> You could also package 2.9.1 yourself.
<lordievader> But there you might run in to dependency problems.
<lordievader> You absolutely need 2.9.1?
<R1ck> yes, I'm beginning to get 2.9 clients that cannot communicate with a 2.7 daemon
<R1ck> 2.9.1 daemon is running :) backports package seems to work fine
<lordievader> R1ck: Ah, good to hear :)
<rbasak> strikov: ping, about Juju, for when you start today.
<strikov> rbasak: i just started
<rbasak> strikov: oh, OK. Google Calendar seems to think you will start in another hour :)
<rbasak> Though we just did go into daylight savings time in Europe so maybe that affects things?
<strikov> rbasak: that's right, i have an appointment today, so i started earlier :)
<rbasak> strikov: oh, OK :)
<strikov> rbasak: did you have a chance to look at the package?
<rbasak> strikov: yes. Great work!
<rbasak> strikov: I appreciate the amount of time you've had to spend on this.
<rbasak> strikov: the only review comments I have are things that I never mentioned to you - my fault.
<rbasak> strikov: only really minor - I can upload now anyway, but we can maybe fix these to save ourselves time next time.
<strikov> rbasak: sure, let's do it today
<strikov> rbasak: what do i need to fix?
<rbasak> strikov: first, we should mention the tracking bug in the changelog, so it auto-closes on upload. Eg "  * New upstream release (LP: #1416051)". No need for you to fix this - I can just do it before I upload.
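For anyone following along, a sketch of the changelog stanza rbasak describes (the package name, version, and trailer line here are invented for illustration; only the bug-closing line itself comes from the conversation). Launchpad auto-closes a bug referenced as "LP: #NNNNNN" when the upload is accepted:

```
juju-core (1.22.0-0ubuntu1) vivid; urgency=medium

  * New upstream release (LP: #1416051)

 -- Some Uploader <uploader@example.com>  Mon, 30 Mar 2015 10:00:00 +0100
```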
<rbasak> strikov: then the only other thing is minor disparity with the PPA debian/control file. I noticed because I diffed debian/ against the PPA, just to make sure that Curtis hasn't made any packaging changes in the PPA that we need.
<rbasak> strikov: specifically these two differences help with backports. They aren't technically needed in Vivid, but we've made it so they will work in Vivid anyway, and that way when we push to Trusty we won't have to change the debian/control file, which would be easier for us to manage.
<strikov> rbasak: oh, really? i did that comparison myself about a week ago and it looks like i missed something. sorry
<rbasak> The first is:
<rbasak> -               gccgo-go [!amd64 !i386 !armhf],
<rbasak> +               gccgo [!amd64 !i386 !armhf],
<rbasak> This was indeed broken on Vivid a while ago, but I added gccgo-go as a virtual package so now we can Build-Depend on it in Vivid without issues.
<rbasak> (and it'll pull in gccgo on Vivid, and gccgo-go as a real package in previous releases)
<rbasak> The second is:
<rbasak> -Depends: cloud-image-utils | cloud-utils,
<rbasak> +Depends: cloud-image-utils,
<rbasak> This only happened after I uploaded the previous Juju packaging, and then forgot to tell you about it, so I wouldn't expect you to have known about it.
<rbasak> cloud-utils got split a release or two ago, so we fall back to cloud-utils if cloud-image-utils doesn't exist.
<rbasak> I asked Curtis to add this to PPA packaging, and said I'd sync into Ubuntu packaging, but forgot.
<rbasak> strikov: both of these I couldn't have expected you to have known about :)
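As a sketch of the second change: the "|" in debian/control is dpkg's alternative-dependency syntax, so apt satisfies the first alternative where it exists and falls back to the second on releases where only the unsplit cloud-utils is available (e.g. precise):

```
Depends: cloud-image-utils | cloud-utils,
```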
<strikov> rbasak: you told me about debdiff'ing against sinzui's ppa
<strikov> rbasak: fixing it now, thanks for a review!
<rbasak> strikov: no problem. And good work! This isn't an easy package to work on :)
<rbasak> strikov: OK if you want to fix those up, then you can update the changelog with the bug reference and those other two changes, and then you can take all the credit for the upload :)
<rbasak> That'll help when you apply for upload rights.
<frudo> hi..
<frudo> how can i turn off a service on ubuntu, like with chkconfig on other linux? I am not very familiar with ubuntu, can anyone help with this
<rbasak> frudo: are you asking for the Debian/Ubuntu equivalent of chkconfig?
<rbasak> frudo: update-rc.d is used to adjust Sys V init script execution. For upstart jobs, you edit service definitions directly in /etc/init/.
<rbasak> For systemd, you copy /lib/systemd/system/... to /etc/systemd/system/ and then edit it.
<frudo> thanks. rbasak. i will try for the zabbix service..
<frudo> on ubuntu machine
<strikov> rbasak: why is 'Depends: cloud-image-utils | cloud-utils' needed? maybe just cloud-utils is okay, because it's available everywhere (including precise, where cloud-image-utils is not available)
<strikov> rbasak: okay, i figured it out myself, cloud-image-utils might be installed manually without a metapackage
<strikov> rbasak: okay
<rbasak> strikov: yes. Also, won't cloud-utils pull in cloud-guest-utils also, which we don't need in this case?
<rbasak> That was the reason it was split.
<rbasak> kickinz1: is http://askubuntu.com/q/582038/7808 relevant to your work?
<frudo> update-rc.d nginx disable  this command is working but when i try to do update-rc.d zabbix-agent disable  it's not working
<kickinz1> rbasak: yes, we are in the process to make it work on armhf. Docker is ready, working on owncloud 8
<kickinz1> rbasak (docker-1.5.0 armhf package  is ready, just need to wait for some code in the store to have it uploaded)
<strikov> rbasak: just fyi, gccgo-go is not available on precise (just in case we'll make decision to package current juju there)
<rbasak> strikov: OK, thanks. I think probably there is no gcc go package to use regardless of name on Precise?
<rbasak> strikov: so that should be OK
<strikov> rbasak: gccgo is available: http://packages.ubuntu.com/precise/gccgo
<rbasak> strikov: Oh, OK
<rbasak> strikov: I don't think there will be a demand for Juju on Precise for non-Intel that doesn't exist already, so let's just aim for parity with the PPA for now, and we can both change together later if necessary.
<rbasak> Then the diff against the PPA can remain smaller
<rbasak> strikov: thank you for pointing it out though
<rbasak> jamespage: what was the state of arm64 support in docker.io in the archive, please? I saw some back and forthing of support in the changelog, so wasn't sure of the previous status.
<jamespage> rbasak, it needs dropping for now
<rbasak> jamespage: I ask because we're having some issues with arm64 but I think we have everything else working, so am wondering if this is actually a regression in functionality or not.
<jamespage> its incomplete (specifically in one of the deps)
<rbasak> kickinz1: ^^
<jamespage> and its never worked
<strikov> rbasak: debian.tgz and .dsc are in your inbox; thanks!
<kickinz1> rbasak: OK
<rbasak> jamespage: "its never worked" is perfect for a justification for an FTBFS in the FFe bug - thanks :)
<rbasak> strikov: thanks!
<jamespage> rbasak, sorry - I meant to do that before handing over
<rbasak> jamespage: no problem
<rbasak> kirkland: hey. byobu.co seems to redirect to www.byobu.co that doesn't resolve.
<strikov> rbasak: it is accessible for me
<strikov> rbasak: i mean byobu.co
<strikov> rbasak: http://www.downforeveryoneorjustme.com/http://byobu.co
<rbasak> $ host byobu.co
<rbasak> Host byobu.co not found: 2(SERVFAIL)
<strikov> rbasak: http://pastebin.ubuntu.com/10711691/
<strikov> rbasak: you may use sshuttle or something with dns forwarding
<rbasak> strikov: thanks. I don't know why this fails for me. Firefox isn't being clear on why it complains about www.byobu.co. I see no redirect accessing it by hand over HTTP.
<bwm> Having problem setting keep alive timeout in Apache2 default virtual host
<bwm> I set the timeout in the virtual host on a clean apache2 install on 14.04 and it has no effect.
<OpenTokix> bwm: you enable keepalive in the main config, not per vhost
<bwm> OpenTokix: that's not what the apache doc says
<bwm> ... let me check that ...
<OpenTokix> bwm: I have never set it per vhost, but maybe you can then. - What is the problem you are seeing?
<bwm> [Context:	server config, virtual host]
<OpenTokix> So you set KeepAlive On - and you dont get a keepalive? - How do you check?
<bwm> It's just ignored.  I have a little curl-based test.  If I run it with keepalive configured in the default vhost - then connections are dropped.
<bwm> If I run the same test with keepalive timeout set in the global config then then connections remain open
<OpenTokix> no loadbalancer?
<bwm> If I munge the default host config to mess up the docroot I get 404's - so the virtual host config is being used
<OpenTokix> And no user agent sniffing browsermatch-directives?
<bwm> Not in this test case - the reason I'm doing this is my production load balancer config is having problems and i've traced it to the keepalive possibly being the issue
<bwm> no browsermatch directives - never heard of them :(
<OpenTokix> What load balancer are you using?
<OpenTokix> Keepalive is.... not always optimal behind a load balancer
<bwm> I'm not in this test case.  In production is an AWS loadbalancer
<OpenTokix> Is that a proxying or routing load balancer?
<bwm> Strictly - I don't know.
<bwm> I'm guessing the load balancing is done in the routers
<OpenTokix> If its routing, keepalive should be on - if its proxying, off
<OpenTokix> Esp. if you have a lot of traffic
<bwm> The AWS config assumes the keepalive is on.  We have it on - but I'm seeing occasional gateway timeout errors which can be due to the load balancer keepalive timeout being longer than the server's keepalive timeout
<OpenTokix> What is the host IP you see in your access log? - Is it clients or the load balancer?
<bwm> client
<bwm> actually - we see both
<OpenTokix> Yes, but what is the clientip, and the other is probably forward-for header?
<bwm> the other is X-Forwarded-For header
<OpenTokix> Then its proxying
<OpenTokix> Then your apache threads will be busy waiting for the proxy to send a new connection while the keepalive timeout is timing out.
<OpenTokix> and your proxy will open a new connection for each new client
<bwm> Confession: I'm a bit new to all this.
<bwm> OpenTokix: I have no idea what the LB's policy is.  It does expect keep alive to be on.
<bwm> OpenTokix: Any thoughts on why my little experiment with setting keepalive timeout on the default virtual host isn't working?
<OpenTokix> bwm: The timeout is ignored?
<bwm> OpenTokix: the timeout setting in the virtual host is ignored.  The server wide timeout is used.
<OpenTokix> In a name-based virtual host context, the value of the first defined virtual host (the default host) in a set of NameVirtualHost will be used. The other values will be ignored.
<OpenTokix> It is used in the first namebased vhost
<bwm> In my test setup I only have one virtual host - 000-default.conf
<OpenTokix> ok
<bwm> OpenTokix: one of the things I wanted to check here was that there is nothing special about the default VH, e.g. you can't override main config settings in it.
<OpenTokix> bwm: there is nothing special about it - just the default
<Adri2000> how supported and developed vmbuilder is these days? I thought at some point it was abandoned
<strikov> rbasak: i have an appointment right now and will return back in 1 hours; just fyi
<ircfox_> could someone help me figure why I am unable to connect to a pptp server please?
<bwm> OpenTokix:  Thanks for taking the time to answer my questions.
 * pmatulis didn't know people still use PPTP
<bwm> OpenTokix: I've just extended my test.  I added a second named virtual host that is a clone of the default.  It successfully overrides the keep alive timeout.
<bwm> OpenTokix: I'm hesitant to say this because I am new to this - but this is looking suspiciously like a bug
<OpenTokix> bwm: I never use the default host - so not sure it is a bug
<rbasak> strikov: OK, thanks
<bwm> OpenTokix: agree - I'm not sure either.  I guess I have a choice between - a) change the main config; b) set LB keepalive timeout to 3s c) post details of my test as an issue - but don't know where
<OpenTokix> bwm: I doubt it is a bug. - Why cant you change the config?
<bwm> OpenTokix: policy.  I'm doing automatic deploys using chef.  I've got mechanisms to configure virtual hosts.  I don't want to change the main config because then I have to maintain different versions for different apache versions
<OpenTokix> bwm: ok, and you always use the default vhost?
<rbasak> bwm: can you reproduce on a fresh install? If Apache is documented to support that configuration option on virtual hosts, and it doesn't work, then it sounds like a bug to me.
<OpenTokix> I have to test it now =)
<bwm> rbasak: my test is running in virtualbox vm with a fresh clean install.
<rbasak> bwm: which version?
<bwm> rbasak:checking ...
<bwm>  apache2 -v
<bwm> Server version: Apache/2.4.7 (Ubuntu)
<bwm> Server built:   Mar 10 2015 13:05:59
<rbasak> bwm: a clean install of your chef recipes, or a minimal test to exercise this issue?
<bwm> rbasak,OpenTokix: I could put my configs and test script on gist or somewhere
<rbasak> bwm: if it does it on a minimal test, I'd ask you to check Vivid and if that's affected then test on a build from the upstream source without packaging.
<OpenTokix> bwm: I changed it on a test machine now
<bwm> rbasak: of the chef recipe.  good point
<OpenTokix> And changing KeepAliveTimeout to 3, changed it from the default 5 - in 000-default.conf
<rbasak> bwm: assuming your test is correct, I'd be interested in a report to upstream.
<OpenTokix> rbasak: it is not a bug
<OpenTokix> Just tested it on a 14.04 updated host with apache 2.4.7
<rbasak> OpenTokix: OK, thanks. Then it's just between you and bwm to figure out what you're doing differently :)
<OpenTokix> http://pastebin.com/B85znLP3 <-- this is my 000-default.conf
<Arrick> Hey all, I am setting up a ubuntu 12.04 server at this time, and need to know how to pull the information for the current network - gateway, network, and dns - that is currently drawn from dhcp... I need to set up static addressing, but ifconfig does not give me all the information presently.
<bwm> OpenTokix: thank - I'll take a look
<Arrick> can anyone help me out with the right cmd?
<Arrick> sorry, 14.04 server, not 12.04
<OpenTokix> Arrick: ifconfig eth0 and ip route show
<OpenTokix> Arrick: and /etc/resolv.conf
<bwm> OpenTokix: that looks exactly like what I did
<mnaser> What are opinions here on running LTSL enablement stacks?
<bwm> Only I set mine to 120
<OpenTokix> bwm: I used chrome developer tools to check the keepalive  settings
<mnaser> *LTS
<OpenTokix> bwm: What curl command are you using?
<bwm> Ah - hadn't thought of that - was using curl and watching it not keep the connection alive
<OpenTokix> bwm: I get this from curl -v: "* Connection #0 to host m01 left intact"
<bwm> Can I give you my curl test that shows what happens to the connection - the headings might say one thing and the server do something else
<OpenTokix> bwm: if I put keepalive off. it say: * Closing connection 0
<bwm> And I get connection closed
<Arrick> Thanks open`
<Arrick> OpenTokix, ^
<OpenTokix> bwm: My curl doesn't say the timeout value
<OpenTokix> Arrick: you're welcome, glad to help
<rbasak> mnaser: I'm running the Trusty HWE kernel on a Precise bare metal server because I found the Precise kernel's IPv6 performance over a bridge to be terrible.
<rbasak> (oddly IPv4 was fine, and the Trusty HWE kernel fixed the issue)
<rbasak> It was the easiest way to solve the problem. As an example.
<bwm> I restricted the download rate and did two gets in curl - adjusted the rate so that it took a bit longer than 5 seconds to download index.html once
<mnaser> i'm running all trusty here but my most recent server, i installed 14.04.2 and it turns out that had a newer kernel
<bwm> Can you put your curl somewhere I can get at it?
<mnaser> so I wasn't sure if I should keep all that consistent or not
<rbasak> I'd stick to the same thing for consistency unless you have a reason to deviate.
<rbasak> With 14.04.2 you're on an HWE kernel upgrade treadmill which isn't ideal.
<rbasak> (you'll need to roll up to the 16.04 HWE kernel eventually for full LTS-period support)
<bwm> OpenTokix: curl -S -v --limit-rate 1K -S -o foo -o foo -o foo http://10.10.10.10/ http://10.10.10.10/ http://10.10.10.10
<OpenTokix> bwm: http://pastebin.com/jmh8NFpM
<mnaser> i'll likely have to upgrade myself to 16.04 anyways to keep up with openstack releases for example
<OpenTokix> bwm: Sounds like a very weird way to make this test. - I am more and more suspecting your test is erroneous rather than the config of the server.
<rbasak> mnaser: then maybe it doesn't matter so much
<mnaser> good information to have.. i'll chew on it a bit more
<rbasak> With running the Trusty HWE kernel on Precise I'm not on a treadmill, so the decision is easier for me.
<mnaser> yep, i can see the value there but i won't be running releases that long most likely
<OpenTokix> What is HWE-kernel?
<bwm> OpenTokix: I'm not going to argue with that ; I'll try your curl on my setup
<mnaser> OpenTokix: https://wiki.ubuntu.com/Kernel/LTSEnablementStack
<OpenTokix> mnaser: thanks
<bwm> OpenTokix: your evidence is that curl reports that it leaves the connection open after downloading one file.  My evidence is that when I download two files - the connection gets closed by the server after 5 seconds.
<bwm> OpenTokix: i.e. between the downloads of the two files
<OpenTokix> bwm: Default keepalivetimeout is 5s
<OpenTokix> bwm: If you go above 10s - other things will close the connection, like your tcp settings and such - both client and server.
<bwm> OpenTokix: Right - the default is 5 seconds.  And when I try to override it - its still 5 seconds.
<bwm> OpenTokix: whatever the headers say - the connection is getting closed.
<OpenTokix> bwm: Maybe you have aggressive tcp settings on your client, or are you testing via localhost?
<bwm> OpenTokix: can you confirm the result I'm getting?  The connection is getting closed.
<OpenTokix> bwm: I can't
<OpenTokix> I cant see the time for the connection reset
<bwm> OpenTokix: I'm on a default ubuntu config
<bwm> OpenTokix: its not getting closed on a named VH which is a clone of the default.
<bwm> OpenTokix: I think that lets tcp off the hook?
<OpenTokix> bwm: Im not sure how to check it reliably
<Arrick> if I get that /dev/sdb doesnt contain a valid parition table, does that mean I just need to create a partition and format it?
<bwm> OpenTokix: well - if you have a big enough file to get, and ensure it takes between 5 and 10 seconds to download - then that controls the timing of when the second request goes in and whether it finds an open or closed connection
<bwm> OpenTokix: the message I get is "* Connection 0 seems to be dead!"
<OpenTokix> ok
<OpenTokix> bwm: Damn you, I got curious now =)
<bwm> OpenTokix: I've been damned for quite a while now :)
<Adri2000> hallyn: hi, can you tell me more about the status of vmbuilder? I thought it was abandoned
<kirkland> rbasak: doh.  thanks for that.  I just fixed it
<davegarath> Hi all, I'm trying to install an ubuntu 14.04 server with / encrypted on a machine with 12.04 installed with lvm ( /boot shared )
<OpenTokix> bwm: I see it =) =)
<bwm> OpenTokix: wondering what you have seen
<OpenTokix> bwm: connection seems to be dead
<davegarath> I configured my encrypted volume but I have an error message at the end of partitioning: "The attempt to mount a file system with type ext3 in Encrypted volume (myvolname) at / failed"
<davegarath> and it asks me to resume partitioning
<OpenTokix> bwm: curl --verbose --limit-rate 2k -w "%{time_total}\n" http://m01/1 http://m01/3 http://m01/3 -o /dev/null -o /dev/null -o /dev/null <-- And 1,2,3 is a 10k random file.
<bwm> OpenTokix: So confirmation?  And if you try it on a second vhost?
<davegarath> what am I doing wrong?
<OpenTokix> bwm: if the config parameter is inside the virtualhost block, it will be ignored
<bwm> OpenTokix:  by config parameter you mean the KeepAliveTimeout directive?
<OpenTokix> yes
<OpenTokix> And it works as expected
<bwm> OpenTokix: If you test it in a named virtual host block, other than the default, I think you'll find it is not ignored.
<OpenTokix> it is ignored if its inside the Virtualhost block
<bwm> OpenTokix: irc is wonderful - but still a limited channel :(
<bwm> Open Tokix: do you mean in any virtualhost block
<bwm> ?
<OpenTokix> bwm: it is not ignored inside the default block
<OpenTokix> in the default file
<OpenTokix> for the * host
<OpenTokix> bwm: if the default config is enabled, it will take the default timeout and set it - if you disable the default (*) virtualhost it's different. - Ie. the default values get added to the * vhost, if not stated. - And that host takes precedence over any other host.
<bwm> OpenTokix: I'm sorry but I'm not following you clearly.  Can we assume we have 3 apache config files, apache2.conf, 000-default.conf, and another-vhost.conf
<OpenTokix> You can only have one KeepAliveTimeout value - but even if you don't state it - the * will take the default value. - If you do a2dissite of the default host - and change the keepalive timeout on another vhost, it will respect that value.
<OpenTokix> bwm: if you disable 000-default.conf - it will respect the KeepAliveTimeout set in another-vhost.conf
<bwm> OpenTokix: If I have keepAlivetimeout values of 5 in apache2.conf, 120 in 000-default.conf and 120 in another-host.conf then when I access the default vhost I get a timeout value of 5 and when I get access another-vhost I get a timeout value greater than 5
<OpenTokix> no, it will be set to 120
<bwm> OpenTokix: so I can have different keepalive timeouts on different vhosts at the same time.
<OpenTokix> bwm: no
<OpenTokix> It will take the first one, - and in your case 000-default
<OpenTokix> unless you name your other vhost 000-another-vhost.conf
<bwm> OpenTokix: ok - that sounds interesting - what is your evidence for that?
<OpenTokix> bwm: My tests, and what the documentation say
<OpenTokix> From docs: In a name-based virtual host context, the value of the first defined virtual host (the default host) in a set of NameVirtualHost will be used. The other values will be ignored.
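The documented behaviour OpenTokix quotes can be sketched like this (hostnames are invented; per the docs, only the first-defined name-based vhost's value on a given IP:port pair is honoured, though the tests in this channel disagreed about whether a vhost value overrides apache2.conf at all):

```apache
KeepAliveTimeout 5             # server-wide default (apache2.conf)

<VirtualHost *:80>             # first-defined (default) vhost on *:80
    ServerName default.example.com
    KeepAliveTimeout 120       # per the docs, used for the whole set
</VirtualHost>

<VirtualHost *:80>             # later name-based vhost on the same pair
    ServerName other.example.com
    KeepAliveTimeout 30        # per the docs, ignored
</VirtualHost>
```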
<bwm> OpenTokix: From the docs - the default vhost is not used when the hostname in the request matches the servername in the vhost config.
<bwm> OpenTokix: those were my words - I'll go lookup the docs for the quote.
<OpenTokix> bwm: no, that is for the requests
<OpenTokix> bwm: has nothing to do with how the configuration is "built" when the server is started
<OpenTokix> bwm: That quote does not sound like it come from apache official docs.
<bwm> OpenTokix: "If multiple virtual hosts contain the best matching IP address and port, the server selects from these virtual hosts the best match based on the requested hostname."
<OpenTokix> bwm: yes, that is how virtual host matching works -- but that has nothing to do with how the configuration is built for the actual networking stack inside apache when you start it, two entirely different things
<bwm> OpenTokix: Ah - can you tell me where in the docs I should look for how the config is 'built'
<OpenTokix> bwm: no, not really - it's more about understanding how the configuration works with includes etc.
<rbasak> kirkland: no problem. I don't understand why strikov couldn't reproduce it though? Anyway, no matter. Works for me now.
<OpenTokix> bwm: apache either starts a worker process that divides the connections among threads, or it has a prefork model where there are different processes. - However, one and only one process binds to port 80 and divides the connections
<bwm> OpenTokix: don't you think it would be a bit weird to explicitly allow the specification of a keepalivetimeout inside a vhost block and then not honour it?
<OpenTokix> bwm: keepalivetimeout is set when you create the socket
<OpenTokix> bwm: it is layer 2 in the OSI model, and the vhost is much higher up
<bwm> OpenTokix: I was asking for your evidence that you can't have different keepalive timeouts for different vhosts at the same time.  You mentioned you had a test that shows this.  Can you please describe the test.
<OpenTokix> bwm: I created an extra vhost called mmm and I disabled 000-default.conf - then it respects the keepalivetimeout inside the vhostblock of mmm-vhost. - When 000-default is enabled, it will take the default value from apache2.conf
<bwm> OpenTokix:  I believe there is a keepalive at the TCP level.  I don't think the keep alive we are talking about is the same thing?  I think Apache has its own keep alive mechanism.
<OpenTokix> bwm: And if I disable 000-default.conf - it will take the first KeepAliveTimeout it finds, inside the block inside test.conf (that has the mmm vhost)
<OpenTokix> bwm: Regardless, you can have one and exactly one KeepAliveTimeout per apache2 server instance
<bwm> OpenTokix: a test - excellent.  I get a different effect.  When I enable both 000-default and mmm I get two different timeouts.
<OpenTokix> And you have the KeepAliveTimeout inside the vhost block
<OpenTokix> ?
<bwm> OpenTokix: yes.
<kirkland> rbasak: yeah, I'm confused as to how this happened too;  nothing has changed in my registrar in years
<OpenTokix> bwm: I dont
<bwm> OpenTokix: I've burned a lot of your time with this.  We could stop at this point and say we understand why we are getting different results.
<bwm> OpenTokix:  I'll check my test
<OpenTokix> bwm: Also confirm the timeout with devtools in chrome
<OpenTokix> bwm: I am 100% sure about my config, since both tests show the same results. - Both curl with limit and devtools show the same time.
<bwm> OpenTokix: I've just tried to reproduce my test showing multiple different timeout values and failed.  Not sure why - need to investigate further.  For now - my assumption is I screwed that test up somehow.
<strikov> rbasak: do you have any idea why this package has such strange naming: http://packages.ubuntu.com/vivid/libgnutls-deb0-28
<strikov> rbasak: (a) what does deb0 mean (b) why is the package called -28 while it contains 3.3.8
<OpenTokix> bwm: sounds plausible
<bwm> OpenTokix: thanks for all your help
<rbasak> strikov: I don't know what the deb0 means.
<rbasak> strikov: but the -28 often refers to a sover, so that multiple sovers can be installed concurrently. This is helpful during transitions.
<rbasak> strikov: yeah so /usr/lib/x86_64-linux-gnu/libgnutls-deb0.so.28 is where the 28 comes from
<rbasak> And apparently libgnutls-deb0 is the soname, hence the package name
<rbasak> strikov:
<rbasak> gnutls28 (3.3.2-2) experimental; urgency=high
<rbasak>   * Fix crashes due to symbol clashes when a binary ends up being linked
<rbasak>     against GnuTLS v2 and v3 by bumping library symbol-versioning (and
<rbasak>     therefore also the soname) in a Debian specific way, to make sure there is
<rbasak>     no conflict with future:
<strikov> rbasak: i thought that sover mimics the actual version of the codebase but it seems to be wrong
<strikov> rbasak: thanks a lot!
<zetheroo> Hi -  I am trying something out here with Ubuntu - I have LAMP stack installed and tt-rss - the idea is to now have some kind of blog page on the same server which can be updated with short IT messages for staff such as "service xyz is down ... we are working on a solution" - I was going to install Wordpress for this purpose, but it seems overkill. Is there anything else which could work?
<rbasak> strikov: speaking of which, your latest Juju packaging is great and fine to upload. I have one very minor comment though, for next time.
<rbasak> "Change build dependency from gccgo to gccgo-go." could do with a reason.
<rbasak> As we try to explain _why_ in a changelog entry, as well as what, for when someone asks years later :)
<rbasak> No need for an update this time though. I'll upload.
<strikov> rbasak: yeah, good point; thanks
<bananapie> sarnold => I used debootstrap to run Debian Wheezy in a chroot in Ubuntu. I migrated all my services without rebooting. All that is left now is to install the debian linux kernel and reboot. In the mean time, the server is running perfectly in the chroot :)
<rbasak> strikov: Juju uploaded. Thank you! That was not a trivial piece of work.
<strikov> rbasak: thank YOU! I volunteer for the next release's packaging; I hope it'll take much less time because i know what to do.
<ircfox> I am trying to set a pptp server at a vpn but it is currently not working. could someone help me figure how to solve it please?
<Arrick> I have a server I am working on, and I need to know if "ifdown em2 && ifup em2" is the proper way to restart a nic (I don't want to run down and then up separately, or else I have to go onsite and run the up command).
<rbasak> Arrick: define "restart"
<rbasak> Arrick: if you want to change /etc/network/interfaces, ifdown, then edit, then ifup.
<Arrick> rbasak, I changed the dns in /etc/network/interfaces and also in /etc/resolv.conf
<rbasak> Sometimes you can get away with editing first, but that's not the "proper way"
<rbasak> For DNS changes you can get away with it though :)
<Arrick> the edit has already been done, and I am at a remote site.
<Arrick> so I am wondering how to make the 2 commands run, so I can get back in
<rbasak> I would run screen, then inside the screen "ifdown em2; ifup em2"
<Arrick> it worked, nevermind.
<rbasak> Not &&, since you want to at least try run ifup even if ifdown fails.
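rbasak's point about ';' versus '&&' can be demonstrated without touching a real NIC ('false' stands in here for an ifdown that fails):

```shell
# ';' runs the next command unconditionally; '&&' only runs it if the
# previous command succeeded. With a flaky interface you want ifup to
# be attempted even if ifdown fails, hence ';' inside the screen.
false ;  echo "after ';': ifup would still be attempted"
false && echo "after '&&': ifup would never run"
true    # keep the script's own exit status clean
```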
<bwm> OpenTokix: sorry - back again: you were sorta right about different VH not being able to have different keep alive timeouts. :)  The 'sorta' is that's true for VH's on the same IP and port.  I found the bit of documentation you referred to and I think I understand it now.
<bwm> OpenTokix: I still read that documentation as saying that the first VH config should override the server wide config timeout setting.
<bwm> OpenTokix:  I think we agree from our testing that is not happening.  I'm back just to check that and whether you agree that documentation says the override should happen.  Or am I missing something.
<Arrick> Good morning all... I made a change to my apache2 configuration to make it match the server I am migrating from, and when I go to restart Apache2, it gives me an error... AH00526: Syntax error on line 44 of /etc/apache2/sites-enabled/000-default.conf: Invalid command 'NTLMAuth', perhaps misspelled or defined by a module not included in the server configuration Action 'configtest' failed.
<Arrick>  You can see my .conf (scrubbed for security) file at http://paste.ubuntu.com/10712945/
<Arrick> any help getting this resolved would make me grateful.
<ircfox> How do I test a port to see if its open or closed?
<rbasak> Arrick: NTMLAuth sounds like it's for a module not shipped with Apache by default. I'm not sure though. So maybe you need to install the module?
<Arrick> not sure, lol... I just performed an apt-cache search, no such animal
<Arrick> nice, sourceforge is temporarily offline
<fullstop> I think that sourceforge is just waiting for everyone to leave
<Arrick> oh really?
<hallyn> Adri2000: vmbuilder is basically abandoned, but i couldn't pull it from the archive bc my patches to replace it with uvtool in adt weren't accepted in time for 14.04.
<guitarzan> hi folks, does this ring any bells? GPG error: http://ubuntu-cloud.archive.canonical.com precise-updates/havana Release: The following signatures were invalid: BADSIG 5EDB1B62EC4926EA Canonical Cloud Archive Signing Key <ftpmaster@canonical.com>
<guitarzan> zul: jamespage: adam_g said you might know about this? ^^^
<mgagne> guitarzan: have this package installed? http://packages.ubuntu.com/precise-updates/misc/ubuntu-cloud-keyring
<guitarzan> I assume so, let me check that box
<guitarzan> the havana archive worked before today
<mgagne> otherwise I have nothing else to suggest =)
<guitarzan> mgagne: haha, ok :)
<jathan> Hello Ubuntu channel. Does someone know how can I redirect one domain to another domain with ssl certificate using Apache 2.4 in Ubuntu 14.04 please?
<jathan> I tried already with virtual host conf and .htaccess and followed this link http://stackoverflow.com/questions/14565560/redirect-all-traffic-from-one-domain-to-another
<jathan> But the second domain still isn't redirecting to the first one
<jathan> Can someone help me please?
<BrianBlaze420> jathan: check #httpd
<jathan> Ok thanks BrianBlaze420
<BrianBlaze420> sorry I can't help you more than that, I too have been trying to work this out with the same version of ubuntu server
<Sling> jathan: so you want to redirect from a ssl vhost to another ssl vhost?
<Sling> that should work fine, just use Redirect and two virtualhosts each with valid certificates
<Sling> if it doesn't, feel free to share vhost configuration and details on what's going wrong :)
<jathan> Only 1 domain has the SSL Certificate of the entity
<jathan> And the second domain does not have
<jathan> Where can I paste you my conf?
<Sling> jathan: apaste.info for example
<jathan> Ok
<Sling> it has highlighting for httpd config
<jathan> Thanks Sling :)
<jathan> Done. http://apaste.info/9Di and http://apaste.info/QYQ
<Sling> jathan: these are essentially the same vhost?
<Sling> just the servername/serveralias swapped
<Sling> ah no, but both are handling www.crieit.com.mx
<jathan> yes
<Sling> why?
<jathan> www.crieit.mx is the domain with SSL
<jathan> Is not correct then
<jathan> both handle www.crieit.com.mx
<Sling> i still don't know what you're trying to do exactly and why
<jathan> Ok. I will explain :)
<Sling> also the proxypass directives are not done correctly
<Sling> the path you give there should be part of the URI, not an absolute filesystem path
<jathan> www.crieit.mx is the main domain and the one that has the SSL certificate from the authority. www.crieit.com.mx has no SSL certificate
<jathan> And it is the domain that I want to redirect to www.crieit.mx
<Sling> okay
<jathan> Do I delete the proxy part?
<Sling> I don't know why it's there
<Sling> if you don't know either, remove it :)
<jathan> Because I tried different methods ja
<jathan> OOk
<Sling> also remove the mod_rewrite stuff too, you don't need that for redirecting
<Sling> and remove the ServerAlias'es
<jathan> Ok
<jathan> In both files?
<Sling> you should have one <VirtualHost *:443> ServerName www.crieit.mx ... <VirtualHost> and one <VirtualHost *:80> ServerName www.crieit.com.mx ... <VirtualHost>
<Sling> then put 'Redirect / https://www.crieit.mx' in the non-ssl vhost
<Sling> and you should be done
<jathan> Ok. Here I go.
<Sling> sorry those '... <VirtualHost>' should be '... </VirtualHost>'
<Sling> in the non-ssl vhost you don't need a DocumentRoot btw, just the ServerName and Redirect line are all
<Sling> assuming you want to redirect *everything* landing there
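Sling's two-vhost layout, spelled out (a sketch: the DocumentRoot and certificate paths are placeholders, and the SSL directives assume mod_ssl is enabled):

```apache
# Non-SSL vhost: only redirects; no DocumentRoot needed.
<VirtualHost *:80>
    ServerName www.crieit.com.mx
    Redirect / https://www.crieit.mx/
</VirtualHost>

# SSL vhost: serves the actual site.
<VirtualHost *:443>
    ServerName www.crieit.mx
    DocumentRoot /var/www/html/crieit.mx
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/crieit.mx.crt       # placeholder path
    SSLCertificateKeyFile /etc/ssl/private/crieit.mx.key     # placeholder path
</VirtualHost>
```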
<jathan> If I activated .htaccess and created a file in both domains in /var/www/html/crieit.com.mx and /var/www/html/crieit.mx, does this affect the virtual host conf files in sites-available?
<Sling> no need for htaccess if you have access to the main config
<jathan> Ok
<Sling> just leave it disabled
<jathan> Done Sling.
<jathan> I restarted apache but still appearing Your connection is not private
<jathan> if I enter https://www.crieit.com.mx/
<Sling> well, of course
<jathan> I will paste you my files again
<Sling> that's not what you said you wanted
<Sling> you can only redirect http://www.crieit.com.mx to https://www.crieit.mx
<jathan> Sorry, maybe I did not explain well. I mean that https://www.crieit.com.mx/ should send to https://www.crieit.mx/
<jathan> If it is possible
<jathan> ?
<Sling> you can only do that with a certificate that is valid for www.crieit.com.mx
<jathan> Ah I see
<Sling> otherwise it would be a big flaw in the ssl protocol :)
<jathan> jaja
<Arrick> Good Afternoon All, I am trying to get a Totara Moodle site up and running all the way, however, I have an issue with a "broken helper" for the single signon with active directory.... Here is my 000-default.conf file, http://paste.ubuntu.com/10712945/ and my errors can be found at http://pastebin.com/tkaJqDKk I am running Apache 2.4.7 on Ubuntu server 14.04. Any help would be appreciated.
<jathan> So I can not link the url https://www.crieit.com.mx to https://www.crieit.mx as symbolic link (for mean something)
<bekks> Arrick: "This paste has been removed".
<Arrick> ok, pasting again.
<Arrick> http://paste.ubuntu.com/10714203/
<Sling> jathan: nope
<Sling> think about it, the http client (browser) sends a HTTPS request with the Host header 'www.crieit.com.mx' to the webserver, apache will pick the right vhost for this Host header, and then start a SSL negotiation for that hostname
<Sling> then the SSL certificate is offered to the browser, which sees a different hostname in the certificate, and gives you an error
<Sling> only after the ssl handshake is done and the browser accepts the certificate does the rest of your config, like proxies or serving files, become relevant
<Sling> Arrick: seen https://bugs.launchpad.net/ubuntu/+source/apache-mod-auth-ntlm-winbind/+bug/1304953/comments/2 ?
<jathan> Oh wow. That was my problem the whole time since I started to set up web servers
<Arrick> Sling, yep, tried it
<jathan> Thank you very much Sling. You helped me a lot and resolved my doubts :)
<Sling> np :)
<Sling> Arrick: ok :) no clue then, never used that module
<Arrick> lol
<Arrick> I've been googling since I last posted in here, trying to find the answer on my own first.
<jathan> Is it possible to do the same with www.crieit.mx (without ssl link) to https://www.crieit.mx?
<jathan> I created another virtual host for it, but does not work
<jathan> Following the same for www.crieit.com.mx
<Arrick> hey, based on this line in my 000-default.conf is there a module or something I need to install?  NTLMAuthHelper "/usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp"
<jathan> I only changed name server
<Sling> jathan: read what i said before about the *:80 and *:443 vhost
<jathan> Ok Sling :)
<jathan> The fact is that I tried creating a file named /etc/apache2/sites-available/crieit.mx_wssl.conf
<jathan> (without ssl)
<jathan> Leaving *:80
<jathan> Since the beginning
<jathan> Oh, I forgot to enable it
<jathan> with the command
<jathan> Sorry let me check
<jathan> a2ensite
<jathan> Yes :)
<jathan> Thanks Sling
<Sling> yw
<diphtherial> hey, i've been having this inexplicable issue lately where every time i attempt to install a python package (say, via pip) even into a virtualenv, it reports that i don't have space on the device
<diphtherial> df isn't reporting that anything is full, though: https://dpaste.de/xORj
<teward> diphtherial: actual error message?
<diphtherial> teward: https://dpaste.de/hyrO
<Pici> diphtherial: tmp is full though.
<teward> ^
<Sling> recreate the tmp mount with more space
<diphtherial> true...hrm. i'm confused that it's only 1mb in size
<diphtherial> i hadn't run into this problem before today and i've been using this VPS for a good two years now...
<Sling> mount -t tmpfs -o size=<bytes>,mode=1777 overflow /tmp
<Sling> after unmounting the current one
<diphtherial> Sling: alright, sounds reasonable; thanks
<Sling> did your / fill up recently?
<diphtherial> it did, but i resolved it by adding an extra volume to the VPS and moving my giant postgres db to it
<Sling> this is a remnant of ubuntu panicking about that :)
<diphtherial> aha, fascinating
<diphtherial> i'm kind of nervous to modify my fstab...the last time i did, the VPS became unbootable and i had to wait two days for my VPS maintainer to reboot the machine
<diphtherial> (my VPS provider -- also my university -- doesn't have a means to get console access. they only allow you to access the server via ssh, which is impossible if it's halting before sshd comes online)
<diphtherial> i'm having some trouble unmounting /tmp; the system is complaining that it's in use, which makes sense...
<teward> is remount a valid option?
<teward> i.e. mount -t tmpfs -o remount,size=<bytes>,mode=1777 overflow /tmp
<teward> (not sure, don't run unless someone confirms)
<diphtherial> noted, thanks. on a side note, what's a reasonable size for it? apparently 1mb is far too small
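For reference, the remount teward describes, as a dry-run sketch: the leading `echo` prints the command instead of running it, 512M is an arbitrary choice, and the real thing needs root.

```shell
# Resize the emergency 1 MB "overflow" tmpfs on /tmp in place.
# tmpfs defaults to half of RAM when no size is given, so 512M is conservative.
TMP_SIZE=512M
echo mount -o "remount,size=${TMP_SIZE},mode=1777" /tmp
```

Drop the echo (and add sudo) to apply it; a tmpfs accepts a new size= on remount without being unmounted first.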
#ubuntu-server 2015-04-01
<maddawg2> hey all...  so been using ubuntu server at work for a number of our facilities (open vpn tunnels, squid proxy, dhcp, firewall, etc)...  normally we've been using Firewall Builder so that some of our windows system administrators can configure the firewall with a GUI... however it looks like Firewall Builder has stopped development
<maddawg2> any alternatives people can recommend?
<sarnold> maddawg2: ufw is simple enough, even though it's commandline; I think someone put together a gui around it but I can't vouch for the quality of it..
<lkthomas> guys, anyone have experience to run multicast routing ?
<maddawg2> yea sarnold, we're looking for a GUI, but we don't want a gui on the server itself
<maddawg2> with firewall builder we could install it to Windows and make our rules there and it would generate a config file that would then get uploaded to the ubuntu server
<sarnold> maddawg2: you could try X forwarding, ssh -X hostname xterm   to get a quick idea of what I mean..
<maddawg2> yea but then they wouldnt be on windows
<maddawg2> these are windows administrators
<maddawg2> needing to administer a linux firewall
<sarnold> ahh
<lordievader> Good morning.
<Gwys> Hi all ! I'm trying to install openstack through MAAS following this documentation http://ubuntu-cloud-installer.readthedocs.org/en/latest/multi-installer.guide.html
<Gwys> But I've an issue with br0 during openstack-install
<Gwys> Someone can help me ?
<Gwys> I see some errors on br0 in the log files. And it looks like the script comments out my interface in /etc/network/interfaces
<Gwys> Here are the error logs: http://pastebin.com/22yELB7r
<strikov> rbasak: regarding this bug: https://bugs.launchpad.net/ubuntu/+source/mysql-5.6/+bug/1438788
<strikov> rbasak: it's an upgrade-related thing; the previous version of the package generated this incorrect symlink and we have to manually remove it
<strikov> rbasak: it was not a good idea to remove it somewhere inside installation handlers of the new version because this file/link may be legal
<strikov> rbasak: i.e. user created it manually not buggy package
<strikov> rbasak: that's definitely a bug of debhelper-systemd and i need to file it
<rbasak> strikov: OK. So workaround available, and only affects users who had mysql-server-5.6 5.6.23-1~exp1~ubuntu4 installed?
<strikov> rbasak: i'm reproducing this now on a cloud instance to provide a workaround which 100% works
<strikov> rbasak: yes, only when you upgrade from ubuntu4
<rbasak> strikov: I think it's OK to leave it then - we can just explain it in the bug for users to apply the workaround.
<strikov> rbasak: ok
<rbasak> strikov: and then explain that it's too difficult to fix without breaking other users, and then mark it Won't Fix.
<Arrick> hey all, if I am working with a .conf file, is ; a commented line?
<davegarath> Arrick: It depend for what application is .conf file. Usually a comment is #
<Arrick> its the smb.conf
<Arrick> there are dozens of lines which start with ; that follow the # lines
<davegarath> smb.conf use both  # and ; as a comment
<davegarath> # is used as a comment and ; is used to comment a statement
<Arrick> I'm having an issue with winbindd and smb, cant seem to figure out how to get it to lookup usernames, or anything.
<Arrick> wbinfo -u says error looking up domain users
<strikov> rbasak: just fyi, debian guys provide cloud images since jan2015: http://cdimage.debian.org/cdimage/openstack/testing/
<strikov> rbasak: it might be useful for testing while filing debian bugs
<Adri2000> I've just discovered uvt; I understand it as being a way to create VMs from cloud images. I have a side question: is there a recommended way to build cloud images? is the toolchain used for building those at cloud-images.ubuntu.com available somewhere?
<jcastro> Adri2000, check this out: https://launchpad.net/~ubuntu-on-ec2
<jamespage> coreycb, huh - quick poke at the ci builds - missing deps for the source packages was not helping as a result of move to systemd
<rbasak> strikov: that's useful. Thank you!
<coreycb> jamespage, what was missing?
<rbasak> Adri2000: I think our toolchain for building cloud images is available. utlemming might be able to help with that. But it is not recommended. We think that you should use "official" cloud images instead, and use cloud-init on first boot to customize them as needed.
<rbasak> Adri2000: or, if you must, modify the official cloud image for local use but starting from the official one, rather than going from scratch.
<rbasak> Adri2000: of course, you can do what you like. We just try to best support that workflow.
<MDTech-us_MAN> hello
<jamespage> coreycb, dh-systemd and openstack-pkg-tools
<jamespage> without those you can't cut the source packages
<MDTech-us_MAN> what is a good program that will backup specific programs and file every once in a while? maybe even somethign with a good (web?) interface?
<coreycb> jamespage, so the ci builds don't use the deps from debian/control?  because those should be in the debian/control files.
<jamespage> coreycb, not for cutting the source packages
<coreycb> jamespage, ok
<Adri2000> rbasak: typical use case is I want ubuntu cloud images that include specific configuration for my local network (think, apt mirrors and such). what would be the proper way to create those, if not using the toolchain used to build the "official" images?
<Odd_Bloke> Adri2000: You have two options, really: (a) take the cloud images and modify them, or (b) use cloud-init to do what you need to do on first boot.
<Odd_Bloke> Adri2000: The toolchain used to build the official images starts from scratch, but you don't have to start from scratch because we build the official cloud images. :)
<Adri2000> Odd_Bloke: then what tool do you recommend to do (a) ?
<Odd_Bloke> Adri2000: Have a look at http://ubuntu-smoser.blogspot.co.uk/2014/08/mount-image-callback-easily-modify.html
<Adri2000> thanks
<rbasak> Adri2000: to set apt mirrors and things, I suggest you use cloud-init. Then you don't have to keep re-rolling your customised cloud images.
<rbasak> Adri2000: you can inject configuration information into the cloud images, which cloud-init then uses. http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt documents the configuration you can do.
<rbasak> Modifying a cloud image is easy. Maintaining that setup is not.
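A minimal sketch of the approach rbasak describes (the mirror URL is a placeholder; `apt_mirror` is among the keys documented in the cloud-config examples linked above):

```yaml
#cloud-config
# Injected as user data (or vendor data) at instance creation; cloud-init
# applies it on first boot, so the image itself never needs re-rolling.
apt_mirror: http://mirror.example.internal/ubuntu
```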
<Adri2000> rbasak: I know, but I'd like to offer users (internal IaaS/OpenStack users) images that work out of the box, and therefore do not require them to add userdata if they don't need anything specific
<Adri2000> rbasak: of course I'll have to maintain my custom images, that's why I need to automate the process
<Adri2000> mount-image-callback may be part of the solution
<rbasak> Adri2000: look into vendordata. It lets you provide defaults that userdata can override, but if no userdata is used your users will get your apt mirror by default.
<Odd_Bloke> Adri2000: If you're on OpenStack, you could use vendor... that.
<rbasak> (unless users actually touch that setting in userdata)
<strikov> rbasak: could you change status of this bug to won't fix please: https://bugs.launchpad.net/ubuntu/+source/mysql-5.6/+bug/1438788
<strikov> rbasak: i don't have permissions to do this
<strikov> rbasak: i investigated this and (a) upgrade from ubuntu4 to ubuntu5 runs smoothly (the issue reported was observed with the previous version of the package I assume) (b) i can observe the issue when removing the package but it's a result of a previously created symlink which requires manual action
<rbasak> strikov: yes, but please could you first explain in the bug why the bug should be Won't Fix?
<teward> rbasak: i was about to ask that too xD
 * teward was about to hit "Won't Fix" too xD
<teward> rbasak: stupid question for you with regard to freezes, but a bug of mine got poked saying "Shouldn't the fix for this be SRU'd?" on nginx, and it's not in Vivid yet - it'd set the thing to build as position independent - would that even qualify for SRU or even a bug that'd get past featurefreeze?
<rbasak> teward: what's the bug?
<rbasak> teward: if it's a security bug, then the normal SRU process doesn't apply. An update would go via security sponsorship itself, and the security team would judge security impact vs. regression risk themselves.
<teward> rbasak: i'll poke mdeslaur in either case, but the other problem is the fix isn't even in Debian yet - just committed
 * teward digs for the bug
<teward> wow i still had it open from 2 hours ago xD
<teward> https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1315426
<teward> rbasak: it reads as a feature request, but i'm not sure if it needs to be a security bug that mdeslaur / security team would review
<mdeslaur> teward: what do you need me for?
<teward> mdeslaur: oop forgot you're here xD
<mdeslaur> nah, that's an SRU, not a security vulnerability
<teward> that's what i thought
<rbasak> If it's not a security vulnerability, then what's the user impact that necessitates an SRU?
<teward> rbasak: AFAICT there isn't one
<teward> not unless package policy starts pushing for PIE as a requirement for inclusion anywhere
<rbasak> Then an SRU isn't appropriate IMHO.
<teward> mhm
<rbasak> Unless mdeslaur says it's worthwhile as an SRU for security reasons even if it shouldn't go through a security upload.
<teward> mdeslaur: the question goes back to you, whether it'd be worthwhile as an SRU for security reasons or not.  (Note the changes are committed in Debian but not implemented anywhere, not even in Vivid)
<teward> so it'd need a nitpick pull from Debian git, added to Vivid, then SRU'd.
<mdeslaur> I don't think it's worthwhile as an SRU, no
<teward> rbasak: and since it'd be needed in Vivid, the question then becomes whether FeatureFreeze prevents this, or whether i have to go poking the release team for an FFe
<mdeslaur> it's just hardening, it has no direct benefit
<strikov> rbasak: teward: is it okay to close the bug with 'won't fix' if the root cause of the bug is in different project? in our case this issue arises from the fact that debhelper can't handle aliases in systemd's unit config.
<rbasak> strikov: no, I don't think so. If the bug cannot be fixed in this package, then the bug should be reassigned to the correct package, or a new task added and the mysql-5.6 task marked Invalid.
<strikov> rbasak: okay, so let me look into debhelper bug tomorrow; will see then what to do
<mdeslaur> turning on PIE in stable releases will have a detrimental performance impact on 32-bit platforms, which may piss off people who are specifically using nginx for it's performance
<rbasak> strikov: if the bug *can* be fixed in this package but it isn't worth doing it because it affects development release users only in a way that they can workaround, and it isn't worth going to the trouble to fix it for that set of users, then I think it's OK to explain this and then mark Won't Fix against mysql-5.6.
<mdeslaur> s/it's/its/
<teward> mdeslaur: rbasak: OK that's what i thought (not SRU worthy, no significant benefit).  I'm considering leaving Vivid's status alone though, in the interim, once Vivid is released marking it as "Won't Fix" and setting a "Triaged" state for the next release later (because there may be a merge in that cycle from Debian, which would likely include the PIE changes)
<rbasak> teward: I think "PIE isn't turned on though expected for security-sensitive packages" is a reasonable bug to fix under feature freeze without needing an exception. I would be OK to sponsor that. But see mdeslaur's comment on whether we should do that or not.
<teward> rbasak: right, given that, i'm considering leaving Vivid's status alone
<teward> but i was going to "Won't Fix" for the earlier releases
<rbasak> Maybe it's fine to do, and those who are performance sensitive can switch to amd64 when upgrading to Vivid for production use.
<mdeslaur> and we'll likely be turning on PIE by default on amd64 for V+1
<teward> i'm thinking at this point V+1 might be the target.  at some point after Vivid's release it's likely Debian will get an update in its package that turns on PIE by default
<teward> since it's in the git, but not yet released due to Debian freeze
<teward> (at least from what the nginx maintainers in Debain told me)
<teward> mdeslaur: rbasak: i'm going to use those statements as "blocking points" for a vivid fix for now, and will wait to see what Debian does on this - just because it's Fix Committed there means nothing - it's not even 'tested' there afaict
<teward> (net)
<teward> s/net/yet/
<teward> rbasak: mdeslaur: i'm comfortable leaving the change out of Vivid and waiting to V+1 to get the fix in with the likely merge i'll do during that cycle.  Around that same time I'll make a blog post on my blog (which'll end up in Planet.u.c's list) indicating that for V+1 we recommend that performance-sensitive use cases should be switching to amd64 architectures instead of staying on 32-bit architectures, for the performance hit reason we just
<teward> discussed
<teward> wow i hate irc truncation
<teward> (that PIE bug's been there for a while now)
<teward> (I posted as such on the bug just now)
<teward> thank you both for the discussion on it, sometimes it helps to have a second viewpoint / opinion :)
<strikov> rbasak: https://bugs.launchpad.net/ubuntu/+source/mysql-5.6/+bug/1438788/comments/5
<strikov> rbasak: how about that?
<rbasak> strikov: looks good. Done.
<strikov> rbasak: thanks!
<teward> rbasak: mdeslaur: https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1315426/comments/5
<teward> rbasak: mdeslaur: looks like there's pushback against no Vivid inclusion - opinions on putting it in Vivid, other than us having to say "Those who have performance-sensitive setups should move to amd64 for the upgrade to Vivid", assuming the release team approves an upload to enable PIE?
<mdeslaur> teward: if you want comments from disgruntled people, I can fill your inbox if you'd like
<teward> mdeslaur: sure, feel free, i have 500 today
<teward> mdeslaur: on top of 6000 aprils fools jokes
<teward> and 10000 spam messages in my PMs here
<mdeslaur> teward: just upload it to vivid
<teward> i'll go nitpicking then
<teward> mdeslaur: uploaded, it's going to need approval
<teward> and there's the accept.
<teward> ooo apparently the debian changes FTBFS
<adam_g> zul, ping
<zul> adam_g: yo
<adam_g> zul, can you go through and remove all your -2's from https://review.openstack.org/#/q/reviewer:chuck.short%2540canonical.com+status:open,n,z ?
<zul> adam_g:  sure gimme a sec
<zul> adam_g:  done
<adam_g> zul, thanks
<keithzg> Huh, one of my servers is offset from correct time by a tad over -161 seconds, I wonder what would cause that?
#ubuntu-server 2015-04-02
<keithzg> Guess whatever the default setup is in ubuntu-server doesn't cut it these days? (This is a 14.04 server). Installed ntpd, now offset is down to about 4 thousandths of a second.
<keithzg> Although my guess would be that ntpd still ships as default and I just did something weird when I installed and set up this server last year, heh.
<sarnold> I think ntpdate ships as default..
<keithzg> sarnold: Yeah, ntpdate was already installed, but doesn't that have to be run manually, or at least manually added to some cron job?
<sarnold> there's a discussion here https://lists.ubuntu.com/archives/ubuntu-devel/2014-October/038512.html
<sarnold> keithzg: yeah. if ntpdate hadn't been run recently that could explain the three minutes..
<keithzg> sarnold: Ah, makes sense then, ntpdate probably only runs on reconnection to networks and such, eh? The half-year uptime of this server since last time I admitted I should probably apply kernel updates is plenty of time for drift ;)
<sarnold> keithzg: hehe yeah, three minutes of drift in half a year makes sense..
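As a back-of-envelope check (assuming roughly 182 days of uptime), that drift is ordinary quartz-oscillator territory:

```shell
# 161 seconds of drift over ~half a year, expressed in parts per million.
awk 'BEGIN { printf "%.1f ppm\n", 161 / (182 * 86400) * 1e6 }'
# prints: 10.2 ppm
```

~10 ppm is well under a second a day, which an undisciplined PC clock routinely accumulates; ntpd exists precisely to steer that away.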
<Patrickdk> sarnold, here is something you might like
<Patrickdk> http://google/
<sarnold> Patrickdk: you know I tried that earlier today.. and just got my localhost http server
<sarnold> $ host google.
<sarnold> google has address 127.0.53.53
<sarnold> google mail is handled by 10 your-dns-needs-immediate-attention.google.
<Patrickdk> oh ya
<sarnold> what on earth gave me -that- response? :)
<Patrickdk> stupid wildcard
 * Patrickdk isn't thinking clearly
<Patrickdk> surprised they don't have a webpage on that though
<sarnold> it feels like they should
<sarnold> why buy a tld for $180k USD and then only use it for a gimmick like com.google? :)
<keithzg> Well, $180k is chump change for Google, and it gives them the option in the future if they ever decide to use it.
<keithzg> Right now people (somewhat rightly) are distrustful of these new domains, so they probably figure it isn't worth committing to what they'll do with it yet.
<Patrickdk> heh? google has already eaten like 30 TLDs
<Patrickdk> most of them make sense
<Patrickdk> about 10 of them are just pure greedy
<Patrickdk> ads, dad, ...
<Patrickdk> and the whole .dev and some of the other tlds that are for internal usage only
<lordievader> Good morning.
<mdev> whoever does the apache distro for ubuntu, how can I get them to disable apache indexing by default?
<mdev> it's really a big security issue and there's no reason it should be enabled, yet it always is on fresh ubuntu server installs
<mdev> it lists files/directories in your htdocs folder /var/www/html in the end users browser if no index.php or index.html exists
<OpenTokix> mdev: The default only export /var/www
<mdev> probably can list the same even if they do exist
<OpenTokix> mdev: how is that a security issue?
<mdev> because end users don't need to see any of those files?
<OpenTokix> mdev: My question still stands, what are the security implications?
<mdev> if you're running a website you don't need all your web files listed
<mdev> what are the security issues? many... users could potentially access files they shouldn't, because they can see a full list of everything
<mdev> if you stored certain information in a hidden_log_3939495.txt for instance
<OpenTokix> mdev: They can still access all the files, regardless if they are indexed or not
<OpenTokix> What you are talking about is security by obscurity
<mdev> I had one client who stored transaction info in his web directory, using long obscure names that one couldn't guess but wouldn't need to if they were listed by freaking apache...
<OpenTokix> And that is not security
<mdev> and security via obscurity is security, regardless of what people say
<OpenTokix> mdev: Your client are doing dumb stuff, - And that isnt apache default configs fault
<OpenTokix> no, its not
<mdev> OpenTokix: every ubuntu vps install i've seen has apache with indexing enabled by default
<mdev> apt-get install apache, or it installed via php
<mdev> so clearly it's whoevers running that repo
<mdev> obscurity is security, 1000%
<OpenTokix> no
<mdev> if you go bury a treasure chest full of gold in your backyard, is it secure from thieves? absolutely
<mdev> if they don't know it's there...
<OpenTokix> ...
<mdev> but if you broadcast and tell everyone you buried it there, similar to apaches indexing
<mdev> then no it's not secure...
<OpenTokix> mdev: you are wrong in so many ways, I don't have enough button presses before I have to switch keyboards from mechanical fatigue. - So have fun with your endeavors
<mdev> even if you truly feel that way, which I doubt
<mdev> there's no huge reason to have indexing enabled by default anyway
<mdev> if users want it on they can enable it, but for the average user, having it on is a security risk
<mdev> so whoever maintains the apache repo for ubuntu please consider disabling it
<mdev> some other distros don't have it enabled
<rbasak> mdev: I see no case here to change the default, but will follow Debian's default. If you want to file a bug with them to change the default, go ahead.
<lordievader> mdev: I suppose you should file a bug, if you really feel strongly about it.
<rbasak> mdev: I see no point though. Files placed in a server configured for static serving of files are expected to be public. If you don't want to share the files, don't put them in /var/www.
<rbasak> OTOH, if you do put files in /var/www, the implication is that you do want to make them public. Why else would you put them there?
<rbasak> Not providing automatic indexes doesn't change whether the files are accessible anyway.
<rbasak> OTOH, I find automatic indexing really useful. I can use a public area to dump files that others can discover and access.
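For anyone in mdev's position who wants listings off locally rather than waiting on a default change, a sketch against the stock Ubuntu Apache layout (the conf file name is made up):

```apache
# /etc/apache2/conf-available/no-indexes.conf
# Turn off automatic directory listings for the default docroot.
<Directory /var/www/>
    Options -Indexes
</Directory>
```

Enable with `a2enconf no-indexes` and reload Apache; `a2dismod autoindex` is the heavier alternative that removes the listing module entirely.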
<skylite> im using dhclient eth2 command to get an IP from my dhcp server. Server gets the request and sends back a dhcp offer but my client wont accept it and keeps asking for an IP from the dhcp server  (isc-dhcp) any ideas? I dont see any other entries in the dhcp log
<skylite> ok it seems its a network issue
<arcsky> hey, my syslog/messages are empty files, where is my logs?
<replman> Hi! I have a svn repository on my ubuntu 12.04 server and access it by https through apache. I setup a location in http conf with authtype basic and require valid-user. Everything works so far. Now i want to give access to a specific repository path to another user. Adding this user to my AuthUserFile gives him full access. What's the best way to restrict the access?
<replman> Ok, looks like i have to use AuthzSVNAccessFile. My location in http conf is <Location /svn/repo>, in the acl file i have a [Test:/customer/acme/project1/trunk]. If i try to access the repo through https://testuser@myserver.com/svn/repo/Test i get a forbidden error. What is the correct url?
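replman's question went unanswered here, but for the record the AuthzSVNAccessFile format he mentions looks roughly like this (the user names and permissions are illustrative; only the repo name and path come from his message):

```
# Hypothetical AuthzSVNAccessFile: give one user read access to a single
# path while another keeps full access to the Test repository.
[Test:/]
poweruser = rw

[Test:/customer/acme/project1/trunk]
testuser = r
```

Assuming /svn/repo is an SVNParentPath, the path inside the repository comes after the repository name in the URL, e.g. https://myserver.com/svn/repo/Test/customer/acme/project1/trunk.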
<rbasak> niedbalski: looking at https://code.launchpad.net/~niedbalski/ubuntu/vivid/rpcbind/fix-lp-1430181/+merge/253260 now
<strikov> rbasak: hey, i have a question regarding mysql apport bug
<rbasak> strikov: sure
<cohonen> yo guyas
<strikov> rbasak: essence of the issue is that mysql doesn't generate crash reports by default
<strikov> rbasak: it handles all the crashes internally w/o letting kernel/apport know about them
<strikov> rbasak: this behavior can be changed by my.cnf though
<cohonen> so removing resolvconf (which pissed me off) will result in removing ubuntu-minimal
<strikov> rbasak: do we want generating crash reports enabled by default?
<cohonen> is ubuntu-minimal a pseudo package or will this break my system ?
<rbasak> strikov: I think we're talking about two types of "crashes". My issue was with postinst failures, which I guess isn't related to changes in my.cnf?
<rbasak> strikov: so "ubuntu-bug /usr/sbin/mysqld" should use the apport hook, for example.
<rbasak> cohonen: to stop using resolvconf, I think you can just replace /etc/resolv.conf with a regular file and it'll leave you alone.
<strikov> rbasak: okay, i'll check this path as well; i'm currently testing with sigsegv
<cohonen> rbasak: im pretty sure it wont
<cohonen> also editing interface has NO  EFFECT!@!!!!
<cohonen> which 99.8888% of ubuntu forums suggests
<rbasak> cohonen: https://www.stgraber.org/2012/02/24/dns-in-ubuntu-12-04/
<cohonen> rbasak: let me verify
<cohonen> rbasak: yes, that one will work, EXCEPT, it still appends the ISP DNS servers
<cohonen> I dont want ANY dns servers other than the ones i choose
<cohonen> not ISPs
<cohonen> not googles
<cohonen> just the ones i pick
<cohonen> and resolvconf doesnt seem to let you do anything other than prepend or append to crap i get from whatever dhcp server answers
<cohonen> i know i can probably edit some dhclient thing
<cohonen> but why do i have to,, i just want resolvconf to go the way of the dodo
<rbasak> cohonen: I'd say that's a matter of configuration of your DHCP client, not resolvconf. Of course disabling resolvconf will disable it too, but that doesn't feel like the correct place to configure what you want to me.
<rbasak> I'm quite happy with resolvconf.
<cohonen> /etc/resolv.conf IS THE PLACE
<rbasak> /etc/resolv.conf is fine for static DNS configuration.
<cohonen> i just get tired of crap trying to tell me where to get my IPs from
<rbasak> It doesn't work well for dynamic environments - such as a laptop.
<cohonen> rbasak: which is what i want
<rbasak> cohonen: and you can have what you want, but you're expected to know how to configure the bits you need.
<cohonen> rbasak: it does if you know good dns servers and youre not in a stasi country or network
<cohonen> rbasak: so,, back to my original question
<cohonen> if i remove resolvconf, dpkg tells me, ill remove that AND ubuntu-minimal ALSO
<rbasak> ubuntu-minimal is a metapackage.
<cohonen> so, doesnt do anything ?
<rbasak> You might break release upgrades, but no your system shouldn't really break apart from that.
<cohonen> okey,,, hmmm
<rbasak> However, understand that you're going "off piste", so any future bugs or issues caused by doing this are down to you.
<cohonen> btw,, its on ubuntu server, on my toy servers
<cohonen> why is it wierd that i want a 100% self managed dns there ?
<cohonen> even if i use dns (my isp forces me to)
<cohonen> dns / dhcp
<rbasak> It's odd that you want DHCP but not DNS from DHCP.
<rbasak> If you just wanted a static IP and static DNS, you can do that with dns-nameservers in /etc/network/interfaces and everything would be fine.
<cohonen> well, i admit its not the most common
<rbasak> So it seems to me that what you really want is to configure dhclient to not take DNS.
<cohonen> no no i tried messing with /etc/network/interfaces
<cohonen> hmm
<cohonen> i think i have to look into dhclient
<cohonen> rbasak: yes it seems so
<cohonen> sigh
<cohonen> rbasak: reason is that my IP is external but not 100% static
<rbasak> Get a better ISP :)
<cohonen> better would mean digging a fiber myself
<cohonen> its pretty good as it is, just has a few annoyances
<rbasak> niedbalski: I'm not sure that bug 1430181 is appropriate to fix in an SRU or during feature freeze in Vivid (without an exception).
<rbasak> niedbalski: seems to me that TCP binding is a new feature because the switch is documented to support UDP only.
<rbasak> niedbalski: the patch looks pretty extensive too.
<cohonen> rbasak: okey i guess the best solution is to edit the dhclient, that seems to work, very tempted to set up a xattr to lock the file
<cohonen> rbasak: yea , the solution is to NOT request dns-nameserver domain search etc via dhclient
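The dhclient change cohonen settles on lives in /etc/dhcp/dhclient.conf: drop the DNS options from the request list so the lease never carries nameservers, then keep your own servers in /etc/resolv.conf. A sketch of the fragment (the exact default option list varies by release, so diff against your own file):

```
# /etc/dhcp/dhclient.conf -- request everything EXCEPT DNS settings:
# domain-name, domain-name-servers and domain-search are omitted below
request subnet-mask, broadcast-address, time-offset, routers,
        host-name, netbios-name-servers, netbios-scope,
        interface-mtu, rfc3442-classless-static-routes, ntp-servers;
```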
<cohonen> thanks
<cohonen> later guys
<rbasak> cohonen: no problem
<rbasak> cohonen: thinking about it...
<cohonen> ???
<rbasak> cohonen: I think that with resolvconf disabled, dhclient is probably writing to your /etc/resolv.conf.
<cohonen> it makes sense sorta,.
<rbasak> I could be wrong though.
<rbasak> So removing resolvconf might not have helped you here anyway.
<cohonen> that what it seems like
<rbasak> So maybe rage a little less at resolvconf? :)
<cohonen> i kindof want to reinstall resolvconf just to be close to a normal ubuntu install
<jrwren> cohonen: I think resolvconf may help you more than hurt you. It makes it easy to override dhcp's dns settings.
<cohonen> jrwren: well , i see that it has options to all resolvers for interfaces
<cohonen> it just another complexity i dont like
<jrwren> cohonen: I used to agree, then I learned it, saw the problems it solves and embraced it.
<cohonen> jrwren: yea , i had the same experience with firewalld on rhel/fedora clients
<jrwren> cohonen: to override nameservers for dhcp on eth0, run: echo nameserver 8.8.8.8 | sudo resolvconf -a eth0
<cohonen> jrwren: anyway, i reinstalled minimal and resolvconf
<cohonen> jrwren: and that will be the only dns server then
<jrwren> cohonen: you can later remove with sudo resolvconf -d eth0
<jrwren> cohonen: I do not think it will be the only one, but it will be first, so unless it is down it will be the only one used.
<cohonen> jrwren: and that setting is persistent across boots ?
<cohonen> jrwren: hmm okey, thats like the head file
<jrwren> cohonen: unsure, I tend to run ephemeral server instance, so I don't reboot
<cohonen> i dont reboot much either, but i want to be able to trust as much state as possible
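On the persistence question, resolvconf's own mechanism is the head file: lines in /etc/resolvconf/resolv.conf.d/head are prepended to every resolv.conf it generates, so they survive reboots (the 8.8.8.8 address is just the example already used in the conversation):

```
# /etc/resolvconf/resolv.conf.d/head
nameserver 8.8.8.8
```

After editing, `sudo resolvconf -u` regenerates /etc/resolv.conf.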
<cohonen> jrwren: i still believe that in my particular usecase the answer was to kick dhclient in the face and tell him NO DNS !
<jrwren> cohonen: why is that?
<jrwren> cohonen: my isp provides poor dhs too :)
<cohonen> mine too
<cohonen> and im not gonna use googles even if it gave free BJs
<jrwren> was just an example I used :)
<cohonen> so i have a list of freedom respecting DNS servers
<jrwren> what are these freedom dns?
<cohonen> jrwren: yea, they were clever , getting 8.8.8.8
<cohonen> jrwren: well 2 are small providers
<cohonen> 2 are opendns which are obviously not so cool
<cohonen> but i count on that losing the first 2 is rare
<strikov> smoser: i just found out how to make canonistack faster; due to some reason m1.large gets much faster i/o than m1.small which significantly increases performance; that looks like something wrong but it works :)
<smoser> nice.
<strikov> rbasak: this looks like a correct crash report, right? http://paste.ubuntu.com/10724987/
<rbasak> strikov: yes that looks good
<strikov> rbasak: that's what i get with the hook copied to the right place; basically: http://pastebin.ubuntu.com/10724996/
<strikov> rbasak: we had this file in .files before but switched mysql-server from dh_movefiles to dh_install which required .install
<rbasak> strikov: does "ubuntu-bug /usr/sbin/mysqld" generate a report? When I tried it, it hung forever.
<strikov> rbasak: yes, in a few moments
<rbasak> strikov: OK. It's much simpler than I thought then. Sorry!
<strikov> rbasak: okay :)
<strikov> rbasak: you owe mean one really painful bug though
<strikov> *me
<rbasak> :)
<strikov> rbasak: and returning back to sigsegv dumps; is it expected that they are not going through apport?
<samba35> i am using ubuntu 14.04 with ssh, i want to change the default ssh port to 5123 but when i change it the port always shows 22 , i made the change in sshd_config: port 5123
<bekks> samba35: Change the port and restart the sshd daemon.
<strikov> rbasak: https://bugs.launchpad.net/ubuntu/+source/init-system-helpers/+bug/1439793
<strikov> rbasak: not sure if it worth fixing but want to let ubuntu-devel guys know about that corner case
<samba35> bekks: thanks i was making some mistake with service restart, i was using the /etc/init.d/ method to restart the service but now with service ssh restart it works
<samba35> thanks
<bekks> samba35: For sshd, that actually doesnt matter :)
<samba35> you mean for starting service ?
<bekks> samba35: Yes.
<samba35> but unfortunately it did not work for me
<samba35> but service xx restart work
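samba35's fix can be sketched like this; the script edits a local stand-in file instead of /etc/ssh/sshd_config so it can run unprivileged (the demo filename is made up, the port is the one from the question):

```shell
#!/bin/sh
# Sketch of the port change discussed above. On the real server you edit
# /etc/ssh/sshd_config itself and then run "service ssh restart" -- the
# Ubuntu service is named "ssh", not "sshd", which is why guessing at
# /etc/init.d/ script names tends to fail.
conf=sshd_config.demo                   # stand-in for /etc/ssh/sshd_config
printf 'Port 22\nPermitRootLogin no\n' > "$conf"
sed -i 's/^Port 22$/Port 5123/' "$conf" # the one-line change
grep '^Port' "$conf"                    # prints: Port 5123
```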
<wiredfool> I've got an older server running trusty w/ a 4 disk raid 10 setup, initially setup on 500gb disks but now on 1tb due to several single drive failures + rebuilds. I'd like to convert it to a 2 drive raid 1, using the currently unpartitioned space, and free up two disk trays for ssds.
<wiredfool> current mirror is /dev/md1.  I'm thinking of making a new degraded raid 1 mirror, /dev/md2, with /dev/md1 as its only member. Then reboot for that to be /. Then I'll fail out one of the drives, repartition it, and add it to /dev/md2 and let it rebuild. Then when that's good, I'll fail /dev/md1 out of the array, fail one more drive out of it, and add it to /dev/md2, and let it rebuild again.
<wiredfool> then I should have a working raid1 set, and a pair of drives that still have the same data (if I'm lucky and fail the correct drive on the second try).
<wiredfool> Is this a workable plan, or is it totally daft?
<parallel21> I have a directory that has a size of 0 and I am unable to delete it
<parallel21> Unable to cd into it either
<JanC> parallel21: permissions?
<JanC> wiredfool: you'd have to pastebin more specific info, but failing disks in a RAID system to re-use them for something else is certainly possible
<parallel21> JanC: Doing this as root
<teward> is there any harm on my local computer to edit the ownership of files in /etc/bind for my bind server for my user to own it, and bind group to have access as well?
<sarnold> teward: if you don't mind your web browser being able to edit those files.. :)
<JanC> parallel21: did you run fsck on the file system?
<teward> sarnold: given that this system is encrypted out the wazoo and the password is complex enough that I have to plug in a yubikey just to actually enter the password when prompted... :P
<teward> sarnold: it's a local bind9 instance for local IPs only on the system (for the VMs on the host only subnets xD)
<sarnold> teward: hehe, I figured you weren't actually going to run firefox on your dns systems :)
<teward> sarnold: indeed.
<sarnold> teward: but I saw an opportunity for a joke and had to take it :)
<teward> sarnold: THOSE are on separate servers xD
<teward> sarnold: indeed.
<teward> sarnold: i meant from a runtime perspective if things'll break - setgid on the directories would enforce group ownership xD
<wiredfool> JanC: is there a way to tell which disks are paired in a mdadm raid10?
<sarnold> teward: should be fine, just so long as bind can read them
<JanC> wiredfool: I never used mdadm raid10, so don't know for sure, but I assume there is
<teward> sarnold: indeed, g+r is still in place, and the directory is set o+s g+s, with MYUSER:bind as the ownership
<JanC> I've done it with a raid1 though
<teward> s/o+s/u+s/
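The setgid layout teward describes can be sketched on a scratch directory (the real path is /etc/bind and the real group is bind; here the caller's own group and a made-up directory are used so no root is needed):

```shell
#!/bin/sh
# Unprivileged sketch: user-owned files, group access for the service,
# setgid on the directory so new files inherit the group.
mkdir -p demo-bind                    # stand-in for /etc/bind
chgrp "$(id -g)" demo-bind            # stand-in for the "bind" group
chmod u+rwx,g+rxs demo-bind           # g+s enforces group ownership
touch demo-bind/db.local              # a new zone file
ls -ldn demo-bind demo-bind/db.local  # both carry the same numeric group
```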
<wiredfool> I could add in a second partition on one drive that I'm going to keep, prior to failing out the first drive, and then watch the iostat to see what's reading and what's writing.
<JanC> wiredfool: I'm pretty sure mdadm can tell you what devices are used for what purpose?
<wiredfool> JanC: not in a manner that's obvious -- http://pastebin.com/6hH14K3a
<JanC> wiredfool: the "layout" parameter should be useful
<JanC> wiredfool: https://en.wikipedia.org/wiki/Linux_MD_RAID_10#Linux_MD_RAID_10
<JanC> looks like /dev/sda3 & /dev/sdb3 are part of 1 mirror, and /dev/sdc3 & /dev/sdd3 of the other
<JanC> so you can fail one device in each mirror
<wiredfool> ok, if running code matches wikipedia
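wiredfool's plan as a pseudocode-level command sketch; the device names follow JanC's pairing guess (sda3/sdb3 one mirror, sdc3/sdd3 the other) and every step is destructive, so treat this as an outline to verify against mdadm's docs, not a recipe:

```shell
# 1. new degraded RAID1 with the old RAID10 as its only member
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md1 missing
#    ...update mdadm.conf and the initramfs, point / at md2, reboot...
# 2. fail one drive out of md1, repartition it on the 1tb layout,
#    then add it to md2 and wait for the rebuild
mdadm /dev/md1 --fail /dev/sdb3 --remove /dev/sdb3
mdadm /dev/md2 --add /dev/sdb3
# 3. once md2 is clean on real disks, retire md1 entirely
mdadm /dev/md2 --fail /dev/md1 --remove /dev/md1
mdadm --stop /dev/md1
mdadm /dev/md2 --add /dev/sdc3        # second half of the new mirror
```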
#ubuntu-server 2015-04-03
<bojan> Can anybody solve this problem "I have configured NFS on ubuntu and i can mount the shared partition on the same computer but cant mount it on a computer connected to the network...Saying error as :mount.nfs: server access denied while mounting"..But i can see the shared directory from the network computer with the command "showmount -e 192.168.2.1"
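"access denied while mounting" with a working showmount usually means the client's address doesn't match an /etc/exports entry on the server. A sketch matching bojan's 192.168.2.x setup (the path and options are assumptions):

```
# /etc/exports on 192.168.2.1
/srv/share  192.168.2.0/24(rw,sync,no_subtree_check)
```

Then run `sudo exportfs -ra` on the server and retry the mount from the client.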
<lordievader> Good morning.
<jptned> Hello guys,  I'm getting kind of DDOS'ed / Brute forced. My server has no problem with it, but it seems like my router can't handle it.
<jptned> The attacks come in on ports, 25, 80, 110 and 443. Exim logs of a few days ago show numerous login attemps. I've got fail2ban installed, but it doesn't block these IP's. Blocking them manualy doesn't work either, because the IP's change constantly.
<siebjee> is anyone here able to help me with xl2tp and performance improvements ? Not able to get over the ~30 Mbps :( (10gbit line)
<Walex> siebjee: 'top'
<dbm> Greetings, anyone know how to allocate free space to existing hard drive without lossing any data on it? (I'm using VPS Ubuntu server) :)
<sarnold> dbm: the usual approach is to use a volume manager like LVM or zfs
<sarnold> dbm: if the thing already exists it might be a bit harder, but you can resize ext* filesystems with resize2fs
<dbm> sarnold: alright, well the thing is im using VPS @ OVH and cloud 1 package is 25gb so i recently bought cloud 2 and they just assigned 25gb to me. So now i have 25gb of free space and nothing to do with it. Unless i can somehow allocate it, they gave me some tutorials but they're literally telling me to recreate my partitions, not to allocate it
<sarnold> dbm: that may be the easiest solution
<sarnold> dbm: did they grow an existing block device by another 25 gigs? or did they give you a second block device to work with?
<dbm> sarnold: yeah, but then im losing data :)
<dbm> sarnold: they gave me second block
<sarnold> dbm: spin up an AWS or DO or similar instance for two hours to temporarily hold your data?
<dbm> I might do 1 thing thou.. buy for 5e "Backup" so they back up everything on thier other system. And literally recreate (reformat) all harddrive and thats it?
<dbm> Probably the simplest way of doing it, without any harm
<sarnold> and good opportunity to test backups/restores :)
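Since OVH handed dbm a second block device rather than growing the first, the no-reformat options are to mount it as a separate filesystem or, if the root disk already uses LVM, fold it into the volume group. A pseudocode-level sketch (the /dev/vdb, mount point and volume-group names are assumptions):

```shell
# Option 1: plain second filesystem
mkfs.ext4 /dev/vdb
mount /dev/vdb /srv/extra             # plus an /etc/fstab entry to persist
# Option 2: root filesystem on LVM -- grow it in place instead
pvcreate /dev/vdb
vgextend vg0 /dev/vdb
lvextend -r -l +100%FREE /dev/vg0/root   # -r runs resize2fs afterwards
```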
<_1_Himanshu3> hey
<wxl> hey folks. what's the plan to migrate precise to php 5.4 now that security updates have stopped happening upstream on 5.3?
<sarnold> wxl: there are no plans to change major versions of php in released products; we'll continue backporting patches as needed
<sarnold> wxl: see also https://wiki.ubuntu.com/SecurityTeam/FAQ#Versions and https://www.debian.org/security/faq#version
<wxl> heh i just found that sarnold :) thanks!
<sarnold> (the latter is from debian, but holds true for us too :)
<sarnold> firefox and chromium-browser are the two biggest counterexamples I can think of; both of those are far too much for us to backport individual security fixes for. though supporting them on older releases is not easy, it is popular with users :)
<wxl> frankly i'm surprised that's possible with php in this case since i know 5.4 is a fairly large deviation from 5.3
<wxl> so you must have to cherry pick as well as re-develop things!
<sarnold> depending upon the issue it can be more or less engineering work; some are simple patch refreshes, others are fairly large reworkings.
<wxl> yeah wow that's pretty amazing
<wxl> now i'm even more impressed :)
<sarnold> we've built fairly extensive test suites over the years to help catch regressions; it's amazing what they pick up that upstreams often don't... :)
<wxl> nice
#ubuntu-server 2015-04-04
<zhhuabj> https://docs.google.com/document/d/1Lhvp2-fbW8rezaipQklk3tzU7mIefH3J2UUFUW5Dv24/edit#
<zhhuabj> sorry, type error
<zhhuabj> pls ignore it
<sarnold> you may wish to check the ACLs on your doc to make sure it is properly restricted
<brianw__> anyone here use lxc for samba ad dc containers && ctdb/glusterfs samba dfs file servers (containers) ??
<brianw> my nick is brianw
<brianw> :)
<arcsky> hello
<arcsky> i have empty files messages/syslog. what can i do about it?
<samba35> i am trying to configure a mail/web server @home, i am able to send email but i am not sure how pop3/pop3s should be configured
<samba35> i am behind a firewall /utm /pop proxy for spam and antivirus and other things ,if i have to use an ssl certificate do i have to use my own certificate (created with openssl )
<samba35> using ubuntu 14.04.2
<bekks> samba35: "Yes."
<samba35> bekks: thanks
<samba35> if i am using  proxy /relay do i have to use that proxy's certificate ?
<bekks> No, since you want to configure your own mailserver, not the proxy server.
<samba35> my ubuntu mail server is behind utm/firewall /proxy
<samba35> in that case do i have to use that utm/firewall/proxy's certificate ?
<samba35> or i have to use dovecot's certificate
<samba35> my mail server use relay to deliver mails
<pi-> I'm seeking to remotely (from my MacBook) modify files in my VFS's /var/www/html.  I'm looking at using sshfs to do a remote mount. I can get it working for mounting ~ but not /var/www/html
<pi-> sshfs  pi@46.101.38.186:/home/pi  ~/RemoteFS/droplet_home    <-- works, asks me for my remote pi password
<pi-> sshfs -o allow_other root@46.101.38.186:/var/www/html ~/RemoteFS/droplet_webroot   <-- asks me for my remote root password, but rejects it even though I know I'm typing it in correctly
<pi-> ah I have disabled remote root access!
<pi-> d'oh
<pi-> So what I'm wondering is whether I can move everything from /var/www/html to say ~/web, and symlink
<pi-> Does this sound sane?
<Jare> remote root..grrr
<Jare> change the file owner/group for that dir+files, just ensure that the webserver has permissions to access the files...
<pi-> I've just done `sudo mv /var/www/html/wiki ~/web/` -- now I need to do something like `sudo chmod -R u+w ~/web` so that user has write-access. But that doesn't seem to have had any effect.
<pi-> `ls -l web/`  displays  "drwxr-xr-x 14 root root 4096 Apr  3 15:56 wiki"   -- it is failing to set the middle '-' to 'w'.
<pi-> I'm guessing this is because by doing `sudo`, my 'u' is now going to be root
<pi-> Just reading up, the owner of that folder is root.  So I think I need to be changing the owner to pi.
<pi-> `sudo chown -R pi web;  sudo chmod -R u+w web;  ls -l web/`  still prints out "drwxr-xr-x 14 pi root 4096 Apr  3 15:56 wiki"
<pi-> I do notice everything in ~/web/wiki now has ??w ??? ??? permissions. I've just realised I didn't want to do that. heh. Lucky I took a backup.
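pi-'s permission steps can be replayed in a scratch directory (the names here are made up so it runs unprivileged; on the server it would be ~/web with sudo). One point worth noting: in `drwxr-xr-x` the owner triple `rwx` already contains the write bit, so `chmod u+w` was already satisfied; the `-` pi- was looking at is the *group* write bit:

```shell
#!/bin/sh
# Unprivileged sketch of the ownership/permission steps discussed above.
mkdir -p demo-web/wiki                # stand-in for ~/web/wiki
chmod -R u+w demo-web                 # owner write (usually already set)
chmod -R g+w demo-web                 # the bit that was actually missing
ls -ld demo-web/wiki                  # group triple now includes 'w'
```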
<sergey> How to setup mail sever on vps?
<Voyage> Hi
<Voyage> I have googled a lot and found no answer workable answer to this question: How to downgrade from php 5.5.9 to php 5.4?
<teward> Voyage: manual compilation of php 5.4 and removal of the version installed from the repositories, likely
<Voyage> teward,  if I install php5 manually, it wont be connected to apt-get anymore and I would not be able to install modules and auto integrate them with apache etc?
<teward> probably not, no.
<teward> Voyage: why do you need 5.4 anyways
<teward> it's "old"
<Voyage> ya, some app requires it.
<teward> 'some app'
<teward> the app needs updating
<Voyage> teward,  sugarcrm7.5
<Voyage> ok, how about I downgrade/change repository, install php5.4, lock the php5.4 version, change repository to latest again in sources. update?
<teward> Voyage: still going to be stuck on bad module versions - the modules would've been updated for the version in the repositories, not the version you've installed
<teward> Voyage: i can't believe that sugarcrm doesn't provide an update package for newer PHP, after all they are kinda bound to PHP upstream's versions, soon enough 5.4.x will probably burn
<Patrickdk> how did you get php 5.5.9?
<Patrickdk> 14.04 is 5.4
<Patrickdk> or it is 5.5 isn't it?
 * Patrickdk is going braindead
<teward> Patrickdk: it's 5.5
<Patrickdk> ya, so many issues with php 5.5, especially from zend
<Patrickdk> it's one thing to have app support for 5.5, it's another when php people can't make support for 5.5 so you can support 5.5
<Voyage> teward,  so is there a proper solution?
<Patrickdk> voyage, it's simple
<Patrickdk> you just rebuild the 5.4 or 5.3 php packages for your release
<Voyage> rebuild the 5.4 ?
<Voyage> Patrickdk,  what do you mean? how?
<Patrickdk> I am not going go into how
<Patrickdk> as I don't feel like holding your hand for the next x hours to do it
<Voyage> Patrickdk,  rebuild manually without apt and repositories?
<Patrickdk> that is what google is for and all the webpages that tell you
<Patrickdk> why would you do that?
<Voyage> Patrickdk,  dont worry, I would do the research. just tell me what do you mean by "rebuild"
<Patrickdk> I never said not to use apt, or repos
<teward> Voyage: the alternative is to rebuild the debian package - get the older version from Debian or the last version in Ubuntu repositories to have 5.4, rebuild.
<teward> the tricky part is a LOT of the php modules in the repositories aren't exactly in the php5 source package
<Patrickdk> anything that touches php will likely need to be rebuilt
<Patrickdk> but likely, you don't use a lot of them
<maxb> Though, to be fair, for a complex package like PHP, rebuilding a different series package probably isn't something someone who hasn't previously worked with Debian packaging wants to confront
<Patrickdk> so probably php base, and 2-3 other packages
<Patrickdk> maxb, why?
<Patrickdk> rebuilding php is extreemly simple
<teward> Patrickdk: apache might hate it
<Patrickdk> teward it shouldn't at all
<Patrickdk> can't think of a single thing apache would hate about it
<teward> Patrickdk: you'd still need to apt-pin the version down - otherwise it'll just overwrite again
<teward> rebuilding also requires a build environment for it
<Patrickdk> no
<Patrickdk> for version, just add a 1: infront of the version, done
<Patrickdk> or pin it, but that can be annoying
<teward> rebuilding it as should be done needs the build environment unless you want to install all the build dependencies
<Patrickdk> rebuilt it in a ppa
<Patrickdk> no build env needed
<Voyage> Patrickdk,  teward  what do you both mean by "rebuild" and what involvment would the repos have in it?
<teward> Patrickdk: there's probably a reason the person who had the php oldstable PPA stopped after 13.10
<teward> i could ask, but i wonder if it's because build deps might be to 'new' or such
<Patrickdk> I don't know, could look
<Patrickdk> personally I build my own apache and php's
<teward> ran into the problem for the znc package, in a PPA, as well, older than 14.04 you have to go grab additional PPAs for g++, have to backport things, etc.
<Patrickdk> been a requirement for years
<teward> but the reverse can be true also, build deps drop things that're needed, problems arise, etc.
<Patrickdk> https://help.launchpad.net/Packaging/PPA/Uploading
<Patrickdk> best overview url I can locate :(
<Patrickdk> but that is more about uploading
<Patrickdk> not really rebuilding
<teward> Patrickdk: needs a launchpad account, PGP keys uploaded, etc.  -  not for the faint of heart either
<Patrickdk> seems most people just post how to rebuild locally
<Patrickdk> heh, that isn't hard
<Patrickdk> and it makes distribution so much easier
<Patrickdk> and updating
<Patrickdk> vs attempting to make your own local repo
<teward> Patrickdk: didn't say squat about local repo xD
<Voyage> Patrickdk,  teward  what do you both mean by "rebuild" and what involvment would the repos have in it?
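The "rebuild" being suggested means rebuilding the older *source package* on the newer release, which yields .debs that apt can track (unlike a from-scratch compile). A pseudocode-level sketch; the package names are real, but the .dsc would have to come from an old Debian/Ubuntu archive, and the steps assume a deb-src line plus the devscripts tools:

```shell
sudo apt-get build-dep php5             # install PHP's build dependencies
dpkg-source -x php5_5.4.*.dsc           # unpack the older source package
cd php5-5.4.*/
dch --local +rebuild1 "Rebuild of php 5.4 for this release"
dpkg-buildpackage -us -uc -b            # produces ../php5-*.deb and friends
sudo dpkg -i ../php5-common_*.deb ../libapache2-mod-php5_*.deb
sudo apt-mark hold php5-common libapache2-mod-php5   # keep apt from upgrading
```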
<tarvid> installing 14.04 on a dell r410 with one 160 gb drive and two 500s. Can install first on the 160 as LVM and then add the 500s?
<tarvid> make that an r310
<Walex> tarvid: why not?
<Voyage> what will this do sudo sed -i.bak "s/trusty/precise/g" /etc/apt/sources.list ?
<tarvid> the two 500s are now set up (maybe) with ESXi, can I just blow those away
<Patrickdk> Voyage, that will cause LOTS of breakage
<Voyage> Patrickdk,  hm. but what would this do ?
<Patrickdk> hopefully? nothing, except not let you update/security patch/...
<Patrickdk> it will attempt to download/install from 12.04 instead of 14.04
<Voyage> Patrickdk,  you mean, it will change repositories to old ones?
<teward> Voyage: lots and lots of damage, it'd set your sources to use Precise instead of Trusty and break a ton of things
<Voyage> Patrickdk,  how about I use sudo sed -i.bak "s/trusty/precise/g" /etc/apt/sources.list , install php 5.4 old version, lock it, and then change to new repositories?
<Voyage> teward,  ^
<Patrickdk> why do all that?
<Patrickdk> why not just INSTALL 12.04
<Patrickdk> since your doing the same thing, but making it much much more broken that way
<Voyage> Patrickdk,  ya, but I already have a server assembled with latest ubuntu
<Voyage> ubuntu should provide a way.........
<Patrickdk> it does provide a way
<Patrickdk> you can download it, rebuild it, and distribute it
<Patrickdk> ubuntu will not support it, so there is no point in them doing it
<Voyage> Patrickdk,  ok. when you say "download it" do you mean downloading a .deb package and installing it???
<Patrickdk> no
<Voyage> Patrickdk,  then?
<Voyage> Patrickdk,  what do you meant? download what?
<Voyage> rebuild what?
<Voyage> Patrickdk,  teward  come one people, whats so hard about it? what do you guys actually mean. I am having trouble understanding the terminology here. What is it suggested to be downloaded? if its not a .deb. what is it?
<Voyage> a .gz.tar?
<Voyage> are you guys talking about compiling it from source?
<maxb> Effectively, yes.
<maxb> "ubuntu should provide a way" you say - well, it'd be more convenient for you, sure, but maintaining multiple versions of software increases the burden of support on Ubuntu quite significantly
<maxb> It's easy to see why it's not considered an efficient use of developer and tester time
<Patrickdk> this *is* something rhel does
<Patrickdk> though, only lately
<Voyage> maxb,  hm
<Voyage> ok. I tried this: is it a good way despite error? I tried this but its not working. echo "deb http://php53.dotdeb.org stable all" | sudo tee -a /etc/apt/sources.list                    Err http://php53.dotdeb.org stable/all i386 Packages              W: Failed to fetch copy:/var/lib/apt/lists/partial/php53.dotdeb.org_dists_stable_all_binary-amd64_Packages  Invalid file format
<Voyage>  why doesnt this works? echo "deb http://php53.dotdeb.org stable all" | sudo tee -a /etc/apt/sources.list
<maxb> You appear to now be using some site's pre-built packages of a different PHP version. The fact that what you've said includes 'stable' rather than any Ubuntu codename strongly suggests you're trying to use something targetting Debian on your Ubuntu system. If it works at all, it'll be more by luck than intention
<Voyage> maxb,  hm. ok is there ANY way to install / php 5.4 by a repository and lock the upgrade?
<maxb> No, unless you find someone who is specifically building such a repository.
<Voyage> so  http://php53.dotdeb.org stable all wont work. and what about pool?  http://snapshot.debian.org/package/php5/5.4.39-0%2Bdeb7u2/
<Voyage> max?
<maxb> Now you're just randomly grasping at irrelevant links you happen to find
<Voyage> because I cant find a usefull one
<Voyage> maxb,  I really wonder what does the "rebuilding" means when Patrickdk  and teward  say it but do not explain it
<maxb> The notion of compiling software?
<Voyage> from source?
<Voyage> like in a gz.tar?
<Voyage> maxb,  thats what Patrickdk  and teward  dont tell..........
<maxb> Well, yes, from what else would you build software? That's (by us) assumed to be not worth mentioning.
<Voyage> why I am getting the error? sudo deb http://snapshot.debian.org/archive/debian/20150329T213024Z/       sudo: deb: command not found
<maxb> Sorry, it looks like you're looking for something (namely, pre-built packages of a specific version of PHP matched to a specific version of Ubuntu) that doesn't seem to exist because no-one has made that
<Voyage> maxb,  hm. so if I installed php 5.4 by dpkg -i .deb or build it from source, I would not have the advantage to manage related modules / integration by repository and apt-get?
<maxb> You're getting that error because you typed some meaningless string into a shell
<maxb> With sudo at the start, no less. You probably want to be a bit more careful about what you run as root
<maxb> Right, you don't get the advantages of a supported repository of integrated software if no supported repository of integrated software exists for the combinations of versions you want to use
<Voyage> hm
<Voyage> maxb,  I followed http://stackoverflow.com/questions/17128602/apt-get-identify-all-old-version-numbers-of-a-package           answer 1. I wonder what should be replaced for     "deb http:// ......"
<Voyage> this is the correct sudo echo "deb http://snapshot.debian.org/archive/debian/20150329T213024Z/ stable all" | sudo tee -a /etc/apt/sources.list
<maxb> No, in so many ways
<Voyage> ..
<Voyage> it just adds the repo. doesnt it
<Voyage> to sources.list
<maxb> So the first thing which is wrong with it is you're trying to add a Debian package source to an Ubuntu system - which, granted, is something that *might* work *sometimes*, but really if you're resorting to that you should just build from source rather than just hoping for compatibility
<maxb> The second thing that is wrong is 'sudo echo' -- why would you bother sudo-ing a command that doesn't require any kind of elevated privileges?
<maxb> The third thing which is wrong is that you're trying to use snapshots of the Debian stable release, which is pretty much static anyway, so makes no sense
<Voyage> hm
<Voyage> maxb,  i guess to write inside /etc   would require a root
<maxb> In the interests of trying to work this conversation around to a more productive direction, let me summarize: You really want something pre-made. But you've searched for some time and it doesn't seem to exist. It's probably best to accept that and reconsider your direction.
<maxb> Yes but that command isn't writing inside /etc -  which you seem to have partially recognized since you do have the actually relevant sudo before tee
<maxb> As far as I can see your options are basically: 1) Use the PHP 5.5 that ships with current Ubuntu releases, or 2) Compile PHP 5.4 from source
<Voyage> hmm
<Voyage> maxb,  the only thing I get afraid of is if I install php5.4 by source. many modules like sudo apt-get install php-curl or others that integrate with apache automatically will be a big headache for me
<Voyage> maxb,  am I correct?
<Voyage> as a side error: W: Failed to fetch http://snapshot.debian.org/archive/debian/20150329T213024Z/dists/stable/Release  Unable to find expected entry 'all/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)
<maxb> You won't be able to install modules with apt-get to work with a PHP built from source
<maxb> Though many modules are shipped as part of the main PHP source tarball, and will build with it if the relevant library dev packages are available.
<maxb> Some may need additional flags passed to PHP's configure script to enable
<Voyage> maxb,  thats the headache
<Voyage> maxb,  ok. which ubuntu version ships with php 5.4/
<Voyage> ?
<maxb> None which are currently still supported
<Voyage> 12.  something?
<Patrickdk> 12.04 is 5.3
<Voyage> whats for 5.4?
<maxb> Some releases between 12.04 and 14.04 shipped with 5.4, however all such releases have passed their End of Life
<Voyage> maxb,  its 13 then?
<Voyage> maxb,  i see amazon has 64-bit ubuntu/images/ebs/ubuntu-quantal-12.10-amd64-server-20140202 - ami-006a0930        option to launch instance
<Patrickdk> that would be unwise
<maxb> I'm quite deliberately not saying the exact version number because I can't consider deploying something afresh on an OS version that has already reached EOL to be a sensible move
<Voyage> unwise? why? because its old?
<Patrickdk> cause it is NOT MAINTAINED
<Patrickdk> you will NEVER get securty updates for it
<Patrickdk> it already has known vulnerabilities
<Voyage> hm
<Voyage> so the best solution is to compile from source or dpkg -i a php 5.4 .deb
<Voyage> I am going for the latter
<Voyage> I never compiled anything from source
<Voyage> at least a deb would be installed automatically to the default locations
<Voyage> I guess centos maintains versions
<Patrickdk> centos doesn't maintain anything
<Patrickdk> that is the whole point of centos
<Voyage> who does ?
<maxb> CentOS is an exercise in de-branding and rebuilding mostly unmodified source from RHEL
<Voyage> maxb,  am. this resulted in a promising message; add-apt-repository ppa:ondrej/php5
<Voyage> hm
<Voyage> ah. no use. my apt policy is still the same
<Voyage> having 5.9
<Voyage> going for db
<Voyage> deb
<teward> Voyage: php 5.4 is in his oldstable ppa
<teward> Voyage: but that isn't updated past 13.10
<teward> read: https://launchpad.net/~ondrej/+archive/ubuntu/php5-oldstable
<teward> Voyage: 12.10 had 5.4, but it's not supported.  12.04 + ondrej's php5-oldstable PPA would give you 5.4
<teward> but no guarantees on it having everything you need
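On 12.04, the PPA route teward describes would look roughly like this (a sketch; the PPA is the real one named above, but exact package availability in it is not guaranteed):

```shell
# Sketch: PHP 5.4 on Ubuntu 12.04 via ondrej's php5-oldstable PPA
sudo add-apt-repository ppa:ondrej/php5-oldstable
sudo apt-get update
apt-cache policy php5        # confirm the candidate is a 5.4.x build first
sudo apt-get install php5 libapache2-mod-php5
```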
<teward> I remember 12.10 getting 5.4, 'cause one of my patches made it into Debian's packaging to switch php5-fpm from a TCP listener to a unix socket
<Voyage> teward,  i see
<Patrickdk> I just normally change it to socket myself
<Patrickdk> nice the default is that way though
<teward> Patrickdk: indeed, it changed with 5.4rc1 iirc
 * teward pulls the Debian changelog
<Patrickdk> this php 5.5 issue is the ONLY reason I haven't upgraded customer facing machines to 14.04 yet
<Patrickdk> but that isn't so much a ubuntu issue, as it is zend/php
<Voyage> so whats the easiest way to install and maintain 5.4? where to download. source or .deb? where from"?
<teward> Patrickdk: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=650204 is the bug where it was changed.  Thanks to Clint Byrum (SpamapS) for helping get that patch up to debian back when I wasn't as fluent as I am now xD
<teward> s/bug/php5-fpm listener/
<teward> Patrickdk: i don't know if they've reversed it... but i could check xD
<Patrickdk> oh nice, zend JUST released support for php 5.5 a few days ago
<Patrickdk> that only took 2+ years
<teward> wow my vivid vm is outdated o.o
<teward> oh well
<teward> Patrickdk: yep, php5-fpm still listens on a socket by default :P
<Voyage> can anyone tell me where to download it? (I just want to do some rough experiments)
<Patrickdk> and it looks like it has backlog support now
<Patrickdk> Voyage, there is none
<Patrickdk> you have to create it yourself
<Patrickdk> no one yet has cared enough to attempt it
<Patrickdk> or maybe someone has, but gave up
<teward> I may be willing to try and grab Quantal's package and repackage for trusty in a PPA, but I won't go past one attempt
<teward> i'm lazy after all xD
<Patrickdk> tried, the configure script is horrible
<teward> quantal was the last version to support 5.4 anyways
<Patrickdk> 5.4 configure might be better
<teward> Patrickdk: um... i meant the actual 5.4 package
<teward> that was in quantal, try and forward port it
<teward> xD
<Patrickdk> but there have been new security patches CVE's that need to be ported to it
<teward> blah
<teward> i hate 5.4
<teward> why can't app devs actually update their things
<teward> Patrickdk: i could just pull 5.4.39 from upstream
<Patrickdk> yep
<teward> try and wrap the 5.4 debian/ around it from quantal
<teward> but still
<Voyage> Patrickdk,  no I mean, how to install php 5.4 from source? what you call rebuild
<Patrickdk> those are two totally different things
<Patrickdk> install from source, just follow php's instructions
<Voyage> Patrickdk,  which link?
<Patrickdk> php.net
<teward> Patrickdk: and I give up - there's too much of a delta
<teward> (half the patches fail)
<Patrickdk> :)
<teward> could try ondrej's source but i'm lazy as i said earlier
<teward> ooo wait there's build fails there that's why
<teward> nevermind
<Voyage> I just installed php from source. apache is not displaying my .php files (it's showing blank pages). what can be wrong? I guess apache does not know that php is installed
<replman> Hi! I have a cronjob that executes a backup script as user 'backupchef'. In backupchef's home/.ssh i have id_rsa.pub for ssh. When executing the script manually logged in as backupchef, it works. When cron executes the script i get a password failed from the ssh.
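A common cause of exactly this (a guess, since the script isn't shown): cron runs without the login environment and ssh-agent, so the key has to be the private key (id_rsa, not the .pub) and may need to be named explicitly. A hypothetical crontab entry for backupchef:

```shell
# backupchef's crontab (crontab -e): nightly backup at 02:30.
# BatchMode=yes makes ssh fail fast instead of prompting for a password.
30 2 * * * ssh -i /home/backupchef/.ssh/id_rsa -o BatchMode=yes backup@target.example.com /usr/local/bin/do-backup.sh
```

The paths, host, and script name here are invented for the example.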
<bekks> Voyage: You downgraded your php5 installation manually, as OerHeks just told in #ubuntu?
<Voyage> ya.
<Voyage> I have php 5.4
<Voyage> but it should run
<Voyage> as it is a php...
<bekks> LOL, no :)
<Voyage> its just not installed by a repository
<Voyage> thats the only difference
<Voyage> right?
<Voyage> or no?
<bekks> As it is PHP, it is more a matter of star constellation, water temperature and some voodoo.
<bekks> In #ubuntu you just made sure that /usr/lib/apache2/modules/libphp5.so is missing.
<Voyage> bekks,  so what do I need to do?
<bekks> Looks like you need to build the missing modules as well.
<bekks> Cant you just use a more recent php version from the repos?
<Voyage> nop
<Voyage> bekks,  which modules? modules on apache side or php side?
<bekks> Voyage: You are missing the php module for apache.
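For context: a source build only produces Apache's libphp5.so if PHP's configure was pointed at apxs. A hedged sketch of a rebuild (package names are from Ubuntu of that era; the tarball version matches the 5.4.39 mentioned earlier):

```shell
# Sketch: rebuild PHP 5.4 from source *with* the Apache module.
sudo apt-get install build-essential apache2-dev
cd php-5.4.39                              # unpacked tarball from php.net
./configure --with-apxs2=/usr/bin/apxs2    # apxs2 builds/installs libphp5.so
make
sudo make install
sudo service apache2 restart
```

Extra extensions (curl, mysql, ...) would each need their own --with-*/--enable-* configure flags plus the matching dev libraries.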
<Voyage> hm
<Voyage> ok  how to uninstall php that I installed by source?
<bekks> Please keep it in one channel :)
<Voyage> ok
<Voyage> isnt it the same channel?
<Voyage> oh. yes. messages dont pass
<Voyage> right
<bekks> No. This is #ubuntu-server, while you are in #ubuntu, too :)
<Voyage> ops
<replman> ok, got my script to work with cron.
<r0x> Hi. I'm configuring a server and for some reason I want a certain user to be able to view only a particular folder and execute only a set of programs/commands (e.g.: I don't want user X to be able to execute ifconfig, ip, ps or even ping). How can I do that?
<r0x> I have already asked to #ubuntu channel
<bekks> r0x: The answers are the same as in #ubuntu ;)
<r0x> ok, but after I do chroot a certain user can still execute all the commands available on my system?
<r0x> such trying to ping other hosts on the network
<r0x> bekks: he can execute only the commands that are available in the bin folder that I create in the chroot environment?
<bekks> Thats what a chroot is for.
<bekks> But you cannot hide /proc and /sys from the user. Your chroot will be dysfunctional.
<r0x> uhm
<r0x> so i should create a vm?
<JanC> you could use containers to get even more protection
<JanC> without needing a VM
<bekks> r0x: A VM will not help you with hiding /proc and /sys.
<bekks> And containers don't, either.
<r0x> ok, but if someone sees /proc and /sys of the vm I don't care
<r0x> I only want to hide the configuration of the real machine
<r0x> JanC: with containers I can do what I asked for?
<JanC> also, to prevent people from executing certain applications, you only need to set/revoke permissions?
<r0x> JanC: I didn't understand when you said "you only need to set/revoke permissions?"
<JanC> r0x: you can set different permissions depending on the user?
<JanC> or group
<r0x> no
<JanC> etc.
<r0x> Same for all
<JanC> ?
<r0x> I don't need that kind of flexibility
<JanC> eh?
<JanC> you were asking for it...
<r0x> I just have a set of users that have the same permission
<JanC> that sounds like a "group"
<r0x> Yes
<JanC> so use it
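JanC's suggestion in concrete terms: put the users in a group, give the folder group-only access, and withhold everything from "other". A sketch; the group name and paths are invented, and the root-only steps are left as comments so the permission bits can be shown on a throwaway directory:

```shell
# Root-only setup on the real server (context only):
#   addgroup restricted && adduser someuser restricted
#   chgrp restricted /srv/shared
# The permission bits themselves, on a scratch directory:
mkdir -p /tmp/shared-demo
chmod 2750 /tmp/shared-demo   # owner rwx, group rx (+setgid), others: nothing
stat -c '%a' /tmp/shared-demo
```

The same idea applies to binaries: removing the "other" execute bit blocks everyone outside the owning group, though bekks's caveat about /proc and /sys still stands.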
<r0x> I found this on the internet: https://openvz.org/Main_Page
<bekks> r0x: Seeing /proc and /sys will enable users to find out a lot about the effective configuration of the host.
<bekks> r0x: openvz are containers - and they still reveal a lot about the configuration of the host.
#ubuntu-server 2015-04-05
<lordievader> Good morning.
<ubuntu023> Hi every one
<ubuntu023> I would like to ask about web chat for ubuntu
<ubuntu023> any suggestion what should i use?
<ubuntu023> i have ubuntu server with nginx and php5-fpm and mysql
<rypervenche> ubuntu023: You could set up something like Kiki IRC and have it connect to an IRC channel.
<rypervenche> ubuntu023: Kiwi IRC*
<ubuntu023> i see, how about a private chat server?
<Amm0n> ubuntu023, https://www.ejabberd.im/jwchat-localserver
<ubuntu023> i think that one can do it
<ubuntu023> btw is this ejabberd open source?
<Amm0n> yep
<Amm0n> it's under gpl
<ubuntu023> i just saw that there are jwchat on ubuntu repository
<ubuntu023> http://packages.ubuntu.com/hu/lucid/web/jwchat
<ubuntu023> is this the same one?
<ObrienDave> lucid is fairly old
<ubuntu023> i have tried a web chat called shoutbox
<ubuntu023> it uses socket.io but i am stuck on the installation of shoutbox so i am looking for an alternative
<ubuntu023> shoutbox has little documentation
#ubuntu-server 2016-04-04
<Curly_Q> Hello folks.
<Curly_Q> Ubuntu has a     sudo reboot      and a     sudo poweroff     command.
<Curly_Q> Is there a sleep mode to ensure that Ubuntu can be remotely controlled in sleep mode?
<mybalzitch> no
<Curly_Q> Power off means that the machine needs to be mechanically turned on.
<Curly_Q> I suppose I could program a device that starts the machine.
<Curly_Q> There are some devices that do this by way of telephone.
<Curly_Q> The other thing is can Ubuntu control the BIOS settings remotely?
<Curly_Q> I suppose it would depend upon the type of the machine.
<Curly_Q> Which means that the BIOS settings would require the machine to reboot.
<mybalzitch> you'll want to look at wake on lan
<Curly_Q> I do lots of computer repair and data recovery, but never remotely.
<Curly_Q> I was spoiled by Windows GUI and now like to use the Command Prompt. It is much more rewarding to use.
<Curly_Q> More homework but worth the effort.
<Curly_Q> My server is Apache2 Headless. Nice machine. i386
<Curly_Q> I am using Wily  version 15.10  <------<   Nice.
<Curly_Q> The one thing I don't understand is that if I purchase an expensive 64 bit machine will I have to change all of the software installed to 64 bit?
<Curly_Q> If I install a 64 bit Ubuntu server will the same software work?
<mybalzitch> I'm mostly sure you can run 32bit software in a 64bit userland
<Curly_Q> I am sure there are exceptions there.
<Curly_Q> I use VBOX on all of my 32 bit Windows machines and run Debian and Ubuntu servers.
<Curly_Q> I know that 64 bit box will work faster with VBOX.
<Curly_Q> If there were a 128 bit machine, I would purchase it in a heart beat.
<Curly_Q> Solaris and UNIX are a different story.
<Curly_Q> The issue is that most programmers program with the current technological compilers and it is 16    32    or    64 bit. Programs vary.
<Curly_Q> Anyway, Mybalzitch, thanks for the input. You have a strange nickname.  :)
<Curly_Q> I suggest    sudo apt-get  install Scratch Them     hehe
<patdk-lap> ya, use wake-on-lan or ipmi for poweroff/on
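The two approaches patdk-lap mentions, sketched (interface name, MAC, IP, and credentials are placeholders):

```shell
# Wake-on-LAN: enable magic-packet wake on the server's NIC, then wake it
# from another machine on the same LAN segment.
sudo ethtool -s eth0 wol g
sudo apt-get install wakeonlan
wakeonlan 00:11:22:33:44:55

# IPMI (server-class boards only): out-of-band power control over the network.
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power status
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power on
```

Note the ethtool setting often resets at boot, so it is usually re-applied from /etc/network/interfaces or a boot script.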
<Curly_Q> Don't forget to    sudo apt-get to update     it.
<patdk-lap> heh?
<Curly_Q> Patdk-lap interesting. I will Google that. Thanks.
 * patdk-lap wonders where you can even purchase a 32bit machine
<patdk-lap> it's been a rather long time since they made cpu's that didn't do 64bit
<Curly_Q> Patdk I have a large collection of computers in my home basement. They just sit there.
<Curly_Q> The nice thing about Ubuntu is that it still accommodates the older machines. If not you can still download the older versions of Linux or Ubuntu.
<Curly_Q> I have used Red Hat Linux years ago. It was nice.
<Curly_Q> It came with a floppy disk to install it.
<Curly_Q> Partitioning the disk was fun though.
<Curly_Q> The floppy disk was a DOS Windows disk. Strange, with a vmlinuz file.
<Curly_Q> Oh well, those were the old days.
<Curly_Q> Patdk where are you from?
<Curly_Q> I am from Massachusetts  U.S.A.
<Curly_Q> I guess that everyone here is asleep.
<Curly_Q> Have a nice day folks. Sleep well.
<Hyllegaard> Hi.
<Hyllegaard> My problem is that having just installed ubuntu and the openstack single server, I am not able to reach any of the ip's listed in the openstack status.
<Hyllegaard> Hi. I am having problems accessing the ip adresses listed in openstack status, on a freshly installed single server.
<melati> Assalamaulaikum
<pmatulis> huh?
<rbasak> pmatulis: it's a greeting. Not sure why he joined, greeted us and then left though.
<pmatulis> rbasak: ok, TIL
<bonzibuddy> hello
<bonzibuddy> my ubuntu server 14.04 keeps printing stuff on the console asynchronously, when i'm logged in over ssh
<bonzibuddy> "fatal: Read from socket failed: Connection reset by peer [preauth]"
<bonzibuddy> seems to be related to ssh and potentially brute force attempts
<bonzibuddy> ultimately i dont want those printing to my ssh session.... it really messes up ncurses-based things
<bonzibuddy> how do i disable that???
<andol> bonzibuddy: Are you seeing that in a (physical) console or in a ssh terminal session?
<bonzibuddy> andol: ssh terminal session
<bonzibuddy> it seems to happen to any logged in user
<andol> bonzibuddy: Hmm, in that case I'm not sure, but it might be an option to tune or remove the /dev/xconsole entry in /etc/rsyslog.d/50-default.conf
<bonzibuddy> andol thx! I will look into that.  seems to be what I'm after
 * andol is not entirely sure to what extent /dev/xconsole connects to ssh terminal sessions.
<bonzibuddy> this server is running syslog-ng and seems to have similar rules
<andol> Ahh, then it might make more sense :)
<andol> I just assumed a default 14.04 install.
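For reference, the stanza andol means sits at the bottom of /etc/rsyslog.d/50-default.conf on a stock install; commenting it out (or the equivalent destination in syslog-ng's config, since this box runs syslog-ng) stops the console spam:

```shell
# /etc/rsyslog.d/50-default.conf -- comment out the xconsole block:
#daemon.*;mail.*;\
#        news.err;\
#        *.=debug;*.=info;\
#        *.=notice;*.=warn       |/dev/xconsole
# then apply:
sudo service rsyslog restart
```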
<tyui> hi there
<tyui> how do i check what happened to a process at a specific time?
<Pici> tyui: What do you mean?
<tyui> i got a process which went down on saturday
<tyui> i would like to know why that process went down like that?
<Pici> tyui: look at your logs.
<tyui> where exactly ?
<tyui> i can't find /var/log/messages on ubuntu
<Pici> tyui: You might find something in /var/log/syslog  but your process might not be configured to log using syslog.
<tyui> so where can i find it?
<Pici> tyui: it completely depends on what you are running.
<tyui> it's a perl program
<tyui> ok thanks
<tyui> bye
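If the perl program logged through syslog at all, narrowing /var/log/syslog to the time window is usually the first step. A sketch against a stand-in file, since the real log obviously isn't available here (names and timestamps invented):

```shell
# Build a stand-in syslog excerpt so the grep can be demonstrated.
cat > /tmp/syslog.sample <<'EOF'
Apr  2 02:15:01 host CRON[1234]: (root) CMD (run-parts /etc/cron.hourly)
Apr  2 03:41:22 host myperlapp[5678]: died: Out of memory!
Apr  2 04:00:01 host CRON[1240]: (root) CMD (run-parts /etc/cron.hourly)
EOF
# Narrow to the hour when the process disappeared (Saturday ~03:00):
grep '^Apr  2 03:' /tmp/syslog.sample
```

If nothing shows up in that window, the program probably wrote to its own log file (or nowhere), and redirecting its stdout/stderr when launching it is the next move.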
<Blueking> TJ- hello again :)
<Blueking> TJ- you talked about making script to fix mine net issues related to dhcpdiscover,dhcprequest,dhcpoffer, dhcpnak  problem
<TJ-> Blueking: oh, yes, how far did you get with that? Last I recall I suggested enabling debugging mode and capturing the log when the lease expires
<RoyK> TJ-: I think he has a wireshark dump of it
<TJ-> RoyK: it's the dhclient debug log I asked Blueking to collect
<TJ-> dhclient fires events into a shell script and we can hook into those events to over-ride actions
<RoyK> he should have that as well
<RoyK> he gave me some log yesterday, but I was a bit tired
<TJ-> Some terrible ISP there!
<RoyK> TJ-: the interesting part is that I have the same ISP and my internet connection has been stable for >5Y
<TJ-> right, but are you using static IP?
<RoyK> so it may be the problem is somewhere else
<RoyK> I have a static IP with DHCP
<TJ-> also, are you seeing the same short DHCP renew times
<RoyK> so has Blueking now
<RoyK> I'm just using the router they gave me - he put that in bridge mode, so there's the diff
<TJ-> right, and the issue is the ISP DHCP server is NACK-ing the renewal so the lease/address gets withdrawn on the client, then it asks for a fresh lease and gets the same IP back in a new lease
<TJ-> I was wondering about router MAC registration being an issue
<RoyK> they don't have proper ipv6 - so you need to setup 6rd with the router in bridge to make that work
<TJ-> as I recall we were dealing with IPv4 only
<RoyK> yeah, but the main reason to use bridge mode on that router thing is to make ipv6 work
<RoyK> with 6rd
<RoyK> I haven't tried it yet
<coreycb> jamespage, ddellav, I'm fixing up the mitaka CA build failures for ryu and cinder (i386 issues).  I think I have both fixed but cinder might take a little longer since it has some new non-i386 failures on xenial.  I'm going to see if RC2+new deps fix those issues up.
<rbasak> hallyn: quick question, not really work related. What's the recommended way for me to get a shell running in its own cgroup? Should I be using cgm like its manpage example or something else?
<rbasak> (on Xenial)
<rbasak> Google seems to suggest many things. It's not clear to me what is deprecated, etc.
<hallyn> rbasak: good question.  i think we should ask pitti if there is a proper systemd way to do it
<hallyn> cgm imo is the easiest way still, but since we're trying to drop cgmanager...
<rbasak> Thanks, I'll ask in #ubuntu-devel since pitti isn't here.
<hallyn> +1
<Blueking> TJ-  back
<noobadmin> hi, people, I need help bringing up a new bridge interface. I'm on 16.04 and I edited /etc/network/interfaces to add 'br0' using dhcp and set 'bridge_ports em0', when I bring it up with 'ifup br0' it works but I lost connectivity
<noobadmin> and I can see on the router a lot of 'arp who-has' and 'arp reply' but nothing else... can somebody help me a bit? I don't know what else to check
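A frequent cause of exactly this symptom (a guess, without seeing the file): the physical interface keeps its own dhcp stanza, so em0 and br0 fight over the address. In /etc/network/interfaces the port should be manual and only the bridge should do DHCP, roughly:

```shell
# /etc/network/interfaces -- the bridge takes the address, the port is enslaved
auto em0
iface em0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports em0
    bridge_stp off
    bridge_fd 0
```

Applied with `sudo ifdown em0; sudo ifup br0` (bridge-utils must be installed for the bridge_* options to work).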
<Blueking> can I pm u or u disabled it ?
<Blueking> TJ- ?
<noobadmin> I'm not sure, I'm not used to using irc... let's try
<JRWR> What would be the best method to combine the free space of 4 servers into one filesystem over a network? all have different drive sizes but in total it would be 15TB and I would like to be able to add new servers in at any time, no redundancy needed since it's bulk data that can be easily replaced
<JRWR> I have experimented with aufs/nfs mounts and it was OK but I found the rr modes were not very robust
<sarnold> JRWR: sounds like ceph, gluster, or maybe hdfs (less likely, depends upon what you're doing with it)
<JRWR> mostly just bulk media storage, nothing too fancy
<sarnold> JRWR: note that ceph appears to be insanely picky about full storage targets. do not let that happen.
<sarnold> JRWR: alright, skip researching hdfs then.
<JRWR> gluster looked pretty good but the auth methods are lacking, noticed it was IP only
<JRWR> I'm fine with that but how well does it handle load balancing the files across the systems
<sarnold> ceph allows you to define maps that say which storage targets in which drives in which servers in which racks in which datacenters on which continents get your data
<sarnold> it probably scales up to planets too but to my knowledge no one's tried
<JRWR> lol
<JRWR> Im not trying to scale that high, maybe 10 servers at max
<sarnold> no interplanetary travel? oh well
<JRWR> gluster looks like a bitch to configure
<sarnold> JRWR: ceph feels like a decent fit, but it'll take you two or three days to work through the docs
<sarnold> I haven't read all the gluster docs yet, it didn't feel as 'ready' as ceph to me as far as I have researched it
<sarnold> the filesystem layer of gluster felt weak; apparently gluster's object store layer is decent thuogh
<JRWR> Well this is not for enterprise at all, right now im using mhddfs but its a little CPU heavy
<JRWR> on top of NFS exports
<sarnold> neat, I've never heard of that. seems like a funny storage design though.
<JRWR> Greyhole workds the same way
<JRWR> looks like gluster can stripe files over servers, thats interesting
<sarnold> so will ceph
<sarnold> the fact that greyhole and .. uh the other one .. don't do striping is in fact pretty strange to me :)
<JRWR> both are more file routers than anything
<JRWR> they overlay all the filesystems on top of one another and route based on free space
<sarnold> heh, interesting analogy
<JRWR> so not really a filesystem
<sarnold> I guess the nice thing is that if you lose a server you lose those specific files and nothing else; losing enough ceph nodes to run below your replication levels means you lose pretty much everything
<sarnold> but that's why you tune your replication levels appropriately :)
<JRWR> I was thinking of just exporting everything over NBD
<JRWR> and zfs the bitch up
<JRWR> put small OSes on them and do science
<sarnold> SCIENCE!
<JRWR> lol
<sarnold> zfs is -not- a cluster filesystem
<JRWR> no
<sarnold> you probably know that but I've got to say it
<JRWR> but you can raid shit
<JRWR> and nbd exports block devices over the network
<JRWR> and do support uneven raid
<JRWR> nbd vs iscsi
<JRWR> now I've really gone down the rabbit hole
<sarnold> .. and -which- iscsi targets / initiators to use..
<JRWR> oh noes!
 * JRWR now has over 400 tabs open in chrome
<JRWR> uses CHAP for auth
<JRWR> holy shit
<sarnold> yeah, these things often assume they're running on a trusted storage network
<sarnold> different switches than the application network
<JRWR> nope, all these guys are pretty much on the open internet
<JRWR> so software firewalls for me :3
<sarnold> ipsec or openvpn the things then :)
 * sarnold adds another dozen tabs to JRWR's poor chrome
<JRWR> thats what I had kinda planned on
 * JRWR knows how to setup openvpn already :p
<ndf> wheeyy what a time to walk in
<ndf> I literally just got my openvpn working
<ndf> =)
<ndf> </brag>
<JRWR> so ill do tiny os installs (15GB) and the rest ill export over iscsi
<JRWR> then use ZFS to raid all those bad boys together
<ndf> you raiding over the internet through vpn?
<ndf> didn't even know you could do that
<ndf> lol
<sarnold> "can" and "should" are different things of course ;)
<ndf> hah
<sarnold> JRWR: ooh ooh ooh, tahoe-lafs. :)
<JRWR> I saw that
<sarnold> (though to be honest I don't know how many people use it.)
<JRWR> ndf I'm taking the storage space of a few servers and combining it
<JRWR> making a poor man's SAN
<ndf> well I suppose it fills the gap in the market for realtime remote backup, but it's gotta cost a lot of bandwidth
<JRWR> meh
<JRWR> they are in the same datacenter
<ndf> oh ok
<ndf> wellllll wouldn't it be cost effective to shuffle the racks closer and share a hdd cage?
<ndf> lol
<ndf> where's the fun in that tho eh
<JRWR> there are some nifty tools I found
<JRWR> like mhddfs that works like a file router based off free space
<JRWR> thats what I'm currently using
<JRWR> likes to get touchy when you abuse it
<sdeziel> JRWR: with ZFS you can always use send/receive to copy your data over the VPN link. Not real-time though.
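sdeziel's send/receive idea, sketched with invented pool/dataset names:

```shell
# Initial full copy of a snapshot to the other host:
zfs snapshot tank/media@2016-04-04
zfs send tank/media@2016-04-04 | ssh otherhost zfs receive backup/media
# Subsequent runs ship only the delta between snapshots:
zfs snapshot tank/media@2016-04-05
zfs send -i @2016-04-04 tank/media@2016-04-05 | ssh otherhost zfs receive backup/media
```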
<JRWR> ya, looking for real time, thats why I was exploring layered filesystems
<JRWR> AuFS looks pretty nice
<sdeziel> JRWR: for real-time replication, DRBD is pretty impressive and since it runs over TCP, it's easy to tunnel over VPN
<sarnold> aufs/overlayfs feel like a stream of issues :/
<JRWR> looking for combined, not redundancy
<sdeziel> like RAID0 over the network?
<JRWR> pretty much
<JRWR> thats why i was going to use iSCSI and do soft-raid
<sdeziel> should work (make sure to use write-mostly for the iSCSI one)
<sdeziel> err, write-mostly only makes sense for RAID1 ... nvm that part
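The iSCSI-plus-soft-RAID plan JRWR describes would go roughly like this (a sketch; the portal address and device names are placeholders, and RAID0 here means losing one server loses the whole array):

```shell
# Log in to each server's iSCSI target:
sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.11
sudo iscsiadm -m node --login
# Suppose the remote LUNs show up as /dev/sdb../dev/sde:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sd[b-e]
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/bulk
```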
#ubuntu-server 2016-04-05
<sdeziel> hallyn: are you there?
<sarnold> man, for a bad time, undefine the uvtool storage pool in libvirt then try to make it work again. sheeeeesh.
<sarnold> I ought to be familiar with this cycle by now; ignore libvirt, think it's probably fine, try it again, hate it again. repeat.
<lifeless> nevyn: you factoring
<lifeless> bah
<trippeh> dnsmaster named[15571]: ../../../lib/isc/ratelimiter.c:185: INSIST((rl->pending).tail == (event)) failed, back trace
<trippeh> hrms ;)
<Celphish> Elo
<Celphish> I'm in the need of some advice, I'll describe my problem, hold on
<Celphish> I'm trying to install ubuntu server 12.04 on a hp-server through ILO, during the setup it states that it's missing firmware for the hardware, and it asks for the file q12500_fw.bin and asks if I want to insert a removable disk or media with it.. I then go on to create a .img with the file on it (also tried a version where the file was on the img at the path /lib/firmware/) but it doesn't seem to find it..
<Celphish> problem is that I need that firmware to be loaded for me to be able to access the first disks and so on
<jamespage> cpaelzer, any opinion on a sane default for allocating cpu cores for dpdk/ovs ?
<jamespage> trying to capture some sort of best-practice for the neutron/ovs/dpdk integration
<cpaelzer> jamespage: depends on the system size
<cpaelzer> jamespage: how complex can the rules become ?
<cpaelzer> jamespage: as input vars you surely have the overall #cpus
<jamespage> cpaelzer, my current thinking was to allocate a core on each numa node along with some ram
<cpaelzer> jamespage: do you also have #network cards and maybe even #queues on these cards?
<cpaelzer> jamespage: not a bad thinking - I'd consider that a good lower boundary
<cpaelzer> jamespage: but it depends a lot if this system primary purpose is passing dpdk traffic
<cpaelzer> jamespage: if this is doing a lot of dpdk traffic like via many cards and/or 40G/100G cards I'd go higher than that
<jamespage> cpaelzer, I have memory per numa node as a config option, might add cores per numa node as well
<cpaelzer> jamespage: that I like very much
<cpaelzer> cores-per-node starting with 1 as a working but not high-intensity default and up to X as requested would be a good match
<cpaelzer> jamespage: will that end up in dpdk's -c option?
<jamespage> cpaelzer, yes
<cpaelzer> jamespage: while you are at it please consider setting rx queues
<cpaelzer> jamespage: after you started openvswitch-dpdk
<jamespage> cpaelzer, on what formula?
<cpaelzer> jamespage: you can set it via "ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2"
<cpaelzer> that example uses two queues
<cpaelzer> jamespage: the formula would IMHO be "up to the number of combined queues, but not more than CPUs assigned"
<cpaelzer> jamespage: I'd guess that is a sane start
<cpaelzer> jamespage: a manual tuning would include locating on which socket a network card is and then make way more complex masks
<cpaelzer> jamespage: number of queues can be read with "ethtool -l"
<cpaelzer> and - btw - is by default limited to #cpus
<cpaelzer> so on my 12 core the card has 64 queues but uses 12 by default
<cpaelzer> ah and I wrote it wrong, has to be BEFORE ovs-dpdk restart
<cpaelzer> so that on device init it picks it up
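Putting cpaelzer's points together for the openvswitch-dpdk packaging of that era (values are illustrative, not recommendations):

```shell
# Core mask (-c) for DPDK, set in /etc/default/openvswitch-dpdk, e.g.:
#   DPDK_OPTS='--dpdk -c 0x3 -n 4'        # two cores in this example
# Receive queues -- set BEFORE (re)starting ovs so device init picks it up:
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
sudo service openvswitch-switch restart
# Sanity-check how many combined queues the NIC actually offers:
ethtool -l eth2
```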
<caribou> rbasak: remember my query on apache2 backport from Xenial to trusty : looks like it's not going to be as simple :
<caribou> rbasak: there is a Pre-Depends: dpkg (>= 1.17.14) after the Utopic version
<rbasak> "  * Bump dpkg Pre-Depends to version that supports relative symlinks in
<rbasak>     dpkg-maintscript-helper's symlink_to_dir. Closes: #769821"
<rbasak> I remember that now.
<rbasak> Yeah that one might be a little messy to fix.
<caribou> rbasak: the simpler solution is to backport the Utopic version that is the one right before the addition of this pre-depends
<caribou> rbasak: and is still what the original bug asked for
<rbasak> caribou: I think the pre-depends was to fix a bug.
<caribou> rbasak: I'm looking at the xenial source;looks like what it was fixing is no longer used
<caribou> symlink_to_file
<rbasak> Perhaps I'm thinking of a different bug.
<caribou> symlink_to_dir rather
<rbasak> bug 1393832 is what I have in mind.
<ubottu> bug 1393832 in apache2 (Ubuntu) "Modules fail to enable when configured after apache2 is configured" [High,Fix released] https://launchpad.net/bugs/1393832
<rbasak> I think that's completely unrelated now. Sorry.
<frickler> jamespage: does https://bugs.launchpad.net/ubuntu/+source/python-keystoneauth1/+bug/1566296 look plausible to you? will it be possible to get a FFE or could you only do an update after the initial release is done?
<ubottu> Launchpad bug 1566296 in python-keystoneauth1 (Ubuntu) "Please upgrade python-keystoneauth1 for xenial" [Undecided,New]
<Slashman> hello, do you think the beta version of xenial is fine to use on a server without any graphical interface in production (without net access)? I'm looking at the bug list and don't see any show stopper... I want to avoid the update from 15.10 to 16.04 if possible and use the latest ZFS and LXD
<jamespage> frickler, we have a standing ffe for mitaka/xenial
<jamespage> coreycb, ^^ can you take a look please
<jamespage> coreycb, I'm +1 on a resync with Debian if that looks good with you.
<teward> Slashman: #ubuntu+1 for 16.04 questions.  And I would not use 16.04 unless you have no choice as it is still not released as 'stable' yet.
<DirtyCajun> im about to bond 2 nic. i dont have a special switch so im going to use balance-alb, do i need to state bond-lacp-rate or no?
<Slashman> teward: thanks for the chan. I understand your point of view, but from the bug tracker I don't see any critical bug for a container/virtualization server
<coreycb> jamespage, frickler, yes sounds good I'll take a look today.
<jrwren> Slashman: its fine, but the update is not a problem either. Jumping through hoops to avoid running an update does nothing but waste your time, IMO.
<teward> Slashman: that isn't why I said that.  It's up to you if you use software that is not yet released as stable - there's still quite a lot of other bugs in the system.  I would suggest waiting anyways, before updating production systems.  Or, stage a testing environment to test things
<teward> or both :)
<Slashman> I'm already testing it, it's working so far
<jrwren> Or... use it in production, its excellent real world testing for the rest of us ;]
<Slashman> jrwren: ^^
<jamespage> frickler, fyi just testing a revised 32 bit compat patch and will get 10.1.0 uploaded to xenial
<Slashman> I may stay reasonable and use wily with lxc and zfs ppa then
<frickler> jamespage: any change to tackle some of my issues with that, too?
<frickler> jamespage: also I just found an old one: https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1488962 this is still present in xenial, too, it seems
<ubottu> Launchpad bug 1488962 in apache2 (Ubuntu) "systemd does not notice when apache2 service fails" [Medium,Confirmed]
<jrwren> Slashman: that is no fun. the built in zfs and lxd instead of ppa-zfs and lxc is like upgrading to cadillac from a chevy :p
<jamespage> frickler, on my list for this week...
<jamespage> not apache2
<jamespage> frickler, I see rbasak did some triage - is that bug ^^ a target for xenial?
<jamespage> frickler, urgh - that's a systemd fix in suse...
<rbasak> Yeah, that needs attention.
<rbasak> smoser was going to look at systemd-boot tagged bugs but I think he's occupied on something else.
<rbasak> jgrimm: ^^
<smoser> :-(
<Slashman> there is one issue with zfs between ppa wily and xenial: ppa version is 0.6.5.6 and xenial version is 0.6.5.4
<jgrimm> rbasak,  i'm going to suggest cpaelzer for that task
<jrwren> Slashman: how is that an issue?
<Slashman> not sure how the update will handle that, and I prefer the latest stable version :p
<cpaelzer> jgrimm: dpdk crumbles between my fingers atm, so I unsuggest myself :-) but lets talk in 20 minutes and define priorities
<jgrimm> cpaelzer, ok, i can do 1x1 now if you'd like
<cpaelzer> jgrimm: let me just adapt and start the next iteration of the testsuite - approx 3 min
<jgrimm> cpaelzer, k
<teward> jgrimm: ohai, PM?
<jgrimm> teward, hi there
<ddellav> jamespage when you get a chance, can you look at my fix for this and push it if possible? https://bugs.launchpad.net/ubuntu/+source/manila/+bug/1546116
<ubottu> Launchpad bug 1546116 in manila (Ubuntu) "manila share process init script is missing" [Medium,Fix committed]
<ddellav> I put the fix up awhile ago but no one has had the chance to review/push
<jamespage> ddellav, we need to get that into xenial first - manila is one of the git repo's under ~ubuntu-server-dev
<ddellav> jamespage should I create the MIR?
<jamespage> ddellav, MIR ?
<ddellav> jamespage it's in xenial/universe currently
<jamespage> ddellav, I think we're talking cross purposes
<jamespage> there is no manila-share package in xenial yet
<ddellav> jamespage ooooh, that's what you meant.
<jamespage> ddellav, yes
<ddellav> jamespage so a requestsync then?
<jamespage> ddellav, no
<jamespage> ddellav, we're divergent from debian here - in fact it was in Ubuntu first, but members of the openstack-pkg team did their own thing
<ddellav> jamespage so what is the next step?
<jamespage> ddellav, how would you make a packaging change to any of the other core openstack packages?
<ddellav> jamespage normally what i do is package updates, so i debcheckout, import updated package, fix depends and any other issues then push for review. If making an upstream change i use git-review to push changes for CI and manual review.
<jamespage> ddellav, git+ssh://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/manila is the source for manila
<jamespage> can you propose the fix for this bug against that please?
<ddellav> jamespage yep
<jamespage> ddellav, thanks!
<jamespage> ddellav, the delta in your bzr branch looks fine btw - just needs targeting to the right location!
<ddellav> jamespage gotcha, thanks
<teward> server team still meeting today?
<rharper> teward: yeah, scheduled to start in 10 minutes
<teward> cool
<teward> just making sure, I can't ever keep track of whomever is chair :)
<rharper> https://wiki.ubuntu.com/ServerTeam/Meeting has chair
<rharper> but as you say, not always 100% accurate
<teward> yep
<jamespage> frickler, any thoughts on what we should set the TasksMax attribute to for ceph-osd/mds/mon ?
<jamespage> 512 is way to low
<jamespage> frickler, looking at the juju charms, we currently set kernel.pid_max to 2097152 as a sane default
<jamespage> frickler, but this is really just applied at the process level - so I'm tempted to set it to infinity and allow the server admin to deal with it at the kernel sysctl level
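The "set it to infinity" route jamespage describes can be done with a systemd drop-in rather than editing the shipped unit; a minimal sketch, assuming the `ceph-osd@` template unit name and a systemd new enough to support `TasksMax`:

```
# /etc/systemd/system/ceph-osd@.service.d/tasksmax.conf  (path assumed)
[Service]
TasksMax=infinity
```

After `systemctl daemon-reload`, the kernel's `kernel.pid_max` sysctl becomes the effective ceiling, which is exactly the "let the server admin deal with it" behaviour discussed here.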
<jamespage> frickler, I've uploaded another set of updates for 10.1.0 to the ceph-sru PPA - they will take a while to build
<jamespage> but they include fixes for most of your reported problems...
<Blueking> TJ- hello again :)
<axisys> how do I know if my openssh server is vulnerable to CVE-2016-3115 ? I am using 12.04 lts and I have the latest openssh server already..
<sdeziel> axisys: you can track the patch status in Ubuntu here: http://people.canonical.com/~ubuntu-security/cve/2016/CVE-2016-3115.html
<sdeziel> axisys: in the meantime there are some mitigations you can apply as mentioned in http://www.openssh.com/txt/x11fwd.adv
<axisys> sdeziel: forgot to mention, I was there already... but let me check the mitigations
<teward> axisys: disable X11Forwarding and it'll help mitigate
<sdeziel> yup
<teward> "Set X11Forwarding=no in sshd_config. This is the default."
<teward> not sure the defaults in Ubuntu's openssh-server, but it's that simple to help mitigate
<sdeziel> Debian/Ubuntu changed that default
<axisys> how do I check the default values? is there a switch ?
<teward> axisys: /etc/ssh/sshd_config
<teward> go poke there, which is on your system.
<teward> edit those configuration items, so that X11Forwarding has 'no' instead of 'yes'
<teward> unless you need X forwarding
<axisys> I know I can manually modify the config.. but is there a switch to get the default value, like postconf and mutt has?
<teward> in which case you're stuck between a rock and a hard-place
<sdeziel> axisys: sed -i 's/^X11Forwarding.*/X11Forwarding no/' /etc/ssh/sshd_config
<axisys> sdeziel: and no-x11-forwarding on those keys
<sdeziel> axisys: that would work too but the global param also covers password authentication
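On axisys's question about a postconf-style switch: OpenSSH can dump its effective, post-defaults configuration with `sudo sshd -T` (supported since 5.1). A small sketch that also falls back to grepping the config file; the `SSHD_CONFIG` variable is purely illustrative:

```shell
# `sudo sshd -T | grep -i x11forwarding` prints the effective value.
# Without root, inspect the config file directly:
cfg="${SSHD_CONFIG:-/etc/ssh/sshd_config}"
grep -i '^x11forwarding' "$cfg" 2>/dev/null \
    || echo "X11Forwarding not set explicitly; compiled-in default applies"
```

Note the fallback only shows explicit settings, which is why `sshd -T` is the more reliable answer to "what is the default?".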
<axisys> sdeziel: right.. would you know if ubuntu will have a upgrade available soon?
<sarnold> unlikely
<sarnold> we've rated it low, so it'll only get handled alongside something else
<axisys> also, might be a question for #openssh, but anyway - how do I find out if anyone used x11? if not I could modify it without notifying over 2000 users :-)
<teward> axisys: for the record, sarnold is on the security team - he speaks from knowledge therein ;)
<teward> so if it's an "Unlikely, unless we handle it alongside something else", well... :P
<axisys> teward: appreciate that..
<sarnold> we're currently running a pretty steep backlog of issues at the moment, so it's behind a looong queue of other things
<axisys> sarnold: I know how that is ;-)
<axisys> sarnold: thank you
<axisys> so any trick you guys know of to tell whether x11 has been used recently would help me decide whether to make the change with or without change management
<sdeziel> axisys: maybe you could create a thin wrapper around xauth that logs something prior to calling the real xauth
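sdeziel's wrapper idea might look like the sketch below. Every path here (the demo wrapper location, the `.real` rename) is an assumption; in practice you would move the real xauth aside and install the wrapper where sshd finds it:

```shell
# Write a thin logging wrapper for xauth. In production the wrapper
# would replace the real binary on its usual path, and the real binary
# would be moved to a ".real" name.
wrapper="${WRAPPER_PATH:-/tmp/xauth-wrapper-demo}"
real="${REAL_XAUTH:-/usr/bin/xauth.real}"
cat > "$wrapper" <<EOF
#!/bin/sh
# Record the caller and arguments via syslog, then run the real xauth.
logger -t xauth-wrapper "called by \$(id -un): \$*"
exec "$real" "\$@"
EOF
chmod 755 "$wrapper"
```

Each `logger` line then lands in syslog, giving exactly the "has anyone used x11 recently" audit trail axisys is after.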
<sarnold> axisys: another potential mitigation (one I haven't researched at all) is using apparmor policies on the server; that can also confine what authenticated users can do. there's lots of ways of doing that, from giving users a login shell that is confined with apparmor, or confining sshd with apparmor and using pam_apparmor to change users into a hat ..
<axisys> sdeziel: so no log today of xauth use?
<sdeziel> confining sshd with apparmor isn't exactly trivial though :)
<sdeziel> axisys: maybe sshd logs something specific when calling to xauth. I don't know since I don't use that
<axisys> sdeziel: on that token, since it is enabled today.. let me use it and see what the log looks like.. thanks
<sarnold> back when I was a kid we used to confine sshd with apparmor just for something fun to do on a saturday afternoon! of course back then movies were a quarter and a candy bar a nickel so you couldn't just buy your way to happiness let me tell you
 * axisys wonders how old is sarnold
<sdeziel> https://code.launchpad.net/~sdeziel/apparmor/usr.sbin.sshd-refresh
<sdeziel> ^^ it was fun indeed :)
<sarnold> cap_sys_ptrace???
<sarnold> oh. right.
<sarnold> #insert <rant/kernel_devs/ptrace>
<sdeziel> yeah
 * axisys struggling to follow 
<sdeziel> actually the up to date version of the profile is here: https://github.com/simondeziel/aa-profiles/blob/master/16.04/usr.sbin.sshd
<sarnold> it might be nice to clean out all the commented-out stuff.. I can't imagine that we'll bring back the privsep sshd any time soon
<sdeziel> axisys: I also have a version for 14.04 but nothing for 12.04
<sarnold> axisys: if you care about this specific flaw enough to use apparmor to confine your sshd, these patches would be a good starting point
<axisys> sdeziel: might need to enable Log DEBUG .. cuz I do not see anything about xauth on remote's /var/log/auth.log when I login with ssh -X remote
<sdeziel> sarnold: hmm, let me update that bzr merge proposal with the commented-out stuff removed
<sdeziel> I wish apparmor's was tracked in git
<sarnold> yeah :/
<sarnold> see afore-mentioned backlog :(
<sdeziel> yeah, at least the conversion is in the pipeline
<axisys> sdeziel: yep, at least DEBUG1 is necessary to catch x11-req
<sdeziel> good to know
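The verbosity axisys needed can be made permanent in the server config (restart sshd afterwards); DEBUG1 is the level at which the x11-req channel request shows up in auth.log:

```
# /etc/ssh/sshd_config
# DEBUG1 is needed to see "x11-req" channel requests in the auth log
LogLevel DEBUG1
```

Worth noting that DEBUG levels are noisy and, per the sshd_config man page, may log private user information, so this is best treated as a temporary audit setting.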
<axisys> i have x11forwarding disabled.. what is the best way to test it? ssh -X remotehost and xterm fails ?
<sarnold> yeah that should suffice
<teward> sarnold: oh, that brings a question, i'll poke in -hardened since it's a question about the openssl defaults :P
<teward> openssh*
#ubuntu-server 2016-04-06
<Pinkamena_D> when using the program 'glances', which some of you may be familiar with, the disk usage monitor is messed up because there are a bunch of devices like ram0, ram13, ram1, etc so I can not see the actual disks at the bottom of the list
<Pinkamena_D> does anyone know if you can reorder this or disable these devices?
<Pici> Pinkamena_D: I think you can specify some options in the conf file: https://glances.readthedocs.org/en/latest/aoa/fs.html
<FManTropyx> "deferring update (hook will be called later)" wat
<frickler> jamespage: regarding TasksMax, I agree that for ceph-osd/mon, there should be no limit at all from systemd probably.
<frickler> jamespage: as to why the limit is not enforced after booting, this seems to be a systemd issue, not only affecting ceph
<frickler> jamespage: and thanks for the other fixes, I'll try to verify the new packages today
<aotea> I'm trying to set my server up to only accept connections with ssh id keys, but once I set PasswordAuthentication no in sshd config I can't connect to it anymore. Is this due to another rule that says not to allow empty passwords (if the user has an id key without password) or what am I looking for?
<RoyK> aotea: no, the server has no way of knowing if the private key is password protected or not
<RoyK> aotea: did key authentication work before the change?
<aotea> RoyK: I think so, looked through ssh in verbose mode and seemed like key was ok'd to authorized_keys
<aotea> RoyK: looking through my auth log now to see if I might find a clue there though
<lordievader> aotea: Perhaps wrong permissions on ~/.ssh?
<aotea> lordievader: could be, should be 700 on the .ssh and 600 on the authorized keys right?
<lordievader> Yes, but if it were wrong, you'd see it in the server's auth log.
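The permissions lordievader and aotea settle on can be applied in one go; the `SSH_DIR` override is only there to make the snippet testable outside a real home directory:

```shell
# sshd refuses key auth when these are too open: the directory must be
# 0700 and the key file 0600, both owned by the logging-in user.
d="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$d"
touch "$d/authorized_keys"
chmod 700 "$d"
chmod 600 "$d/authorized_keys"
```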
<RoyK> aotea: if you try to change back, can you then login without getting a password prompt?
<aotea> No, tried setting passauth to no in sshd and couldn't log on at all, so had to get to the server and reset the config
<aotea> http://pastie.org/10787181 this is the log after the change at first, still asking me for password.
<aotea> seeing as there is an added thing about RSA I'm guessing it is checking my key then at least?
<yossarianuk> hi - on ubuntu 15.10 when I use virt-manager to create a QCOW2 disk image it is not thin provisioned by default - i'm sure on previous versions it was, any ideas how I can make it thin provisioned by default?
<yossarianuk> actually - forgive me, it is thin provisioned - just ls showing the virtual size.
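What yossarianuk saw is reproducible with any sparse file: `ls` reports the apparent (virtual) size while `du` reports the blocks actually allocated. For a real image, `qemu-img info disk.qcow2` prints both as "virtual size" and "disk size". A small demonstration (the `IMG` path is just for illustration):

```shell
# Create a 1 GiB sparse file and compare apparent vs. allocated size.
img="${IMG:-/tmp/sparse-demo.img}"
truncate -s 1G "$img"
ls -l "$img" | awk '{print "apparent bytes: " $5}'   # full 1 GiB
du -k "$img"  | awk '{print "allocated KiB:  " $1}'  # close to zero
rm -f "$img"
```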
<aotea> Still having issues getting ssh to work with keys, this is the verbose output I get http://pastie.org/10787305 - sshd is set to still prompt me for password, as last time I removed that I had to visit the server, but I want to remove this option as soon as I get this working.
<lordievader> Your public key is in the authorized keys file and the ssh daemon has read that file?
<aotea> not sure about that last bit, but yes I got a warning after doing ssh-copy-id that my pub was already in my authorized_keys. How do I check that the daemon read the file?
<lordievader> aotea: Check what is configured as the authorized keys file.
<aotea> "%h/.ssh/authorized_keys" lordievader
<lordievader> And your key is in there?
<aotea> it's in the .ssh/ folder of the user I'm trying to log in as, yes - never seen the %h for home before though
<lordievader> Hmm, then I'd like to see the server sshd log as to see why the rsa key is not accepted.
<aotea> http://pastie.org/10787326 lordievader last login seem to have accepted a pub key, but no idea what changed, only did ssh-copy-id and was told nothing happened seeing as key was already in authorized_keys :s
<aotea> lordievader: http://pastie.org/10787350 logged out and back in and no sight of the pubkey again
<cpaelzer> anybody an idea how I'd get a vivid lxd container?
<cpaelzer> it is already out of images.linuxcontainers.org
<markc> testing a xenial setup on a hp microserver by booting from the internal usb drive, just hangs at initramfs prompt with no usb drivers (I guess) so no keyboard access to do anything... kubuntu xenial desktop installer boots okay in the same slot... just spins on /scripts/local-block - what or where is that script?
<markc> wily was booting up the same server from the same slot a month ago.
<cpaelzer> markc: do you mean /usr/share/initramfs-tools/scripts/local-block ?
<cpaelzer> markc: in the initramfs that could be your path
<cpaelzer> it is only a dir, but then the only thing remotely matching the file name you asked for
<cpaelzer> it is part of cryptsetup
<cpaelzer> wild guess - maybe it is showing a popup on a screen somewhere (or think it does so)?
<markc> cpaelzer: thanks for the hint... it's a fresh install and I didn't ask for encrypted anything... unfortunately there is only usb keyboard access and that is not working
<cpaelzer> markc: so the install went through and now it is hanging after reboot - or is the install itself blocked?
<markc> I did the install on my laptop (usb stick to usb stick) then put the new install into the hp microserver
<cpaelzer> markc: unfortunately I'm no expert on that, but I really think it might ask you something on a framebuffer
<cpaelzer> markc: are you having only serial console access atm?
<markc> boots up but just spins on /scripts/local-block for half a minute then drops to the initramfs prompt, also says the UUID of the usb stick is not detected but I've double checked that the usb boot stick is indeed that UUID
<markc> cpaelzer: just using a "normal" vga screen, not serial
<markc> seeing that it can't seem to see the ext4 boot partition UUID and no usb keyboard it's like the right usb drivers have not been installed into the kernel
<markc> perhaps if I do a fresh install onto the usb drive on this server then the install process will figure out it needs to install usb drivers, but I would have thought that was part of any "normal" ubuntu server install these days
<markc> nope, that same usb stick does the same thing on my laptop, just spins on "Begin: Running /scripts/local-block ... done." for 30 secs then goes to a initramfs prompt.. so it's not the hp microserver hardware at fault, something is wrong with a stock standard xenial ubuntu server install onto a usb stick
<cpaelzer> markc: :-/ i'm out of ideas, if nothing else can help you I'd ask you to file a bug with as much detail on the HW and the instal process as possible
<markc> at least I have keyboard access on the laptop and the only odd thing I can see is that normally the usb stick would be /dev/sdb1 etc and there is no /dev/sdb* at all
<markc> bizarre, from the initramfs blkid shows the internal sata drive but not the (usb) device it just booted from... I've done this 100s of times before and never struck anything like this kind of problem before
<lordievader> aotea: I'd set the loglevel a bit higher and try again.
<rbasak> jgrimm: bug 1505839 probably needs assigning
<ubottu> bug 1505839 in debian-installer (Ubuntu) "Unable to install from text mode interface" [High,Triaged] https://launchpad.net/bugs/1505839
<rbasak> jgrimm: along with a report in the ubuntu-server ML just now
<wsirccc> https://uec-images.ubuntu.com/releases I want to run these images on qemu. One problem is to login. They say this is done by ssh keys. But I do not find explicitly how to do it? Given I have the instance running, what should I do?
<wsirccc> who can Help me?
<rbasak> wsirccc: cloud images have no access by default to avoid the obvious security vulnerability. You can feed an ssh key or password when booting using cloud-init using various mechanisms, or alter the image manually first.
<rbasak> wsirccc: uvtool handles all of this for you - see the server guide. Or use cloud-localds manually.
<wsirccc> ok, I got it - it's about preseeding ssh-keys in the guest. Thanks as well for the pointer.
<wsirccc> Another question: how would one best prepare an arbitrary ubuntu image with working ssh keys, given ssh-key login is the best choice for remote controlling the guest?
<rbasak> mount-image-callback is handy. But I wouldn't use that in production.
<rbasak> In production, use cloud-init.
<rbasak> With cloud-localds. Stop thinking about rolling your own images :)
<wsirccc> rolling?
<rbasak> Creating
<rbasak> Of course, you can do what you want.
<rbasak> But this philosophy is that you don't prepare your own images. Instead you use the same official image everywhere, and it sets itself up as you wish on boot.
<rbasak> Using cloud-init.
<rbasak> And you give it your ssh key on boot via one of the available mechanisms.
<rbasak> On EC2, that is the metadata service, etc.
<rbasak> For local qemu, you can use a "config drive" which cloud-localds can prepare for you.
<rbasak> uvtool does this automatically for you for local use.
<rbasak> Then you don't have to maintain your own images, which inevitably don't get updated with security updates.
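rbasak's config-drive route might look like the sketch below: a minimal `user-data` file (the key and password values are placeholders), turned into a seed image with `cloud-localds seed.img user-data` and attached to qemu as an extra disk alongside the unmodified cloud image:

```
#cloud-config
# Minimal user-data sketch for local qemu testing (values are placeholders).
ssh_authorized_keys:
  - ssh-rsa AAAA...your-public-key... user@laptop

# Optionally also allow console login for debugging:
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
```

`cloud-localds` ships in the cloud-image-utils package; boot with something like `-drive file=seed.img,format=raw` and cloud-init picks the settings up on first boot.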
<wsirccc> I do not get it, create means what? And what to do instead?
<wsirccc> create: install one's on system from a iso
<wsirccc> instead: download them from ubuntu-uec, and manipulate them?
<wsirccc> ?
<wsirccc> I need them guests for trying out (X) user configuration and such things it is to say. ..
<wsirccc> rbasak
<wsirccc> rbasak
<wsirccc> rbasak:?
<rbasak> wsirccc: please explain what you're trying to do. Is this for testing, production or both?
<wsirccc> lets say experiments
<rbasak> Use uvtool.
<wsirccc> ok, I'll save this conversation and go for it. Thanks for pointing me.
<jgrimm> rbasak, i couldn't find ML post for bug 1505839?
<ubottu> bug 1505839 in debian-installer (Ubuntu) "Unable to install from text mode interface" [High,Triaged] https://launchpad.net/bugs/1505839
<rbasak> jgrimm: not the same bug, but a different report that made me think the installer needs a little attention: https://lists.ubuntu.com/archives/ubuntu-server/2016-April/007248.html
<jgrimm> ah, yeah, i didn't get a connection from what you wrote
<jgrimm> rbasak, also noticed you marked that triaged, but i didn't see obvious corresponding explanation of what was determined wrong?
<rbasak> jgrimm: matsubara confirmed. Triaged doesn't need a root cause analysis. As long as it can be tackled by a developer (eg. valid bug and reproducible), it's Triaged.
<rbasak> (and ideally no dupes)
<jgrimm> oh, i would see that as confirmed
<rbasak> https://wiki.ubuntu.com/Bugs/Bug%20statuses
<jgrimm> rbasak, thanks
<jgrimm> matsubara, want to take a look at the bug 1505839?
<ubottu> bug 1505839 in debian-installer (Ubuntu) "Unable to install from text mode interface" [High,Triaged] https://launchpad.net/bugs/1505839
<matsubara> jgrimm, I can give it a try
<jgrimm> matsubara, thanks
<dr4c4n_> hi there, I am trying to define a management network on one nic, and connect my vm guests to a network unattached to the host, and has access out
<dr4c4n_> I'm running ubuntu server 14.04 with kvm
<crazybluek> TJ- hello u there ?
<crazybluek> TJ-  could u message me with details about how to make a script to keep mine router on net despise problems/errors on dhcp server (ISP) ?
<dr4c4n_> I understand I need to create a bridge interface, but I want to ensure it doesn't allow the host to access the internet, only the vm guests.
<sdeziel> dr4c4n_: what do you mean by "a network unattached to the host" ?
<dr4c4n_> sdeziel: ok, so my requirements are that the host hypervisor (kvm running on ubuntu 14.04) shouldn't access the internet, it should only be setup with the default network (provided by libvirt) to connect to it's guests. I would like the guests to be able to access the external network and the internet as I need to create a vm to make a build environment, to compile openvswitch to move to the host os
<dr4c4n_> is this a firewall setting? and not a network setup step?
<sdeziel> dr4c4n_: if you need openvswitch in the host why not apt-get install it?
<dr4c4n_> sdeziel: I'm not allowed to have the host running the hypervisor kvm attach itself to the internet
<sdeziel> dr4c4n_: also if you are using the default virbr0, this requires your host to have Internet connectivity to make it available to the guests
<sdeziel> dr4c4n_: understood but in that case the host probably has access to an HTTP proxy, doesn't it?
<TJ-> crazybluek: did you collect the dhclient debug log?
<dr4c4n_> sdeziel: the host doesn't have access at all
<sdeziel> dr4c4n_: then how will you install security updates?
<dr4c4n_> what I mean is I can plug in the host to an internet accessible network, but that is not what I need to do
<sdeziel> dr4c4n_: yeah I understood you don't want direct Internet access for the host. That's best practice and what I do too
<sdeziel> dr4c4n_: but you need some form of Internet access
<dr4c4n_> sdeziel: which I am not allowed to have. what I am requesting, is some sort of reference that can show me how to setup networking to enable a management interface for the kvm host (which might eventually be connected to the internet) but for now let's assume it doesn't have access.
<dr4c4n_> on one nic, and setup another bridge to interface with the vmnics and a different physical nic on the box
<dr4c4n_> which can connect to the internet
<sdeziel> dr4c4n_: so you have 2 NICs on the host, one is the management NIC and has an IP and a default route. The other NIC is only used by a bridge that you hook your VMs to. That bridge and the underlying NIC must not have any IP in the host.
<sdeziel> dr4c4n_: the NIC used for bridging should be hooked to a network that has a router/DHCP/etc to provide your guests full connectivity with the outside world
<dr4c4n_> that I understand, I can create an entry for em1 in /etc/network/interfaces to set up a static ip
<sdeziel> dr4c4n_: if em1 is your management NIC, yes
<dr4c4n_> and I understand how to physically connect the nic for bridging, but I don't know how to tell ubuntu how to a) create bridge and b) hook a vm to it
<dr4c4n_> <- I'm new at this, and understood using ubuntu on a desktop
<sdeziel> ah OK, sec
<dr4c4n_> I did try reading the kvm networking information on ubuntu page, and the libvirt page, but I'm confused by the different options they are putting up there
<dr4c4n_> and technically I have 4 nics on the host, but once I can configure these two, figuring out the other two should be a) easier, and b) I will have to do the management of the virtual network from openvswitches, which will change this initial setup
<dr4c4n_> as I will be creating a bunch of vlans to individual vms. eventually
<dr4c4n_> btw, thanks a lot for your help
<sdeziel> dr4c4n_: what I typically do is define my bridge in /etc/network/interfaces and make sure to NOT assign any IP. Something like this http://paste.ubuntu.com/15655200/
<dr4c4n_> ok, then when defining / creating your vms, you pass the bridge name to them correct?
<sdeziel> dr4c4n_: then I define a p2p network using virsh and this: http://paste.ubuntu.com/15655225/
<sdeziel> dr4c4n_: then I add a NIC to a VM like this: http://paste.ubuntu.com/15655249/
<dr4c4n_> and any vms with the nic defined in that way are connected to that network
<dr4c4n_> thanks
<sdeziel> dr4c4n_: yeah, any VM with an interface using the same source network will end up in the bridge
<dr4c4n_> ok. I get it now :) I really appreciate your help.
<sdeziel> dr4c4n_: you are welcome
<sdeziel> dr4c4n_: when you move to using VLANs, the concept will be the same except that you will bridge over VLAN devices instead of raw NICs
<patdk-wk> depending :)
<sdeziel> well, yeah, many variations are possible, especially with OVS involved
<patdk-wk> if your vm's are limited to 1 or just a few vlans, sure
<patdk-wk> if your vm's need most or all of them, and you don't need to worry about vlan security for the vm's
<patdk-wk> might be simpler to bridge the nic, vlans and all
<patdk-wk> but ya, not normally a *secure* method
<sdeziel> true
<patdk-wk> unless security is controlled else where, like anything on that box, is allowed all those vlans anyways
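Since the paste links above may not survive, an IP-less bridge stanza of the kind sdeziel describes might look like this in /etc/network/interfaces; the interface names (em2, br0) and the bridge-utils options are assumptions to adapt:

```
# Second NIC carries guest traffic only: bridge it, give the host no IP.
auto br0
iface br0 inet manual
    bridge_ports em2
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
```

VMs then attach via a libvirt `<interface type='bridge'>` with `<source bridge='br0'/>`, and as noted above the VLAN variant just bridges over VLAN devices instead of the raw NIC.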
<FManTropyx> I am trying rsync as daemon and it cannot access files that do not have o+r even though I started it as root
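A likely explanation for FManTropyx's symptom: the rsync daemon drops privileges per module, defaulting to the `nobody` user even when the daemon itself was started as root, so files without o+r become unreadable. A hedged rsyncd.conf sketch (module name and path are placeholders):

```
# /etc/rsyncd.conf
[data]
    path = /srv/data
    read only = yes
    # modules run as "nobody" by default even under a root daemon;
    # run this module as root to read o-r files
    uid = root
    gid = root
```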
<DirtyCajun> does anyone know what the commands are after autopart in kickstart to automatically accept changes?
<nacc> DirtyCajun: how do you mean?
<DirtyCajun> well when the automation process partitions the entire disk using autopart. there are 2 questions it asks you after it partitions "confirming" what you are doing.
<DirtyCajun> i dont want to have to click ok 2x. defeats the purpose lol
<DirtyCajun> im doing it in a vm to get you the exact pages
<DirtyCajun> actually there are 3: there is the possible "there are already mounted partitions, would you like to unmount?", there is "Finish and write partitions to disk", and there is "write? (yes)"
<FManTropyx> which FTP daemon should I employ?
<DirtyCajun> found it nacc i need to preseed the answers.
<nacc> DirtyCajun: yeah you need to preseed
<nacc> DirtyCajun: i think that's the limit of kickstart, presuming you're actually using a kickstart file
<DirtyCajun> i am. i just did preseed partman/confirm boolean true
<DirtyCajun> in my kickstart
<nacc> yeah
<nacc> DirtyCajun: at that point, you might be better off using a proper preseed, but it's a matter of preference :)
<DirtyCajun> true. less work to just stick the 2 lines in the already made kickstart file tho XD
<nacc> DirtyCajun: yep, it all depends on if you're deploying more than just ubuntu/debian, imo
<DirtyCajun> such blasphemy in #ubuntu-server
<DirtyCajun> ;)
<nacc> DirtyCajun: :)
<DirtyCajun> ok. i got preseed to do 2 of those 3. i cant find the partman/(some command) for already mounted partitions
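The confirmations DirtyCajun preseeded, plus the mounted-partitions one he was hunting for, commonly map to these debian-installer questions. The key names are assumptions to verify against your d-i version (kickstart's `preseed` command defaults the owner to d-i):

```
# Kickstart preseed lines for fully non-interactive partitioning
preseed partman-partitioning/confirm_write_new_label boolean true
preseed partman/choose_partition select finish
preseed partman/confirm boolean true
preseed partman/confirm_nooverwrite boolean true
# the "already mounted partitions" prompt:
preseed partman/unmount_active boolean true
```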
<sb_9> ext3 needs journal recovery.    can any one help on this..
<wsirccc> Question: connecting from the host to a guest installed out of the box with qemu - the first choice is ssh, right? And secondly, how is this possible in the so-called default network mode? For decent communication with the guest, one needs to switch to root the first time, is that right?
<sarnold> hunh?
<DirtyCajun> for anyone interested the preseed issue with a drive already existing it is a current bug! https://bugs.launchpad.net/ubuntu/+source/debian-installer/+bug/1347726
<ubottu> Launchpad bug 1347726 in debian-installer (Ubuntu) "ubuntu14.04 installation hang on "The installer has detected that the following disks have mounted partitions"" [Undecided,Confirmed]
<supNow> Hello all, without sounding like a complete noob I could use some help with a small issue. I setup ubuntu server a few years back for this company and since left, so nothing has been touched or updated in years. I came in today to run some updates on the websites I had running on it and to update the server itself but I'm having no luck. Whether I
<supNow> try sudo upgrade or the software manager (using gui), both return failed download attempts but the network connection is fine.
<patdk-wk> supNow, sounds like you have an unsupported version of ubuntu on it
<supNow> @patdk-wk thank you... do you know if ubuntu-server is a rolling release? Is there an easy way to upgrade to the latest?
<patdk-wk> it's not rolling
<patdk-wk> and no, I have no idea, since we have no idea what release you are using
<patdk-wk> cat /etc/issue could be a clue
<patdk-wk> or whatever is in your /etc/apt/sources.list file
<RoyK> supNow: you can do a do-release-upgrade -d
<RoyK> supNow: upgrade to the latest non-lts-release
<RoyK> supNow: or even 16.04 beta
<RoyK> supNow: I don't do that for servers
<RoyK> supNow: but then - you're the one running it
<qman__> !eol
<ubottu> End-Of-Life is the time when security updates and support for an Ubuntu release stop, see https://wiki.ubuntu.com/Releases for more information. Looking to upgrade from an EOL release? See https://help.ubuntu.com/community/EOLUpgrades
<qman__> the latter, there, will tell you how to upgrade an EOL release
<qman__> but you need to know which release you're running first
<qman__> lsb_release -a or cat /etc/issue
<qman__> from what you're saying you didn't set it up all that long ago, so you're probably not running an LTS release
<qman__> which I don't recommend
<RoyK> 8.04 was an LTS ;)
<qman__> yeah, but it's also 8 years old
<RoyK> yeah - old things are always better
<qman__> I actually just got rid of my last 8.04 a few weeks ago, it was a mail server and my postfix+dovecot configuration wouldn't survive the upgrade
<qman__> I had to rebuild from scratch and import the mailboxes
<patdk-wk> very strange
<patdk-wk> mine has survived upgrades from 8.04 to 14.04 so far just fine
<patdk-wk> well, if he was running non-lts, upgrades are much harder :)
<patdk-wk> I do have 5 production 16.04 I have been running for 2 months now
<DirtyCajun> 16.04 BETA *
<qman__> all my other servers survived upgrades just fine, it was just that postfix+dovecot vmail configuration that refused to work
<aotea> Could someone help me understand why when I ssh to my server, I get to skip typing password (so ssh key obviously works) but once I set "PasswordAuthentication no" my key is suddenly not working?
<sarnold> aotea: sshd authentication is a bit annoying .. there's a dozen different controls that interact. the easiest way for someone else to help debug is if you pastebin your sshd_config
<pmatulis> and the auth.log of the server
<coderanger> Is there a way to tell the Server installer to not use a UEFI boot setup?
<coderanger> and/or to not use a GPT?
<sarnold> easiest may be to set the bios to 'legacy' mode before booting
<coderanger> Okay, the installer is smart enough to not use EFI if it was loaded in legacy?
<patdk-lap> heh, kindof hard
<patdk-lap> if your disk is >2tb you need gpt
<sarnold> yeah, I think so; I ran the installer with my bios set to "dual" the other day and wound up with a system that wouldn't boot. so I changed it to uefi, re-ran the installer, and it noticed the difference and behaved differently
<patdk-lap> if the bios is set to uefi, it will setup uefi
<sarnold> I suspect going the other way is probably something it'd notice too
<patdk-lap> I have been unable to do uefi boot correctly
<patdk-lap> when also interacting with grub luks boot disk
<coderanger> Heh, big disks are not a risk factor for this. I think there is a way to go MBR-mode grub on a GPT but I just don't care. This is just to get Xen running on a babby machine.
<coderanger> Will try it out and report back, thanky :)
<patdk-lap> well, gpt will install a mbr compatible header
<patdk-lap> I wonder if it's boot compatible though, not sure I have tried that
<patdk-lap> or bothered to look into it even
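As a sanity check for the firmware-mode behaviour sarnold and patdk-lap describe, one way to tell how the currently running system (or the installer's shell) was booted:

```shell
# The efivars directory is only populated when the kernel was booted
# via UEFI; its absence indicates a legacy/CSM BIOS boot.
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS"
fi
```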
<aotea> sarnold: http://termbin.com/asw6 for the auth.log
<sarnold> heh my usual goal is to get the stupid thing booting with as little of my mental energy as possible :)
<patdk-lap> well, I want fully encryped disk
<patdk-lap> but I wanted uefi also
<sarnold> then you have a different problem than I do ;)
<patdk-lap> ya, interactions with HIPAA stuff
<sarnold> aotea: wow, the internet is a brutal place to put an sshd..
<patdk-lap> and I totally do not trust fully, SED disks
<aotea> sarnold: What do you mean? http://termbin.com/9yqj
<patdk-lap> did someone not install fail2ban or the likes?
<patdk-lap> what is the *problem*? :)
<sarnold> aotea: the number of brute-force password guessing attempts is astonishing
<aotea> sarnold: got fail2ban set up proper to hopefully help me on that front :P
<patdk-lap> I forgot to install fail2ban once
<aotea> but yeah, tons more than last I tried tinkering with a server
<patdk-lap> the T1 line would get routinely *overloaded*
<aotea> or well I 'hope' I got fail2ban set correctly at least; it says each IP should only get 3 tries but I see way more than that :S Figured setting an ssh key for login would be 'safer'
<patdk-lap> ya, but it won't cut down on bandwidth/log usage
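On the fail2ban point: `maxretry` is evaluated per `findtime` window, so attackers rotating source IPs, or retrying after `bantime` expires, still show up in the log even with a working jail. A jail.local sketch (the section is named `[ssh]` on older Ubuntu releases and `[sshd]` on newer ones; the values are examples):

```
# /etc/fail2ban/jail.local
# 3 failures within findtime (10 min) trigger a ban lasting bantime (1 h)
[sshd]
enabled  = yes
maxretry = 3
findtime = 600
bantime  = 3600
```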
<patdk-lap> aotea, maybe also UsePAM no
<sarnold> o_O
<sarnold> I think all kinds of things break if you set that to 'no'
<patdk-lap> maybe
<sarnold> do you have it set to 'no' on any of your linux boxes?
<patdk-lap> yes
<sarnold> interesting
<patdk-lap> but I don't have passwordauth disabled
<patdk-lap> and I do customize my pam stack
<patdk-lap> but thinks like, encfs, for homedir's and stuff like that, will break, yes
<sarnold> do you have this on any wily-or-newer systems? I've got a vague feeling that without pam_systemd_logind_lennartd or whatever you'll wind up in funny situations
<patdk-lap> no, I don't have it on any systemd though
<sarnold> e.g. 1564451
<coderanger> Yep, that worked. Booting the installer with EFI disabled in the BIOS it did everything old-school :)
<coderanger> Thanks all.
<sarnold> \o/
#ubuntu-server 2016-04-07
<JRWR> Trying to setup GlusterFS (3.5.2). Both boxes are connected over a tinc connection and confirmed working ok; glusterfs is having a hard time adding peers, it just hard locks and never completes, with the log line of 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) -- my command is gluster peer probe 10.10.xx.xx
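A workaround sometimes suggested for that exact log line is pinning the address family in glusterd's own volfile. This is a hedged sketch: the option name is taken from the error message itself and has not been verified against the 3.5.2 documentation.

```text
# /etc/glusterfs/glusterd.vol (excerpt) -- restart glusterd after editing
volume management
    type mgmt/glusterd
    option transport-type socket
    option transport.address-family inet
end-volume
```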
<hallyn> hm, rabbitmq-server won't install for me in xenial, either on a host or in a container. systemd job fails.
<hallyn> odd, no new upload since january
<hallyn> hm, no, in a container it starts fine with /etc/hosts fixed up.
<BusyElf> Ubuntu 14.10 Server : I have a problem with a programme shutting down at 02:30 -ish every morning. Nothing in the log files to suggest it was a system side order to shut down.. does anyone have any ideas?
<BusyElf> Hi :-)
<TJ-> !eol | BusyElf 14.10 is no longer supported
<ubottu> BusyElf 14.10 is no longer supported: End-Of-Life is the time when security updates and support for an Ubuntu release stop, see https://wiki.ubuntu.com/Releases for more information. Looking to upgrade from an EOL release? See https://help.ubuntu.com/community/EOLUpgrades
 * BusyElf facepalms
<TJ-> BusyElf: that's not going to cause your issue, but it sounds like an application-specific thing, unless you've a system cron job causing it
<BusyElf> we did have a cronjob in place to close it and restart it after backup. Now there's no backup we deleted the cronjob  but it still shuts down
<BusyElf> The server has since been restarted as well
<BusyElf> sorry My mistake, it's Ubuntu 14.04.4 LTS version
<halvors> Anyone knows why mod_http2.so is missing from the apache2 package in Ubuntu Server 16.04?
<halvors> I have the latest build.
<halvors> The http2.load and http2.conf is there. Just not the shared object.
<lukasa> halvors: The Ubuntu developers elected not to add mod_http2 to the LTS release packages
<halvors> Well, they did anyway.
<halvors> lukasa: It is present in the latest package.
<halvors> Just not the so file.
<halvors> So it seems to be there for the user.
<TJ-> halvors: http2 for both apache2 and nginx, as I recall, are deliberately not included due to the 'experimental' nature
<TJ-> halvors: as the package changelog says: "- Don't build experimental http2 module for LTS:"
<TJ-> halvors: oh, seems like it has been enabled in nginx, as of start of April. The original discussion was that neither package would have http2
<TJ-> who handles building the cloud-images for the cloud-images and partner-images archives?
<halvors> TJ-: There is apache2 config present for this: /etc/apache2/mods-available/http2.load
<halvors> Just that the .so file is missing.
<halvors> It should be if it's going to be disabled...
<TJ-> halvors: I agree; have you reported the bug?
<frickler> jamespage: yet another issue involving systemd task accounting https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1567381 , pretty significant for our deployment
<ubottu> Launchpad bug 1567381 in libvirt (Ubuntu) "qemu threads should not be counted in libvirt.service tasks limit" [Undecided,New]
<jamespage> cpaelzer, rharper: ^^ thats probably quite an important bug to resolve
<jamespage> thanks frickler - your pre-release feedback has been invaluable!
<frickler> getting some feedback on https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1564812 would also be nice, not sure whether you create the sudo config or that is taken from upstream
<ubottu> Launchpad bug 1564812 in nova (Ubuntu) "Disable sudo io logging for rootwrap" [Undecided,New]
<cpaelzer> jamespage: yeah that looks bad from just the description
<cpaelzer> jamespage: since it is about libvirt also highlighting hallyn and smb to bug 1567381
<ubottu> bug 1567381 in libvirt (Ubuntu) "qemu threads should not be counted in libvirt.service tasks limit" [Undecided,New] https://launchpad.net/bugs/1567381
<frickler> jamespage: well, thanks for being so supportive, much better than having to go through this bug-fixing for another 6 months after the release ;)
<ddellav> jamespage: manila is ready for review and push: lp:~ddellav/ubuntu/+source/manila. I rebased against ubuntu-server-dev
<smb> jamespage, hm, practically the qemu processes are part of the virtualisation
<smb> so from my thinking they should be accounted against it. Maybe the question is to adapt the task limit
<frickler> smb: I would rather vote that this is similar to sshd, there the processes of users logged in via ssh also should not be counted towards the ssh.service
<smb> frickler, but there VMs started by libvirt directly relate to the service. processes started by users logged in is one level deeper
<frickler> smb: hmm, maybe, but then the question is how to set a useful limit. I get 490 out of 512 tasks being used by starting 4 cirros instances, the 5th already fails
<smb> frickler, Yeah, probably a question of what this really is accounting for. I suspect it might rather be threads than tasks
<frickler> smb: "tasks" for systemd means threads, yes, the process count is 7 ( 4 x qemu, 2 x dnsmasq, 1 x libvirtd)
<smb> and the number for qemu then could vary a lot depending on the config of the VM (possibly one per disk/nic though I have not verified that). Not sure how to fit that best into the accounting. Either the limit must be rather high or ... hm I assume this is all depending on cgroups, so libvirt would need to place each qemu/VM into an independent one. Which might be a rather big change.
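One workaround consistent with the discussion above (an assumption on my part, not necessarily the bug's eventual fix) is raising the per-service thread limit with a systemd drop-in, then `systemctl daemon-reload` and a service restart:

```ini
# /etc/systemd/system/libvirt-bin.service.d/tasks.conf -- hedged sketch;
# the value is illustrative, sized for many qemu threads per VM
[Service]
TasksMax=8192
```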
<patdk-wk> sarnold, dunno what happened, but whatever was messed up with grub efi boot + luks, got fixed in beta
<rsevero> Hi. One of my Ubuntu 15.10 servers started this morning to present errors like: "Setting up thermald (1.4.3-5ubuntu1) ...
<rsevero> Installing new version of config file /etc/thermald/thermal-conf.xml ...
<rsevero> Error getting authority: Error initializing authority: Error calling StartServiceByName for org.freedesktop.PolicyKit1: Timeout was reached (g-io-error-quark, 24)
<rsevero> Failed to execute operation: Connection timed out
<rsevero> Error getting authority: Error initializing authority: Error calling StartServiceByName for org.freedesktop.PolicyKit1: GDBus.Error:org.freedesktop.DBus.Error.TimedOut: Failed to activate service 'org.freedesktop.PolicyKit1': timed out (g-dbus-error-quark, 20)
<rsevero> " during an "apt-get dist-upgrade"
<rsevero> It presented errors like the above during a boot I did just before the upgrade. Any idea what might be wrong? And how to fix?
<coreycb> ddellav, I pushed your barbican changes, and we can pick those up when we do the final release of barbican
<ddellav> coreycb: ok thanks
<jamespage> ddellav, doing manila now...
<jamespage> coreycb, do you want me to sweep up the 2.0.0 as well for manila whilst I'm doing ddellav's manila-share landing?
<coreycb> jamespage, sure!
<jamespage> coreycb, did all of the releases already come out?
<jamespage> 07:47 this morning?
<coreycb> jamespage, I believe so, that's what openstack-announce says
<jamespage> coreycb, okies...
<coreycb> jamespage, ugh, the new keystoneauth1 adds a new dep for python-betamax
<jamespage> can we pick a compromise version?
<coreycb> jamespage, we can stay where we are or I can patch out the tests, it's just for tests
<jamespage> coreycb, i'd prob go with patching out the tests..
<coreycb> jamespage, ok
<jamespage> coreycb, does it fix important things?
<coreycb> jamespage, there are some bug fixes, 2.4.0 gets us to the  upper constraint
<coreycb> I'll go with patching the test out for now, that's what upstream's been using
<jamespage> coreycb, okies manila uploaded
<coreycb> jamespage, nice
<ruben23>  hi guys i have installed apache2 on ubuntu server, when i run my site i get this -----> 404 Not Found. any idea guys how to troubleshoot this..?
<jvwjgames> Apache error logs
<jvwjgames> Logs are your best friend
<ruben23> http://pastebin.com/M5DzF2J3  <--------- nothing significant to error appearing
<teward> ruben23: 404 would suggest that what you're trying to access doesn't exist
<teward> ruben23: I'd set a more verbose error logging level and try again
<teward> or, nuke your browser cache first
<teward> then try again
<_max_> hi guys, quick question: does anybody here have experience with fai-project under ubuntu?
<BlessJah> I'm trying to setup PXE boot using iso as local repository. Debian-installer tries to contact my repo (/cdrom/dists exposed via http) and succeeds in finding dists/trusty, but then tries trusty-updates which is not available on cd and hangs when it receives 404. (I've figured out -server channel is better place to ask such question)
<RoyK> BlessJah: mirror the whole repo, not just what's on the cd
<BlessJah> RoyK: how large whole repo could be, and why is cdrom not enough?
<BlessJah> I'm actually trying to bootstrap local pxe/repo mirror server for environment behind firewall
<RoyK> got a proxy?
<BlessJah> not at the moment of bootstrapping
<RoyK> I don't know how large those repos are - I guess 10-20 gigs
<RoyK> https://help.ubuntu.com/community/Rsyncmirror
<RoyK> see that
<BlessJah> way too big
<RoyK> why?
<Karunamon> so here's a question that probably doesn't come up often - anyone familiar with setting up serial terminals?
<RoyK> BlessJah: that's not really a lot
 * RoyK setup a neat little server with 80TiB disk space yesterday
<RoyK> Karunamon: I am
<Karunamon> RoyK: I've got a wy50 hooked up to my machine and giving me a login session, but it's still really easy to screw it up so that a reset is required to become usable again
<BlessJah> RoyK: because it has to fit usb drive, 8GiB is max, >1GiB is good
<Karunamon> i'm almost certain that it's a charset thing.. I can use sh, but not bash, opening anything like mutt or vim completely trashes it
<RoyK> BlessJah: I mean on the repo, not on the final install
<Karunamon> that or some disagreement between my tty settings and the settings on the terminal
<RoyK> Karunamon: ah - not sure, sorry. try setting TERM=vt100
<RoyK> Karunamon: as in "export TERM=vt100"
<jamespage> coreycb, I have the theme refresh from design - lemme take horizon
<coreycb> jamespage, awesome
<Karunamon> nope, that doesn't appear to work
<BlessJah> I know that it's the repo that is 20GiB, but all files required for bootstrap have to fit a pendrive, so they'd fit the pendrive and be manageable to send over crappy internet
<RoyK> BlessJah: then I don't know
<_max_>  i try to build a ubuntu nfsroot from/with "trusty archive.ubuntu.com/ubuntu"  and i get a "not configured" error for udev, plymouth and initramfs-tools depending on udev, but if i manually try to install udev on 14.04 i have no problems
<_max_> any ideas why?
<BlessJah> I know my limits may sound strange, but they're pretty reasonable: to be able to perform PXE boot using just the iso image
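For the CD-only case, the standard debian-installer preseed keys that stop the installer from using network-based security/updates services might avoid the trusty-updates probe. This is a hedged sketch: the key names come from the d-i example preseed and have not been verified against this exact failure.

```text
# preseed.cfg fragment -- disable network apt services during install
d-i apt-setup/services-select multiselect
d-i apt-setup/security_host string
```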
<dr4c4n_> hi, I'm having trouble setting a static ip address on a box to create a management network that's running kvm
<dr4c4n_> do I need to create a bridge for that?
<sdeziel> dr4c4n_: what trouble are you having when setting the static IP?
<dr4c4n_> sdeziel: what I'm trying to do for now, is set a static ip on the kvm host machine, and a static ip on the one machine I want to use to manage it, but I'm getting destination host unreachable, both are connected to a switch, and ip link show, shows that I'm using the right interfaces on each
<dr4c4n_> and ifconfig shows that both machines have their ip addresses assigned
<sdeziel> dr4c4n_: try to start a ping stream from you host and use tcpdump on the destination to confirm your traffic reaches it
<coreycb> jamespage, keystonemiddleware 4.4.0-3 is in the xenial queue.  I've asked infinity if he can accept that before any core packages since it is needed at build time to generate the right configs.
<coreycb> jamespage, he just accepted middleware
<jamespage> coreycb, lol following along
<jvwjgames> I have an issue with curl
<jvwjgames> I am trying to use my server to try to send push notifications and it keeps on saying error
<ratrace> like what
<crazybluek> can't remember  why Iv
<jvwjgames> Are you talking to me
<ratrace> jvwjgames: yup. error like what?
<crazybluek> I've set up mine ubuntu server with localhost 127.0.0.1 and pc_box_name 127.0.1.1
<jvwjgames> http://picpaste.com/pics/IMAG0456-58PGayJB.1460049132.jpg
<sarnold> crazybluek: that's normal; 127.x.x.x. goes via loopback and never leaves the machine
<ratrace> jvwjgames: -d flag sets the post body string. you're using it wrong, it should be an "urlencoded" string
<sarnold> jvwjgames: easy, call google and ask them to give you an API key for their gcm-http service
<crazybluek> sarnold ok
<sarnold> crazybluek: some applications get cranky if they forward-resolve an address then reverse-resolve the address and get something different
<sarnold> crazybluek: this configuration lets the computer be used for one set of addresses, localhost for another set, and the two won't stomp on each other
<crazybluek> ok
<crazybluek> trying to figure out if there are some misconfiguration on  pc/ubuntu
<ratrace> jvwjgames: -d "to=/topics/foo-bar" -d "data=message"        or as single string -d "to=/topics/foo-bar&data=message"  however, as I said, it must be urlencoded.
<jvwjgames> Ok
<jvwjgames> Maybe this will help
<jvwjgames> http://goo.gl/w7Yrg7
<ratrace> jvwjgames: help what?
<jvwjgames> Cause I am trying to URL encode that code
<jvwjgames> Where it says http post request
<ratrace> jvwjgames: for starters, curl is not really a good tool for structured data. But if you must, I suppose using application/json content type is easier because you don't have to urlencode.
<jvwjgames> Then how do I use it then
<ratrace> use a programming language that understands structured data like perl or python.
<jvwjgames> Ok
<jvwjgames> Thanks
<jvwjgames> So just slap the code into a .py file and execute it
<jvwjgames> Right
<ratrace> and if you wish to stick to curl, you can try the "HTTP POST Request" example from that second link, using --header "Content-Type: application/json" and -X POST and  -d '{"to": "/topics/foo-bar", "data" ....   '
<ratrace> put that whole json structure into -d string
<ratrace> you can also put it in a file and use -d @filename for curl
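The `-d @filename` route ratrace describes, as a runnable sketch. The endpoint is the one from the log; the Authorization header, API key, and payload contents are placeholders, and the actual network call is left commented out.

```shell
# Build the JSON payload in a file, sanity-check it, then (commented out
# here) POST it with curl exactly as described above.
cat > payload.json <<'EOF'
{"to": "/topics/foo-bar", "data": {"message": "hello"}}
EOF
python3 -m json.tool payload.json   # fails loudly if the JSON is malformed
# curl --header "Content-Type: application/json" \
#      --header "Authorization: key=PLACEHOLDER_API_KEY" \
#      -X POST -d @payload.json https://gcm-http.googleapis.com/gcm/send
```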
<ratrace> jvwjgames: there's a python client for GCM so you don't have to reinvent the wheel: https://github.com/geeknam/python-gcm
<ratrace> surely a programming language of your choice might have a lib/binding for that API
<jvwjgames> ratrace thanks for your help
<jvwjgames> Returned http 200 Ok
<jvwjgames> And message arrives at my phone
<ratrace> jvwjgames: yw :)
<jvwjgames> I ended up adding the json code in a .py and just having curl
<jvwjgames> https://gcm-http.googleapis.com/gcm/send @GCM-sender.py
<jvwjgames> And that worked
<jvwjgames> *-d @GCM-sender.py*
<jvwjgames> And now to automate that with cron jobs which I can do
<jvwjgames> Again thanks for your help
<ratrace> jvwjgames: you shouldn't. even though extensions don't define mimetype, you shouldn't use .py if it's not python code, and it isn't, it's JSON.
<wsirccc> Who feels like some small talk about package building with me?
<jvwjgames> So even though it works it's not good practice
<ratrace> jvwjgames: exactly.
<jvwjgames> So use .sh
<wsirccc> pardon: how likes smalltalk about packaging in broader sense with me?
<sarnold> wsirccc: "who" :)
<wsirccc> you?
<wsirccc> (;
<sarnold> wsirccc: it depends on the specific question; I only do simple things..
<wsirccc> sarnold: Are you interested?
<sarnold> sure
<wsirccc> Well in my case we speak of packaging a program, which must be developed first, by and by.
<wsirccc> Got me?
<wsirccc> it is about runnable qemu image on finger snip.
<sarnold> "finger snip"?
<wsirccc> at the finger tips
<sarnold> aha :) so you want to package something like a qcow2 image and a command line that launches the VM?
<wsirccc> yes
<wsirccc> thats part of it.
<wsirccc> And now I ask the packager for experience in user program specifics.
<wsirccc> That describes the situation.
<wsirccc> launch? I want it to be "fully scriptable".
<wsirccc> controllable over ssh out of box as a user interface
<wsirccc> So what does it take?
<wsirccc> Boss?
<wsirccc> (;
<wsirccc> sarnold
<wsirccc> sarnold:?
<wsirccc> Is it possible and how much time one should invest for it?
<sarnold> wsirccc: it depends how far you want to go with it. you could do something similar via lxd quite quickly, but containers aren't VMs. if you -want- it to be qemu, then you've got a lot more work.
<wsirccc> "got a lot more work" about which steps works you speaking
<wsirccc> ?
<nacc> and by "do something via lxd", sarnold means lxd already does this
<sarnold> wsirccc: lxd is easy, you pull down an image, configure it as you need, then publish the modified image
<zzxc> Hey I have an apache issue. I have multiple site-enabled. Currently I have a conf for each of foo.com/a foo.com/b foo.com/c. I'm wanting to handle multiple another domain url for bar.com/a. So I went with virtual_host *:443 and servername foo.bar with a location /a in one conf, /b in another /c in a third,
<zzxc> then another file that has the same virtual host with the servername of bar.com. bar works without any issue, but only the first of the foo.bar will load. All foo.com confs are disregarded.
<zzxc> unload adv_windowlist.pl
<zzxc> err sorry about that.
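What zzxc describes matches Apache's name-based vhost selection: for a given address:port, Apache picks one VirtualHost per ServerName, so duplicate `<VirtualHost *:443>` blocks with the same ServerName are ignored after the first. A hedged sketch (not zzxc's actual config; all paths and names are illustrative) merging the Location blocks into a single vhost:

```apache
<VirtualHost *:443>
    ServerName foo.com
    <Location "/a">
        # ... directives for foo.com/a ...
    </Location>
    <Location "/b">
        # ... directives for foo.com/b ...
    </Location>
    <Location "/c">
        # ... directives for foo.com/c ...
    </Location>
</VirtualHost>

<VirtualHost *:443>
    ServerName bar.com
    <Location "/a">
        # ... directives for bar.com/a ...
    </Location>
</VirtualHost>
```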
<wsirccc> sarnold
<wsirccc> sarnold
<wsirccc> sarnold: thanks, you sent me to another project. I understand from it that it is another tool like uvtools that downloads images from the test server, patches them, starts them, and controls them over ssh.
<wsirccc> Is it that perfect?
<sarnold> wsirccc: I'm sorry to say that I never got the hang of uvtools
<sarnold> wsirccc: uvtools uses simplestreams to distribute information about images and that's very nearly a black-box -- it's impossible to learn more about it, it's not very well documented, and the tools that exist have no options for debugging..
<wsirccc> Well, I am definitely searching for something more: the automation of my physical machines shall be tested as well.
<wsirccc> I just recalled uvtools because someone else mentioned it before here in the chat.
<wsirccc> sarnold: "got a lot more work", which one ?
<sarnold> you certainly could distribute raw images and virsh command line scripts to import the VM but assuming people have a working libvirt install may be asking a lot too
<wsirccc> sarnold: Do you know anybody, who does it?
<sarnold> nothing that does everything you want
<wsirccc> I could distribute the creation/preseeding scripts as well, as templates, voilà
<wsirccc> sarnold: can one assume that there might be really no one who offers such ready-to-use qcow2 machines?
<sarnold> wsirccc: you could either package a shell script that starts kvm with all the needed parameters
<sarnold> wsirccc: or you could package a shell script that calls virsh with the needed parameters
<wsirccc> why not qemu?
<sarnold> wsirccc: but different users will have different networking needs, different input devices they want passed through, etc
<sarnold> kvm/qemu, same thing :)
<wsirccc> well my package is qemu-buro. It is for that use and type of user; not much problem in designing that interface. A one-person office (Büro).
<wsirccc> sarnold
<wsirccc> sarnold
<wsirccc> sarnold
<wsirccc> user is set to create its own image templating
<wsirccc> sarnold:?
<teward> give him a minute to read lol
<wsirccc> lol about nothing?
<teward> he's busy too you know :P
<teward> we all are :P
<wsirccc> teward: Thats no reason to laugh loud. Especially if you agree on the sake.
<wsirccc> So one can hold fast: I have been misunderstood: my tool should imply that the user is set up to create their own image templates
<wsirccc> could distribute raw images and virsh command line scripts to import the VM but assuming people have a working libvirt install may be asking a lot too
<wsirccc> sarnold
<wsirccc> you certainly could distribute raw images and virsh command line scripts to import the VM but assuming people have a working libvirt install may be asking a lot too
<wsirccc> " a working libvirt install "
<wsirccc> pardon
<wsirccc> what is that?
<wsirccc> sarnold:?
<lordievader> !patience | wsirccc
<ubottu> wsirccc: Don't feel ignored and repeat your question quickly; if nobody knows your answer, nobody will answer you. While you wait, try searching https://help.ubuntu.com or http://ubuntuforums.org or http://askubuntu.com/
<wsirccc> ok
<sarnold> wsirccc: take a look at the virsh manpage for the full range of options available
<wsirccc> sarnold: did that some time ago, what should I learn from it?
<Blueking> just wonder about dhcp lease time  dhcp request, expire, offer, nak, ack   My ethernet interface has mac address  00.25.90.aa.b3.b4  BUT on dhcp logs it says its mac is 00:25:90.aa.bf.97    why is it different from my ethernet interface ?
<Blueking> sarnold u've seen something like that ?
<sarnold> Blueking: no, sorry, doesn't make any sense to me
<sarnold> wsirccc: virsh shows how to import vms from files into libvirt
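For illustration, a minimal domain definition that `virsh define demo.xml && virsh start demo` could import. This is a hedged sketch: every name, path, and size here is a placeholder, and a real deployment would add network, console, and graphics devices.

```xml
<!-- demo.xml -- minimal libvirt domain sketch (placeholders throughout) -->
<domain type='kvm'>
  <name>demo</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```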
<Blueking> trying to figure out why renew of ip lease fails
<Blueking> patdk-lap alive ?
 * Blueking pinching on patdk-lap's right eyeball,,, could it be fake ?
<patdk-wk> heh?
<patdk-wk> crappy dhcp server
<Blueking> think it's dhclient giving wrong mac address somehow
<patdk-wk> how is that possible?
<patdk-wk> dhclient doesn't give a mac
<Blueking> it doesn't ?
<patdk-wk> nope
<Blueking> tcpdump + dhcp.pcap file shows both wrong mac address
<patdk-wk> you do know your osi stack right?
<Blueking> when the dhcp server was working as expected 3 times in a row 2 days ago, it used the RIGHT mac address
<patdk-wk> ya, not helpful
<patdk-wk> as I cannot see it
<Blueking> I have it in logs
<Blueking> trying to understand where this wrong mac coming from :/
<Blueking> can I pm u ?
<patdk-wk> no
<Blueking> just show you the two mac addresses  nothing else
<jrwren> a mac comes from a NIC
<jrwren> dhcp server assigns IP to a MAC (poorly worded)
<patdk-wk> it cannot cross l2 segments
<Blueking> right mac  00:25:90:aa:b3:b4    wrong mac   00:25:90:aa:bf:97
<nacc> Blueking: so you have a DHCP server, and you have your client, and your client has MAC 00.25.90.aa.b3.b4, but your server says it gave a lease to 90.aa.bf.97 ?
<nacc> and no mention of b3.b4?
<Blueking> mine ISP's dhcp server
<Blueking> yes
<patdk-wk> isp's do not do l2 segments
<patdk-wk> they use dhcp relays
<patdk-wk> the dhcp relay is likely having issues
<sarnold> same mac vendor is funny though
<patdk-wk> or maybe the isp supplied hardware, dsl/cable/... modem
<sarnold> ahhh
<sarnold> then they're all going to be arris or whatever :)
 * patdk-wk is just happy grub got fixed
<patdk-wk> and I can boot encrypted with uefi
<sarnold> \o/
<Blueking> toshiba  dhcp server ?
<patdk-wk> Blueking, you aren't doing something odd are you, like using bonding/teaming/....
<patdk-wk> or have some kind of shared nic (ipmi/ilo/....)
<Blueking> dhcp server  on ip 81.167.184.1
<Blueking> no, the other nic is busy on the local lan
<Blueking> that has ip 10.25.0.1
<patdk-wk> nothing you can locate on your network has that other mac?
<patdk-wk> or one close to it
<Blueking> just wonder why bf:97 instead of b3:b4
<Blueking> the other nic b3:b5
<nacc> Blueking: if this was your ISP providing a DHCP lease, what logs were you referring to above? do you mean DHCPREQUEST or so in /var/log/syslog?
<Blueking> tcpdump.cap + tshark  udp port range 67-68
<Blueking> both shows same wrong mac
<nacc> Blueking: and i guess that goes back to patdk-wk's question -- is it possible another device on your network has that mac?
<patdk-wk> what does both mean?
<patdk-wk> what where in those does it show it?
<Blueking> 2 days ago  there were 3  correct lease renew and IT used the correct mac address
<Blueking> I only have one supermicro device   this mobo
 * patdk-wk gives up
<Blueking> wait I'll show pcap file
<jrwren> i wonder if it is something silly like you rebooted and your ethernet devices swapped for some reason.
<Blueking> em1 = 00.25.90.d4.c1.c6 and em2 00.25.90.d4.c1.c7
<Blueking> em1 & em2 thernet devices
<Blueking> dhcp server  think it's 00.25.90.d4.bf.97
<patdk-wk> doubtful
<Blueking> mine logs show it
<patdk-wk> the dhcp server is not on the same l2 segment as you
<patdk-wk> so there is no way you can see it
<patdk-wk> you can see the dhcprelay/gateway that responds to you
<wsirccc> sarnold: so a "libvirt install": what does it do for me that I would otherwise have to do manually?
<Blueking> Internet Protocol Version 4, Src: 81.167.184.1, Dst: 255.255.255.255
<wsirccc> sarnold:having read: http://wiki.libvirt.org/page/FAQ#What_is_some_of_the_major_functionality_provided_by_libvirt.3F
<Blueking> Ethernet II, Src: SuperMic_d4:bf:97 (00:25:90:d4:bf:97), Dst: Broadcast (ff:ff:ff:ff:ff:ff)
<Blueking> wireshark  on pcap file
<Blueking> the correct one
<Blueking> Ethernet II, Src: SuperMic_d4:c1:c6 (00:25:90:d4:c1:c6), Dst: ToshibaT_01:00:01 (00:06:00:01:00:01)
<jrwren> Blueking: pastebin `ip addr show` output and `ip route show` output
<jrwren> Blueking: and pastebin `ip neigh show` output for good measure
<Blueking> http://paste.ubuntu.com/15678338/
<Blueking> http://paste.ubuntu.com/15678342/
<Blueking> http://paste.ubuntu.com/15678354/
<patdk-wk> in no way whatsoever is that mac address from the dhcp server
<Blueking> I have had problem with dhcp renew of ip/lease
<Blueking> for 3 weeks
<supNow> just upgraded ubuntu server to 14.04 and I'm getting a grub_term_highlight_color not found error. google search suggests it can't find grub but I havn't found any real information on how to fix. Does anyone know a quick fix to this issue?
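The usual advice for that symptom is that the GRUB core image on disk no longer matches the modules in /boot after the upgrade, fixed by reinstalling GRUB. A hedged sketch that only prints the plan; the boot disk path is a placeholder and must be the disk the BIOS actually boots from.

```shell
BOOT_DISK=/dev/sda                 # placeholder: the disk GRUB boots from
echo "would run: grub-install $BOOT_DISK && update-grub"
# sudo grub-install "$BOOT_DISK"   # rewrites the core image to match /boot modules
# sudo update-grub                 # regenerates grub.cfg
```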
<Blueking> jrwren what u think ?
<jrwren> Blueking: I think it is very strange.
<patdk-wk> we still dunno if that mac is somewhere else on your network
<Blueking> havn't noticed the mac address before 30 minute ago
<patdk-wk> or even in another home
<Blueking> dhcp.pcap showed wrong mac when dhcp renew lease fails... when dhcp server  finally 3 times in a row two days ago in 4-5 hours worked as it should do it used the right mac address
<Blueking> I can upload  two pcap files that u can study in wireshark
<wsirccc> sarnold:https://linuxcontainers.org/lxd/try-it/ this is amazing. But I still do not feel very secure that I am on the right way..
<wsirccc> ..
<jrwren> can you kill your running dhclient and run `dhclient -1 -v -pf /run/dhclient.em1.pid -lf /var/lib/dhcp/dhclient.em1.leases em1` ?
<Blueking> ok
<jrwren> Blueking: err, add a -d to that.
<DirtyCajun> if i had a managed switch and i vlan separate vlan 1 to port1 (my server)  and port 2 (the modem).. then vlan2 port 1 (my server) and ports 3-8 .. i could dhcp with only 1 nic that way right?
<Blueking> ok problem solved :)
<Blueking> ok problem solved jrwren and patdk-lap patdk-wk :)
<patdk-wk> till next week?
<patdk-wk> what do you believe solved it?
<NwS> heya guys, any suggestions for a full clone/backup of an ubuntu server 14.04LTS without a restart?
<TJ-> NwS: image the block devices, or clone the installed packages and configs?
<NwS> I want to make a full backup copy just in case something happens and restore everything later
<NwS> And yeah have never done this before so -.-"
<TJ-> !backups | NwS
<ubottu> NwS: There are many ways to back your system up. Here's a few: https://help.ubuntu.com/community/BackupYourSystem , https://help.ubuntu.com/community/DuplicityBackupHowto , https://wiki.ubuntu.com/HomeUserBackup , https://help.ubuntu.com/community/MondoMindi - See also !sbackup and !cloning
<NwS> ty for the info TJ-
<Blueking> patdk-wk: ipmi   management lan  had same IP as ethernet inerface em1    ipmi have mac address 00.25.90.d4.bf.97
<Blueking> TJ-  fixed problem
<Blueking> TJ-  it was ipmi/management lan that was reason for net drop outs... dhcp did see ipmi's mac address
<TJ-> Blueking: so I hear! So the problem was due to the server-board
<Blueking> yes
<Blueking> it's good to have it fixed :)
<Blueking> TJ-  patdk-wk didn't believe in wrong mac reason :P
 * Blueking pats patdk-wk on shoulder
<TJ-> Blueking: so the IPMI was getting between the ISP and the OS?
<Blueking> supermicro   for some years  had ipmi and ethernet nic on same physical connector ?
<Blueking> but  not anymore says RoyK
<Blueking> bedtime  hopefully there will be none disconnects for long time
<Blueking> we'll see in some days :)
<sarnold> Blueking: if you've only got one ethernet wire to your supermicro, it'll run the ipmi/bmc _and_ regular NIC functionality on that one NIC
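The shared-NIC situation above can be spotted by comparing the interface's own MAC with the source MAC seen on the wire. The two addresses below are the ones from this log; the comparison itself is a trivial sketch (in practice you'd read them from `ip link show` and the pcap rather than hard-coding them).

```shell
nic_mac="00:25:90:d4:c1:c6"    # em1's MAC, per ip link show em1
seen_mac="00:25:90:d4:bf:97"   # source MAC in the DHCP capture
if [ "$nic_mac" != "$seen_mac" ]; then
  echo "MAC mismatch: frames are leaving another port (e.g. a shared IPMI/BMC NIC)"
fi
```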
<Blueking> ok
<Blueking> think I have to swap nic's  so ipmi are only visible on local network
<Blueking> just tested arping -i em1  ipmi's IP
<Blueking> not visible on em2
<sarnold> nice, that's probably a good idea
<sarnold> i'm not sure i'm going to bother at home, chances are good nsa / mossad are already embedded (cool dudes i'm sure)
<rbasak> hallyn: I've filed bug 1567696 and bug 1567695
<ubottu> bug 1567696 in mysql-5.7 (Ubuntu) "mysql-server-5.7.postinst is influenced by ~/.my.cnf, causing installation hangs" [Critical,Triaged] https://launchpad.net/bugs/1567696
<ubottu> bug 1567695 in mysql-5.7 (Ubuntu) "mysql-server-5.7.postinst is influenced by $HOME, causing installation hangs" [Critical,Triaged] https://launchpad.net/bugs/1567695
#ubuntu-server 2016-04-08
<hallyn> rbasak: great, thanks
<patdk-lap> heh, I always set dedicated ipmi interface, set static ipmi ip
<patdk-lap> normally fixes up those issues
<sarnold> patdk-lap: even for the home lab?
<patdk-lap> especially
<patdk-lap> I'm running 7 vlans currently here at home
<patdk-lap> none of that is a lab though
<sarnold> none of my switches can do vlan, heh
<sarnold> I've never learned enough about vlans to see it as a priority, so i've never spent the extra money
<patdk-lap> I don't own a switch that doesn't do vlans
<patdk-lap> swapping the current one out soon, for a poe model
<patdk-lap> crap, this unbound issue I have, is still present as a bug :(
<frickler> jamespage: coreycb: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1567811
<ubottu> Launchpad bug 1567811 in nova (Ubuntu) "nova-compute should depend on libvirt-bin.service instead of libvirtd.service " [Undecided,New]
<jamespage> frickler, oh good spot!
<smb> jamespage, if possible maybe depend loosely on both variants. Would be nice to have something like "provides" aliases for services. But I cannot find anything
<jamespage> smb, we only have libvirt-bin in Ubuntu right?
<smb> jamespage, right now, but it might change in future when we get rid of more delta
<jamespage> smb, sure - we can switch when we do that :-)
<smb> jamespage, of course :) but still a pain to always think of the implications if say backporting into cloud-archive
<smb> for sysvinit scripts it was at least possible to say this provides libvirt-bin and libvirtd... but rather useless nowadays
<frickler> jamespage: smb: if you want to add Alias=libvirtd.service to libvirt-bin.service, that would also work, just tested that variant
<jamespage> frickler, that's a nice backwards compat feature
<smb> Yeah... wondering why I did not see that in the systemd.service doc
<smb> jamespage, would you know whether hallyn has some other things queued or shall I wait till he comes around to sync up
<jamespage> smb,  not sure tbh
<jamespage> smb, I'd wait for hallyn
<smb> jamespage, ack
<jamespage> smb, I'd also say that this is not critical for xenial
<jamespage> hmm - well not from openstack's perspective
<jamespage> I have no idea about other systemd units that might be dependening on libvirtd
<smb> jamespage, hard to say. just if there is an "Alias" option that should not break anything existing while resolving anything we do not know about
<jamespage> smb, yah so maybe it does make sense for compat with direct syncs from debian
<jamespage> smb, shall I raise a libvirt task for that bug as well?
<smb> jamespage, That is what we usually tried to maintain (for build dependencies and in sysvinit; just did not know about alias for systemd). Yeah, I think it makes sense. Probably can be lower prio as you fix nova
<bvi> anyone experience with ServeRAID M5120 in jbod mode?
<bvi> installer doesn't even boot when this hba is in the server (lenovo x3650m5)
<smb> frickler, Just to confirm: under which section did you add the alias? Install?
<frickler> smb: yes, then disable/enable the service once in order to create the symlink
<smb> frickler, ok, thanks
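For reference, the Alias= approach frickler describes can be sketched as below. This works on a local copy of a hypothetical, minimal unit file so it can be tried anywhere; on a real host you would make the same edit to the installed libvirt-bin.service (or a drop-in) and then disable/enable the service once so systemd creates the libvirtd.service symlink, as frickler notes.

```shell
# Create a stand-in unit file (minimal, for demonstration only):
cat > libvirt-bin.service <<'EOF'
[Unit]
Description=stand-in unit for demonstrating Alias=

[Service]
ExecStart=/usr/sbin/libvirtd

[Install]
WantedBy=multi-user.target
EOF
# Add the alias under the [Install] section -- Alias= only takes
# effect when the unit is (re-)enabled:
sed -i '/^\[Install\]$/a Alias=libvirtd.service' libvirt-bin.service
grep -A2 '^\[Install\]' libvirt-bin.service
# On the real host, afterwards:
#   sudo systemctl disable libvirt-bin && sudo systemctl enable libvirt-bin
```

After the enable, anything that does `systemctl start libvirtd` resolves to libvirt-bin.service via the symlink.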
<bvi> err M5210 .... damn typo
<frickler> would it make sense to change to upstream connection of https://launchpad.net/ubuntu/xenial/+source/nova to mitaka instead of essex?
<frickler> jamespage: https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1567881
<ubottu> Launchpad bug 1567881 in keystone (Ubuntu) "keystone postinst should not call keystone-manage pki_setup" [Undecided,New]
<BlessJah> ubuntu seems to completely ignore preseed file I provide
<BlessJah> In syslog I can see the file has been loaded properly, but I cannot get any directive working, it still asks me questions that have been answered
<frickler> jamespage: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1567935 is my final one for today, but https://bugs.launchpad.net/neutron/+bug/1567923 might also be interesting for you
<ubottu> Launchpad bug 1567935 in nova (Ubuntu) "nova-compute-libvirt should not depend on open-iscsi" [Undecided,New]
<ubottu> Launchpad bug 1567923 in neutron "Neutron advertises too high MTU for vxlan" [Undecided,In progress]
<supNow> just upgraded ubuntu server to 14.04 and I'm getting a grub_term_highlight_color not found error. google search suggests it can't find grub but I haven't found any real information on how to fix.
<supNow> From reading it's saying that it can't find the bootloader. I have booted into a live session from a usb drive. Is there an easy way to fix it from there? Or should I try to boot into ubuntu-server using grub from the usb drive?
<hateball> !fixgrub | supNow
<ubottu> supNow: GRUB2 is the default Ubuntu boot manager. Lost GRUB after installing Windows? See https://help.ubuntu.com/community/RestoreGrub - For more information and troubleshooting for GRUB2 please refer to https://help.ubuntu.com/community/Grub2
<supNow> @hateball it was actually ubuntu-server upgrade. There is no windows. Will that same thing work though that you linked?
<hateball> supNow: Yes, it just tells you how to reinstall grub
<supNow> ok great, thanks
<coreycb> jamespage, we're not going to be able to sync magnum at this point in the cycle as it needs a big bump to python-docker
<hallyn> smb: queued up for libvirt? i got nothing
<ddellav> coreycb swift is ready for review/push lp:~ddellav/ubuntu/+source/swift
<coreycb> ddellav, ok
<jamespage> coreycb, 1.5.0 not good enough?
<jamespage> coreycb, we may want the new version anyway - jgrimm - thoughts ^^ ?
<coreycb> jamespage, not according to requirements.txt
<smb> hallyn, k, I sent you a patch as well as have pushed a branch to lpgit. don't think its urgent now that nova got fixed
<jgrimm> jamespage,  syncing magnum?
<jgrimm> or bumping up python-docker?
<coreycb> jgrimm, python-docker
<jamespage> jgrimm, no, syncing python-docker - we're right up to date with docker so we may actually want that right?
<jgrimm> yes, we are at latest upstream
<jgrimm> not sure if python-docker is that current tho?
<jgrimm> docker.io-1.10.3 is what we carry.
<coreycb> jgrimm, it's a bit behind, we're currently at 1.5.0 and it looks like they're up to 1.8.0 upstream
<hallyn> smb: you didn't push to the xenial branch?
<smb> hallyn, no, though I could have I guess. I used a smb/xenial one
<jgrimm> coreycb, indeed 1.5.0 sounds quite long in the tooth.   I'm not seeing anything in their docs that gives me a warm fuzzy about what versions of docker their client supports?
<jgrimm> coreycb, but i'd think you'd be better off moving up, though testing needed
<kaffien> i ran a do-release-upgrade and ended up on 16.04.  Now our veeam setup cannot contact the NFS shares.  Is there something i need to configure on my ubuntu server to allow that again?
<kaffien> veeam cannot connect to the SSH server i have running either
<kaffien> but i can via a ssh client  on that same workstation.   nmap shows all the appropriate ports are open.
<kaffien> nm wrong channel
<coreycb> jamespage, ddellav: all of the final mitaka release packages are uploaded to xenial for mitaka now (sans the magnum sync)
<jgrimm> \o/
<ppetraki> if I use an lxd profile to limit the number of cpus to say 2. /proc/cpuinfo is showing two cpus, but lscpu is showing all the host cpus. Is that the intended behavior? http://pastebin.ubuntu.com/15690427/
<patdk-wk> yes
<patdk-wk> the kernel restricts what cpu's things can run on
<patdk-wk> but it doesn't go around limiting the view of the physical hardware
<ppetraki> ah ok
<ppetraki> thanks
<sdeziel> ppetraki: /proc/cpuinfo of the container is a lxcfs mount showing just the CPUs that you have enabled for that host
<nacc> ppetraki: lscpu is looking in /sys/ in contrast, iirc
<tiblock> Hi. I have ubuntu 14.04 server with 20GB on / and some log files ate everything and i was not able to launch anything, so i removed about 18gb of logs but `df -h` says only 300mb free, `ncdu` says 2.1gb used. Tried to do `tune2fs -m 5 /dev/...` but it's not helping. How to get the space back?
<tiblock> Oh, it just jumped to 9.8gb free... Okay. Problem kinda solved itself. Still missing about half but 9.8gb its okay.
<nacc> tiblock: might need to issue a `sync;sync` if you deleted a ton of files
<nacc> tiblock: it would probably be better to figure out what log exploded
<lordievader> tiblock: Restart your loging service, it probably still has a link open to those files, thus they are still on disk.
<tiblock> nacc, after `sync;sync` it still says 8.8gb used and i found the log, my own application made it, forgot to add the folder to logrotate. I deleted those logs.
<nacc> tiblock: ah ok, yeah i'd check lsof or as lordievader said
<nacc> tiblock: in case it's being held open
<nacc> `du -h` at the appropriate level might give you a better idea of what is consuming space rather than `df`
<tiblock> Ah i see, `lsof` shows 7115024813 byte log file opened and says `(deleted)`. Okay, there is my space. Thank you very much!
<tiblock> Awesome! After restart of that application i got all my space back.
<tiblock> 2.2gb used
<nacc> tiblock: nice, yeah, it was presumably still ref'd so couldn't be fully removed from disk (just unlinked from the fs effectively)
<lordievader> You can do lovely tricks with this...
<nacc> lordievader: "lovely" :)
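The "(deleted) but still open" situation tiblock hit can be reproduced with a throwaway file and a tail -f process standing in for the real daemon. The /proc trick at the end is one of the "lovely tricks": it reclaims the space without restarting the process (fd number is discovered, not assumed).

```shell
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=10 2>/dev/null
tail -f "$f" >/dev/null &   # keep an fd open on the file
pid=$!
sleep 1                     # give tail a moment to open it
rm "$f"                     # unlinked, but the space is NOT freed yet
# Find the fd whose target shows "(deleted)", as lsof would:
fd=$(for l in /proc/$pid/fd/*; do
       readlink "$l" | grep -q 'deleted' && { basename "$l"; break; }
     done)
stat -Lc 'still holding %s bytes' /proc/$pid/fd/$fd
# Truncate through /proc to give the space back without a restart:
: > /proc/$pid/fd/$fd
stat -Lc 'now %s bytes' /proc/$pid/fd/$fd
kill $pid
```

Restarting the offending service (as tiblock did) achieves the same thing by closing the last reference.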
<sdeziel> nacc: https://bugs.launchpad.net/ubuntu/+source/unbound/+bug/1556308 was approved for upload when you get a chance
<ubottu> Launchpad bug 1556308 in unbound (Ubuntu) "[FFe] Please merge unbound 1.5.8-1 from Debian unstable" [Wishlist,Triaged]
<nacc> sdeziel: ack, i don't have upload rights :)
<nacc> sdeziel: hence sponsors is subscribed
<sdeziel> oh didn't know. Thanks anyways
<Blueking> patdk-wk  everything works fine now :)
<patdk-wk> heh, I just filed an unbound bug last night, that just annoys me
<sdeziel> patdk-wk: yeah, noticed this
<patdk-wk> stupid unbound segfaulting on libcrypto :(
<nacc> sdeziel: np, i'll see if i can find somebody
<sdeziel> patdk-wk: would you mind opening a bug in Debian re /usr/sbin being in PATH? The Ubuntu delta should be kept minimal
<patdk-wk> figured cron restart would be a temp fix, nope :(
<patdk-wk> ah, I'll look
<sdeziel> patdk-wk: latest Debian and soon Ubuntu will use the package helper which now calls unbound-anchor with a full path. Only unbound-checkconf will need fixing, making it a one-liner
<ppetraki> before first install lxd, I configured my own bridge, bridge0 (via nm) and made that the default bridge. Then I created a new profile called scaleout, which when used makes my containers use lxcbr0, which doesn't even exist. Even claims there's an IP address, which is unreachable. That's a bug right?
<sarnold> ppetraki: that does sound like a bug; iirc there's a way to tell lxd which bridge to use (it's liable to be a different mechanism than whatever you used to mark yours 'default') -- but it feels funny that it then reports unreachable IPs
<ppetraki> sarnold, well, I told it not to even make the lxcbr0 (whatever the default bridge is) to begin with, it physically doesn't exist. So it's like the new profile, which I created from scratch decided to inherit a device config without telling me. It gets better, when I take the 'device' config from default and insert it into my new profile, those containers now have no interface.
<ppetraki> sarnold, about to retest on a fresh vm, I'll keep you posted
<sarnold> ppetraki: note that recent lxd changes have switched to using lxdbr0 by default, among other bridge-related changes https://launchpad.net/ubuntu/+source/lxd/+changelog
<sarnold> ppetraki: it may be worth making sure your image has the 2.0.0~rc9-0ubuntu3 release..
<ppetraki> sarnold, ok, I have that release installed. the default config isn't getting my bridge. http://pastebin.ubuntu.com/15696446/  . just installed this vm from scratch
<sarnold> ppetraki: probably one for stgraber ^^^
<ppetraki> sarnold, ... and if I restart the lxd service it's there :)
 * ppetraki sees if it can use it
<sarnold> hunh
<ppetraki> sarnold, http://pastebin.ubuntu.com/15696528/
<stgraber> ppetraki: either you're on an old packaging which doesn't restart lxd-bridge for you or your lxc profile command ran right before the change is effective
<sarnold> ppetraki: try the ps aux again after sudo service lxd start; iirc it's a socket-activitated thing
<ppetraki> stgraber, just installed the whole distro an hour ago
<stgraber> ppetraki: ok, so chances are that if you ran that lxc profile command again a second later it would have been right
<ppetraki> stgraber, see http://pastebin.ubuntu.com/15696446/
<stgraber> ppetraki: ok, so yeah, you are on the latest, which means that the change very likely was applied a few ms after you ran that lxc profile command
<stgraber> ppetraki: profile changes can only be applied after the daemon is started so the very first query you do may return the old config. We do however hold container startup until the config was updated so you can't race it and start a container with the old bridge.
<ppetraki> stgraber, I'm not the fastest typist in the world, perhaps a few seconds passed between the start and me asking for the profile. In either case, it's easy enough to sleep a sec or restart the service before continuing.
<stgraber> ppetraki: lxd is socket activated
<stgraber> ppetraki: so the first lxc command you run will start it, it won't be running until then
<ppetraki> stgraber, ah, right, the new systemd stuff
<ppetraki> ok
<stgraber> so if the first command you run is "lxc profile show default" then it very likely will show you the old values
<ppetraki> stgraber, I'll just poll for the right keys to show up then
<stgraber> if it's the second command you run, it should be fine as the profile update script is very quick and starts immediately when the API is available
<sarnold> is there a do-nothing command that'll start it that would be a nice first command to run?
<stgraber> ppetraki: why do you need to wait?
<stgraber> sarnold: lxc finger
<sarnold> stgraber: looks perfect :)
<ppetraki> stgraber, I don't. Just like you said, if I run the lxc command again it should reflect the correct profile
<sarnold> ppetraki: try: 'lxc finger ; lxc profile show default' and see if that's the results you expect
<ppetraki> stgraber, so when I script this, I'll just call that and wait until bridge0 actually shows up in the config
<stgraber> ppetraki: well, my question is more, why do you need to check that bridge0 made it in there?
<ppetraki> stgraber, its correct
<ppetraki> stgraber, about to create a bunch of containers with it via script
<stgraber> ppetraki: if the next thing you do is launch a container, lxd will already wait for profile config updates to be done before actually starting it
<ppetraki> so I want it to be correct
<ppetraki> stgraber, ok, I will try that, less work, better
<stgraber> yeah, we sure didn't want users running "lxc launch ubuntu:14.04 blah" as their first command and that picking up the wrong bridge, so we do have logic to avoid that very problem :)
<ppetraki> I'm sure. It's just that I just spent an hour or so working with a slightly older version that wasn't as well behaved. So I got a little paranoid
<stgraber> we just can't do the same with the profile API as the profile update script itself needs the API to be functional to update the config :)
<stgraber> ah yeah, that particular fix was introduced in rc9
<ppetraki> stgraber, good stuff so far, it looks like it'll suit us well. passing through kernel devices was fun
<stgraber> :)
<ppetraki> now to get the rest of the stack up
<ppetraki> stgraber, ugh, check this out. it's like the default profile isn't the default. http://pastebin.ubuntu.com/15696865/
<stgraber> ppetraki: nope, you're just not reading this right :)
<ppetraki> of course!
<stgraber> ppetraki: it's listing you the interfaces INSIDE the container
<stgraber> ppetraki: ubuntu:x/amd64 is pretty dated (last beta) so it still has lxcbr0
<stgraber> ppetraki: if you were to use ubuntu-daily:x/amd64 you wouldn't see that interface anymore
<sdeziel> stgraber: nested lxc ?
 * ppetraki switches images
<stgraber> sdeziel: yeah, it's showing the lxcbr0 that's running inside the container as we have lxd pre-installed in cloud images
<sdeziel> makes sense, thanks
<sarnold> owwwww
<sarnold> now my head hurts
 * patdk-wk is so far happy with lxc, though it's kind of annoying to adjust my apparmor profiles for it
<patdk-wk> or is there a better way to *stack* apparmor profiles
<patdk-wk> think I use changehat :(
<patdk-wk> ah, used aa_change_profile
<tyhicks> patdk-wk: the Xenial kernel has support for stacking AppArmor profiles
<sarnold> patdk-wk: I -think- that just landed..
<patdk-wk> ya, I'm using 16.04
<sarnold> patdk-wk: .. dunno if lxd has adapted yet
<tyhicks> patdk-wk: the required userspace changes will hopefully land within the next few days
<tyhicks> sarnold: no, they haven't
<patdk-wk> I found out my profile change didn't work, so I created a lxc profile + my changes
<patdk-wk> then adjust the lxc profile to allow that profile to be used
<patdk-wk> worked for now, just seems like a bit of extra work
<patdk-wk> ok
<tyhicks> patdk-wk: you'll (hopefully) soon have aa_stack_profile() in Xenial's libapparmor
<patdk-wk> so that will only work when running lxc 16.04 guests?
<patdk-wk> but not 14.04 right?
<patdk-wk> unless backport libapparmor somehow
<sarnold> the stacks would be managed by the 16.04 lts system; i'd expect it to work..
<tyhicks> patdk-wk: it'll require a 16.04 host
<tyhicks> patdk-wk: note that lxc/lxd integration for stacked profiles is not in place yet
<patdk-wk> oh
<tyhicks> patdk-wk: as long as you have a 16.04 host, it should work with any ubuntu container, including 14.04
<patdk-wk> so the inside will make a normal apparmor call, and it will be adjusted to a stack call?
<patdk-wk> I think that is what I don't get :)
<patdk-wk> ok :)
<tyhicks> the container setup code in the host will change to aa_stack_profile() (or simply adjust policy)
<tyhicks> inside the container, nothing needs to change
<patdk-wk> well, I have a good test for it when it's ready atleast :)
<tyhicks> great :)
<patdk-wk> updating my profiles for 16.04 was fun, so many new things I can limit :)
<tyhicks> :)
<sarnold> :D
<ppetraki> stgraber, I guess I can't call this from a container? mlockall( MCL_CURRENT|MCL_FUTURE ); our process doesn't want to be swapped out
<stgraber> ppetraki: I'm guessing that's a privileged call so not for unprivileged containers
<stgraber> ppetraki: you could try to make the container privileged though (lxc config set NAME security.privileged true && lxc restart NAME)
<ppetraki> stgraber, can I make it a privileged container?
<stgraber> at which point root in the container will be real root outside of it, so unless the kernel is being weird because of cgroups and stuff, this should work
<ppetraki> stgraber, that's working
<ppetraki> it got further, and then I ran out of space :). good, it looks like it's getting the resources it needs
<Velus-universe> hello i'm following this https://help.ubuntu.com/community/PostfixVirtualMailBoxClamSmtpHowto and i get to this bit "Make Dovecot listen for authentication requests" and it says the code needs to be updated for newer versions? can someone help me with this bit please?
 * patdk-lap wonders why it wouldn't work
<Velus-universe> https://help.ubuntu.com/community/PostfixVirtualMailBoxClamSmtpHowto#Configure_Dovecot shows a pre and post 12.04 setup and they look different, i can't figure out how it needs to be changed
<Velus-universe> patdk-lap, would you be able to help me with this please
<patdk-lap> http://wiki2.dovecot.org/HowTo/PostfixAndDovecotSASL
<patdk-lap> though, you still need the passdb and userdb sections
<Velus-universe> patdk-lap, would the service auth be like this then http://pastebin.com/EMuMbmnN
<patdk-lap> remove executable
<patdk-lap> remove user
<patdk-lap> and the example you are following uses auth-client instead of auth
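Putting patdk-lap's corrections together, a sketch of the dovecot 2.x `service auth` block from the linked wiki page (the socket path assumes postfix's stock chroot layout; adjust to your setup):

```
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
}
```

Note there is no `executable` or top-level `user` line here, matching the "remove executable / remove user" advice above.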
<Velus-universe> ok
<patdk-lap> dunno why they are having you use cram-md5, that is horrible
<Velus-universe> ok patdk-lap if i remove the user and that it refuses connection
<patdk-lap> odd, I don't have it
<Velus-universe> i got it working now anyway, now i just need to ad mx records to my domain name but i forgot how to do them lol
<patdk-lap> just mx?
<patdk-lap> if you plan to send email you should have ptr, spf, and maybe dkim
<Velus-universe> yeah i will be doing them in a bit but i need to figure out how to do the mx i will be setting up the ptr and spf after the mx
<conrmahr> Can you format a disk with a RAID array on UBNTU server?
<patdk-lap> conrmahr, what does that even mean?
<conrmahr> I know I'm an idiot
<conrmahr> I took an 4TB WD Red Drive from a Synology DiskStation
<conrmahr> and installed it on a computer running Ubuntu Server but it can't mount it.
<patdk-lap> well ya
<patdk-lap> no one claims synology uses some standard raid/filesystem that ubuntu understands
<conrmahr> Can I just erase and format it to ext4?
<patdk-lap> I personally have no idea what they use
<patdk-lap> sure
<conrmahr> yeah that's what i want to do
<sarnold> patdk-lap: i'd always had the impression that it was dm-raid with ext4; iirc they even use ecryptfs for their encrypted storage
<patdk-lap> dunno
<sarnold> but their own 'magic raid" thing may be different, non-standard..
<conrmahr> f-synology
<conrmahr> im sick of it
<patdk-lap> most of them use mdadm/dm-raid
<conrmahr> i just want to run my own Ubuntu server
<patdk-lap> some of them, heh, don't
<patdk-lap> I know mine uses raid-3d
<patdk-lap> believe it was made by ibm
<Velus-universe> patdk-lap, i have done the mx record now yay
<conrmahr> Can anyone help me format a HDD on Ubuntu server with a raid array?
<nacc> conrmahr: you mean you want to use SW RAID with the partitions of a single HDD?
<conrmahr> i just remove everything from a HDD and start fresh with a clean ext4 partition
<nacc> conrmahr: ok, maybe just restate what you are trying to do?
<conrmahr> Yes I have a computer with a 60GB SSD with Ubuntu Server on it
<conrmahr> I want to install a 4TB WD Red HDD as a secondary
<bekks> And where is that raid array then?
<conrmahr> but that WD HDD is from a Synology NAS which was formated with a RAID array
<bekks> So you need to put a new primary partition table on it.
<bekks> Do you use UEFI?
<conrmahr> yes
<conrmahr> i think i've tried but it says it's "in use"
<bekks> You've tried what?
<conrmahr> with 'parted"
<conrmahr> new primary partition
<bekks> UEFI has no primary partitions.
<bekks> you are still trying to use legacy mode.
<conrmahr> i'm pretty sure my BIOS is UEFI
<conrmahr> let me check
#ubuntu-server 2016-04-09
<conrmahr> yes it's UEFI
<conrmahr> there's even a UEFI guide that you can run
<conrmahr> my UEFI version is H97M Pro4 P2.00
<conrmahr> by ASRock
<JanC> I think meant GPT instead of UEFI
<JanC> I think bekks meant GPT instead of UEFI
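A quick illustration of JanC's point that GPT has no primary/extended partition distinction: parted works fine on a plain image file, no root or real disk needed, and on a GPT label a partition gets a name instead of the "primary" type.

```shell
truncate -s 100M disk.img          # sparse stand-in for a disk
parted -s disk.img mklabel gpt
parted -s disk.img mkpart data ext4 1MiB 100%   # "data" is a name, not "primary"
parted -s disk.img print
rm disk.img
```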
<sarnold> conrmahr: normally "raid array" means "two or more disks"
<conrmahr> yes
<conrmahr> i had 2 WD 4TB HDDs Raided together
<conrmahr> on a Synology NAS
<conrmahr> Can I delete one of them on Ubuntu Server so I can transfer the backup HDD from the NAS to the Ubuntu Server HDD over the network?
<nacc> conrmahr: your question is very confusing
<nacc> coreycb: you had a 2-disk RAID array
<nacc> coreycb: sorry
<nacc> conrmahr: you had a 2-disk RAID array
<conrmahr> yes
<nacc> conrmahr: you have broken that array
<nacc> conrmahr: and you want to install one of those 2 disks as an external drive on your server?
<conrmahr> yes, but internal drive
<nacc> conrmahr: ok
<nacc> conrmahr: so to make the formerly RAID disk visible to your new system, you should just need to format it (partition table and all) and put a fs on it
<conrmahr> i'm sick of Synology. I want to use Ubuntu Server.
<nacc> conrmahr: but then I really don't understand the last question: "Can I delete one of them on Ubuntu Server so I can transfer the backup HDD from the NAS to the Ubuntu Server HDD over the network?"
<nacc> conrmahr: a) delete one of "what" on Ubuntu Server?
<nacc> conrmahr: b) transfer a HDD over the network?
<conrmahr> ok I have about 3 TBs of data on HDD 1
<conrmahr> and HDD 2 is a backup
<JanC> backup?
<nacc> conrmahr: RAID mirror or backup?
<conrmahr> exact mirror
<nacc> conrmahr: so ideally, you'd just plug in HDD2 (or 1) into your server and it would work, but it's formatted as RAID member?
<conrmahr> so I want to take HDD2, delete everything and format it for Ubuntu Server, then over the network copy all the files from HDD1 with a network mount.
<nacc> conrmahr: so if it's plugged into your server now, I'd expect you to be able to edit the partition table (fdisk, cfdisk, parted) and delete all partitions and create a new one
<nacc> *new partition
<conrmahr> I tried that but it didn't recognize it because of the format that Synology uses.
<JanC> Synology uses standard linux RAID AFAIK
<nacc> conrmahr: why would you need to recognize the format that Synology uses?
<conrmahr> ok
<nacc> conrmahr: you're deleting all the partitions, afaict
<sarnold> recognizing the format might allow skipping the repartition and reformat steps :)
<conrmahr> when i do "sudo fdisk -l"
<nacc> sarnold: that's true, but conrmahr seemed ok/resigned to that :)
<JanC> actually, I guess his server recognizes the raid mirror & automatically starts it as a degraded RAID
<JanC> or something
<nacc> JanC: oh that could be ...
<sarnold> JanC: ooh
<JanC> that would explain the "in use"
<nacc> JanC: didn't think of that
<nacc> conrmahr: need to step away, sorry, but if JanC is right, you'll need to stop that array (possibly with mdadm)
<conrmahr> what do i do after
<conrmahr> actually looks like i got it removed
<conrmahr> i don't see /dev/md127 anymore
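For anyone hitting the same auto-assembled degraded array, the usual recipe looks like the following (requires root; /dev/md127 and /dev/sdb1 are illustrative, so this is shown as a reference recipe rather than something to paste blindly):

```shell
cat /proc/mdstat                         # see which md devices were assembled
sudo mdadm --stop /dev/md127             # stop the auto-assembled array
sudo mdadm --zero-superblock /dev/sdb1   # wipe the RAID metadata so it
                                         # won't re-assemble on next boot
```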
<conrmahr> how do i create a new partition after i type
<conrmahr> $ sudo parted /dev/sdb
<conrmahr> so i did "(parted) print"
<conrmahr> and it says the Partition Table: gpt
<conrmahr> with no partition
<sarnold> alright, now you've got to decide what you want the disk to do on ubuntu -- if you want to use mdadm to create a mirror, then format ext4, then copy data.. or use zfs to create a degraded mirror, add a filesystem, then copy data over..
<conrmahr> So I just want it to store media files
<Velus-universe> what is happening here E: Sub-process /usr/bin/dpkg returned an error code (1)
<conrmahr> i have a 60GB SDD for just the Ubuntu OS
<conrmahr> I"m just wanting to set up these 2 WD Red 4TB HDDs as secondary drives for a NAS.
<JanC> conrmahr: you want to create a new RAID?
<sarnold> Velus-universe: it means "look higher in the log to find out what the error is"
<conrmahr> no, I just want to get this working and not lose my data.
<Velus-universe> in what log
<JanC> so you want both drives as separate disks in the end?
<conrmahr> yes
<conrmahr> Install them like they are fresh out of the box
<conrmahr> with  nothing on it
<JanC> well, if there is nothing on it, you also can't put files on it  ;)
<sarnold> Velus-universe: the apt-get run
<conrmahr> ha, true. But I was hoping to clean one, transfer the data over the network, then clean the next one and install it.
<Velus-universe> ok it's hissing about a missing folder
<JanC> you'll need a partition & a filesystem, but you could also make it such that you have one filesystem spread over both, etc. (there are lots of options...)
<Velus-universe> http://pastebin.com/e0Jk8GcX that is the log with it
<conrmahr> when you say file system, you mean like OS?
<JanC> no, the filesystem is what remembers which file, directory, etc. is where on the disk
<JanC> like ext4, NTFS, FAT32, etc.
<sarnold> Velus-universe: check dmesg to see if you have errors, from the scsi layer or memory problems..
<conrmahr> oh right
<sarnold> Velus-universe: something like dmesg | tail -30   ought to be enough to give you an idea if there's anything serious
<conrmahr> so would you recommend NTFS if I was sharing it with other Mac and PC computers?
<JanC> if you mean sharing over the network, then no
<sarnold> what do you mean by "sharing it with"?
<conrmahr> yeah sharing over the network
<conrmahr> i want to make the device a NAS/Media Server
<Velus-universe> http://pastebin.com/wJ7Y88i1
<sarnold> Velus-universe: good
<sarnold> Velus-universe: check df -h  -- are any filesystems at 100%, or close to it?
<Velus-universe> nope all 1 or 2 %
<Velus-universe> or 0 %
<sarnold> Velus-universe: alright, try apt-get install -f
<Velus-universe> http://pastebin.com/TQ62TRZB
<JanC> might also be useful to check df -i
<Velus-universe> same all 1%
<sarnold> JanC: definitely good idea, it's something I overlook all the time..
<conrmahr> so any recommendations on the filesystem for NAS?
<JanC> conrmahr: it all depends on how you want to use it, how much RAM you have, what Ubuntu version this is, etc.
<JanC> and the type of files too
<conrmahr> only 8GB DDR3 now
<sarnold> conrmahr: i'm partial to zfs; it provides compression, mirroring, snapshots, and other cool features; see https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ for a great series of blog posts that describe it well
<conrmahr> but will upgrade to 16GB later
<JanC> and you could later add the second drive to the same filesystem with ZFS
<conrmahr> basically a lot of mpg, mkv files for TV and Movies
<Velus-universe> i wonder why this aint working then
<conrmahr> that I will run Plex Media Server over
<sarnold> Velus-universe: I'm not sure. the next step is to save aside your dovecot configuration if you have any, apt-get purge the dovecot package, then apt-get install the package again
<Velus-universe> how do i apt-get purge
<Velus-universe> and also i got rid of the dovecot to start again
<sarnold> "apt-get purge dovecot" should do it
<Velus-universe> im still getting the problems
<Velus-universe> ok i removed dovecot-lmtpd and it worked now
<conrmahr> so is zfs easy to use for a noob?
<sarnold> conrmahr: I think it's easier than mdadm or lvm
<sarnold> probably nothing comes close to the ease of the synology
<conrmahr> well once i have it set up i should be golden
<conrmahr> I worked on RHEL and Ubuntu VPSs for web development
<conrmahr> but never had to install hardware
<conrmahr> where's the zfs ubuntu repo? This looks like Debian.
<sarnold> heh, I know the feeling, I've used linux for years, but only installed it a few times. the installer ...
<sarnold> conrmahr: in xenial, zfs is included; in wily, it's done via dkms packages. earlier than that, and you'll want to use some pakages from a PPA instead
<conrmahr> i have 14.04.4, is that xenial?
<conrmahr> so i can just do apt-get?
<sarnold> that's trusty
<conrmahr> what's PPA?
<sarnold> a personal package archive; in this case, darik horn has zfs packages built for precise, trusty, and wily here https://launchpad.net/~zfs-native/+archive/ubuntu/stable
<conrmahr> gotcha
<JanC> alternatively, you could try installing Ubuntu 16.04 (xenial) beta instead
<sarnold> that's what I did last week with my new computer
<sarnold> so far so good
<JanC> final release of 16.04 should be in two weeks, I think
<sarnold> it only took two or three minutes to set up my nine-drive zfs array
<JanC> nice  :)
<conrmahr> which zfs package should i install for trusty
<conrmahr> zfs-fuse?
<sarnold> no, skip that
<conrmahr> i installed the ppa repo
<conrmahr> don't i have to install the package?
<sarnold> you do, but zfs-fuse is an ancient and terrible thing
<conrmahr> oh
<sarnold> install zfs-linux -- hopefully that'll install whatever else it needs
<conrmahr> how about ubuntu-zfs?
<sarnold> that might install everything from his repository in one go; using zfs for the / filesystem is a bit difficult, I wonder if that package exists to help the zfs root case..
<conrmahr> i did a apt-cache search
<conrmahr> and it says
<conrmahr> ubuntu-zfs - Native ZFS filesystem metapackage for Ubuntu.
<conrmahr> zfs-dkms - Native OpenZFS filesystem kernel modules for Linux
<sarnold> might as well go with ubuntu-zfs. it'll probably do the right thing.
<conrmahr> done
<conrmahr> so can you walk me through how to create a new filesystem and partition?
<conrmahr> with parted right?
<sarnold> zpool create -n /dev/disk/by-id/.... --- pick the device name for the 4tb drive
<sarnold> zpool create will know how to partition and format the disk
<conrmahr> oooh
<conrmahr> i like i like
<sarnold> dang, i've got to take off in a minute..
<sarnold> the -n is a no-op mode, to make sure it looks fine
<sarnold> take it off if you like the way the command output looks
<sarnold> and re-run it..
<conrmahr> FRIDAY NIGHT!
<conrmahr> ok
<sarnold> ah I got the command wrong, "zpool create -n poolname /dev/disk/by-id/..."
<conrmahr> so if the disk is /dev/sdb
<conrmahr> it would be "zpool create -n poolname /dev/sdb"
<conrmahr> do i have to create a poolname?
<sarnold> the downside to the short name /dev/sdb rather than /dev/disk/by-id/ata-WD_..... is that the long name shouldn't change; the short name can change based on which drives are brought up first by the bios or os
<sarnold> yeah, a lot of people pick 'tank' or 'pool' or 'srv'
<patdk-lap> heh
<patdk-lap> aggr :)
<sarnold> aggr? never seen that one before :)
<patdk-lap> netapp
<sarnold> ahh
<conrmahr> i can find the long name right?
<patdk-lap> export and import it
<sarnold> conrmahr: alright, after that, "zfs set compression=lz4 poolname  ; zfs set atime=off poolname ; zfs set checksum=sha256 poolname ; zfs create poolname/movies ;  zfs list"    :)
<sarnold> patdk-lap: good point.
<sarnold> conrmahr: time to run, have fun :)
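Gathering sarnold's steps into one recipe (requires root and the zfs tools; the pool name "data1" is what conrmahr picks later in the log, and the by-id device path is a placeholder -- note also patdk-lap's later point that the default checksum is usually fine, so it is left unset here):

```shell
# Dry run first (-n shows what would be created without doing it):
sudo zpool create -n data1 /dev/disk/by-id/ata-WDC_WD40EFRX-EXAMPLE
# Looks good? Run it for real -- zpool handles partitioning and formatting:
sudo zpool create data1 /dev/disk/by-id/ata-WDC_WD40EFRX-EXAMPLE
sudo zfs set compression=lz4 data1
sudo zfs set atime=off data1        # "atime", one t
sudo zfs create data1/movies        # a child filesystem for the media
zfs list
```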
<patdk-lap> why checksum sha256?
<patdk-lap> that would make it slow
<sarnold> because I trust it more than fletcher4
<conrmahr> i like trust
<patdk-lap> sure, it's *better*
<patdk-lap> but the level of difference is really small :)
<sarnold> actually
<patdk-lap> have to have an ecc error from the disk first
<conrmahr> i do need my p0rn fast though
<sarnold> ... conrmahr if you're not going to have any redundancy anyway, you don't need the better checksum. heh.
<sarnold> patdk-lap: hmm. I hadn't thoght about that.
<conrmahr> so like later
<patdk-lap> I would keep with the default checksum, unless you're going to attempt dedup
<conrmahr> when i add the 2nd drive
<conrmahr> and i just want to clone it
<sarnold> patdk-lap: .. is that why pool scrub runs at only 400 MB/s? :)
<conrmahr> would i just do an rsync?
<patdk-lap> sarnold, no
<patdk-lap> that more has to do with iops
<patdk-lap> and well, transaction sizes
<patdk-lap> so much metadata to read and verify per block of data
<patdk-lap> well, figured out why some of my disks are going so slow :(
<patdk-lap> 3 bad bbwc
<sarnold> conrmahr: you'll use "zpool attach". DO NOT USE ZPOOL ADD. write that down.
<patdk-lap> and 4 bbwc without any ram installed
<sarnold> patdk-lap: awwww
<sarnold> ouch
<patdk-lap> I never put any workloads on them before I setup this test
<patdk-lap> so didn't notice or really care
<conrmahr> where does it say zpool add?
<sarnold> conrmahr: the zpool manpage.
<patdk-lap> zpool add = making raid0's
<sarnold> conrmahr: it's way too easy for people to screw this up :) I've seen three people make that mistake already...
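The attach-vs-add distinction above, sketched out (pool and device names are examples):

```shell
# Turn a single-disk pool into a mirror: attach the NEW disk to the EXISTING one.
# Syntax: zpool attach <pool> <existing-device> <new-device>
zpool attach tank \
    /dev/disk/by-id/ata-DISK1-EXAMPLE \
    /dev/disk/by-id/ata-DISK2-EXAMPLE

# By contrast, "zpool add tank <device>" stripes the new disk into the pool
# (a raid0 with no redundancy) and is hard or impossible to undo.
zpool status tank    # should now show a mirror-0 vdev resilvering
```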
<patdk-lap> had an l2arc die monday :(
<patdk-lap> caused the server to panic
<sarnold> patdk-lap: awwwww :(
<sarnold> yikes
<patdk-lap> but that was illumos though
 * patdk-lap wonders if it is worth attempting to use luster
<patdk-lap> so far, gluster has been horrible
<patdk-lap> or, lustre
<conrmahr> so i ran everything
<conrmahr> i get cannot open 'data1'; dataset does not exist
<conrmahr> that was my poolname
<conrmahr> oh creat
<conrmahr> crap
<conrmahr> i didn't remove the -n
<conrmahr> what's this command?
<conrmahr> zfs set attime=off data1 ;
<conrmahr> it errors out
<patdk-lap> one t
<patdk-lap> atime, not attime
<conrmahr> beautiful
<conrmahr> now i just need to transfer the files!
<patdk-lap> rm does that good, really quick too
<conrmahr> that's a joke right?
<patdk-lap> :)
<conrmahr> how about rm *
<conrmahr> oops, meant to put it here
<conrmahr> would you use "$ cp" to move 3TB of data?
<patdk-lap> what is it?
<conrmahr> media files
<conrmahr> i mounted the network drive to /mnt
<patdk-lap> rsync -avP --inplace, should do it nicely
<patdk-lap> the v and P are optional
<conrmahr> what's the directories look like
<patdk-lap> two directories?
<conrmahr> i mean which is first and second
<patdk-lap> source dest
<patdk-lap> put a / on the end, if you don't want it to copy the source folder itself
<patdk-lap> but just stuff in it
<conrmahr> and what do the v and P do?
<patdk-lap> v shows you the files as it copies
<patdk-lap> P gives you progress per file
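Putting the rsync advice together (paths are examples; the trailing slash on the source copies the directory's contents rather than the directory itself):

```shell
# Copy the contents of the mounted network share into the new dataset
rsync -avP --inplace /mnt/movies/ /tank/movies/

# -a         archive mode: recursive, preserves times/permissions/links
# -v         print each file as it is copied
# -P         per-file progress, and keep partial files so copies can resume
# --inplace  update destination files in place (sensible for large media files)
```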
<Velus-universe> patdk-lap, i have now got some of it working but my dovecot won't start and i get nothing in the logs
<wsirccc__> can anybody help with which preseed scenarios are possible? Managed a working cl with virt-install, local preseed.cfg and remote network install. 1) Do not know if preseeding is possible with a local iso and 2) do not know what to do if in front of a physical computer. tia
<ratrace> Hello. Alt-tab between windows. Which program/package is responsible for that behavior? Unity or Compiz?
<ratrace> eh... sorry, wrong chan.
<mohanadvoxo> hello can anyone help me ?
<mohanadvoxo> http://paste.ubuntu.com/15706982/
<conrmahr> JanC: Thanks a million for your help.
<JanC> helping is what we are here for  :)
<conrmahr> Currently ryncing data to drive
<conrmahr> patdk-wk: thank you too.
<wsirccc__> is there anybody with any experience with unintended install?
<ratrace> wsirccc__: unintended? or unattended?
<wsirccc__> ratrace
<wsirccc__> ratrace: the latter.
<ratrace> wsirccc__: does this help? https://help.ubuntu.com/lts/installation-guide/armhf/apb.html
<ratrace> wsirccc__: oops, sorry, this link: https://help.ubuntu.com/lts/installation-guide/amd64/apb.html
<wsirccc__> well one could discuss differences with debian https://wiki.debian.org/DebianInstaller/Preseed#Adding_the_preseed_file_to_the_installer.27s_initrd.gz
<wsirccc__> What is different from debian? Then I'll understand more easily.
<wsirccc__> ratrace
<wsirccc__> ratrace:
<ratrace> I don't know what is different to debian.
<ratrace> Unless you were referring to this? https://help.ubuntu.com/community/AutomaticSecurityUpdates
<ratrace> unattended upgrades...
<wsirccc__> no, I want to automate the install for many use cases, understanding the methods and unit-testing them.
<wsirccc__> https://github.com/qemu-buro-point-dpkg/qemu-buro-point-dpkg/tree/master/qemuburo
<wsirccc__> really
<FManTropyx> ok, I tried to reboot my virtual server, but after disconnecting me it just hung
#ubuntu-server 2016-04-10
<Superbest> My server can ping eg. 8.8.8.8 but ping www.google.com doesn't work, can someone help me troubleshoot and fix it?
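A hedged sketch of the usual first checks for this symptom: when IPs ping but hostnames don't, it's DNS resolution rather than routing (8.8.8.8 as a known-good resolver is just an example):

```shell
ping -c1 8.8.8.8                 # routing works (as stated)
cat /etc/resolv.conf             # is any nameserver configured at all?
nslookup www.google.com          # does the configured resolver answer?
nslookup www.google.com 8.8.8.8  # does a known-good resolver answer?
# If only the last one works, fix the nameserver in the interface /
# resolvconf configuration rather than hand-editing resolv.conf.
```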
<karstensrage> why was systemd so quickly adopted?
<patdk-lap> cause debian adopted it
<patdk-lap> and to not do what debian does, causes a lot of stress
<karstensrage> was ubuntu happy about it?
<patdk-lap> how should I know?
<BLUG_Fred> hi! Having some mysql not starting problems on 16.04. Is this the right place?
<ikonia> bilde2910: #ubuntu+1 is the pre-release discussion channel
<a7ndrew> Is installing Ubuntu Server 16.04 possible without a network connection?
<ikonia> yes
<ikonia> it ships on a DVD and you can use other media, such as USB
<ikonia> (once it's released)
<a7ndrew> awesome! I'm just doing something wrong then. I've downloaded the beta iso and copied it to USB and it isn't working for me.
<a7ndrew> I get an error screen after it can't reach the mirror I choose.
<ikonia> you won't be able to use the mirrors until you configure networking
<a7ndrew> That's not a problem, I just can't seem to proceed with formatting disks and copying everything over.
<ikonia> copying everything over ?
<a7ndrew> I mean, copying the OS components from the installer to disk.
<ikonia> installer to the disk ?
<ikonia> it doesn't do that
<ikonia> how are you booting, from an install DVD?
<a7ndrew> from a USB key
<a7ndrew> I used ubuntu-16.04-beta2-server-amd64.iso and then used the USB installer as documented here http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows
<ikonia> ok, so I suggest a.) use the #ubuntu+1 channel b.) be a bit more descriptive on what's failing, it's gone from "can't use the mirror" to "can't partition the disk" to "can't copy the OS"
<a7ndrew> The installer fails to proceed beyond the mirror selection screen.
<ikonia> so thats totally different than what you just said
<ikonia> which was you couldn't partition and it wouldn't copy the OS
<a7ndrew> Well I can't do those things because it seems the installer requires mirror configuration first.
<bekks> a7ndrew: "we can't do those things" is as unclear as "it doesn't do", "it doesn't work", "it fails" without any details.
<patdk-lap> this is annoying
<patdk-lap> 16.04 refuses to start openvpn :(
<patdk-lap> well, that is kindof annoying
<ratrace> refuses, eh? how dares it have an opinion!
<GrandPa-G> Any suggestions on how to setup wifi password at command level without going through all the edits and commands that are normally described?
<vbotka> GrandPa-G, wpa_cli is the client to wpa_supplicant
<GrandPa-G> vbotka: thanks, what other options for type of password than psk? Is it in the manual?
<vbotka> GrandPa-G, generic info is here https://w1.fi/
<GrandPa-G> vbotka: that all helps. I am really trying to set up a raspberry box headless with only a command-level interface. Also, the box is going to
<GrandPa-G> a novice user so all the editing needs to be eliminated. Basically we have the box working at one site, but moving it to another, so wifi password will change.
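Two edit-free ways to set the new wifi password, as a hedged sketch: the SSID, passphrase, interface `wlan0`, and network id `0` are all placeholders (check `wpa_cli list_networks` for the real id).

```shell
# Option 1: generate a wpa_supplicant stanza from SSID + passphrase,
# so nobody hand-edits the config at the new site:
wpa_passphrase 'SiteNetwork' 'new-site-password' >> /etc/wpa_supplicant/wpa_supplicant.conf

# Option 2: change it at runtime with wpa_cli (assumes a running
# wpa_supplicant on wlan0 and that the network is entry 0):
wpa_cli -i wlan0 set_network 0 ssid '"SiteNetwork"'
wpa_cli -i wlan0 set_network 0 psk '"new-site-password"'
wpa_cli -i wlan0 enable_network 0
wpa_cli -i wlan0 save_config
```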
<jvwjgames> Is there a way to print in real time to a terminal on my server over Bluetooth
<jvwjgames> Is there?
<ratrace> jvwjgames: print what?
<jvwjgames> So say my arduino sends data such as earthquake 7.0 I want the console on my server to print it
<jvwjgames> And then if console read data then execute GCM-sender.py
<ratrace> sends data how?
<jvwjgames> Over bluetooth
<ratrace> that's carrier, but what's the protocol?
<jvwjgames> 2.0
<ratrace> ...
<ratrace> jvwjgames: https://en.wikipedia.org/wiki/OSI_model
<ratrace> Bluetooth is physical layer. I'm asking you what's the application protocol.
<jvwjgames> Oh OK sorry
<jvwjgames> Data layer
<jvwjgames> Obviously
<ratrace> what?
<jvwjgames> I am sorry you are going to have to better explain
<ratrace> not me, you are. :) You're the one with the problem.
<ratrace> jvwjgames: in other words, how is your arduino sending data? Custom JSON packet over HTTPS? zeromq packet? proprietary binary over raw tcp? taht kind of thing.
<ratrace> then you define which service on your computer is picking up this data. is it logging? can you hook into syslog? etc...
<jvwjgames> Oh OK I see now sorry
<jvwjgames> Over http
<ratrace> and which service is picking up the data?
<jvwjgames> And then logged over to /var/log/earthquake/stations/UT-Kearns-1.log
<ratrace> so there's a custom http application receiving the data, and logging to a file?
<jvwjgames> Yes
<ratrace> can you configure it to use syslog?
<jvwjgames> Yes
<ratrace> then that's how you do it.
<jvwjgames> Is that better
<jvwjgames> Ok
<ratrace> configure syslogging for an unused facility number, and then configure syslog to log both to the file and the console.
<jvwjgames> Ok
<jvwjgames> Is there a way that if the log file contains certain text that it will trigger a .py file
<ratrace> I think so yes
<ratrace> syslog-ng at least can.
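The facility routing above, sketched for rsyslog (Ubuntu's default syslog): the facility `local5`, the drop-in file name, and the log path are assumptions taken from the conversation.

```shell
# Assuming the receiving app is configured to log to the unused
# facility local5, route it to both the file and the console:
cat > /etc/rsyslog.d/30-earthquake.conf <<'EOF'
local5.*    /var/log/earthquake/stations/UT-Kearns-1.log
local5.*    /dev/console
EOF
systemctl restart rsyslog
```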
<jvwjgames> Ok thanks so much, sorry to be a pain earlier, I just couldn't think of what you meant, IDK why.
<ratrace> jvwjgames: although, the proper way to do it is to have that receiving application do those triggers
<jvwjgames> Ok
<GrandPa-G> do you really need the data to go to a console? or do you just want the data to go from a remote to the server printer? What is the function of the console?
<GrandPa-G> I found my answer.
<GrandPa-G> oops, didn't mean to send that
#ubuntu-server 2017-04-03
<Erick3k> yep
<Erick3k> sounds like the problem is there
<Erick3k> am testing with the kvm image
<drab> Erick3k: hi, how are you setting up kvm?
<drab> I'm needing KVM for some stuff, used to containers, and having a hell of a time to get something simple going...
<renatosilva> please help, there's some errors in upgrade, how to fix them? http://vpaste.net/y030l
<fishcooker> does your /boot partition have enough space, renatosilva?
<fishcooker> please send the output of your $ uname -a
<renatosilva> Linux <nodename> 2.6.32-042stab120.20 #1 SMP Fri Mar 10 16:52:50 MSK 2017 x86_64 x86_64 x86_64 GNU/Linux
<renatosilva> I was trying to upgrade when libc complained about old kernel, I'm trying to upgrade the kernel and getting the above errors
<stgraber> that paste won't load for me for some reason, but the kernel you're running isn't supported by any Ubuntu release and indeed isn't supported by any modern version of the C library. Your only option is to run a more recent kernel.
<stgraber> this kernel looks like it could be a RedHat kernel as used for an OpenVZ based VPS or something similar to that, certainly not an official Ubuntu kernel
<renatosilva> stgraber: https://pastebin.com/raw/zuRnfgN9
<fishcooker> how about listing your linux-generic package? please paste the output of $ dpkg-query -l linux-generic, renatosilva
<stgraber> I'd expect Ubuntu 12.04 userspace to run fine on such a kernel given that we had to support upgrading from Ubuntu 10.04 to 12.04 (and 10.04 was using a 2.6.32) but upgrading to anything after 12.04 would almost certainly fail
<stgraber> renatosilva: is that a VPS?
<fishcooker> the last time i upgrade the kernel... i just do this $ apt-get upgrade; apt-get install linux-generic then reboot
<stgraber> renatosilva: if so, can you post the output of "ls -lh /proc/user_beancounters"
<stgraber> renatosilva: if it's an OpenVZ VPS, you can't upgrade the kernel as containers run on the host's kernel, making the kernel you're running up to your provider, not to you
<renatosilva> fishcooker: dpkg-query -l linux-generic => dpkg-query: no packages found matching linux-generic
<renatosilva> stgraber: yes that's a vps
<fishcooker> then the stgraber hypothesis is right
<renatosilva> fwiw this is what I tried to update the kernel: apt-get install --install-recommends linux-generic-hwe-16.04
<stgraber> renatosilva: your VPS is a container, containers can't run their own kernel. Even if you succeeded in installing any of the Ubuntu kernels and bootloader, they'd never run
<renatosilva> stgraber:  ls -lh /proc/user_beancounters => -r-------- 1 root root 0 Apr  2 22:39 /proc/user_beancounters
<stgraber> renatosilva: ok, that output confirms that your container is OpenVZ based (which makes sense given the host kernel)
<stgraber> renatosilva: in such an environment you should stick to Ubuntu 12.04, anything more recent than that is unlikely to be compatible with the kernel your hosting provider uses
<stgraber> renatosilva: which is to say, if you need to move to a version of Ubuntu that won't be unsupported in a few weeks, you may need to move to another hosting provider that's running a less outdated version of the kernel
<renatosilva> not sure if my company uses such OpenVZ thing
<stgraber> the commercial version would be called Virtuozzo
<stgraber> renatosilva: and given that /proc/user_beancounters only exists inside OpenVZ (or Virtuozzo) containers, you are definitely using it
<stgraber> that file isn't part of the normal Linux kernel, only kernels built with the OpenVZ/Virtuozzo support patch will have that file
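The `/proc/user_beancounters` check above can be wrapped in a tiny helper; the marker path is parameterized here purely so the function can be exercised outside a real container.

```shell
# Detect an OpenVZ/Virtuozzo container by the beancounters marker file.
# With no argument it checks the real path; an argument overrides it for testing.
is_openvz() {
    local marker="${1:-/proc/user_beancounters}"
    if [ -e "$marker" ]; then
        echo "OpenVZ/Virtuozzo container detected"
        return 0
    fi
    echo "not an OpenVZ container"
    return 1
}
```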
<drab> stgraber: are there any plans for lxd to integrate with say virt-manager or something?
<renatosilva> 12.04 is pretty old and its support is gone this month :(
<drab> I need something to give ppl to manage their containers and virt-manager would be ideal since they are all vmware/vbox users
<stgraber> drab: we definitely don't have any plans to integrate with libvirt. There are a couple of attempts at a web frontend for LXD around but there again we're quite happy with those being external projects.
<renatosilva> stgraber, fishcooker: ok so how do you think I got into this situation? do you think I have been dist-upgrading since 12.04?
<drab> stgraber: fair enough, thanks
<stgraber> drab: the slightly overkill solution would be to run openstack with nova-lxd, which would then give you access to all the openstack tools and the web frontend, but that's not exactly a lightweight solution :)
 * renatosilva afraid of needing to reinstall the whole thing
<stgraber> renatosilva: your paste suggests your container is at least partly on Ubuntu 14.04 as it references a number of packages which don't exist on Ubuntu 12.04
<stgraber> renatosilva: it looks like you have a few options 1) reinstall on Ubuntu 12.04 and continue running it after it's end of support 2) move that server somewhere else, physical machine or VM or container on a host that's not outdated 3) figure out how to get the host upgraded to a more recent kernel
<renatosilva> for now how can I revert this command? apt-get install --install-recommends linux-generic-hwe-16.04
<renatosilva> I don't know the current state of the system
<stgraber> renatosilva: you can remove all those linux-generic and linux-image packages, since you're in a container, they're not used anyway
<stgraber> renatosilva: same goes for grub
<renatosilva> so 12.04 runs kernel 2.6? cause it looks pretty old
<stgraber> renatosilva: no it doesn't, Ubuntu 12.04 runs a 3.2 kernel
<stgraber> renatosilva: again, you are in a container. containers don't run their own kernel, they use the host's kernel.
<stgraber> renatosilva: your host is most likely a Red Hat 6 system running OpenVZ/Virtuozzo which came with a 2.6.32 kernel by default
<renatosilva> so they created an odd installation of 12.04 with a kernel older than 3.2? I see
<renatosilva> I think I'll need to reinstall the whole thing from scratch
<renatosilva> it makes no sense to allow a dist-upgrade without any kind of notice, they should add something to avoid people struggling with it like me
<renatosilva> stgraber: I *think* I have removed those packages, I will reboot and hope it's just ok
<renatosilva> (although the libc will be angry)
<drab> anybody aware of any good docs on setting up kvm on 16.04 ? all I'm finding is old and/or broken
<drab> ubuntu-vm-builder fails so that's not an option
<lynorian> use libvirt
<drab> I tried, that seemed to open a whole new can of worms, a large number of xml files and a whole new set of terminology, but maybe I've been too hasty
<drab> I'm 99% containers and happy with it, but I need a kvm instance or two for some zfs stuff
<renatosilva> ok rebooted successfully, but the apt-get update output is so small now, is this normal? it used to be much larger https://pastebin.com/raw/UsjZW8gq
<drab> and I don't really want to invest in libvirt since there's no compatibility/future with lxc (altho still waiting for a response from their devs on that)
<drab> but docs and stuff on lxc and libvirt are at best just not there, it seems to cater 99% to kvm
<renatosilva> is there any way to find out if the current release has been installed from scratch or if it's originated from a dist-upgrade?
<drab> stgraber: any chance you're still around? I'm reconsidering that comment you made on kvm inside a container
<drab> at least it would allow for some clean experimentation
<renatosilva> it seems my vps company is really using kernel 2.6 on ubuntu 16.04, that's a shame!
<renatosilva> I think I need to open a ticket for them to address the problem but... how can such a large base of users not have complained about it, no broken servers? weird
<renatosilva> thanks anyway stgraber, fishcooker!
<cpaelzer> drab: uvtool is what you need
<cpaelzer> drab: for the easy way to a kvm
<cpaelzer> drab: getting a simple guest comes down to
<cpaelzer> 1. uvt-simplestreams-libvirt sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=xenial
<cpaelzer> 2. uvt-kvm create --password=ubuntu xenial-testguest release=xenial arch=amd64 label=daily
<cpaelzer> drab: and for kvm in lxd stgraber has a post somewhere which surely is better
<cpaelzer> drab: I use http://paste.ubuntu.com/24304999/
<cpaelzer> drab: which I combine with the default template when launching containers by trailing with --profile default --profile kvm
<ishaved4this> hey guys, Is anyone willing to help someone with some basic stuff out real quick?
<cpaelzer> If you had just asked you'd get an answer or not, but this way you will wait twice, ishaved4this
<ishaved4this> alright then. Just didn't want to bother anybody
<cpaelzer> general IRC rule - just ask, you'll usually be helped or redirected - if neither happens everyone is asleep or you are so off topic that everybody just goes ??? but that is rare
<ishaved4this> haha my bad. I don't really use IRC too much anymore
<ishaved4this> anyway, my problem is with permission issues involving plex and transmission reading and writing to my job and 2 other external HDD
<ishaved4this> any takers?
<cpaelzer> ishaved4this: people don't commit in advance to solving it; just ask the question itself: what is failing?
<lordievader> Good morning
<ishaved4this> good morning. What is failing is that I'm still fairly new to using ubuntu server, and transmission doesn't have read/write access to my external drives. now I went against my better judgement and followed a youtube tutorial typing in sudo gpasswd
<ishaved4this> -a plex plugdev & -a plex sudo & -a plex (USER), but I know that's super sloppy.
<lordievader> Why not fix the permission issues?
<lordievader> I.e. give debian-transmission rw access to the download dir.
<ishaved4this> well that's what I'm here for, I'm not sure how
<lordievader> acl's.
<ishaved4this> I also can't remember how to mount my drives. I'm basically a noob when it comes to linux
<ishaved4this> acl?
<lordievader> !acl
<lordievader> ishaved4this: https://help.ubuntu.com/community/FilePermissionsACLs
<ishaved4this> event not found
<ishaved4this> thanks, let me check that out
<ishaved4this> okay so this looks like a simple way to add drives/applications to a group to allow rw access. correct?
<lordievader> It is a good way of adding other groups to the rw pool. Normal unix permissions only allow one owner:group combination.
<ishaved4this> ahh. well, in 16.04, it said acl is already installed. Yet I can't get to it
<ishaved4this> man, there is so much to learn with this os
<ishaved4this> sudo tune2fs -l /dev/sdaX |grep acl
<ishaved4this> Default mount options:    user_xattr acl
<ishaved4this> I see this on the page, but I have no idea what drives are labeled what, and don't know how to find out
<cpaelzer> ishaved4this: you are good with the initial setup, you can go on the subsection https://help.ubuntu.com/community/FilePermissionsACLs#Adding_a_Group_to_an_ACL
<cpaelzer> ishaved4this: to add the permissions you need to the paths you need them to be
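A minimal sketch of that ACL step: the mount point is an example, and it assumes the daemon user is `debian-transmission` (the account the Ubuntu transmission-daemon package runs as).

```shell
# Grant the daemon user read/write on everything that exists now
# (X = execute/search only on directories)
setfacl -R -m u:debian-transmission:rwX /mnt/media/downloads

# Default ACL so files created later inherit the same access
setfacl -R -d -m u:debian-transmission:rwX /mnt/media/downloads

# Verify the result
getfacl /mnt/media/downloads
```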
<ishaved4this> okay, I believe I am about 4 steps ahead of where I should be.
<ishaved4this> I get that I need to add these drives to this list, but I don't know the command to see what drives are connected. I also haven't mounted them yet
<cpaelzer> ishaved4this: there is some reverse way to approach this and make it less complex
<cpaelzer> ishaved4this: USB drives added later are usually mounted by your user - so-called user mounts
<cpaelzer> ishaved4this: they get to /media/UID and the UID is what you now don't know
<cpaelzer> but as soon as you have another disk you will have a different one again
<cpaelzer> ishaved4this: depending on your setup you might consider this being far easier for you http://askubuntu.com/questions/395291/plex-media-server-wont-find-media-external-hard-drive
<cpaelzer> ishaved4this: what you have to consider, is that it gives plex the access that your user has
<ishaved4this> okay. So I have 5 externals right now. Let me look at that link, and thank you very much
<ishaved4this> I've read in the past that changing plex from its own user is a bad idea
<cpaelzer> right it is as I mentioned above, but it is easy - which I want you to choose from at least
<ishaved4this> now, I've made plex a user of plugdev, sudo, and my username already. Should I do the same with transmission to accomplish the same thing?
<cpaelzer> The user might be transmission-daemon, but if adding to plugdev is what you want you can try
<cpaelzer> Although I don't see why it should be in sudo
<ishaved4this> me either, I just followed a guide. Sudo would be overkill wouldn't it?
<lordievader> Transmission-daemon runs under the user debian-transmission.
<ishaved4this> yeah, i've added it to groups plugdev and my name's, and I added my name to its group using gpasswd
<dn`> Is it possible to redirect the installer output/debug log/anything ;-) to a remote syslog with a kernel(?) param while installing?
<caribou> cpaelzer: do you have any idea why LP: #1317491 is still stuck in trusty-proposed ?
<ubottu> Launchpad bug 1317491 in libvirt (Ubuntu Trusty) "virsh blockcommit hangs at 100%" [Medium,Fix committed] https://launchpad.net/bugs/1317491
<cpaelzer> caribou: no I don't
<cpaelzer> caribou: if anything then because it links bugs that were taken out in the SRU page http://people.canonical.com/~ubuntu-archive/pending-sru
<cpaelzer> caribou: but I explained that last week (don't remember who asked)
<caribou> cpaelzer: maybe tinoco, he was inquiring about this bug too
<sonu_nk> hi i want to create subdomain on my ubuntu server.. what is the best way any link ?
<sonu_nk> for apache
<fnordahl> zul: good morning! would you have a moment to assess when stable/mitaka horizon 9.1.2 will be available as a ubuntu uca package? it will solve among other things bug 1666827
<ubottu> bug 1666827 in horizon (Ubuntu Xenial) "Backport fixes for Rename Network return 403 Error" [High,New] https://launchpad.net/bugs/1666827
<TafThorne> sonu_nk: Sub-domain is a DNS thing that you configure in a DNS server for the domain.
<TafThorne> @sonu_nk If you do not have access to an authoritative domain name server for your parent domain you may not be able to configure a sub-domain for your Apache server to be part of.
<zul> fnordahl: yep
<Tazmain> Hi all, my /var/spool/mqueue-client is using 170GB currently, can I somehow get that space back?
<EmilienM> jamespage: hey, since this morning we have some issues with E: Unable to locate package ubuntu-cloud-keyring
<EmilienM> jamespage: our repo config: http://logs.openstack.org/28/450628/2/check/gate-puppet-nova-puppet-beaker-rspec-ubuntu-xenial/acdee4a/logs/apt-cache-policy.txt.gz
<EmilienM> coreycb: ^
<coreycb> EmilienM, it seems to be working fine from the main ubuntu archive. i wonder if the mirror you're using is having issues?
<EmilienM> yeah, that's what i'm looking now
<EmilienM> but I see it: http://mirror.regionone.osic-cloud1.openstack.org/ubuntu/pool/universe/u/ubuntu-cloud-keyring/
<ronator> Is there a reliable way to find out if an Ubuntu server has had a release-upgrade? Meaning, can I prove that a certain Ubuntu 16.04 setup was upgraded from Ubuntu 14.04?
<ronator> aside from network stuff like "still uses the old NIC naming scheme" ..
<cpaelzer> yeah ronator, let me check the exact filename for you
<ronator> cpaelzer: cool thx
<cpaelzer> ronator: /var/log/dist-upgrade/main.log
<ronator> cpaelzer: oh that's cool even with timestamps, awesome& quick , thx
<cpaelzer> ronator: even better - this only holds your last upgrade, but if you want to know where you started check /var/log/installer/media-info
<ronator> cpaelzer:  ' Ubuntu-Server 14.04.2 LTS "Trusty Tahr"  - Release amd64 (20150218.1)' Yeah, that's all the proof I needed, thanks a lot again :)
<ronator> I was only looking into /var/log/apt - installer is so obvious :)
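The two log files above can be checked together; the log root is parameterized here only so the helper can be pointed at a test tree instead of the live system.

```shell
# Report where an install started and whether a release upgrade was logged.
# Defaults to /var/log; pass a different root for testing.
upgrade_history() {
    local root="${1:-/var/log}"
    if [ -f "$root/installer/media-info" ]; then
        echo "installed from: $(cat "$root/installer/media-info")"
    fi
    if [ -f "$root/dist-upgrade/main.log" ]; then
        echo "last release upgrade logged in $root/dist-upgrade/main.log"
    else
        echo "no release upgrade recorded"
    fi
}
```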
<coreycb> EmilienM, i just installed ubuntu-cloud-keyring from http://mirror.regionone.osic-cloud1.openstack.org/ubuntu succesfully so the mirror seems ok from my end
<EmilienM> coreycb: ok, I'm doing 'recheck' to see if it's consistent :/
<coreycb> EmilienM, ok
<EmilienM> coreycb: it works now. Go figure :-)
<coreycb> EmilienM, oh good, just a temporary glitch in the matrix
<EmilienM> yes
<ronator> yeah, fu**** déjà-vus ;-)
<keithzg> What's the preferred webmail component out there these days? I remember Roundcube being reworked into Roundcube-next a while back but I can't say I kept track of how that development work went.
<erick3k> Can someone help me, i can't solve this just gave up. This happens after you resize the disk on a vm
<erick3k> https://i.imgur.com/d2zTZLB.png
<erick3k> gets stuck there during boot
<keithzg> (Just migrated an email server from RHEL5 to Ubuntu Server 16.04, and with modernizing the backend I've been thinking of convenience additions for the frontend)
<keithzg> erick3k: Can you boot into a recovery session?
<blackflow> keithzg: I wouldn't recommend using packages for roundcube. it's a very exposed front-end application, and being in universe it isn't patched very promptly.
<erick3k> keithzg how can i do that?
<erick3k> this happens as soon as you change the disk size, yet growpart is installed so not sure what's causing it
<keithzg> erick3k: You have to choose that from the GRUB menu when rebooting.
<blackflow> keithzg: otherwise, yeh, I'd recommend Roundcube. quite nice webmail.
<erick3k> ok i'll try but it boots so fast
<keithzg> erick3k: You can change that if you boot from a live session or such and chroot into your existing install, but yeah, default configuration doesn't give much room, heh, I don't think it's even visible that it's an option but sometimes spamming ESC will hit the small window of time.
<erick3k> ok
<keithzg> blackflow: Has roundcube-next become the main roundcube? If not from universe (which I did suspect would be quite outdated), where best to grab it from? (I'd certainly like to handle the dependencies from the main repos as much as possible)
<blackflow> keithzg: roundcube-next is a development project. current latest stable is 1.2.4, grab the tarball directly from roundcube. you could install the dependencies according to the roundcube package from universe, but it's just a few php modules.
<blackflow> maybe there's a PPA with latest stable, but I don't trust those anyway, so wouldn't recommend.
<erick3k> keithzg i added a timeout to the grub config but it doesn't seem to work
<erick3k> is something not correct ? https://i.imgur.com/sH2VCNC.png
<keithzg> blackflow: Yeah fair enough. As long as it's pretty much just PHP I'm fine with untarring it into a folder and running it on a server VM.
<blackflow> keithzg: and updates are simple. you untar it into another dir, and run the update script, pointing to the running application dir.
<keithzg> erick3k: Well, at very least from the config you're showing it'd still be hidden, although it should show up. You ran update-grub, right?
<blackflow> keithzg: subscribe to the roundcube mailing list to get mail on new updates and secvuln fixes
<keithzg> erick3k: (from within the install itself via a chroot, hopefully with bind mounts!)
<keithzg> blackflow: Good idea
<erick3k> yes i did
<keithzg> erick3k: Well, if you've gotten into the chroot, it should also be possible to set the default boot or just the next boot to be into recovery mode (although I forget how precisely to do that, it's been quite a while)
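For the invisible-menu problem, a sketch of the /etc/default/grub settings that usually matter on 16.04 (a hedged suggestion, not a confirmed fix for this VM: any GRUB_HIDDEN_TIMEOUT* lines override GRUB_TIMEOUT and should be commented out):

```shell
# /etc/default/grub
GRUB_TIMEOUT_STYLE=menu   # show the menu instead of hiding it
GRUB_TIMEOUT=5            # ...for five seconds
#GRUB_HIDDEN_TIMEOUT=0    # comment out if present; it overrides the above

# then, from inside the install (or a chroot with /boot mounted):
update-grub
```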
<Score_Under> (moving from #ubuntu) So I want to start a small apt repo for the company I'm in. We're migrating from CentOS (mostly), and so on the machines that still have people's gpg keys for example, the correct scripts to set up an apt repository aren't present. CentOS provides dpkg-scanpackages, but I can't find any other software which is necessary to create the rest of the repo (mostly the Release
<erick3k> ok but should this config https://i.imgur.com/1TRRUvC.png show the menu?
<Score_Under> files). I tried hacking together a shell script to do it, and I got apt to be able to "update" from it without complaint, but it doesn't create any corresponding /var/lib/apt/lists files and it can't find any packages in those repos. Regarding documentation, I can't find a nice medium between the kind
<Score_Under> which attempts to define everything from the ground up (which provides far more information than is necessary and would take an enormous amount of time to read through and implement)
<Score_Under> And my question is either: 1. where should I start debugging this, or 2. where can I get a copy of the scripts required to create these repositories
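One hedged answer to question 2: a flat ("trivial") repo needs only dpkg-scanpackages (from dpkg-dev) and apt-ftparchive (from apt-utils). The directory, hostname, and key id below are examples.

```shell
cd /srv/apt                     # directory containing your .deb files
dpkg-scanpackages --multiversion . /dev/null > Packages
gzip -kf Packages               # apt expects a compressed Packages index
apt-ftparchive release . > Release

# Optionally sign Release so clients don't need [trusted=yes]:
# gpg --default-key repo@example.com -abs -o Release.gpg Release

# Serve /srv/apt over plain http, then on each client add:
# deb [trusted=yes] http://aptserver.example.com/ ./
```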
<erick3k> nothing shows it boots instantly
<erick3k> hum i think the recovery kernel is not installed
<erick3k> perhabs why it doesnt show
<erick3k> https://i.imgur.com/BUikNZZ.png
<keithzg> erick3k: Hmm, quite possibly.
<blackflow> Score_Under: I don't have experience setting my own apt repo, so I can't quite help with that, but I'm failing to understand the use case, or why would you need it.
<erick3k> anyway to install it
<erick3k> ?
<keithzg> erick3k: I can't honestly remember if there's a separate recovery kernel needed, I didn't actually think so.
<erick3k> umm
<Score_Under> blackflow: The first use case is for version pinning (for the foreseeable future we're stuck on a version of docker which ubuntu has long since stopped providing), and the second is for adding our own software.
<keithzg> erick3k: I'd strongly suggest using `pastebinit` from the terminal rather than screenshots, though, they're kindof constrained. Try for instance `cat /etc/default/grub | pastebinit` to get your entire config up and readable
<keithzg> Maybe there's still something in there that's tripping it up.
<erick3k> oh ok hold on
<blackflow> Score_Under: I understand. Seen this? https://help.ubuntu.com/community/Repositories/Personal
<keithzg> erick3k: And you can look at /boot/grub/grub.cfg to see what menu items you *should* have
<erick3k> here https://0bin.net/paste/+Vd7Jk3Ox7dd5bXO#5vmbd6uoIXFYSgiuze-M+1O5uryBgzD/Dy3v7r6MacH
<erick3k> ok
<keithzg> erick3k: Hmm yeah that all looks fine. Is this a VM? Perhaps the host is directly booting the kernel or such and thus grub isn't even in the picture.
<erick3k> yes it is kvm
<Score_Under> blackflow: I took a look at that, but it only deals with software kept on one machine. For comparison our RHEL package repo is currently 3.4GB large, so that may end up requiring regular rsyncs of 3GB-ish directories over 200+ machines
<Score_Under> to me it's not an attractive prospect
<erick3k> i don't select any kernel in the vm options so it shouldn't
<erick3k> i think is disabled on the grub.cfg hold on
<keithzg> erick3k: Hmm. I must admit I'm stumped then, sorry
<erick3k> yea, never had this happened before
<erick3k> ubuntu 14 works like a charm
<erick3k> 0 problems
<erick3k> https://0bin.net/paste/-AHCy1JjWVAnhva4#pnDBdZmqKlK9PvHXBKugIoOXgKc3Rw7y1A3l3zYlIgi
<blackflow> Score_Under: check the accepted answer: http://askubuntu.com/questions/170348/how-to-create-a-local-apt-repository    Sounds like serving the dir over http with apache or nginx is all it takes to make it available in a network.
<Score_Under> Yeah. I'm just struggling to get it in a format that apt likes
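One layout apt accepts is a "flat" repository: a directory of .deb files plus a Packages index, served over plain http. A minimal sketch (the /tmp/repo path and repo.example.internal hostname are invented for illustration; the index-generation commands assume dpkg-dev is installed and are shown as comments):

```shell
# Sketch of a minimal flat apt repo. Paths/hostnames are examples only.
mkdir -p /tmp/repo
# Copy the pinned .deb files into /tmp/repo, then generate the index apt
# looks for (requires dpkg-dev):
#   cd /tmp/repo && dpkg-scanpackages --multiversion . > Packages
#   gzip -kf Packages
# Serve /tmp/repo with nginx/apache; clients then add a line like this
# ([trusted=yes] skips signature checking -- sign a Release file instead
# for anything beyond a quick test):
echo "deb [trusted=yes] http://repo.example.internal/repo ./" > /tmp/repo/example.list
cat /tmp/repo/example.list
```

With a signed `Release` file (apt-ftparchive release + gpg) the `[trusted=yes]` escape hatch isn't needed.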
<erick3k> i didnt even change the disk size now, only turned off
<erick3k> and look is stuck https://i.imgur.com/7NoZoZp.png
<erick3k> i just don't get it is beyond me
<nacc> erick3k: are you having issues with iscsi? or what is the problem (sorry only recently joined)
<erick3k> hi nacc, is a kvm virtual machine using cloud image ubuntu 16
<erick3k> once you shutdown this happens
<erick3k> reboots fine tho
<erick3k> and no i have centos 6 7 and ubuntu 14 running on the same exact machine type and 0 problems
<erick3k> can't get into recovery, booted with system rescue, no errors i mean i don't know about to give up
<nacc> erick3k: once you shut down -- what happens?
<erick3k> that
<erick3k> gets stuck booting at https://i.imgur.com/7NoZoZp.png
<nacc> erick3k: ok, please use words ... still waking up :)
<nacc> erick3k: got it
<erick3k> xD
<nacc> erick3k: so `sudo shutdown` then `virsh start` or whatever you use, fails. But from within the instance, `sudo reboot` works?
<erick3k> yes, as long as you reboot it boots back again, until you shutdown
<erick3k> and start
<erick3k> very very very weird
<nacc> so if it fails to 'start' again, do you have to create a fresh VM each time?
<erick3k> yep
<erick3k> and that fresh vm will boot until again shutdown
<nacc> erick3k: any particular hw details? using iscsi, etc.?
<erick3k> as for disk is using Virtio
<erick3k> and console qxl
<erick3k> not much else
<nacc> hrm
<erick3k> gonna try and change those and see if something happens
<nacc> virtio should be fine, i'd see if the console makes a difference (or see if you can ping it -- i did see a console=tty1, which seemed a bit surprising)
<erick3k> yes that might be something
<erick3k> what should i change?
<erick3k> should be tty0?
<nacc> erick3k: that's what i would have expected, but it depends on your config -- did you manually change that?
<erick3k> i do think might be something with the console cuz it usually boots right after what's shown on the pic
<erick3k> nop, image is as it comes from ubuntu cloud images
<erick3k> so defaults
<erick3k> hold on i'll show you
<dn`> is there any easy way to redirect the netinstaller output or at least the log output to a remote syslog while installing via a param?
<erick3k> nacc there are two
<erick3k> GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0"
<nacc> erick3k: ack, i'm going to try and reproduce locally
<erick3k> kool
<nacc> erick3k: interesting, which image format did you use? i used current/ disk1.img with virt-manager and it hangs immediately for me
<erick3k> nice (that you reproduced) hehe
<erick3k> qcow2 i think
<erick3k> hold on let me link
<erick3k> https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
<erick3k> says Cloud image for 64-bit computers (QCOW2 disk image file for use with QEMU and KVM)
<nacc> erick3k: yea, that's the one i used too
<erick3k> so what you think could be the problem?
<erick3k> bad image?
<erick3k> ubuntu 14 cloud image works like a charm btw
<nacc> erick3k: ah! i waited ... a while
<nacc> erick3k: and it booted
<erick3k> hum
<nacc> wht's the default user/password?
<erick3k> umm there is none, you have to create one with cloud-init
<nacc> heh, duh -- well, it did boot
<nacc> let me restart it with some cloud data
<dn`> Anyone got an idea how to redirect the output of the installer or the log to a remote syslog?
<nacc> smoser: --^ erick3k's issue,  is it because it's trying to contact a ds?
<smoser> dn`, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=605614 . it can be done. i'm almost certain. i dont remember the syntax. its probably kernel command line.
<ubottu> Debian bug 605614 in busybox "debian-installer: Ability to configure remote syslog" [Wishlist,Fixed]
<dn`> smoser: thanks, that would be so wonderfully helpful *searching from that post*
<smoser> erick3k, well, if you pass console=ttyS0 (as it appears you are, and is default from the cloud image) then init's messages are going to go to a serial console rather than tty
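If the goal is to actually watch init's messages on the VM's graphical console, one tweak inside the guest (a sketch, not the cloud-image default) is to reorder the console= arguments, since the kernel treats the last one as the primary console:

```
# /etc/default/grub inside the guest -- the last console= argument wins
# /dev/console, so putting tty1 last sends init output to the VGA console.
# (These are the cloud-image defaults from above, reordered.)
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0 console=tty1"
# then: sudo update-grub && reboot
```

Alternatively, attach a serial console to the VM (e.g. `virsh console`) and keep the default ordering.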
<erick3k> i might just install ubuntu from scratch
<erick3k> already spent too many hours on this
<erick3k> can't find the root of the problem
<smoser> erick3k, but also be aware that the cloud images are not going to be useful unless you give them some sort of datasource. they're meant to be booted in a cloud.
<erick3k> thanks for your help tho
<erick3k> yes i know
<erick3k> i sell vpses and all other images work great
<smoser> where are you booting it ?
<erick3k> am using ovirt / rhev
<erick3k> has cloud init integrated
<smoser> and you're sure its not working ?
<smoser> ie, its not that you just aren't seeing something
<smoser> but that its actually not doing something.
<smoser> erick3k, i'd appreciate your help here though... this could be unintended fallout of
<smoser>  https://lists.ubuntu.com/archives/ubuntu-devel/2017-February/039697.html
<smoser> i'd like to make sure that is not the case.
<smoser> could you grant me access to a vps ? or i can sign up if you're a cloud provider.
<erick3k> i only like them because it saves time instead of installing from scratch but i know they are not perfect :)
<smoser> erick3k, well, ubuntu would rather you use those when you can also
<smoser> it gives ubuntu users a standard installation across multiple providers
<smoser> so i'd really like to help
<smoser> (especially if the datasource identity stuff regressed this)
<dn`> is there any way to fix the installer "setting up the clock" - "getting the time from a network time server…" bug? it kinda always hangs there for ages.. - like some magic preseed value to make it go away?
<drab> d-i clock-setup/utc boolean true
<drab> d-i time/zone string US/Pacific
<drab> d-i clock-setup/ntp boolean true
<drab> I have that in my preseed
<drab> never had a issue hanging around time
<dn`> https://bugs.launchpad.net/ubuntu/+source/debian-installer/+bug/1558166
<ubottu> Launchpad bug 1558166 in debian-installer (Ubuntu) "Xenial installer hangs for a long time with "Setting up the clock" message on IPv6-only systems" [Undecided,Confirmed]
<dn`> I think thatâs the bug that Iâm facing
<drab> strange, I do about half a dozen installs of xenial a day (pxe+preseed) and never had that issue
<drab> but I'm not ipv6 only
<drab> in fact I disable ipv6 right after via ansible because I don't use it and have just had problems with it
<dn`> it's random - I have it on some but not on others..
<drab> same network/same dhcp settings?
<drab> dn`: try to pass ipv6.disable=1 to boot cmdline?
<drab> mmmh
<drab> dd if=/dev/zero bs=4M count=100 | nc -v lxc-srv1 2222
<drab> running from another lxc container on the same host
<drab> I get 117MB/s, ie Gigabit
<drab> but if I use iperf between the same two containers I get 21Gbit
<drab> even between container and host I get about same speeds with iperf
<sarnold> /dev/zero may not be the fastest way to generate zeros
<sarnold> try dd if=/dev/zero of=/dev/null or something
<sarnold> see what speeds you get with that
<drab> but as soon as I introduce /dev/zero things go down to 1Gbit, how can dev/zero be so slow?
<drab> uhm ok
<drab> it seemed unreal that dev/zero would be slow...
<drab> testing that, thanks, good test
<sarnold> page faults aren't the fastest things in the world
<sarnold> zeroing pages isn't the fastest thing in the world
<drab> ok, dd to dev null is 6.8Gbps
<sarnold> if your application zeros a page and just calls write() on that page repeatedly, it'll probably go way quicker
<drab> which is still not 21, but much faster than dd + netcat
<sarnold> dd + nc is going from kernel -> user, user -> pipe, pipe -> nc, nc -> kernel, kernel -> whatever's listening in the lxc-serv1 container...
<drab> true, it's more complex than that for sure, but 6Gbit seems a heck of a loss for nc and some pipes, even if doubled, but then I don't know what I'm talking about really :P
<sarnold> dd from zero to null is just two copies, kernel -> dd, dd -> kernel
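The extra-copies effect sarnold describes can be eyeballed with a rough local comparison (byte counts shrunk so it finishes quickly; absolute numbers will vary a lot by machine, and this omits the network hop entirely):

```shell
# Two copies only: kernel -> dd -> kernel.
dd if=/dev/zero of=/dev/null bs=1M count=256 2> /tmp/dd_plain.log
# Extra user-space hops through a pipe: kernel -> dd -> pipe -> cat -> kernel.
dd if=/dev/zero bs=1M count=256 2> /tmp/dd_pipe.log | cat > /dev/null
# dd reports throughput on stderr; compare the two.
grep 'records out' /tmp/dd_plain.log /tmp/dd_pipe.log
```

The piped variant is typically measurably slower, for the same reason dd+nc trails iperf: iperf writes the same user-space buffer to the socket repeatedly, with no pipe in between.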
<drab> I don't know much at that level of depth kernel wise so it may very well be reasonable
<drab> I wanna try kvm inside lxc and trying to get some baseline
<drab> the idea of a bridge on top of a bridge is kinda scary, but again maybe it's just ignorance
<sarnold> iirc the linux kernel just smoooshes all the connected bridges together into one
<dn`> while installing ubuntu on iscsi (root device) - I run into a kernel error(?) https://gist.github.com/anonymous/20af414db286d8893257a588f226557d - anyone got a tip? - I'm using http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64 as installer
<nacc> dn`: hrm, is it reproducible?
<nacc> dn`: seems like a networking issue for the iscsi connection
<dn`> nacc: thatâs the fun part - kinda. the same machine worked on an other ISCSI device - both are the same brand (synologyâ¦); I would bet that it works without authentication - and that itâs a combination of synology doing something stupid
<dn`> and something wrong
<dn`> let me try without auth (will take a moment) â but before I manual tried it with iscsiadmin - I could login on the target without auth, but not with auth
<nacc> dn`: did you get a similar backtrace with iscsiadmin?
<dn`> nacc: I didnât check syslog at that time - but I think I could retry it and check (currently trying without auth/chap)
<dn`> oki, also getting the same error without chap/password - but giving it one more try
<nacc> dn`: sure, just wondering
<dn`> nacc: itâs just kinda odd; I used another Synology for the exact same installtion (pxe, preseed) and both Synologies have the same version running;
<dn`> oki, same error twice (without auth)
<dn`> thatâs annoying
<dn`> fun is also, after the installer complaining it canât login - if I want to retry, I get âYou entered an empty password, which is not allowed, Please choose a non-empty passwordâ. but itâs an endless loop with the same error ;-) without any chance of changing it
<nacc> dn`: so, just curious (i'm working on curtin iscsi support right now ) -- what's an example target name in your setup?
<dn`> iqn.2017-01.com.xxxx:srv-n11-0001
<nacc> dn`: thanks :)
<dn`> the fun thing is - it worked before
<nacc> dn`: that conforms to the spec, wast just wondering (we're reading bunches of specs and fixing our code to match right now)
<dn`> The most confusing part for me is that the exact same configuration and names - beside 0003 instead of 0001 ;-) works with another pair of machine <> nas
<dn`> the other version works with chap/auth or without - all works fine
<dn`> âversionâ == machine
<ThiagoCMC> Hey guys, I'm trying to deploy OpenStack Ocata on Ubuntu 16.04, with Cloud Archive, also, with OVN.
<ThiagoCMC> I'm trying to follow this: https://docs.openstack.org/developer/networking-ovn/install.html
<ThiagoCMC> however, looks like that the Ubuntu's ovn-common pakcage is missing a binary!
<ThiagoCMC> ovn-ctl: command not found
<tomreyn> ThiagoCMC: http://packages.ubuntu.com/search?searchon=contents&keywords=ovn-ctl&mode=&suite=xenial&arch=any
<tomreyn> not everything is always in PATH
<ThiagoCMC> tomreyn, thank you!
<Garogat> can anyone help me with high availability web clustering? i have at least two (more are coming) nginxs with php and mysql and i need to sync my files now. what should i use, because the servers are not in the same local network
<sarnold> Garogat: you could rsync all your servers from a 'golden' server somewhere; or you could use git
<Garogat> are there any common issues with rsync i have to be aware of? (i read about problems with deleted files coming back etc.) git sounds weird to me, because i would have to save my customers' files to git.
<sarnold> rsync's changes aren't made in any sort of atomic way
<sarnold> for ubuntu archive mirrors, this is worked around by copying all the data files before copying the metadata files
<sarnold> but this might be complicated to reproduce for arbitrary website files
<sarnold> depending upon the sizes of changes, how long it takes to transfer, how many requests your servers get, etc. it might be best to rsync to a target directory and do a directory rename once the transfer is finished
<sarnold> git will have similar problems but much shorter window for trouble, if you use branches to manage the changes
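The "rsync to a target directory, then rename" idea can be made near-atomic with a symlink flip. A self-contained sketch of the swap itself (all paths local and invented; in practice the staging directory would be filled by something like `rsync -a --delete golden:/srv/site/ /srv/site.new/`, and the webserver's docroot would point at the `current` symlink):

```shell
# Stage the new tree (stands in for the rsync from the golden server).
mkdir -p /tmp/deploy/site.new
echo "v2" > /tmp/deploy/site.new/index.html
# Atomic-enough switch: a reader following /tmp/deploy/current only ever
# sees the old tree or the complete new one, never a half-copied mix.
ln -sfn /tmp/deploy/site.new /tmp/deploy/current
cat /tmp/deploy/current/index.html
```

Keeping the previous tree around also gives an instant rollback: re-point the symlink at the old directory.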
<Garogat> this is semi-professional anyway, given the different locations with 100mbit up/down, so do you think i could have a system where node 1 rsyncs to 2, and if node 1 fails i run node 2 until 1 is back and has all the data back?
<sarnold> Garogat: it depends upon the application you're serving on the thing
<Garogat> im having: ddns-service, multiple cms (wordpress with assets) and so on
<Garogat> sarnold: is GlusterFS worth having a look at?
<tomreyn> Garogat: not packaged in ubuntu AFAIK, but totally worth a try: https://cernvm.cern.ch/portal/filesystem
<sarnold> Garogat: maybe. I had the impression from glusterfs sources that the object storage capabilities were alright but I didn't care for the filesystem part of glusterfs. Maybe that's been improved in the meantime..
<sarnold> Garogat: but you can't just add that to an application that wasn't planning on being clustered from the start
<Garogat> that's so much more complicated than i thought. i also need to find a way to sync php sessions?!
<erick3k> does anyone else here works with cloud-init?
<nacc> erick3k: there is a channel for it as well
<nacc> erick3k: #cloud-init
<erick3k> ty nacc
<nacc> erick3k: np
<sarnold> Garogat: depending upon the application's goals, yes; it's sometimes common to have haproxy or whatever send all connections from a given host back to the same backend webserver to try to avoid needing to share session state in a cookie or in the database
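The "same client, same backend" pattern sarnold mentions can be done in haproxy with source-IP hashing. A minimal sketch (backend name, server names, and the RFC 5737 example addresses are all invented):

```
# Hypothetical haproxy backend: hash the client IP so a given visitor
# keeps landing on the same web node, sidestepping shared PHP session
# storage (at the cost of uneven balancing behind large NATs).
backend web_nodes
    balance source
    server node1 192.0.2.11:80 check
    server node2 192.0.2.12:80 check
```

Cookie-based stickiness (`cookie ... insert` plus a `cookie` token per server) is the usual alternative when clients sit behind shared proxies.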
<nacc> powersj: re: LP: #1679357, it's a metapackage
<ubottu> Launchpad bug 1679357 in php-defaults (Ubuntu) "Missing dep8 tests" [Wishlist,New] https://launchpad.net/bugs/1679357
<powersj> nacc: rats, thought I checked for that
<powersj> nacc: thanks
<nacc> powersj: you can check to be sure, but i think it's contentless
 * nacc is double checking too
<nacc> ok, there are a few binaries, but pretty few -- i guess we could write tests that they do work when the packages are installed, so i'll leave it
<powersj> yeah... ok! thx for checking
<cliluw> I have some code that runs as a service. I want to let this service SSH into some of my other machines and run commands. What's the most secure way to go about this?
<sarnold> cliluw: ssh-keygen a key for the service to use; distribute the public portion in ~/.ssh/authorized_keys as needed
<sarnold> cliluw: decide if you want the key to be used by whoever has the key alone, so the service doesn't require any unlocking or any agent
<sarnold> cliluw: .. or if the key shouldn't be useful on its own, and must be unlocked by an agent for use. then you'd want to make sure an ssh-agent is running, set up the environment variables correctly, and then figure out how you'll unlock it every reboot or key expiry or whatever.
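The first (agent-less) option boils down to generating an unattended key for the service. A sketch, with the file path and comment string invented for illustration:

```shell
# Fresh passwordless ed25519 key for the service (path is an example).
rm -f /tmp/service_key /tmp/service_key.pub
ssh-keygen -q -t ed25519 -N "" -C "deploy-service" -f /tmp/service_key
# The public half is what gets appended, one line per key, to
# ~/.ssh/authorized_keys on each target machine:
cat /tmp/service_key.pub
```

The service then connects with `ssh -i /tmp/service_key user@target ...`; protect the private half with tight permissions (0600) since anyone who can read it can use it.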
<cliluw> sarnold: Ok, thank you.
<blackflow> cliluw: this code as a service, is a public service?
<cliluw> blackflow: No, not a public service. Just a standard systemd service.
<blackflow> cliluw: and what do you want it to do over ssh to other machines?
<cliluw> blackflow: Various things. I want it to modify files on the remote machine and run commands to set them up.
<blackflow> cliluw: well, giving unbound ssh access from one machine to another is not quite th emost secure thing to do. if that machine is compromised, all the other machines are wide open to compromise as well.
<cliluw> blackflow: That's true. What do you suggest?
<blackflow> cliluw: depending on what exactly you want to do, it may be wise to run something like SaltStack's reactor.
<blackflow> or in other words, have those other machines poll for a "command file" (that's not just a bash script or something like that) from which they can parse what to do.
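A middle ground between unrestricted ssh and a polling reactor is an option-restricted key: the authorized_keys format lets the target pin a key to a single forced command. A sketch (the `/usr/local/bin/apply-update` script is hypothetical, and `AAAA...` stands in for the real base64 key material):

```
# ~/.ssh/authorized_keys on the target machine: no matter what the client
# asks to run, only the forced command executes, and forwarding/pty
# allocation are disabled.
command="/usr/local/bin/apply-update",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... deploy-service
```

A compromised client can then still trigger that one operation, but cannot open arbitrary shells on the fleet.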
#ubuntu-server 2017-04-04
<Mead> I am reading through this guide to set up passthrough for guest OSes: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Isolating_the_GPU , when it says "You can then add those vendor-device ID pairs to the default parameters passed to vfio-pci whenever it is inserted into the kernel." is this implying I should create a file or add it to the GRUB kernel config?
<Mead> I am using ubuntu-server, that just happens to be a recommended guide
<Logos01> Howdy, folks. I'm hoping someone might be able to point me in the right direction for this. I have an Ubuntu 16.04 machine that I just upgraded from 14.04.  It is a KVM/libvirt hypervisor, serving a manually configured (outside of libvirt, that is) routed network to its VMs. This much is fine; my VMs get address, and successfully are able to reach and be reached from the internet.
<Logos01> The challenge is that the VMs can no longer initiate any traffic to my intranet machines.
<Logos01> This means that while my router and hypervisor can successfully communicate bidirectionally on port 22/80/443/etc., none of my other physical systems can.  My workstation/laptop (this is my home network) can successfully determine via netcat that ports are open on the VMs, but it cannot receive traffic from those ports.
<Logos01> ( examples: http://lpaste.net/8120205894421053440 )    ... any suggestions on where I might look to determine what has become misconfigured as a result of the upgrade on the hypervisor would be appreciated.
<Logos01> This is confusing to me because the VMs can all reach one another; and they can reach the router. It's just everything *ELSE* on the physical network they can't reach.
<drab> Logos01: are you using a firewall or doing something else with that bridge?
<drab> Logos01: also have these been rebooted etc after the upgrade and got on a new kernel?
<Logos01> drab: Well, I *do* have an haproxy instance acting as a loadbalancer for ports 80/443 from the outside world to my machines for the openconnect daemon and webserver stack; but I also have an independent VM that's running a Katello instance to act as management for the machines.  Mostly it's a lab for me to practice/sandbox/experiment my sysadmin-skills on my own recognizance.
<Logos01> drab: Yes. This started on Friday and I've rebooted a couple of times.
<Logos01> I updated on Friday, it's persisted over the weekend. Granted I didn't really investigate it on Saturday.
<Logos01> I've pretty much narrowed it down to the VMs not getting routing information from the VM-net gateway onwards (mtr is a lovely thing)
<drab> ok, if you get on the hypervisor and tcpdump, do you see the netcat traffic from the www-node1 going to the laptop?
<drab> ok
<drab> are the vms on a diff subnet/network than the physical stuff on your lan?
<drab> but it sounds like yuo got it already :)
<Logos01> Yeah, the VMs are all on 192.168.121.0/24 ; the physical systems are all on 192.168.1.0/24
<drab> ok
<Logos01> My router (192.168.1.1) has a routing table entry to the hypervisor -- 192.168.1.3.
<Logos01> http://lpaste.net/354253  <-- mtr output example
<drab> you mean an entry to direct .121/24 to the HV?
<sarnold> Logos01: I suggest trying 'ip route get ....' commands on all the different computers (real and virtual) with IPs from all the real and virtual computers..
<drab> what's ip route ls on the VMs?
<drab> yeah, or that, try the get
<Logos01> sarnold: That all looks correct.
<Logos01> http://lpaste.net/354254
<Logos01> drab: And yes, the router has a static routing table entry using 192.168.1.3 (the hypervisor's physical address) as the gateway for 192.168.121.0/24
<drab> Logos01: if you tcpdump traffic on the bridge, do you see the replies on the br interface?
<drab> I'm guessing they are getting lost on the HV and not going back to destination
<drab> maybe something funny with asymmetric routing, maybe they are taking a diff path on the way back and getting dropped
<drab> I assume you tcpdump'ed on your laptop, yes?
<drab> and don't see that traffic coming back at all
<drab> I'm wondering if the laptop is sending traffic to the router, but it gets it back directly from the VM
<drab> doesn't recognize it and drops it
<Logos01> http://lpaste.net/354255  <-- not necessarily useful but
<Logos01> Hrm... interesting ... laptop1 is in fact showing icmp from katello
 * Logos01 tries adding the routing table entry on the laptop locally
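The entry being added here, using the subnets and gateway from the discussion above, sketched both as a one-off command and in a persistent ifupdown form (the interface name eth0 and the use of /etc/network/interfaces on the laptop are assumptions):

```
# One-off, as root on the laptop:
#   ip route add 192.168.121.0/24 via 192.168.1.3
# Persistent variant, if the laptop is configured via ifupdown:
iface eth0 inet dhcp
    post-up ip route add 192.168.121.0/24 via 192.168.1.3
```

Without this, replies from the VM subnet arrive from an address the laptop has no return route for except via the default gateway, which is exactly the asymmetric path drab suspected.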
<drab> :)
 * Logos01 facepalms
<Logos01> Why did I not need to do this before, I wonder ...
<Logos01> You know what? I may have had to and it's just been so long I don't remember it.
<drab> maybe you did and forgot? :)
<drab> yeah, I do that all the time, that's why I use ansible now :P
<drab> or whatever, just don't make changes by hand
<drab> been bitten by it far too many times
<Logos01> drab: ... my ansible setup is on my laptop and was what was inspiring me to work on this.
<Logos01> <_<
<drab> even if it doesn't work after an upgrade I see stuff failing and I know I have to change something
<Logos01> Because I couldn't ssh to the VMs.
<drab> lol :D
<drab> good inspiration
<Logos01> I mean it's only 17.04 and I'm finally migrating my physicals from 14.04 to 16.04. You can tell I am suuuper on it about latest-and-greatest.
<Logos01> Anyhow, I appreciate it.
<drab> latest and greatet is overrated :)
<drab> Logos01: btw, maybe there was a point in this all... :) any chance you can share libvirt setup? I'm trying to get started on KVM
<drab> Logos01: I have my own bridge and stuff, so I want none of the automagic
<drab> at least until I understand where the magic comes from
<Logos01> drab: Oh. I ripped out the libvirt networking component and am instead running my own manually initialized dnsmasq instance (it's not starting anymore but my VMs are all statically configured now anyhow)
<Logos01> Also, the upgrade to 16.04 overwrote my /etc/iptables/rules.v4 file so it's a mess until I rewrite it.
<Logos01> But...
<drab> k, care to share how to rip that out? I have a centralized dnsmasq, don't want any additional dnsmasq or bridge set up
<drab> just use the bridge I tell it to
<Logos01> You just set the default network it defines to not autostart
<Logos01> (And then never start it)
<sarnold> drab: depending upon how little magic you want you may prefer a different tool entirely; libvirt after all is just a wrapper around qemu and iptables and so on, glued together with an xml parser
<drab> sarnold: I'd love that, but I couldn't find much of a documentation on that and I'm already quite behind to figure it all out
<drab> so trying to find a compromise between magic and starting from scratch
<Logos01> sarnold: Lots of things work with libvirt as the backend for their hypervisor management though
<Logos01> Like in my case I was actually using Katello to spin-up / spin-down VMs
<Logos01> http://lpaste.net/354256  <-- current state of my hypervisor's iptables. (I'm not thrilled with this.)
<sarnold> Logos01: that's very true.
<Logos01> It's fugly and I know it.
<drab> but before I need to get a container setup for kvm
<Logos01> Used to be a loooot prettier.
<drab> so I can experiment without trashing the host
<drab> Logos01: also you don't happen to have tried libvirt with lxc, do you?
<Logos01> I was playing around with the notion a while back.
<Logos01> But I never went anywhere with it.
<Logos01> Honestly I'm starting to look at rkt right now -- especially with the asshattery that Docker is pulling now.
<Logos01> (Monthly releases with each new monthly release marking the end-of-life of the previous month.)
<Logos01> Of the docker engine itself, that is.  (Oh, they'll have LTS too. Quarterly instead of monthly.)
<Logos01> drab: But yeah, once you *HAVE* a bridge device manually created and configured to allow traffic in/out via iptables forwarding rules, you can just define libvirt domains (VMs) to use that bridge-device for their networking.
<sarnold> drab: a few similar things are listed here http://www.linux-kvm.org/page/Management_Tools
<Logos01> I just added mine to /etc/network/config
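For drab's "just use the bridge I tell it to" setup, once the bridge exists on the host, the guest's libvirt domain XML only needs a stanza like this (the bridge name br0 is an assumption; virtio is the usual model for Linux guests):

```
<!-- Guest NIC attached to a pre-existing host bridge, bypassing
     libvirt's default NAT network entirely. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

Combined with `virsh net-autostart --disable default` (and never starting the default network), libvirt adds no dnsmasq or iptables magic of its own.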
<drab> sarnold: iirc you have a zfs nas, don't you? do you happen to have looked into sanoid/znapzend for backups?
<Logos01> ZFS ... :D
<sarnold> drab: I've only got the one zfs system so far, so I haven't looked at sending snapshots anywhere yet
<drab> k
<Logos01> http://lpaste.net/354257
<drab> I've narrowed it down to those two solutions, need to test them and figure out how to work with ZVOLs since I'll need those for KVM
<Logos01> drab: I've never actually heard of either. I should really start doing zfs send/recv for my snapshots
<drab> whoa, r00t? crazy man :)
<Logos01> sole filesystem
<Logos01> Was that way back in 12.04 too
<drab> O_O
<Logos01> Yeah, the laptop's made a few migrations with me.  I even once used zfs send/recv to migrate the OS from one laptop to another.
<drab> so how did you put / on zfs?
<Logos01> zfs-native PPA
<Logos01> And, at the time, zfs-grub PPA
<drab> u blogged about it? or any links?
<sarnold> heh those days it felt even hairier than today
<Logos01> drab: I basically followed the howto/walkthroughs for this from the zfs-native ppa peeps
<drab> k
<drab> will google that out, thank you
<Logos01> https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer
<drab> I ended up putting two drives in mdadm for root
<sarnold> drab: stick to rlaager's guide for today's stuff
<drab> and the rest on zfs
<Logos01> sarnold: Heheh, hard to find now though
<Logos01> sarnold: He's actually merged it into the page I linked to
<Logos01> Well. https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS
<drab> yeah then we're looking at the same thing
<drab> not for me then, seen that before
<Logos01> ... I have to figure out how to get zfSnap to honor the com.sun:auto-snapshot flag
<Logos01> drab: I have historically had a habit of moving from one company to the next once every year to year-and-a-half. I pretty much always wind up using zfs as sole filesystem on my personal linux machines when doing so
<Logos01> So ... I've done that process a few times.
<Logos01> Sadly, on my *current* work laptop, they gave me an encrypted disk drive so I can't reinstall the OS. :-(
<faekjarz> Hey there! I'm running 16.04 server. I'd like to run several commands on shutdown/reboot. I'm looking for something like rc.local, but for the shutdown process. In which file would i put those commands?
<lynorian> faekjarz, do you know about @reboot in cron
<lynorian> oops I do not think I read your question properly
<lynorian> so you want when you shutdown run these commands
<faekjarz> aye
<faekjarz> i think i'll see what i can do with this — https://www.freedesktop.org/software/systemd/man/systemd-halt.service.html
<Logos01> faekjarz: Best I can think of in your case would be to start up a dummy service that has a series of ExecPost steps
<Logos01> And never directly interact with the service otherwise; shutting down the host would cause it to shut down that dummy service and thus execute those commands as part of the shutdown process.
<Logos01> Err, they'd be ExecStopPost commands
<Logos01> Just have the actual startup command be something tiny and silly like a simple script with a "while true ; do sleep 10 ; done" command inside it.
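Logos01's dummy-service idea can be done without the sleep loop: a `Type=oneshot` unit with `RemainAfterExit=yes` stays "active" after a trivial start command, so its stop action runs at shutdown. A sketch (unit name and script path are invented):

```
# /etc/systemd/system/at-shutdown.service  (hypothetical name/path)
[Unit]
Description=Run commands when the system goes down

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# Runs when the unit is stopped, i.e. during shutdown/reboot:
ExecStop=/usr/local/sbin/my-shutdown-tasks.sh

[Install]
WantedBy=multi-user.target
```

Enable it once (`systemctl enable --now at-shutdown.service`); note the script runs while the system is tearing down, so anything it needs (network, mounts) should be ordered via `After=`/`Requires=` dependencies.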
<faekjarz> Logos01: interesting, thanks
<Logos01> You'd want to make them part of the sysinit.target.wants, I *think*
<Logos01> Or actually hey -- there's a shutdown.target.wants
<Logos01> haha -- forgot all about this.
<Logos01> https://unix.stackexchange.com/questions/39226/how-to-run-a-script-with-systemd-right-before-shutdown/39604#39604
<Logos01> There ya go
<faekjarz> yea, looks about right, thanks Logos01! Once again, i can avoid actually understanding systemd ;D
<faekjarz> (oh, there's a #systemd channel …of course)
<cpaelzer> jamespage: did the tempest run return good results on bug 1672367 to mark it v-done?
<ubottu> bug 1672367 in libvirt (Ubuntu) "libvirt uses password-secret on old style drive_add syntax" [Undecided,New] https://launchpad.net/bugs/1672367
<Logos01> faekjarz: I approve of that sentiment.
<jamespage> cpaelzer: lemme check
<jamespage> cpaelzer: I swear I triggered the test run but apparently not - have done so now
<FilipNortic_> I'm getting an error while trying to restart sshd. (Tried /etc/init.d/ssh restart and the systemd version)
<FilipNortic_> ssh.service failed because the control process exited with error code. See "systemctl status ssh.service" and "journalctl -xe" for details.
<blackflow> FilipNortic_: and did you do as instructed?
<FilipNortic_> yes
<FilipNortic_> was no real info in either case
<FilipNortic_> well, status says: failed to start. Unit entered failed state
<blackflow> FilipNortic_: then we can't help you :)   but just in case, please do pastebin the logs
<blackflow> FilipNortic_: in the status, there's excerpt from the log below, anything in there?
<FilipNortic_> blackflow: http://lpaste.net/354264
<blackflow> FilipNortic_: there should be more, please check with journalctl -xe, or journalctl -p err or journalctl -u ssh.service -n 40
<FilipNortic_> ok i'll try to extract the relevant parts
<FilipNortic_> error: Bind to port 22 on 0.0.0.0 failed: Address already in use. and fatal: safely_chroot: stat("/home/ftpuser"): No such file or directory
<blackflow> there you go :)  So first, is there an ssh daemon already running? Is this in a container bound to host's IP?
<FilipNortic_> can there be multiple ssh daemons
<blackflow> yes, but each bound to its own port
<blackflow> (though not with default set up in Ubuntu, you'd have to run additional ssh daemons either in a container, or manually / with a custom unit file)
<FilipNortic_> root     23097     1  0 Apr03 ?        00:00:01 /usr/sbin/sshd
<FilipNortic_> this is the only sshd process i find
<blackflow> FilipNortic_: ss -4lp | grep ssh
<blackflow> this will give you port used and pid of the process named ssh
<blackflow> if you have that, then you can't run additional daemons on the same port
<blackflow> but.... sounds to me like you're doing something wrong here. What exactly do you wish to achieve?
<FilipNortic_> tcp    LISTEN     0      128     *:ssh                   *:*                     users:(("sshd",pid=23097,fd=3))
<FilipNortic_> we were trying to configure sftp
<blackflow> right, so configure it within the existing ssh daemon, you don't need to run an additional (and how do you even run it btw)
<FilipNortic_> there wasn't supposed to be an additional one. the first time we restarted sshd it worked fine and ftp worked. then we tried to give access to the group instead, and upon that restart we got the bind error
<FilipNortic_> is it trying to start itself twice or something like that?
<blackflow> FilipNortic_: can you pastebin your sshd_config file?
<FilipNortic_> http://lpaste.net/5543173436047622144
<FilipNortic_> can't see anything weird in it
<blackflow> FilipNortic_: yeah, looks okay, except I dont think you need any options for internal-sftp.
<blackflow> FilipNortic_: also, this setup is very unsafe, you allow password auth and use default port 22. Just a matter of time until a bot breaks in.
<FilipNortic_> yeah I know that much. so far they all try as root but i will change it just need 2 resolve this first
<FilipNortic_> I still have no clue what is wrong
<blackflow> FilipNortic_: you can't log in as root, you have PermitRootLogin no
<FilipNortic_> yeah but i still see bots trying
<FilipNortic_> was kind of my point (though a sort of moot one)
<blackflow> ah you mean the bots try as root.... yeah.... sorry, my mind was in the context of your sftp group users
<FilipNortic_> but what i really wish to know is why i can't restart sshd; if there's another process blocking it, why can't I see it
<lordievader> To enable sftp on my host I needed to add 'Subsystem sftp /usr/lib64/misc/sftp-server'.
<lordievader> That path might be a bit different on Ubuntu.
<blackflow> internal-sftp is needed with that group match stanza to chroot sftp users, otherwise they could roam freely on the system
<blackflow> and by forcing the command it blocks regular ssh login, allows only sftp
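A minimal sketch of the setup blackflow is describing (the group name "sftponly" is a placeholder; note the fatal error pasted earlier — the ChrootDirectory target must actually exist, be owned by root, and not be group- or world-writable, or sshd refuses to start):

```
# /etc/ssh/sshd_config (sketch)
Subsystem sftp internal-sftp

Match Group sftponly
    # %h = the user's home directory; every path component must exist,
    # be root-owned, and not be group/world-writable, otherwise sshd
    # fails with a stat()/chroot error like the one above
    ChrootDirectory %h
    # block regular shell logins for this group, allow only sftp
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Because the chroot itself must be root-owned, users in such setups usually get a writable subdirectory (e.g. `%h/upload`) rather than write access to the chroot root.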
<blackflow> as for why it's behaving like FilipNortic_ says, I don't know. it's not normal behavior for ssh
<FilipNortic_> any idea how to get back to a normal state... should i try and kill the sshd process
<blackflow> FilipNortic_: first, when you "systemctl restart ssh.service", does it log an error about binding to port 22 again?
<FilipNortic_>  error: Bind to port 22 on 0.0.0.0 failed: Address already in use.
<FilipNortic_> sshd[30276]: error: Bind to port 22 on :: failed: Address already in use.
<FilipNortic_> yeah
<blackflow> weird.
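For readers following along: a sketch of how to see exactly which pid holds port 22 (i.e. what the "Address already in use" error is colliding with). Root is needed to see pid info on ports below 1024; the netstat fallback is for systems without iproute2.

```shell
#!/bin/sh
# Show the listener on tcp port 22; the pid in the output is the process
# the new sshd's bind() is colliding with.
if command -v ss >/dev/null 2>&1; then
    ss -tlnp '( sport = :22 )'
else
    # older systems may only ship the legacy net-tools
    netstat -tlnp 2>/dev/null | grep ':22 '
fi
echo "checked port 22"
```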
<jamespage> zul: urllib3 and requests are still wedged in zesty-proposed - something you have time for?
<jamespage> cpaelzer_: testing OK - marked bug 1672367 as requested
<ubottu> bug 1672367 in libvirt (Ubuntu) "libvirt uses password-secret on old style drive_add syntax" [Undecided,New] https://launchpad.net/bugs/1672367
<cpaelzer_> thank you so much jamespage!
<cpaelzer_> the next bunch of SRUs are waiting, so this should help to clear the queue
<jamespage> cpaelzer_: yw
<cpaelzer_> well waiting is too much, I need to code them up first :-/
<jamespage> ah yes the relentless queue of SRU's
<cpaelzer_> if you are not having them you either own "hello" or your package isn't used a lot :-)
<cpaelzer_> rbasak: given my frequent typos, could I ask you to re-release uvt as uvtoool
<cpaelzer_> it would be nicer and auto-supports triple-o that way right :-)
<rbasak> :-)
<FilipNortic_> when I run: netstat -tapn | grep ssh
<FilipNortic_> i get: tcp6       0      0 :::22                   :::*                    LISTEN      23097/sshd and tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      23097/sshd
<FilipNortic_> this is one for ipv4 and one for ipv6 ?
<hateball> it's the same pid as you can see
<FilipNortic_> ahh right
<hateball> also I think 'ss' is preferred to netstat these days
<FilipNortic_> yeah i ran that first
<hateball> "ss -tap" is nice
<lordievader> hateball: It is.
<FilipNortic_> kind of hoped it missed something
<hateball> needs sudo to show which pid uses ports <1024 iirc
<lordievader> Just like the ifconfig -> ip
<FilipNortic_> is ip the new one?
<lordievader> FilipNortic_: Yes.
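The net-tools to iproute2 mapping hateball and lordievader are pointing at, sketched as a script that prefers the modern tool and falls back to the legacy one:

```shell
#!/bin/sh
# Prefer the iproute2 tools (ss, ip) over the legacy net-tools
# (netstat, ifconfig), which are no longer actively developed.
if command -v ss >/dev/null 2>&1; then
    SOCKETS="ss -tlnp"        # -t tcp, -l listening, -n numeric, -p pids
else
    SOCKETS="netstat -tlnp"   # legacy equivalent
fi
if command -v ip >/dev/null 2>&1; then
    ADDRS="ip -br addr"       # brief per-interface address listing
else
    ADDRS="ifconfig -a"
fi
echo "sockets: $SOCKETS"
echo "addresses: $ADDRS"
```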
<lordievader> FilipNortic_: Do you still have the problem of starting ssh? If you are not connected via ssh you could kill that remaining process and start the service again.
<FilipNortic_> well ssh is my only method of connection right now
<FilipNortic_> but does killing the sshd service stop the established connections
<FilipNortic_> any other recommendations? change port and see if i can start another daemon there?
<lordievader> You could do that as a detour to restart ssh on the original port.
<lordievader> Though, I am not sure how ssh behaves with running multiple daemons.
<ronator> not sure if this suits but "sshd has had support for multiple ListenAddress directives for a good while"
<FilipNortic_> so it might still try and restart the old one
<lordievader> FilipNortic_: Is there no other means of access?
<FilipNortic_> there should be a vnc point set up by the server provider but it comes up blank when i try it
<FilipNortic_> guess i have to call their support
<lordievader> Or you run the commands in a screen/tmux and hope for the best :P (bad advice, I know)
<ronator> is it safe to remove package "landscape-common" if I don't plan to use landscape?
<lordievader> Check the reverse dependencies.
<zul> fnordahl: probably start doing SRU processing again later this week
<lordievader> ronator: If nothing (important) requires it, I'd say it is safe to remove.
<zul> jamespage: sure...
<ronator> lordievader: thats exactly where my question was aiming at :)
<fnordahl> zul: that would be great. just an update of the package to be based on horizon-9.1.2 would suffice as the necessary patches have been upstreamed
<lordievader> ronator: apt-cache can tell you the reverse dependencies.
<ronator> thx lemme check that
<ronator> lordievader: like so?  $ apt-cache rdepends landscape-common
<ronator> shows only landscape-common and -client so should be fine thx
<lordievader> ronator: Indeed, apt will also show you if it has to remove more due to a dependency.
<ronator> lordievader: yes I know. We tested landscape for a short period of time, I removed it and now I was unsure if landscape-common was always there. removing didn't raise any dependencies, but you never know, so I asked and learned something new :)
<zul> jamespage: my old nemesis dogtag-pki
<Aison> hello
<Aison> I have 4 network devices
<Aison> enp5s0, enp6s0, rename4, rename5
<Aison> why the hell are two of them called rename*
<rbasak> Sounds like they got stuck halfway through the rename, possibly due to a conflict.
<rbasak> Do you have four NICs in reality? And can you reproduce this eg. on a live USB boot?
<rbasak> Also, which release?
<Aison> rbasak, no, there is a dual 82571EB and a dual 82574L controller
<Aison> one is onboard, one is pcie
<nacc> iirc, dmesg should have some indication of what is going on (or syslog)
<rbasak> Perhaps it's trying to rename each of the two NICs on each controller to the same enpXs0 name?
<Aison> this is my dmesg: http://paste.ubuntu.com/24313876/
<Aison> I try to find something :)
<nacc> [    4.009571] e1000e 0000:02:00.0 rename4: renamed from eth2
<nacc> [    4.022635] e1000e 0000:02:00.1 rename5: renamed from eth3
<rbasak> Which release?
<nacc> looks to be 16.04 with 16.04.1 kernel
<Aison> yes
<nacc> that rename is happening much earlier than the other
<nacc> you could of course, if not concerned with hotplug, use net.ifnames=0 (iirc)
<rbasak> I wonder if this is a bug. If so, it'd be nice to fix it properly.
<nacc> i think it would require some systemd bugging -- may be worth filing regardless
<Aison> btw: the pcie LAN card is the new device
<Aison> before I just used the on board
<Aison> but the same card was used in another 16.04 server before without any problems
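nacc's net.ifnames=0 workaround as a sketch: it disables predictable interface naming entirely, so devices fall back to kernel-probe-order ethN names (hence the hotplug caveat, since probe order can change).

```
# /etc/default/grub  (sketch; run "sudo update-grub" and reboot to apply)
GRUB_CMDLINE_LINUX="net.ifnames=0"
```

An alternative that keeps stable naming is a custom systemd .link file matching each NIC by MAC address and assigning a fixed name, which avoids the rename race without giving up determinism.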
<Henster> evening, i have like 10 drives lying around and heard of zsf , silly question, do i need to first format them all to the same filesystem format? they are mostly ntfs
<nacc> Henster: you mean zfs?
<Henster> yes sorry
<drab> Henster: nah, it wont' care, zfs utils will just take care of that
<nacc> Henster: aiui, what drab said
<nacc> Henster: you just need to tell zfs what disks to use
<Henster> wow ok cool
<Henster> and is it easy just to add extra drives ?
<drab> yes and no
<drab> yes as in it's easy, no as in it probably doesn't work as you think it does
<Henster> do all the drives have to be the same size ?
<drab> Henster: please read through this at the very least: https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
<drab> zfs is great, but also not particularly forgiving
<nacc> heh
<drab> it's closer to what linux used to be: friendly but chooses its friends wisely
<Henster> ok cool thanks was looking for more content ..
<Henster> is there a newer or better version than zfs now ?
<drab> https://wiki.ubuntu.com/ZFS
<drab> follow that to get going
<drab> read the other thing past the first chapter about Debian to understand more about the concepts
<drab> it's still the best walkthrough around
<drab> along with the other one I'm about to paste... sec
<drab> this is the best resource on zfs I've found, explaining the concepts in enough detail that you won't shoot yourself in the foot while not being overwhelming (and avoid cargo culting some of the many misunderstandings spread on the internet):
<Henster> cool
<drab> https://forums.freenas.org/index.php?threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/
<Henster> thank you so much , new toys for my server :)
<drab> there's a few more good docs I bookmarked, but don't want to overwhelm you, that should keep you busy for a while :)
<drab> as you probably heard for raid, raidz is not a backup, so backup your stuff!
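To make the answers to Henster concrete, a hypothetical sketch of pooling three of the spare drives; the device names and pool name are placeholders (check with lsblk first), and the echo keeps it a dry run — drop it to actually run zpool:

```shell
#!/bin/sh
# Sketch: pool three spare drives into a single-parity raidz1 vdev.
# zfs writes its own labels onto the disks, so their old NTFS
# formatting is irrelevant -- no pre-formatting needed.
POOL=tank                            # placeholder pool name
DISKS="/dev/sdb /dev/sdc /dev/sdd"   # placeholders: verify with lsblk!
echo "would run: zpool create $POOL raidz1 $DISKS"
# Caveats from the discussion above: mixed sizes work but the vdev is
# limited by its smallest disk, and a raidz vdev cannot be grown one
# disk at a time later (the "yes and no" about adding drives).
```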
<teward> anyone know of a clamav/clamd *alternative* that works fine with amavisd-new?  ClamAV eats over 780MB RAM while running on even an idle mail server, so it causes some... problems.
<sarnold> re "raid is no backup" https://twitter.com/leyrer/status/847816162557689857
<drab> lol, and the prize goes to the https://twitter.com/nuintari/status/848249592609202179
<drab> but I 'spose only for MP fans :D
<sarnold> hehehe
<dasjoe> Ancient machines
<teward> sarnold: ohai
<sarnold> gutenabend dasjoe :)
<sarnold> hallo teward :)
<dasjoe> Hi sarnold :)
<teward> sarnold: what do you know about clamav being a memory-consuming resource whore on servers and if there's any solution for it?  Or should I be bothering the server team to add a warning to the server guide about ClamAV taking up massive resource usage and have minimum reqs. of 2GB RAM or more to use it on the server
<teward> since you've got some security team insights :P
<teward> (clamav for mailservers == resource hog)
<patdk-wk> heh? clamav doesn't consume a lot of memory
<sarnold> teward: heh, 2gigs feels smallish today..
<teward> patdk-wk: well, running clamav ate 750MB of RAM on a VPS where i'm setting up a test mailserver with amavis+clamav
<teward> and it actually swapped so much I had to force-restart the VPS
<teward> so............
<patdk-wk> clamav did? or your av libs for clamav did?
<teward> i'll let you restate your question (E: Unclear what's being asked)
<patdk-wk> my clamav with a LOT of 3rd party libs added to it, is using 710megs of ram
<patdk-wk> is clamav using all that memory? or is your clamav-virus-definitions using it all
<teward> patdk-wk: stock ClamAV from the repos.  650MB RAM + the rest was swap.
<teward> patdk-wk: looked to be the clamav process on htop
<patdk-wk> what clamav process? clamavd?
<teward> i'd have to relaunch it to check.  I'm currently away from my SSH console, but will get back to you :)
<patdk-wk> odd though
<patdk-wk> mine is using 710megs exactly, no swap
<teward> unless it's a leaky version in Xenial
<patdk-wk> using clamav libs, securite, bofhland, foxhole, ...
<patdk-wk> but then the stock clamav libs are 250megs
<teward> well, i have a trial of Avast's solution for antivirus, giving that a test go, otherwise Postfix + DoveCot + Amavis + SPF + DKIM + DMARC all works heh
<patdk-wk> I use bitdefender also, but that is slow, cause it won't run in daemon mode and uses lots of ram also
<patdk-wk> but then, my mailservers have 30gigs of ram
<patdk-wk> clamav uses only a little ram, spamassassin uses a lot more
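One way to settle patdk-wk's question (is it clamd itself, the loaded signature databases, amavisd, or spamassassin holding the memory) is to sort processes by resident set size:

```shell
#!/bin/sh
# Top 10 processes by resident memory (RSS, in KiB), largest first.
# On a mail host this typically puts clamd/amavisd/spamd near the top,
# and shows which daemon is actually responsible for the swapping.
ps -eo rss,comm --sort=-rss | head -n 10
```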
<SineDeviance> hi all. i want to add a xubuntu environment to my server for use over NX. i am running 16.04 amd64
<SineDeviance> is xubuntu-desktop still the correct metapackage?
<patdk-wk> if that is what you want to use
<patdk-wk> you should probably ask xubuntu though
<SineDeviance> it is
<SineDeviance> both what i want to use, and the correct package :D
<teward> patdk-wk: spamassassin eats most of my RAM currently, on the box, next big user is Amavis but the problem is on a small email server (1GB RAM is low, yes), clamav's RAM usage is actually an issue.  Avast's solution seems to behave better in terms of resource usage
<blackflow> teward: clamd (note the d) eating up a lot of RAM?
<queeq> Has anyone got any problems with recent qemu update? My VMs on top of Xen are not starting anymore.
<queeq> libvirtd gives this error: invalid argument: could not find capabilities for arch=x86_64
<nacc> cpaelzer: --^
<teward> blackflow: yep.
<queeq> Is anyone here running VMs on Xen and have restarted a server after applying upgrades today?
<sarnold> queeq: please file a bug report against whatever it is that actually does your vms, whether that's qemu, libvirt, or xen. Of the three the most recently changed was five days ago, so it'd be best to be more specific than "today's updates" -- dpkg -l output of the affected packages, etc., would be helpful
<queeq> Thanks sarnold. I'm not sure it's a bug. I now tried downgrading qemu and it didn't help. Neither libvirt nor xen were upgraded recently
<queeq> Upgrade that I suspected caused the issue included the following...
<queeq> Upgrade: landscape-common:amd64 (16.03-0ubuntu2, 16.03-0ubuntu2.16.04.1), grub-common:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), makedev:amd64 (2.3.1-93ubuntu1, 2.3.1-93ubuntu2~ubuntu16.04.1), grub-xen-bin:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), qemu-system-x86:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10), grub2-common:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9),
<queeq> grub-pc:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), libapparmor1:amd64 (2.10.95-0ubuntu2.5, 2.10.95-0ubuntu2.6), grub-pc-bin:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), libapparmor-perl:amd64 (2.10.95-0ubuntu2.5, 2.10.95-0ubuntu2.6), qemu-utils:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10), apparmor:amd64 (2.10.95-0ubuntu2.5, 2.10.95-0ubuntu2.6), wget:amd64 (1.17.1-1ubuntu1.1, 1.17.1-1ubuntu1.2),
<queeq> grub-xen-host:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), qemu-block-extra:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10), qemu-system-common:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10)
<queeq> I don't think it could be caused by grub. I'm now trying to downgrade apparmor, but not sure it could have caused this either
<queeq> Nah, apparmor downgrade hasn't helped
<sarnold> pfew ;)
<sarnold> not a surprise
<sarnold> but still
<queeq> Don't know what else to try... Something went wrong. And I haven't found recent information on this error. There were some bugs with qemu capabilities caching back in 2015, but that's it...
<tyhicks> pfew times two
<queeq> libvirtd verbose logging doesn't give any additional clues either
<queeq> The only error is this: error : virCapabilitiesDomainDataLookupInternal:699 : invalid argument: could not find capabilities for arch=x86_64
<sarnold> queeq: skim this mail and see if it rings any bells https://lists.ubuntu.com/archives/ubuntu-devel/2016-September/039492.html
<queeq> Thanks, will do
<tyhicks> queeq: I'm guessing that `virsh cpu-models x86_64` returns an error?
<queeq> tyhicks: "this function is not supported by the connection driver: virConnectGetCPUModelNames"
<tyhicks> queeq: is a libvirtd process even running?
<queeq> Yes it is
<tyhicks> odd
<tyhicks> I'm no help here
<queeq> thanks anyway :)
<tyhicks> oh
<tyhicks> I guess virConnectGetCPUModelNames could be a qemu/kvm thing
<queeq> Maybe, but there's xen as a hypervisor, no kvm
<queeq> That's why I suspected qemu upgrade to be the cause
<queeq> sarnold: That mail hasn't rung any bells. It's mostly migration-related between major versions.
<queeq> In my case this was very minor upgrade without any migration. This setup has been working fine for a long time. Until today, lol :D
<compdoc> queeq, I have vms running on kvm, and installed the recent qemu updates, but havent rebooted yet. how do you manage the vms? I'm guessing its not virt-manager
<queeq> It is virt-manager usually
<queeq> But bridging is manual
<compdoc> why do you mention bridging?
<compdoc> Im rebooting my host. lets see what happens
<queeq> Because this is part of VM management :)
<compdoc> I define bridges in /etc/network/interfaces
<queeq> Me too, I turned off libvirt's networking because it conflicts with another bridge I have on the host
<compdoc> one of the guests in Windows Server 2008, which provides dns, dhcp, and is the domain controller. so until it finishes booting, I cant browse
<queeq> You seem to have more complex setup. I've only Linux guests
<compdoc> both guests are running. the other guest is ubuntu server running bacula
<queeq> So you had no problems
<compdoc> you should save the xml file for the guests, and search for references to x86_64, or whatever the error is
<queeq> There is a reference for it, but I think it's very standard file
<compdoc> Ive had to cut out sections in the past, that were supported on centos kvm, for example, but not in ubuntu's kvm
<compdoc> then just save and import the xml file
<queeq> Oh, I thought you're talking about qemu capabilities cache
<compdoc> I mean using virsh to save the xml definition, edit it, then import it back
<queeq> I think they're stored in xml anyway, /etc/libvirt/libxl/vmname.xml
<queeq> Also accessible via virsh edit vmname
<queeq> Haha, when I tried to edit it, it gives me the same error again
<queeq> compdoc: what arch do you have set in those xml files?
<compdoc> <type arch="x86_64" machine="pc-i440fx-trusty">hvm</type>
<Boulevard> Hey everyone. I asked in the Ubuntu channel but I figure this is worth a shot too. I have four disks in my PC. Two are a raid0 array for Windows, and the other two are just standard use for data and whatnot. I'd like to dualboot Windows 10 and Ubuntu (or others) safely, but I don't know how to properly install along the raid or install to one of the
<Boulevard> other disks and where to put the bootloader for the latter idea. I was urged to try asking here, but I'm looking for desktop use. anyone have suggestions? Thanks.
<compdoc> not sure how recent that is
<queeq> Straaaange, same arch as I have
<queeq> Could you show dpkg -l | grep qemu?
<compdoc> sorry, thats an old backup file. this is what I use now, for more modern chipset features:   <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>
<queeq> Boulevard: your BIOS would try to read MBR from single disk first, anyway, so the way to do it would be to install grub on the one you are booting from
<queeq> compdoc: arch is the same....
<queeq> I have <type arch='x86_64' machine='xenfv'>hvm</type>
<compdoc> https://pastebin.com/uN01w5nD
<Boulevard> So I could safely drop linux into the 300gb or so I cleaned up on one of my other disks and then drop the loader on my raid?
<Boulevard> I apologize, I haven't cut my teeth on these hardcore installs before
<compdoc> queeq, so youre booting a xen kernel? is it a standard ubuntu package?
<compdoc> the host, I mean
<queeq> Thanks compdoc, looks similar to mine, but I don't have qemu for archs like ppc or sparc. I use custom kernel.
<queeq> Boulevard: you can drop the loader on any disk, `update-grub` utility should be able to find both windows and linux installations
<queeq> You would then just need to point BIOS to the disk where grub is installed
<queeq> By the disk I mean physical drive
<Boulevard> So it'd see windows from the raid and linux from my other non raid
<queeq> Oh, sorry, I missed there's RAID. What kind of RAID is that?
<Boulevard> Yes, sorry.
<Boulevard> Raid0. Bios controlled, not hardware
<Boulevard> The CPU is a bit old, so I'm using raid to squeeze some speed out of the whole thing.
<queeq> compdoc: what emulator do you have set in the xml?
<nacc> Boulevard: fyi, bios raid is fakeraid and usually does not actually help
<Boulevard> Eh? :o
<Boulevard> Must be placebo effect then. I thought it helped a bit at least
<queeq> Boulevard: not sure if it would work in this case. I remember long time ago I was trying to set up something like this with no success. Ended up using Linux mdraid or zfs
<nacc> Boulevard: it might help a bit, but it's not real raid and isn't really accelerated
<queeq> Neither mdraid nor zfs raids are cpu intensive
<nacc> right
<Boulevard> I suppose I'll just get a couple new hard disks soon and dedicate an os to either of them then. I just reinstalled windows this weekend so I don't really feel like fiddling around with too much so soon
<nacc> Boulevard: so the issue is just deciding where to put the bootloader? where is it now?
<Boulevard> Hell they're 50 bucks on egg right now for 7.2 1TB's. (I'll take two dozen on the double :P)
<Boulevard> The Linux bootloader? Nowhere. I'm running a live usb session right now
<sarnold> Boulevard: some suggested reading before you build your 24-disk storage machine https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/
<sarnold> the whole blog series is wonderful
<Boulevard> Nah I don't need that much XD. Just being cheeky :P
<queeq> Seems that I've lost hardware virtualization on my host
<Boulevard> I see a 1TB volume under dev/mapper/ and then sda and sdb, which are my two 500GB raid disk members
<queeq> Tried creating new VM and it only has an arch option of xen (paravirt)
<Boulevard> SDC is my 1tb media drive, which would be a candidate for install if it would work smoothly. sdd is the linux usb and sde is my external
<queeq> wtf really, how's it possible?
<Boulevard> Well. Whatever. I'll just order a couple disks and set it up like a normie. Thanks for the help guys.
<Boulevard> Have a good afternoon :)
<nacc> queeq: do you see the flag in /proc/cpuinfo?
<queeq> Which one should it be?
<drab> vmx
<queeq> oh well
<queeq> I know
<nacc> or you can run `kvm-ok`
<queeq> I had a power outage today and there were some issues with BIOS setup. I guess I lost vmx option there
<queeq> It seems to have reset to defaults. Since this is a remote machine I wasn't able to check thoroughly
<nacc> strange
<queeq> Thanks everyone!
<queeq> The mystery is solved :)
<drab> nacc: you don't happen to have tried to run kvm/libvirtd in a container, have you?
<drab> and when I say tried I mean succeeded :)
<queeq> This is a home server and there were some nasty outages lately. Everything started with it being unable to boot.
<nacc> drab: no, i haven't tried that
<nacc> drab: i assume it would only work in a privileged container, but even then ... maybe not :0
<drab> nacc: yeah, doesn't look like it...
<queeq> When a screen was attached, it was sitting on a BIOS warning about a faulty setup. I instructed the remote person to enter the BIOS, and upon entering, Linux started to boot and I thought it was all fine
<queeq> Shite, killed half of the night trying to troubleshoot this
<nacc> queeq: sorry :/
<queeq> np :) Without your help I would kill another couple of hours
<queeq> Thanks all, good night
<drab> nn
<drab> is it ok to rant about xml in here? I don't want to offend anybody :)
<compdoc> its too late for that :/
<compdoc> jk
<drab> heh
<compdoc> queeq, if I remember right, the bit I had to cut out of the xml file was at the bottom
<queeq> compdoc: thanks, the issue is resolved now. 99.9% probability is that my BIOS settings got screwed after power outage
<compdoc> ah, cool
<queeq> I have dual-bios Gigabyte MB there which seems to have rewritten main BIOS with backup settings, and virtualization was turned off there.
<sarnold> queeq: so you just had to turn on hardware accelerated vms in the bios?
<queeq> There's no vmx flag in /proc/cpuinfo
<queeq> sarnold: I dunno yet, will ask a person who has physical access to the computer to do it tomorrow
<sarnold> ugh
<queeq> But considering everything I've seen today I'm pretty sure this is the issue
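What kvm-ok automates, sketched by hand: look for the hardware-virtualization CPU flags (vmx for Intel VT-x, svm for AMD-V) in /proc/cpuinfo.

```shell
#!/bin/sh
# Count logical CPUs advertising hardware virtualization support.
# 0 here on a capable CPU usually means VT-x/AMD-V is switched off in
# the firmware setup -- exactly what a BIOS reset after a power outage
# can do, as in queeq's case.
grep -E -c 'vmx|svm' /proc/cpuinfo
```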
<drab> is anybody aware of any nefarious consequence if I take out the 127.0.1.1 hostname entry from /etc/hosts ?
<drab> it's getting in the way quite a bit, and the reason it was added seems some old bug
<sarnold> I've never once heard of it getting in the way
<drab> sarnold: ok, let me tell you about it then you will have had :)
<compdoc> 127.0.0.1 is still there?
<drab> yes
<drab> with localhost
<compdoc> some systems have it, some dont. <shrug>
<drab> but 127.0.1.1 with the hostname is added at install time
<compdoc> yup
<drab> sarnold: what happens is that people, me included, use "localhost" to refer to, well, the local host
<drab> they otherwise use "hostname" to refer to the ip/interface that hostname should resolve to
<drab> however because of the 127.0.1.1 entry, using the hostname still refers to localhost
<drab> when configuring certain daemons, if you use hostname meaning the certain public ip/interface (could be lan), you get screwed because the daemon starts to listen on localhost
<nacc> sounds like bugs in those daemons
<drab> if I want something to listen on localhost I will say, you guessed it, localhost
<nacc> because a hostname does not define an interface
<nacc> if you want to listen on a public ip, use the public ip?
<nacc> or specify the interface to use
<drab> the daemons are doing the right thing, they are calling gethostbyname
<nacc> what happens when hostname resolves to multiple IPs?
<drab> which depending on how nss is configured will likely go hosts, dns
<sarnold> drab: thanks for the explanation. I've never seen anyone use hostnames quite like that before :) normally people either want wildcard binds or they want to bind to specific IPs or interfaces.
<drab> nacc: each ip would likely have its own hostname (plus a cname to all of those), or if it doesn't specifying the ip itself makes sense then
<drab> nacc: I'm not saying it's a general situation where specifying the hostname is always the right thing to do
<nacc> i don't think a 'hostname' uniquely identifies an interface
<drab> I'm pointing to what seems a logical assumption: localhost means localhost, hostname means "something else". if the ip it points to resolves to the local machine, that's fine
<drab> nacc: sure, I'm not saying it should be
<nacc> and it seems like the daemons you are using make that assumption
#ubuntu-server 2017-04-05
<drab> I'm saying what the expectation is when a hostname is used
<nacc> *assumption
<nacc> :)
<nacc> tomato - tomato
<drab> it seems incorrect for it to refer to "lo"
<drab> fair enough
<drab> sarnold: the thing came up trying to distribute a configuration to multiple hosts, including the one running the service
<nacc> drab: right but the very idea that 'hostname' refers to an interface is wrong
<nacc> it doesn't make sense to me
<drab> so all the hosts are told to point to "server1"
<drab> so the master gets set up to listen on server1 and the slaves to connect to server1
<nacc> or use a fqdn which may or may not be aliased in /etc/hosts (depends on the config, iirc)
<drab> however when that config gets read, on the master, server1 resolves to 127.0.1.1 so the daemon never listens on eth0
<drab> so then I should put the ip on the master to make it work, but then if tomorrow I need to repoint the clients I need to change the ip on all of them instead of just repointing dns
<drab> or otherwise I need to introduce 2 variables, one to tell the server what to bind on, and another for the clients what to connect to
<drab> I ended up with latter, but it feels "bad" and likely that sooner or later those two will go out of sync/someone will make a mistake
<drab> I guess I could create another alias for the server
<drab> which would not end up in /etc/hosts and then work
<drab> that might be a better solution
<nacc> it seems like all your cluster members should have hosts entries that point to the actual ip records
<nacc> then it would 'just work' if they use those records, right?
<drab> that would work too, but seems to add more work instead of making things simpler
<drab> and dns is fast/reliable enough with local caches etc, and if the network is screwed stuff is broken anyway
<drab> in any case, I think the additional cname might be the way, that way I don't need to touch /etc/hosts to remove anything and things will just work
<drab> thank you for talking it through, better than a rubber duck :)
<nacc> drab: np
<nacc> drab: i think that's a sane approach (basically the same idea just at the DNS server)
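For readers who want to see the behavior drab is describing: the installer writes the machine's own hostname next to 127.0.1.1 in /etc/hosts, so name resolution hands a loopback address to any daemon that binds by hostname. getent shows what gethostbyname() would return:

```shell
#!/bin/sh
# A stock Ubuntu install writes something like this at install time:
#   127.0.0.1   localhost
#   127.0.1.1   server1        <- the machine's own hostname
#
# So on such a host this prints a loopback address, which is why a
# daemon told to bind to "server1" never listens on eth0:
getent hosts "$(hostname)" || echo "hostname not resolvable via /etc/hosts or DNS"
# All addresses known for a name (hosts-file entries plus DNS):
getent ahosts "$(hostname)" || true
```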
<drab> is there a document somewhere that lists the steps necessary to customize an image so that it runs in a container?
<drab> I've read in some of stgraber's posts that it needs customizations given the restrictions, but I'd love to know what the process is exactly
<renatosilva> this is problematic for a vps installation, correct? http://vpaste.net/qFBIl
<renatosilva> that's what their image provides
<sarnold> that smells a lot like an openvz host
<sarnold> it should be very cheap
<sarnold> it can be fine if it is very cheap and that's what you want to pay for it.
<patdk-lap> dunno why it would be problematic
<sarnold> if you want to do anything like firewalling, routing, create devices, use namespaces, etc., then you should find something willing to give you a KVM instance instead
<patdk-lap> glibc is made to basically handle any crap you throw at it
<patdk-lap> cause that is the linux way
<sarnold> if you just want a place to run a znc bouncer and host a tiny website it's probably fine
<renatosilva> patdk-lap: libc complains heavily about kernel 2.6 upon system update
<patdk-lap> libc or the libc packaging?
<renatosilva> sarnold: do you think there's a chance they update the host's kernel if I ask them?
<patdk-lap> they won't
<sarnold> they can't
<patdk-lap> they are running a stable centos6
<patdk-lap> it can't update, unless they move to centos7
<patdk-lap> and they probably don't want to mess with systemd
<renatosilva> patdk-lap: the libc package, do you think it's paranoia from debian then? the message is really scary
<patdk-lap> I don't know what message you are seeing
 * renatosilva doesn't understand how they can run an old host os to provide way newer vms which are stuck to that old kernel
<patdk-lap> that is the magic of libc
<patdk-lap> as long as libc supports that old linux kernel api, it works
<patdk-lap> and libc has all kinds of crud in it to work with all kinds of linux kernel bugs and changes and incompatibility
<patdk-lap> most os's the kernel and libc come joined together
<renatosilva> patdk-lap: the message is like "libc does not support 2.6 anymore, do not expect it to work" -- well this is a server and I do expect *libc* to work
<sarnold> I'd expect Standard Unix Stuff to just work
<sarnold> but maybe the stranger things won't
<sarnold> but that won't be surprising, because it's just an openvz jail anyway. a lot of stuff won't work.
<patdk-lap> ya, but 16.04 has systemd, so
<patdk-lap> yep, openvz will block a lot of the kernel api anyways
<renatosilva> so what? systemd won't work?
<patdk-lap> I doubt it will 100%
<patdk-lap> not sure anyone wants to use systemd 100%
<patdk-lap> and you couldnt anyways cause it's openvz
<renatosilva> so the key here is asking them to upgrade to centos7? they seem worried about improvements
<sarnold> for what it's worth I'd spend the extra three dollars a month to go with some other host
<sarnold> and not have to figure out what does and what doesn't work
<renatosilva> (although I don't understand why they deliver these vms that don't fully work, I'd expect a lot of user reports about it)
<renatosilva> sarnold: I have a couple of years or something with them yet, already paid
<sarnold> that's unfortunate.
<sarnold> maybe stick with 14.04 LTS until they upgrade infrastructure
<renatosilva> ok thanks all anyway
<sarnold> good luck
<ishaved4this> hey guys, I need some help setting up WOL on 16.04. I would like my computer to suspend or sleep after a set amount of time, and fire back up with a WOL app, ssh, or if I can, if plex is requested. I have already enabled WOL on bios
<ishaved4this> 16.04.2*
<sarnold> I think that's it, no?
<ishaved4this> I'm assuming that wasn't for me?
<sarnold> ishaved4this: it was ;)
<ishaved4this> oh! Well, I was googling around, and it seems you have to configure it in the server as well. I'm pretty new at this whole server thing, and can't figure out a way to even make an ssh key yet. let alone make USBMOUNT mount the drives to the same letter each
<ishaved4this> time hahaha
<patdk-lap> heh? wake back up using ssh or plex?
<patdk-lap> does your nic/bios support this?
<ishaved4this> well, to set up WOL on its own
<ishaved4this> http://askubuntu.com/questions/764158/how-to-enable-wake-on-lan-wol-in-ubuntu-16-04
<ishaved4this> http://askubuntu.com/questions/893056/logout-of-ssh-and-then-suspend-machine
<ishaved4this> both of those have different instructions it seems, and I'm not sure what's right. and I know to wake on plex I need to do something with my modem/router, but I'm not sure what that is
<patdk-lap> your modem/router?
<patdk-lap> no
<patdk-lap> you would just have to have your nic support waking on traffic
<patdk-lap> most nics don't support this, some do
<ishaved4this> ahh. see, this is why I come here. you guys know your stuff haha
<patdk-lap> generally not a good idea, cause it is unlikely your system will ever sleep
<patdk-lap> do you have a gui installed?
<ishaved4this> oh. well if it won't sleep, then there's no point. But I would still like to be able to send a packet to wake the pc up from sleep
<ishaved4this> no I don't, but I have an Ubuntu live cd I can boot up
<patdk-lap> from sleep? or suspend?
<patdk-lap> from sleep will be an os thing, I'm not sure how to do that, never care to do that myself
<ishaved4this> hmm. Suspend is like hibernate on windows correct? which one do you think is better?
<patdk-lap> I only wol from suspend, full system poweroff
<patdk-lap> from suspend/hibernate, yes
<patdk-lap> that ONLY uses the bios, so you just have to setup the bios to handle wol with the nic
<ishaved4this> oh really?
<patdk-lap> then all you have to worry about in the os, is that you actually suspend/hibernate
<ishaved4this> so sleep is kind of unnecessary?
<patdk-lap> sleep is for other states
<patdk-lap> S1 and S3 normally, os controlled wol
<patdk-lap> S5 is full hibernate/poweroff, bios controls that wol
<patdk-lap> just depends on how long you want to wait :)
<patdk-lap> and how much power savings you want
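For the S5/BIOS path patdk-lap describes, many NICs also need their WOL flag armed from the OS with ethtool before shutdown. A minimal sketch of persisting that via a systemd template unit; the unit path, the interface-as-instance pattern, and the choice of `wol g` (magic-packet mode) are assumptions on my part, not from the chat:

```ini
# /etc/systemd/system/wol@.service -- hypothetical unit; the interface
# name is passed as the instance (e.g. wol@eth0)
[Unit]
Description=Enable Wake-on-LAN on %i
After=network.target

[Service]
Type=oneshot
# "g" = wake on magic packet; see ethtool(8) for the other flags
ExecStart=/sbin/ethtool -s %i wol g

[Install]
WantedBy=multi-user.target
```

Enable it per interface with `systemctl enable --now wol@eth0`, substituting your NIC's name; `ethtool <iface>` shows whether "Supports Wake-on" includes `g` at all.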
<ishaved4this> could you walk me through how to get hibernate set up on my server? And won't it randomly shut off while I'm watching plex or anything?
<patdk-lap> no, I don't have time
<ishaved4this> well, I want a quick boot up, but I wouldn't want all these drives powered on at all times.
<patdk-lap> there should be lots of info on how to setup hibernate
<ishaved4this> alright
<patdk-lap> since there will be no physical user
<patdk-lap> you will have to figure out how to tell hibernate when the system is active or not
<patdk-lap> since I have people actually using the systems, I haven't had to worry about that
<ishaved4this> ah.
<ishaved4this> Yeah mines headless. I'm sure google can help out
<ishaved4this> now I just need to find help for the damn external automounting program to map the same mount points each time
<drab> anybody here has a preference for what monitoring tool to use?
<drab> I'm tired of recompiling nagios and it seems there's no ppa of sort to get it going on xenial
<drab> and compiling ndo is also not fun
<drab> at least icinga has a ppa ready to go
<drab> and the web interface isn't written in C so maybe I can make some adjustments in a reasonable timeframe
<drab> oh, nm, there's a ppa for nagios too it seems
<drab> interestingly enough not one single howto mentions them, looks like nobody knows about it just like I didn't
<ishaved4this> hey guys, I need some help setting up WOL on 16.04. I would like my computer to suspend or sleep after a set amount of time, and fire back up with a WOL app, ssh, or if I can, if plex is requested. I have already enabled WOL on bios
<ishaved4this> 16.04.2*
<ishaved4this> oh! Well, I was googling around, and it seems you have to configure it in the server as well. I'm pretty new at this whole server thing, and can't figure out a way to even make an ssh key yet. let alone make USBMOUNT mount the drives to the same letter each
<drab> ishaved4this: as long as the card supports it and there's power and it's enabled in the bios, it'll work
<drab> there's nothing special/different than a desktop
<ishaved4this> sweet. I got that part to work, but I cant find a way to get the server to know when to hibernate as its headless and almost never physically used
<drab> oh, no clue about that, you were asking about WOL :)
<drab> don't really user power management on servers
<drab> minus maybe some throttling of CPUs
<ishaved4this> yeah I usually don't either, But this one is gunna be downstairs by the router, and with the JBOD next to it that is brighter than god damn Polaris, i'd rather it be off when not in use haha
<ishaved4this> also, do you know how to get the mount points for external harddrives to stay the same through reboots?
<drab> the mount points don't change even if the drive "letters" do
<drab> which is why you use uuids and mount those
<drab> those will stay the same even if the drive letters change
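drab's UUID suggestion looks like this as an /etc/fstab line; the UUID and mount point below are placeholders (run `sudo blkid` to list your real ones):

```
# /etc/fstab sketch -- UUID is a made-up example, get yours from blkid;
# nofail lets the system boot even if the external drive is unplugged
UUID=2d4f6a8b-1111-2222-3333-444455556666  /media/media1  ext4  defaults,nofail  0  2
```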
<drab> alternatively you can add udev rules
<ishaved4this> hmm. Well I used USBmount to auto mount them on plug in and startup
<drab> no clue, never used that, but the problem then seems to be how it recognizes the drives
<drab> ie if it automounts /dev/sdc, that can change
<ishaved4this> nothing in the config shows how I can mount via uuid or even label
<drab> if it automounts /dev/disk/by-id/something it won't
<ishaved4this> yes. it automounts /dev/sdb /dev/sdc etc. all randomly on boot
<drab> if it wants a device use what I just mentioned
<drab> /dev/disk/by-id/
<drab> find your drive in there and use that wherever you'd specify /dev/sdb
<ishaved4this> # Mountpoints: These directories are eligible as mountpoints for
<ishaved4this> # removable storage devices.  A newly plugged in device is mounted on
<ishaved4this> # the first directory in this list that exists and on which nothing is
<ishaved4this> # mounted yet.
<ishaved4this> MOUNTPOINTS="/media/usb0 /media/usb1 /media/usb2 /media/usb3
<ishaved4this>              /media/usb4 /media/usb5 /media/usb6 /media/usb7"
<drab> oh, that makes no sense
<ishaved4this> right?
<drab> I'd get rid of that and use "auto-mount"
<drab> you can then tell automount to mount which device where
<ishaved4this> is that another program?
<drab> apt-get install autofs
<ishaved4this> okay, before i do that, should I unmount my drives?
<ishaved4this> and delete usbmount?
<drab> until you configure them, autofs won't do anything, although I forgot if and to what extent it tries to be smart and discover stuff
<drab> I guess safer to unmount and stop usbmount
<drab> but shouldn't be a prob
<ishaved4this> okay so pumount will do the trick, right?
<drab> there are no tricks, you need to read the docs, but it can certainly be configured to mount a specific drive at a specific location
<drab> consistent across reboots
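A minimal sketch of what drab means, after `apt-get install autofs`; the mount directory, timeout, filesystem type, and the device id under /dev/disk/by-id/ are all hypothetical and need to be replaced with your own:

```
# /etc/auto.master sketch -- hand the /media/usb tree to the map below,
# unmounting idle drives after 60 seconds
/media/usb  /etc/auto.usb  --timeout=60

# /etc/auto.usb -- key is the directory created under /media/usb;
# the by-id path is stable across reboots, unlike /dev/sdX
media1  -fstype=ext4  :/dev/disk/by-id/usb-My_Drive-0:0-part1
```

Then `systemctl restart autofs`; the mount appears on first access of /media/usb/media1.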
<ishaved4this> oh no, I mean to unmount my drives
<drab> whether it's the easiest etc, I've no clue, maybe usbmount can be made to work too
<drab> I've no idea what pumount is. if you mean unmount, yes
<drab> and you can run "mount" to see what's mounted
<drab> if usbmount is monitoring those mountpoints however it might auto-remount them, I've got no clue about that
<drab> so I'd stop usbmount first then unmount, then look at autofs
<drab> and on that note, I'm outta here or I'm gonna get locked out of $HOME
<ishaved4this> lol ok thanks
<drab> ttyl
<drab> glhf
<renatosilva> the libc complaint fwiw http://i.imgur.com/ugUYxll.png
<sarnold> thanks renatosilva, I've never seen that thing before
<lynorian> I have not either
<renatosilva> Unpacking libc6:amd64 (2.23-0ubuntu7) over (2.23-0ubuntu3) ...
<renatosilva> https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1624837
<ubottu> Launchpad bug 1624837 in glibc (Ubuntu) "upgrading ubuntu 14.04 -> 16.04 deadlocks in libc6's preinst" [Undecided,New]
<renatosilva> https://anonscm.debian.org/cgit/pkg-glibc/glibc.git/tree/debian/debhelper.in/libc.preinst#n148
<renatosilva> https://anonscm.debian.org/cgit/pkg-glibc/glibc.git/tree/debian/debhelper.in/libc.preinst#n180
<renatosilva> so as long as the debian packagers do not break it, it seems it's going to work fine
<sarnold> where "work fine" means "right up until it falls over in a flaming pile of wreckage" :)
<renatosilva> heh, just found out my hosting company does offer kvm support, they just call it "cloud" :-/
<sarnold> nice
<renatosilva> anyway, thanks all
<lordievader> Good morning
<cpaelzer> good morning lordievader
<lordievader> Hey cpaelzer, how are you doing?
<cpaelzer> missing the time to enjoy the nice weather I see out of the window :-)
<cpaelzer> I should start working in the basement
<sarnold> very german response :)
<lordievader> cpaelzer: It is misty here, want to trade?
<cpaelzer> hmm what stereotype did I trigger sarnold?
<sarnold> cpaelzer: the happy craftsman, hard at work :)
<cpaelzer> you mean the grumpy german at work, that's stereotype #43 - but I'm fine, it made you smile
<sarnold> hehehe
<cpaelzer> btw sarnold, did you see the update to the apparmor issue I filed - the setrlimit block seems not arch related
<sarnold> cpaelzer: ah, thanks. that's probably best :)
<cpaelzer> rbasak: once you are around - could you consider working on the unapproved queue for the USBSD
<cpaelzer> rbasak: I happen to find more and more on that queue, likely stalled by the z release work I'd think
<cpaelzer> rbasak: and this would be the SRU day anyway right?
<cpaelzer> rbasak: in particular I'd be interested in bug 1668093 and bug 1670745 if you can only spend a bit of time there
<ubottu> bug 1668093 in openssh (Ubuntu Yakkety) "ssh-keygen -H corrupts already hashed entries" [High,Triaged] https://launchpad.net/bugs/1668093
<ubottu> bug 1670745 in openssh (Ubuntu) "ssh-keyscan : bad host signature when using port option" [High,Fix released] https://launchpad.net/bugs/1670745
<cpaelzer> FYI - It is Ubuntu Server Bug Squashing Day #2 - so even more than usual get to us with your questions if you want to participate working on bugs => http://www.mail-archive.com/ubuntu-server@lists.ubuntu.com/msg07353.html. Currently around: cpaelzer
<cpaelzer> rbasak: ping here so I can add you once you are around as well
<cpaelzer> rbasak: nacc: if you could look into sponsoring 1630516 as part of USBSD#2 that would be nice
<rbasak> cpaelzer: o/
<cpaelzer> FYI - It is Ubuntu Server Bug Squashing Day #2 - so even more than usual get to us with your questions if you want to participate working on bugs => http://www.mail-archive.com/ubuntu-server@lists.ubuntu.com/msg07353.html. Currently around: cpaelzer, rbasak
<cpaelzer> good morning rbasak
<rbasak> Good morning!
<Malusu> hi guys, I would like to divert all my server logs to an external mongodb database. my problem is: how can I connect to this database without storing the password on the server or using some kind of two factor auth, so if my server is compromised the intruder cant mess with the log files.
<cpaelzer> Malusu: could you come up with a concept that sends unauthenticated to the central server via a stream and only there passes it into the DB
<cpaelzer> Malusu: it would also allow you to make sanity checks in the one place you trust (your central server)
<cpaelzer> Malusu: before inserting to DB
<cpaelzer> I wonder if all the central logging solutions don't have something, but I'm no logstash (or siblings) expert
<Malusu> cpaelzer: that's a great idea, thanks. what examples do you have in mind for the sanity checks?
<cpaelzer> Malusu: I just got the idea, had no plan around - but start with the usual things like "max size, length, strip chars not allowed" or such
<cpaelzer> Malusu: and I'd think that on logs dedup would be a massive storage win, can mongodb do that?
<Malusu> cpaelzer: I don't think mongodb has dedup built in. I'm sure it's possible with 3rd party tools. I could use ZFS to deal with that.
<cpaelzer> Malusu: post-DB dedup in such a case will have a hard time as the blocks are filled with extra non deduppable info like the record id
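The "stream to a central collector, insert into the DB only there" idea cpaelzer sketches maps directly onto rsyslog forwarding; a minimal fragment for the sending server (the collector hostname is a placeholder, and the collector-side DB insert is left out):

```
# /etc/rsyslog.d/50-forward.conf -- forward all facilities/priorities
# over TCP (@@ = TCP, @ = UDP) to a central collector; no DB credentials
# ever live on this host
*.* @@logs.example.com:514
```

The collector then runs its own rsyslog (or logstash/graylog) instance that holds the mongodb credentials and performs the sanity checks before insertion.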
<cpaelzer> rbasak: if you think there is a lessons learned on the logrotate bug for the multi publish you might add to the pad of the USBSD http://pad.ubuntu.com/JxNfyW4H0v
<cpaelzer> the bug itself already has a section there
<rbasak> ack
<jamespage> cpaelzer: responded on that thread we discussed yesterday
<cpaelzer> thank you jamespage
<jamespage> thanks for the summary in the bugs
<jamespage> cpaelzer: I looped a set of tests against a deployment last night and was unable to reproduce the issue with 4500 instances
<cpaelzer> jamespage: which confirms what I said
<cpaelzer> jamespage: thanks for the extra impressive number
<jamespage> yeah that was my message in the list as well - we're working to fix but can't reproduce outside of the gate
<blackflow> rbasak: ping
<rbasak> Context please?
<blackflow> rbasak: would it be possible for you to guide me one time through contributing the fix for Xenial wrt bug #1673357 ?
<ubottu> bug 1673357 in munin (Ubuntu Yakkety) "Munin core plugin "if_" doesn't work" [Undecided,New] https://launchpad.net/bugs/1673357
<rbasak> Sure, let me take a look.
<blackflow> I've maintained, built, patched, and contributed upstream for some FreeBSD ports so I'm not a total noob, but it'd be great if I could have guidance once, I can learn from it quickly.
<blackflow> For starters, this is what I think should be done: get src deb for xenial's munin package, apply the fix to source, create patch, and that's where I don't know the next steps.
<rbasak> OK
<blackflow> So that's a "backport code" approach. Or, perhaps I should just make <something> to pull in the next version of munin, the one that'll go to ZZ, 2.0.33, into Xenial? that doesn't sound right, tho'
<rbasak> Would you prefer to create patches by hand, or use git? We have a new git workflow we're working on. I prefer it because I feel it makes things easier, but we're happy to mentor/sponsor either way.
<rbasak> And are you familiar with https://wiki.ubuntu.com/StableReleaseUpdates ?
<rbasak> The SRU policy is that we backport fixes to stable releases for things like this.
<rbasak> (or the path of least resistance is that under the policy at least)
<blackflow> rbasak: on freebsd, there's a simple mechanism. you run "make extract" and it downloads and extracts the upstream source tarball into a work dir. There you change the code for a fix and run "make makepatch" and the framework creates a patch that diffs the current package with your changes.
<blackflow> I'm not, I'm a total noob about processes and protocols of contributing to Ubuntu. I've seen those docs, however, it's just that I don't have the big picture yet.
<rbasak> blackflow: we have two mechanisms here - the traditional, pre-VCS one, and the latest git stuff (that is still a work in progress, but usable)
<blackflow> I do prefer git.
<rbasak> OK, let's use that.
<rbasak> One moment, I'll check the git import for munin is up to date
<rbasak> (that should be automatic but we're not fully ramped up yet)
<blackflow> rbasak: btw, you don't have to help me with this right now, not sure if you're busy.
<rbasak> The tooling for our git workflows is available from: "git clone https://git.launchpad.net/usd-importer" - or you can choose not to use that tooling and hit git manually if you prefer (depending on how you prefer to learn, understanding the pieces may be your preference)
<rbasak> blackflow: now is absolutely fine. That's what today is designated for :)
<blackflow> ah yes the bug squash day :)
<rbasak> blackflow: would you prefer to use git with our "usd" wrapper tool, or git directly?
<blackflow> well I do have experience with git, and none with the wrapper tool. but I'd like to learn the "proper way".
<rbasak> I don't think we've settled the git "proper way" yet. It's still a fairly new thing.
<rbasak> But you can learn the "the intended preferred way and follow along as we tweak things" if you like :)
<blackflow> Sounds fine :)
<rbasak> OK, so clone usd-importer using the URL above please.
<rbasak> In there, there's an executable in bin/usd, which you'll need to run. I have a symlink to it from ~/bin/usd, and I have ~/bin in my PATH.
<rbasak> Then:
<rbasak> mkdir /tmp/munin
<rbasak> cd /tmp/munin
<rbasak> usd clone munin git
<rbasak> This will clone the packaging into /tmp/munin/git
<blackflow> okay. now cloning the usd-importer. is it a big repo? It's been at it for a minute now
<rbasak> It should be tiny
<rbasak> I gave you the https URL as that should need no setup
<rbasak> You can also access using git+ssh, but that needs you to have an ssh key in Launchpad set up
<blackflow> ah, no, wait, I forgot... our firewall rules... sec...
<blackflow> rbasak: okay, usd-importer cloned, let's set up the bin
<blackflow> rbasak: okay, I need to install some dependencies, looking at README.md
<rbasak> blackflow: no don't worry about that
<blackflow> but I tried running usd --help and it complained about missing libs
<rbasak> Oh
<rbasak> Sorry
<rbasak> Yes, you do need those
<rbasak> I thought you meant munin's READMEs
<blackflow> ah, no, still setting up usd-importer
<blackflow> btw, I should've installed it via setup.py, because running usd from PATH doesn't find the module.
<rbasak> Hmm. I thought it did that magically - wfm.
<rbasak> bin/usd looks in ".." relative to its location for the module.
<blackflow> rbasak: no it has "from usd.__main__ import main"  and python has no idea where "usd" module is unless you're in the same directory with it
<blackflow> so I had to install it with "python3 setup.py install"
<rbasak> blackflow: it's the "sys.path.insert..." line above
<rbasak> But I can look into that another time - thank you for telling me about it
<blackflow> rbasak: btw, do I need to clone munin into /tmp exactly? I have a /home/devel user for these things set up
<rbasak> blackflow: /home/devel is fine :)
<rbasak> Note that we'll be dumping files into the parent directory of the git repository
<rbasak> So I usually go one level further, so /tmp/munin/git as above instead of /tmp/munin
<blackflow> rbasak: okay, I have a bit of a problem when I run usd: https://dpaste.de/Cge2
<blackflow> I see the file is in bin
<blackflow> okay I think what the problem is, just a sec...
<rbasak> blackflow: I think we have some bugs in relation to finding things. Maybe the same thing that stopped you using bin/usd in your PATH?
<rbasak> If you can fix it up yourself easily that's fine. We'll take bug reports and patches! If you're struggling, I suggest trying to run out of the cloned directory (rather than installing to the system) and setting PATH and PYTHONPATH etc as a workaround for now.
<rbasak> export PATH=/cloned/usd-importer/bin PYTHONPATH=/cloned/usd-importer/usd
<rbasak> Uh, PATH=...:"$PATH" of course, etc.
<blackflow> rbasak: yeah I do have it in my path but python doesn't find the module, and if I install via setup.py then the txt file is not included in the egg
<blackflow> I think it's missing a manifest iirc. But I can't figure out why the sys.path.insert is not doing the expected
<blackflow> ah but of course. it's expecting usd.py
<blackflow> not "usd". import is looking for .py filenames
<rbasak> "from usd.__main__" should look for a usd/ directory with a __main__.py in it (as well as requiring the usd/ directory to have a __init__.py in it; I'm not sure if that applies for __main__)
<blackflow> no ,that wasn't it...
<rbasak> And it should look in PYTHONPATH for that usd/ directory.
<rbasak> So if you set PYTHONPATH to the top level of the cloned directory, that should work I think.
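Putting rbasak's two workarounds together: run usd straight out of the clone by pointing PATH at its bin/ and PYTHONPATH at the top level of the clone (where the usd/ package directory lives). The path /cloned/usd-importer is a placeholder for wherever you actually cloned the repo:

```shell
# Run usd from the clone without installing it system-wide.
# /cloned/usd-importer is a stand-in for your actual clone location.
export PATH="/cloned/usd-importer/bin:$PATH"
# PYTHONPATH must be the directory *containing* the usd/ package,
# i.e. the top level of the clone, not the usd/ directory itself.
export PYTHONPATH="/cloned/usd-importer:$PYTHONPATH"
```

With that in place, `usd clone munin git` should find the module regardless of your working directory.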
<rbasak> blackflow: alternatively we can give up on the tool for now and use git directly. Up to you.
<blackflow> I did, still nothing. "No module named 'usd'"
<blackflow> rbasak: well, I'd like to figure this out. this is perfect example of problems noobs would encounter. more value if I get this done.
<blackflow> gimme a sec, I have to refresh my python path setup knowledge
<rbasak> That'd be really helpful - thanks!
<blackflow> rbasak: found part of the problem. os.path.dirname resolves via symlink path. I have to use os.path.realpath instead of abspath if I'm not mistaken, lemme try
<blackflow> rbasak: lol there's a bug in python and realpath doesn't resolve.
<blackflow> rbasak: okay I give up. can't use symlink, so I added real path to PATH. Now I have another problem coming from the fact that the "usd" package is not registered with python. and if I install it, then that file is missing.
<blackflow> rbasak: let's go the "just git" route for now :)
<Sinned> Hi there, would there be anyone here who has some experience with Landscape on Premise?
<blackflow> rbasak:   git clone https://git.launchpad.net/~usd-import-team/ubuntu/+source/munin   ?
<cpaelzer> dpb1: ^^ see Sinned
<Sinned> hmm? :) he's an expert on that? hehe
<rbasak> blackflow: sorry I got a phone call
<cpaelzer> Sinned: he would know who knows and is around at the time I'd think
<rbasak> Yes, that's right
<Sinned> I got it all up and running, and things are working fine, just 1 stupid thing which I cannot figure out. I got the following Alert on the Landscape Server: E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable) E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it? Exit code 100. This is already fixed, but the Alert keeps staying there under Alerts.. All I want to do is
<Sinned> remove that alert
<blackflow> rbasak: np, okay cloning munin directly
<blackflow> rbasak:   "warning: remote HEAD refers to nonexistent ref, unable to checkout."
<blackflow> and there's no code in cloned repo, just .git
<rbasak> blackflow: that's fine
<rbasak> blackflow: "git checkout -b lp1680035 origin/ubuntu/yakkety-dev"
<rbasak> Sorry was that the wrong bug number?
<blackflow> 1673357
<blackflow>  so -b lp1673357
<rbasak> Yep, thanks
<blackflow> was just about to ask if lp meant what I thought (launchpad # number)
<rbasak> (or any other name if you prefer)
<rbasak> Now you should be able to see the packaging source that is current for Yakkety users
<rbasak> To check, "head debian/changelog" and the version should match against the table in https://launchpad.net/ubuntu/+source/munin
<rbasak> 2.0.25-2ubuntu0.16.10.3 I hope
<blackflow> rbasak: btw, why yakkety-dev?
<blackflow> ideally I want to backport the fix for xenial
<rbasak> blackflow: oh.
<rbasak> Then sure, do xenial-dev
<rbasak> But really for an SRU we need to do both.
<rbasak> Otherwise a user upgrading from Xenial to Yakkety will face a regression.
<blackflow> makes sense
<blackflow> rbasak: btw, I'm not a git wizard, so I'm not quite sure what's going on here.   https://dpaste.de/8D50
<rbasak> OK sorry
<rbasak> Does "git branch" list lp167...?
<blackflow> nopr
<blackflow> *e
<rbasak> OK
<rbasak> So the command didn't do anything at all, no problem
<rbasak> Do: "git branch lp1673357 origin/ubuntu/yakkety-dev"
<rbasak> and then "git checkout lp1673357"
<rbasak> Sorry, I didn't realise git would refuse to do both things at once in this case.
<rbasak> BTW, I'm busily filing bugs to fix all the rough edges you're hitting here :)
<blackflow> rbasak: okay, done, but had to branch from origin/ubuntu/yakkety-devel (not -dev)
<rbasak> blackflow: ah, sorry
<rbasak> blackflow: now, you should be able to cherry-pick the upstream fix right in
<rbasak> blackflow: if needed, "git fetch" the upstream branch
<blackflow> rbasak: which upstream would that be? the ubuntu/zesty branch?
<rbasak> One moment, my browser crashed, sorry.
<rbasak> blackflow: you're cherry-picking https://github.com/munin-monitoring/munin/commit/290d5ac2be02ced4d09fda68dc561fcf082c9cbf presumably? So that one.
<rbasak> Something like "git fetch git://github.com/munin-monitoring/munin master" I imagine
<blackflow> rbasak: that's the required fix, yes
<rbasak> Then you should be able to "git cherry-pick 290d5ac" I think.
<blackflow> rbasak: oh, yeah, I thought you meant from ubuntu repos
<rbasak> Ubuntu's git repo probably won't contain the commit as a separate object. It'll probably have collapsed all upstream changes into one commit, as it's only importing.
<dpb1> Sinned: is it still there?
<rbasak> We have plans to fix that one day, but right now the importer doesn't provide that kind of "rich history" as it's importing entire source uploads only.
<blackflow> rbasak: okay I've got the fix cherry picked, and a new commit is created in my branch
<rbasak> OK. Next, we need to collapse that into a quilt patch.
<rbasak> First run "git format-patch -n1 HEAD"
<rbasak> That should create a single file in the local directory
<blackflow> yup, got the patch file
<blackflow> now copy to debian/patches?
<rbasak> I just realised I missed a step, but I hope it doesn't matter.
<rbasak> Yes, or move.
<rbasak> And rename to something sensible please.
<rbasak> Did you get a conflict when cherry-picking?
<blackflow> there already is a similar patch, also prefixed with 0001
<blackflow> rbasak: yes, the commit was fixing code from a fix that landed after the one in xenial/yakkety's munin
<rbasak> We're not too precious about the naming. As long as it's not misleading.
<rbasak> OK
<rbasak> Next, undo the commit you added, since we want to replace it with the quilt patch
<rbasak> So "git reset --hard HEAD^"
<rbasak> That should still leave the patch file in debian/patches as git isn't tracking that yet.
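The format-patch-then-rewind dance rbasak describes can be tried end to end in a throwaway repo; everything below (repo location, file name, commit messages) is invented for illustration and stands in for the real munin checkout:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
# a baseline commit so the fix commit has a parent to rewind to
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'baseline'
# a commit playing the role of the cherry-picked upstream fix
echo 'speed fix' > if_.in
git add if_.in
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'Fix if_ speed detection'
# turn the top commit into a patch file under patches/
git format-patch -1 HEAD -o patches >/dev/null
# rewind the branch; the patch file survives because git never
# tracked the patches/ directory
git reset -q --hard HEAD^
ls patches
```

After the reset, `if_.in` is gone from the working tree but `patches/0001-*.patch` remains, ready to be moved into debian/patches and listed in the series file.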
<blackflow> sensible enough?  "fix-if_-plugin-reporting-wrong-interface-speed.patch"
<rbasak> Sure
<rbasak> Then "cd debian/patches" and "echo <patch name> >> series"
<blackflow> maybe I reference the PR in the filename? that's the way we did it on freebsd
<rbasak> We reference them inside the patch itself using a metadata scheme, so no need to do it in the filename.
<blackflow> okay
<rbasak> I won't object if you want to do it, but I've not seen anyone else do that.
<cpaelzer> rbasak: FYI zesty logrotate migrated
<rbasak> (in Ubuntu or Debian anyway)
<cpaelzer> rbasak: thanks again
<rbasak> cpaelzer: great, thanks!
<rbasak> Once you've caught up, "git status" should report one new file in debian/patches/ and a changed file in debian/patches/series only.
<rbasak> And you can "git add" both of those and commit that.
<blackflow> rbasak: yup, I've reset my cherry-pick commit and got the new patch file in debian/patches/
<rbasak> Great
<rbasak> If you now set up quilt if you haven't already, then "quilt push -a" should work without errors.
<rbasak> To set up:
<rbasak> export QUILT_PATCHES=debian/patches
<rbasak> export QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
<rbasak> Unless you already have quilt configured to do that.
<rbasak> Technically you don't need REFRESH_ARGS right now.
<rbasak> This is from https://wiki.debian.org/UsingQuilt
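The two exports rbasak gives are usually kept in ~/.quiltrc so every shell picks them up; a config fragment matching the Debian wiki page he links (nothing here beyond those two settings):

```
# ~/.quiltrc -- sourced by quilt on startup
QUILT_PATCHES=debian/patches
QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
```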
<rbasak> The step I missed BTW is to get you to commit the result of "quilt push -a" before cherry-picking. That would have ensured that if the existing quilt patches touched the same area as the cherry-pick, you'd be resolving any conflicts against the end of the quilt series, not the start. And then I'd have had you rewind both commits. But it sounds like that wasn't an issue this time.
<Sinned> dpb1: Yes it still is, I waited for a few hours now, but that alert simply does not go away
<rbasak> blackflow: let me know once you've caught up and are ready to continue.
<Sinned> dpb1: I found a binary file where it is, and removed it, but then I get a system error so thats no good haha.
<dpb1> Sinned: Do you have another process on the system that is contending for that file?  that alert is only cleared on 6 hour intervals, so it's a bit annoying, especially if you have another unattended-upgrades running somewhere.  Also, last resort, you can unsubscribe from the alert.
<Sinned> dpb1: I checked the first thing, and no, no other process uses it. your 2nd part is quite nice info to know, 6 hours ok. I can live with that. How do you know this info? Anywhere where I can find this? And yea, prefer not last resorting hehe. I will wait some hours more :) And make sure no apt thing is running
<blackflow> rbasak: sorry, now it was my turn to get hogged on the phone :)
<blackflow> rbasak: ok, gimme a minute for this
<rbasak> Sure
<blackflow> rbasak: okay I've got quilt installed and I've set up a basic .quiltrc from the wiki and your suggestions
<blackflow> also I see what you mean by that step I missed, with quilt push -a. that's a "make patch" step when working with freebsd ports :)
<blackflow> ie. apply current package patches to upstream code, so your changes are based on patched, not raw upstream code. got it.
<blackflow> hypothetical question: if there were other patches that touched the same files or even lines, how are those resolved? is there an order to patches being applied?
<rbasak> blackflow: yes, the order is as in debian/patches/series
<rbasak> blackflow: so does "quilt push -a" work?
<rbasak> blackflow: assuming it does, I'll carry on.
<rbasak> The next step is to add some metadata to the patch - bug reference, where you cherry-picked from, etc.
<rbasak> Our standard for that is http://dep.debian.net/deps/dep3/
<rbasak> It goes at the top of the patch file, together with the git-generated stuff. quilt ignores all of this.
<rbasak> The spec page has some examples at the bottom you can follow.
<rbasak> A git format-patch formatted output, as this is, already has most of it.
<rbasak> Here, we should probably add:
<rbasak> Origin: upstream, https://github.com/munin-monitoring/munin/commit/290d5ac2be02ced4d09fda68dc561fcf082c9cbf
<rbasak> Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/munin/+bug/1673357
<ubottu> Launchpad bug 1673357 in munin (Ubuntu Yakkety) "Munin core plugin "if_" doesn't work" [Undecided,New]
<rbasak> Last-Update: 2017-04-05
<rbasak> That should be all you need I think.
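Assembled, the top of the patch file then looks roughly like this; the Origin/Bug-Ubuntu/Last-Update lines are exactly those given above, while the From/Subject lines are whatever `git format-patch` produced (the author shown here is a placeholder, and the diff body is elided):

```
Origin: upstream, https://github.com/munin-monitoring/munin/commit/290d5ac2be02ced4d09fda68dc561fcf082c9cbf
Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/munin/+bug/1673357
Last-Update: 2017-04-05
From: Upstream Author <upstream@example.org>
Subject: [PATCH] Fix if_ plugin interface speed reporting
---
 (diff body as produced by git format-patch follows)
```

quilt ignores everything above the first diff hunk, so the DEP-3 headers and git's own metadata can coexist at the top of the file.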
<rbasak> If you'd like to pastebin the result, I can tell you if the formatting looks about right.
<rbasak> Once you've done that, you can commit the patch file and series files.
<rbasak> I did ask you to commit these before. I missed these dep3 headers, sorry. You can use git commit --amend if you know how to use that.
<rbasak> Or don't worry about it and just add another commit.
<rbasak> The final thing to do is to add a changelog entry to debian/changelog, and then the source should be ready (pending testing)
<rbasak> To do this, there's a tool called "dch" which should be able to do most of it for you.
<rbasak> Run "dch" and it'll fire up an editor to write the new changelog message.
<rbasak> The message must refer to the bug in the form LP: #XXXXXX
<rbasak> Eg. "  * Fix network interface traffic metric (LP: #1673357)."
<ubottu> Launchpad bug 1673357 in munin (Ubuntu Yakkety) "Munin core plugin "if_" doesn't work" [Undecided,New] https://launchpad.net/bugs/1673357
<rbasak> Adjust the sign-off to your name, change UNRELEASED to yakkety, and set the version string according to the version examples in https://wiki.ubuntu.com/SecurityTeam/UpdatePreparation#Update_the_packaging
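Following those instructions, the resulting debian/changelog stanza might read as below; the version is my guess at the next in the 2.0.25-2ubuntu0.16.10.x series (the current Yakkety version per the table above is .3), and the name, email, and date are placeholders:

```
munin (2.0.25-2ubuntu0.16.10.4) yakkety; urgency=medium

  * Fix network interface traffic metric (LP: #1673357).

 -- Your Name <you@example.com>  Wed, 05 Apr 2017 12:00:00 +0000
```

`dch` generates this skeleton for you; you only edit the bullet, the target release, and the version string.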
<blackflow> rbasak: quilt push -a worked fine. I see, yeah the series file defines order. Adding metadata per DEP3 now.
<blackflow> rbasak: okay, here's the patchfile: https://dpaste.de/SOwV
<blackflow> haven't committed anything yet, I'll fix the changelog now
<rbasak> blackflow: the patch file looks great.
<blackflow> dch is from devscripts, right?
<rbasak> I suggest committing the quilt change (addition to series file and the new patch file itself) separately from the changelog change.
<rbasak> Correct
<rbasak> (dch)
<rbasak> Committing separately makes it easier to cherry-pick, for example for xenial-devel.
<blackflow> understood.
<blackflow> rbasak: btw, the sign off name.... I don't have an @ubuntu address. Do I use the address I've registered with on launchpad?
<rbasak> blackflow: yes please - then Launchpad will be able to match it up to your Launchpad identity
<blackflow> rbasak: I should then also use the same e-mail addr in the git config, for commit logs?
<rbasak> We don't have a policy about that.
<rbasak> So whatever you prefer I think.
<blackflow> okay, I have my github addr set
<rbasak> That should be fine.
<rbasak> The git side is still very new.
<blackflow> one more question, my launchpad e-mail addr was designed just for launchpad (I use an alias for each website/service I reg to), it's not something I intended to have otherwise public. What do you suggest I do?
<blackflow> change my Launchpad e-mail to something that's okay to be public?
<blackflow> i'm not hiding anything, it's just spam control :)
<rbasak> Understood
<rbasak> Launchpad does understand multiple email addresses AFAIK.
<blackflow> eg my freebsd contributions public addr is vlad-fbsd@acheronmedia.com
<rbasak> So you might be able to keep your master one as your "Launchpad" spam control email.
<rbasak> And have a separate "Ubuntu public contribution" spam control email and tell Launchpad that one as well.
<rbasak> And then use the "Ubuntu public contribution" spam control email in your debian/changelog entries.
<rbasak> I think that should work.
<blackflow> understood.
<cpaelzer> rbasak: I'd need your opinion on proper changelog construction for SRUs
<cpaelzer> rbasak: apache2 in trusty has had a 2.4.7-1ubuntu4.14 in proposed but that failed verification
<cpaelzer> rbasak: now on creating a 2.4.7-1ubuntu4.15 for a different issue what is the right changelog approach
<cpaelzer> rbasak: a) mention the changes in the failed-to-verify as reverted
<cpaelzer> rbasak: b) not mentioning them but keeping the .14 version in the history  (I'd consider that wrong)
<rbasak> It's a good question.
<cpaelzer> rbasak: c) taking the old .14 OUT of the history so that for a user it goes .13 -> -15
<rbasak> I think my answer may change depending on how I'm feeling when you ask me!
<rbasak> I don't think we have consensus on this
<cpaelzer> rbasak:  I've seen people do a) but I personally would prefer c)
<rbasak> Let me ponder for a moment.
<rbasak> It might be worth asking in #ubuntu-devel BTW.
<cpaelzer> true
<cpaelzer> let me post there
<blackflow> rbasak: okay, patch changes committed, and this is the changelog diff, haven't committed yet: https://dpaste.de/mYs3
<rbasak> Looking
<rbasak> blackflow: perfect
<blackflow> rbasak: the new version was given by dch, I didn't have to manually update it, only UNRELEASED to yakkety
<rbasak> OK. In this case it's because it can do the 3->4 thing automatically.
<blackflow> any rule of thumb for commit message for the changelog only?
<rbasak> For the first SRU for a particular package, it's incapable of going from 2.0.25-2 to 2.0.25-2ubuntu0.16.10.1.
<rbasak> commit message> no rule. I use "Changelog for 2.0.25-2ubuntu0.16.10.4"
<blackflow> okay. so that's done then.
<rbasak> Let me summarise what is left.
<rbasak> Testing, SRU information in the bug, and then submission for sponsorship.
<blackflow> The change is in production on all our ubuntu servers since the day I filed that LP# . Does that count for testing?
<rbasak> It certainly helps and gives us much more confidence in the SRU.
<rbasak> But we also want to make sure that the updated source package will build for both Xenial and Yakkety.
<rbasak> And presumably we want to test Yakkety as well?
<blackflow> so, the full test cycle, I'm guessing would be to produce the package, and then run installation -> runtime -> deinstallation... ?
<blackflow> yeah, I haven't got access to a yakkety machine atm, I might spawn up a vm later
<rbasak> I don't usually test deinstallation. That doesn't usually regress for a change like this that doesn't really touch the packaging (only the object code in the final binary)
<rbasak> To build binary packages, you can do it locally, or in a PPA.
<rbasak> Setting up a local environment for clean package builds is a pain IMHO, but useful if you intend to do this often.
<rbasak> Using a PPA is certainly easier, but sometimes a longer wait for builds, and you can't do incremental debugging.
<blackflow> rbasak: Well, for now I'd like to speed up the resolution to that particular munin bug. In the process, I want to see what it takes to contribute such changes to Ubuntu and if it's something I'd be comfortable with doing more, esp. for stuff in Universe.
<blackflow> so the whole point of this is to go through full contrib cycle, as if I was aiming to become a dev.
<teward> i'd like to make a note that unless you have a *lot* of experience with things in Universe, or have worked with a ton of packages, you *might* be after individual package uploads.  Just saying.
<blackflow> I've been documenting all the steps we did today, and when this is done I'd like to update roundcube, it's in universe, it's old and vulnerable...
<teward> blackflow: define "old" - RoundCube does have an LTS release.  perhaps we track the LTS release there?
<teward> blah lag.
<blackflow> teward: the LTS release is old
<rbasak> OK. We certainly appreciate your help.
<rbasak> Shall we use a PPA for now to test, as that'll be quicker?
<teward> blackflow: point.missed == true.
<teward> anyways...
<rbasak> Then if we have time later I can go through setting up a local environment with you.
<blackflow> teward: that's a comparison, so I don't follow :)
<blackflow> rbasak: does it involve setting up a chroot with debootstrap?
<rbasak> https://wiki.ubuntu.com/SimpleSbuild is the best documentation we have on setting up a local environment I think.
<rbasak> blackflow: roughly yes, though we have wrappers so you don't do that directly
<blackflow> okay. I wouldn't mind that, all our deployments are ansible powered zfs on root debootstraps from debian rescue env, remote over ssh :)
<rbasak> IMHO it should be one command not 11 steps :-/
<rbasak> Well mk-sbuild wraps it all. I usually use that :)
<teward> ^ this
<rbasak> It's just that you still have to mess with ~/.sbuildrc last I checked, which IMHO shouldn't be necessary.
<teward> (and i have probably the most verbose set of sbuild schroots - all arm archs, i386 and amd64, all supported releases, plus Debian too :P)
<blackflow> rbasak: okay, so how about we try the faster, PPA route, and I'll check that doc in detail later
<rbasak> OK
<cpaelzer> blackflow: I wanted to note that I like that you use the USBSD to gain traction on being an even more active community member - that is just what we wanted this day to be for
<rbasak> So the PPA route is fairly straightforward. Just one minor thing I recommend.
<rbasak> Let's tweak the version in the changelog before uploading.
<teward> rbasak: do you know offhand who exactly would be the best person to prod about issues with clamd on servers?
<teward> sorry to intrude.
<cpaelzer> teward: I think he only wants to prep fixes not apply for MOTU or such yet
<rbasak> teward: probably me, cpaelzer or nacc
<teward> cpaelzer: I am going to argue otherwise because: [2017-04-05 10:22:01] <blackflow> so the whole point of this is to go through full contrib cycle, as if I was aiming to become a dev.
<blackflow> teward: what I meant wrt roundcube is that xenial shows this for policy: 1.2~beta+dfsg.1-0ubuntu1  which is old, according to the package changelog, it doesn't have fixes for at least three vulns, and roundcube 1.2.x is now at 1.2.4
<teward> cpaelzer: that's my confusion.
<rbasak> blackflow: this is to differentiate between what came out of the PPA vs. what came out of the archive later after the update lands.
<rbasak> blackflow: and also to allow you to bump the PPA version up for testing if necessary.
<teward> blackflow: ah, well allow me to make one note - security updates can be applied while the main version stays the same, such as we have to do for NGINX frequently with backporting security patches
<cpaelzer> teward: we all worked a bit on clamav, but there is no clear "this is the guy" marker on this package
<teward> and while for NGINX that's done by the Security team, blah.
<rbasak> blackflow: so in debian/changelog, make the version 2.0.25-2ubuntu0.16.10.4~ppa1
<rbasak> 2.0.25-2ubuntu0.16.10.4~ppa1 sorts _before_ 2.0.25-2ubuntu0.16.10.4
<teward> cpaelzer: ah, well, core problem is clamd is eating RAM.  And I mean ***eating*** RAM.  >= 50% RAM usage on a small mail server.
<rbasak> And allows you to have ppa2 if needed, etc
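The tilde ordering can be sanity-checked without dpkg: GNU sort -V follows the same rule for this case, namely that '~' sorts before anything, including the end of the string (a hedged demo; `dpkg --compare-versions` is the authoritative check):

```shell
# '~' sorts lower than end-of-string, so ~ppa1 precedes the final SRU version,
# and the plain Debian revision 2.0.25-2 precedes both ubuntu revisions.
printf '%s\n' \
    '2.0.25-2ubuntu0.16.10.4' \
    '2.0.25-2ubuntu0.16.10.4~ppa1' \
    '2.0.25-2' | sort -V
```

The output lists 2.0.25-2 first, then the ~ppa1 version, then the final 2.0.25-2ubuntu0.16.10.4, which is exactly why the archive upload later supersedes the PPA test build.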
<rbasak> blackflow: do you have a GPG key, and is it registered on Launchpad?
<blackflow> cpaelzer: correct, first I learn to walk, then I might run with an application for MOTU :)
<rbasak> Sorry just remembered that's a prerequisite for a PPA upload.
<teward> cpaelzer: also with their last statement I rest my case.
<blackflow> teward: I know, but unless I'm reading the wrong changelog, this hasn't gotten an update since march last year:  http://changelogs.ubuntu.com/changelogs/pool/universe/r/roundcube/roundcube_1.2~beta+dfsg.1-0ubuntu1/changelog
<teward> blackflow: I would suggest a different approach, I'd start by getting PPU for a handful of packages in Universe, master updating/packaging them, before hunting MOTU privs.  Even myself, I wouldn't apply for MOTU even though I have my fingerprints in multiple Universe packages.
<blackflow> rbasak: I understand
<blackflow> rbasak: no GPG key yet, no
<teward> (I'm fine just maintaining NGINX, and maybe a PPU application for ZNC, soon...)
<rbasak> blackflow: OK, so "gpg --gen-key" to sort that out. Defaults should be fine for a key you use for Ubuntu uploads. If you want, use a 4096 bit RSA key size.
<rbasak> blackflow: the name and email should match your sign-off line in debian/changelog
<teward> rbasak: cpaelzer: on the off chance there is one, is there a "high memory consumption by clamd" bug?  Because i let clamd run overnight, it ended up 50%+ RAM, and then over 200MB in Swap.
<teward> had to actually shut off the VPS in question to free up space.
<teward> Not cool.
<rbasak> blackflow: then upload the key to Launchpad using the web UI.
<rbasak> blackflow: (just the public part of course)
<blackflow> rbasak: got it, give me a minute
<rbasak> blackflow: oh, it looks like you have to push the key to the keyserver first, then give the Launchpad web UI the fingerprint.
<teward> rbasak: i was about to say... :)
<rbasak> blackflow: so that'll be (when you're ready) "gpg --keyserver keyserver.ubuntu.com --send-key <key id>" I think.
<teward> blackflow: and to support rbasak's last message: once you push to the key server wait 5 minutes and then add on Launchpad
<teward> i've had issues where it takes some time to propagate for LP to pick it up
 * rbasak last did this in 2011 :-/
<teward> rbasak: i win then, had to redo this in 2014 when my computer with most of my keys decided to fry the drive.  Oopsies.
<teward> And every upgrade I lose my devscripts and such but meh
<cpaelzer> pah I did in 2015 and would not remember
<cpaelzer> Who is better at forgetting contest is open
<teward> *raises hand*
<teward> Because I forgot what I did on Monday :)
<teward> literally.
<cpaelzer> ok, you won teward
<teward> that said, I'm still angry at clamd, 1GB RAM should be enough to run a small personal mail server, and it eating well over 50% RAM and over half my swap is not cool.  (Using Avast trial right now)
<teward> (with Amavis, etc.)
<teward> cpaelzer: well I forgot what I did on Monday because I had only two hours sleep the night before.  Sleep deprivation: not cool.
<cpaelzer> teward: I haven't seen such an issue in the last half year - I'll look into it a bit shortly if I find one by explicitly searching for the topic
<teward> That said, I slept over 14 hours on Monday -> Tuesday night so meh
 * cpaelzer lives a rather steady family life - I think I didn't sleep 14 hours in all my life actually
<blackflow> rbasak: yes, I found the whole procedure, I pushed the key, and registered, and verified just now.
<rbasak> OK great!
<rbasak> So now we need to build the source package and upload it.
<rbasak> Assuming you've tweaked the version in debian/changelog (I don't think you need to commit that, not sure, we'll see)
<blackflow> teward: LP accepted the pushed key right away. but I was ready for some caching or wait-till-we-process-it issues :)
<rbasak> You should be able to run "usd build-source" and that should do everything for you.
<rbasak> I hope.
<blackflow> rbasak: except the part where I haven't been able to get usd running :)
<rbasak> It should drop a .dsc, .debian.tar.gz and .orig.tar.gz and a .changes into the parent directory.
<rbasak> Oh.
<rbasak> OK, we'll do it manually ;)
<blackflow> that's why we went the git-only route
<cpaelzer> rbasak: need to set the signing key maybe? - well it will derive from his mail address in changelog if they match
<rbasak> blackflow: "git branch --track pristine-tar origin/ubuntu/pristine-tar"
<blackflow> I made sure the key is registered with the same name and e-mail addr I used in the changelog.
<cpaelzer> blackflow: great
<rbasak> blackflow: now "pristine-tar list" should show you some orig tarballs
<rbasak> ...but is missing 2.0.25, which we need.
<rbasak> So that's a bug :-(
<rbasak> It seems to be in Debian though.
<rbasak> That's interesting. I wonder if that's intentional?
<rbasak> blackflow: so undo: "git branch -d pristine-tar"
<rbasak> blackflow: and redo against the Debian pristine-tar branch: "git branch --track pristine-tar origin/debian/pristine-tar"
<powersj> rbasak: if a package is source only in zesty and I have a bug in xenial should I mark zesty "invalid"?
<cpaelzer> powersj: bug# ?
<rbasak> blackflow: we want the orig tarball against 2.0.25, since that's the part before the hyphen and corresponds to the upstream source tarball
<rbasak> powersj: that sounds correct to me
<powersj> cpaelzer: LP: #1664179 tomcat7 is the package
<ubottu> Launchpad bug 1664179 in tomcat7 (Ubuntu Yakkety) "Wrong POM dependency in javax.servlet.jsp:jsp-api:2.2" [High,In progress] https://launchpad.net/bugs/1664179
<blackflow> rbasak: okay, sec
<cpaelzer> ah this nice topic again, thanks for working on this powersj
<teward> cpaelzer: No rush, but if there's no issue I'll file one.  Maybe first-run issues but I doubt it...
<cpaelzer> teward: I found no related bugs, but searching gave me the impression that sizes 250-500m can be just normal
<cpaelzer> teward: http://unix.stackexchange.com/questions/114709/how-to-reduce-clamav-memory-usage
<teward> that's... inefficient.
<cpaelzer> teward: http://lists.clamav.net/pipermail/clamav-users/2014-May/000468.html
<teward> cpaelzer: can we update the wiki for PostfixAmavisNew to make a note about this for ClamAV, that if the server is low-RAM you can't use ClamAV?
<cpaelzer> teward: the first link tries to explain a bit why it is so
<cpaelzer> teward: if instead of "can't" we use something softer like "need to carefully consider due to high memory consumption" I think such an entry would be good
<blackflow> rbasak: okay, was looking up what pristine-tar is. anyway, it appears there's no origin/debian/pristine-tar, I'm not sure if I missed adding an upstream?   "error: the requested upstream branch 'origin/debian/pristine-tar' does not exist"
<teward> cpaelzer: well that's why i made the suggestion - someone with better doc writing exp. should write it :P
<cpaelzer> teward: here https://help.ubuntu.com/community/PostfixAmavisNew ?
<teward> but I think that needs to be added, make a note that it could consume up to 500MB just being idle of RAM and that's a consideration
<teward> cpaelzer: that's the one
<rbasak> blackflow: oh, sorry. "git fetch origin debian/pristine-tar"
<blackflow> rbasak: you mean importer/debian/pristine-tar? I have only that in remote origins
<blackflow> (and /ubuntu/ )
<blackflow> git fetch origin debian/pristine-tar   did not work, but    git fetch origin importer/debian/pristine-tar   did
 * cpaelzer stops from exploding in anger
<rbasak> blackflow: ah yes, sorry
<cpaelzer> powersj: might I borrow a few minutes from you to be my "polite answer man"?
<powersj> lol
<rbasak> blackflow: so then you need "git branch --track pristine-tar origin/importer/ubuntu/pristine-tar"
<powersj> cpaelzer: sure
<rbasak> blackflow: I'm doing this off the top of my head mostly, so sorry about the errors.
<blackflow> rbasak: no problem, it actually helps me understand the step and look up why it's wrong and how to fix it, myself.
<blackflow> okay, branch pristine-tar set up
<rbasak> OK so now "pristine-tar list" should work
<rbasak> "head debian/changelog" shows 2.0.25-2ubuntu0.16.10.4, so we want "pristine-tar checkout munin_2.0.25.orig.tar.gz"
<blackflow> rbasak: having installed pristine-tar, yes. (so that's now two pkgs I needed to install, pristine-tar and devscripts  --> wrt those bugs you've been filing for rough edges  :)  )
<rbasak> Noted, thanks :)
<rbasak> Now you should have a munin_2.0.25.orig.tar.gz file in the current directory.
<rbasak> Move that to the parent directory.
<rbasak> Then "dpkg-buildpackage -S -nc -d -I -i" should ask you to sign, and then you should have a source package ready for upload in the parent directory.
<blackflow> rbasak: wait, we're tracking debian/pristine-tar. there's no munin_2.0.25 in there, munin_1.2.5.orig.tar.gz is highest version available, unless I missed a step?
<rbasak> Hmm
<blackflow> which is weird, debian has 2.0.x in testing, stable and oldstable
<rbasak> I just did, in a fresh directory:
<cpaelzer> teward: updated the wiki page with a note about it
<rbasak> git clone git://git.launchpad.net/~usd-import-team/ubuntu/+source/munin test
<rbasak> cd test
<rbasak> git fetch origin importer/debian/pristine-tar
<rbasak> git branch --track pristine-tar origin/importer/debian/pristine-tar
<rbasak> pristine-tar list
<rbasak> and I see munin_2.0.25.orig.tar.gz in there.
<blackflow> oh wait, wait, I think I see what went wrong
<teward> cpaelzer: thanks
<rbasak> What is odd is that I expected it to be in origin/importer/*ubuntu*/pristine-tar, and I've filed a bug for that.
<rbasak> blackflow: do you need to delete your local pristine-tar branch and point it at the debian one again?
<blackflow> rbasak: okay, fixed. yes, I had to branch --track pristine-tar origin/importer/DEBIAN/pristine-tar   not ubuntu  (small caps, emphasis here)
<blackflow> rbasak: okay, 2.0.25.orig.tar.gz checked out
<rbasak> OK. Move it to the parent directory please
<rbasak> Then "dpkg-buildpackage -S -nc -d -I -i" should ask you to sign, and then you should have a source package ready for upload in the parent directory.
<rbasak> Run dpkg-buildpackage from the top level of the repository, not the parent directory.
<rbasak> It'll look for the orig.tar.gz in the parent directory.
<blackflow> rbasak: done, got the new tarball, .dsc and .changes
<rbasak> Great.
<rbasak> And the version in those files is suffixed ~ppa1, right?
<rbasak> Now go to https://launchpad.net/~ and create a PPA
<rbasak> I have one called "experimental" I use for this stuff.
<rbasak> Unless you already have one you can use?
<blackflow> rbasak: yes, I added that to the version as you suggested
<blackflow> Okay, PPA created
<blackflow> now, dput?
<rbasak> Yep!
<rbasak> "dput ppa:<lpid>/experimental <whatever>.changes"
<rbasak> <lpid> should not include the ~
<blackflow> yeah. and now I add the PPA to sources list on the test machine, install the update, run test ... ?
<nacc> it will take some time to build, but yeah
<nacc> rbasak: iirc, re: LP: #1680125, we can only import an orig tarball once, so if we find an upstream tag for something, we don't import it again
<ubottu> Launchpad bug 1680125 in usd-importer "pristine-tar branch for Ubuntu is missing orig tarballs needed for Ubuntu" [Undecided,New] https://launchpad.net/bugs/1680125
<nacc> rbasak: we would have to add some state above and beyond our `gbp-import-orig` logic, which is fine
<rbasak> nacc: understood, thanks
<cpaelzer> hi nacc btw I'm taking myself out of the list and repost a final time
<cpaelzer> FYI - It is Ubuntu Server Bug Squashing Day #2 - so even more than usual get to us with your questions if you want to participate working on bugs => http://www.mail-archive.com/ubuntu-server@lists.ubuntu.com/msg07353.html. Currently around: rbasak, nacc, powersj
<cpaelzer> wtf what-patch reports cdbs - never seen this
<cpaelzer> is this ancient packaging fun or did I just miss that so far
<nacc> cpaelzer: which package? there are few cdbs around
<rbasak> blackflow: let me cover the other two steps while you're doing that.
<rbasak> blackflow: for preparing the SRU bug, follow https://wiki.ubuntu.com/StableReleaseUpdates#Procedure
<rbasak> blackflow: to submit for upload, there are a few options.
<cpaelzer> nacc: numactl
<rbasak> blackflow: you can propose your git branch. To do this, push to Launchpad in your own git space, and then file a merge proposal against the importer branch you cloned.
<cpaelzer> but I just see written by pitti, so it might be old but of usual good pitti-quality then
<cpaelzer> ah only the cdbs-edit-patch
<rbasak> blackflow: cpaelzer, nacc or I would be happy to review and sponsor that from there.
<rbasak> blackflow: alternatively, the traditional method is to post a debdiff as an attachment to the bug. You can produce that by using "git diff origin/ubuntu/yakkety-devel lp..."
<rbasak> blackflow: and if you attach to the bug, subscribe ~ubuntu-sponsors to the bug, and it'll go into the sponsorship queue.
<rbasak> blackflow: finally, sponsors don't particularly mind the method used to get the proposed upload to them. If you linked to a source package or git tree or something somewhere that's usable, I think most sponsors would be happy to review and accept from there.
<rbasak> blackflow: in any case, I'll be happy to sponsor this upload of course :)
<blackflow> Understood. Now, this final step will have to wait a bit, I have to run some errands and then have to set up a yakkety machine to test it out.
<rbasak> OK.
<rbasak> I need to take a break. I'll be around later.
<blackflow> so I might ping you later if I need more help.
<rbasak> Sure, please do.
<blackflow> rbasak: Thank you A LOT for guiding me through this. excellent experience, learned a lot with a bunch of info I have to look up in detail
<Sinned_> dpb1: 6 hours it was lol :) Alert is gone now. Many thnx.. if I knew before the interval would be 6 hours, I would not have wasted 3 hours in searching a way to fix it -,- :D
<dpb1> Sinned_: sweet. :)
<powersj> rbasak: cpaelzer: LP: #1679792
<ubottu> Launchpad bug 1679792 in mongodb (Ubuntu) "Please remove i386 binaries for 1:3.4.1-3" [Undecided,New] https://launchpad.net/bugs/1679792
<powersj> too late for that to happen on zesty I assume, so should I propose for AA?
<powersj> or can migration still occur??
<cpaelzer> powersj: in general it can
<cpaelzer> powersj: Rule of thumb: currently the release Team is for zesty-proposed what usually the SRU Team is for e.g. xenial-proposed
<cpaelzer> powersj: but what shall migrate here - a full new version? - very very unlikely
<cpaelzer> powersj: do you have context on this or did you just run into it on bug triage?
<powersj> cpaelzer: bug triage - looks like version bump from 3.2 -> 3.4
<cpaelzer> yep found it in http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html
<cpaelzer> so it actually is in proposed from before freeze
<cpaelzer> powersj: the arm64 build was aborted it seems
<cpaelzer> powersj: IMHO that should be decided by whoever usually does mongodb + the Release Team
<nacc> rbasak: i'll try and fix the bugs you found today and respin the snap
<cpaelzer> powersj: they are who have to decide eventually anyway
<Sinned_> Hey dpb1 can I ask you 2 other things about landscape on premise? I already got it fixed, but would like to know why it happens :)
<cpaelzer> powersj: I'd recommend you to bring it up in #ubuntu-release and/or subscribe them
<powersj> cpaelzer: ok thx
<dpb1> Sinned_: sure, but it might be better if you post a Q to ask-ubuntu if they are long, tag with landscape.  I might even be the one to answer. haha
<cpaelzer> powersj: as I read the bug it is actually a request to an AA to remove the binaries, so not that much of classic triage for the server Team anyway
<Sinned_> we got lots of servers, if I restore a snapshot, I often get the following alert: update_security_db.sh not run in the last 70 mins or so. I fix it by running sudo -u landscape bash -x /opt/canonical/landscape/scripts/update_security_db.sh. What is the interval of this?
<rbasak> blackflow: you're very welcome! Thank you for driving the bug.
<cpaelzer> powersj: but AA density is high in ubuntu-release so still the right place
<Sinned_> dpb1: many thnx, will do that :)
<nacc> powersj: cpaelzer: i see a ton of work in LP: #1677578 -- but i don't know if you actually did or did not run their specific case
<ubottu> Launchpad bug 1677578 in php7.0 (Ubuntu) "php-fcgi: max_execution_time causes memory leaks" [Undecided,Opinion] https://launchpad.net/bugs/1677578
<nacc> and the replies are kind of going past the user without being real replies
<rbasak> nacc: wow thanks! I was just recording for the future - didn't think there was much urgency in it.
<cpaelzer> nacc: IMHO I ran his case excluding the last step asking him for his own pictures
<rbasak> nacc: I wonder if we can get automated snap publication? I think Launchpad can do that now, right?
<nacc> rbasak: yeah, i just haven't gotten to that point
<cpaelzer> nacc: his case was loading all images in a dir in a loop and thumbnailing them - I did so in comment #13
<nacc> cpaelzer: ok, it's a lot of comments and he explicitly asked if you ran his testcase a few times and it wasn't obvious to me if you did or didn't
<cpaelzer> nacc: he replied while I was working and LP isn't pushing - we both got to the point that a single huge image is enough - I even provided a way to construct it
<nacc> cpaelzer: right, but you seemed to be tracking something different
<nacc> cpaelzer: the memory limit issue is not what they are complaining about
<cpaelzer> nacc: so it is about cleanup
<cpaelzer> only now I see his posts in between mine :-/
<nacc> yeah
<cpaelzer> LP please autorefresh for me in future
<nacc> so i'm fine with the conclusion
<nacc> i just want the user to understand they weren't intentionally ignored
<nacc> and yes, i hate that about lp
<nacc> cpaelzer: re:LP: #1650493
<ubottu> Launchpad bug 1650493 in numactl (Ubuntu) "numastat <pid> fails with double free or corruption" [Medium,Incomplete] https://launchpad.net/bugs/1650493
<nacc> non-contig is very common under PowerVM
<nacc> because the hypervisor is dumb :)
<nacc> iirc, i did some stuff to allow for it in qemu upstream, but i can't remember if it got merged
<cpaelzer> nacc: I posted to the bug on finally understanding
<cpaelzer> nacc: will take a look tomorrow - but if you want USBSD on on this today as our php mastermind please feel free
<cpaelzer> powersj: sorry to lure you into that
<nacc> cpaelzer: np, i will take a look -- i also sort of own the php-imagick stuff so i would not be surprised if there are bugs there
<nacc> FYI - It is Ubuntu Server Bug Squashing Day #2 - so even more than usual get to us with your questions if you want to participate in working on bugs => http://www.mail-archive.com/ubuntu-server@lists.ubuntu.com/msg07353.html. Currently around: powersj, nacc.
<a01029> how do i run a command on startup in ubuntu server? there's no /etc/rc.local anymore
<Frickelpit> a01029: ofc it is, as a systemd service. rc-local.service
<sarnold> thanks Frickelpit, I hadn't seen that yet
<Frickelpit> np
<a01029> https://github.com/joeroback/dropbox/blob/master/dropbox%40.service
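For reference, a minimal oneshot unit along those lines might look like this (the unit name and script path are hypothetical; save it as e.g. /etc/systemd/system/mytask.service and run "systemctl enable mytask"):

```
[Unit]
Description=Run a command once at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mytask.sh

[Install]
WantedBy=multi-user.target
```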
<ztane> is there any convenient way of disabling *shutdown -h* on a 16.04 server
<sarnold> ztane: you could probably abuse this https://www.freedesktop.org/software/systemd/man/systemd-inhibit.html
<ztane> this is one box that cannot be restarted after halt without manual intervention so it is kind of awkward :D
<ztane> hmm, yea :)
<sarnold> ztane: another possibility is to chmod the shutdown executable to forbid execution. that's a different kind of brittle, since updates will put the modes back the way they should be..
<ztane> ... and I do want to reboot sometiems
<ztane> sarnold: hmm but krhm, this is not really convenient either ...
<ztane> because I can send ctrl-alt-del remotely, but I guess if I use this, then the vulcan nerve pinch would be disabled too.
<sarnold> no idea on the control-alt-del, I can't recall having used that in a decade..
<genii> !dontzap
<genii> Hm
<tarpman> ztane: maybe you want molly-guard
<sarnold> looks perfect
<dasjoe> sarnold: how would I go about an SRU to a package? What bothers me is fixed in zesty due to being a newer upstream release. I'm reading https://wiki.ubuntu.com/StableReleaseUpdates#Procedure but getting stuck.
<nacc> dasjoe: which package/bug?
<sarnold> hey dasjoe :)
<dasjoe> Hey :)
<dasjoe> nacc: network-manager-strongswan
<nacc> dasjoe: provide a .debdiff with a correct version for the fix to the older series in the bug
<sarnold> dasjoe: normally you'd file a bug report with the template in the wiki; fill out the bits you can; attach a debdiff, describe how you tested
<renatosilva> http://vpaste.net/KOK0K -- do these warnings sound problematic?
<tomreyn> whoever runs vpaste.net needs to fix their ipv6 availability
#ubuntu-server 2017-04-06
<renatosilva> https://pastebin.com/raw/RHDyE4mR
<j_> good night world
<tsimonq2> Good night j_ :)
<lordievader> Good morning
<DK2> hey im trying to debug an invalid certificate that should be valid
<DK2> im seeing that there are some parameters missing in the CSR, such as organization name etc, could that be the cause of it?
<cpaelzer> DK2: how do you check currently that it is invalid?
<cpaelzer> DK2: I'd have hoped that tells you at least a bit about the reason
<cpaelzer> Without any other data my first guess for unexpected invalid certs would be odd local time
<cpaelzer> DK2: since I don't know what cert you are looking for I don't know if it applies, but step by step checking like the following might be good https://www.cyberciti.biz/faq/test-ssl-certificates-diagnosis-ssl-certificate/
<DK2> i think i figured it out
<DK2> maybe its just my buggy thunderbird
<ikonia> win 4
<ikonia> sorry
<Village> nano vsftpd.conf - [ Error reading lock file /etc/.vsftpd.conf.swp: Not enough data read ]
<Village> hello, how fix it?
<ikonia> Village: you're editing a swap file
<ikonia> Village: you're supposed to edit the real file
<ikonia> I'd guess the swap file is caused by another editor crashing with the file open ?
<lordievader> Or another editor still has the file open.
<marahin> Hello! I'm trying to run systemd --user, but it results with
<marahin> Trying to run as user instance, but $XDG_RUNTIME_DIR is not set.
<marahin> Any idea how do I fix this?
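(A likely cause, though nobody answered in channel: a session started without pam_systemd, e.g. via su or cron, never gets the variable set. A hedged workaround is to point it at the conventional per-user runtime directory yourself before invoking systemd --user:)

```shell
# pam_systemd normally sets this at login; /run/user/<uid> is the
# conventional location it would use.
export XDG_RUNTIME_DIR="/run/user/$(id -u)"
echo "$XDG_RUNTIME_DIR"
```

If the directory doesn't exist (because pam_systemd never created it), logging in through a regular PAM session, or enabling lingering for the user, is the cleaner fix.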
<Village> ikonia, how can I make it not a swap file, unlock it?
<lordievader> Village: Is there another editor running with the file open?
<Village> no, i don't think so
<lordievader> Village: Did you verify?
<Village> now i im with vim
<Village> and i dont know how to to exit from file with vim
<lordievader> Village: :q
<lordievader> Given you didn't edit your buffer.
<Village> :q!
<Village> it needs :q! (force quit)
<Village> so how now i can unlock file?
<Village> It's now not opened with any editor
<lordievader> I usually remove those (don't even have the option enabled), but to be safe you can simply rename the file.
<Village> how can I remove it?
<Village> I removed the file and then it shows up again; maybe I need to remove
<lordievader> Village: To rename the file use 'mv', if you really want to delete it use 'rm'.
<Village> .vsftpd.conf.swp
<lordievader> Of course it is recreated if your editor is set up to create a swap file.
<Village> ok it's now free
<Village> thank You
<Village> I remove .vsftpd.conf.swp
<Village> and all good
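For reference, the cleanup Village did can be sketched in shell (file names taken from the discussion; only remove the swap file once you're sure no editor still has vsftpd.conf open):

```shell
# vim names its swap file .<name>.swp in the same directory as the file.
# List any leftover swap files under /etc, then remove the stale one.
ls -a /etc | grep '^\..*\.swp$' || echo "no swap files left"
rm -f /etc/.vsftpd.conf.swp   # safe: rm -f ignores a missing file
```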
<danpawlik> coreycb: hello!
<_yeeve> hey chat, those who use mysql-server, what have you done about the changes to sql_mode/group-by? It seems that I keep running into this "only_full_group_by" error and I'm ready to give in and just hardcode the change to be like it used to in 5.6
<nacc> rbasak: --^ ?
<erick3k> good morning
<rbasak> _yeeve: not sure. It's not in packaging, so is that a change in 5.7 perhaps? Or are you reporting some change in behaviour during the lifetime of a stable release?
<_yeeve> 5.7 changed to include a setting in sql_mode for `only_full_group_by` which causes errors in previous which worked. I've been fighting this a lot lately (in symfony you can add a setting to the doctrine options to change it for that project) but that's not always doable
<_yeeve> previous queries*
<_yeeve> This: http://stackoverflow.com/a/37248560 fixes the issue but it's not an ideal fix. At first I didn't want to change this, I was updating queries when I first ran into it but it's not feasible at all. I then moved to overriding specifically but I've got to the point where I can't override the setting via the platform code so I'm just doing it on a server-wide level now :( (there might be a way to set it per user or db/table but I didn't find
<_yeeve> anything when I looked)
<rbasak> Ah, OK.
<rbasak> I don't know much about that, sorry.
<_yeeve> No worries rbasak, cheers :)
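For anyone hitting the same only_full_group_by error, the server-wide override _yeeve settled on usually looks something like this (the file path and the exact mode list are assumptions; this is MySQL 5.7's default sql_mode with ONLY_FULL_GROUP_BY dropped, not an endorsement):

```ini
# /etc/mysql/mysql.conf.d/mysqld.cnf (path may differ per install)
[mysqld]
sql_mode = "STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
```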
<zioproto> coreycb: I am upgrading from Mitaka to Newton and I have hit this bug https://bugs.launchpad.net/horizon/+bug/1643964
<ubottu> Launchpad bug 1643964 in horizon (Ubuntu) "compressing static assets fails with xstatic-bootswatch 3.3.7.0" [Undecided,Fix released]
<zioproto> I got the error CommandError: An error occurred during rendering /usr/share/openstack-dashboard/openstack_dashboard/templates/horizon/_scripts.html: '\"../bower_components/respond/dest/respond.min.js\"' isn't accessible via COMPRESS_URL ('/horizon/static/') and can't be compressed
<zioproto> Following the discussion on the bug
<zioproto> I fixed it doing
<zioproto> apt-get remove python-django-horizon
<zioproto> and then I could finish the installation of openstack-dashboard
<coreycb> zioproto, zul, i think we came across that before.  are you using newton-proposed?   the problem i think is that we shouldn't be refreshing xstatic files in stable releases.
<zioproto> I am using newton stable
<zioproto> coreycb: are you saying the fix is in newton proposed but not yet in newton stable ?
<coreycb> zioproto, i'm hoping (and looking to see) that updated xstatic files weren't promoted to newton-updates.  are you using ubuntu packages or your own?
<zioproto> coreycb: I am using ->  sudo add-apt-repository cloud-archive:newton
<zioproto> coreycb: I have to leave my office ... I can give you feedback on this tomorrow
<zioproto> bye
<coreycb> zioproto, zul, i just checksummed the xstatic orig files for newton and it looks like xstatic files have not been updated for ubuntu packages since newton was released, so that's good
<Logos01> So ... not sure how useful this'll be to anyone here, but I'm currently working on evaluating the costs we're paying for ec2 instances in AWS, and as such I've been screwing around with jq. Thought I'd share the product since I know I'm not alone in being a sysadmin-type who doesn't know jq well.
<Logos01> aws ec2 describe-instances | jq '.[]|.[].Instances|.[] | select ( .State.Name == "running" ) | .InstanceId + "," + .InstanceType + "," + "\( .Tags[] | select ( .Key == "Name" ) | .Value)"'
<Logos01> That command will take the output of `aws ec2 describe-instances` and output in CSV-format the instance-id, instance-type, and instance-name ... for every running ec2 instance your CLI can find.  This makes for easy importation into spreadsheet.
<Logos01> Then calculate the annual cost of instances by multiplying the hourly rate for that instance-type by 8677
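A slightly tidier spelling of that filter uses jq's @csv; here's a self-contained sketch against a made-up describe-instances payload (the real input comes from `aws ec2 describe-instances`):

```shell
command -v jq >/dev/null || exit 0   # needs the jq package
# Fake, minimal describe-instances output for demonstration only.
cat > /tmp/instances.json <<'EOF'
{"Reservations":[{"Instances":[
  {"InstanceId":"i-abc123","InstanceType":"t2.micro",
   "State":{"Name":"running"},
   "Tags":[{"Key":"Name","Value":"web1"}]},
  {"InstanceId":"i-def456","InstanceType":"m4.large",
   "State":{"Name":"stopped"},
   "Tags":[{"Key":"Name","Value":"db1"}]}
]}]}
EOF
# Emit id,type,name for running instances as proper CSV.
jq -r '.Reservations[].Instances[]
       | select(.State.Name == "running")
       | [.InstanceId, .InstanceType,
          (.Tags[] | select(.Key == "Name") | .Value)]
       | @csv' /tmp/instances.json
# prints: "i-abc123","t2.micro","web1"
```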
<sarnold> you had much better success with jq than I did. I think I got lost with it when trying to parse maps (objects?) with arbitrary keys
<sarnold> I had better luck with jshon, it fit my mental model far better; but I spent less time learning how to use serde_json in rust and got more done. hehe. :)
<Logos01> sarnold: heh.
<powersj> rbasak: did you have an updated ubuntu-server-triage merge for me to look at yet? Should I accept what is there?
<rbasak> powersj: sorry, not yet
<powersj> rbasak: no worries, just wanted to be sure I haven't missed it :)
<rbasak> powersj: feel free to accept what is there though. It should do no harm.
<rbasak> (IIRC no functional change either, unless you use the new option)
<powersj> rbasak: thx - was just looking at it and do like the changes, so I'll grab 'em both
<rbasak> We can make it default in a future commit
<rbasak> (after changing the bits we discussed)
<manukapua> howdy - i'm trying to clean out a boot partition on a small VM, GRUB version is 1.99 i want to know if i can dpkg purge a kernel version later than the "uname -r" one which is the only one that boots ?
<manukapua> with out breaking things : )
<sarnold> manukapua: without knowing details it's hard to be specific
<sarnold> manukapua: in general, I like to keep at least two kernels installed, sometimes three -- the newest kernel, the currently running kernel, and maybe the next most-recent kernel.
<manukapua> sarnold: the more recent kernels dont "work" due to lack of disk space need to purge them
<sarnold> ow :)
<sarnold> then by all means start deleting away
<manukapua> 3.2.0-48-virtual is good but 51 and 126 are borked
<manukapua> doing housekeeping , leaned out a lot of older ones
<sarnold> does apt-get purge work at that point?
<manukapua> *cleaned
<sarnold> or is it too busted/
<manukapua> ive been doing dpkg --force-all -P on the earlier ones
<manukapua> as apt-get purge wouldn't
<manukapua> just need to be sure its ok to go for the "latest" ones
<sarnold> yeah
<sarnold> you can even delete the currently running kernel, if you're positive another kernel actually works
<sarnold> the trouble with removing the currently running kernel is that you can't then load any new kernel modules (say, for cifs or something)
<manukapua> want to keep current running and remove a couple of more recent ones..
<manukapua> already ditched everything prior to currently running
<manukapua> just being cautious - as do not want to break it
#ubuntu-server 2017-04-07
<manukapua> cool that seemed to work ok cheers sarnold for the assistance
<sarnold> manukapua: don't forget to put back the -meta kernel package if you removed it earlier, to keep getting kernel updates
<manukapua> its still there : )
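A quick way to act on sarnold's advice before purging anything: check what's running against what's installed (a sketch; package names assume the stock linux-image-* naming):

```shell
# The kernel you're on right now -- never a purge candidate unless
# you've verified another installed kernel boots.
uname -r
# Installed kernel packages ('ii' = properly installed).
dpkg -l 'linux-image-*' 2>/dev/null | awk '/^ii/ {print $2}'
```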
<lxle> Anyone know if when 'unattended-upgrades' is set to download the packagelist, does it download an entire packagelist or just the 'allowed' repository parameters you have set, like 'security'
<lordievader> Good morning
<zioproto> coreycb, rbasak at SWITCH testing our new Xenial/Newton setup we figured out that we are hitting this bug https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1647389 This must be fixed before we can move on from Trusty/Mitaka. We need the bug fixed to be able to live-migrate VMs safely, because we evacuate the nova-compute nodes to reprovision them with
<zioproto> Xenial/Newton. Do you know about this bug ?
<ubottu> Launchpad bug 1647389 in qemu (Ubuntu Xenial) "Regression: Live migrations can still crash after CVE-2016-5403 fix" [High,Confirmed]
<rbasak> I don't know about it, sorry.
<rbasak> I'd ask cpaelzer but he's out
<rbasak> jgrimm: ^
<zioproto> rbasak: but you commented on the bug :O !
<zioproto> are you Robie Basak (racb) ?
<rbasak> zioproto: not with any knowledge, just referring more relevant people.
<rbasak> Yes.
<zioproto> ah ok :)
<zioproto> I'll come back here when it's daytime in the US :)
<coreycb> zioproto, did you see the workaround mentioned?
<coreycb> zioproto, i'm just scrolling through the bug coming up to speed
<zioproto> coreycb: if you mean  'virsh dommemstat --live --period 0 <VM instance name>' we tried this and did not work
<coreycb> zioproto, it looks like turning live_migration_tunnelled off is a work around too
<zioproto> coreycb: ok, I found something else to check. We had the default value for live_migration_tunnelled
<mdeslaur> zioproto: FYI, there are _untested_ pre-release packages to work around that bug here: https://launchpad.net/~ubuntu-security-proposed/+archive/ubuntu/ppa/+packages
<zioproto> now it looks like Mitaka has live_migration_tunnelled default to True and Newton has live_migration_tunnelled default to False
<coreycb> thanks mdeslaur
<mdeslaur> they haven't had any sort of QA yet, but you're welcome to try them out in a test environment
<zioproto> mdeslaur: thanks ! I will try them !
<zioproto> mdeslaur: the source server is Trusty, the destination server is Xenial. Is it enough to install the qemu package on the Xenial server to test the live migration fix?
<mdeslaur> zioproto: I think so, but I'm not sure, sorry.
<cpaelzer> rbasak: here chiluk is the expert on this bug
 * cpaelzer is not here
<cpaelzer> and I see mdeslaur is also here, perfect
<cpaelzer> so I can be away then
 * cpaelzer silences his speaker to not get back to the office on a highlight
<zioproto> I have a question on live_migration_tunnelled in nova.conf
<zioproto> so is this an important setting to set explicitly in nova.conf, on the controller or on the compute nodes?
<zioproto> mdeslaur: so we installed qemu-system-x86_2.5+dfsg-5ubuntu10.11_amd64.deb and qemu-block-extra_2.5+dfsg-5ubuntu10.11_amd64.deb only, and only on the Xenial target compute node. We did not touch the source Trusty compute node at all. I can confirm these packages fixed bug #1647389 for me
<ubottu> bug 1647389 in qemu (Ubuntu Xenial) "Regression: Live migrations can still crash after CVE-2016-5403 fix" [High,Confirmed] https://launchpad.net/bugs/1647389
<mdeslaur> zioproto: oh, great, thanks for testing it
<zioproto> my team here is thanking you, because this package repo is not linked on the bug report
<zioproto> I guess if the link were there more people would have tested the packages, because 12 people are watching the bug
<mdeslaur> zioproto: they will be released as security updates once they have been through QA...probably in a couple of weeks
<mdeslaur> I'll add a comment to the bug
<zioproto> thank you
<nacc> cpaelzer: re: LP: #1673714, what is dquilt?
<ubottu> Launchpad bug 1673714 in usd-importer "provide a subcommand to convert a git sha to ubuntu patch" [Undecided,New] https://launchpad.net/bugs/1673714
<erick3k> quick question
<erick3k> the command route add is permanent
<nacc> erick3k: that's not a question, but no
<nacc> erick3k: if you reboot, the routes will be gone
<erick3k> hehe
<erick3k> nacc what's the best way to add them permanently that doesn't involve putting them in /etc/network/interfaces?
<erick3k> and apart from putting them in /etc/rc.local
<erick3k> or rc.local best way?
<nacc> erick3k: any reason for not using /e/n/i ?
<erick3k> Yes, cloud-init will remove the post up / post down commands
<erick3k> in interfaces
<erick3k> so it has to be somewhere else
<nacc> erick3k: that seems like something to ask cloud-init about, i assume there is either a way to not do that or some config option
<erick3k> yes am sure cloud-init has an option, but my current system won't allow me
<erick3k> so forget about cloud-init
<erick3k> back to first question hehe
<nacc> erick3k: will c-i remove /e/n/i.d entries too?
<erick3k> well am wrong, and you are kind of right
<erick3k> thats where ci puts the config
<erick3k> so i guess it doesn't touch interfaces
<nacc> there you go :)
<erick3k> that's the best place anyway to put permanent routes?
<nacc> erick3k: aiui, but i'm not necessarily an expert on it :)
<erick3k> kool ty
<nacc> erick3k: np
<erick3k> ummm
<erick3k> any ideas what can be wrong? looks good to me https://i.imgur.com/dyJhjhD.png
<nacc> erick3k: do you have the `route` command? or what is happening?
<erick3k> not sure, i am following this guide http://docs.ovh.ca/en/guides-network-bridging.html
<erick3k> but yea route command is there
<erick3k> i can use it manually to add the routes and works
<nacc> erick3k: oh sorry misread the bottom of it
<nacc> erick3k: which line is 14?
<erick3k> first post-up command
<nacc> erick3k: oh of course
<nacc> erick3k: it's an iface option
<nacc> erick3k: what are you post-up'ing in that file? :)
<erick3k> umm the routes
<nacc> erick3k: as in, post "what"'s up state would that command run?
<nacc> erick3k: routes aren't up
<erick3k> it should run route add when the interface goes up
<nacc> erick3k: right, *which* interface
<nacc> erick3k: syntactically your file makes no sense
<erick3k> is on /.d
<nacc> erick3k: post-up only makes sense in the context of a interface definition
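To illustrate nacc's point: a post-up line has to sit inside the stanza of the interface it belongs to. A hypothetical interfaces.d fragment (file name and addresses made up):

```text
# /etc/network/interfaces.d/99-static-routes.cfg (hypothetical)
auto eth0
iface eth0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    # post-up runs after *this* interface comes up
    post-up ip route add 198.51.100.0/24 via 192.0.2.1
```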
<erick3k> ok so can't put them on .d cuz is gonna be modified by ci
<erick3k> so am stuck to rc.local then?
<nacc> erick3k: so you want to add a route to a iface that c-i configures?
<erick3k> correct
<erick3k> ci configures on /e/n/i.d
<erick3k> without having to tell ci to do it because i can't
<nacc> i don't understand fully why you can't but ok
<nacc> erick3k: you don't control your cloud-config?
<nacc> erick3k: it seems like you can define routes via systemd.network as well
<erick3k> yes and no, is limited. That would be the obvious solution but perhaps why am here looking for another solution
<erick3k> rc.local is gonna work, I've used it before, but I don't think it's the best way
<erick3k> nacc i would have to somehow modify rhev completely, templates etc to be able to add routes on cloud-init launch
<erick3k> that is beyond me
<erick3k> thats why
<nacc> erick3k: huh?
<erick3k> rhev / ovirt
<nacc> erick3k: you aren't able to pass cloud-config to your instances?
<erick3k> yes
<erick3k> but limited config
<erick3k> so ovirt which is the system i use
<nacc> erick3k: hrm, dunno
<erick3k> doesn't have an option to pass routes throught cloud-init
<nacc> erick3k: but ok
<erick3k> so i simple can't use cloud init to pass routes
<erick3k> very simple
<erick3k> so i want permanent routes on the O.S
<erick3k> that has nothing to do with cloud-init
<nacc> erick3k: but you can write arbitrary files? i'm super confused :)
<erick3k> nacc sorry got dced
<erick3k> but nvm i'll just stick to adding routes with rc.local
<nacc> erick3k: yeah, i think that's easiest for wht you've described
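Since erick3k settled on rc.local, a sketch of what that file might contain (addresses hypothetical; /etc/rc.local must be executable and runs once at the end of boot on systems that still honour it):

```shell
#!/bin/sh -e
# /etc/rc.local (sketch): re-add static routes on every boot.
# '|| true' keeps a duplicate-route error from aborting the script.
ip route add 198.51.100.0/24 via 192.0.2.1 || true
exit 0
```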
<teward> has anyone had an issue where an upgrade on Xenial of mysql-server-5.7 would totally lag out and just consume all the memory and swap?
<teward> because that happened on my mail server :/
<erick3k> hi
<erick3k> i see in ubuntu 16.04 /etc/init/rc-sysinit.conf was removed
<erick3k> what is it now?
<sarnold> I can't figure out what that thing's supposed to do..
<erick3k> what thing
<erick3k> ?
<sarnold> erick3k: systemd aims to run multi-user.target, and runs whatever is supposed to be run to make that happen
<sarnold> erick3k: if you want to change the systemd target, this might help https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/sect-Managing_Services_with_systemd-Targets.html#sect-Managing_Services_with_systemd-Targets-Change_Default
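The gist of the linked guide, for the impatient (a sketch; set-default needs root):

```shell
# Show which target systemd boots into by default:
systemctl get-default 2>/dev/null || true
# To change it (as root), e.g. to skip the graphical stack:
#   systemctl set-default multi-user.target
```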
<erick3k> ummm
<erick3k> i just wanted to change start on (filesystem and static-network-up) or failsafe-boot
<erick3k> but is no longer in ubuntu 16
<erick3k> 14 the file is there
<sarnold> what's the task that you're actually trying to solve?
<erick3k> my problems is network timeouts on boot
<erick3k> so i want to disable network to start on boot
<erick3k> or bypass all timeouts or waits for network
<erick3k> in centos i put ONBOOT=no
<erick3k> ubuntu 14 change start on (filesystem)
<erick3k> now on 16 i don't know
<erick3k> doesn't have same options as 14
<erick3k> the root of the problem is that i don't use dhcp for network, instead i use static but it's injected by cloud-init
<erick3k> and is done after the network is started
<erick3k> so it will timeout
<erick3k> sarnold this worked perfectly on ubuntu 14
<erick3k> http://askubuntu.com/questions/185515/disable-network-configuration-services-during-boot-time
<erick3k> i want the same for ubuntu 16
<erick3k> simple
<Logos01> ... I'm assuming you mean 14.04 and 16.04 respectively
<erick3k> correct
<Logos01> ( Y.ear Y.ear . M.onth M.onth ) -- sorry, that's a pet peeve of mine
<erick3k> but those files are not present on ubuntu 16.04 only in 14.04
<Logos01> Right because it operates differently
<Logos01> You're seeing the timeout/delay?
<erick3k> yes
<erick3k> very long
<Logos01> Oh wait ... "instead I use static but is injected by cloud-init"
<Logos01> This is in what cloud provider?
<erick3k> yes
<rharper> erick3k: please join #cloud-init and we can help debug from there
<erick3k> oh trust me
<erick3k> already tried
<erick3k> only one that can help is smores and he's busy all the time
<rharper> I work on cloud-init as well, have you filed a bug with the details ?
<erick3k> i know cloud-init has a timeout on cloud-init-nonet.conf that i already removed
<erick3k> am not looking for cloud-init timeouts but the waiting-for-network one in ubuntu 16 while booting
<Logos01> erick3k: Have you looked at /etc/network/interfaces yet?
<erick3k> yep, no timeout
<Logos01> No, it wouldn't be there; that's not even a concept.
<rharper> in general, the 'networking' service in 16.04 will look at the eni and for each iface thats marked 'auto'; it will wait for all of those to become up
<Logos01> Care to share your /etc/network/interfaces ?
<rharper> if one or more of those marked auto does not come up, then the networking service itself blocks;
<Logos01> Yeah -- I was going to suggest changing the device from "auto" to "manual" in the config and then let cloud-init handle initializing it
<Logos01> It's not so much that it *blocks*
<Logos01> But that you never pass the network target and so never reach the multi-user target
<rharper> but the service itself does block
<rharper> that's by design
<erick3k> lord i just want the corresponding files /etc/init/failsafe.conf and /etc/init/rc-sysinit.conf but for ubuntu 16.04 that are no longer present
<Logos01> rharper: I'm saying this in systemd terminology
<rharper> you told it, bring all of these interfaces up
<rharper> I mean, it's a oneshot service, calling ifup and friends
<rharper> so, is it systemd is it the unit script?
<Logos01> erick3k: The concept doesn't exist anymore; there is no corresponding files.
<erick3k> right
<rharper> doesn't really matter; the effect is to block until all interfaces in eni marked auto are up
<erick3k> so i want to know where are the waiting for dhcp timeouts on boot
<Logos01> erick3k: The thing that those files were doing, that you needed to tweak -- it no longer exists.
<Logos01> You have a different problem.
<erick3k> logos01 i understand they no longer exist, which is why i'm asking for a 16.04 solution
<Logos01> rharper: Well if you speak it in systemd nomenclature it helps you to troubleshoot/diagnose what/where is going on.
<rharper> it does an ifquery --allow-auto to list the interfaces from eni which are marked auto; until ifup on each of those interfaces returns and they're up it blocks (for some fixed time)
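rharper's description of the networking service boils down to roughly this (a sketch; the real ifup@/networking units also apply timeouts and run interfaces in parallel, and the exact ifquery spelling here is an assumption from ifupdown's manpage):

```shell
# List interfaces marked 'auto' in /etc/network/interfaces and its
# sourced fragments, then bring each one up; ifup blocks per interface.
for iface in $(ifquery --list --allow=auto 2>/dev/null); do
    ifup "$iface" || echo "could not bring up $iface"
done
```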
<Logos01> erick3k: Right. Are the interfaces marked auto or manual in /etc/network/interfaces ?
<erick3k> logos01 they will be auto after cloud-init injects it
<erick3k> now say i seal the vm before using cloud-init
<Logos01> erick3k: Right but they're *NOT* auto when the VM is booting.
<erick3k> what should be on interfaces to avoid any timeouts
<erick3k> correct
<erick3k> so what should i put there
<erick3k> to avoid them timeouts
<Logos01> erick3k: Can we see your /etc/network/interfaces file?
<erick3k> auto lo
<erick3k> iface lo inet loopback
<erick3k> source /etc/network/interfaces.d/*.cfg
<erick3k> thats it
<Logos01> Okay what's in the /etc/network/interfaces.d/ directory?
<rharper> so cloud-init will write a 'dhcp on "first" interface' config on instances which do not provide a network configuration; if you do not want cloud-init to attempt to configure networking via fallback, you can tell cloud-init to disable network configuration
<erick3k> https://i.imgur.com/EJDmklx.png
<rharper> there's a 50-cloud-init eni file which will attempt to dhcp on one of the interfaces
<erick3k> yes i can delete that 50-cloud-init.cfg before sealing the vm
<erick3k> thats not a problem
<Logos01> erick3k: lpaste the output of:  `grep -H '' /etc/network/interfaces.d/*.cfg`
<erick3k> the only file that is on inderfaces.d will be deleted
<erick3k> before sealing the vm logos01
<erick3k> so makes no sense showing you that file
<Logos01> Well this is where your problem is ... because the machine is attempting to dhcp up -- it has to, because that's how cloud-init *works*
<erick3k> right
<Logos01> Even if you get a static lease on whatever the following interface/address is, that's different.
<rharper> no, you need to tell cloud-init via config that you don't want it to attempt to configure networking; you can write out "network: {config: disabled}" into a /etc/cloud/cloud.cfg.d/network-disable.cfg
<Logos01> You're getting blocked because you're not *getting* dhcp addressing the machine is looking for.
<erick3k> so on ubuntu 14 i fixed by using those commands or mods i showed you on ask ubuntu
<rharper> that prevents cloud-init from attempting to configure networking for you
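rharper's suggestion as a file, verbatim from the discussion (drop it in place before sealing the template):

```yaml
# /etc/cloud/cloud.cfg.d/network-disable.cfg
# Tells cloud-init not to render any network configuration at all.
network: {config: disabled}
```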
<Logos01> erick3k: That didn't actually fix the problem in question. 14.04 used a *completely different* networking model than 16.04
<Logos01> All it did was make the problem non-blocking to boot.
<Logos01> You cannot do that in 16.04.  It cannot be done.
<Logos01> You have to either resolve the problem or have a successful startup of the networking service that doesn't depend on the problem's being solved.
<erick3k> ok so i will get 15 minutes time out while booting no matter what i do?
<Logos01> Did I just say that?
<Logos01> (No.)
<erick3k> well the solution would be not to get network up until cloud-init injects the static ip
<erick3k> but that is not what's happening
<rharper> how are you injecting networking in cloud-init ?
<erick3k> config drive / nocloud
<rharper> then you have control over what gets rendered in the 50-cloud-init.cfg file
<erick3k> and it works
<rharper> in the network config in the nocloud seed, make sure you don't attempt to dhcp on non-existent interfaces
<erick3k> https://i.imgur.com/PXBt5N2.png
<erick3k> perfectly
<erick3k> only after 15 minutes of timeouts
<rharper> so I suspect that your first nic isn't getting the name 'eth0'
<rharper> 16.04 uses persistent nic naming, so it'll be like ens3 or ens1p1
<erick3k> yea i did the grub biosdevicename thingy
<erick3k> na, it must be eth0
<rharper> that looks fine, if eth0 is present, then ifup on eth0 should be just fine;
<erick3k> right but when i seal the vm
<erick3k> eth0 will not exist until cloud-init injects it
<erick3k> so ubuntu will timeout trying to get it with dhcp
<rharper> so you must not be embedding the datasource into /var/lib/cloud/seed/nocloud-net/
<erick3k> ummm
<erick3k> let me try
<rharper> when you boot it somewhere else, it'll have a new instance id, in which case the previous config may not apply
<rharper> in there you'll also set a meta-data with an instance-id: <unique string>
<erick3k> yes that i delete before sealing
<erick3k> correct?
<erick3k>  /var/lib/cloud/instances
<rharper> if this is a template then, yes;
<erick3k> okay no timeout
<erick3k> that worked
<erick3k> i think i wasn't clearing /var/lib/cloud/instances and caused the problem
<rharper> you can also write a network-config file for the nocloud-net seed like so
<rharper> http://paste.ubuntu.com/24335893/
<rharper> format is documented here: http://curtin.readthedocs.io/en/latest/topics/networking.html ; I've a PR to get that published under the cloud-init docs
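Putting rharper's pointers together, a NoCloud seed might look like this (a sketch in the version-1 format from the linked curtin docs; instance id, interface name, and addresses are made up):

```yaml
# /var/lib/cloud/seed/nocloud-net/meta-data
instance-id: iid-example-001   # must change when re-sealing the template

# /var/lib/cloud/seed/nocloud-net/network-config
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 192.0.2.10/24
        gateway: 192.0.2.1
```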
<ThiagoCMC> Guys, I'm trying to launch an Instance on Ocata / Ubuntu 16.04, with DPDK and hugepages, the following error is appearing on nova-compute.log:
<ThiagoCMC> can't open backing store /dev/hugepages-1048576/libvirt/qemu for guest RAM: Permission denied
<ThiagoCMC> Any tips?
<ThiagoCMC> Might be related to https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1524737 ?
<ubottu> Launchpad bug 1524737 in libvirt (Ubuntu Wily) "systemd presents hugetblfs at /dev/hugepages" [Undecided,New]
<ThiagoCMC> I'm also seeing "apparmor="DENIED"" at my syslog...  =/
<sarnold> ThiagoCMC: ubuntu-bug libvirt, please, and make sure to include those DENIED lines
<ThiagoCMC> Damn... ok!
<ThiagoCMC> =P
<ThiagoCMC> "ubuntu-bug libvirt0", right?
<ThiagoCMC> never mind, sending
<rharper> https://help.ubuntu.com/lts/serverguide/DPDK.html
<rharper> may be of help as well
<sarnold> rharper: nice, thanks
<rharper> all thanks go to cpaelzer
<sarnold> cpaelzer, thanks for the nice https://help.ubuntu.com/lts/serverguide/DPDK.html guide it looks like good reading :)
<ThiagoCMC> Reported: https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1524737
<ubottu> Launchpad bug 1524737 in libvirt (Ubuntu Wily) "systemd presents hugetblfs at /dev/hugepages" [Undecided,Confirmed]
<ThiagoCMC> Oops
<ThiagoCMC> Damn clipboard are...
<ThiagoCMC> https://bugs.launchpad.net/cloud-archive/+bug/1680956
<ubottu> Launchpad bug 1680956 in Ubuntu Cloud Archive "Fail to launch an OpenStack Instance with hugepages on top of OVS+DPDK" [Undecided,New]
<ThiagoCMC> I can easily use Ubuntu with DPDK for my VMs...
<ThiagoCMC> Problem is when with OpenStack... Plain KVM is fine.
<nacc> ThiagoCMC: do you know ifthe libvirt in openstack has the fix to LP: #1524737 in it?
<ubottu> Launchpad bug 1524737 in libvirt (Ubuntu Wily) "systemd presents hugetblfs at /dev/hugepages" [Undecided,Confirmed] https://launchpad.net/bugs/1524737
<ThiagoCMC> It is: libvirt-bin 2.5.0-3ubuntu5~cloud0
<nacc> ThiagoCMC: do you know if the libvirt in openstack has the fix to LP: #1524737 in it?
<ubottu> Launchpad bug 1524737 in libvirt (Ubuntu Wily) "systemd presents hugetblfs at /dev/hugepages" [Undecided,Confirmed] https://launchpad.net/bugs/1524737
<nacc> bah
<nacc> ThiagoCMC: do you know if the libvirt in openstack has the fix to LP: #1524737 in it?
<nacc> i'm also a bit confused why that bug was closed by 1.2.21-2ubuntu3
<ThiagoCMC> Hmmm.. I don't...
<ThiagoCMC> If it was fixed a long time ago...
<nacc> ThiagoCMC: sorry bad keystroke earlier
<ThiagoCMC> np
<nacc> ah it was fixed in 1.2.21-2ubuntu2 and lp is being weird
<ThiagoCMC> maybe re-introduced?
<nacc> ThiagoCMC: about to eat lunch but i can check on it
<ThiagoCMC> Enjoy it!   :-D
<erick3k> omg help
<erick3k> rharper you still on?
<tsimonq2> !help | erick3k
<ubottu> erick3k: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
<erick3k> the machine now gets stuck here https://i.imgur.com/DVpf0RX.png
<erick3k> why is raid loading am using no raid thing
<nacc> erick3k: that's mdadm reloading
<nacc> erick3k: *loading
<nacc> erick3k: it has to know what raid levels are allowed so that you can use raid if you want or not
<nacc> erick3k: i'm 99% sure that's not what'
<nacc> *that's not what's hanging
<nacc> ThiagoCMC: back, sorry for the delay
<erick3k> nacc ok checking
<ThiagoCMC> nacc, oh, that's totally ok!   ^_^
<nacc> ThiagoCMC: ok, so 2.5.0-3ubuntu5, in the source, does have
<nacc>   # for access to hugepages
<nacc>   owner "/run/hugepages/kvm/libvirt/qemu/**" rw,
<nacc>   owner "/dev/hugepages/libvirt/qemu/**" rw,
<ThiagoCMC> Hmm...
<ThiagoCMC> =/
<nacc> ThiagoCMC: should be able to confirm by looking at /etc/apparmor.d/abstractions/libvirt-qemu on the host
<ThiagoCMC> Checking...
<nacc> ThiagoCMC: oh!
<nacc> ThiagoCMC: the path changed!
<nacc> ThiagoCMC: why is it /dev/hugepages-1048576 ?
<nacc> ThiagoCMC: can you pastebin `ls -ahl /dev/hugepages*`?
<ThiagoCMC> Yes, /etc/apparmor.d/abstractions/libvirt-qemu has those two lines.
<ThiagoCMC> ls -ahl output: https://paste.ubuntu.com/24336518/
<nacc> ThiagoCMC: is this the system where you were using 1g hugepages?
<ThiagoCMC> yep
<ThiagoCMC> I can launch an Instance on top of OVS+DPDK (OVN topology), but, when I try the hugepages on OpenStack's flavor, it doesn't boot because of this apparmor stuff
<nacc> sarnold: being completely dumb about apparmor, can you look at LP: #1689056 ? I am unable to put my comment in lp, due to timeouts right now, but: http://paste.ubuntu.com/24336524/
<ubottu> Error: Launchpad bug 1689056 could not be found
<nacc> bah
<nacc> LP: #1680956
<ubottu> Launchpad bug 1680956 in Ubuntu Cloud Archive "Fail to launch an OpenStack Instance with hugepages on top of OVS+DPDK" [Undecided,New] https://launchpad.net/bugs/1680956
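For context, the shipped abstraction nacc quoted covers /dev/hugepages but not the per-size /dev/hugepages-1048576 mount from ThiagoCMC's denial; a hypothetical extra rule (a guess at the shape of a fix, not the actual resolution of the bug) would be:

```text
# /etc/apparmor.d/abstractions/libvirt-qemu (hypothetical addition)
owner "/dev/hugepages-1048576/libvirt/qemu/**" rw,
```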
<nacc> sarnold: also, why does `man apparmor` on 17.04 refer to upstart? :)
<nacc> "Ubuntu systems use upstart(8) instead of a traditional SysV init system."
<nacc> ugh
<ThiagoCMC> Nevertheless, something else is going on here... I mean, this is for sure a problem (LP 1680956). And, when I try it without hugepages, the VM boots up but, it can't ping OVN L3 Router... But, this is a different problem entirely.
<ubottu> Launchpad bug 1680956 in Ubuntu Cloud Archive "Fail to launch an OpenStack Instance with hugepages on top of OVS+DPDK" [Undecided,New] https://launchpad.net/bugs/1680956
<ThiagoCMC> Oh this ubottu... lol
<ThiagoCMC> =P
<nacc> sarnold: and one last thing for the security time, the manpage refers to : http://wiki.apparmor.net/index.php/Distro_ubuntu
<nacc> which, um, is as current as 11.04 :)
<erick3k> nacc what could it be?
<erick3k> i can't find out why it's getting stuck
<erick3k> nacc now booted, where can i look for that ugly timeout
<erick3k> ?
<nacc> erick3k: i'd look in syslog, messages, dmesg, etc
<erick3k> k
<erick3k> nacc it's the stupid network again
<erick3k> i can reproduce by using cloud-init network as dhcp
<nacc> erick3k: ah
<erick3k> i use static btw
<nacc> erick3k: do you not have network on first boot or something? or using dhcp but not really?
<erick3k> this is the options on the template hold on
<sarnold> nacc: wow, nice bugs all around :)
<erick3k> https://i.imgur.com/IG9URrT.png
<nacc> sarnold: my guess on profile seems ok, at least :) but will probably need a fix and sru, cpaelzer
<erick3k> should i remove the start on boot?
<nacc> erick3k: i have no idea how ovirt works so i don't know
<erick3k> or untick network
<erick3k> nice back to square 1 again :(
<erick3k> thats why
<erick3k> before
<nacc> erick3k: i would guess you could try those options, but i wonder if it will just fail to boot at all
<erick3k> i wanted to attack the stupid timeouts
<erick3k> not cloudinit or ovirt who cares
<erick3k> is stupid ubuntu timing out looking for dhcp
<erick3k> all other O.S work great, including 14.04
<erick3k> 5 days trying to fix the stupid timeout, bout to give up on ubuntu
<erick3k> my customers can stick to 14.04 lol
<nacc> this seems relevant: https://review.openstack.org/#/c/416664/
<nacc> https://bugs.launchpad.net/tripleo/+bug/1653812
<ubottu> Launchpad bug 1653812 in tripleo "Five minute delay DHCP'ing isolated nics" [High,Fix released]
<nacc> looks like ONBOOT is the issue, but i'm not sure if you actually need it for your case
<erick3k> nacc on centos 7 yes i put ONBOOT=no and thats it issue fixed
<erick3k> is there an option like that in ubuntu?
<nacc> erick3k: where do you put that?
<erick3k> centos /etc/sysconfig/network-scripts/ifcfg-eth0
<erick3k> is there a way to completly block the network from starting on boot and then just rc.local a command to start it?
<erick3k> am desperate
<erick3k> anything works i dont care
<nacc> erick3k: ip=off maybe
<nacc> erick3k: on the kernel commandline
<nacc> but i don't understand, if you're cloud based, you depend on network in order to boot, i'd assume
<erick3k> not really since am not using dhcp
<erick3k> am using static network
<nacc> that doesn't matter
<erick3k> i put the ip's
<nacc> you still depend on network
<nacc> it sounds like you've got a configuration issue
<nacc> somewhere, i don't understand your deployment
<nacc> but it should not be doing dhcp, i think is your root point
<erick3k> right
<nacc> but you aren't telling it (afaict) that it's got static networking
<nacc> erick3k: you can of course pass networking information to guests on the kernel command-line, but i don't understand when you're having this issue
<erick3k> so i got 5 oses working but ubuntu 16.04 am sure the system / deployment is fine
<nacc> first boot with cloud-init?
<erick3k> i just want ubuntu to stop waiting for dhcp
<erick3k> where is the dhcp wait time set in ubuntu 16?
<nacc> probably /etc/dhcp/dhclient.conf
<nacc> yeah, it's set to 300 there
<nacc> but again, it only would do that if it is configured to dhcp at all
<nacc> so don't configure your images to dhcp
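nacc's pointer above, as a sketch: the 300-second wait is the `timeout` setting in /etc/dhcp/dhclient.conf. Demonstrated here against a scratch copy; on a real system you would edit the file in place (and, as comes up later in this log, the initrd may need rebuilding for the change to apply at boot).

```shell
# Shorten dhclient's DHCP timeout (stock value is 300 on 16.04).
# Shown on a temp copy so this is safe to run anywhere.
conf=$(mktemp)
printf 'timeout 300;\n' > "$conf"               # the stock setting
sed -i 's/^timeout 300;/timeout 30;/' "$conf"   # shorten the wait
grep '^timeout' "$conf"
```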
<erick3k> ok gonna try that
<nacc> i don't understand why you think you don't have a configuration issue
<erick3k> yes am not sure why is even picking dhcp
<nacc> i don't care about your other images
<erick3k> i don't have no dhcp nowhere
<nacc> yes, i understand -- so how are your other instances getting network configuration?
<nacc> erick3k: you boot an instance and configure it?
<erick3k> all get them throught cloud-init
<nacc> erick3k: how do you do that?
<nacc> *how*
<nacc> they don't have network at boot
<nacc> since you're not using dhcp
<nacc> i think you or I am missing something obvious
<erick3k> no, you clean the vm, sealed, boot it with cloud-init (ovirt plugs a floppy with config drive / nocloud) and boots while picking up the info injected throught the floppy
<nacc> ah you are using a config drive
<nacc> ok
<erick3k> oyes
<nacc> i don't know what 'sealed' is
<nacc> yeah, so if you don't need network to boot, don't set it to onboot (that gui you showed earlier)
<nacc> or send ip=off, i guess
<erick3k> just get it ready for customer, delete /var/lib/cloud/instances, history, interfaces etc
<erick3k> yes, i tried all options on that gui, nothing works
<erick3k> where do i add that ip=off
<erick3k> GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
<erick3k> there nacc?
<nacc> yeah, i think so
<nacc> erick3k: but that will disable networking period, possibly, that's what i'm not sure about
<nacc> erick3k: hence, test it :)
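A sketch of the edit being discussed, demonstrated on a scratch copy of /etc/default/grub; on the real file you would follow it with `sudo update-grub`. As nacc warns, `ip=off` may disable networking entirely, so treat this as something to test.

```shell
# Append ip=off to the kernel command line in a copy of /etc/default/grub.
g=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"' > "$g"
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 ip=off"/' "$g"
cat "$g"
# on the real system:  sudo update-grub
```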
<erick3k> by disable you mean you can't even start it with manual command after it boots?
<nacc> erick3k: i'm not sure, i can't remember if that twiddles anything more than just not doing any network configuration at boot
<erick3k> that would work
<erick3k> trying
<sarnold> kind of strange that the only mention of 'network-interface' in the docs is on the nocloud datasource :( http://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html?highlight=network-interfaces
<nacc> sarnold: right, but earlier erick3k said they didn't have control over cloud-init
<nacc> sarnold: but agreed, it's strange
<sarnold> nacc: I'm surprised no one's needed to integrate cloud-init with an IPAM service before to make this easier..
<jgrimm> sarnold, nacc: more docs on the way fwiw -> https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+ref/network-config-doc
<erick3k> ok with ip=off booted now let me add an ip see what happens
<nacc> jgrimm: thanks
 * jgrimm EOWs
<sarnold> jgrimm-away: thanks, have a good weekend
 * jgrimm-away waves
<erick3k> nacc ip=off just completly remove network
<erick3k> cloud-init doesn't even add an interface
<erick3k> nacc, is there a default that after the 300 seconds timeout on dhcp, ubuntu will retry?
<erick3k> i see on /etc/dhcp/dhclient.conf is commented out #retry 60; so  by default ubuntu does not retry?
<nacc> erick3k: did you rebuild your initrd after changing that file?
<erick3k> if you mean update-grub yes
<nacc> no
<nacc> that updates grub
<nacc> has nothing to do with your initrd
<nacc> i'm fairly sure you need to update the initrd on the image in order for that file to affect boot
<erick3k> yes i know :)
<erick3k> eitherway ip=off is a nono
<erick3k> is like removing the nic card physically
<erick3k> nacc sorry you mean dhclient?
<erick3k> file
<nacc> erick3k: yeah /etc/dhcp/dhclient.conf edited without an initrd update doesn't do anything afaict
<erick3k> ummm
<erick3k> k doing it
<erick3k> nacc update-initramfs -?
<erick3k> -u?
<nacc> erick3k: yeah, maybe with -k <version> or -k all
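nacc's initrd point, spelled out: a copy of /etc/dhcp/dhclient.conf is embedded in the initramfs image when it is built, so edits to the file on disk only take effect at boot after a rebuild; update-grub rewrites grub.cfg and never touches the initrd. The command under discussion:

```shell
# -u: update the existing image(s); -k all: do it for every installed kernel
cmd='update-initramfs -u -k all'
echo "sudo $cmd"
```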
<sarnold> if you're editing the image and you just don't want networking, could you use systemctl disable networking or something similar?
<nacc> sarnold: afaict, they do want networking, just not the dhcp on boot
<nacc> sarnold: and i still don't understand how this works on other OSes or if this is a systemd change or what
<nacc> as i would think most would be attempting to dhcp on all interfaces by default, if not told otherwise
<erick3k> at this point
<erick3k> am not even sure is a dhcp problem
<erick3k> i mean a machine its been stuck for 1 hour
<erick3k> tried that nothing
<erick3k> same
<erick3k> stuck https://i.imgur.com/UDKYAHZ.png
<erick3k> why is raid showing any idea? am not using any raid device
<nacc> erick3k: again, i already explained that to you
<sarnold> you could probably blacklist the modules if you don't want their startup routines firing
<nacc> erick3k: mdadm loads and determines what raid levels are supported
<nacc> iirc, at that point, it's trying to mount your root fs
<nacc> (also based upon my own dmesg)
<erick3k> the only difference
<erick3k> between me firing the vm manually and with my script (they get stuck) is that the disk size is changed
<erick3k> could that make ubuntu not boot?
<erick3k> or get stuck there
<erick3k> i am sorry guys, maybe has nothing to do with the network
<nacc> you're changing the disk size under a fs?
<erick3k> i increased the disk size before turning on the machine and reproduced the error :(
<erick3k> yes, virtio
<nacc> well, at least you disparaged ubuntu in the process! :)
 * nacc goes back to other work
<erick3k> lol
<erick3k> why, isn't it supposed to run growpart and just expand?
<erick3k> like the other cloud images i have working?
<erick3k> nvm me
<erick3k> it booted and did resize the disk https://i.imgur.com/TINxMPD.png
<erick3k> i don't know guys i give up for now, maybe one will boot tomorrow and will collect some more info. Thank you nacc and all for your help and time.
<nacc> erick3k: gl!
#ubuntu-server 2017-04-08
<lordievader> Good morning.
<Sinned> morning
<tomreyn> if applying a livepatch via the canonical-livepatch service failed (due to lack of disk space in my case, which i have since remedied), how can you trigger a retry?
<erick3k> nacc u on?
<Jack_Daniels> How do i get ufw to open a port because i have done ufw allow <port>
<tomreyn> Jack_Daniels: this should normally suffice. how do you know it doesn't work?
<Jack_Daniels> i have done a port check and it is closed
<tomreyn> Jack_Daniels: and you have a service listening on this port?
<Jack_Daniels> its for telnet
<tomreyn> i see. this doesn't answer my question, though.
<Jack_Daniels> I use a program that is running called JTS3servermod for teamspeak 3 and i have a webinterface that needs to have that port open to communicate with the server and the program is running on the server
<tomreyn> assuming you opened port 23, the service you have bound to this port should show up as LISTENing if you run: sudo apt-get update && sudo apt-get install lsof && sudo lsof -i :23
<Jack_Daniels> i ran the command and the port is still not open
<Jack_Daniels> To                         Action      From
<Jack_Daniels> --                         ------      ----
<Jack_Daniels> 5873                       ALLOW       Anywhere
<Jack_Daniels> 22                         ALLOW       Anywhere
<Jack_Daniels> 23                         ALLOW       Anywhere
<Jack_Daniels> 5873 (v6)                  ALLOW       Anywhere (v6)
<Jack_Daniels> 22 (v6)                    ALLOW       Anywhere (v6)
<Jack_Daniels> 23 (v6)                    ALLOW       Anywhere (v6)
<Jack_Daniels> thats whats open on my server
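The exchange above in one sketch: `ufw allow 23` opens the firewall, but a port scan still reports closed unless a process is actually bound to the port. The rule table and socket line below are illustrative samples, not live output; on the real box, `sudo ss -tlnp` or `sudo lsof -i :23` shows which process, if any, holds the port.

```shell
# Two independent conditions: the firewall rule exists, and something listens.
rules='22 ALLOW Anywhere
23 ALLOW Anywhere'
listening='LISTEN 0 128 *:22'        # e.g. from: sudo ss -tln
echo "$rules" | grep -q '^23 ' && echo 'firewall allows 23'
echo "$listening" | grep -q ':23' || echo 'but nothing is listening on 23'
```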
<_sed_> hello, i'm trying to set up a web app and it's not working, what could be the problem?
<_sed_> the local server does not display the page in the browser
#ubuntu-server 2017-04-09
<Mead> am running ubuntu server, and I have two GPU's and want to do hardware pass through for a VM.  I blacklisted/placed the device on a pci stub, yet the system still displays the terminal/cli on the GPU, although the resolution has changed.   How do I force ubuntu server (my host OS) to use another GPU?
<drab> hi, am I missing something or is this indeed a dependency bug:
<drab> I'm trying to install build-essentials
<drab> but it won't install because deps can't be satisfied
<bekks> drab: So pastebin the entire output please.
<drab> looking at it libc6-dev depends on libc6 (= 2.23-0ubuntu3)
<drab> but on xenial libc6 (2.23-0ubuntu7)
<drab> so it seems indeed a problem to me
<drab> bekks: the pastebin doesn't actually show that, had to do multiple runs to get to that since the other deps were satisfied and this is not shown in the first run trying to apt-get install build-essentials
<drab> still, can do that if it helps
<bekks> If apt-get install build-essentials doesnt show a missing dep, where do you see that missing dep?
<drab> http://dpaste.com/2Q36F82
<drab> it does, it's just not pointing to exactly what I wrote above
<drab> but anyway, there it is
<bekks> can you pastebin the output of "sudo apt update" as well please?
<drab> http://dpaste.com/28S31RE
<drab> added the urls I actually mirror, assume that's what you wanted to see as well
<drab> mirror updates every night, the version numbers above I took from http://packages.ubuntu.com/xenial/libc6
<drab> (and they match what I see with apt-get)
<bekks> And what happens when you install libc6-dev and g++ ?
<drab> http://dpaste.com/1GJ3SA5
<bekks> Seems like your mirror is out of date.
<drab> weird, I pastebin those v numbers from packages.u.c and they were as per above
<drab> but now I refreshed and they are correct...
<drab> both read 2.23-0ubuntu7
<drab> ok, fair enough
<drab> thank you
<bekks> you're welcome :)
<bekks> Maybe you should consider using another upstream mirror.
<drab> I've tried several... I keep having some problem or another, if it's not out of sync I get hash mismatches...
<bekks> Most likely thats due to a proxy server then.
<tekk> hi guys, i've created a virtual bridge interface on my box... is there an ifconfig / iptables / interfaces file parameter that will isolate each host from the virtual bridge subnet from one another? (i'd normally do this on a l2 switch itself per port... but this is all virtual)
<drab> tekk: each host from the bridge from one another? you mean attached to the bridge?
<drab> ie they can talk to the host, out, but not to each other?
<tekk> exactly
<tekk> should i just create a mac-based VLAN on the switch? or can i do it at the virtual-bridge level?
<compdoc> Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license.
<tekk> :o
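Besides Open vSwitch, plain ebtables can isolate ports on a Linux bridge: the FORWARD chain sees bridged frames, so dropping traffic between the guest-facing ports leaves each guest able to reach the host and the uplink but not each other. A hedged sketch; the tap0/tap1 interface names are assumptions, and the rules are printed rather than applied since applying them needs root.

```shell
# Per-pair isolation rules for two guest ports on the same bridge.
rules='ebtables -A FORWARD -i tap0 -o tap1 -j DROP
ebtables -A FORWARD -i tap1 -o tap0 -j DROP'
printf '%s\n' "$rules"
```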
#ubuntu-server 2018-04-02
<ShellcatZero> dumb question: when I see warnings about init scripts, where should I find these files?  For example: insserv: warning: script 'K01xfce4-power-manager' missing LSB tags and overrides.  I've checked in /etc/init and /etc/init.d and I can't find a script with this name.
<TJ-> ShellcatZero: the prefix K01 tells you it's a 'stop' (kill) script, and the number tells you it's a symlink under one of the /etc/rc?.d/ runlevel directories. Such symlinks point to the real script in /etc/init.d/xfce4-power-manager
<tomreyn> also asked and answered in #ubuntu :-/
<TJ-> haha ahh well, 2 heads are better than 1
<ShellcatZero> thanks anyway TJ-
<ShellcatZero> that's very informative for the future
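TJ-'s explanation, reproduced in a scratch tree: the K01 file is a symlink in a runlevel directory pointing back at the real script under /etc/init.d. On a live system the equivalent check is `ls -l /etc/rc?.d/*xfce4-power-manager`.

```shell
# Toy reproduction of the SysV rc layout.
root=$(mktemp -d)
mkdir -p "$root/etc/init.d" "$root/etc/rc0.d"
touch "$root/etc/init.d/xfce4-power-manager"
ln -s ../init.d/xfce4-power-manager "$root/etc/rc0.d/K01xfce4-power-manager"
readlink "$root/etc/rc0.d/K01xfce4-power-manager"
```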
<ShellcatZero> what would you recommend to do to investigate this error? https://pastebin.com/ZhwNAxVA
<TJ-> check the "journalctl -u systemd-modules.load.service" log
<ShellcatZero> TJ-: executing that command gives me: "-- No entries --"
<TJ-> ironic that systemd doesn't log its own services
<TJ-> try "journalctl -xb -p warning"
<ShellcatZero> TJ-: yes, that gave me quite a lot of output to read through
<TJ-> ShellcatZero: scary how many warnings and so on a standard boot can cause
<ShellcatZero> TJ-: yes, I'm investigating an issue where lightdm fails, along with the error message: "Failed to start Detect the available GPUs and deal with any system changes".  I feel like I'm running in circles investigating a whole bunch of non-issue error messages.
<TJ-> That message comes, I think, from gpu-manager, and may be none-fatal.
<ShellcatZero> neither lightdm nor gdm will succeed, but lxdm will succeed and the resulting desktop environment is crippled, with few applications that will launch.
<TJ-> check the /var/log/gpu-manager.log
<TJ-> also check /var/log/Xorg.0.log - maybe the accelerated drivers for the system's GPU aren't being loaded
<TJ-> "lspci -nnk -d ::0300" should list the VGA-compatible GPU(s) and the drivers
<ShellcatZero> Hmm, here is the gpu-manager.log file: https://pastebin.com/ddFjYDJT
<ShellcatZero> output for lspci command: https://pastebin.com/fheAesEU (it's a hybrid system, but the Intel driver is the only one I use AFAIK)
<ShellcatZero> https://pastebin.com/fheAesEU
<ShellcatZero> here is Xorg.0.log: https://pastebin.com/FVKzLy5C
<ShellcatZero> attempting to boot into an older kernel fails as well
<mojtaba> Hello, I am backing up a computer using rsync, do you know how should I deal with the file names with special characters?
<tomreyn> <tomreyn> mojtaba: try quoting them
<tomreyn> <tomreyn> or rather the entire argument
<tomreyn> is what i responded last time you asked
<mojtaba> tomreyn: How should I do that, they are all in different directories.
<mojtaba> tomreyn: As the source, I gave directory path.
<tomreyn> mojtaba: so you don't actually provide paths with special characters in them as an argument to the rync command?
<tomreyn> it just recurses through directories and then hits them?
<mojtaba> tomreyn: No
<mojtaba> tomreyn: yes
<tomreyn> mojtaba: are both computers running ubuntu? same release or different? which ones?
<tomreyn> same architecture?
<mojtaba> tomreyn: source is Mac, and dest is ubuntu
<tomreyn> both ext4 file systems?
<tomreyn> Mac as in Mac OS, or just Mac hardware running Ubuntu?
<tomreyn> you probably need to https://askubuntu.com/questions/533690/rsync-with-special-character-files-not-working-between-mac-and-linux
<mojtaba> tomreyn: thanks
<mojtaba> tomreyn: The thing is that, I initialize the rsync from ubuntu (destination), using ssh.
<mojtaba> I have just ssh access to the mac.
<mojtaba> tomreyn: I still get the same error.
<tomreyn> mojtaba: the article i pointed to explains how to use the iconv option dpending on which side you initiate the transfer from.
<tomreyn> i have no further means to help here, i'm afraid.
<mojtaba> tomreyn: thanks anyway.
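The askubuntu answer tomreyn linked, condensed: macOS stores filenames in decomposed UTF-8 ("utf-8-mac"), so rsync needs `--iconv=LOCAL,REMOTE` to convert them. Initiating from the Ubuntu side and pulling over ssh, as mojtaba does (host and paths here are placeholders):

```shell
# --iconv order is local,remote: local Ubuntu side is utf-8, remote Mac is utf-8-mac.
cmd='rsync -av --iconv=utf-8,utf-8-mac user@mac-host:Documents/ /backup/mac/'
echo "$cmd"
```

One caveat worth knowing: `--iconv` needs rsync >= 3.0 on both ends, and the rsync Apple ships is 2.6.9, so a newer rsync may have to be installed on the Mac first.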
<Henster> hi guys i have a free bare metal machine (i7 intel 2700, 12 gb of ram), i want to install a few servers on it, what free hypervisor OS are you using?
<Ussat> Well....at home I use vmware player to test/play with stuff, at work I have a an esxi cluster I use for prod
<Ussat> you could always install the free esxi hypervisor and use that
<Henster> https://www.cb-net.co.uk/linux/managing-vms-in-ubuntu-server-16-10/
<Henster> maybe i should try this, wonder if i can make it run on a nas drive as well
<Henster> ill try using this https://www.proxmox.com
<ShellcatZero> Any help is greatly appreciated: https://askubuntu.com/questions/1021244/can-not-get-lightdm-or-gdm-working-on-lts-16-04
<sarnold> ShellcatZero: try uninstalling all the virtualbox guest stuff
<ShellcatZero> why do you suggest that, sarnold?
<sarnold> ShellcatZero: two reasons (a) a few weeks ago someone else was in here with a problem that sounded similar to yours and uninstalling the guest additions fixed it (b) those were installed around the time your bug said the problem started
<sarnold> I can't recall the details of the last guy, whether it was no graphics at *all* during boot or just couldn't get past the plymouth graphics or something like that. but uninstalling the guest additions let him get to X11
<ShellcatZero> Wow, sarnold, you were right. That fixed it, I am furious about virtualbox but glad to have my system back.
<ShellcatZero> I uninstalled all virtualbox stuff for good measure
<sarnold> ShellcatZero: that might be overkill :) but keeping guest packages in guests is probably a good idea, hehe
<ShellcatZero> I am going to try to replicate this again sarnold and pinpoint the offending packages.  Looking through my history, how do I interpret the version installed from something like: virtualbox:amd64 (5.0.40-dfsg-0ubuntu1.16.04.2, 5.1.34-dfsg-0ubuntu1.16.04.2)
<sarnold> ShellcatZero: I *think* that indicates the old version (5.0.40..) and then the new version (5.1.34..)
<teward> sarnold: old, new.
<teward> at least for upgrades.  In that order.
<sarnold> nice, thanks teward
<ShellcatZero> ok, awesome, thanks teward
<teward> you're welcome.  (Evils of debugging nginx installation/upgrade failures that only happened between two versions lol)
<teward> (I learn a lot about apt history with those evils)
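teward and sarnold's reading of the history line, verified mechanically. A sketch: the sed patterns assume the exact `pkg:arch (old-version, new-version)` layout that apt's /var/log/apt/history.log uses for upgrades.

```shell
# Split an apt history Upgrade entry into old and new versions.
line='virtualbox:amd64 (5.0.40-dfsg-0ubuntu1.16.04.2, 5.1.34-dfsg-0ubuntu1.16.04.2)'
old=$(echo "$line" | sed 's/.*(\([^,]*\), .*/\1/')
new=$(echo "$line" | sed 's/.*, \(.*\))/\1/')
echo "old=$old new=$new"
```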
<ShellcatZero> lol
<sarnold> teward: oh man. :/ that sounds unfun.
<teward> it was.  back in the 13.10 era.
<dpb1> sarnold: how did you know this was in virtualbox???
<dpb1> I'm still stuck on that
<sarnold> dpb1: heh, a case of being in the right place at the right time ... from the ask ubuntu page, "the issue then occurred on March 31st"  .. and then noticed the upgrades on the 30th all looked harmless enough but the 31st included virtualbox stuff
<dpb1> wow
 * dpb1 bows
<sarnold> .. and then remembering that other guy a few weeks ago with a problem that made *no sense* until he reported back that removing the guest additions solved the problem
<sarnold> similar to this, none of the other logs looked at *all* related
<dpb1> right, digging through the bug, virtualbox does stand out
<dpb1> ShellcatZero: I'm going to edit a couple of those things to mention virtualbox, just so it's a bit more obvious (now that we know it's a factor)
<sarnold> if the other guy hadn't been here first, I would have suggested uninstalling the intel microcode update. I wouldn't have *liked* that answer, but it's what I would have suggested.
<dpb1> oh man
<dpb1> if that fixed it.  smh
<sarnold> yeah :/ we've had enough reports of trouble with the latest drop from intel that I'm not convinced of their stability yet
<nacc> rbasak: around?
<ShellcatZero> sure dpb1, I'll update it again once I pinpoint the specific offending package(s). I don't think the systemd/openbox updates had anything to do with the issue as of now.
#ubuntu-server 2018-04-03
<NginUS> how to replace dhcp-assigned dns while keeping dhcp-assigned address without uninstalling resolvconf?
<TJ-> NginUS: configure the dhclient to ignore the nameservers it's passed so it doesn't pass them to resolvconf
<TJ-> NginUS: I think you need to add to the dhclient.conf something like "prepend domain-name-servers 127.0.0.1;"   - see "man dhclient.conf"
<NginUS> Hey thanks, that worked. Tried using 'supercede' first but it wouldn't take- so 'prepend' just adds 2 more on top, so there's 4 now but whatever
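For reference, the dhclient.conf directive that replaces the DHCP-supplied resolvers outright rather than prepending to them. Note the spelling is "supersede"; "supercede" is not a recognized directive, which is one possible reason the earlier attempt didn't take.

```
# /etc/dhcp/dhclient.conf
supersede domain-name-servers 127.0.0.1;
```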
<NginUS> Somehow now I have to reboot since "sudo service networking [stop,start,restart]" stopped working somehow...
<NginUS> I can resolve URLs, but ipleak.net says I fail all 100 DNS server tests- have no DNS.. wtf?!
<cpaelzer> good morning
<lordievader> Good morning
<cpaelzer> good morning lordievader
<lordievader> Hey cpaelzer
<lordievader> How are you doing? Did you have a good easter?
<cpaelzer> yeah, I hope you as well lordievader
<lordievader> Yes, have eaten way too many eggs :)
<cpaelzer> I didn't get to eggs :-)
<cpaelzer> we started with an anti Good Friday BBQ
<cpaelzer> and after that we were all filled up for the weekend
<lordievader> Hahaha, we had a brunch on the first easter day... We didn't need dinner anymore.
<ntp_demon> Good morning :D I have an ubuntu server install and we have issues with NTP. It does not seem to sync nor does it seem to be available. It does give a syntax error so i fixed it by removing the part in brackets. https://codeshare.io/a3ZWrB
<ntp_demon> It still does not seem to sync properly in ubuntu 14.04
<ntp_demon> https://codeshare.io/a3ZWrB
<ShellcatZero> dpb1 and sarnold: The offending virtualbox package has been identified and I've updated the thread and bug report accordingly: https://askubuntu.com/questions/1021244/can-not-get-lightdm-or-gdm-working-on-lts-16-04-after-virtualbox-update, https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1760371. Feel free to make any edits or tag maintainers as needed.
<ubottu> Launchpad bug 1760371 in lightdm (Ubuntu) "LightDM and GDM fail to start after update in virtualbox-guest-x11" [Undecided,New]
<rbasak> cpaelzer: FYI: https://bugs.launchpad.net/ubuntu/+source/mysql-5.5/+bug/1759751/comments/3
<ubottu> Launchpad bug 1759751 in mysql-5.5 (Ubuntu) "mysql not downgrading to v5.5 after installing mariadbv10.2 and then purging it completely, mariadb service instance still clashing" [Undecided,Invalid]
<cpaelzer> rbasak: I knew you'd have better words to it, thanks!
<rbasak> waveform: https://bugs.launchpad.net/cloud-images/+bug/1746806
<ubottu> Launchpad bug 1746806 in linux (Ubuntu) "sssd appears to crash AWS c5 and m5 instances, cause 100% CPU" [Critical,In progress]
<rbasak> cpaelzer: would you mind taking care of bug 1466926 please? No rush.
<ubottu> bug 1466926 in apache2 (Ubuntu Xenial) "reload apache2 with mpm_event cause scoreboard is full" [Undecided,Fix committed] https://launchpad.net/bugs/1466926
<cpaelzer> rbasak: yeah, but taking care in this case means we have tried our best but it won't be SRUed
<cpaelzer> rbasak: so it is about explaining why not
<rbasak> cpaelzer: sure. I'll leave it to your judgement to decide how hard to try.
<cpaelzer> rbasak: updated the bug, could you cancel from x-proposed?
<rbasak> cpaelzer: thanks. I can't - needs an AA. I'll ask.
<ddstreet> cpaelzer for your lp #1466926, i have apache2 sru i need to upload as well for lp #1752683; are you planning to adjust and re-upload your changes anytime soon?  or can i go ahead and upload my apache2 sru?
<ubottu> Launchpad bug 1466926 in apache2 (Ubuntu Xenial) "reload apache2 with mpm_event cause scoreboard is full" [Undecided,Fix committed] https://launchpad.net/bugs/1466926
<ubottu> Launchpad bug 1752683 in apache2 (Ubuntu Artful) "race condition on rmm for module ldap (ldap cache)" [Medium,In progress] https://launchpad.net/bugs/1752683
<ddstreet> or, if you have new patch(es) ready, could upload a combined version
<maciek> hi, which network configuring method is currently the best in ubuntu 17.10 (server)?
<maciek> Im trying to cofigure br0 for bridged lxd mode
<maciek> and Im little bit confused between /etc/network/interfaces and /etc/systemd/network/* and /etc/netplan/*
<maciek> which one should I use and how to disable the others
<dpb1> maciek: is this a new install of 17.10?
<maciek> yes
<dpb1> ok, /etc/netplan/* files will be *rendered* out to systemd-networkd
<dpb1> /etc/network/interfaces is legacy
<dpb1> so, you should start with "netplan"
<dpb1> maciek: https://netplan.io/
<maciek> ok - so the /etc/network/interfaces file should be empty?
<maciek> alright, it works, thanks dpb1.
<olivierbourdon38> Hello everyone, I was wondering if someone had some clue on an issue I have currently on Ubuntu Xenial amd64 minimal (ssh server) installation (net based) when I install squid + dnsmasq I get an error during dnsmasq setup
<maciek> empty /etc/network/interfaces file, empty /etc/systemd/network/ directory and the whole configuration in /etc/netplan/01-netcfg.yaml
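A minimal sketch of the kind of netplan bridge configuration being described, for LXD bridged mode on 17.10. The interface name enp3s0 and the addressing mode are assumptions; a real config would use the machine's actual NIC name, and is applied with `sudo netplan apply`.

```yaml
# /etc/netplan/01-netcfg.yaml (sketch)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: no
  bridges:
    br0:
      interfaces: [enp3s0]
      dhcp4: yes
```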
<olivierbourdon38> Apr 03 23:44:21 ubuntu-min systemd[1]: dnsmasq.service: Failed with result 'timeout'.
<olivierbourdon38> after that, running service dnsmasq start works. Stopping works but restarting it gives same error as during installation :-(
<olivierbourdon38> note that I have this behaviour both in dhcp or in static IP+DNS mode
<nacc_> olivierbourdon38: `systemctl status dnsmasq` ? journalctl?
<olivierbourdon38> https://pastebin.com/HcxVYUXj
<dpb1> hrm, oh well
<dpb1> he left
<dpb1> happy netplan customer?
<nacc_> dpb1: +1
<nacc_> olivierbourdon38: ok, see line 2
<nacc_> for more logs to pastebin
<olivierbourdon38> https://pastebin.com/BR5s4g0Z
<olivierbourdon38> sorry for the delay I had to relaunch my test machine
<olivierbourdon38> again what I can not understand is why stop/start fails every 1 out of 2
<sarnold> do you need dnsmasq?
<nacc_> olivierbourdon38: it's not clear to me what you mean; you've provided logs that show it failed, with a timeout, possibly because 192.168.0.254 failed to respond to DNS queries?
<nacc_> olivierbourdon38: i don't see logs for a success
<olivierbourdon38> I also forgot to mention that dnsmasq alone (without squid) works great but once squid is installed (and even removed afterwards) I get this one time ok next time failure loop
<nacc_> olivierbourdon38: removed or purged?
<olivierbourdon38> nacc_ yes I did not re run the full stuff and as it's 20 past midnight here will not be able to do so until tomorrow, thx anyways
<olivierbourdon38> removed
<nacc_> olivierbourdon38: that's not 100% surprising, purge is the way to get back to without the package
<nacc_> and possibly it left some config files around
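nacc_'s distinction, sketched: `remove` leaves conffiles behind while `purge` deletes them too, so a removed-but-unpurged squid can still leave configuration around that affects later runs. Leftover packages show up in dpkg's status column as "rc" (removed, conffiles remaining); the dpkg query is guarded since dpkg may not be present everywhere this runs.

```shell
cmd='apt-get purge squid'
echo "sudo $cmd        # remove + delete conffiles; plain remove keeps them"
# list packages removed but with config files still on disk:
command -v dpkg >/dev/null && dpkg -l | awk '$1 == "rc" {print $2}' || true
```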
#ubuntu-server 2018-04-04
<cpaelzer> ddstreet: #1466926 is dead for now after the insights on verification, please go for yours WITHOUT the changes my upload had
<lordievader> Good morning
<cpaelzer> rbasak: you are on SRU today IIRC - would you mind considering accepting the improved open-vm-tools that is in x-unapproved for 1741390?
<rbasak> cpaelzer: ack
<blitz-_> Anyone got experience installing 16.04 on a Dell T130 with a Perc H330 using software raid? Refuses to find boot drive and not sure if it's an install problem or H330 problem.
<rbasak> cpaelzer: was this an SRU regression from the previous backport? We need a Test Case and the issue resolved for Bionic first I think, unless there's some reason this is urgent?
<cpaelzer> rbasak: it is resolved in bionic
<cpaelzer> the bug has an update
<cpaelzer> well wait there are two things in flight
<cpaelzer> I need to ensure not to mix them
<cpaelzer> 1741390 is a regression when backporting the new version
<cpaelzer> it wasn't seen in Bionic yet
<cpaelzer> (I already asked to check there, as it needs a special setup)
<cpaelzer> but so far the assumption is that with newer systemd it is fine
<cpaelzer> if they report it is not, then I can still fix Bionic
<cpaelzer> so for now we have
<cpaelzer> Bionic likely unaffected (not known better), Xenial fixup in x-unapproved
<cpaelzer> the other one that is resolved in bionic was a libvirt issue, that I work on atm
<cpaelzer> sorry
<cpaelzer> there also is a refresh to the Xenial SRU incoming - so they are similar cases
<cpaelzer> otoh the vmware guy "expects" it to affect Bionic
<cpaelzer> as I said that part just isn't clear
<cpaelzer> I can push a Bionic upload with the same fix (one line removal) to Bionic first if you'd prefer that
<cpaelzer> rbasak: the other one is 1758428 - do you want me to fix this in Bionic and then get a ping again for the refreshed SRU?
<cpaelzer> as I said the biggest inhibitor here is the uncertainty that I can't test it on my own (uses some special vmware api call to trigger)
<cpaelzer> it might even be easy but I don't know how (yet)
 * cpaelzer stops flooding the chan without rbasak rpelying :-)
<rbasak> Sorry I just realised there's more to this than I thought and I've been reading up
<cpaelzer> hehe
<rbasak> What is the Xenial backport based upon?
<cpaelzer> the version in Bionic
<cpaelzer> in recent history we hit (and fixed) several things only affecting it in systemd-level in xenial (but good in bionic)
<rbasak> Would the version then not be 2:10.2.0-3ubuntu2~16.04.1 or something like that?
<rbasak> And the changelog doesn't have the Bionic entry
<cpaelzer> rbasak: it is based on the version in bionic a few weeks ago, and ther ubuntu1 and ubuntu2 in Bionic are reverts of each other
<rbasak> I see, OK.
<cpaelzer> so 10.2.0-3ubuntu2 essentially = 10.2.0-3
<cpaelzer> therefore there was no need to rebase the backport
<cpaelzer> rbasak: you are not wrong to ask for a Bionic fix of 1758428, it is just that I know for sure it is in Xenial (therefore the upload) and not yet sure it is in Bionic (so no upload yet)
<rbasak> It's all starting to make sense not :)
<rbasak> now :)
<cpaelzer> lol+
<rbasak> cpaelzer: 2:10.2.0-3ubuntu0.16.04.1 has not yet been published, right?
<cpaelzer> rbasak: correct
<cpaelzer> rbasak: it was held by finding the new issue on the verifications step
<cpaelzer> so I ask to accept the new one into x-proposed "over" the former
<cpaelzer> we might still even hold back the release of it until the Bionic case is clarified
<cpaelzer> but having in x-proposed allows all the parties to retest
<rbasak> But 2:10.2.0-3ubuntu0.16.04.1 hasn't even been in xenial-proposed, AFAICT?
<cpaelzer> it was put to x-unapproved on 2018-03-22
<cpaelzer> and you are right, it wasn't in proposed yet (people are testing the ppa while waiting for it)
<rbasak> So I'd have squashed 2:10.2.0-3ubuntu0.16.04.2 into 2:10.2.0-3ubuntu0.16.04.1
<rbasak> But I don't think it's necessarily worth changing that now
<rbasak> That did confuse me a bit though.
<cpaelzer> hmm
<rbasak> Technically 2:10.2.0-3ubuntu0.16.04.1 is higher than 2:10.2.0-3, whereas for a backport you generally want it to be lower.
<cpaelzer> on the second upload 2:10.2.0-3ubuntu0.16.04.2 I ensured to have a better -v on buildpackage
<cpaelzer> so it includes all the history since Xenial anyway
<rbasak> But in this case, Artful has already moved on
<cpaelzer> one more or less doesn't change the long list of entries
<rbasak> So in practice that also may not matter
<rbasak> Your -v is fine
<rbasak> But that cannot show me everything, so I was inferring the rest based on an assumption that you'd only burn the minimum version strings for an Ubuntu upload.
<rbasak> s/Artful/Bionic/ above.
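rbasak's version-ordering point, checked mechanically with dpkg (guarded in case dpkg is not installed): the chosen backport version sorts above the devel-series version it was built from, while the tilde form rbasak suggested would sort below it, which is what a backport generally wants.

```shell
if command -v dpkg >/dev/null; then
  dpkg --compare-versions '2:10.2.0-3ubuntu0.16.04.1' gt '2:10.2.0-3' \
    && echo 'ubuntu0.16.04.1 backport sorts higher than 2:10.2.0-3'
  dpkg --compare-versions '2:10.2.0-3ubuntu2~16.04.1' lt '2:10.2.0-3ubuntu2' \
    && echo 'a ~16.04.1 suffix sorts lower, as backports usually want'
fi
```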
<cpaelzer> rbasak: so next steps?
<cpaelzer> rbasak: I thought we could accept the Xenial sru into proposed and wait on the confirm that Bionic is affected before making useless changes there
<cpaelzer> after you read into the case, do you agree or suggest differently?
<rbasak> cpaelzer: is there a reason this can't wait until the package is settled in Bionic? Also, what about Artful?
<cpaelzer> rbasak: several parties depend on the backport, but we can postpone it for now and reconsider if the bionic case needs too long
<cpaelzer> artful is not considered at all, the decision was to only provide that for last LTS
<cpaelzer> after bionic is out we will only do the same for Bionic until 20.04 is there
<cpaelzer> thanks rbasak, despite causing more work you are right :-)
<rbasak> cpaelzer: I'll write up a review in the bug.
<cpaelzer> I updated the respective bugs, let the peers on them have some time to confirm the state in Bionic and then kick off a new round
<cpaelzer> rbasak: when writing your review, please consider my latest post and feel free to cancel at least the first open-vm-tools in x-unapproved
<cpaelzer> keeping the other one there until resolved
<cpaelzer> rbasak: I also updated the planning (to no longer ignore interim releases)
<cpaelzer> dpb1: ^^ discussion affecting planned open-vm-tools SRUs
<cpaelzer> dpb1: I updated the plan you drafted already
<cpaelzer> TL;DR 4 instead of 2 such SRUs per year needed
<cpaelzer> I have some hope that post Bionic those will be less painful backports than now
<rbasak> cpaelzer: leaving it in the queue will mean every other SRU team member will spend time looking before discovering it's blocked and moving on.
<rbasak> I'd prefer to reject for this reason. You can keep your upload around and re-upload without changes easily enough I think.
<cpaelzer> well then, cancelling from unapproved does not imply version bumps, so ok
<Gargravarr> hey all, on Launchpad trying to log into the bug tracker, is there any reason my Ubuntu One account isn't being accepted and i'm being told to 'Go away' for being a 'Bad bot'?
<dpb1> cpaelzer: so, we need it in artful too?
<cpaelzer> dpb1: yes
<dpb1> cpaelzer: ok
<cpaelzer> I can do that, but as this was a little box of backport-surprises so far I don't expect it to be different there
<cpaelzer> dpb1: if you want to talk about the plan I can HO if you want
<cpaelzer> I hope the updates to the sheet are good and understandable
<dpb1> cpaelzer: it's probably enough, but I haven't checked
<tomreyn> i believe Gargravarr's issue was resolved in #ubuntu
<Gargravarr> okay, so, root reason for me wanting to log into the bug tracker - i'm hitting the same sssd problem again, a lot harder this time
<Gargravarr> are sssd and libpam-ldapd the only two options for doing LDAP user auth with Ubuntu?
<Gargravarr> the latter works as intended but doesn't have native caching so won't work for laptops taken off-site, and my attempts to set up external caching failed
<Gargravarr> rbasak: so i'm running into a real mess with this SSSD problem, but my suspicion is that the bug you linked me to is not the whole story
<Gargravarr> i'm starting to think this is related to the intel-microcode
<Gargravarr> this is the last package installed on my laptop before it broke (which occurred on the next boot)
<Gargravarr> and i'm running sssd on the current 4.13.0-38 kernel on a desktop without it crashing
<Gargravarr> should i post my findings onto the same bug ticket or should i look at opening a new one?
<waveform> Gargravarr, could you post a link to the ticket (sorry, it's not in my scrollback)
<Gargravarr> it's fine
<Gargravarr> https://bugs.launchpad.net/cloud-images/+bug/1746806
<ubottu> Launchpad bug 1746806 in linux (Ubuntu) "sssd appears to crash AWS c5 and m5 instances, cause 100% CPU" [Critical,In progress]
<waveform> thanks
<Gargravarr> i'm just trying uninstalling the intel-microcode package
<Gargravarr> fingers.cross()
<waveform> just a bit of background: I use LDAP and sssd here too, and recently had a machine (my main desktop) hard-lock on boot after a kernel upgrade. Booting with prior kernel worked fine; after a bit of investigation, removing intel-microcode resolved things for me
<Gargravarr> waveform: i'm finding similar. a colleague just had the issue and i rolled him back to 4.13.0-26 from 4.13.0-38 and it booted
<Gargravarr> nope. tried uninstalling -> reboot -> log in as root from a TTY -> run sssd -id 9 and netcat it to a different machine -> switch to lightdm -> login as LDAP user -> login succeeds, but machine freezes
<waveform> I *think* in my case (I'd have to check my notes somewhere), -36 worked for me and -37 was the one that locked up
<Gargravarr> it seems to be a random alignment of intel-microcode, CPU and kernel
<Gargravarr> we have Skylake desktops and Kaby Lake laptops. so far the desktops are unaffected
<Gargravarr> but someone in the bug ticket mentions having a Skylake desktop
<waveform> mine's pretty old - it's an i7-2600K
<waveform> (sandy-bridge)
<Gargravarr> i still have a 2520m in use in my ThinkPad
<Gargravarr> now, if i recall correctly, the whole point of microcode is that it gets loaded into the CPU
<Gargravarr> so there's a good chance that uninstalling it won't fix the problem
<Gargravarr> so the Kaby Lake laptop has frozen, and the CPU fan is pumping out a fair amount of heat, so i assume it's spinning the cores again
<waveform> hmmm, it certainly did in my case - I removed both intel-microcode and the "no longer required" package that went with it - I'll just check what that was...
<waveform> ah, iucode-tool
<Gargravarr> ta, i'll check they're both gone
<Gargravarr> never installed on this laptop, it seems
<Gargravarr> ah wait, typo'd
<Gargravarr> waveform: did you reboot after uninstalling?
<waveform> yes, I did
<waveform> I was in the older kernel to do the uninstall, then rebooted into the newer one
<Gargravarr> i have to use the 4.4-series kernel on this laptop due to intel wifi driver hell
<waveform> found my notes: removing intel-microcode was the one change I made between finding -37 failed and finding -37 worked. Incidentally, would be interesting to know if -36=works, -37=fails for you as well - that might at least confirm we're looking at the same thing (and narrow down the changesets that need investigating)
<waveform> incidentally, I did also encounter this with 4.4-series and found that -116 to -117 were the culprits there
<Gargravarr> -116 was fine until last week when i updated the microcode
<Gargravarr> -117 is no improvement
<Gargravarr> okay, sssd starting up (on -116)
<Gargravarr> if this breaks, i'll roll back to -112 or something
<Gargravarr> wow, premature celebration at its finest. that got far further than normal, Firefox loaded
<Gargravarr> THEN the machine froze
<waveform> the 4.4 series I'm not so clear on as my jump to 4.13 was literally a re-install of ubuntu desktop on that machine (after being unable to fix it over the weekend, and needing it working in a hurry I just did the "oh sod it, I'll reinstall" thing only to find that it still froze after re-installing all the stuff I had installed before :)
<Gargravarr> yeah. we generally use 4.13 here for HWE
<Gargravarr> but i ran into error after error with the Intel drivers (my laptop has an Intel card, everyone else has Qualcomm)
<Gargravarr> eventually i got it stable by downgrading the card and using the basic 4.4
<waveform> it was only on 4.13 that I attempted to remove intel-microcode and found that fixed things - hence my 4.4 assumption is purely based on -117 being the first version that failed for me (I don't know whether intel-microcode removal would fix things there, and can't really rollback to test it!)
<Gargravarr> fair enough
<Gargravarr> -117 isn't even officially released afaik
<sdeziel> Gargravarr: -119 was just released
<Gargravarr> my mistake
<waveform> ah, could well be 119! Handwriting ... bah
<sdeziel> well, the official announcement hasn't arrived just yet, the kernel is still "hot" :)
<Gargravarr> sdeziel: it shows in apt-cache so that counts as released
<Gargravarr> right, trying -112 (and will add -119 as well (which might well be lucky, it's my grandmother's house number...))
<dpb1> Gargravarr: btw, the microcode is a per-boot operation.  So, removing the update actually does revert things.
<Gargravarr> dpb1: thanks for clarifying
<Gargravarr> is a plain 'remove' operation enough or does it need to be purged?
<sdeziel> and the BIOS could also load a fresh microcode if you updated it recently
<waveform> hmm, I did purge (out of habit) but if a package is only half-installed I'd be slightly surprised if it was still "doing things"
<dpb1> yes, it's actually a very weird process.
<Gargravarr> oh joy. i upgraded the BIOS out of desperation too
<waveform> ah, I definitely didn't do that (it's too old for there to be any updates available!)
<dpb1> Gargravarr: so, you are hitting this issue in a cloud, or just on your local workstation?
<Gargravarr> dpb1: on multiple Dell laptops
<dpb1> ok
<Gargravarr> not hit it in a cloud context (yet)
<dpb1> Gargravarr: I just asked since the bug title was about aws
<dpb1> thx
<Gargravarr> indeed
<TJ-> Guys, "dijuremo" in #ubuntu-kernel has been reporting and testing this for several weeks, I suggest you join in that channel and pool observations and resources
<Gargravarr> if it makes a difference, the machines i've hit it on so far have all been Kaby Lake
<Gargravarr> TJ-: thanks for the hint
<waveform> thanks, will do
<dpb1> Gargravarr: did you test the fix in comment #43
<TJ-> bug 1759920
<ubottu> bug 1759920 in linux (Ubuntu Artful) "intel-microcode 3.20180312.0 causes lockup at login screen(w/ linux-image-4.13.0-37-generic)" [High,Confirmed] https://launchpad.net/bugs/1759920
<Gargravarr> dpb1: i did. made no difference
<TJ-> Seems like we have several problems with the Spectre_v2 microcode
<dpb1> Gargravarr: and did you post on the LP issue?
<Gargravarr> LP?
<dpb1> Gargravarr: sorry, I don't actually know who you are. :)
<Gargravarr> oh, Launchpad?
<dpb1> launchpad
<dpb1> yes
<Gargravarr> no, i was drafting a reply when i started noticing the microcode bit
<dpb1> OK
<dpb1> at least writing there would be good so that Kamal knows
<Gargravarr> yeah, i guess that would help
<dpb1> thanks
<waveform> I posted something on *an* LP issue, but it wasn't that one - I'll see if I can dig it up and link it in
<dpb1> waveform: is it sssd in your case too?
<waveform> well ... I am running sssd but I never tied the issue to that specifically. I just noted my machine freezing at the login screen, and eventually removing intel-microcode sorted things
<dpb1> ok
<Gargravarr> my machines have a tendency to freeze *before* the login screen
<Gargravarr> basically as soon as the sssd service loads at boot, so the splash screen freezes
<Gargravarr> dpb1: i posted comment #48 on the AWS ticket
<dpb1> thanks.
<waveform> hmmm ... I did note I could boot in recovery mode (without network) on the new kernel and the freeze wouldn't occur. The lack of networking would obviously preclude sssd, so there's perhaps a link there. I could install intel-microcode, make a local user and disable sssd - see if I can re-create it that way if that's any use
<Gargravarr> waveform: did you drop to a root shell there and then, or continue the boot?
<Gargravarr> i could get a root shell from recovery, but as soon as i tried to resume boot, it re-occurred
<waveform> I dropped to the root shell a few times to poke around, and did notice that certain actions like continuing the boot would always freeze the system, but until I *reinstalled* I never narrowed down *precisely* what started before the crash (even tried videoing the screen - but it was too blurry to read :)
<Gargravarr> i figured out that disabling the sssd service brought the system back up
<Gargravarr> this started happening on an XPS that i was re-imaging
<Gargravarr> the script errored out and the machine froze immediately after restarting sssd
<dpb1> interesting.
<Gargravarr> the logs clued me in that sssd was responsible
<dpb1> Gargravarr: this is xenial + the HWE?
<Gargravarr> that one was, yes
<Gargravarr> we're 90% Xenial/HWE with a few exceptions
<Gargravarr> the one that broke today was HWE, 4.13.0-38
<Gargravarr> okay, so got the laptop up with 4.4.0-112, let's see what happens invoking sssd
<Gargravarr> nope, frozen again, immediately after successful auth
<TJ-> I wonder if there's a clue in the sssd compilation options
<waveform> interesting - ah, finally found the bug I commented on and yup, it's now marked as  a dup of 1759920 (https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1760377)
<ubottu> Launchpad bug 1759920 in linux (Ubuntu Artful) "duplicate for #1760377 intel-microcode 3.20180312.0 causes lockup at login screen(w/ linux-image-4.13.0-37-generic)" [High,Confirmed]
<Gargravarr> TJ-: i've been running sssd in the foreground and collecting the logs on another machine
<Gargravarr> with the debug level set at 9, so got some pretty verbose logs if it'll help
<dpb1> Gargravarr: would not hurt to put on the bug.
<waveform> ah, I was wrong about -116 working for me, it was -112 that worked - which is curious given it's failing for you
<TJ-> Gargravarr: I mean that the most likely explanation, if it is caused by the microcode update, is the spectre_v2 changes. Some of that could have an interaction with compile time options that aren't spectre-aware ... have you tried disabling some of the kernel mitigations, e.g. adding to kernel command line "nospectre_v2"
<Gargravarr> i have not
<Gargravarr> kernel flags are a dark art to me :) i'll try that
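For reference, a boot parameter like nospectre_v2 is usually added via the GRUB defaults file; a sketch (keep whatever options are already in the variable, this just appends the flag for a one-off test):

```
# /etc/default/grub (sketch) - append nospectre_v2 to the existing options
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nospectre_v2"

# then regenerate the boot config and reboot:
#   sudo update-grub
```

For a single boot you can instead press `e` at the GRUB menu and append the flag to the `linux` line by hand, which avoids persisting a debugging-only change.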
<TJ-> looks like tyhicks has good advice, I'll stay with #ubuntu-kernel
<waveform> just reading the rest of that long bug thread - intriguing stuff
<Gargravarr> it'd be more intriguing if Meltdown/Spectre weren't such a !"£$%^ mess :)
<Gargravarr> this is my first job as a sysadmin after being a developer for years, i only started last year and it feels like i'm fighting not just The Bad Guys trying to get into my systems, but the vendors themselves making it impossible to secure them!
<waveform> oh indeed - I'm rather interested in why sssd is such a good trigger for it (and ultimately what the root cause is ... other than chipzilla's incompetence ;)
<Gargravarr> yeah, on that level it is interesting
<Gargravarr> i found it quite hard to get my head around why an auth daemon could cause such a low-level problem
<waveform> my rough guess would be it's doing something "fancy" with memory (trying to ensure certain things are secure / never reach swap / etc. but that's total speculation on my part at present)
<Gargravarr> indeed, but the interesting thing is that it does this after successful auth, too
<Gargravarr> by which point the secure memory should have been deleted
<Gargravarr> TJ-: the nospectre_v2 hasn't made a difference, frozen again
<waveform> ahh, #61 has some meat on  it
<Gargravarr> i'll read through in a minute
<tyhicks> when you have the latest microcode from Intel, a new code path is taken in the kernel when it is switching between two tasks
<waveform> and apparmor is implicated by the looks of it?
<waveform> (sorry, still reading :)
<tyhicks> if the old task is confined by apparmor, the kernel will attempt to generate an audit message during the task switch
<tyhicks> unfortunately, a lock is already held from early in the task switch code that the audit code then attempts to take again
<tyhicks> that results in a hard lockup
<Gargravarr> so it's a fight between the kernel and apparmor?
<waveform> ah, thanks very much for the clarification - I got some of that from the ticket, but your explanation is very clear!
<tyhicks> Gargravarr: it is just a bad interaction between several kernel subsystems due to the complexity of switching between tasks
<waveform> (and presumably that that complexity has just been increased with the advent of all the spectre/meltdown mitigations)
<tyhicks> right, this bit of problematic code is trying to decide if new microcode features need to be triggered when switching between two tasks
<dpb1> tyhicks: man that's some complex stuff right there. :)
<waveform> bah, got a three year old to go and deal with - will be back later - good  luck with the testing!
<Gargravarr> i'll trade you
<Gargravarr> spectre debugging for childcare
<dpb1> woah
<nacc> cpaelzer: fixed pg-repack uploaded
<nacc> submitted to debian as well
<waveform> Gargravarr, heh - I'd happily swap! spectre may be complex and certainly beyond my current knowledge in several areas but ultimately it's deterministic and logical
<waveform> ... this is more than I can say for the thought processes of a 3-year old :)
<sarnold> huh I'd have thought three year olds would make many demands :)
<sarnold> very determined
<dpb1> lots of speculative execution too
<waveform> it's the context switching that gets me - far too many topic changes within a minute!
<dpb1> almost leading to a meltdown?
<sarnold> :D
<ahasenack> can package descriptions in debian/control, and I mean the short description here, be longer than 80 chars?
<ahasenack> I'm not finding anything about this in https://www.debian.org/doc/debian-policy/#s-f-description
<TJ-> does the lint tool give any hints?
<ahasenack> nope, it seems to ignore it
<waveform> ahasenack, see section 3.4.1
<ahasenack> "The single line synopsis should be kept brief - certainly under 80 characters."
<ahasenack> does that include the "Description: " bit?
<dpb1> I'd think no
<waveform> it refers to just the first line of the Description: field by my understanding (read section 3.4 before it, which refers to 5.6.13 which you originally linked to)
<ahasenack> the dpkg -s output includes "Description: ", so if the intent is to be able to display dpkg -s in a 80-char terminal, then the whole thing should be less than 80 chars
<waveform> the example in 5.6.13 includes "single-line-synopsis" as part of their Description, which would seem to refer to 3.4.1 ?
<waveform> in other words, the first line of Description: should be strictly <80 chars, the following lines are effectively separate and form the extended description
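The 80-character budget is easy to check mechanically, including the "Description: " prefix that ahasenack's `dpkg -s` point covers. A sketch with a made-up control stanza (the package name and synopsis below are invented for illustration):

```shell
# Sketch: flag "Description:" synopsis lines of 80 characters or more,
# counting the field name itself so `dpkg -s` output fits an 80-column term.
cd "$(mktemp -d)"
cat > control.sample <<'EOF'
Package: librpmem-debug
Description: Persistent Memory low level remote access support library - debug build
EOF
awk '/^Description:/ && length($0) >= 80 {
    printf "too long (%d chars): %s\n", length($0), $0
}' control.sample
# prints: too long (84 chars): Description: Persistent Memory low level ...
```

Only the first line of the field is the synopsis; continuation lines (the extended description) are exempt from the limit.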
 * ahasenack gets creative with English and brings up a thesaurus
<waveform> (or at least, that's my reading :)
<ahasenack> Description: Persistent Memory low level remote access support library - debug build
<ahasenack> meh
 * ahasenack drops "low level"
<waveform> good call - that sort of detail can always go in the extended description. I'd think "debug build" could too (e.g. python2 / 3 packages don't specify whether they're 2 or 3 in the synopsis, typically only in the extended description)
<ahasenack> I have libfooN and libfoo-dev, and libfoo-debug is a bit controversial (it used to be part of libfoo-dev). It's not just debugging symbols, it has extra assertions, checks, slower code, logging
<waveform> hmmm, just having a quick comparison of various -dbg package synopses vs their "parent" - doesn't seem to be any particular standard
<ahasenack> this is not a -dbg package. The -dbg packages are being automatically generated
<sarnold> I think most -dbg packages ought to be dropped these days for the ddebs instead
<ahasenack> yes
<sarnold> but a package with extra assertions might be a useful thing to have, ala electric fence or valgrind etc
<waveform> ah, not something I'm familiar with - I'll have to go read up on ddebs
<sarnold> there's precious little docs :/
<sarnold> http://ddebs.ubuntu.com/
<waveform> https://wiki.ubuntu.com/Debug%20Symbol%20Packages ?
<sarnold> yup :)
<stanford_AI> do you know how to stream webrtc from /dev/video0 ?
<dpb1> cat /dev/video0 | webrtc-program    # sorry, I got nothing.
<sarnold> firefox or chromium is my best guess
<stanford_AI> sarnold, talking to me?
<stanford_AI> dpb1, lol
<sarnold> stanford_AI: yeah
<stanford_AI> sarnold, I would like to publish webRTC from the terminal, not from a browser
<sarnold> yeah, that sounds like a good idea. I just haven't seen anything except browsers implement it..
<stanford_AI> sarnold, what else could I use for video streaming from linux?
<dpb1> there was something I did
 * dpb1 looks it up
<dpb1> oh, I did rtsp
<dpb1> webrtc is the chat thing, right?
<dpb1> like hangouts
<dpb1> I wouldn't be any more help than google
<sarnold> crtmpserver looked promising, but the webpage is dead :/
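WebRTC itself is hard to do outside a browser, but for plain streaming from a V4L2 device the usual non-browser tool is ffmpeg. A command sketch only (it needs a real camera and an ingest server; the RTMP URL is a placeholder):

```shell
# Sketch: capture from /dev/video0 and push an RTMP stream.
# rtmp://example.com/live/stream is a hypothetical ingest URL.
ffmpeg -f v4l2 -i /dev/video0 \
       -c:v libx264 -preset veryfast -tune zerolatency \
       -f flv rtmp://example.com/live/stream
```

GStreamer (`gst-launch-1.0` with a `v4l2src` pipeline) is the other common option; neither speaks WebRTC natively, so a gateway such as a media server would still be needed for true WebRTC delivery.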
#ubuntu-server 2018-04-05
<nacc> cpaelzer: postgresql-10 is now blocked by the freeze, feel free to pick it up from here (pg-repack has already migrated)
<paul__> Can anyone recommend a web app I Can install on my home server to allow basic file management to a specific server location (upload, download, delete) through the browser?
<paul__> remaining space on partition would be nice too
<sarnold> owncloud or nextcloud seemed popular
<CodeMouse92__> (Nextcloud is awesome)
<CodeMouse92__> paul__: I would definitely look into Nextcloud for that. Nice, clean web interface, good controls, sync clients, etc. You can also set quotas on users.
<lordievader> Good morning
<cpaelzer> nacc: I see pg-10 is now just waiting for freeze as you said
<cpaelzer> nacc: there is not much more to do, I pinged the release team, but we are good if that migrates next week as well
<cpaelzer> nacc: IIRC after the beta spins are complete it should migrate prior to release
<cpaelzer> or do you think without pings it will automatically become a post release SRU?
<Gargravarr> tyhicks: think i figured out why the Precision (on the 4.4 kernel) still broke despite removing intel-microcode - i updated the BIOS, which as i proved with the XPS, also updated the microcode
<Gargravarr> trying your 4.4 kernel now
<Gargravarr> hallelujah, 4.4.0-119 working with SSSD
<Gargravarr> tyhicks: many thanks for your hard work fixing this
<tyhicks> Gargravarr: thanks for the testing!
<ahasenack> rbasak: hi, since you know debian
<ahasenack> rbasak: https://anonscm.debian.org/gitweb/?p=collab-maint/logwatch.git I'm trying to see the commit for "dovecot: Ignore "Logged out" and "Debug" lines. closes...", at the very end of the "short log" section
<ahasenack> but I get "bad object id"
<ahasenack> is anonscm.debian.org broken?
<rbasak> ahasenack: https://anonscm.debian.org/cgit/collab-maint/logwatch.git/commit/?id=f431e2e386af0da575d58fe80748199849d705ef
<rbasak> ahasenack: maybe a bug? I went via the log page.
<rbasak> ahasenack: also given the move over to salsa I wonder if somebody broke something doing something by hand to the repo.
<ahasenack> thanks
<ahasenack> ddstreet: hi,
<ahasenack> ddstreet: I'm doing some bug triaging, and this old-ish bug came up: https://bugs.launchpad.net/ubuntu/+source/vlan/+bug/1701023
<ubottu> Launchpad bug 1701023 in vlan (Ubuntu Trusty) "(on trusty) version 1.9-3ubuntu10.4 regression blocking boot completion" [High,Confirmed]
<ahasenack> does that ring a bell? The bug fixed since 10.1 (which is the last working package for the reporter) is https://bugs.launchpad.net/ubuntu/+source/vlan/+bug/1573272
<ubottu> Launchpad bug 1573272 in vlan (Debian) "default gateway route not installed for bond interfaces through reboot" [Unknown,New]
<ahasenack> 3 attempts even
<nacc> cpaelzer: ack
<OliPicard> Hey Everyone, I was wondering how do I download an older version of SQLite? I'm looking for SQLite 3.10.1 but I'm not able to find a download link on SQLite.org to the older version. I was wondering if there's a method I could use directly in APT to request an older version of the application?
<rbasak> OliPicard: https://launchpad.net/ubuntu/+source/sqlite3 -> View full publishing history
<Gargravarr> tyhicks: so which combination of intel-microcode and kernel is it that causes sssd to go down in flames?
<rbasak> OliPicard: click on the version string you want, then see the Downloads section
<OliPicard> rbasak: Thank you Robie :)
<Gargravarr> i'm looking at machines in Landscape and trying to figure out which ones are safe to upgrade
<tyhicks> Gargravarr: intel-microcode with 3.20180312.0 in the version string with any of the recent 4.4 and 4.13 kernels (since January)
<Gargravarr> okay. i've got a machine with that version microcode and kernel 4.13.0-26
<Gargravarr> i assume upping the kernel (before the current patch is released) would be Bad :)
<Gargravarr> are previous versions of the microcode affected?
<tyhicks> Gargravarr: you could downgrade the microcode if you wanted to run the latest kernels
<tyhicks> Gargravarr: you could also uninstall the intel-microcode package until the new kernels are available
<Gargravarr> i might do that, thanks for clarifying
<bladernr> Hey, is the live ISO (Subiquity) going to completely replace d-i for 18.04, or will both options be available?
<dpb1> both will be available
<dpb1> and supported
<bladernr> dpb1, thanks!
<bladernr> I was just playing with the live one to answer some questions and it's come along nicely.
<dpb1> thoughts so far?
<dpb1> I mean, it's pretty much final at this point of course
<dpb1> but we have things in store for 18.04.1.
<jbicha> hi, when you get a chance, it would be great for you to update https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes
<jbicha> Beta release is scheduled for today-ish and the Server section is incomplete (I filled out the Desktop section but didn't know what y'all wanted to highlight)
<ProCycle> I'm trying to figure out what the apache variable HTTP_REFERER could contain but there seems to be no documentation. Anyone know if it gives the full url or just the domain?
<ProCycle> Or how I might expose the contents of that variable so I can find out?
<dpb1> https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
<sdeziel> ProCycle: the referer is included in the default logging of Apache
<ProCycle> So it sounds like it's the full url
<sdeziel> grep -w combined /etc/apache2/apache2.conf
<sdeziel> ProCycle: it is whatever the browser/UA decided to put in
<ProCycle> Trying to redirect people coming from scam coupon sites to a disclaimer page
<tomreyn> ProCycle: the best way to have a webserver print the value of a variable is probably an HTTP response header (make up your own X-header (e.g. X-VAR-REFERER) and have the webserver return it with the value of the variable).
<tomreyn> you can then view this with curl, wget or the web development utilities integrated into firefox or chromium
<ProCycle> Interesting approach, thanks
<ProCycle> Well shoot I get a redirect loop because the redirect doesn't strip the referer header which triggers the rewrite again
<ProCycle> Putting [L] at the end doesn't stop it, and putting [END] breaks my server
<ProCycle> Huh I don't get why it just loops. The only thing I can think of is that the host uses an nginx reverse proxy in front of apache
<ProCycle> Guess it's time to move to a better host
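The loop ProCycle hit happens because the browser keeps sending the same Referer after the redirect, so the rule matches again on the disclaimer page itself. A mod_rewrite sketch (scamsite.example and /disclaimer.html are placeholders, not from the conversation):

```
# Sketch: redirect visitors referred by a known scam site to a disclaimer,
# but skip the rule once they are already on the disclaimer page, so the
# still-present Referer header cannot re-trigger it.
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/disclaimer\.html$
RewriteCond %{HTTP_REFERER} ^https?://([^/]+\.)?scamsite\.example/ [NC]
RewriteRule ^ /disclaimer.html [R=302,L]

# Optional, per tomreyn's debugging suggestion (Apache 2.4 expr syntax,
# requires mod_headers):
Header set X-VAR-REFERER "expr=%{HTTP_REFERER}"
```

Excluding the target URI (rather than relying on [L] or [END]) is the robust loop guard, since [L] only ends the current pass through the ruleset, not subsequent requests.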
<arooni> trying to connect to a mysql server from another machine... ri
<arooni> let's rephrase that
<arooni> if i cant telnet to that ip / port number; there's going to be no hope of connecting via mysql, right
<sdeziel> arooni: indeed, if you cannot telnet to $IP:3306, then the mysql client won't be able either
<arooni> whats weird is that i have allowed access via ufw from this ip address and port number; and restarted ufw
<arooni> am i missing something ?
<lordcirth_work> arooni, are you sure that mysql is listening on that ip instead of only on localhost?
<arooni> well i did telnet locally to 3306 and it did work
<arooni> i.e. telnet localhost 3306
<arooni> (from the machine running mysql server)
<arooni> and ufw reports 3306                       ALLOW       192.168.1.100
<arooni> fixed it; apparently a bind-address of localhost in mysql conf stops it listening for any remote connections
<nacc> arooni: yes, that seems correct.
<JanC> obviously
<arooni> i credit my amazing insomnia for making me stupider today
<JanC> it's a good thing that it only listens locally by default too  :)
<arooni> agree
<arooni> do i have to do something when i make ufw changes or are they live immediately
<jdstrand> arooni: if you use the ufw command (eg, ufw allow to any port 3306 from 192.168.1.100), it is leve immediately
<jdstrand> live*
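The fix arooni found comes down to one line of MySQL configuration; a sketch (the path is the usual Xenial/MySQL 5.7 location, and the address is an example LAN IP):

```
# /etc/mysql/mysql.conf.d/mysqld.cnf (sketch)
[mysqld]
# Default is 127.0.0.1, which only accepts local connections.
# Use the server's LAN address (or 0.0.0.0 for all interfaces) to
# accept remote clients, then restart mysql.
bind-address = 192.168.1.50
```

After restarting, `ss -tlnp | grep 3306` should show mysqld listening on the new address rather than only 127.0.0.1; the firewall rule (ufw) then controls who may actually reach it.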
<nacc> teward: ping -- you've got some noise in your last nginx merge (i'm fixing it up) -- will probably need you to sanity check what i end up dropping (e.g., you've got undocumented changes to debian/gbp.conf and d/p/0003....patch)
<ProCycle> Huh what would cause the root account to write files as -rw-r----- but directories as drwx------
<ProCycle> so group can read files but can't read directories...
<sarnold> the modes are computed by what the application requested for the operation along with the umask in place at the time
<ProCycle> so when I run sudo is it using the root account umask or the user running sudo?
<sarnold> ProCycle: like everything else with sudo, it's complicated :) see umask_override and umask in sudoers(5) for the details ..
<ProCycle> Hmm so, it's complicated. Best just set file permissions afterwards
<ProCycle> Writing a backup service file so need to make sure files it makes have the right permissions so the backup server can grab them
<sarnold> systemd.exec(5) has a UMask= setting you may find helpful
<ProCycle> Ah cool that looks useful
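The umask mechanics sarnold describes are easy to see in isolation; a minimal sketch (027 is just an example mask, not necessarily what produced the modes ProCycle observed, since applications can also request different modes):

```shell
# Files are created with mode 666 & ~umask, directories with 777 & ~umask.
cd "$(mktemp -d)"
umask 027
touch demo_file    # 666 & ~027 -> 640 (rw-r-----)
mkdir demo_dir     # 777 & ~027 -> 750 (rwxr-x---)
stat -c '%a %n' demo_file demo_dir
```

For a service, systemd's `UMask=` (mentioned above) sets this per unit, which avoids depending on whatever mask sudo or a login shell happens to leave in place.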
#ubuntu-server 2018-04-06
<rchavik> hi, is there an equivalent in ubuntu to 'yum history' and 'yum history undo <transaction>' ?
<sarnold> I don't know what those do, but /var/log/dpkg.log has some information on what was installed when
<rchavik> it lists history of package installs, and installed dependencies.   the undo is particularly helpful because it can automatically remove the dependencies too
<rchavik> pity there's no equivalent in ubuntu
<sarnold> oh
<sarnold> if you uninstall a package that dragged in dependencies, apt-get autoremove can clean those up
<dpb1> yummy
<sarnold> the deborphan package can help if apt has lost track of what was installed strictly for dependencies..
<rchavik> got it, thanks
<JanC> apt-get also has the --autoremove option for the remove command, which will do that in one step
<sarnold> oh sweet
<JanC> or you can set APT::Get::AutomaticRemove to make that automatic
<sarnold> "--auto-remove, --autoremove". dude. <3
<JanC> careful if you mix several APT tools though, as I'm not sure all of them use the same database to store installed-as-dependency info
<JanC> (I remember they didn't in the past, not sure they do now)
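The option JanC mentions lives in an apt configuration snippet; a sketch (the filename 99autoremove is arbitrary):

```
# /etc/apt/apt.conf.d/99autoremove (sketch)
# Make "apt-get remove foo" also remove packages that were only
# installed as dependencies of foo, without needing --auto-remove
# or a separate "apt-get autoremove" run.
APT::Get::AutomaticRemove "true";
```

The current value can be inspected with `apt-config dump | grep AutomaticRemove`.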
<sarnold> twenty years of typing apt-get update &&  apt-get -u dist-upgrade has kinda burned that into my fingers
<JanC> I mean apt-get vs. aptitude vs. ...
<sarnold> you'd think I'd shorten that .. but no.
<sarnold> so I tend to forget that aptitude even exists.
<JanC> I guess it's trivial to make an alias for update-then-upgrade  :)
<JanC> or a bash function if you want to make it somewhat fancier
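Such a helper is a two-liner in ~/.bashrc; a sketch ("aup" is a made-up name, and extra arguments such as -y pass through to dist-upgrade):

```
# ~/.bashrc fragment (sketch): refresh package lists, then upgrade.
aup() {
    sudo apt-get update && sudo apt-get -u dist-upgrade "$@"
}
```

The `&&` matters: if the update step fails (say, a broken mirror), the upgrade is skipped rather than run against stale lists.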
<sarnold> it's been a busy two decades
<lyn||orian> muscle memory probably now
<lordievader> Good morning
<Neo4> where is the syslog file located in ubuntu?
<Neo4> I've read it should be in /etc/syslog.conf, but there is nothing
<Neo4> does ubuntu have a syslog.conf file?
<Neo4> and what does this command do: ps auxwww | grep syslog ?
<Neo4> the book I'm reading about Unix is from 2005, but I think nothing has changed so far
<Neo4> who knows what a hostname lookup is?
<lordievader> Neo4: What version of Ubuntu are you running?
<lordievader> And what is it that you are trying to accomplish? Read the syslog or configure the syslogger?
<Neo4> 16.03
<Neo4> nothing, the book says this is the main log file
<Neo4> it lists all the paths where your system stores logs
<Neo4> I wanted to look at it, but didn't find it; the book is old and about unix
<parlos> Good Morning
<lordievader> Neo4: Logfiles are typically stored in `/var/log`, though with 16.04 you use systemd which comes with journald. To access those logs you need to use the `journalctl` utility.
<lordievader> Hey parlos
<JanC> Neo4: Ubuntu uses rsyslogd instead of some older syslogd, so the syslog configuration is in /etc/rsyslog.conf
<Neo4> ok
<JanC> but as lordievader says, you can see logs with journalctl too
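To make the pointers above concrete: on 16.04 the rsyslog rules live in /etc/rsyslog.d/50-default.conf, and the stock file contains rules of roughly this shape (a trimmed sketch, not the full file):

```
# facility.priority      destination (a leading "-" means async writes)
auth,authpriv.*          /var/log/auth.log
*.*;auth,authpriv.none   -/var/log/syslog
kern.*                   -/var/log/kern.log
```

So /var/log/syslog is the "main log file" the book describes, while `journalctl` reads the same messages from journald's own store.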
<adac> Is there a standard way of removing all old kernels?
<rbasak> adac: "apt autoremove". With --purge if you wish. This will remove everything that apt thinks is unused, including old kernels.
<rbasak> At least from Xenial onwards. Not sure about Trusty.
<adac> rbasak, thanks!
<adac> is purge needed as well?
<adac> I mean, so that the kernels do get removed?
<rbasak> adac: the payload will get removed just with autoremove. purge also removes config files and knowledge of the package from the package manager.
<rbasak> I almost never use remove on its own.
<adac> rbasak, ok yes thanks!
<adac> rbasak, "apt autoremove" is something different then "apt-get autoremove"?
<rbasak> apt is a friendlier front-end with some defaults changed.
<rbasak> Since apt-get is generally locked in to interface and defaults because scripts use it
<adac> rbasak, ok thanks
<adac> rbasak, so one should generally use apt now?
<tomreyn> adac: either is fine, apt may be more user friendly
<tomreyn> for scripting things, use apt-get
<adac> kk
<adac> hmm even though I did "apt-get autoremove --purge" it still shows me a lot of images left
<adac> dpkg --list | grep image
<adac> https://pastebin.com/XxrEeiGA
<tomreyn> !info linux-image-generic xenial
<ubottu> linux-image-generic (source: linux-meta): Generic Linux kernel image. In component main, is optional. Version 4.4.0.119.125 (xenial), package size 2 kB, installed size 14 kB
<tomreyn> do you have update-manager installed?
<rbasak> adac: the mechanism depends on apt considering the kernel packages "automatically installed".
<rbasak> The apt-mark command will tell you what is marked auto and what is marked manual.
<adac> ok have to check thanks guys!
<rbasak> Usually there's a metapackage like linux-image-generic marked manually installed that depends on the latest actual kernel package
<rbasak> And the actual kernel packages remain marked automatic.
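[ed: the auto/manual mechanism rbasak describes can be inspected with apt-mark; the kernel version below is just an example]

```
apt-mark showmanual | grep linux-        # metapackages you asked for, e.g. linux-image-generic
apt-mark showauto | grep linux-image     # versioned kernels apt may autoremove
apt-mark auto linux-image-4.4.0-116-generic   # hand a kernel back to the autoremover
```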
<rbasak> If you have your kernels installed in some special way, that may break.
<adac> ok, need to go through this, be back in some time, surely will have some more questions :)
<adac> I think I forgot --purge on that last host where the *images* are still there
<adac> on another host where I used --purge now definitely the images are gone
<adac> no, that was not the issue. checking this marked stuff now
<adac> apt-mark showauto shows me:
<adac> https://pastebin.com/wr8QCfuw
<adac> rbasak, can i get rid of those old images then somehow?
<rbasak> adac: you can purge the old package manually
<adac> rbasak, ok simply by package name
<adac> worked
<adac> thanks again rbasak and tomreyn
<rbasak> nacc: dpb1: https://irclogs.ubuntu.com/2018/03/26/%23ubuntu-server.html#t13:52
<nacc> rbasak: teward: do you have a link to the 16.04 request?
<rbasak> nacc, dpb1: https://lists.ubuntu.com/archives/ubuntu-server/2015-June/007080.html
<JediMaster> Hey all
<JediMaster> Is there a tool to migrate from a basic /etc/network/interfaces to the new /etc/netplan/*.yaml file? And yes, I've tried what the ubuntu docs say, "netplan ifupdown-migrate" isn't a valid option (at least in bionic)
<dpb1> nacc: Hey, in between waiting on reviews, could you please dive down and see if this upgrade to nginx makes sense.  We'll still try to get teward's input, but would help to have your validation
<rbasak> nacc: dpb1: also https://lists.ubuntu.com/archives/ubuntu-release/2015-July/003310.html
<dpb1> JediMaster: where do you see the docs saying 'ifupdown-migrate'
<dpb1> JediMaster: that's not supposed to be there and we don't recommend an automated tool to migrate ATM
<JediMaster> I'm trying to clone a VM in VMWare using chef's 'knife vsphere' command, and vmware sets the IP, gateway and other bits via the /etc/network/interfaces file, as it looks like it's not caught up with netplan on 17.10/18.04 yet. I could easily write a script to run a command to convert the config and then run netplan to bring the interfaces up
<nacc> dpb1: ack, will do
<nacc> rbasak: thank you
<JediMaster> dpb1, https://wiki.ubuntu.com/Netplan under "Commands" near the top
<nacc> dpb1: cough, that should be just a link to netplan.io, no?
<nacc> cyphermox: --^
<JediMaster> I'm getting to the point that I think I'll have to write a script to do the migration myself lol
<JediMaster> it's probably not *that* hard, just didn't want to re-invent the wheel
<JediMaster> Still not entirely sure what it is that writes the /etc/network/interfaces file when VMWare clones the machine, I'm guessing it must be the 'open-vm-tools' package, in which case that probably needs updating to work with netplan
<JediMaster> dpb1, so I presume I'll need to write one myself then, just while the vmware tools don't support netplan?
<JediMaster> it's a super simple config, and will always be the same other than different IPs
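[ed: the migration JediMaster describes could be sketched in a few lines of Python. This is a hypothetical helper, not an official tool: it handles only a single static IPv4 stanza, and the `gateway4` key reflects the 17.10/18.04-era netplan syntax]

```python
import ipaddress

def interfaces_to_netplan(text):
    """Convert one minimal static /etc/network/interfaces stanza to netplan YAML."""
    name = addr = mask = gw = None
    dns = []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "iface" and "static" in parts:
            name = parts[1]
        elif parts[0] == "address":
            addr = parts[1]
        elif parts[0] == "netmask":
            mask = parts[1]
        elif parts[0] == "gateway":
            gw = parts[1]
        elif parts[0] == "dns-nameservers":
            dns = parts[1:]
    # turn a dotted netmask like 255.255.255.0 into a prefix length (24)
    prefix = ipaddress.ip_network(f"0.0.0.0/{mask}").prefixlen
    return "\n".join([
        "network:",
        "  version: 2",
        "  ethernets:",
        f"    {name}:",
        f"      addresses: [{addr}/{prefix}]",
        f"      gateway4: {gw}",
        "      nameservers:",
        f"        addresses: [{', '.join(dns)}]",
    ])
```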
<dpb1> nacc: :/
<dpb1> JediMaster: what vmware tool is that?
<dpb1> nacc: I'll fix that now, thanks
<nacc> dpb1: i'm *guessing* the wiki page predates netplan.io and was never updated once the other page went live
<dpb1> yes
<JediMaster> dpb1: My best guess is that it's the open-vm-tools package in ubuntu that writes to the /etc/network/interfaces file when you clone a VM and specify a new IP/gateway etc
<nacc> cpaelzer: --^ i think you were looking at that package?
<dpb1> JediMaster: what action do you take from the outside?  *just* pick an ubuntu vm and clone it?
<JediMaster> dpb1: I've just made an Ubuntu 18.04 (yes beta) template machine, the netplan file was made correctly by the installer and it has network access. I then use chef's 'knife vsphere' integration that talks to VMWare's vSphere, which clones the machine and sends commands, I believe via vmware tools (open-vm-tools), to set the new IP/gateway and DNS
<JediMaster> I highly doubt that vmware/vsphere would actually write directly to the disk, so I suspect it's the open-vm-tools package that does the network changes. It's worked perfectly on Ubuntu 14.04 and 16.04, but then gets stuck waiting for the network interface to configure on 18.04, as it's written to the wrong file
<nacc> JediMaster: you also could install ifupdown (iirc) in 18.04
<dpb1> JediMaster: ok
<JediMaster> nacc.... ah, I didn't know that was an option, but that seems a more dirty hack than writing a script to create a yaml file for netplan somehow ;-)
<dpb1> JediMaster: see https://netplan.io -> examples -> I really do need ifupdown, can I still use it?
<dpb1> JediMaster: can you do that?
<dpb1> JediMaster: also, please take the advice in that first sentence and file your workflow.  The detail that you give in your IRC comments here would be great in a bug.  Say exactly what didn't work.
<dpb1> sorry, *file a bug
<JediMaster> dpb1, Sure, I'd be happy to, thanks for your help, nacc too
<JediMaster> Should I file one in both netplan and open-vm-tools?
<nacc> JediMaster: +1 :)
<dpb1> JediMaster: that same bug, just *target* it to both projects
<JediMaster> ah yes, of course
<dpb1> tyvm
<JediMaster> No problem, I'll get on to it shortly, thanks
<JediMaster> netplan is certainly more complex than ifupdown & /etc/network/interfaces syntax, but it's so much more powerful, it'll just take a bit of getting used to =)
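[ed: as an illustration of the syntax difference JediMaster mentions, the same static address in both formats; interface name and addresses are made up, and `gateway4` is the 18.04-era key]

```
# /etc/network/interfaces (ifupdown)
auto ens160
iface ens160 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1

# /etc/netplan/01-static.yaml (netplan)
network:
  version: 2
  ethernets:
    ens160:
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```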
<bladernr> Hey gang, while I'm testing a customer config, I wanted to see ask if booting from NVMe is a viable option. System has several platter drives meant for data storage and VM hosting, and an NVMe meant for the root FS.  Is that a valid deployment scenario?
<dpb1> bladernr: I'd think it would come down to EFI/bios support?  Unless I'm missing something?  it's just a disk to ubuntu.
<bladernr> dpb1, ok, that's what I thought, I just wanted to validate that. (I've never had my hands on a system with NVMes before now).
<bladernr> thanks
<dpb1> bladernr: oh, ok
<shibabandit> Hope I'm in the right channel and apologies if this was already asked... but attempting to figure out why certain configurations of the ubuntu cloud image listing are missing that used to be there. We typically use the endpoint 'http://cloud-images.ubuntu.com/query/trusty/server/released.current.txt' (or xenial instead of trusty). It used to have the image combination of 'us-east-1', 'ebs-ssd', 'amd64', and 'hvm'. Any help is appreciated
<sarnold> rcj,Odd_Bloke, ^^ does shibabandit's question sound familiar?
<sarnold> shibabandit: I don't spot any hvm in that list ..
<rcj> fginther: ^
<sarnold> thanks rcj
<rcj> shibabandit: it's broken, we're looking into it
<fginther> shibabandit, yes, I'm currently working on the issue
<shibabandit> Thank you rcj. I noticed irregularities compared to what I had seen in the past for both trusty and xenial, which are the LTS endpoints we use. Would you be able to recommend the right place to get updates on their availability? Would it be this chat or is there a web page I should be checking?
<rcj> shibabandit: https://bugs.launchpad.net/cloud-images/+filebug is the place to file bugs and then you can track updates on fixes
<sarnold> oh cool, I don't think I've seen this yet :)
<sarnold> Odd_Bloke: ^^ it's been handled, feel free to ignore ;)
<rcj> shibabandit: We'll put a link on the top page of cloud-images.u.c because they're only on the individual releases where you'll see the bug link (ex. https://cloud-images.ubuntu.com/xenial/current/)
<shibabandit> Thank you for your help!
<rcj> shibabandit: We're going to revert trusty's released.current.txt to the prior serial which has a full complement of images until the publication is complete.
<zero_shane> hi all - I'm testing Bionic Beta 1 via netboot - and all of my VMs or metal installs hang on 'update-grub'.  I searched through launchpad ... but no dice.  Is this a good place to discuss, or should I take my business elsewhere?
<sarnold> zero_shane: it's not wrong, but not exactly a high-traffic channel either .. but I don't know if #ubuntu+1 is mostly desktop folks or if there's netbooters there too
<zero_shane> ok - will check there - they're pretty low user count - but will try there
<ahasenack> zero_shane: which install image is that, server or desktop? And I presume it's beta2, right? Or one of the ubuntu variants?
<zero_shane> server
<zero_shane> http://releases.ubuntu.com/18.04/ubuntu-18.04-beta2-live-server-amd64.iso
<zero_shane> the only server ISO I could find for Bionic
<zero_shane> I had to download the Bionic netboot kernel and initrd which isn't bundled in this ISO
<ahasenack> interesting, cdimage.u.c has a non-live one
<ahasenack> maybe that's with the old installer, I'm not sure
<ahasenack> http://cdimage.ubuntu.com/releases/18.04/beta-2/ubuntu-18.04-beta2-server-amd64.iso
<zero_shane> ISOs everywhere .... I'll check that one out too - thx for the pointer
<dpb1> so
<dpb1> zero_shane: this would be the best: http://cdimage.ubuntu.com/daily-live/current/
<dpb1> zero_shane: that is the new installer, much faster, less questions, etc
<dpb1> zero_shane: you can read more about it here: http://blog.dustinkirkland.com/2018/02/rfc-new-ubuntu-1804-lts-server-installer.html
<zero_shane> hmm @dpb1 - it appears that's just a new interactive installer, right ?   I don't care about those - we deploy 10s of thousands of machines via Preseeds
<zero_shane> it definitely looks like a nice overhaul/replacement for the old text based installer, though
<dpb1> zero_shane: ok, then yes.  for preseed, stick with the old d-i based one
<zero_shane> :)
<sarnold> kirkland`: pretty screenshots on http://blog.dustinkirkland.com/2018/02/rfc-new-ubuntu-1804-lts-server-installer.html  :D thanks! everything needs more screenshots..
<dpb1> zero_shane: however, I'd use the one from here... http://cdimage.ubuntu.com/ubuntu-server/daily/current/, the beta itself isn't as interesting as the most up to date (it will be closer to what we ship)
<dpb1> zero_shane: but, I understand you are having issues with what you are trying, so if you repeat the issue and think it's a bug, please do give more details on it, we are very interested in getting that kind of feedback here.
<zero_shane> @dpb1 - will try the daily images - thx !
<shibabandit> I see the AMI listings showing up now in 'http://cloud-images.ubuntu.com/query/trusty/server/released.current.txt', thank you this resolves my issue.
<ProCycle> Does anyone have experience with using systemd to run multiple instances of mariadb on ubuntu 16.04?
<ProCycle> I'm looking at https://mariadb.com/kb/en/library/systemd/
<roaksoax> ProCycle: why not just put them in lxc container s?
<dpb1> +1
<dpb1> lxd
<ProCycle> It has a short blurb about it but I can't seem to find the mariadb@.service file anywhere to figure out how to use it
<dpb1> you'll save yourself a huge headache
<roaksoax> indeed
<dpb1> lxc launch ubuntu:xenial
<TJ-> That's a template file ProCycle
<TJ-> ProCycle: apt-file search reports mariadb-server-10.1: /lib/systemd/system/mariadb@.service
<ProCycle> I've never used containers before, seems like a whole can of worms
<TJ-> container of worms? :)
<sarnold> *groan*
<TJ-> Hey! I was gardening all day, I had containers of worms :)
<ProCycle> Thankyou TJ- found it
<dpb1> ProCycle: well, running multiple instances of mysql on the same filesystem surely has its own challenges
<ProCycle> I was just running multiple databases on one server instance but that seems to open a can of worms when dealing with mariabackup
<ProCycle> They aren't high performance databases, just one main one and a bunch of seldom used databases
<TJ-> systemd's templating for multiple instances is very useful
<TJ-> and very elegantly implemented
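[ed: TJ-'s point about template units: a unit file whose name ends in `@` is a template; `systemctl start name@instance` instantiates it, with `%i` expanding to the instance string (`%I` is the unescaped form). A minimal hypothetical sketch, with unit name and paths made up; the shipped mariadb@.service works the same way, pointing each instance at its own my%I.cnf]

```ini
# /etc/systemd/system/demo@.service  (hypothetical template unit)
[Unit]
Description=Demo daemon, instance %I

[Service]
# for "systemctl start demo@alpha", %i expands to "alpha"
ExecStart=/usr/local/bin/demo-daemon --config /etc/demo/%i.conf

[Install]
WantedBy=multi-user.target
```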
<ProCycle> dpb1, are you referring to the fact you need separate directories or something else more sinister?
<ProCycle> TJ-, Okay looks like the template uses /etc/mysql/conf.d/my%I.cnf
<TJ-> ProCycle: yes
<ProCycle> Inside that cnf file do I still need to do the whole [mysqld1] group naming or just a plain copy with [mysqld] ?
<dpb1> ProCycle: really a portion of the cattle vs pet argument.  single-purpose your box device with containers.
<dpb1> then that container is focused on one thing
<ProCycle> Yeah I debated that, typically I would just run multiple VMs for each but seemed wasteful
<dpb1> that's the nice thing about containers
<dpb1> density
<dpb1> you can run 10-100x more per server than vms, really you are just paying for an init system and your application.
<ProCycle> But having a separate server instance for each different project makes backups easier than combining all of them into one instance
<ProCycle> One of these days I'll learn how to use containers, I just don't have the time to right now
<ProCycle> Thanks for the input though, it's certainly something I considered
<cocoa117> i am trying to create multiple routing table and mark IP packet to certain IP using ppp0 rather then default route
<cocoa117> however it appears the return packet from remote never reach to the application, e.g. curl timeout
<cocoa117> i run tcpdump showed the remote IP send packet back, but local app never received them
<cocoa117> can anyone think any reason this might be?
<ProCycle> TJ-, I got it working (needed to create data directory and set perms, run mysql_install_db, and run the secure script)
<TJ-> ProCycle: yes, that'd be expected
<ProCycle> That section on multiple instances sorta says that but doesn't really spell it out
<TJ-> yeah, they're familiar with the program so they forget to mention the hidden assumptions
<ProCycle> Huh, there seems to be a problem with the mysql_install_db script
<ProCycle> I ran it with --user=mysql but there's one directory it didn't set the group to mysql on during the install
<ProCycle> which is strange, because when mariadb installs its default database it does
<ProCycle> the mysql directory
#ubuntu-server 2018-04-07
<gtrmtx> hey guys, i have an apache question and #httpd is dead at the moment. mind if i ask here?
<sarnold> sure
<sarnold> it's pretty dead here too though :/ hehe
<gtrmtx> lol i hear ya
<gtrmtx> friday night
<gtrmtx> this is my setup
<gtrmtx> http://pasteall.org/pic/show.php?id=8799afe9f3ad11a8fefb7c702bff54e9
<sarnold> yah. It's just about the end of my attention span, hehe :)
<gtrmtx> the problem is site2 is trying to resolve the private ip on the client side
<sarnold> which process is doing the resolving?
<gtrmtx> what do you mean?
<gtrmtx> thats if i type sub.domain.com in the browser
<sarnold> you can use dnsviz to get an independent view of what your DNS servers are serving .. http://dnsviz.net/d/www.gmail.com/dnssec/
<RoyK> what happened to lvm in the 18.04 server installer?
<RoyK> or raid?
<tomreyn> hmm, good point, it doesn't seem to support lvm or raid or FDE (yet?).
<tomreyn> that's if we're talking about the "live" installer
<tomreyn> i think the old one still does
<tomreyn> the "live" one also seems to fail to detect existing partitions
<tomreyn> that's my local copy of the daily build from apr 5
<RoyK> FDE?
<RoyK> tomreyn: yes, I was thinking of the "live" installer - is there a boot argument or anything to use the old one instead? the new one doesn't seem complete, or perhaps just a nice thing for newbies
<RoyK> oh - fde - encryption - sorry
<tomreyn> FDE -> full disk encryption (dm-crypt/LUKS)
<RoyK> yeah, googled it ;)
<RoyK> just hadn't seen that acronym before
<tomreyn> i dont actually know but i'd be surprised if the server 'live' installer could switch to the old installer using a boot parameter. i think you will need to download a daily build or beta of the classic installer instead
<tomreyn> i seriously hope the server 'live' installer won't be considered the default media for server installations before this functionality is added.
<RoyK> it's the default on the beta
<tomreyn> TJ-: any idea how to get the right people's attention on this?
<RoyK> http://releases.ubuntu.com/18.04/ <-- nothing but the "live" installer there for server
<tomreyn> right
<tomreyn> non "live" server installers for beta2 are at http://cdimage.ubuntu.com/releases/18.04/beta-2/
<tomreyn> daily: http://cdimage.ubuntu.com/ubuntu-server/daily/
<tomreyn> daily live: http://cdimage.ubuntu.com/ubuntu-server/daily-live/
<TJ-> tomreyn: sounds like typical Canonical, throw it out before it has feature parity. The whole idea of a 'live' server install makes little sense - from what I've heard it's like netplan.io - missing core features and seemingly not being actively enhanced (at least not rapidly enough to not cause a lot of pain)
<tomreyn> yes :-/ more dev than ops
<RoyK> I can see the point of a live installer - it gives you a *lot* of control and tools in case of fixing a broken system - but then - it doesn't make sense if the installer can't do the same as (or more than) the old one
<tomreyn> i do like the new live installers' interface, though, but without supporting these critical functionalities, it's just not ready.
<RoyK> indeed
<TJ-> less bling, more bang!
<RoyK> and now it's past feature freeze, so I guess it'll stay
<TJ-> look at the massive numbers of enhancements/bugs in the Canonical projects and you see the same theme. netplan.io, gnome-software (ubuntu-software), subiquity, etc.
<tomreyn> i guess preseeding works differently for live and non live also. if so, then companies will hate to switch to live and later back to the classic server installer.
<tomreyn> i can understand the commercial drive to create something ubuntu-unique, pseudo (or really?) proprietary to create a(nother?) USP, but i wish this was created a different way.
<TJ-> well preseed is handled by debian-installer. not sure subiquity is designed to do that. From what I read it's more about 'dd'-ing prepared images into place
<tomreyn> on the other hand canonical seems to struggle a lot financially, laying off employees and killing projects regularly.
<tomreyn> https://wiki.ubuntu.com/UbiquityAutomation
<tomreyn> last edited 2015-03-24
<tomreyn> "Preseeding keys for the following installer components will not be used in Ubiquity, usually because they do not fit with Ubiquity's mode of operation: [...] LVM and RAID partitioning [...]"
<TJ-> tomreyn: Canonical is trying to prepare itself to get additional outside investment, which requires each project/unit/team to be profitable
<TJ-> Hence dropping Mir, Unity, and so forth
<tomreyn> i understand that killing non profitable projects makes complete sense economically and is actually a requirement for a healthy business.
<TJ-> Yes... the issue though is they throw these projects out into the path of everyone then leave them half-finished or buggy.
<tomreyn> right
<TJ-> Until they started this subiquity project I had thought -server would remain safe in the main, even though there's a lot of Canonical stuff going around it like Openstack, MAAS, Landscape, the LXC>LXD containers
<tomreyn> is "subiquity" not a typo then?
<tomreyn> i thought you meant "ubiquity"
<TJ-> not at all (Server Ubiquity) although from what I've seen it doesn't build on ubiquity, it's a totally separate project
<tomreyn> got it
<TJ-> !info subiquity
<ubottu> subiquity (source: subiquity): Ubuntu Server Installer. In component universe, is extra. Version 0.0.29 (artful), package size 60 kB, installed size 259 kB
<RoyK> what's Mir?
<tomreyn> oh ok
<TJ-> RoyK: it was Caonical's display server supposed to compete with Wayland protocol compositors
<RoyK> oh
<RoyK> well, guess I'll stick to Debian, then, if these are the ways of Canonical
<TJ-> tomreyn: well, finally got an 18.04 chroot by doing 12.04 > debootstrap > 16.04 > debootstrap > 18.04 !
<tomreyn> wohoo!
<TJ-> RoyK: the core desktop has dropped Unity and returned to Gnome, adopting Wayland where it's viable but sticking with Xorg as the default for now (since gnome-mutter (the Wayland compositor) still has a lot of deficiencies)
<tomreyn> btw there is https://github.com/CanonicalLtd/subiquity/blob/master/subiquity/models/raid.py
<tomreyn> but i didnt see how to use it on the UI
<TJ-> tomreyn: best to stick to Debian-Installer - it has 20 years of experience of doing these things!
<tomreyn> yes, manual partitioning makes me want to do things to whoever invented it, though
<tomreyn> but we can preseed
<tomreyn> and pre-configure partitions
<tomreyn> s/invented/implemented/
<tomreyn> i guess debian installer is the new mini.iso then ;-)
<tomreyn> not officially supported but actually required and supported forever.
<TJ-> that's down to 'partman' - awful bit of software that is, always rescanning when it's not necessary
<tomreyn> err built, not supported
<tomreyn> i never used partman outside of the installer, but i use parted, and i think both use libparted?
<TJ-> right - I have a washing machine bearing to change! Now 18.04 has installed I can come back and port the config over later :)
<tomreyn> progress! stone -> wheel
<RoyK> TJ-: I noticed the Xorg rollback
<albech> we are in the process of researching alternatives to our Xenserver 7.2 setup, since they have changed the pricing policy in 7.3. We are currently looking into oVirt and Proxmox any other recommendations or comments? Less than 100 nodes and 400 VMs. All SAN storage. Will be setting up a lab environment next week.
<albech> We are NOT looking to make it more complicated than need be (KISS). So Openstack has been ruled out unless we come up with a REALLY good reason to use it.
<AJ2> Hi all
<AJ2>  Hi All. I need some help with my Ubuntu 16.04. On my network all my devices have an IP like 192.168.1.xxx, but for some reason the IP on my Ubuntu server is 192.168.10.x. Due to this I am not able to ssh into this machine from the same network, and moreover from outside my home network. In my router settings I do not even see this ubuntu pc; therefore I can safely say I have not assigned a static IP to this machine on my
<AJ2> router.
<AJ2> I would appreciate anyones help on this.  This is a new install of ubuntu I did
<dpb1> albech: Canonical's bootstack might interest you.  https://www.ubuntu.com/cloud/managed-cloud  -- for a straightup xenserver replacement, I mean, it's vsphere, but I can't imagine price there is better. :)  All the features of public cloud are why you use openstack (developer self-service, standard instance launch models, staging -> production workflows, etc).
<dpb1> AJ2: are you on the terminal of the Ubuntu server?  do `ip addr; cat /etc/network/interfaces` and pastebin the results please.
<dpb1> !pastebin | AJ2
<ubottu> AJ2: For posting multi-line texts into the channel, please use https://paste.ubuntu.com | To post !screenshots use https://imgur.com/ !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic.
<dpb1> !pastebinit | AJ2
<ubottu> AJ2: pastebinit is the command-line equivalent of !pastebin - Command output, or other text can be redirected to pastebinit, which then reports an URL containing the output - To use pastebinit, install the Â« pastebinit Â» package from a package manager - Simple usage: command | pastebinit
<RoyK> You are correct that the new live installer does not have feature parity with the existing installer. Replacing something that was developed for more than a decade will take time.
<RoyK> The Answer To Why RAID And LVM And FDE Is Not Supported In The Live Install
<RoyK> ye gods - replacing a good installer with crap without implementing these things doesn't seem like a very good idea
<arooni> not sure why, even though I have "3306 ALLOW 192.168.1.100", I don't seem to be allowed to telnet from that ip address
#ubuntu-server 2018-04-08
<RoyK> arooni: does mysql listen on an IP other than localhost?
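[ed: RoyK's question can be checked like this; the config path varies by MySQL/MariaDB version, and the GRANT pattern is an example]

```
# is mysqld listening beyond localhost?
sudo ss -tlnp | grep 3306      # "127.0.0.1:3306" means localhost only

# in /etc/mysql/mysql.conf.d/mysqld.cnf (path varies):
bind-address = 0.0.0.0         # listen on all interfaces instead of just loopback

# the client also needs a matching account, e.g. 'user'@'192.168.1.%'
```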
<Neo4> Hi guys!
<Neo4> What books do you suggest I read to learn ubuntu server?
<Neo4> I'm interested in VPS
<Neo4> so far I've read 2 books, the official ubuntu guide and the postfix one https://www.amazon.com/Book-Postfix-State-Art-Transport/dp/1593270011/ref=sr_1_1?ie=UTF8&qid=1523195645&sr=8-1&keywords=the+book+of+postfix
<andol> Neo4: https://debian-handbook.info/ is really good, and most of it applies (not surprisingly) to Ubuntu as well.
<Neo4> andol: I'm going to read 20 books about programming or linux for now
<andol> That's a lot of reading :)
<Neo4> andol: what do you suggest then, how to learn it without reading?
<Neo4> reading is the only way
<Neo4> the fast way
<andol> Reading is good, just as long as it's mixed with a bit of doing/experimenting.
<Neo4> andol: about SSL, about Apache, MySQL
<Neo4> LAMP
<Neo4> there are not many applications that interest me: apache2, ssl, php, mysql, and a mailserver, that's the main set
<Neo4> plus some book about linux in general
<Neo4> andol: Do you think this proverb will work (the more you read the more you'll know the more you  know the more places you'll go ) ????
<Neo4> andol: If I read more will I know Linux?
<Neo4> yes?
<Neo4> will I know more if I read 20 books about VPS?
<andol> Well, different people learn in different fashions.
<Neo4> andol: really?
<andol> Yes.
<Neo4> andol: but the proverb exists? Do you agree it's true that the more you read the more you'll know?
<Neo4> andol: if I read 300 pages of a book per day, will I know a lot?
<Neo4> andol: Does reading speed influence our knowledge?
<Neo4> andol: what is the relation between reading speed and your knowledge?
<Neo4> maybe direct?
 * andol returns to writing code.
<Neo4> andol: ok
<Neo4> :)
<Neo4> I don't know what to do, I know nothing :(
<Neo4> strong lack knowledge :( :( :(
<andol> Socrates would be proud of you :-)
<Neo4> andol: He wouldn't, I'm stupid...
<Neo4> andol: I'm thinking about starting to read 50 - 100 pages every day. How many pages do you read every day?
<Neo4> andol: do you have some daily minimum?
<Neo4> technical books are very easy to read, especially about an OS or linux; we could read 100 pages in 2 - 3 hours
<Neo4> without training it might be 5 hours, but if you do it every day it will be 2 hours, 1 minute per page
<Neo4> ok, 2 minutes per page average speed, 100 pages, that's 120 minutes, so you can read it in 2 hours, and I don't think that's a big change for us?
<Neo4> will it be helpful?
<Neo4> can it overall my illiteracy?
<Neo4> overcome*
<MJCD> Hey all - anyone know a good gui for managing the standard lamp stack install from ubuntu?
<Neo4> guys, what is a relay?
<Neo4> a relay in postfix
<Neo4> is it when we send a message from an MUA to postfix and it resends it to another MTA, is that called relay?
<Neo4> and there is another way: when we send a message from the so-called 'localhost', the machine where postfix resides.
<Neo4> and I see postfix can only deliver a message to localhost, or send, or relay?
<Neo4> it has only 3 modes of operation?
<Neo4> 1 deliver - put the message into /var/mail
<Neo4> 2 relay - pass on a message from an MUA or other MTA via SMTP
<Neo4> 3 send - simply send a message from localhost
<Neo4> and there's an old book where it's written that we can use SMTP AUTH; that's when we install a special set of packages, Cyrus SASL, and it allows authentication over SMTP
<Neo4> and to test postfix we can use 3 ways:
<Neo4> 1 send a message using postfix's own sendmail (a special internal utility)
<Neo4> 2 send a message from the command line, which will be just like sending from an MUA
<Neo4> 3 send using telnet, something like: telnet localhost 25
<Neo4> we can test our settings without a real client, simply using telnet; it will be just like a real request, but for this we must know the basics of the SMTP protocol
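[ed: the telnet test Neo4 describes is just typing SMTP by hand. An illustrative session; hostnames and addresses are invented, and lines starting with a numeric code are the server's replies]

```
$ telnet localhost 25
220 mail.example.com ESMTP Postfix
HELO client.example.com
250 mail.example.com
MAIL FROM:<user@example.com>
250 2.1.0 Ok
RCPT TO:<dest@example.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: test

hello
.
250 2.0.0 Ok: queued
QUIT
221 2.0.0 Bye
```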
<Neo4> as you know, postfix is not one application, it's a set of apps; there are many daemons inside, and Postfix links them together
<Neo4> and as I know we can use /etc/init.d to start/stop applications in debian
<Neo4> ###/etc/init.d/postfix (start/stop), and the same applies to other applications: /etc/init.d/mysql start/stop/restart, or apache2
<Neo4> guys, whoever wants, I will teach you how postfix works; I've read the books and have a plethora of knowledge :)
<Neo4> we can compile postfix from source; why would that be needed?
<compdoc> postfix is too complex for some things
<Neo4> sometimes not all (types or formats) are included: mysql, or something else. Say we want mysql support and we check (there's a special command that shows the list of supported data) and we don't see mysql
<compdoc> if you only need to send mail, like system notifications, nullmailer is a lot simpler
<Neo4> we must download the source; there should be a settings file, uncomment the line with mysql, and build it anew using make
<Neo4> compdoc: I need to send mail from a wp site
<Neo4> wordpress site*
<Neo4> compdoc: it's not that complex; there are two settings files, one has all the variables and the other the tables, daemons or something similar
<Neo4> the first file has the constants, the second the supported data; we can toggle that data on/off and enable/disable daemons
<Neo4> compdoc: what do you use for send mail from site?
<compdoc> nullmailer
<Neo4> compdoc: what does "relay-only transport agent" mean?
<Neo4> compdoc: it can only relay messages and can't send from localhost?
<Neo4> http://wiki.linuxquestions.org/wiki/Nullmailer
<Neo4> compdoc: can your site send messages with it? Do you use it on a VPS?
<Neo4> compdoc: fast fast please :)
<Neo4> ok, doesn't matter, I will use postfix anyway
<Neo4> compdoc: do you know SSL?
<Neo4> compdoc: I want to get this book, couldn't download for a while
<Neo4> https://www.amazon.com/Bulletproof-SSL-TLS-Understanding-Applications/dp/1907117040/ref=sr_1_2?ie=UTF8&qid=1523204454&sr=8-2&keywords=openssl&dpID=41DKxOvK21L&preST=_SX258_BO1,204,203,200_QL70_&dpSrc=srch
<Neo4> compdoc: is it ok to ask you for fatherly advice about an SSL book? :) :) :)
<Neo4> for full newbie
<Neo4> from zero to master
<Neo4> compdoc: I found that book, couldn't download http://gen.lib.rus.ec/book/index.php?md5=A675C4F9C2DBC89A677D9EF8D77939D8
<Neo4> Interestingly, what is the difference between TLS and SSL?
<Neo4> Transport Layer Security and Secure Sockets Layer; it's always written as TLS/SSL, which implies they should be the same thing
<Neo4> compdoc: are you lost?
<Neo4> compdoc: sorry :(
<Neo4> what books must a pro linux administrator read?
<compdoc> Neo4, sometimes you have to try things to see if its right for you. if you have postfix working, go with that
<Neo4> compdoc: ok :)
<Neo4> compdoc: that book downloaded )
<Neo4> it costs 40 dollars; how can anyone read paper books?
<Neo4> :) :)
<Neo4> 48*, let's see what's in there for $48
<Neo4> 531 pages, need to read it in 3 days
<Neo4> and what SSL knowledge do we have? zero
<Neo4> whatever CA we googled, we send it there, they give us a proper one, we install it and that's it
<Neo4> no overall picture
<Neo4> there's also self-signed
<Neo4> you can also make your own CA, whatever that is
<Neo4> they write it will only be trusted on the internal network
<Neo4> what is an internal network? LAN? local area network?
<Neo4> what is CIDR notation?
<Neo4> ip /8 /8
<Neo4> in short, no knowledge :(
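[ed: CIDR notation, asked about just above, writes a network as address/prefix-length: /8 means the first 8 bits are the network part. Python's stdlib ipaddress module illustrates this]

```python
import ipaddress

# 10.0.0.0/8: the first 8 bits (10.) identify the network,
# the remaining 24 bits are host addresses.
net = ipaddress.ip_network("10.0.0.0/8")
print(net.netmask)        # 255.0.0.0
print(net.num_addresses)  # 16777216 (2**24)
print(ipaddress.ip_address("10.1.2.3") in net)  # True
```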
<Neo4> sorry, confused channel :(
<tomreyn> albech: i am in a similar situation regarding (commercial) xenserver. my use case is a little different: need a solution which customers can use to manage virtualization on single (or a few pooled) dedicated servers (mini private cloud). management should not require client software (i.e. provide web interface, if possible). https://en.wikipedia.org/wiki/OpenNebula might be another option.
<albech> tomreyn: yeah that is slightly different. will look at the opennebula again. did look at it some years ago.
<albech> tomreyn: what is really the most important to us is that there is an active community and IRC around the project.
<tomreyn> albech: i don't know the opennebula state there. ganeti is another option, but already too large for the single server use case.
<albech> tomreyn: we do have several servers and SANs available, i just dont want to add too much complexity if its not really needed.
<albech> we have a couple of young admins who are really pushing for openstack, but i know who is going to save their a.. and work late nights when its not working as expected. ;)
<Seveas>  /win 31
<tomreyn> hehe, right, i would not want to do this with a small team and not overly filled budget either.
<TJ-> tomreyn: am I missing something? Is libvirt/virt-manager not suitable?
<tomreyn> TJ-: neither are web interfaces
<tomreyn> i use them personally, but it's not end user friendly
<TJ-> There's web interfaces for libvirt
<TJ-> amongst many other libvirt using application categories... https://libvirt.org/apps.html#web
<tomreyn> yes. those web interfaces i know are either bad or expensive.
<tomreyn> .or do not work for single dedicated servers
<tomreyn> i guess the requirements are the issue there. must be open source, good, cheap
<tomreyn> for commercial use
<lynorian> how is virt-manager hard to use?
<lynorian> Why do they need to do it in a web browser?
#ubuntu-server 2019-04-01
<xibalba> what're you guys using for a time server nowadays? ntpd/openntpd/chrony ?
<xibalba> i need it to provide time to a bunch of downstream devices
<tomreyn> many prefer chrony over ntpd for its reduced complexity / newer code. but i don't know how well it works as a server (i suspect it does).
<xibalba> i'm going to point a lot of network gear at it, i know ntpd is tried and true but chrony is the newcomer and has better/quicker time sync code
<xibalba> i have seen it demo'd more as a client-side time sync than as a server for various downstream gear
<tomreyn> maybe also ask in ##linux , ##networking , ##security to get more opinions
<xibalba> will do, thanks tomreyn
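For xibalba's use case, serving downstream devices is mostly one directive in chrony. A minimal sketch of the server side (the pool host and subnet are placeholders; on Ubuntu this would live at /etc/chrony/chrony.conf — a temp file is used here so the example touches nothing real):

```shell
# Minimal chrony server config sketch; written to a temp file for safety.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pool ntp.ubuntu.com iburst maxsources 4
# the key server-side directive: answer NTP queries from this subnet
allow 192.168.0.0/16
# keep answering from the local clock if all upstreams disappear
local stratum 10
EOF
grep -c '^allow' "$conf"   # one allow rule configured
```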
<lordievader> Good morning
<Ussat> 19.02 wont be a LTS release, correct ?
<sdeziel> Ussat: correct, next LTS will be 20.04
<Ussat> OK, thought as much, just making sure
<sdeziel> s/19\.02/19.04/
<Ool> each 2 years
<Ussat> we only run LTS, I might spin a 19.04 for testing etc....
<Ussat> so even LTS odd not ?
<sdeziel> in the past, the following were LTSes: 6.06, 8.04, 10.04, 12.04, 14.04, 18.04
<sdeziel> and 16.04 :)
<ahasenack> rbasak: hi, what would you expect to happen with an SRU for two packages, where one introduces a new api call (samba), and the other one just needs a rebuild in order to detect it and use it
<ahasenack> rbasak: samba will land later today, but the other one (gvfs) needs to wait for samba to arrive in proposed and then be rebuilt
<ahasenack> so for the second one, would you expect an MP? Or just a dch -R like change and upload?
<ahasenack> both packages are tasks on the same bug
<teward> hmm... has anyone had any issues submitting DNS queries to 127.0.0.53, the stub resolver for SystemD?  `dig` to it always times out as if it doesn't know how to reply...
<teward> or do I have to configure it differently?
<blackflow> teward: I have no issues, I don't use that pile of ...
<lordcirth> teward, systemd-resolve has worked pretty well for me. I just tried dig google.com @127.0.0.53 and it worked fine too.
<teward> hmm
<teward> wonder why then it's being a derp
<blackflow> yeah I wonder...
<lordcirth> teward, what does systemd-resolve --status show?
<teward>          DNS Servers: 127.0.2.1  <-- but this server uses a local bind9 recursive resolver and only the stub resolver derps on replies
<teward> DNS *does* work systemwide though, and querying the bind server direct works too (127.0.2.1, and yes that address does exist on-box)
 * teward shrugs
<teward> probably some SystemD nonsense
<lordcirth> teward, does /etc/resolv.conf point to systemd?
<teward> lordcirth: yes
<teward> sudo nano /etc/resolv.conf
<teward> oops
<teward> yep
<teward> so we know the stub resolver works NORMALLY, but it just doesn't like something about the rest of the infra for some reason :?
<teward> not sure what that's about
 * teward checks the progress on his local packages mirror's sync, and sees it's completed.
<teward> ... wow 1.2TB in a little over 8 hours o.O
<sarnold> nice
<teward> sarnold: yeah, 20MB/s average (and yes that's mega*bytes*) is pretty decent :P
<lordcirth> shiny. I'm testing a Ceph cluster right now, getting 3 to 5 GiB/s read O_o
<sarnold> teward: man how'd you win the archive mirror lottery? :)
<teward> sarnold: well, gigabit internet is nice... and sleeping for 12 hours means nothing's using the internet majorly.  :P
<lordcirth> Turns out that sticking enough 7200rpm drives together can get things going pretty fast, as long as you have nvme for write journaling
<teward> heh nice
<lordcirth> Know what the bottleneck is? 100Gb/s ethernet.
<lordcirth> On reads, that is
<lordcirth> On writes, it's the Write-ahead-log (WAL) on nvme.
<sarnold> then you need more nvmes! :D
<lordcirth> Well, considering that the prod cluster is doing fine, and this next gen one I'm testing is several times faster on every metric, no, not really :P
<lordcirth> Not that I would *mind* more vroom
 * teward anti-vrooms the Ceph cluster with magicks
<teward> lordcirth: this server IS set up with its recursive resolver to pass through a pihole though...
<teward> ... wonder if *that* is the first bottleneck breaking the replies
<teward> yep that'd be the problem
<teward> lordcirth: solved it xD
<lordcirth> teward, cool. blackflow ha, not systemd's fault :P
<blackflow> lordcirth: lies!
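The checks used in the thread above, collected into one hedged sketch. It runs against a sample resolv.conf so it is safe anywhere; point the grep at /etc/resolv.conf on a real machine:

```shell
# Does this box route DNS through systemd-resolved's stub at 127.0.0.53?
sample=$(mktemp)
printf 'nameserver 127.0.0.53\noptions edns0\n' > "$sample"   # stand-in for /etc/resolv.conf
if grep -q '^nameserver 127\.0\.0\.53' "$sample"; then
    echo "stub resolver in use"
    # On a real system the next steps would be:
    #   systemd-resolve --status          # which upstream does the stub forward to?
    #   dig example.com @127.0.0.53       # does the stub answer?
    #   dig example.com @<upstream-ip>    # does the upstream answer directly?
    # If the upstream answers but the stub doesn't, the fault sits between
    # them -- in teward's case, a pihole in the forwarding path.
else
    echo "resolv.conf bypasses the stub"
fi
```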
#ubuntu-server 2019-04-02
<Goop> How do I install phpmyadmin without Apache? I have a server that MUST run on nginx only.
<andol> Goop: Likely the same way you would install any other PHP application under Nginx, using php-fpm. Also, a quick google search suggests that there are plenty of tutorials available.
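A sketch of the nginx side of andol's answer, under the usual assumptions: phpmyadmin unpacked at /usr/share/phpmyadmin and php-fpm listening on its default Ubuntu socket — both paths are assumptions to verify locally. Written to a temp file so the example stands alone:

```shell
# Hypothetical nginx server block for phpmyadmin via php-fpm.
site=$(mktemp)
cat > "$site" <<'EOF'
server {
    listen 80;
    server_name pma.example.com;           # placeholder name
    root /usr/share/phpmyadmin;            # where the package/tarball lands
    index index.php;

    location ~ \.php$ {
        include snippets/fastcgi-php.conf; # ships with Ubuntu's nginx package
        # socket path varies with the PHP version; check /run/php/
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }
}
EOF
grep -c 'fastcgi_pass' "$site"   # one fpm handoff configured
```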
<Maximxxx100> Ok interesting problem, one of my servers that I own has been compromised. It's making thousands of connections to one ip and port per second from random ports on my server. Now nothing about it is showing up on netstat, and nothing I can do shows what process or why this is happening. I've shut down every system process that is not necessary, and used ufw to block every in/out connection except SSH. And I tried blocking the ip
<Maximxxx100> with iptables/ufw with no luck. It's still making connections no matter what I do. did a port scan on the ip and the ONLY open port is the one my server is making thousands of connections per second to located in Vietnam. Thanks.
<bhuddah> Maximxxx100: backup all data.
<Maximxxx100> I have bhuddah, I've had this particular server for 6 years without any problems. I would like to save it without going Nuclear and wiping with a new install, but I cannot stop my server from contacting this ip at all.
<bhuddah> Maximxxx100: you can never ever save a compromised machine. you absolutely need to wipe it. repair is not an option. never.
<Maximxxx100> I dont know if it's compromised for sure, it would be nice If I could find more information about it before. I wonder how it would get compromised in the first place. I've had unattended upgrades working perfectly, using only ssh keys and good passwords. And only used it to host a few files and private services for me.
<Maximxxx100> and I always used user accounts for all services, yet the connections are coming from root. darn...
<Maximxxx100> How did the sneaky Vietnam guy hack his way in is what I want to know.
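Some hedged triage for Maximxxx100's situation. The key caveat, and the likely reason netstat shows nothing: a kernel-level rootkit can hide sockets from every tool run on the box itself, so the only trustworthy view is packet capture from outside the host (and bhuddah's point stands — a wipe is still the fix):

```shell
# On-box userspace views (a rootkit may lie to all of these):
#   ss -ntup state established   # sockets with owning PIDs
#   lsof -nPi                    # per-process open network fds
# From ANOTHER machine or the gateway (much harder to fool):
#   sudo tcpdump -ni eth0 host <suspect-ip>
# This loop just reports which of those tools are available locally:
for tool in ss lsof tcpdump; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: present"
    else
        echo "$tool: missing"
    fi
done
```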
<awalende> Is it possible, that the "out of memory killer" kicks in not when there is not enough ram, but too many pagetables are created?
<awalende> My qemu vm keeps kicking the bucket, even tho I have still a bunch of memory left on the host
<cpaelzer> awalende: maybe memory of one special kind is depleted like lowmem in the 32 bit past
<cpaelzer> awalende: the oom should have put some output in your dmesg that might help
<cpaelzer> you could pastebinit to think about it together
<cpaelzer> sometimes knowing /proc/meminfo, /proc/pagetypeinfo can also help
<cpaelzer> depends on your actual case
<cpaelzer> awalende: in general https://linux-mm.org/OOM has some more details and also a script to collect more data (I haven't tested/used that script, so it might need some polishing)
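The data cpaelzer asks for, gathered in one read-only sketch (dmesg and /proc/pagetypeinfo may need root on some kernels, hence the fallbacks):

```shell
# Collect the usual OOM post-mortem data in one go; everything here only reads.
echo "== recent OOM messages =="
dmesg 2>/dev/null | grep -i 'out of memory' | tail -n 5
echo "== memory totals =="
grep -E '^(MemTotal|MemFree|MemAvailable|PageTables):' /proc/meminfo
echo "== per-order free pages (fragmentation view) =="
head -n 5 /proc/pagetypeinfo 2>/dev/null || echo "pagetypeinfo not readable without root"
```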
<awalende> kern log: https://paste.ubuntu.com/p/BpSQ6ysxj4/
<awalende> meminfo: https://paste.ubuntu.com/p/shg9q62ZbW/
<awalende> pagetype info: https://paste.ubuntu.com/p/KxTfPKjqDg/
<cpaelzer> the kern log should have ~15 more lines above that starting with Mem-Info
<awalende> like this? https://paste.ubuntu.com/p/xXsv88fMHd/
<cpaelzer> awalende: without spending too much time I unfortunately also see no clear reason
<cpaelzer> awalende: the alloc being only order=0 from GFP_HIGHUSER_MOVABLE should succeed if your pagetypeinfo matches
<cpaelzer> unless the pagetype info is e.g. from long before/after the actual issue
<cpaelzer> which means whatever was depleted before isn't anymore when taking the data
<cpaelzer> awalende: oh here we go
<cpaelzer> awalende: in the moment you fail the high memory has only free:33316kB
<cpaelzer> but there is min:33320kB
<cpaelzer> and the  GFP_HIGHUSER_MOVABLE can not tap on that reserve
<cpaelzer> it might be that your overall free mem is on other nodes and/or other zones
<awalende> meh, then I probably have to bash my monitoring since it was reporting 110gb free mem on crash :x
<awalende> but thank you for lookin into it cpaelzer! I'll try to get a bigger grip on different memory sections on my server
<awalende> cpaelzer , I believe to have found the cause now. Our QEMU VM has NUMA support enabled. I believe the separation of memory banks can cause memory chokes depending on the load. I guess thats what you meant with "the free mem is on other nodes"?
<cpaelzer> awalende: yes
<cpaelzer> awalende: https://libvirt.org/formatdomain.html#elementsNUMATuning
<awalende> We probably want the "preferred" mode here.
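What "preferred" mode looks like in the libvirt domain XML — a fragment sketch; nodeset 0 is a placeholder and should name the NUMA node local to the VM's CPUs:

```shell
# Hypothetical <numatune> fragment for the domain XML (edit via `virsh edit`).
frag=$(mktemp)
cat > "$frag" <<'EOF'
<numatune>
  <!-- prefer node 0 but fall back to other nodes instead of failing allocation -->
  <memory mode="preferred" nodeset="0"/>
</numatune>
EOF
grep -c 'mode="preferred"' "$frag"
```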
<Ussat> Anyone here run an ansible master on Ubuntu ? I assume you use the official ansible PPA's as listed here:  sudo apt-add-repository --yes --update ppa:ansible/ansible
<sdeziel> archive.ubuntu.com is terribly slow (~15kB/s) from multiple locations
<lotuspsychje> sdeziel: known issue @ the ubuntu-mirrors guys
<sdeziel> lotuspsychje: thanks :)
<lotuspsychje> <moon127> tobikoch: we're aware, we had a large spike in traffic ~90 mins ago.  No sign that is anything but legitimate traffic so far, but we're pushing our transit to capacity at this time.
<sdeziel> I'll enjoy the dial-up experience in the meantime ;)
<lotuspsychje> sdeziel: :p try sudo apt update perhaps
<Ussat> ok...so a new build of ubuntu.....: Err:4 http://us.archive.ubuntu.com/ubuntu bionic-backports InRelease
<Ussat>   403  Forbidden [IP: 91.189.91.23 80]
<Ussat> getting a ton of those......
<dlloyd> yeah getting timeouts and sporadic failures for the aws us-east mirrors as well
<Ussat> Well.......have a few builds to do today...this puts a crimp in plans
<JanC> maybe ask in #canonical-sysadmin
<Ussat> sigh
<Ussat> this is not a good start to my day
<rbasak> Ussat: 403 seems odd. Are you sure you don't have something transparently MITMing?
<Ussat> definitely
<Ussat> OK, I was wrong, transparent proxy on this network, talking to my network team now rbasak
<rbasak> Ussat: #canonical-sysadmin confirmed a known issue. Perhaps your MITM is transforming the known issue into a 403?
<Ussat> rbasak, yup...working with my networking team now
<Ussat> its a monday here :(
<Ussat> is there someplace I can see all the IP's used for updates at canonical and ubuntu ? or a range, I need them for my proxy
<teward> sarnold: i maaaaay have found a bug in using umt on a later release :/
<JanC> Ussat: doesn't seem like a good idea to hardcode that
<Ussat> Ya its fixed
<JanC> imagine if they would decide to dynamically add cloud instances or something like that, there is no way you could keep a list of "known download servers" up-to-date...
<Ussat> JanC, ya its been fixed here
#ubuntu-server 2019-04-03
<lordievader> Good morning
<brektyme> Hi, I'm getting a libc segfault when pxebooting an install during the disk detection. Does anyone have any ideas?
<blackflow> brektyme: hard to tell unless you trace something somwhere to see where it faults. otoh, is the booting OS correct for the hardware and architecture?
<brektyme> yes
<brektyme> I'm using the kernel and initrd from the mirrors, both amd64
<brektyme> which is correct for the hardware
<brektyme> the kernel and initrd from here: http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/
<brektyme> using the kernel and initrd from here gets past the segfault, but was causing issues with installation later around not being able to find kernel modules: http://archive.ubuntu.com/ubuntu/dists/xenial-updates/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64/
<brektyme> although just testing now with the newer kernel and initrd it had no issues with installation.
<DammitJim> has anyone set up drbd on ubuntu servers and use pacemaker to not only ensure who is the master, but also trigger the start of a service?
<sdeziel> DammitJim: drbd works really well (tested on every LTS from 10.04 to 16.04 so far) but not with pacemaker
<DammitJim> sdeziel, how are you using it w/o pacemaker?
<DammitJim> thanks for your info, btw
<DammitJim> man, a lot of people don't like glusterfs and recommend ceph besides drbd
<sdeziel> DammitJim: I'm using it through ganeti (cluster manager) that automates the creation of DRBD volumes that get assigned to QEMU VMs
<sdeziel> DRBD is kind of old school
<DammitJim> holy shnikies!
<DammitJim> but drbd is pretty solid, right?
<sdeziel> oh yeah
<sdeziel> we've been live migrating VMs for many years
<sdeziel> no DRBD issue to report
<DammitJim> how do you share the drbd file system to be accessed by clients?
<DammitJim> like is this something where I can have a process in one of the drbd servers (or on all of them) where the process writes locally to the file system?
<sdeziel> the clients are QEMU processes so all they see is a regular block device like you'd get from a LVM slice
<sdeziel> DammitJim: yes you can have that
<DammitJim> oh, interesting... they see a block device
<DammitJim> so, say 2 drbd servers
<sdeziel> DammitJim: you can see DRBD as RAID1 over the network
<sdeziel> reads are served by the local copy if possible with a fallback to the peer
<DammitJim> then the qemu process runs on another server, but can see the RAID1 over the network to access the block device for a new VM?
<sdeziel> we have say /dev/drbd1 on node01 and node02. A given VM running off of node01 uses /dev/drbd1. During a live migration, the QEMU process is "moved" to node02 which already has the same disk content at /dev/drbd1 so only QEMU's memory needs copying
<DammitJim> yikes
<sdeziel> as far as QEMU is concerned it writes to a local disk, end of story
<sdeziel> DRBD does the magic behind the scene
<DammitJim> that's insane
<sdeziel> insane in the good way?
<DammitJim> I think I understand it, but my brain might still be behind processing this
<DammitJim> insanely good, yes
<sdeziel> yes it is
<DammitJim> do you remember if you had to zero out the hard drive before configuring drbd?
<DammitJim> I don't know why I'm running into issues with /dev/sdb1 where it doesn't allow me to add it
<sdeziel> DammitJim: I'd refer you to https://docs.linbit.com/docs/users-guide-8.4/#ch-configure as it's been an eternity since I last configured a DRBD device manually (ganeti does it all for us)
<sdeziel> DammitJim: unless you have unrelated partitions on sdb, I would recommend to either directly feed /dev/sdb to DRBD or put LVM/etc under it
<sdeziel> partitions make it hard to resize later down the road
<DammitJim> gosh, LVM might make sense in case that someone decides we need more storage
<DammitJim> so, I can even do this with LVM?!!!
<sdeziel> that's what we use
<DammitJim> gosh, I have a lot of work ahead of me
<sdeziel> LVs under and DRBD over
<DammitJim> I need to take baby steps
<sdeziel> I think the LVM way is best even for experimenting
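A sketch of the "LVs under, DRBD over" layout sdeziel describes. The privileged commands are shown as comments (device names, sizes, hostnames and IPs are all placeholders); only the resource file is actually written, to a temp path:

```shell
# 1) carve an LV per DRBD device (run as root on BOTH nodes):
#    pvcreate /dev/sdb && vgcreate vg0 /dev/sdb
#    lvcreate -L 20G -n drbd1 vg0
# 2) describe the resource -- normally /etc/drbd.d/r1.res:
res=$(mktemp)
cat > "$res" <<'EOF'
resource r1 {
  device    /dev/drbd1;
  disk      /dev/vg0/drbd1;
  meta-disk internal;
  on node01 { address 10.0.0.1:7789; }
  on node02 { address 10.0.0.2:7789; }
}
EOF
# 3) initialise and bring up (both nodes), then promote one side:
#    drbdadm create-md r1 && drbdadm up r1
#    drbdadm primary --force r1     # first node only, first time only
grep -c '^resource' "$res"
```

Resizing later then becomes lvextend on both nodes followed by `drbdadm resize`, which is why partitions directly under DRBD make growth painful.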
<DammitJim> totally separate question (I need to get to other important stuff)... any recommendations to join a Windows domain?
<DammitJim> I've run into so many issues with samba/kerberos, etc... even though I have a somewhat working solution
<DammitJim> the systems seem to break that functionality when upgrading from Ubuntu 16 to 18
<fearnothing> hola
<fearnothing> having some DNS problems
<fearnothing> dig @8.8.8.8 www.google.com - works fine
<fearnothing> dig @192.168.x.x www.google.com - no packets get sent to 192.168.x.x
<fearnothing> wtf
<sarnold> is that address routable? check with ip route get
<fearnothing> it's on the same subnet
<fearnothing> fml
<fearnothing> nevermind
<fearnothing> found the problem
<fearnothing> I'm a dingus
<sarnold> ooh, what was it? :) I always like learning new ways to be a dingus ;)
<teward> fearnothing: wrong IP?  DNS server not working right?  Nonexistent DNS server at address?
<teward> we are curious
<teward> ohai sarnold there be work for you :p
<fearnothing> re-IPed my DNS server 3 months ago didn't I
<teward> hah
<fearnothing> updated all my other hosts' resolv.conf files but forgot this one
<teward> ERR: Wrong IP in DNS nameservers list.
<teward> I've done that too :P
<fearnothing> right, time to do-release-upgrade because I've been a lazy mofo
<sarnold> fearnothing: oooh that's a good one :)
<fearnothing> (?!.*regex)I've got(?:\s(?<problem>[^\s]+)){99}
<fearnothing> yeah for some reason I had used a 32-bit image for my bind server, and that meant I couldn't install something else I needed
<fearnothing> I mean wtf it was only installed 2 years ago where did I even get a 32 bit image
<sarnold> "a dns server probably doesn't need much address space anyway"
<fearnothing> that's ancient history man
#ubuntu-server 2019-04-04
<Deihmos> anyone use unattended updates? how do i blacklist kernel updates?
<sarnold> Unattended-Upgrade::Package-Blacklist in the right config file, /etc/apt/apt.conf.d/50unattended-upgrades on my bionic laptop
<tomreyn> i'm curious on the use case, can you discuss it?
<sarnold> I don't know the apt config system real well, maybe you can put it into a new file without modifying whichever one already exists..
<Deihmos> what do i use "linux-generic"
<tomreyn> linux-image-generic and / or linux-image-generic-hwe-* i would guess.
<sarnold>         if re.match(blacklist_regexp, pkgname):
<sarnold> maybe linux-image.*
<tomreyn> actually linux-.* may be better if you want to prevent module updates, too.
<tomreyn> + headers + firmware + tools.
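Putting sarnold's and tomreyn's pieces together, a sketch of the stanza (written to a temp file here; on a real box it would go in /etc/apt/apt.conf.d/, e.g. a new 51-kernel-blacklist file so the shipped 50unattended-upgrades stays untouched):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
// entries are Python regexes matched against package names,
// so this covers linux-image-*, linux-headers-*, linux-modules-*, ...
Unattended-Upgrade::Package-Blacklist {
    "linux-.*";
};
EOF
grep -c 'linux-' "$conf"   # one blacklist pattern present
```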
<Deihmos> does livepatch update modules?
<Deihmos> maybe i will just skip live patching
<tomreyn> are you aware that unattended-upgrades will, by default, only install security patches and newer kernel versions, but not actually boot into them?
<tomreyn> i mean it won't trigger a reboot
<tomreyn> unless you ask it to
<sarnold> Deihmos: yes, livepatch can patch modules
<Deihmos> yes i know but you can set it to reboot if needed
<tomreyn> okay, just making sure you're not trying to work around an issue which does not exist.
<Deihmos> are security updates always kernel related?
<Deihmos> answer is no
<lordievader> Good morning
<geodb27> People : hi ! I'm on a fresh install of ubuntu 18.04 (server) and face a problem with network setup. Indeed, all my parameters (ip, mask, dns servers) are provided by a dhcp server. This is quite fine. However, with the new setup for dns provided by ubuntu, my machine can resolve all but what resides on my own domain.
<nikolam> Hi, Is there a smarter way to handle BTRFS snapshots (after installing apt-btrfs-snapshot to create a snapshot on every apt install/upgrade operation) than to delete them manually in CLI, one by one?  Like some GUI?..
<geodb27> What would be the best practice to have my dns servers be provided by my dhcp server and not have this ugly 127.0.0.53 fake server that serves nothing at all ?
<tomreyn> geodb27: 127.0.0.53 is not really a 'fake' server, it is a real locally running dns caching service, systemd-resolved
<geodb27> tomreyn: I can believe you on this. However, what I see on my machine is that with this default setup, my machine can't get any name resolved on my local domain name.
<tomreyn> if you propagate a search domain you could make your organization's servers and service hostnames resolvable by users providing just the service name.
<tomreyn> geodb27: does    systemd-resolve --status    list a 'DNS Domain' on the network link which connects to your organization network?
<tomreyn> and do the dns servers listed there match your organization's authoritative DNS servers for internal use (allowing to resolve some internalservice.organizationdomain.tld)?
<tomreyn> i'm assuming you run your own auth dns servers there for internal use. other orgs just push everything out to the internet.
<geodb27> let me check
<geodb27> Don't think I've left all... The machine has just been rebooted and can't be reached...
<tomreyn> geodb27: maybe the issue is really just your auth dns?
<geodb27> Indeed, we have two dns servers that are authoritative on our network for our local.domain.tld. The two dns server addresses are listed when I do a systemd-resolve --status.
<geodb27> However, my guess is that the 127.0.0.53 local dns wants to be authoritative also on this local.domain.tld, which is weird.
<tomreyn> geodb27: when you run    systemd-resolve --flush-caches && systemd-resolve some.local.domain.tld    does it then seem to resolve those correctly? and can you      ping -c1 some.local.domain.tld    without getting resolver errors?
<geodb27> no to all, I've already tried all this, ping some.local.domain.tld fails with "Temporary failure in name resolution". But...
<geodb27> I say but... Because there seem to be another network error somewhere I'm trying to figure out.
<tomreyn> if you install dig and run    dig -t A some.local.domain.tld @resolver.listed.on.systemd-resolve--status   does it report that it was able to resolve it properly?
<tomreyn> what makes you think there is "another network error somewhere"?
<geodb27> I can't install anything, the machine can't reach our proxy (proxy.local.domain.tld)...
<geodb27> Machine boot 1 : I can ssh to it. Reboot, I can't.. Reboot, I can, and so on. Ping never reaches this ip address... Well, there *DEFINITELY* is something wrong.
<tomreyn> can you ping -c1 the resolver?
<tomreyn> also the proxy, by ip address
<tomreyn> but i agree this looks like a network issue (probably outside of this system), not just dns.
<blackflow> geodb27: and what if you set up a static IP? does the connection flip-flop on reboot as well?
<geodb27> I'm waiting for one of our Network engineer to look at it closer.
<JonTheNiceGuy> Hi, just tried to use the Ubuntu server 18.04.02 image downloaded this morning with the new installer (I normally use Vagrant images, so I've not used the installer since circa 14.04)... Struggling to do an install on an inherited server (Fujitsu Primergy RX200S8) with static IPs and an LSI raid card. I've had to manually set up the IP addressing via a separate TTY (ip addr/ip route and tinyproxy on another host to work
<JonTheNiceGuy> around DNS) and now it's not finding the raid drives. I can generally work around things, but identifying where the issues are to improve others' experiences would be better. What can I give you to start working out where to file bugs? Bearing in mind, I'm not planning to do lots of rebuilds on this box after today :)
<tomreyn> JonTheNiceGuy: the RAID controller should be supported out of the box. maybe it is set to some bad mode in the firmware configuration. you can post a kernel log from a tty using: dmesg | nc termbin.com 9999
<tomreyn> JonTheNiceGuy: note that there is also an alternate server installer (which is based on the old debian-installer) available. it is better suited for complex configurations.
<tomreyn> https://wiki.debian.org/LinuxRaidForAdmins#megaraid_sas and https://hwraid.le-vert.net/wiki/LSIMegaRAIDSAS may help you manage the RAID controller properly.
<JonTheNiceGuy> G7vri95129
<tomreyn> JonTheNiceGuy: you'll want a new password
<JonTheNiceGuy> Fortunately it's only half a password..... However.... Yes.
<JonTheNiceGuy> http://termbin.com/7x24
<JonTheNiceGuy> Don't know if it's relevant, but I was using UEFI boot too?
<tomreyn> it may be if you had secure boot on.
<JonTheNiceGuy> Hmm, I might try without UEFI and see what happens next :
<JonTheNiceGuy> :)
<tomreyn> JonTheNiceGuy: you have a very old mainboard firmware installed there, too
<tomreyn> http://support.ts.fujitsu.com/IndexDownload.asp?Softwareguid=9BDBC15C-F1C1-467F-B1BE-94C787AF9501
<tomreyn> R1.19.0 (06/06/2018), you have R1.3.0 (12/04/2013)
<tomreyn> personally, i'd keep uefi booting
<JonTheNiceGuy> Looks like I'm running fwupd on there too then :)
<tomreyn> [    1.904433] megaraid_sas 0000:01:00.0: Failed to init firmware
<tomreyn> [    1.904685] ------------[ cut here ]------------
<tomreyn> [    1.904686] Trying to free already-free IRQ 34
<tomreyn> that's from your log
<tomreyn> firmware update may help
<JonTheNiceGuy> Ugh. OK. Fab thanks.
<tomreyn> JonTheNiceGuy: there are separate RAID Controller firmware downloads provided as disk images for this system (two different ones, not sure which one you need), to be booted from USB-attached storage to carry out the controller firmware upgrade.
<tomreyn> http://support.ts.fujitsu.com/IndexDownload.asp?Softwareguid=3E48AED0-4A10-4354-9E83-78948AB08B62 and http://support.ts.fujitsu.com/IndexDownload.asp?Softwareguid=DBE58F8E-46EE-449D-A678-50090680CD32 (read "important information")
<Delvien> running ubuntu-server as a guest in KVM, i resized the disk and verified the new size on the host, but ubuntu is not picking up the change. Not running LVM.
<qman__> run partprobe
<andol> Delvien: On the guest, did you check the size of the block device or the size of the file system?
<Delvien> Hmm, i was mistaken, it does show the correct value (checking with lsblk) but running resize2fs states nothing to do.. odd
<qman__> you have to increase the size of the partition
<qman__> the process works like this: 1. Expand virtual disk 2. Run partprobe for the kernel to detect the new size 3. extend the partition
<qman__> 4. resize2fs
<Delvien> The filesystem is already 2620672 (4k) blocks long.  Nothing to do!
<qman__> I usually use fdisk/gdisk to do it
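qman__'s four steps as one sketch. Everything here needs root and real device names (guest.qcow2 and /dev/vda are placeholders), so the commands are displayed rather than executed; growpart comes from cloud-guest-utils, and the fdisk route (delete + recreate the partition at the same start sector) works too:

```shell
cat <<'EOF'
1. (host)  qemu-img resize guest.qcow2 +10G   # grow the virtual disk (VM off; or virsh blockresize online)
2. (guest) partprobe                          # let the kernel re-read the partition table
3. (guest) growpart /dev/vda 1                # extend partition 1 to fill the disk
4. (guest) resize2fs /dev/vda1                # grow ext4; "Nothing to do!" means step 3 was skipped
EOF
```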
<coreycb> jamespage: sahid is going to handle the cinder and keystone rc2's for stein
<geodb27> Hi again ! I've ended up re-creating a vm with the latest ubuntu 18.04.02 server iso, and at install set up a fixed ip address for my ens160 network interface. However, after reboot, the interface is not up and no ip is set. What did I do wrong ?
<teward> is your VM configured to connect the network at power on?  Is the VM's settings set to have the virtual nic on it "Connected"?
<geodb27> Yes it is.
<Ool> how did you set the fixed IP address ? in the netplan config file ?
<geodb27> It seems to have been filled in /etc/netplan/50-cloud-init.yaml by the installer. I haven't modified anything by hand
<geodb27> Well, these problems killed me for today. Thanks Ool and teward for your kind help. I'll see to this tomorrow...
<tomreyn> check whether netplan created a matching systemd-networkd configuration file with the static ip address in it as a result
<tomreyn> it should be in /etc/systemd/network, i think
<tomreyn> or in /run/systemd/network rather
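tomreyn's check, sketched so it runs against a scratch directory (drop --root-dir to inspect the real system; the interface name and addresses are placeholders, and the netplan CLI is assumed installed, as it is on 18.04 server):

```shell
root=$(mktemp -d)
mkdir -p "$root/etc/netplan"
cat > "$root/etc/netplan/01-static.yaml" <<'EOF'
network:
  version: 2
  ethernets:
    ens160:
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
EOF
if command -v netplan >/dev/null 2>&1; then
    # generate without applying, then look for the resulting networkd unit
    netplan generate --root-dir "$root" 2>/dev/null
    grep -r 'Address=192.168.1.50/24' "$root/run/systemd/network" \
        && echo "networkd unit generated" || echo "no unit generated -- check the yaml"
else
    echo "netplan not installed; inspect /run/systemd/network on the server instead"
fi
```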
<jamespage> coreycb: +1
<DK2> is there a way to log the complete console for an ssh session?
<leftyfb> DK2: screen can do it
<tomreyn> auditd
<sdeziel> "script" can also do it but only auditd is the secure way to do it
#ubuntu-server 2019-04-05
<Deihmos> is it normal for ubuntu server to have a new update every single day?
<sarnold> all the updates are mailed to eg bionic-changes or xenial-changes: https://lists.ubuntu.com/archives/bionic-changes/
<sarnold> you could take a look at them to figure out how often it happens
<sarnold> security updates are almost always monday through thursday, but we may release emergency updates outside of that window; and your server might receive them some time after we release them, based on your settings...
<sarnold> SRU updates may come out any day
<RoyK> Deihmos: frequent updates are good - just install unattended-upgrades and let it do its business
<lordievader> Good morning
<ygk_12345> hi all good morning
<ygk_12345> can someone look into this please
<ygk_12345> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1823146
<ubottu> Launchpad bug 1823146 in linux (Ubuntu) "network namespaces error" [Undecided,Incomplete]
<ygk_12345> is anyone familiar with network namespaces ?
<ygk_12345> is anyone here familiar with network namespaces
<bhuddah> ygk_12345: can you replicate the bug in a lab setting?
<ygk_12345> bhuddah: i have 4 ubuntu 18 servers which are having this problem but each time a reboot seems to fix it
<bhuddah> ygk_12345: so at the moment all you know is that it happens "randomly"?
<ygk_12345> bhuddah: yes and the command seems to never complete and hangs
<bhuddah> ygk_12345: and there are several different commands that hang by that time?
<ygk_12345> bhuddah: no only this command
<bhuddah> ygk_12345: you wrote apport hangs too...
<ygk_12345> bhuddah: apport doesn't hang but goes on forever . it is not related to the original issue
<bhuddah> ygk_12345: is that the first namespace you're trying to add?
<ygk_12345> bhuddah: no. I already have some created earlier
<bhuddah> ygk_12345: no idea how to debug that sort of intermittent error there. sorry :/
<ygk_12345> can anyone help me with my issue please
<bhuddah> ygk_12345: maybe try in #netfilter maybe they have an idea.
<tomreyn> what i recommended yesterday was to clone one of those untouchable production systems which run with outdated kernel versions and update the kernel on the cloned system, to see whether it still happens then.
<tomreyn> the kernel version ygk_12345 is using on those production systems is not just not the latest but also specifically known to break userspace.
<lotuspsychje> same here, i recommended you also to get your system up to date first ygk_12345
<ygk_12345> lotuspsychje: but elsewhere we have another system with the same kernel and it is working fine
<bhuddah> ygk_12345: what is your primary goal? fixing the machine or understanding the bug?
<ygk_12345> bhuddah: both so that it doesn't occur again in the future
<bhuddah> ygk_12345: OR
<lotuspsychje> i really dont get why admins keep back important updates/kernels in production..
<bhuddah> in my experience it comes from bad development practices mostly. arcane software with outdated software requirements.
<blackflow> lotuspsychje: one day when you'll have a fleet of servers and any minute of downtime means clients and management yelling at you and money lost, you'll know :)   though, honestly, you keep them back only temporarily until the breakage is solved or it's fully tested.
<blackflow> also, I read kernel changelogs and do NOT upgrade the kernel every time there's one. there's no need to reboot for fixes and patches that don't concern the system it's running on and there's a lot of those in Ubuntu.
<ygk_12345> lotuspsychje: bhuddah tomreyn actually when you have a running critical system, you can't risk upgrading every time there is an update as it may break the software application. And that's a pain.
<bhuddah> ygk_12345: you have a development system, a test system, an integration system and then a live system. of course you can risk doing upgrades.
<tomreyn> ygk_12345: i'm aware. i'm also aware that you don't run critical systems on a known bad foundation.
<tomreyn> and as bhuddah says, surely you can and should be able to upgrade your dev / test and maybe integration systems
<blackflow> what's "integration systems".... some new marketing/devops speak? like "serverless"?
<tomreyn> staging
<blackflow> how is it different from testing?
<bhuddah> it's the development system for the integrators... sort of.
<blackflow> what "integrators"?
<bhuddah> the people who operate the deployment and so on?
<blackflow> and again how's that different from testing.
<blackflow> you have prod, and you clone prod for testing purposes (you test changes on systems that resemble production as much as possible, but aren't prod themselves). so what else do you do?
<blackflow> I guess I get triggered by devops circlejerk. "serverless integration devops meta-testing bull"....... GAH
<ygk_12345> any kernel gurus here ?
<ygk_12345> at least they can help me I believe
<bhuddah> blackflow: i think that's even way older than devops. we've been doing that for ages. separation of different systems for different purposes...
<bhuddah> ygk_12345: reproduce the error on a test system, please. don't make all the people nervous.
<blackflow> bhuddah: I think you're confusing unit testing terminology (levels of unit -> integration -> functional)  with deployment strategies.
<bhuddah> blackflow: i bet it's a translation problem. i'm not english native and sometimes we call stuff by weird names.
<ygk_12345> bhuddah: :)
<ygk_12345> bhuddah: the thing here is that it worked earlier on these systems but suddenly we started seeing these issues. so testing on a similar system wont expose the real root cause
<ygk_12345> bhuddah: it will go well on the test system also. it is something about these particular defective systems that is causing this issue
<bhuddah> ygk_12345: then clone one of the systems. i don't know. that's all part of bug reproduction...
<ygk_12345> bhuddah: so we cant really replicate the issue
<bhuddah> hunting unicorns...
<ygk_12345> bhuddah: tomreyn how to clone a running system ?
<bhuddah> tell me you have backups...
<bhuddah> please.
<ygk_12345> bhuddah: we dont have any backups as of now
<bhuddah> aah. the "important" "production" machine without backups.
<bhuddah> i think you can just rsync the whole machine btw. or rsync an lvm snapshot.
<blackflow> lol!
<blackflow> ygk_12345: "testing on a similar system wont expose" -- then they're not similar enough, that's the whole point here, and hinted at by bhuddah with the suggestion to clone production.
<blackflow> but see, you should have a clone of prod to begin with, and if you don't, then make one asap and always run changes through that _first_. and indeed, your prod is only as valuable as the effort you put into backups.
<lotuspsychje> blackflow: i fully understand yelling customers isnt pleasing, but keeping back older kernels/security updates can result in even more yelling customers right?
<lotuspsychje> and as tomreyn suggested, i wouldnt stay long on his current kernel
<blackflow> lotuspsychje: depends on what it is. sometimes a bug fixed can nuke your prod. happened to us twice. once when PHP fixed a long standing bug in class constructors (param precedence), we had to change the apps before applying the PHP update (which was also a security fix) because the apps were written for the working, though bugged PHP API.
<blackflow> the other time ubuntu kernel had an apparmor security fix that broke boot, two years or so ago. held that off until next patch level that fixed it again.
<blackflow> so yeah there's no clear answer but one: test on as similar systems as possible, preferably down to every screw and nut in the chassis :)
<lotuspsychje> yeah test systems are wise
<blackflow> right. just saying why there are quite valid reasons to hold off important updates. but yeah those are temporary situations one should aim to resolve asap.
<lotuspsychje> asap i understand blackflow
<lotuspsychje> blackflow: but ygk_12345 said yesterday he was still on that kernel because he was running..something
<lotuspsychje> ygk_12345	lotuspsychje: we have installed openstack on it. so cant take that risk
<blackflow> lotuspsychje: yeah. I was just replying to "why admins keep back important updates/kernels in production". there be reasons. but also yes, one should look into fixing that asap.
<lotuspsychje> allrighty :p
<coreycb> sahid: cinder rc2 uploaded, thanks
<sahid> coreycb: ack
<J_Darnley> Why does ubuntu 18.04 not have an ip address despite me leaving dhcp when I installed it.
<J_Darnley> The interface today has no ip despite it getting one yesterday
<J_Darnley> Okay, what is this cloud-init thing?
<tomreyn> it's a utility to make cloud deployments easier, i think. you can uninstall it if you dont think you need it.
<J_Darnley> Is there a fallback for network configuration if I choose to do that?  Or do I need to install one?
<J_Darnley> Hm, I'd better search the installed packages
<tomreyn> ubuntu server 18.04 uses systemd-networkd for network configuration by default. there is also netplan, which can generate configurations for both systemd-networkd and network-manager (which is used on desktops by default).
<J_Darnley> systemd is fine, I can work it, just about
<Holiday> tomreyn: here
<tomreyn> Holiday: hi, please repost your questions here
<Holiday> Has anyone else notice any issues with networking when apt updates systemd, systemd-sysv, libsystemd0? We have 18.04 deployed but have to use ifupdown because of our vCenter version, and things seem to work fine until apt updates those libraries. When it does, networking is lost (nic doesn't reflect the IPs)
<Holiday>  restart networking and things appear. On one system as a test, I removed netplan.io and it appears it's the only one this morning that DIDN'T lose networking.. so I'm wondering if there's an issue with systemd updates and netplan
<Holiday> this is a VM, ubuntu server 18.04 LTS, and should be using whatever the default is for the network management iirc
<Holiday> (minus the fact I had to put ifupdown on it)
<tomreyn> do you see the connectivity loss reflected in logs?
<Holiday> I do see the loss reflected and maybe it is a "who controls" issue although it only seems to show the one interface being mentioned (lo)
<tomreyn> can you share the relevant logs of such an event on a pastebin?
<Holiday> systemd stops the networking, starts it again, stops the name resolution, and then barfs with systemd-networkd lo link is not managed by us and that's the last of it
<Holiday> sure
<tomreyn> lo would be the loopback device.
<Holiday> yeah, never mentions the actual ether
<tomreyn> the loopback device becoming unavailable would impact name resolution via the systemd-resolved dns cache listening on 127.0.0.53:53
<tomreyn> my (limited?) understanding is that netplan.io is never run automatically, so whether it is installed or not should not impact connectivity.
<Holiday> tomreyn: https://pastebin.com/ekQwFzGQ
<Holiday> always seems to be when systemd is updated
<Holiday> now maybe it's because I had to do the ifupdown and renamed the interface to make the whole "vCenter version configures its IP" deal work, which it does.. and otherwise everything seems happy
<Holiday> it's literally just after the apt update runs
<cyphermox> Holiday: tomreyn: netplan does run automatically at very early boot to take the yaml and convert it into the appropriate config for NetworkManager or systemd-networkd
<cyphermox> that said, if there is no YAML, it does nothing
<Holiday> @cyphermox hrm.. wonder if it's because I left the dhcp4: yes in the 01-netplan.yml config file.. basically it's the default with just ifupdown installed. Although, like I said, works on boot, ifdown && ifup etc all resolve the issue.. it's just when apt updates the systemd
<cyphermox> if you have dhcp4: yes for an interface in netplan yaml, then yes it could possibly do something if systemd is restarted
<cyphermox> you shouldn't have the same interface configured in both ifupdown and netplan, that will cause conflicts
<Holiday> I'll have to remove that then. I just left it as the Ubuntu netplan.io docs just say if you need/want to use ifupdown, just installing the package is enough
<tomreyn> thanks c-mox, i wasn't aware of that.
<cyphermox> to be clear, at boot, what happens is the same as running 'netplan generate'; it does nothing but write files, no restarting of services
<Ool> cyphermox: just to know, when you reboot does it do a netplan apply ?
<cyphermox> no
<cyphermox> at boot time, it's always just running the generator; only writing files to /run
<cyphermox> unless I misunderstood your question
<cyphermox> 'netplan apply' and 'netplan try' are just for users to run when you change the config and want to see the result immediately
<Ool> but if you change the config and reboot without doing netplan apply, is the new config applied at the next boot ?
<Ool> cyphermox: sorry, I do my best in english :)
<cyphermox> yes, if you reboot the config is applied
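A minimal netplan file of the kind being discussed might look like the one below; the demo writes it to a temp dir, while on a real server it would live in /etc/netplan/ and be rendered by the boot-time generator cyphermox describes (the interface name ens160 is an assumption):

```shell
# Demo: write a minimal netplan config to a temp dir; on a real system this
# would be e.g. /etc/netplan/01-netcfg.yaml, rendered for systemd-networkd.
dir=$(mktemp -d)
cat > "$dir/01-netcfg.yaml" <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    ens160:          # interface name is an assumption
      dhcp4: true
EOF
cat "$dir/01-netcfg.yaml"
```

After editing the real file, `netplan try` applies the change with automatic rollback and `netplan apply` makes it immediate; and per the conflict noted above, an interface managed here should not also be configured in ifupdown.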
<maxel> Hi all, I'm not sure how to begin troubleshooting this situation, but I've been attempting to upgrade an ubuntu server vm from 16.04 to 18.04 lts. every time I've done it, the biggest things I've noticed that break are the ssh server, and my mounting of an xfs file system
<maxel> am I doing something wrong with my upgrade, or is this sort of thing expected?
<rbasak> maxel: local customisations may break if the surrounding infrastructure has changed between releases. There's no avoiding that unfortunately.
<rbasak> maxel: but for default or simple cases to break is a bug.
<maxel> I had the weirdest problem with the ssh server, where ufw seemed to introduce a rule to disable ssh, once I allowed it again I had a problem where I would log in, and it would immediately kill the session
<rbasak> Can you reproduce it on a fresh 16.04 installation that is then upgraded to 18.04?
<maxel> I have not gone to those lengths yet, no
<blackflow> maxel: are your ssh keys DSA or RSA? iirc between xenial and bionic OpenSSH has stopped supporting DSA
<maxel> I wasn't even using ssh keys, just user:pass login
<maxel> although, when upgrading I opted to use the maintainers configuration, I'm not sure if I should be opting to keep my existing config
<blackflow> maxel: yes if you changed it from default
<blackflow> (which would be the case if you used keys)
<blackflow> also, if that's some hosted server, companies usually modify the ssh config to allow root, they don't pre-suppose your non-root admin account name. mostly. if that's the case, you'll have to re-enable root login
<blackflow> --- if you're using root, that is.
<maxel> I was not using root to login, but a user that is in wheel
<maxel> and this is a self hosted server, I have an esxi server running vms, this is my ubuntu server vm
<blackflow> maxel: well, did you change the /etc/ssh/sshd_config in any way?
<maxel> so nothing is enforced as far as policy
<maxel> AFAIK I only changed the default port, and I checked the new port after upgrading
<blackflow> maxel: are you sure sshd is listening on that port? if you changed it back to your custom port, did you restart ssh.service?
<maxel> I should try an upgrade again, I keep going back to my snapshot of this vm after running into so many problems after an upgrade
<blackflow> or reload, that should suffice
<maxel> yeah, I checked with a netstat -tlpn to see it's listening on the correct port
<blackflow> maxel: journalctl -eu ssh.service  might have clues
<maxel> so I'm going back to my pre-upgrade state, I'm going to opt to keep my custom configs for any service. there are a couple prompts when upgrading asking which I want
<blackflow> ideally you should always keep local config modifications and then inspect them later to see if anything needs to change.
<maxel> ok, and I also noticed when upgrading it has an issue because I have postgresql running and it does not seem to like that
<maxel> should I just shut that service down before upgrading?
<tomreyn> no, you should just keep services running.
<tomreyn> but "it has an issue because I have postgresql running" is really not very easy to respond to because it's so unspecific.
<maxel> I'll post on here exactly what the prompt during upgrade is
<tomreyn> good plan
<maxel> it's not an issue, it just asks because, I think, some libraries are in use while upgrading
<maxel> so I'm getting back to a clean, upgraded state in my 16.04 install
<tomreyn> so it's just a prompt, not an issue?
<maxel> right, it just made me paranoid because the description sounded like it couldn't perform an upgrade on those libraries since they were being used by the service
<tomreyn> so then there's no need to post it here unless something actually goes wrong.
<blackflow> maxel: over the years I found it best to boot into single user + network mode and upgrade with every service shut down.
<maxel> ok, so I am on 16.04.6, 0 packages can be updated, 0 security updates. 18.04.2 LTS is available
<maxel> I'll make a snapshot here before I do the release upgrade again
<tomreyn> run     ubuntu-support-status --show-unsupported
<maxel> http://paste.ubuntu.com/p/HbZXqWvmYJ/
<tomreyn> those "unsupported" ones are not supported by canonical (but may or may not be supported by the community or 3rd parties). they have an upgrade path with your current apt sources.
<tomreyn> "no longer downloadable" ones are not 'backed by' an apt source, apt does not know how they got there, or what to do with them, really. they are 'foreign'.
<maxel> I don't think I'm using them anymore. sounds like you think I should remove those
<tomreyn> those are not supported by canonical, and they have no upgrade path (security patches), unless you maintain those differently.
<tomreyn> i would not want software installed which has no way to get security patches.
<maxel> were you referring to the no longer supported only, or also the unsupported packages
<maxel> and also, is there an easy way to clean up/remove the unsupported packages?
<tomreyn> anything i wrote after i mentioned "no longer downloadable" referred to packages which are in the "no longer downloadable" section of the output you posted.
<tomreyn> https://github.com/tomreyn/scripts#foreign_packages outputs the "no longer downloadable" ones in a form that is easier to parse. it also lists packages which are in versions not available in the currently active apt repositories.
<tomreyn> it does not discuss "unsupported" packages, though.
<maxel> alright, hopefully this helps, removing some packages before upgrading
<tomreyn> you could temporarily disable all apt sources which are not supported by Canonical, then run the foreign_packages script; this would cause those packages ubuntu-support-status considers "unsupported" to be listed as "no longer downloadable"
<tomreyn> i don't expect the packages ubuntu-support-status lists under "Unsupported" to cause problems during the release upgrade.
<tomreyn> at least not those that come from ubuntu apt sources
<maxel> ok, cleaned everything up and am trying a release upgrade again
<maxel> first thing coming up is "some third party entries in your sources.list were disabled. you can re-enable them after the upgrade with the 'software-properties' tool or your package manager."
<maxel> I just don't know what that is referring to or what could be affected
<maxel> 3 packages no longer supported by canonical
<tomreyn> some of your packages are clearly from 3rd party repositories (apt-cache policy lists active apt repositories). personally, i would remove any packages from 3rd party repositories before upgrading.
<tomreyn> (or test that they don't cause problems during the upgrade)
<maxel> ok, so this was the postgres issue I was talking about. I've got a prompt about libc6. it says "running services and programs that are using NSS need to be restarted, otherwise they might not be able to do lookup or authentication any more. The installation process is able to restart some services (such as ssh or telnetd), but other programs cannot be restarted automatically. One such program that needs manual stopping and restart
<maxel> after the glibc upgrade by yourself is xdm - because automatic restart might disconnect your active X11 sessions."
<maxel> and it found postgresql as the service
<maxel> I should be fine to upgrade glibc now?
<maxel> also, I don't know how I would test packages don't cause problems during the upgrade
<tomreyn> you run a graphical login manager on a server?
<maxel> not that I'm aware of
<maxel> if something is, I don't actively use it
<maxel> I only use ssh to connect to this server
<tomreyn> xdm is that
<tomreyn> this message basically tells you that if you proceed, you will need to ensure all services are restarted in the end. but since you will be rebooting anyways, this is nothing to worry about.
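The reason for that prompt is that long-running processes keep the old libc mapped; until restarted, such processes show " (deleted)" entries in /proc/&lt;pid&gt;/maps. A hedged sketch of the check on a synthetic maps line (on a real system you would scan /proc/*/maps as root, or let a tool like needrestart do it):

```shell
# Demo on a synthetic /proc/<pid>/maps line: a mapping whose backing file
# was replaced on disk is flagged " (deleted)" by the kernel.
maps_line='7f1c000-7f1d000 r-xp 00000000 fd:01 131 /lib/x86_64-linux-gnu/libc.so.6 (deleted)'
case "$maps_line" in
  *' (deleted)') verdict="restart needed" ;;
  *)             verdict="ok" ;;
esac
echo "$verdict"
```

As tomreyn says, a reboot after the upgrade makes the per-service check moot.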
<maxel> ok
<tomreyn> but you really should review the packages you have installed if this is a (clone of a) production system
<maxel> it is a production system only for myself. it's just a samba share host and it runs miscellaneous services I use
<tomreyn> i see
<maxel> so I appreciate the help!
<tomreyn> you're welcome.
<maxel> this server had turned into something I'm terrified to touch because I do not want to mess up that shared drive, but I really need to get more comfortable with ubuntu
<maxel> alright so now I'm to choosing which config to use. sshd_config is up first, and I believe you said I should be able to use the existing config
<tomreyn> i don't think i said so, no. but if changes are needed, it should be brought up during this phase.
<maxel> ok, I'm giving a shot at retaining existing configs for sshd and nginx. now I'm removing packages
<blackflow> question... are service configs coming from Debian preferred, or is Ubuntu willing to change them for the better? Eg. munin-node is configured to run in the background, as Type=forking. I presume because Debian wants to keep the illusion of init freedom. Ubuntu doesn't. Would a patch be accepted to make it Type=simple?
<JanC> did you ask the Debian maintainer?
<blackflow> nope. But I know what their answer will be. "we support sysv which requires type=forking etc..."
<blackflow> since Ubuntu does not need to support sysv (I assume?!), I suppose it could have proper service unit files, properly confined and run in the most optimal fashion, but that deviates from Debian.
<blackflow> I'm running munin that way, and am testing confinement, and I believe those settings are generally useful in Ubuntu's Munin.
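The kind of change blackflow describes could be tried locally as a systemd drop-in (via `systemctl edit munin-node`) before proposing a packaging patch; this is only a sketch, and the `--foreground`-style flag is an assumption about munin-node's options:

```ini
# /etc/systemd/system/munin-node.service.d/override.conf (sketch)
[Service]
Type=simple
# clear the packaged ExecStart, then run the daemon in the foreground
# (the --foreground flag is an assumption about munin-node)
ExecStart=
ExecStart=/usr/sbin/munin-node --foreground
```

Followed by `systemctl daemon-reload && systemctl restart munin-node`.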
<sarnold> my guess is such a change would be more likely to be accepted if we already have a delta from debian for a given package
<JanC> Debian might be open for a solution that adapts to the init system if that's possible
<blackflow> I even have some apparmor profiles but those are extremely hard to make generally useful because of so many plugin options
<blackflow> sarnold: I've got maintainership experience with other systems, but .deb is outright scary and I wouldn't know where to begin checking that.
<sarnold> blackflow: yes
<blackflow> JanC: I don't think they are. I already asked for some other services that suffer from the same problem: Type=forking even though they support being run as simple or even, in some cases, as notify
<sarnold> blackflow: you can see the delta that ubuntu carries from debian on https://patches.ubuntu.com/
<JanC> I assume it probably depends on who the Debian maintainer is...
<blackflow> sarnold: yea there's a tiny patch fixing permissions
<blackflow> JanC: so, start by asking the Debian maintainer, and if they refuse, try to fix it downstream in Ubuntu?
<blackflow> I mean... am  I correct in assuming that Ubuntu does not _need_ to support other inits? systemd is the only officially supported?
<sarnold> yes
<blackflow> because these changes I have in mind would not be possible otherwise.
<JanC> if it can be fixed in Debian, I'm sure the Ubuntu developers prefer that
<blackflow> I can try that, see what happens.
<blackflow> sarnold: oh btw, that ZFS wiki... was I talking to you about it? can't remember... anyway, folks from #zfsonlinux said they had the wiki access and would fix it, but I don't think they did...
<maxel> so I just finished my upgrade to 18.04 lts. my ssh service is no longer running and I'm not sure why the upgrade stopped the service
<blackflow> sarnold: oh YOU removed those..... thanks! :)
<sarnold> blackflow: indeed, someone said they'd get to it, but a few days later I noticed it wasn't done, so I just removed the ones that were outright ridiculous
<sarnold> blackflow: I think ptx0 preferred them there so he had something to laugh at :)
<maxel> ok, I've recreated the situation I do not understand. I started up the ssh service, and I am able to connect with putty, but as soon as I type in my password it kills the connection
<blackflow> sarnold: of course he would :)
<blackflow> sarnold: it was me who asked you about it, but we were having a discussion in #zfsonlinux about that, and then later someone said they had the access (as I myself don't), and they would fix it, so I left it at that.
<sarnold> blackflow: aha :) it was long enough ago that I've forgotten the details.. and long since out of scrollback :)
<blackflow> then ptx0 kicked me out of the chan because I said Oracle vs Google was about Dalvik, and I never bothered to return in that cesspool of a chan, and forgot about the wiki.
<sarnold> rofl
<blackflow> banned me for 24 hours actually :)
<sarnold> that sounds like him. yeah.
<blackflow> yeah :)
<blackflow> never actually managed to get any meaningful help from #zfsonlinux, so there's no loss. :)
<blackflow> maxel: do you have server side access? the service journal entries might hold a hint
<blackflow> journalctl -eu ssh.service
<maxel> yeah, I found the journalctl error, got a PAM access denied, tweaked the sshd_config to not use PAM and it's working
<maxel> I'm not sure why it changed through the upgrade though
<Mead> hello, I just installed 18.04-server on a system, strange thing is that it gets an ipv6 address from my gateway but not an ipv4.
<blackflow> maxel: UsePAM is default, so if you didn't have it before, it must've been your change
<maxel> and also it is not automatically starting the ssh service
<blackflow> maxel: is it enabled?    systemctl status ssh.service
<maxel> loaded, but inactive after a restart
<maxel> I started it up manually last boot
<blackflow> oh wait, I think it's now socket activated?
<blackflow> as in, won't start until first attempt on the port
<maxel> it is denying my attempt to connect, well not denying
<maxel> but my connection doesn't work
<blackflow> until you manually start ssh.service?
<maxel> right
<maxel> it might be because my server is starting in emergency mode
<maxel> I've never had it in that state before though
<blackflow> just checked, there's ssh.socket but it's not require'd by ssh.service, so that's not being socket activated, I guess. look through the logs; if it's enabled but doesn't start, is something preventing it?
<maxel> hmm, ok. and the other problem I'm wrestling with is the mounting of a filesystem failing. the log just says "wrong fs type, bad option, bad superblock" to a raid disk that is formatted in XFS that worked in my 16.04 install
<maxel> I see the drive listed in both fdisk and parted
<blackflow> maxel: what/how are you trying to mount exactly?
<blackflow> can you pastebin the commands you run, and their outputs, and/or fstab itself?
<maxel> here is my fstab: https://pastebin.com/TjLV499M
<maxel> trying to remember what I can look at to see the actual command being run when mounting
<maxel> internet is being spotty
<blackflow> maxel: is that hardware raid, or software/mdadm? if the latter, you must mount the appropriate /dev/mdX
<maxel> it is raid, and mounted through a raid card in the server and then shared through esxi to the vm
<blackflow> so the VM has no idea it's raid?
<maxel> I don't believe so
<maxel> I'm not sure how to validate that
<blackflow> btw what do you mean by "mounted through a raid card?" fstab needs a block device
<maxel> I just mean to set up the raid I had to install a raid controller in the computer, and connect my 4 drives to that
<blackflow> I don't know what "shared through esxi" means but if it's anything like "sharing an external directory" into the VM, like host-guest dir sharing, that's not gonna work
<maxel> and then I created the raid volume through an interface to that raid card
<blackflow> and then you formatted the resulting RAID _device_ with xfs?
<maxel> I'm trying to describe that esxi (the hypervisor OS) sees the volume fine, and that I had to basically share the resources with this vm
<maxel> I'm sorry I am so unhelpful here, I can't even remember how I set up the raid volume anymore. I believe the raid card formatted the volume and it happened to use xfs
<blackflow> "share" how, I have no idea what vmware does there. for that fstab line, /dev/sda needs to be a block device. is it? does `blkid` run in the VM show it?
<blackflow> maxel: that's the thing with hardware raid. it's proprietary who-knows-what-and-good-luck-if-you-have-issues kind of nonsense.
<maxel> ah, and I needed hardware raid because I wanted raid 5, and that can't be done with my mobo
<maxel> blkid seems to see the volume yes
<blackflow> also, is it just the fact that you're maybe missing a partition on that device? it's a bit unusual the entire drive is being mounted like that. usually there's a GPT or MBR-based partition table on them
<blackflow> what does `parted /dev/sda unit mib print` (run as root)   show?
<maxel> certainly possible, every memory of how I initially set this up has escaped me, so I'm struggling with some of these questions
<blackflow> maxel: mdadm doesn't need motherboard support. "fakeraid" does, but that's not good either.
<maxel>  1      0.00MiB  7680000MiB  7680000MiB  xfs
<mdeslaur> kstenerud: how far did you get with php 7.2.16? looks like 7.2.17 is out with some security fixes: bug 1823386
<ubottu> bug 1823386 in php7.2 (Ubuntu) "[MRE] Please update to latest upstream release 7.2.17 & 7.3.4" [Undecided,New] https://launchpad.net/bugs/1823386
<blackflow> maxel: ah so you need /dev/sda1 in that fstab line
<blackflow> and you mount /dev/sda1 and not /dev/sda
<maxel> is that a normal thing to change in a release upgrade?
<blackflow> change what exactly?
<maxel> the sda volume name or whatever that is
<blackflow> I don't think anything changed there.
<maxel> I'm not sure if you saw what I was doing before, but this volume was working 2 hours ago in ubuntu 16.04 before I ran the release upgrade
<blackflow> unless.... can you actually please pastebin the full output of that parted command? I'm interested in the partition table shown
<maxel> pastebinit is not working for some reason
<blackflow> maxel: `parted /dev/sda unit mib print | nc termbin.com 9999`
<maxel> https://pastebin.com/5SG2vqsd
<blackflow> sudo parted if you're not root
<friendlyguy> hi there! i just upgraded a server from ubuntu 1604 to 1804, but there were some errors during the upgrade
<maxel> friendlyguy, I'm going through the same process right now
<friendlyguy> now dhcp isnt working any more, had to configure the interface manually
<friendlyguy> there is an error with systemd-shim
<friendlyguy> it tries to override a file and comes back with an error that it's not allowed
<blackflow> maxel: I see, so that's "loop", no partition table there. that's a bit of a weird setup. dunno if supported by xfs actually. could be that's what changed.
<maxel> hmmm, well I set something up that is over my head apparently
<blackflow> maxel: or you know what, this could all be red herring. if you didn't format it as xfs and are expecting the raid system to have done so, just reformat it yourself
<friendlyguy> /usr/share/dbus-1/system-services/org.freedesktop.systemd1.service with /usr/share/dbus-1/system-services/org.freedesktop.systemd1.service.systemd
<blackflow> maxel: but if you DID have data there, then DON'T
<maxel> I do have data that I want to keep
<maxel> this is where I'm confused because however it is set up, it works pre-upgrade
<maxel> I can go back to a vm snapshot and see everything fine
<blackflow> maxel: try to mount it without specifying the fs type.    `mount /dev/sda /mnt`   and see if that works
<maxel> but when I come back to the 18.04 upgraded snapshot there are problems
<maxel> how would I even validate that worked?
<blackflow> maxel: newer kernel, it's possible xfs changed some expectations, I wouldn't know sorry.
<blackflow> maxel: if it didn't complain and there be files on /mnt, then it worked.
<maxel> huh, yeah, looks like it worked
<maxel> how do I unmount? I want to change it back to the original name I had
<friendlyguy> ah... looks like renaming that file did the trick. now apt-get install -f continues
<friendlyguy> lets see how that works out :)
<blackflow> maxel: umount /dev/sda
<blackflow> maxel: maybe it's something about those options, lemme see
<friendlyguy> yay
<maxel> hmm, umount says target is busy
<teward> are you currently `cd`'d into wherever it's mounted?
<maxel> I was not
<maxel> just restarting the server now
<blackflow> sudo umount it
<blackflow> weirdly, it's very difficult to find xfs manpage :)
<maxel> and ssh still isn't starting up automatically, but one problem at a time
<maxel> here's the process and options it is attempting: Process: 1042 ExecMount=/bin/mount /dev/sda /media/BMFD -t xfs -o defaults,users,barrier=0 (code=exited, status=32)
<maxel> so when I run mount with no arguments it seems to work
<blackflow> maxel: well, I don't use xfs, but that barrier=0 option doesn't seem right. the manpage says it's a boolean, so either barrier or nobarrier should be specified
<marz> When Debian goes stable there are no version bumps to the software. Is it the same with Ubuntu ?
<blackflow> maxel: what process is that, can you pastebin the whole unit?
<maxel> I can't even remember where those arguments are defined
<blackflow> marz: yes with exceptions like SRU, and some desktopy things like FireFox being latest
<blackflow> maxel: in the xfs(5) manpage
<maxel> you may be on to something, if the options changed then this could explain all my problems
<sarnold> marz: it depends. firefox, chromium-browser, mysql, mariadb, etc get bumped.. the browsers to new versions as they are released, databases to the new minor releases..
<bipul> I am stuck while giving a chroot jail configuration at /etc/schroot/schroot.conf for more than one chroot jail.
<blackflow> (which needed xfsprogs to be installed, and funnily google didn't return an "xfs manpage" but asked if I meant ZFS)
<bipul> Does anyone know?
<blackflow> maxel: that wouldn't be unusual across kernel versions
<sarnold> marz: we've had to do new version bumps for samba, in the past, but that was painful all around and we really hope to never do that again.
<sarnold> marz: but yes we do *vastly* prefer to apply specific security patches
<marz> Iâve seen updates to packages on the server that arenât security updates
<maxel> blackflow, yeah, that has to be it, I just ran the mount command manually and it worked
<blackflow> maxel: but try the command with all the options you specified in that unit
<maxel> right, I just changed barrier=0 to just barrier and it mounted
<blackflow> maxel: eh, barrier=0 sounds more like nobarrier
<maxel> now I need to tweak the service so it works correctly
<maxel> good point
<blackflow> not sure it'd be wise to change that tho'
<blackflow> I mean, barriers = good, on systems with no data checksums.
<blackflow> slower, but more integrity there.
<maxel> I can try both, I had some issues mounting that drive as a samba share and I don't remember what I had to do to get it to work
<blackflow> I'd leave defaults (ie. don't specify it at all) if you didn't know exactly which one you need
<sarnold> marz: yes, there's standard bug updates too; those should also be smaller fixes, but sometimes new minor versions are accepted there too
<sarnold> hah
<friendlyguy> humm, dhcp is working though it didnt update the resolv.conf file
<friendlyguy> there is no name resolution
<friendlyguy> any idea where to start searching?
<maxel> blackflow, well, looks like you figured it out. it's all because of that barrier option. I removed that option and everything is working. drive is mounted, all the issues with services not working were because of the emergency mode
<maxel> so thanks a ton for the help!
<blackflow> maxel: awesome!
<maxel> I wish I could remember why I added that option to begin with
<blackflow> maxel: usually to speed things up. barriers force flushes of writeback caches periodically so it's slightly slower in some cases, but better for integrity.
<blackflow> I'd leave defaults, if you don't know exactly what you need and why.
<ahasenack> rbasak: another question is, there is a directory with a file in version 1 of the package, and in version 2 that directory isn't declared in d/dirs anymore, nor is that file used
<ahasenack> while upgrading, dpkg warns that it can't remove a non-empty directory
<ahasenack> I'm adding a preinst snippet for that, it checks the deb version and does an rm -f of the file, and the directory is taken care of by dpkg itself after the upgrade
<ahasenack> sounds right?
<ahasenack> it's in /var
<ahasenack>  /var/cache/<dirname>/file
<RoyK> ahasenack: stop the process, remove the cache dir or move it somewhere else and restart the process
<ahasenack> there is no process (no daemon, if that's what you mean)
<RoyK> then just move the dir somewhere else and remove it once the problem is solved
<ahasenack> the new version doesn't use that anymore
<ahasenack> what I did is working, seems clean, I just wanted to double check
<ahasenack> preinst: rm -f /dir/file
<ahasenack> new package: debian/dirs doesn't mention /dir anymore
<ahasenack> upgrade: dpkg takes care of removing /dir as long as it's empty (which preinst took care of)
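<editor-note> The preinst snippet ahasenack describes would look roughly like this; the package version bound and the path are placeholders, and $2 (the old version) is only set on upgrade, per the standard maintainer-script arguments:

```shell
#!/bin/sh
# debian/preinst (fragment) -- version and path are illustrative placeholders
set -e

case "$1" in
    upgrade)
        # $2 is the previously installed version; only clean up when
        # upgrading from a version that still shipped the file
        if [ -n "$2" ] && dpkg --compare-versions "$2" lt "2.0-1"; then
            rm -f /var/cache/dirname/file
        fi
        ;;
esac

#DEBHELPER#
```

With the file gone and the directory no longer listed in debian/dirs, dpkg removes the now-empty directory itself during the upgrade, as described above.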
#ubuntu-server 2019-04-06
<Mead> I have 18.04 and trying to follow this:  https://help.ubuntu.com/community/SerialConsoleHowto in step 1 of Server setup it says to create a file in /etc/init, but I don't have that directory in my install.
<tomreyn> Mead: it's outdated, i'd say.
<tomreyn> it works differently on systemd
<tomreyn> see if just adding console=ttyS0 to the linux command line (grub) does it.
<Mead> soo all I'd need to do is add something to grub, and reboot? No other configuration?
<tomreyn> i haven't actually tried, this is a guess based on the first paragraph on http://0pointer.de/blog/projects/serial-console.html
<tomreyn> and you may need to use the alternate installer for this
<tomreyn> actually it may work on the 'server live' installer (subiquity) like this according to bug 1770962
<ubottu> bug 1770962 in subiquity "Support serial-port based install" [Medium,Triaged] https://launchpad.net/bugs/1770962
<tomreyn> oh you didn't actually mention "installer", looks like i made this up.
<friendlyguy> hi there! i am wondering about landscape: the service rabbitmq-server wont start
<friendlyguy> the startup_log of rabbitmq contains: ERROR: epmd error for host landscape: timeout (timed out)
<friendlyguy> however, epmd is running
<friendlyguy> i dont get it
<Mead> my grub install has this GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity"  so I need to use a separator character after the existing option before adding this new one?
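<editor-note> On a systemd release like 18.04 the /etc/init (upstart) step is indeed obsolete; a hedged sketch of the usual replacement (ttyS0 and the 115200 baud rate are assumptions about the hardware):

```shell
# 1. Kernel console: edit /etc/default/grub, appending to the existing
#    space-separated value, e.g.
#    GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity console=ttyS0,115200n8"
#    then regenerate the grub config:
sudo update-grub

# 2. Login prompt on the serial port -- systemd's template unit replaces
#    the old /etc/init job:
sudo systemctl enable --now serial-getty@ttyS0.service
```

So yes to Mead's question: options on that line are space-separated, and with the getty unit enabled no other configuration should be needed.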
<rbasak> ahasenack: it seems a bit odd to me to do it in the preinst. What if the old package version is just removed, shouldn't the file also get removed in that case?
<rbasak> ahasenack: maybe it's needed both in preinst and the old postrm, though if it's gone in the newer package, perhaps too late to add to the old postrm now.
<zzlatev> Hey guys, can you help me install tvheadend on ubuntu-server 14.04
<zzlatev> E: Failed to fetch http://apt.tvheadend.org/unstable/artifacts/0p/kn5lqob9/tvheadend_4.3-1231~gc597a56~trusty_i386.deb Size mismatch
<zzlatev> E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
<sdeziel> zzlatev: I don't know that apt repo but the 0p/ dir doesn't exist
<sdeziel> http://apt.tvheadend.org/unstable/artifacts/ seems to indicate that 0h/ is the latest one
<zzlatev> this is automatically after apt-get install tvheadend
<sdeziel> zzlatev: also, please note that 14.04 goes EOL at the end of the month
<sdeziel> (unless you go with ESM)
<zzlatev> yes, I know that
<zzlatev> but my machine is very old
<sdeziel> does it support 64 bit?
<zzlatev> nope
<sdeziel> OK so that indeed qualifies as old
<sdeziel> how much RAM?
<zzlatev> 1 gb
<sdeziel> should be good enough
<sdeziel> good enough for Bionic (18.04.2)
<zzlatev> sdeziel: so what may be the problem here
<zzlatev> 14.04?
<sdeziel> no, I was just checking your options for the OS itself ;)
<sdeziel> your problem seems to come from a bad apt repo in your sources.list somewhere
<sdeziel> cause that 0p/ dir is not there on the web server
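<editor-note> sdeziel's diagnosis boils down to searching the apt configuration for the offending host; a tiny illustrative sketch (the helper name and sample text are made up, and real entries may also live under /etc/apt/sources.list.d/):

```python
# Sketch: find active (uncommented) sources.list lines mentioning a host,
# to spot a stale third-party repo like the tvheadend one above.
def find_repo_lines(text, host):
    """Return uncommented lines of sources.list text that contain `host`."""
    hits = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#") and host in stripped:
            hits.append(stripped)
    return hits

sample = """\
deb http://archive.ubuntu.com/ubuntu trusty main
deb http://apt.tvheadend.org/unstable trusty main
# deb http://apt.tvheadend.org/stable trusty main
"""
print(find_repo_lines(sample, "apt.tvheadend.org"))
# -> ['deb http://apt.tvheadend.org/unstable trusty main']
```

Removing (or commenting out) the matching line and running `apt-get update` is exactly the fix zzlatev lands on below.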
<zzlatev> sdeziel: the problem was that the repo was in /etc/apt/sources.list
<zzlatev> after I remove it now everything works fine
<sdeziel> glad it's now working for you
#ubuntu-server 2019-04-07
<teward> ahasenack: what's the timeline you're after for the TLS 1.3 build for NGINX?
<teward> I ask this because I'll have to get Release Team approval for the changes I'll need to add (version-limiting the libssl deps at build and run)
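<editor-note> For reference, the version-limiting teward mentions would be ordinary versioned dependencies in debian/control; a sketch only, since the exact package names and bounds for the nginx upload are assumptions:

```
# debian/control (fragment) -- version bounds here are purely illustrative
Build-Depends: debhelper (>= 11),
               libssl-dev (>= 1.1.1)
...
Depends: ${shlibs:Depends}, ${misc:Depends}, libssl1.1 (>= 1.1.1)
```

Pinning libssl at build and run time like this is what ensures the TLS 1.3 support actually present at build is also available on installed systems.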
<cmosguy> hello, can anyone help me with my `xrdp` issue with ubuntu?
<lotuspsychje> describe the details of what you want to do, cmosguy, so volunteers can think along
<cmosguy> hello, ok so I installed the `xrdp` packages, I uninstalled all the previous `gdm` `unity` desktop stuff
<cmosguy> I am getting the error message in my `~/.xsession-errors`: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown
<cmosguy> https://www.irccloud.com/pastebin/fPfDLJGm/
<cmosguy> nevermind
<cmosguy> i had some lame `.xsession` file left over
<aditya> Hi people, can anyone help me? I have a headless ubuntu server that is unable to boot after a power failure and drops into initramfs. I can't get into the BIOS because there is no wired keyboard attached, only a USB logitech wireless keyboard
<aditya> basically the keyboard only gets detected once the system has booted into initramfs, not during boot time. I guess some setting in my BIOS is making it not work, so is there a way to type something at the initramfs prompt, like reboot with some option, to go directly into the BIOS?
<aditya> Also, is it possible to log in to the server via something like ssh even when ubuntu has not loaded, to manage the server headlessly instead of getting a monitor to connect to it?
<blackflow> aditya: power loss should not cause anything software-wise that'd result in what you describe, unless it's root on btrfs and btrfs somehow went belly up due to that (which is known to happen). more likely it caused some hardware/bios issue
<blackflow> aditya: can you upload to imgur.com a screenshot of what's on screen when it "boots into initramfs"?
<aditya> blackflow: thanks for the input but I managed to run fsck from a live usb on my /dev/sda2 and the problem was resolved
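<editor-note> For the record, the manual repair aditya describes is roughly this (the device name is his; -f/-y semantics assume an ext filesystem, where fsck hands off to e2fsck):

```shell
# From a live USB (or the (initramfs) prompt), with the filesystem unmounted:
fsck -fy /dev/sda2   # -f: force a check, -y: answer yes to all repairs
reboot
```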
<blackflow> requirement for fsck shouldn't break boot. is that btrfs?
<aditya> well I don't know now since its booted easily but I am looking at a way I can manage the boot issues remotely.
<aditya> since once the system fails to boot I cannot ssh into it, so now I need a way to log in to the BIOS of my server remotely, and I am unable to find any resources online for that
<blackflow> aditya: you'd need something like IPMI or another out-of-band management suite, on the motherboard itself
<qman__> or an IP KVM
<qman__> but IPMI is the better way
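<editor-note> If the board does have a BMC, the usual client for the out-of-band management qman__ and blackflow describe is ipmitool; a sketch where the IP and credentials are placeholders:

```shell
# Query power state and attach a serial-over-LAN console, all without
# the OS being up (host/user/password are placeholders):
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret power status
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sol activate
```

With serial console redirection enabled in the BIOS, `sol activate` also gives remote access to the BIOS setup screens and the initramfs prompt, which is exactly aditya's use case.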
<friendlyguy> hi there! i am trying to configure landscape-client. upon "landscape-config" i run into  pycurl: libcurl link-time version (7.47.0) is older than compile-time version (7.58.0)
<friendlyguy> any idea how to fix that?
<qman__> are all your packages up to date?
<friendlyguy> yes
<friendlyguy> looks like there were problems with python2.7 on the machine
<friendlyguy> i removed that and now it's "fine"
